Human Epigenome Project ( HEP ) is a multinational science project, with the stated aim to "identify, catalog, and interpret genome-wide DNA methylation patterns of all human genes in all major tissues". [ 1 ] It is financed by government funds as well as private investment, via a consortium of genetic research organisations.
The call for such a project was widely suggested and supported by cancer research scientists from all over the world. [ 2 ] [ 3 ] [ 4 ] [ 5 ]
The HEP consortium is made up of the following organizations: [ citation needed ]
| https://en.wikipedia.org/wiki/Human_Epigenome_Project |
Human Factors in Engineering and Design is an engineering textbook, currently in its seventh edition. [ 1 ] First published in 1957 by Ernest J. McCormick, the book is considered a classic in human factors and ergonomics , and one of the best-established texts in the field. [ 2 ] [ 3 ] It is frequently taught in upper-level and graduate courses in the U.S., and is relied on by practicing human factors and ergonomics professionals. [ 3 ]
The text is divided into six sections: [ 2 ]
Introduction: Provides an overview of the field of human factors and ergonomics, including its history, goals, and methods.
Information Input: Discusses how humans perceive and process information from the environment, including vision, hearing, and other senses.
Human Output and Control: Examines human physical and cognitive capabilities and limitations in controlling systems and performing tasks.
Work Space and Arrangement: Covers the design of workspaces and equipment to optimize human performance and comfort, including anthropometry and workplace layout.
Environment: Explores the effects of environmental factors on human performance, such as lighting, noise, temperature, and vibration.
Human Factors: Selected Topics: Addresses specialized topics such as human-computer interaction, automation , and safety.
Since its first publication, the book has been updated and expanded several times to reflect advances in the field. The seventh edition, published in 2018 by Mark S. Sanders and Ernest J. McCormick, includes emerging topics such as digital technology, automation, and artificial intelligence.
Human Factors in Engineering and Design has had a significant impact on the field of human factors and ergonomics. The book has helped shape the development of the field and provided a framework for designing human-centered systems. It continues to be a valuable resource for students, researchers, and practicing professionals.
| https://en.wikipedia.org/wiki/Human_Factors_in_Engineering_and_Design |
The Human Fertilisation and Embryology Act 1990 (c. 37) is an Act of the Parliament of the United Kingdom . It created the Human Fertilisation and Embryology Authority which is in charge of human embryo research, along with monitoring and licensing fertility clinics in the United Kingdom. [ 1 ]
The Authority is composed of a chairman, a deputy chairman, and however many members are appointed by the UK Secretary of State. They are in charge of reviewing information about human embryos and subsequent development, provision of treatment services, and activities governed by the Act of 1990. [ 1 ] The Authority also offers information and advice to people seeking treatment, and to those who have donated gametes or embryos for purposes or activities covered in the Act of 1990. Some of the subjects under the Human Fertilisation and Embryology Act of 1990 are prohibitions in connection with gametes, embryos, and germ cells . [ 2 ]
The Act also addresses licensing conditions, code of practice, and procedure of approval involving human embryos. [ 3 ] This only concerns human embryos which have reached the two-cell zygote stage, at which they are considered "fertilised" in the act. [ 2 ] It also governs the keeping and using of human embryos, but only outside a woman's body. The act contains amendments to UK law regarding termination of pregnancy, surrogacy and parental rights. [ 2 ]
The Human Fertilisation and Embryology Act 1990 regulates ex-vivo human embryo creation and the research involving such embryos. This act established the Human Fertilisation and Embryology Authority (HFEA) to regulate treatment and research in the UK involving human embryos. In 2001, an extension of the Act legalized embryo research for the purposes of "increasing knowledge about the development of embryos," "increasing knowledge about serious disease," and "enabling any such knowledge to be applied in developing treatments for serious disease." The HFEA grants licenses and research permission for up to three years, based on approval of five steps by the Research License Committee. [ 4 ]
HFEA policies are reviewed by specialists in the field regularly. After research and literature are reviewed, and open public meetings are held, the summarized information is presented to the Human Fertilisation Embryology Authority. [ 4 ]
Donors must meet certain criteria in order to be eligible for sperm, egg, or embryo donation. The donor can donate for research purposes or fertility treatment. Donors should find a HFEA licensed clinic, or can go through the National Gamete Donation Trust . [ 5 ]
The HFEA is carrying out a detailed review to determine the best way to reduce the risk of multiple pregnancies with in vitro fertilization (IVF). For example, Nadya Suleman (or "Octomom") is publicly known for giving birth to octuplets after IVF treatment.
This policy allows for the use of techniques which alter the mitochondrial DNA of the egg or an embryo used in IVF, to prevent serious mitochondrial diseases from being inherited.
The policies reviewed by HFEA cover everything from human reproductive cloning to the creation of human-animal hybrids, and include subjects such as ethics with scientific and social significance.
Sperm, eggs and embryos received in the donation process are currently tested for many medical conditions, and also quarantined for six months to reduce the risk of complications to the mother and child. Other than a screening for genetic disorders, donors are tested for HIV , hepatitis B , and hepatitis C . [ 6 ]
Embryos must be donated by a woman between the ages of 18 and 35 years old, who has also undergone a medical screening and given informed consent (which can be revoked at any point up until the embryo is used). [ 6 ]
The "Welfare of the Child" review concerns multiple pregnancy for people seeking IVF treatment. While there is always a risk of a multiple pregnancy after receiving IVF treatment, HFEA is reviewing policies which will reduce this dangerous possibility. No more than two eggs or embryos can be legally implanted in a woman in an IVF treatment. There is a 25% success rate of this procedure per treatment cycle. [ 6 ]
The Act includes safety procedure regulations at fertility clinics, including the safe cryopreservation of eggs and embryos. Eggs and embryos are stored for ten years after the initial treatment. If the patient decides not to pursue another pregnancy, the eggs and embryos can be donated for research or to another couple for fertility treatments. [ 6 ]
In donor-assisted conception, the donor may not receive any monetary compensation (in the UK), although they may have related expenses covered. [ 7 ]
Sperm, eggs and embryos are stored in liquid nitrogen using cryopreservation (the freezing of cells or whole tissues to sub-zero temperatures, typically the boiling point of liquid nitrogen). [ 8 ] This method preserves living organisms in a state where they can be restored to how they were before freezing. [ 8 ]
A cryoprotective compound (a liquid called cryopreservation medium), along with carefully controlled cooling and warming cycles ensure that minimal damage is done to the cells. [ 9 ] [ 8 ]
However, the freezing process is still somewhat damaging. Therefore, men wishing to donate sperm or have it stored for future use must make six sperm deposits for every one child they wish to have, due to the 50% survival rate of the sperm in each deposit. [ 8 ] The sperm is then put into straw-shaped vials and placed in a storage tank of either liquid nitrogen or liquid nitrogen vapour. Storage temperatures generally range from -150 degrees Celsius to -196 degrees Celsius.
According to HFEA, the storage period for both human gametes and embryos cannot exceed ten years. [ 9 ] HFEA requires a full informed consent from each party that has any relation to the egg, gametes, or embryo, all of which must be stored in accordance with their consents. [ 9 ]
Exceptions to the informed consent of gamete storage:
The act states that it is legal to "take" gametes or accept those provided, and store them without a person's consent, if the person is considered incapable, or until they "acquire such capacity." [ 9 ]
However, under paragraphs 9 and 10 of HFEA 1990, a person's gametes cannot be legally stored in the UK after their death. [ 9 ]
In July 1982 the Warnock Committee Inquiry was established. It was "to consider recent and potential developments in medicine and science related to human fertilisation and embryology; to consider what policies and safeguards should be applied, including consideration of the social, ethical, and legal implications of these developments; and to make recommendations." [ 10 ]
The Warnock Report was published on 18 July 1984. The report stated that a regulator was needed due to the 'special status' of embryos. [ 10 ]
In 1985 the Interim Licensing Authority was created. It was intended to regulate work and research regarding human in vitro fertilisation until permanent government legislation was passed. It remained the only authority until 1990.
The Unborn Children Protection Bill was also created in 1985. It was written by Enoch Powell and prohibited embryonic research. The Health Secretary would only have been allowed to permit an embryo to be kept and implanted for the sole purpose of assisting a named woman to bear a child. No other reason was allowed. This bill was not passed. It was reintroduced in 1986, where it again failed to pass. This was repeated again in 1989. [ 10 ]
The Surrogacy Arrangements Act 1985 was the first law that governed surrogacy arrangements. It criminalized commercial surrogacy arrangements. [ 10 ]
In 1987 the framework for human fertilisation and embryology was created, with a white paper published in response to the recommendations of the Warnock Report. [ 10 ]
In 1990 the Human Fertilisation and Embryology Act 1990 was passed. The Human Fertilisation and Embryology Authority , HFEA, officially started work on 1 August 1991. [ 10 ]
The act covers several areas:
Within the act an embryo is defined as a live human embryo in which fertilisation is complete; fertilisation is considered complete at the appearance of a two-cell zygote.
The act states that eggs, sperm, and embryo can only be stored for a finite amount of time in very specific conditions that are regulated by the Human Fertilisation and Embryology Authority . [ 11 ]
Research on human embryos can only be performed for specifically defined purposes that must be considered 'necessary and desirable' by the Human Fertilisation and Embryology Authority . Research can only be performed on an embryo for a maximum of fourteen days or until the primitive streak appears. The genetic composition of any cell within the embryo cannot be altered during the embryo's formation for research. [ 11 ]
The act defined several purposes: [ 3 ]
Section 37 [ 12 ] of the Act amends the Abortion Act 1967 . The section specifies and broadens the conditions where abortion is legal.
Women who consider abortion are referred to two doctors. Each doctor then advises her whether abortion is a suitable decision based on the conditions listed below. An abortion is granted only when the doctors reach a unanimous decision that the woman may terminate her pregnancy. An abortion that is performed without this decision or under any other circumstances is considered unlawful.
Abortion may be granted under one of the following circumstances: [ 12 ]
The registered medical practitioner that performs the abortion will continue to act in accordance with the Infant Life (Preservation) Act 1929 . [ 12 ]
In 1991 the statutory storage period and special exemption sections were revisited. Regulations extended the storage periods for eggs and sperm. Licensing rules for egg and sperm storage were also clarified. [ 10 ]
A Disclosure of Information Act was created in 1992. This allowed the Human Fertilisation and Embryology Authority to disclose information to others with the patient's consent; for example, information could be shared with their general practitioner.
The Criminal Justice and Public Order Act 1994 added section 156. This prohibited the treatment of cells from aborted embryos. During the same year the Parental Orders regulations allowed parental orders to be made in surrogacy cases. [ 10 ]
In 1996 the permitted storage period for embryos was extended.
The Human Fertilisation and Embryology (Deceased Fathers) Act 2003 amended section 28. [ 13 ]
Sperm may be taken from a deceased male to fertilize an egg if the corresponding man and woman were:
In 2001 the Human Fertilisation and Embryology Regulations were added. These regulations extended the purposes that an embryo can be created for in regards to research. [ 11 ]
In addition, the Human Reproductive Cloning Act 2001 was passed. This essentially made human reproductive cloning illegal by outlawing the implantation of research embryos.
As of 2004 the Disclosure of Donor Information Regulations were formed. Any sperm or egg donors registered after April 1, 2005, were required to consent to their name and last known address being passed on to their offspring. [ 10 ] During this time Parliament began reviewing the Human Fertilisation and Embryology Act 1990. [ 14 ]
Licensing of all establishments handling gametes for treatment was required as of 2007 in the Quality and Safety Regulations.
In 2006 a white paper was published regarding revised legislation for fertility. This led to the Human Fertilisation and Embryology Act 2008 , HFE, being passed. This was a major review of fertility legislation, updating and amending the act of 1990. In 2009 the HFE act came into force. This is the current law in the UK. [ 10 ] | https://en.wikipedia.org/wiki/Human_Fertilisation_and_Embryology_Act_1990 |
The Human Fertilisation and Embryology Act 2008 (c. 22) is an act of the Parliament of the United Kingdom . The Act constitutes a major review and update of the Human Fertilisation and Embryology Act 1990 . The Guardian described the bill as a ‘landmark piece of legislation’ intended to bring UK fertility law in line with rapidly advancing scientific practices. [ 2 ]
According to the Department of Health , the Act's key provisions are: [ 3 ]
The Bill's discussion in Parliament did not permit time to debate whether it should extend abortion rights under the Abortion Act 1967 to also cover Northern Ireland . The 2008 Act does not alter the status quo. [ 4 ]
The Act also repealed and replaced the Human Reproductive Cloning Act 2001 .
The inclusion of hybrid embryo research provisions led to intense moral debates in Parliament, with one faction praising the potential for life-saving therapies and another warning against ‘unforeseen consequences’. [ 5 ]
Under the act, new rules regarding the designation of a second parent in cases of IVF treatment came into force on 6 April 2009. Prior to these changes, UK law automatically recognized the husband in a married couple undergoing IVF as the child’s second legal parent. The 2008 Act extended this right to lesbian couples and single women, allowing them to nominate a second parent who was not necessarily a spouse or civil partner.
The Human Fertilisation and Embryology Authority (HFEA) advised prospective parents to consider delaying IVF treatment until the new regulations took effect, if they wished to take advantage of the updated second-parent provisions. [ 6 ] | https://en.wikipedia.org/wiki/Human_Fertilisation_and_Embryology_Act_2008 |
The Human Fertilisation and Embryology Authority ( HFEA ) is an executive non-departmental public body of the Department of Health and Social Care in the United Kingdom. It is a statutory body that regulates and inspects all clinics in the United Kingdom providing in vitro fertilisation (IVF), artificial insemination and the storage of human eggs , sperm or embryos . It also regulates human embryo research.
After the birth of Louise Brown , the world's first IVF baby, in 1978, there was concern about the implications of this new technology. In 1982, the UK government formed a committee chaired by philosopher Mary Warnock to look into the issues and see what action needed to be taken.
Hundreds of interested individuals including doctors, scientists and organisations such as health, patient and parent organisations as well as religious groups gave evidence to the committee.
In the years following the Warnock report, [ 2 ] proposals were brought forward by the government in the publication of a white paper Human Fertilisation and Embryology: A Framework for Legislation in 1987. The Human Fertilisation and Embryology Act 1990 [ 3 ] was drafted taking the report into account. [ citation needed ]
Updated developments since the Human Fertilisation and Embryology Act 2008
Since the enactment of the 2008 Act, the regulatory framework governing assisted reproductive technologies and embryo research in the United Kingdom has continued to evolve in response to rapid scientific advances and changing ethical considerations:
Advances in mitochondrial donation and three-parent IVF:
In 2015, the HFEA approved mitochondrial donation procedures – commonly known as three-parent IVF – making the UK the first country to legalise this technique. This innovative approach enables women at risk of transmitting mitochondrial diseases to have genetically related children, significantly reducing the risk of passing on these conditions. Ongoing clinical experience and data collection have contributed to the refinement of patient selection criteria and long-term monitoring protocols under the strict oversight of the Act ( https://www.hfea.gov.uk/treatments/explore-all-treatments/mitochondrial-donation/ , [HFEA, 2015]).
Refinements in gene editing regulation:
The emergence of genome editing technologies, including CRISPR, has prompted further regulatory reviews. Although hereditary genome editing remains restricted, controlled research on somatic cell modifications is permitted. In recent years, the HFEA has provided clear guidance on the conditions under which embryonic and non-hereditary gene editing research can be pursued, ensuring that experiments proceed under strict scientific and ethical standards ( https://www.hfea.gov.uk/what-we-do/embryo-research/ , [HFEA, updated guidance]).
Enhanced ethical oversight and parental recognition:
Reflecting evolving social norms, the framework of the Act has been revisited to accommodate diverse family structures. The update further clarifies the legal recognition of parental responsibility for homosexual relationships and unmarried couples. Additionally, the scope of donor anonymity and data transparency has been adjusted to balance the rights of donor-conceived individuals with donor privacy concerns, as well as facilitate more robust research through secure, anonymous data ( https://www.gov.uk/government/publications/hfea-review-update , [Department of Health, UK]).
Impact on clinical practices and outcome monitoring:
Along with technological advances, measures introduced to improve clinical outcomes continue to be refined. Strategies to reduce the incidence of multiple births – such as promoting single embryo transfer protocols – have been more widely adopted and closely monitored, contributing to increased patient safety, more predictable treatment outcomes and the optimisation of fertility care practices across the UK ( https://www.hfea.gov.uk/news/2018/single-embryo-transfer-guidance/ , [HFEA News, 2018]).
Ongoing legislative and policy review:
The dynamic nature of reproductive science and technology necessitates periodic review of the provisions of the Act. The HFEA, in collaboration with government bodies and independent experts, is committed to updating guidelines and policies. These reviews ensure that the legal framework remains responsive to future innovations and ethically complex scenarios, thereby retaining public trust and maintaining high standards in reproductive medicine and embryo research ( https://www.hfea.gov.uk/review-of-the-hfea-legislation/ , [HFEA Legislative Review]).
The 1990 Act provided for the establishment of the Human Fertilisation and Embryology Authority (HFEA), an executive, non-departmental public body, the first statutory body of its type in the world.
The HFEA is the independent regulator for IVF treatment and human embryo research and came into effect on 1 August 1991. The 1990 Act ensured the regulation, through licensing, of:
The Act also requires the HFEA to keep a database of every IVF treatment carried out since that date, and a database relating to all cycles and uses of donated gametes (egg and sperm).
In 2001, the Human Fertilisation and Embryology (Research Purposes) Regulations 2001/188 extended the purposes for which embryo research could be licensed to include "increasing knowledge about the development of embryos", "increasing knowledge about serious disease", and "enabling any such knowledge to be applied in developing treatments for serious disease".
This allows researchers to carry out embryonic stem cell research and therapeutic cloning providing that an HFEA Licence Committee considers the use of embryos necessary or desirable for one of these purposes of research.
The Human Reproductive Cloning Act 2001 was introduced to explicitly prohibit reproductive cloning in the UK, but it was repealed by the Human Fertilisation and Embryology Act 2008 .
In 2004, the Human Fertilisation and Embryology Authority (Disclosure of Donor Information) Regulations 2004/1511, enabled donor-conceived children to access the identity of their sperm, egg or embryo donor upon reaching the age of 18.
The Regulations were implemented on 1 April 2005 and any donor who donated sperm, eggs or embryos from that date onwards is, by law, identifiable. Since that date, any person born as a result of donation is entitled to request and receive the donor's name and last known address, once they reach the age of 18.
The European Union Tissues and Cells Directives (EUTCD) introduced common safety and quality standards for human tissues and cells across the European Union (EU).
The purpose of the directives was to facilitate a safer and easier exchange of tissues and cells (including human eggs and sperm) between member states and to improve safety standards for European citizens.
The EUTCD was adopted by the Council of Ministers on 2 March 2004 and published in the Official Journal of the European Union on 7 April 2004. Member States were obliged to comply with its provisions from 7 April 2006.
In 2005, the House of Commons Science and Technology Select Committee published a report on Human Reproductive Technologies and the Law.
This inquiry investigated the legislative framework provided by the 1990 Act and challenges presented by technological advance and "recent changes in ethical and societal attitudes".
In light of the Committee's report, and legislative changes that had already been made, the Department of Health undertook a review of the 1990 Act. They then held a public consultation based on their review of the Act, and following this published a White Paper, Review of the Human Fertilisation and Embryology Act, within which Government presented its initial proposals to revise the legislation.
A Joint Committee of both houses scrutinised the Government's recommendations, and provided its views on what ought to be the final form of the Bill to be brought to parliament.
The Bill was finally brought to the House of Lords in November 2007, passing through the House of Commons through Spring and Autumn of 2008, and finally receiving Royal Assent on 13 November 2008.
The HFE Act 2008 updates the law to ensure it is fit for purpose in the 21st century. It is divided into three parts:
The main new elements of the Act are:
The current statutory functions of the HFEA, as a regulator under the HFE Acts 1990 and 2008 and other legislation include:
Multiple pregnancy is the single biggest risk to patients and children born as a result of fertility treatment. Women undergoing IVF treatment are twenty times more likely to have a multiple birth than if they conceive naturally.
After carefully considering views from clinics, patients and professional bodies, the HFEA decided to set a maximum multiple birth rate that clinics should not exceed, which will be lowered each year. All clinics will have their own strategy setting out how they will lower the multiple birth rate in their clinic by identifying the patients for whom single embryo transfer is the most appropriate treatment. The HFEA aims to reduce multiple births from IVF treatment to 10% over a period of years.
Former Chairs include Professor Lisa Jardine , Walter Merricks , Shirley Harrison, Lord Harries , Dame Suzi Leather , Baroness Deech , Sir Colin Campbell and Sally Cheshire.
Other notable former members include Professor Emily Jackson and Margaret Auld , [ 10 ] former Chief Nursing Officer for Scotland. | https://en.wikipedia.org/wiki/Human_Fertilisation_and_Embryology_Authority |
The International Human Frontier Science Program Organization ( HFSPO ) is a non-profit organization, based in Strasbourg , France , that funds basic research in life sciences. The organization implements the Human Frontier Science Program (HFSP) and is supported by 14 countries and the European Commission. Yoshihiro Yoneda has been the HFSPO President and Chair of the Board of Trustees since 2024.
In 1986, Japanese scientists, supported by the Japanese Prime Minister's Council for Science and Technology, conducted a feasibility study to explore international collaboration in basic research. Subsequent discussions involving scientists from G7 summit nations and the European Union led to the "London Wise Men's Conference" in April 1987, endorsing the idea. Prime Minister Yasuhiro Nakasone proposed the Human Frontier Science Program at the Venice Economic Summit on 10 June 1987, gaining support from the Economic Summit partners and the Chairman of the European Community. The International Human Frontier Science Program Organization (HFSPO) was then established in 1989, with its secretariat in Strasbourg, France . Since 1990, the program has granted over 7000 awards to researchers from more than 70 countries, with 28 HFSP awardees later receiving the Nobel Prize for their scientific contributions.
HFSPO secures financial backing from a range of governments and research councils, including Australia, Canada, France, Germany, India, Israel, Italy, Japan, Republic of Korea, New Zealand, Singapore, Switzerland, the UK, USA, and the European Commission, representing non- G7 EU members. These contributions are consolidated into a unified budget, which is used to fund research fellowships and grants through HFSPO's peer review system, with a primary emphasis on science.
The organization offers Research grants , which encourage collaboration among scientists globally. These grants come in two types: Research Grants - Early Career and Research Grants - Program.
Postdoctoral Fellowships cater to individuals seeking experience in foreign labs, especially those early in their careers exploring different research fields. Fellows can also use these opportunities to establish independent research labs in their home countries.
Cross-Disciplinary Fellowships are designed for postdocs with Ph.D. degrees in the physical sciences , chemistry , mathematics , engineering and computer sciences who aim to gain training in biology .
HFSP funding primarily supports postdoctoral initiatives, with no provisions for undergraduate or PhD students.
International peer review is a fundamental part of the awarding process, involving two committees—one for Fellowships and one for Research Grants, each composed of 24 to 26 scientists. These committees have a diverse global representation of scientific experts, reviewing applications across all HFSP-supported scientific fields. The evaluation procedures undergo regular review, and the HFSP secretariat collaborates closely with committee members and the Council of Scientists.
In 2010, HFSP established the HFSP Nakasone Award to honour former Prime Minister Yasuhiro Nakasone of Japan for his vision in launching HFSP as a program of support for international collaboration and to foster early career scientists in a global context. The HFSP Nakasone Award is designed to recognise scientists who have undertaken frontier-moving research, including technological breakthroughs, which has advanced biological research. Both senior and junior scientists are eligible and peer-recognised excellence is the major criterion. The award can be made to an individual or a team of scientists. Award winners receive an unrestricted research grant of USD 10,000, a medal and personalised certificate. The award ceremony is held at the annual HFSP Awardees Meeting where the award winners are expected to deliver the HFSP Nakasone Lecture.
Recipients of the Award:
Launched in October 2006, the HFSP Journal aims to foster communication between scientists publishing innovative research at the frontiers of the life sciences. Peer review is designed to allow for the unique requirements of such papers and is overseen by an Editorial Board with members from different disciplines. The HFSP Journal offers its authors the option to pay a fee to make their research articles Open Access immediately upon publication. For other articles, access is limited to subscribers for the first 6 months after publication, but access is free thereafter.
The HFSP Journal ceased publication in July 2010 and was bought by the scientific publisher Taylor & Francis, to be re-launched in 2011.
In 2015, the HFSP reported that the former journal name had been hijacked in an apparent attempt to defraud researchers into publishing in a scam journal.
As of this edit , this article uses content from "Human Frontier Science Program" , which is licensed in a way that permits reuse under the Creative Commons Attribution-ShareAlike 3.0 Unported License , but not under the GFDL . All relevant terms must be followed. | https://en.wikipedia.org/wiki/Human_Frontier_Science_Program |
" Human Genetic Diversity: Lewontin's Fallacy " is a 2003 paper by A. W. F. Edwards in the journal BioEssays . [ 1 ] He criticises an argument first made in Richard Lewontin 's 1972 article " The Apportionment of Human Diversity ", that the practice of dividing humanity into races is taxonomically invalid because any given individual will often have more in common genetically with members of other population groups than with members of their own. [ 2 ] Edwards argued that this does not refute the biological reality of race since genetic analysis can usually make correct inferences about the perceived race of a person from whom a sample is taken, and that the rate of success increases when more genetic loci are examined. [ 1 ]
Edwards' paper was reprinted, commented upon by experts such as Noah Rosenberg , [ 3 ] and given further context in an interview with philosopher of science Rasmus Grønfeldt Winther in a 2018 anthology. [ 4 ] Edwards' critique is discussed in a number of academic and popular science books, with varying degrees of support. [ 5 ] [ 6 ] [ 7 ]
Some scholars, including Winther and Jonathan Marks , dispute the premise of "Lewontin's fallacy", arguing that Edwards' critique does not actually contradict Lewontin's argument. [ 7 ] [ 8 ] [ 9 ] A 2007 paper in Genetics by David J. Witherspoon et al. concluded that the two arguments are in fact compatible, and that Lewontin's observation about the distribution of genetic differences across ancestral population groups applies "even when the most distinct populations are considered and hundreds of loci are used". [ 10 ]
In the 1972 study " The Apportionment of Human Diversity ", Richard Lewontin performed a fixation index ( F ST ) statistical analysis using 17 markers, including blood group proteins, from individuals across classically defined "races" (Caucasian, African, Mongoloid, South Asian Aborigines, Amerinds, Oceanians, and Australian Aborigines). He found that of the total genetic variation between humans (i.e., of the 0.1% of DNA that varies between individuals), the majority, 85.4%, is found within populations, 8.3% is found between populations within a "race", and only 6.3% is attributable to the racial classification itself. Numerous later studies have confirmed his findings. [ 6 ] Based on this analysis, Lewontin concluded, "Since such racial classification is now seen to be of virtually no genetic or taxonomic significance either, no justification can be offered for its continuance."
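The variance partitioning behind Lewontin's analysis can be illustrated with a minimal F ST sketch. The allele frequencies below are hypothetical, chosen only to show the mechanics of splitting heterozygosity into within- and between-population shares; they are not Lewontin's 1972 data:

```python
# Toy F_ST-style variance partitioning (illustrative allele frequencies,
# not real data). For a biallelic locus, expected heterozygosity is
# H = 2p(1-p); F_ST = (H_T - H_S) / H_T is the share of total variation
# attributable to between-population differences.

def heterozygosity(p):
    return 2 * p * (1 - p)

# Hypothetical frequencies of one marker allele in three populations
freqs = [0.50, 0.55, 0.62]

h_s = sum(heterozygosity(p) for p in freqs) / len(freqs)  # mean within-population
p_bar = sum(freqs) / len(freqs)                           # pooled allele frequency
h_t = heterozygosity(p_bar)                               # total heterozygosity
f_st = (h_t - h_s) / h_t

print(f"within-population share: {h_s / h_t:.3f}")
print(f"between-population share (F_ST): {f_st:.3f}")
```

Because the three hypothetical frequencies are similar, nearly all of the heterozygosity falls in the within-population share, mirroring the structure (though not the exact numbers) of Lewontin's result.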
This argument has been cited as evidence that racial categories are biologically meaningless, and that behavioral differences between groups are not caused by genetic differences. [ 7 ] One example is the "Statement on 'Race'" published by the American Anthropological Association in 1998, which rejected the existence of races as unambiguous, clearly demarcated, biologically distinct groups. [ 11 ]
Edwards argued that while Lewontin's statements on variability are correct when examining the frequency of different alleles (variants of a particular gene) at an individual locus (the location of a particular gene) between individuals, it is nonetheless possible to classify individuals into different racial groups with an accuracy that approaches 100 percent when one takes into account the frequency of the alleles at several loci at the same time. This is possible because differences in allele frequencies at different loci are correlated across populations: an allele that is more frequent in a population at one locus tends to co-occur with the alleles that are more frequent in that population at other loci. In other words, allele frequencies tend to cluster differently for different populations. [ 12 ]
In Edwards' words, "most of the information that distinguishes populations is hidden in the correlation structure of the data". These relationships can be extracted using commonly used ordination and cluster analysis techniques. Edwards argued that, even if the probability of misclassifying an individual based on the frequency of alleles at a single locus is as high as 30% (as Lewontin reported in 1972), the misclassification probability becomes close to zero if enough loci are studied. [ 13 ]
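Edwards' point that the misclassification probability shrinks as loci are added can be demonstrated with a small Monte Carlo sketch. The allele frequencies and the simple likelihood classifier below are hypothetical illustrations, not taken from Edwards' paper:

```python
import random
from math import comb, log

random.seed(0)

# Hypothetical setup: at every locus the allele frequency is 0.4 in
# population A and 0.6 in population B. A diploid multilocus genotype is
# classified by comparing its likelihood under each population's frequencies.
P_A, P_B = 0.4, 0.6

def genotype(p, n_loci):
    """Number of '1' alleles (0, 1, or 2) at each of n_loci diploid loci."""
    return [sum(random.random() < p for _ in range(2)) for _ in range(n_loci)]

def log_lik(g, p):
    return sum(log(comb(2, k) * p**k * (1 - p)**(2 - k)) for k in g)

def error_rate(n_loci, trials=2000):
    """Fraction of individuals truly from A that are assigned to B."""
    wrong = sum(
        log_lik(g, P_B) > log_lik(g, P_A)
        for g in (genotype(P_A, n_loci) for _ in range(trials))
    )
    return wrong / trials

rates = {n: error_rate(n) for n in (1, 5, 20, 100)}
for n, r in rates.items():
    print(f"{n:>3} loci: misclassification rate ~ {r:.3f}")
```

With a single locus the classifier errs frequently, but the error rate falls toward zero as independent loci accumulate, which is the effect Edwards described.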
Edwards' paper stated that the underlying logic was discussed in the early years of the 20th century. Edwards wrote that he and Luigi Luca Cavalli-Sforza had presented a contrasting analysis to Lewontin's, using very similar data, already at the 1963 International Congress of Genetics . Lewontin participated in the conference but did not refer to this in his later paper. Edwards argued that Lewontin used his analysis to attack human classification in science for social reasons. [ 13 ]
Evolutionary biologist Richard Dawkins discusses genetic variation across human races in his book The Ancestor's Tale . [ 5 ] In the chapter "The Grasshopper's Tale", he characterizes the genetic variation between races as a very small fraction of the total human genetic variation, but he disagrees with Lewontin's conclusions about taxonomy, writing: "However small the racial partition of the total variation may be, if such racial characteristics as there are highly correlate with other racial characteristics, they are by definition informative, and therefore of taxonomic significance." [ 5 ] Neven Sesardić has argued that, unbeknownst to Edwards, Jeffry B. Mitton had already made the same argument about Lewontin's claim in two articles published in The American Naturalist in the late 1970s. [ 14 ] [ 15 ] [ 16 ]
Biological anthropologist Jonathan M. Marks agrees with Edwards that correlations between geographical areas and genetics obviously exist in human populations but goes on to write:
What is unclear is what this has to do with 'race' as that term has been used through much of the twentieth century—the mere fact that we can find groups to be different and can reliably allot people to them is trivial. Again, the point of the theory of race was to discover large clusters of people that are principally homogeneous within and heterogeneous between, contrasting groups. Lewontin's analysis shows that such groups do not exist in the human species, and Edwards' critique does not contradict that interpretation. [ 7 ]
The view that while geographic clustering of biological traits does exist, this does not lend biological validity to racial groups, was proposed by several evolutionary anthropologists and geneticists prior to the publication of Edwards' critique of Lewontin. [ 11 ] [ 17 ] [ 18 ] [ 19 ] [ 20 ]
In the 2007 paper "Genetic Similarities Within and Between Human Populations", [ 10 ] Witherspoon et al. attempt to answer the question "How often is a pair of individuals from one population genetically more dissimilar than two individuals chosen from two different populations?" The answer depends on the number of polymorphisms used to define that dissimilarity, and the populations being compared. When they analysed three geographically distinct populations (European, African, and East Asian) and measured genetic similarity over many thousands of loci, the answer to their question was "never"; however, measuring similarity using smaller numbers of loci yielded substantial overlap between these populations. Rates of between-population similarity also increased when geographically intermediate and admixed populations were included in the analysis. [ 10 ]
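The dependence Witherspoon et al. describe, of pairwise dissimilarity comparisons on the number of loci, can be sketched with a toy simulation. The population allele frequencies and the dissimilarity measure below are invented for illustration:

```python
import random

random.seed(1)

# Two hypothetical populations whose allele frequencies at each locus are
# drawn around different centers (0.35 vs 0.65). A pair's dissimilarity is
# the summed allele-count difference over all loci.

def population_freqs(n_loci, center):
    return [min(max(random.gauss(center, 0.1), 0.05), 0.95) for _ in range(n_loci)]

def individual(freqs):
    return [sum(random.random() < p for _ in range(2)) for p in freqs]

def dissimilarity(x, y):
    return sum(abs(a - b) for a, b in zip(x, y))

def omega(n_loci, trials=500):
    """How often a within-population pair is MORE dissimilar than a
    between-population pair (the Witherspoon et al. question)."""
    fa = population_freqs(n_loci, 0.35)
    fb = population_freqs(n_loci, 0.65)
    hits = sum(
        dissimilarity(individual(fa), individual(fa))
        > dissimilarity(individual(fa), individual(fb))
        for _ in range(trials)
    )
    return hits / trials

results = {n: omega(n) for n in (10, 100, 1000)}
for n, w in results.items():
    print(f"{n:>4} loci: within-pair more dissimilar in ~{w:.0%} of pairs")
```

With few loci the answer is "fairly often"; with many loci it approaches "never". Real between-population frequency differences are far smaller than in this toy's strongly differentiated populations, which is why Witherspoon et al. found that hundreds of loci still leave substantial overlap and only thousands of loci drive the rate to zero for the most distinct populations.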
Witherspoon et al. write:
Since an individual's geographic ancestry can often be inferred from his or her genetic makeup, knowledge of one's population of origin should allow some inferences about individual genotypes. To the extent that phenotypically important genetic variation resembles the variation studied here, we may extrapolate from genotypic to phenotypic patterns. ... However, the typical frequencies of alleles responsible for common complex diseases remain unknown. The fact that, given enough genetic data, individuals can be correctly assigned to their populations of origin is compatible with the observation that most human genetic variation is found within populations, not between them. It is also compatible with our finding that, even when the most distinct populations are considered and hundreds of loci are used, individuals are frequently more similar to members of other populations than to members of their own population. Thus, caution should be used when using geographic or genetic ancestry to make inferences about individual phenotypes. [ 10 ]
Witherspoon et al. add: "A final complication arises when racial classifications are used as proxies for geographic ancestry. Although many concepts of race are correlated with geographic ancestry, the two are not interchangeable, and relying on racial classifications will reduce predictive power still further." [ 10 ]
In a 2014 paper, Rasmus Grønfeldt Winther argues that "Lewontin's fallacy" is effectively a misnomer, as there really are two different sets of methods and questions at play in studying the genomic population structure of our species: "variance partitioning" and "clustering analysis". According to Winther, they are "two sides of the same mathematics coin" and neither "necessarily implies anything about the reality of human groups". [ 8 ] | https://en.wikipedia.org/wiki/Human_Genetic_Diversity:_Lewontin's_Fallacy |
The Human Genetics Commission (HGC) was an advisory non-departmental public body that advised the UK government on the ethical and social aspects of genetics . This included genetic testing , cloning and other aspects of molecular medicine. The Commission was created after a review of the UK government biotechnology advisory framework in 1999. It was chaired initially by the lawyer, Baroness Helena Kennedy QC and, from 2007 to 2009, the acting chair was Sir John Sulston . From 2009, the Commission was chaired by Professor Jonathan Montgomery and comprised 21 members whose backgrounds include the law , medicine , consumer affairs, philosophy and ethics, scientific research, and clinical practice. Representatives of the Chief Medical Officers of England , Scotland , Wales , and Northern Ireland also sat on the Commission. [ 1 ] [ 2 ]
The Commission was abolished when quangos were reviewed by the newly elected government in October 2010. The Commission published its final paper in May 2012. [ 1 ] [ 3 ] [ 4 ] | https://en.wikipedia.org/wiki/Human_Genetics_Commission |
The Human Genome Project ( HGP ) was an international scientific research project with the goal of determining the base pairs that make up human DNA , and of identifying, mapping and sequencing all of the genes of the human genome from both a physical and a functional standpoint. It started in 1990 and was completed in 2003. [ 1 ] It was the world's largest collaborative biological project. [ 2 ] Planning for the project began within the US government in 1984, and it officially launched in 1990. It was declared complete on 14 April 2003, and included about 92% of the genome. [ 3 ] The "complete genome" level was achieved in May 2021, with only 0.3% of the bases covered by potential issues. [ 4 ] [ 5 ] The final gapless assembly was finished in January 2022. [ 6 ]
Funding came from the US government through the National Institutes of Health (NIH) as well as numerous other groups from around the world. A parallel project was conducted outside the government by the Celera Corporation , or Celera Genomics, which was formally launched in 1998. Most of the government-sponsored sequencing was performed in twenty universities and research centres in the United States, the United Kingdom, Japan, France, Germany, and China, [ 7 ] working in the International Human Genome Sequencing Consortium (IHGSC).
The Human Genome Project originally aimed to map the complete set of nucleotides contained in a human haploid reference genome , of which there are more than three billion. The genome of any given individual is unique; mapping the human genome involved sequencing samples collected from a small number of individuals and then assembling the sequenced fragments to get a complete sequence for each of the 23 human chromosome pairs (22 pairs of autosomes and a pair of sex chromosomes, known as allosomes). Therefore, the finished human genome is a mosaic, not representing any one individual. Much of the project's utility comes from the fact that the vast majority of the human genome is the same in all humans.
The Human Genome Project was a publicly funded project initiated in 1990 with the objective of determining the DNA sequence of the entire euchromatic human genome within 13 years. [ 8 ] [ 9 ] The idea that mapping sets of inherited genes could localize a disease gene to a chromosomal region originated in the work of Ronald A. Fisher , whose work is also credited with later initiating the project. [ 10 ] [ 11 ] In 1977, Walter Gilbert and Frederick Sanger developed the first widely used methods of sequencing DNA, while Paul Berg pioneered the recombinant DNA techniques on which such work depends. [ 12 ] [ 13 ]
In May 1985, Robert Sinsheimer organized a workshop at the University of California, Santa Cruz , to discuss the feasibility of building a systematic reference genome using gene sequencing technologies. [ 14 ] Gilbert wrote the first plan for what he called The Human Genome Institute on the plane ride home from the workshop. [ 15 ] In March 1986, the Santa Fe Workshop was organized by Charles DeLisi and David Smith of the Department of Energy 's Office of Health and Environmental Research (OHER). [ 16 ] At the same time Renato Dulbecco , President of the Salk Institute for Biological Studies , first proposed the concept of whole genome sequencing in an essay in Science . [ 17 ] The published work, titled "A Turning Point in Cancer Research: Sequencing the Human Genome", was shortened from the original proposal of using the sequence to understand the genetic basis of breast cancer. [ 18 ] James Watson , one of the discoverers of the double helix shape of DNA in the 1950s, followed two months later with a workshop held at the Cold Spring Harbor Laboratory. Thus the idea for obtaining a reference sequence had three independent origins: Sinsheimer, Dulbecco and DeLisi. Ultimately it was the actions by DeLisi that launched the project. [ 19 ] [ 20 ] [ 21 ] [ 22 ]
The fact that the Santa Fe Workshop was motivated and supported by a federal agency opened a path, albeit a difficult and tortuous one, [ 23 ] for converting the idea into public policy in the United States. In a memo to the Assistant Secretary for Energy Research Alvin Trivelpiece , then-Director of the OHER Charles DeLisi outlined a broad plan for the project. [ 24 ] This started a long and complex chain of events that led to the approved reprogramming of funds that enabled the OHER to launch the project in 1986, and to recommend the first line item for the HGP, which was in President Reagan's 1988 budget submission, [ 23 ] and ultimately approved by Congress. Of particular importance in congressional approval was the advocacy of New Mexico Senator Pete Domenici , whom DeLisi had befriended. [ 25 ] Domenici chaired the Senate Committee on Energy and Natural Resources, as well as the Budget Committee, both of which were key in the DOE budget process. Congress added a comparable amount to the NIH budget, thereby beginning official funding by both agencies. [ citation needed ]
Trivelpiece sought and obtained the approval of DeLisi's proposal from Deputy Secretary William Flynn Martin . This chart [ 26 ] was used by Trivelpiece in the spring of 1986 to brief Martin and Under Secretary Joseph Salgado regarding his intention to reprogram $4 million to initiate the project with the approval of John S. Herrington . [ citation needed ] This reprogramming was followed by a line item budget of $13 million in the Reagan administration 's 1987 budget submission to Congress. [ 16 ] It subsequently passed both Houses. The project was planned to be completed within 15 years. [ 27 ]
In 1990 the two major funding agencies, DOE and the National Institutes of Health , developed a memorandum of understanding to coordinate plans and set the clock for the initiation of the Project to 1990. [ 28 ] At that time, David J. Galas was Director of the renamed "Office of Biological and Environmental Research" in the US Department of Energy's Office of Science and James Watson headed the NIH Genome Program. In 1993, Aristides Patrinos succeeded Galas and Francis Collins succeeded Watson, assuming the role of overall Project Head as Director of the NIH National Center for Human Genome Research (which would later become the National Human Genome Research Institute ). A working draft of the genome was announced in 2000 and the papers describing it were published in February 2001. A more complete draft was published in 2003, and genome "finishing" work continued for more than a decade after that. [ citation needed ]
The $3 billion project was formally founded in 1990 by the US Department of Energy and the National Institutes of Health, and was expected to take 15 years. [ 29 ] In addition to the United States, the international consortium comprised geneticists in the United Kingdom, France, Australia, China, and a myriad of other informal collaborations. [ 30 ] The project ended up costing less than expected, at about $2.7 billion (equivalent to about $5 billion in 2021). [ 7 ] [ 31 ] [ 32 ] Most of the genome was mapped over a two-year span. [ 33 ]
Two technologies enabled the project: gene mapping and DNA sequencing . The gene mapping technique of restriction fragment length polymorphism (RFLP) arose from the search for the location of the breast cancer gene by Mark Skolnick of the University of Utah, [ 34 ] which began in 1974. [ 35 ] Seeking a linkage marker for the gene, Skolnick, in collaboration with David Botstein , Ray White and Ron Davis , conceived of a way to construct a genetic linkage map of the human genome. This enabled scientists to launch the larger human genome effort. [ 36 ]
Because of widespread international cooperation and advances in the field of genomics (especially in sequence analysis ), as well as parallel advances in computing technology, a 'rough draft' of the genome was finished in 2000 (announced jointly by US President Bill Clinton and British Prime Minister Tony Blair on 26 June 2000). [ 37 ] [ 38 ] This first available rough draft assembly of the genome was completed by the Genome Bioinformatics Group at the University of California, Santa Cruz , primarily led by then-graduate student Jim Kent and his advisor David Haussler . [ 39 ] Ongoing sequencing led to the announcement of the essentially complete genome on 14 April 2003, two years earlier than planned. [ 40 ] [ 41 ] In May 2006, another milestone was passed on the way to completion of the project when the sequence of the very last chromosome was published in Nature . [ 42 ]
The various institutions, companies, and laboratories which participated in the Human Genome Project are listed below, according to the NIH : [ 7 ]
Notably, the project was not able to sequence all of the DNA found in human cells ; rather, the aim was to sequence only the euchromatic regions of the nuclear genome, which make up 92.1% of the human genome. The remaining 7.9% exists in scattered heterochromatic regions such as those found in centromeres and telomeres . These regions are by their nature generally more difficult to sequence and so were not included in the project's original plans. [ 43 ]
The Human Genome Project (HGP) was declared complete in April 2003. An initial rough draft of the human genome was available in June 2000, and by February 2001 a working draft had been completed and published, followed by the final sequence mapping of the human genome on 14 April 2003. Although this was reported to cover 99% of the euchromatic human genome with 99.99% accuracy, a major quality assessment published on 27 May 2004 indicated that over 92% of the sampling exceeded 99.99% accuracy, which met the intended goal. [ 44 ]
In March 2009, the Genome Reference Consortium (GRC) released a more accurate version of the human genome, but that still left more than 300 gaps, [ 45 ] while 160 such gaps remained in 2015. [ 46 ]
Though in May 2020 the GRC reported 79 "unresolved" gaps, [ 47 ] accounting for as much as 5% of the human genome, [ 48 ] months later, the application of new long-range sequencing techniques and a hydatidiform mole -derived cell line in which both copies of each chromosome are identical led to the first telomere-to-telomere, truly complete sequence of a human chromosome, the X chromosome . [ 49 ] Similarly, an end-to-end complete sequence of human autosomal chromosome 8 followed several months later. [ 50 ]
In April 2022, the Telomere-to-Telomere ( T2T ) consortium published a complete sequence of the non- Y chromosomes , highlighting the 8% of the human genome that the HGP had not sequenced. [ 51 ] [ 52 ] [ 53 ] [ 54 ] The T2T consortium then used this newly completed genome sequence [ 55 ] as a reference to identify over 2 million additional genomic variants. [ 56 ] In August 2023, Rhie et al. reported the successful sequencing of the previously missing regions of the Y chromosome, achieving the full sequencing of all 24 human chromosomes. [ 57 ] [ 58 ]
The sequencing of the human genome holds benefits for many fields, from molecular medicine to human evolution . The Human Genome Project, through its sequencing of the DNA, can help researchers understand diseases including: genotyping of specific viruses to direct appropriate treatment; identification of mutations linked to different forms of cancer; the design of medication and more accurate prediction of their effects; advancement in forensic applied sciences; biofuels and other energy applications; agriculture, animal husbandry , bioprocessing ; risk assessment ; bioarcheology , anthropology and evolution .
The sequence of the DNA is stored in databases available to anyone on the Internet. The US National Center for Biotechnology Information (and sister organizations in Europe and Japan) house the gene sequence in a database known as GenBank , along with sequences of known and hypothetical genes and proteins. Other organizations, such as the UCSC Genome Browser at the University of California, Santa Cruz, [ 59 ] and Ensembl [ 60 ] present additional data and annotation and powerful tools for visualizing and searching it. Computer programs have been developed to analyze the data because the data itself is difficult to interpret without such programs. Generally speaking, advances in genome sequencing technology have followed Moore's Law , a concept from computer science which states that integrated circuits can increase in complexity at an exponential rate. [ 61 ] This means that the speeds at which whole genomes can be sequenced can increase at a similar rate, as was seen during the development of the Human Genome Project. By 2023, the speed record for sequencing a genome was around five hours; more often, however, it takes weeks. [ 33 ]
The process of identifying the boundaries between genes and other features in a raw DNA sequence is called genome annotation and is in the domain of bioinformatics . While expert biologists make the best annotators, their work proceeds slowly, and computer programs are increasingly used to meet the high-throughput demands of genome sequencing projects. Beginning in 2008, a new technology known as RNA-seq was introduced that allowed scientists to directly sequence the messenger RNA in cells. This replaced previous methods of annotation, which relied on the inherent properties of the DNA sequence, with direct measurement, which was much more accurate. Today, annotation of the human genome and other genomes relies primarily on deep sequencing of the transcripts in every human tissue using RNA-seq. These experiments have revealed that over 90% of genes contain at least one and usually several alternative splice variants, in which the exons are combined in different ways to produce two or more gene products from the same locus. [ 62 ]
The genome published by the HGP does not represent the sequence of every individual's genome. It is the combined mosaic of a small number of anonymous donors, of African, European, and East Asian ancestry. The HGP genome is a scaffold for future work in identifying differences among individuals. [ citation needed ] Subsequent projects sequenced the genomes of multiple distinct ethnic groups, though as of 2019 there is still only one "reference genome". [ 63 ]
Key findings of the draft (2001) and complete (2004) genome sequences include:
The human genome has approximately 3.1 billion base pairs . [ 69 ] The Human Genome Project was started in 1990 with the goal of sequencing and identifying all base pairs in the human genetic instruction set, finding the genetic roots of disease and then developing treatments. It is considered a megaproject .
The genome was broken into smaller pieces, approximately 150,000 base pairs in length. [ 70 ] These pieces were then ligated into a type of vector known as " bacterial artificial chromosomes ", or BACs, which are derived from bacterial chromosomes that have been genetically engineered. The vectors containing the genome fragments can be inserted into bacteria, where they are copied by the bacterial DNA replication machinery. Each of these pieces was then sequenced separately as a small " shotgun " project and then assembled. The assembled 150,000-base-pair pieces were then put together in order to recreate the chromosomes. This is known as the " hierarchical shotgun " approach, because the genome is first broken into relatively large chunks, which are then mapped to chromosomes before being selected for sequencing. [ 71 ] [ 72 ]
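The hierarchical shotgun idea can be sketched in miniature: split a genome into mapped clone-sized pieces, shotgun each piece into overlapping reads, assemble each piece by overlap merging, and concatenate the assemblies in map order. Everything below is a toy (tiny sizes, error-free reads in known order), not a real assembler:

```python
import random

random.seed(2)

# Toy sketch of the hierarchical shotgun approach. All sizes are tiny
# stand-ins: real BAC clones carried ~150,000 bp, not 64. Each mapped
# "clone" is shotgunned into short overlapping reads, each read set is
# assembled by greedy overlap merging, and the assembled clones are
# concatenated in map order.

def shotgun_reads(piece, read_len=16, step=8):
    """Overlapping reads tiling one clone (idealized uniform coverage)."""
    return [piece[i:i + read_len] for i in range(0, len(piece) - read_len + 1, step)]

def assemble(reads, min_overlap=8):
    """Greedily merge reads on their longest suffix/prefix overlap."""
    contig = reads[0]
    for read in reads[1:]:
        for k in range(len(read), min_overlap - 1, -1):
            if contig.endswith(read[:k]):
                contig += read[k:]
                break
    return contig

genome = "".join(random.choice("ACGT") for _ in range(256))
clones = [genome[i:i + 64] for i in range(0, 256, 64)]  # mapped large pieces

reconstruction = "".join(assemble(shotgun_reads(c)) for c in clones)
print(reconstruction == genome)
```

Real shotgun assembly is far harder: reads arrive unordered and error-prone, and repeated sequence creates ambiguous overlaps. The BAC-by-BAC decomposition was precisely what kept each assembly problem tractable.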
Funding came from the US government through the National Institutes of Health in the United States, and a UK charity organization, the Wellcome Trust , as well as numerous other groups from around the world. The funding supported a number of large sequencing centers including those at Whitehead Institute , the Wellcome Sanger Institute (then called The Sanger Centre) based at the Wellcome Genome Campus , Washington University in St. Louis , and Baylor College of Medicine . [ 29 ] [ 73 ]
The UN Educational, Scientific and Cultural Organization (UNESCO) served as an important channel for the involvement of developing countries in the Human Genome Project. [ 74 ]
In 1998 a similar, privately funded quest was launched by the American researcher Craig Venter and his firm Celera Genomics. Venter was a scientist at the NIH during the early 1990s when the project was initiated. The $300 million Celera effort was intended to proceed at a faster pace and at a fraction of the cost of the roughly $3 billion publicly funded project. While the Celera project focused its efforts on production sequencing and assembly of the human genome, the public HGP also funded mapping and sequencing of the worm , fly , and yeast genomes, funding of databases, development of new technologies, support for bioinformatics and ethics programs, and polishing and assessment of the genome assembly. [ 75 ] Both the Celera and public approaches spent roughly $250 million on the production sequencing effort. [ 76 ] For sequence assembly, Celera made use of publicly available maps in GenBank ; although Celera was capable of generating such data itself, its free availability was "beneficial" to the privately funded project. [ 65 ]
Celera used a technique called whole genome shotgun sequencing , employing pairwise end sequencing , [ 77 ] which had been used to sequence bacterial genomes of up to six million base pairs in length, but not for anything nearly as large as the three billion base pair human genome.
Celera initially announced that it would seek patent protection on "only 200–300" genes, but later amended this to seeking "intellectual property protection" on "fully-characterized important structures" amounting to 100–300 targets. The firm eventually filed preliminary ("place-holder") patent applications on 6,500 whole or partial genes.
Celera also promised to publish their findings in accordance with the terms of the 1996 " Bermuda Statement ", by releasing new data annually (the HGP released its new data daily), although, unlike the publicly funded project, they would not permit free redistribution or scientific use of the data. The publicly funded competitors were compelled to release the first draft of the human genome before Celera for this reason. On 7 July 2000, the UCSC Genome Bioinformatics Group released the first working draft on the web. The scientific community downloaded about 500 GB of information from the UCSC genome server in the first 24 hours of free and unrestricted access. [ 78 ]
In March 2000 President Clinton , along with Prime Minister Tony Blair in a dual statement, urged that all researchers who wished to research the sequence should have "unencumbered access" to the genome sequence. [ 79 ] The statement sent Celera's stock plummeting and dragged down the biotechnology -heavy Nasdaq . The biotechnology sector lost about $50 billion in market capitalization in two days. [ citation needed ]
Although the working draft was announced in June 2000, it was not until February 2001 that Celera and the HGP scientists published details of their drafts. Special issues of Nature (which published the publicly funded project's scientific paper ) [ 65 ] described the methods used to produce the draft sequence and offered analysis of the sequence. These drafts covered about 83% of the genome (90% of the euchromatic regions, with 150,000 gaps and the order and orientation of many segments not yet established). In February 2001, at the time of the joint publications, press releases announced that the project had been completed by both groups. Improved drafts were announced in 2003 and 2005, bringing the sequence to approximately 92% completion. [ citation needed ]
In the International Human Genome Sequencing Consortium (IHGSC) public-sector HGP, researchers collected blood (female) or sperm (male) samples from a large number of donors. Only a few of many collected samples were processed as DNA resources. Thus the donor identities were protected so neither donors nor scientists could know whose DNA was sequenced. DNA clones taken from many different libraries were used in the overall project, with most of those libraries being created by Pieter J. de Jong. Much of the sequence (>70%) of the reference genome produced by the public HGP came from a single anonymous male donor from Buffalo, New York , ( code name RP11; the "RP" refers to Roswell Park Comprehensive Cancer Center ). [ 80 ] [ 81 ]
HGP scientists used white blood cells from the blood of two male and two female donors (randomly selected from a pool of 20 of each sex) – each donor yielding a separate DNA library. One of these libraries (RP11) was used considerably more than the others because of quality considerations. One minor technical issue is that male samples contain just over half as much DNA from the sex chromosomes (one X chromosome and one Y chromosome ) compared to female samples (which contain two X chromosomes ). The other 22 chromosomes (the autosomes) are the same for both sexes.
Although the main sequencing phase of the HGP has been completed, studies of DNA variation continued in the International HapMap Project , whose goal was to identify patterns of single-nucleotide polymorphism (SNP) groups (called haplotypes , or "haps"). The DNA samples for the HapMap came from a total of 270 individuals: Yoruba people in Ibadan , Nigeria; Japanese people in Tokyo; Han Chinese in Beijing; and the French Centre d'Etude du Polymorphisme Humain (CEPH) resource, which consisted of residents of the United States having ancestry from Western and Northern Europe.
In the Celera Genomics private-sector project, DNA from five different individuals was used for sequencing. The lead scientist of Celera Genomics at that time, Craig Venter, later acknowledged (in a public letter to the journal Science ) that his DNA was one of 21 samples in the pool, five of which were selected for use. [ 82 ] [ 83 ]
With the sequence in hand the next step was to identify the genetic variants that increase the risk for common diseases like cancer and diabetes. [ 28 ] [ 70 ]
It is anticipated that detailed knowledge of the human genome will offer new avenues for advances in medicine and biotechnology . Clear practical results of the project emerged even before the work was finished. For example, a number of companies, such as Myriad Genetics , started offering easy ways to administer genetic tests that can show predisposition to a variety of illnesses, including breast cancer, hemostasis disorders , cystic fibrosis , liver diseases, and many others. Also, the etiologies for cancers, Alzheimer's disease and other areas of clinical interest are considered likely to benefit from genome information and possibly may lead in the long term to significant advances in their management. [ 84 ] [ 85 ]
There are also many tangible benefits for biologists. For example, a researcher investigating a certain form of cancer may have narrowed down their search to a particular gene. By visiting the human genome database on the internet, this researcher can examine what other scientists have written about this gene, including (potentially) the three-dimensional structure of its product, its functions, its evolutionary relationships to other human genes (or to genes in mice, yeast, or fruit flies), possible detrimental mutations, interactions with other genes, body tissues in which this gene is activated, diseases associated with this gene, and other data. Further, a deeper understanding of the disease processes at the level of molecular biology may suggest new therapeutic procedures. Given the established importance of DNA in molecular biology and its central role in determining the fundamental operation of cellular processes , it is likely that expanded knowledge in this area will facilitate medical advances in numerous areas of clinical interest that may not have been possible without it. [ 86 ]
Analysis of similarities between DNA sequences from different organisms is also opening new avenues in the study of evolution . In many cases, evolutionary questions can now be framed in terms of molecular biology ; indeed, many major evolutionary milestones (the emergence of the ribosome and organelles , the development of embryos with body plans, the vertebrate immune system ) can be related to the molecular level. Many questions about the similarities and differences between humans and their closest relatives (the primates , and indeed the other mammals ) are expected to be illuminated by the data in this project. [ 84 ] [ 87 ]
The project inspired and paved the way for genomic work in other fields, such as agriculture. For example, by studying the genetic composition of Triticum aestivum , the world's most commonly used bread wheat, great insight has been gained into the ways that domestication has affected the evolution of the plant. [ 88 ] It is being investigated which loci are most susceptible to manipulation, and how this plays out in evolutionary terms. Genetic sequencing has allowed these questions to be addressed for the first time, as specific loci can be compared in wild and domesticated strains of the plant. This will allow for advances in genetic modification in the future which could yield healthier and disease-resistant wheat crops, among other things.
At the onset of the Human Genome Project, several ethical, legal, and social concerns were raised in regard to how increased knowledge of the human genome could be used to discriminate against people . One of the main concerns of most individuals was the fear that both employers and health insurance companies would refuse to hire individuals or refuse to provide insurance to people because of a health concern indicated by someone's genes. [ 89 ] In 1996, the United States passed the Health Insurance Portability and Accountability Act (HIPAA), which protects against the unauthorized and non-consensual release of individually identifiable health information to any entity not actively engaged in the provision of healthcare services to a patient. [ 90 ]
Along with identifying all of the approximately 20,000–25,000 genes in the human genome (estimated at between 80,000 and 140,000 at the start of the project), the Human Genome Project also sought to address the ethical, legal, and social issues that were created by the onset of the project. [ 91 ] For that, the Ethical, Legal, and Social Implications (ELSI) program was founded in 1990. Five percent of the annual budget was allocated to address the ELSI arising from the project. [ 29 ] [ 92 ] This budget started at approximately $1.57 million in the year 1990, but increased to approximately $18 million in the year 2014. [ 93 ]
While the project may offer significant benefits to medicine and scientific research, some authors have emphasized the need to address the potential social consequences of mapping the human genome. Historian of science Hans-Jörg Rheinberger wrote that "the prospect of 'molecularizing' diseases and their possible cure will have a profound impact on what patients expect from medical help, and on a new generation of doctors' perception of illness." [ 94 ]
In July 2024, an investigation by Undark Magazine , [ 95 ] co-published with STAT News , [ 96 ] revealed for the first time several ethical lapses by the scientists spearheading the Human Genome Project. Chief among these was the use of roughly 75 percent of a single donor's DNA in constructing the reference genome, despite informed consent forms provided to each of the 20 anonymous donors indicating that no more than 10 percent of any one donor's DNA would be used. About 10 percent of the reference genome belonged to one of the project's lead scientists, Pieter De Jong. [ 95 ]
| https://en.wikipedia.org/wiki/Human_Genome_Project |
Human HGF plasmid DNA therapy of cardiomyocytes is being examined as a potential treatment for coronary artery disease (a major cause of myocardial infarction (MI)), as well as treatment for the damage that occurs to the heart after MI. [ 1 ] [ 2 ] After MI, the myocardium suffers from reperfusion injury which leads to death of cardiomyocytes and detrimental remodelling of the heart, consequently reducing proper cardiac function. [ 2 ] Transfection of cardiac myocytes with human HGF reduces ischemic reperfusion injury after MI. The benefits of HGF therapy include preventing improper remodelling of the heart and ameliorating heart dysfunction post-MI. [ 1 ] [ 3 ]
Human hepatocyte growth factor (HGF) is an 80kD [ 1 ] pleiotropic protein that is endogenously produced by a variety of cell types from the mesenchymal cell lineage (such as cardiomyocytes and neurons). [ 4 ] It is produced and proteolytically cleaved to its active state in response to cellular injury or during apoptosis . HGF binds to c-met receptors found on mesenchymal cell types to produce its many different effects such as increased cellular motility, morphogenesis, proliferation and differentiation. [ 5 ] Research has shown that HGF has potent angiogenic , anti-fibrotic , and anti-apoptotic properties. [ 1 ] [ 4 ] [ 5 ] [ 6 ] [ 3 ] [ 7 ] [ 8 ] It has also been shown to act as a chemoattractant for adult mesenchymal stem cells via c-met receptor binding. [ 4 ] [ 5 ]
Animal research has demonstrated that administration of HGF cDNA plasmids into ischemic cardiac tissue can increase cardiac function (improved left ventricular ejection fraction and fractional shortening compared to control subjects) after induced MI or ischemia. [ 6 ] [ 3 ] Transfection with HGF plasmids in damaged cardiac tissue also promotes angiogenesis (increased capillary density compared to control subjects), as well as decreasing detrimental remodelling of the tissue at the site of injury (decreased fibrotic deposition). [ 4 ] [ 6 ] [ 7 ] The increased production of HGF by transfected cardiomyocytes during injury has also been shown to be a powerful chemoattractant of adult mesenchymal stem cells via HGF/c-Met binding. [ 4 ] [ 5 ] The mitogenic and morphogenic properties of HGF induce recruited stem cells to take on cardiomyocyte phenotypes, potentially helping in the healing of ischemic tissue. [ 5 ] The benefits of HGF in experimental models have led to its investigation in clinical trials. A phase I clinical trial entailed injecting an adenovirus vector carrying the human HGF gene (Ad-hHGF) into the coronary vessels supplying ischemic tissue. Results demonstrated that it is safe to administer the Ad-hHGF vector to patients with coronary artery disease, in hopes of re-vascularizing damaged tissue in patients for whom coronary artery bypass surgery (CABG) or percutaneous coronary intervention (PCI) is not available or possible. Despite the trial's limitations (i.e., no assessment of left ventricular function and a small sample size), at the 12-month follow-up none of the patients receiving the treatment had been readmitted to hospital for MI, angina or aggravated heart failure. [ 1 ]
The Human Medicines Regulations 2012 in the United Kingdom were created in 2012 under the statutory authority of the European Communities Act 1972 and the Medicines Act 1968 . The body responsible for their upkeep is the Medicines and Healthcare products Regulatory Agency . The regulations partially repealed the Medicines Act 1968 in line with EU legislation. [ 1 ] [ 2 ] [ 3 ] [ 4 ]
In October 2020, the regulations were amended to expand the workforce eligible to administer COVID-19 vaccines, enabling additional healthcare professionals to vaccinate the public. This was a temporary provision, but in January 2022 it was announced that it would be made permanent, as would the provision for community pharmacy contractors to provide COVID-19 and flu vaccines “away from their normal registered premises”. [ 5 ]
Regulation 174 provides an exemption from the authorisation requirement in Regulation 46, allowing the sale or supply of any medicinal product to be temporarily authorised by the licensing authority (MHRA) in response to the suspected or confirmed spread of pathogenic agents, toxins, chemical agents or nuclear radiation. [ 6 ]
This article relating to law in the United Kingdom , or its constituent jurisdictions, is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Human_Medicines_Regulations_2012 |
The Human Metabolome Database (HMDB) [ 1 ] [ 2 ] [ 3 ] [ 4 ] is a comprehensive, high-quality, freely accessible, online database of small molecule metabolites found in the human body. It was created by the Human Metabolome Project funded by Genome Canada [ 5 ] and is one of the first dedicated metabolomics databases. The HMDB facilitates human metabolomics research, including the identification and characterization of human metabolites using NMR spectroscopy , gas chromatography–mass spectrometry (GC-MS) and liquid chromatography–mass spectrometry (LC-MS). To aid in this discovery process, the HMDB contains three kinds of data: 1) chemical data, 2) clinical data, and 3) molecular biology / biochemistry data (Fig. 1–3). The chemical data includes 41,514 metabolite structures with detailed descriptions along with nearly 10,000 NMR, GC-MS and LC-MS spectra.
The clinical data includes information on >10,000 metabolite- biofluid concentrations and metabolite concentration information on more than 600 different human diseases . The biochemical data includes 5,688 protein (and DNA ) sequences and more than 5,000 biochemical reactions that are linked to these metabolite entries. [ 5 ] Each metabolite entry in the HMDB contains more than 110 data fields with 2/3 of the information being devoted to chemical/clinical data and the other 1/3 devoted to enzymatic or biochemical data. Many data fields are hyperlinked to other databases ( KEGG , MetaCyc , PubChem , Protein Data Bank , ChEBI , Swiss-Prot , and GenBank ) and a variety of structure and pathway viewing applets. The HMDB database supports extensive text, sequence, spectral, chemical structure and relational query searches. It has been widely used in metabolomics, clinical chemistry , biomarker discovery and general biochemistry education.
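Since HMDB entries are available as bulk downloads, their structured fields can be consumed programmatically. The sketch below parses a minimal XML fragment modeled on an HMDB-style metabolite entry; the tag names (`metabolite`, `accession`, `name`, `chemical_formula`) and the sample values are illustrative assumptions, not the database's documented schema.

```python
import xml.etree.ElementTree as ET

# A minimal fragment modeled on an HMDB-style metabolite entry.
# Tag names and values here are illustrative assumptions, not the official schema.
SAMPLE = """
<hmdb>
  <metabolite>
    <accession>HMDB0000001</accession>
    <name>1-Methylhistidine</name>
    <chemical_formula>C7H11N3O2</chemical_formula>
  </metabolite>
  <metabolite>
    <accession>HMDB0000002</accession>
    <name>1,3-Diaminopropane</name>
    <chemical_formula>C3H10N2</chemical_formula>
  </metabolite>
</hmdb>
"""

def index_metabolites(xml_text):
    """Map accession -> (name, chemical formula) for each metabolite entry."""
    root = ET.fromstring(xml_text)
    return {
        m.findtext("accession"): (m.findtext("name"), m.findtext("chemical_formula"))
        for m in root.findall("metabolite")
    }

index = index_metabolites(SAMPLE)
print(index["HMDB0000001"])  # ('1-Methylhistidine', 'C7H11N3O2')
```

An index built this way supports the kind of accession-based lookup that cross-references from databases such as KEGG or ChEBI would use.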
Four additional databases, DrugBank, [ 6 ] [ 7 ] [ 8 ] T3DB, [ 9 ] SMPDB [ 10 ] and FooDB are also part of the HMDB suite of databases. DrugBank contains equivalent information on ~1,600 drugs and drug metabolites, T3DB contains information on 3,100 common toxins and environmental pollutants , SMPDB contains pathway diagrams for 700 human metabolic and disease pathways, while FooDB contains equivalent information on ~28,000 food components and food additives .
The first version of HMDB was released on January 1, 2007, [ 1 ] followed by subsequent versions on January 1, 2009 (version 2.0), [ 2 ] August 1, 2009 (version 2.5), September 18, 2012 (version 3.0), [ 4 ] January 1, 2013 (version 3.5), [ 11 ] 2017 (version 4.0), [ 12 ] and 2022 (version 5.0). [ 11 ] Details for each of the major HMDB versions (up to version 5.0) are provided in Table 1.
All data in HMDB is non-proprietary or is derived from a non-proprietary source. It is freely accessible and available to anyone. In addition, nearly every data item is fully traceable and explicitly referenced to the original source. HMDB data is available through a public web interface and downloads. | https://en.wikipedia.org/wiki/Human_Metabolome_Database |
The Human Microbiome Project ( HMP ) was a United States National Institutes of Health (NIH) research initiative to improve understanding of the microbiota involved in human health and disease. Launched in 2007, [ 1 ] the first phase (HMP1) focused on identifying and characterizing human microbiota. The second phase, known as the Integrative Human Microbiome Project (iHMP) launched in 2014 with the aim of generating resources to characterize the microbiome and elucidating the roles of microbes in health and disease states. The program received $170 million in funding by the NIH Common Fund from 2007 to 2016. [ 2 ]
Important components of the HMP were culture-independent methods of microbial community characterization, such as metagenomics (which provides a broad genetic perspective on a single microbial community), as well as extensive whole genome sequencing (which provides a "deep" genetic perspective on certain aspects of a given microbial community, i.e. of individual bacterial species). The latter served as reference genomic sequences — 3000 such sequences of individual bacterial isolates are currently planned — for comparison purposes during subsequent metagenomic analysis. The project also financed deep sequencing of bacterial 16S rRNA sequences amplified by polymerase chain reaction from human subjects. [ 3 ]
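The 16S rRNA sequencing described above is typically summarized by classifying each amplified read to a taxon and computing relative abundances. The toy sketch below illustrates that tallying step; the genus names and read counts are hypothetical, not HMP data.

```python
from collections import Counter

# Hypothetical 16S rRNA amplicon reads, each already classified to a genus.
reads = (["Bacteroides"] * 60) + (["Prevotella"] * 25) + (["Lactobacillus"] * 15)

counts = Counter(reads)
total = sum(counts.values())

# Relative abundance: fraction of classified reads assigned to each genus.
relative_abundance = {genus: n / total for genus, n in counts.items()}

print(relative_abundance["Bacteroides"])  # 0.6
```

Real pipelines add steps this sketch omits (quality filtering, chimera removal, clustering reads into operational taxonomic units before classification), but the final community profile is still a table of per-taxon fractions like this one.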
Prior to the HMP launch, it was often reported in popular media and scientific literature that there are about 10 times as many microbial cells and 100 times as many microbial genes in the human body as there are human cells; this figure was based on estimates that the human microbiome includes around 100 trillion bacterial cells and an adult human typically has around 10 trillion human cells. [ 4 ] In 2014 the American Academy of Microbiology published a FAQ that emphasized that the number of microbial cells and the number of human cells are both estimates, and noted that recent research had arrived at a new estimate of the number of human cells at around 37 trillion cells, meaning that the ratio of microbial to human cells is probably about 3:1. [ 4 ] [ 5 ] In 2016 another group published a new estimate of ratio as being roughly 1:1 (1.3:1, with "an uncertainty of 25% and a variation of 53% over the population of standard 70 kg males"). [ 6 ] [ 7 ]
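The competing ratios quoted above follow directly from the cited cell-count estimates; a back-of-the-envelope calculation (in trillions of cells) reproduces them:

```python
# Cell-count estimates cited above, in trillions of cells.
microbial_cells = 100   # long-standing estimate of microbial cells in the body
human_cells_old = 10    # older estimate of human cells (gives the 10:1 figure)
human_cells_new = 37    # revised estimate of human cells

ratio_old = microbial_cells / human_cells_old
ratio_new = microbial_cells / human_cells_new

print(f"old ratio: ~{ratio_old:.0f}:1")      # ~10:1, the widely repeated figure
print(f"revised ratio: ~{ratio_new:.1f}:1")  # ~2.7:1, i.e. "about 3:1"
```

The later 1.3:1 estimate came from revising the microbial count downward as well, which is why it is not reproduced by simply swapping the human-cell denominator.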
Despite the staggering number of microbes in and on the human body, little was known about their roles in human health and disease. Many of the organisms that make up the microbiome have not been successfully cultured , identified, or otherwise characterized. Organisms thought to be found in the human microbiome, however, may generally be categorized as bacteria , members of domain Archaea , yeasts , and single-celled eukaryotes as well as various helminth parasites and viruses , the latter including viruses that infect the cellular microbiome organisms (e.g., bacteriophages ). The HMP set out to discover and characterize the human microbiome, emphasizing oral, skin, vaginal, gastrointestinal, and respiratory sites.
The HMP will address some of the most inspiring, vexing and fundamental scientific questions today. Importantly, it also has the potential to break down the artificial barriers between medical microbiology and environmental microbiology. It is hoped that the HMP will not only identify new ways to determine health and predisposition to diseases but also define the parameters needed to design, implement and monitor strategies for intentionally manipulating the human microbiota, to optimize its performance in the context of an individual's physiology. [ 8 ]
The HMP has been described as "a logical conceptual and experimental extension of the Human Genome Project ." [ 8 ] In 2007 the HMP was listed on the NIH Roadmap for Medical Research [ 9 ] as one of the New Pathways to Discovery . Organized characterization of the human microbiome is also being done internationally under the auspices of the International Human Microbiome Consortium. [ 10 ] The Canadian Institutes of Health Research , through the CIHR Institute of Infection and Immunity, is leading the Canadian Microbiome Initiative to develop a coordinated and focused research effort to analyze and characterize the microbes that colonize the human body and their potential alteration during chronic disease states. [ 11 ]
The HMP involved participation from many research institutions, including Stanford University , the Broad Institute , Virginia Commonwealth University , Washington University , Northeastern University , MIT , the Baylor College of Medicine , and many others . Contributions included data evaluation, construction of reference sequence data sets, ethical and legal studies, technology development, and more. [ citation needed ]
The HMP1 included research efforts from many institutions. [ 12 ] The HMP1 set the following goals: [ 13 ]
In 2014, the NIH launched the second phase of the project, known as the Integrative Human Microbiome Project (iHMP). The goal of the iHMP was to produce resources to create a complete characterization of the human microbiome, with a focus on understanding the presence of microbiota in health and disease states. [ 14 ] The project mission, as stated by the NIH, was as follows:
The iHMP will create integrated longitudinal datasets of biological properties from both the microbiome and host from three different cohort studies of microbiome-associated conditions using multiple "omics" technologies. [ 14 ]
The project encompassed three sub-projects carried out at multiple institutions. Study methods included 16S rRNA gene profiling, whole metagenome shotgun sequencing , whole genome sequencing , metatranscriptomics , metabolomics / lipidomics , and immunoproteomics . The key findings of the iHMP were published in 2019. [ 15 ]
The Vaginal Microbiome Consortium team at Virginia Commonwealth University led research on the Pregnancy & Preterm Birth project with a goal of understanding how the microbiome changes during the gestational period and influences the neonatal microbiome. The project was also concerned with the role of the microbiome in the occurrence of preterm births, which, according to the CDC , account for nearly 10% of all births [ 16 ] and constitutes the second leading cause of neonatal death. [ 17 ] The project received $7.44 million in NIH funding. [ 18 ]
The Inflammatory Bowel Disease Multi'omics Data (IBDMDB) team was a multi-institution group of researchers focused on understanding how the gut microbiome changes longitudinally in adults and children suffering from IBD . IBD is an inflammatory autoimmune disorder that manifests as either Crohn's disease or ulcerative colitis and affects about one million Americans. [ 19 ] Research participants included cohorts from Massachusetts General Hospital , Emory University Hospital / Cincinnati Children's Hospital , and Cedars-Sinai Medical Center . [ 20 ]
Researchers from Stanford University and the Jackson Laboratory of Genomic Medicine worked together to perform a longitudinal analysis on the biological processes that occur in the microbiome of patients at risk for Type 2 Diabetes . T2D affects nearly 20 million Americans with at least 79 million pre-diabetic patients, [ 21 ] and is partially characterized by marked shifts in the microbiome compared to healthy individuals. The project aimed to identify molecules and signaling pathways that play a role in the etiology of the disease. [ 22 ]
The impact to date of the HMP may be partially assessed by examination of research sponsored by the HMP. Over 650 peer-reviewed publications were listed on the HMP website from June 2009 to the end of 2017, and had been cited over 70,000 times. [ 23 ] At this point the website was archived and is no longer updated, although datasets do continue to be available. [ 24 ]
Major categories of work funded by HMP included:
Developments funded by HMP included:
On 13 June 2012, a major milestone of the HMP was announced by the NIH director Francis Collins . [ 51 ] The announcement was accompanied with a series of coordinated articles published in Nature [ 52 ] [ 53 ] and several journals including the Public Library of Science (PLoS) on the same day. [ 54 ] [ 55 ] [ 56 ] By mapping the normal microbial make-up of healthy humans using genome sequencing techniques, the researchers of the HMP have created a reference database and the boundaries of normal microbial variation in humans. [ 57 ]
From 242 healthy U.S. volunteers, more than 5,000 samples were collected from tissues at 15 (men) to 18 (women) body sites, such as the mouth, nose, skin, lower intestine (stool) and vagina. All the DNA, human and microbial, was analyzed with DNA sequencing machines. The microbial genome data were extracted by identifying the bacteria-specific ribosomal RNA, 16S rRNA . The researchers calculated that more than 10,000 microbial species occupy the human ecosystem, and they have identified 81–99% of the genera . In addition to establishing the human microbiome reference database, the HMP project also discovered several "surprises", which include: [ citation needed ]
Among the first clinical applications utilizing the HMP data, as reported in several PLoS papers, the researchers found a shift to lower species diversity in the vaginal microbiome of pregnant women in preparation for birth, and a high viral DNA load in the nasal microbiome of children with unexplained fevers. Other studies using the HMP data and techniques include the role of the microbiome in various diseases of the digestive tract, skin and reproductive organs, and in childhood disorders. [ 51 ]
Pharmaceutical microbiologists have considered the implications of the HMP data in relation to the presence / absence of 'objectionable' microorganisms in non-sterile pharmaceutical products and in relation to the monitoring of microorganisms within the controlled environments in which products are manufactured. The latter also has implications for media selection and disinfectant efficacy studies. [ 58 ] | https://en.wikipedia.org/wiki/Human_Microbiome_Project |
Human Proteinpedia , which is closely associated with Institute of Bioinformatics (IOB), Bangalore and Johns Hopkins University , is a portal for sharing and integration of human proteomic data. [ 1 ] [ 2 ] It allows research laboratories to contribute and maintain protein annotations. Human Protein Reference Database ( HPRD ) integrates data deposited in Human Proteinpedia with existing literature-curated information in the context of an individual protein. [ 3 ] [ 4 ] In essence, researchers can add new data to HPRD by registering with Human Proteinpedia. The data deposited in Human Proteinpedia are freely available for download. Emphasizing the importance of depositing proteomics data in public repositories, Nature Methods recommended Human Proteinpedia in its editorial. [ 5 ] More than 70 labs participate in this effort.
Data pertaining to post-translational modifications , protein–protein interactions , tissue expression, expression in cell lines , subcellular localization and enzyme substrate relationships can be submitted to Human Proteinpedia.
Protein annotations present in Human Proteinpedia are derived from a number of platforms such as
This portal that allows adding of protein information was developed as a collaborative effort between the laboratory of Dr. Akhilesh Pandey at Johns Hopkins University and the Institute of Bioinformatics .
* What are the criteria for contributing data?
Any investigator who fulfills the following criteria can contribute data:
i) provides experimentally derived data, and,
ii) is willing to share data, and,
iii) is willing to be listed as the 'contributor' of the data
* Can I contribute data anonymously?
Anonymous contributions are not allowed. Contributor details should be clearly presented while contributing data.
* Can bioinformatically predicted data be shared through Human Proteinpedia?
Predictions of any type are not allowed. Contributed data should be derived experimentally and should be accompanied with experimental evidence.
* Is the contributed data subjected to peer review?
The data are not subjected to peer review and the actual experimental data (raw or processed) should be provided.
* What will happen to conflicting results from different laboratories?
In cases where a given entry is documented as erroneous, we will consult with the contributing group(s) about deleting the entry. | https://en.wikipedia.org/wiki/Human_Proteinpedia |
The Human Reproductive Cloning Act 2001 (c. 23) was an Act of the Parliament of the United Kingdom "to prohibit the placing in a woman of a human embryo which has been created otherwise than by fertilisation". The act received Royal Assent on 4 December 2001.
On 14 January 2001, the British government passed The Human Fertilisation and Embryology (Research Purposes) Regulations 2001 [ 1 ] to amend the Human Fertilisation and Embryology Act 1990 by extending allowable reasons for embryo research to permit research around stem cells and cell nuclear replacement, thus allowing therapeutic cloning . However, on 15 November 2001, a pro-life group won a High Court legal challenge, which struck down the regulation and effectively left all forms of cloning unregulated in the UK. Their hope was that Parliament would fill this gap by passing prohibitive legislation. [ 2 ] [ 3 ] Parliament was quick to pass the Human Reproductive Cloning Act 2001 in order to explicitly prohibit reproductive cloning. The remaining gap with regard to therapeutic cloning was closed when the appeals courts reversed the previous decision of the High Court. [ 4 ]
The act was repealed and replaced by the Human Fertilisation and Embryology Act 2008 . | https://en.wikipedia.org/wiki/Human_Reproductive_Cloning_Act_2001 |
The Human Tissue (Scotland) Act 2006 (asp 4) is an Act of the Scottish Parliament enacted to consolidate and modernise the legal framework governing the removal, retention, and use of human tissue in Scotland . It replaces earlier legislation, including aspects of the Anatomy Act 1984 , and addresses ethical and legal concerns that had emerged in the early 2000s concerning the treatment of human remains. The Act regulates three principal uses of human tissue: its donation—primarily for transplantation , but also for research, education or training, and audit purposes; its removal, retention and use following a post-mortem examination; and its regulated use in anatomical examination and display. [ † 1 ]
By introducing the principle of "authorisation" (analogous to "consent" in other jurisdictions), the Act aims to ensure that an individual's wishes regarding the use of their body or body parts after death are respected. Its provisions represent a distinct legal approach from that adopted elsewhere in the United Kingdom, where the comparable legislation is the Human Tissue Act 2004 . [ 1 ]
In June 2017, the Scottish Government announced its intention to introduce legislation establishing an opt-out system for organ donation , with the objective of increasing donation rates. [ 2 ] This policy was implemented on 26 March 2021 through the Human Tissue (Authorisation) (Scotland) Act 2019 , which introduced a system of deemed authorisation for organ and tissue donation. Under this system, adults are presumed to have authorised donation unless they have explicitly opted out. Healthcare professionals are required to make reasonable efforts to determine the wishes of the deceased before proceeding. [ † 2 ] | https://en.wikipedia.org/wiki/Human_Tissue_(Scotland)_Act_2006 |
Human accelerated regions ( HARs ), first described in August 2006, [ 1 ] [ 2 ] are a set of 49 segments of the human genome that are conserved throughout vertebrate evolution but are strikingly different in humans . They are named according to their degree of difference between humans and chimpanzees (HAR1 showing the largest degree of human-chimpanzee differences). Found by scanning through genomic databases of multiple species, some of these highly mutated areas may contribute to human-specific traits. Others may represent loss-of-function mutations, possibly due to the action of biased gene conversion [ 2 ] [ 3 ] rather than adaptive evolution . [ 4 ] [ 5 ] [ 6 ]
Several of the HARs encompass genes known to produce proteins important in neurodevelopment. HAR1 is a 106-base-pair stretch found on the long arm of chromosome 20 overlapping with part of the RNA genes HAR1F and HAR1R. HAR1F is active in the developing human brain. The HAR1 sequence is found (and conserved) in chickens and chimpanzees but is not present in the fish or frogs that have been studied. There are 18 base-pair differences between humans and chimpanzees, far more than expected given its history of conservation. [ 1 ]
HAR2 includes HACNS1 a gene enhancer "that may have contributed to the evolution of the uniquely opposable human thumb , and possibly also modifications in the ankle or foot that allow humans to walk on two legs". Evidence to date shows that of the 110,000 gene enhancer sequences identified in the human genome , HACNS1 has undergone the most change during the evolution of humans following the split with the ancestors of chimpanzees . [ 7 ] The substitutions in HAR2 may have resulted in loss of binding sites for a repressor, possibly due to biased gene conversion. [ 8 ] [ 9 ]
The HAR regions may be downloaded from:
NCBI: https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE180714
UCSC Genome Browser: https://genome.ucsc.edu/cgi-bin/hgTracks?db=hg38&lastVirtModeType=default&lastVirtModeExtraState=&virtModeType=default&virtMode=0&nonVirtPosition=&position=chrY%3A12356610%2D12382346&hgsid=2451636823_Q4Hu6KRd6b9l1ONskk5JMRWgM1oL | https://en.wikipedia.org/wiki/Human_accelerated_regions |
A human artificial chromosome ( HAC ) is a microchromosome that can act as a new chromosome in a population of human cells. That is, instead of 46 chromosomes, the cell could have 47 with the 47th being very small, roughly 6–10 megabases (Mb) in size instead of 50–250 Mb for natural chromosomes, and able to carry new genes introduced by human researchers. Ideally, researchers could integrate different genes that perform a variety of functions, including disease defense .
Alternative methods of creating transgenes , such as utilizing yeast artificial chromosomes and bacterial artificial chromosomes , lead to unpredictable problems. The genetic material introduced by these vectors not only leads to different expression levels, but the inserts also disrupt the original genome. [ 1 ] HACs differ in this regard, as they are entirely separate chromosomes. This separation from existing genetic material means that, in principle, no insertional mutants should arise. [ 2 ] This stability and accuracy makes HACs preferable to other methods such as viral vectors , YACs, and BACs. [ 3 ] HACs allow for delivery of more DNA (including promoters and copy-number variation ) than is possible with viral vectors. [ 4 ]
Yeast artificial chromosomes and bacterial artificial chromosomes were created before human artificial chromosomes, which were first developed in 1997 . HACs are useful in expression studies as gene transfer vectors , as a tool for elucidating human chromosome function, and as a method for actively annotating the human genome . [ 5 ]
HACs were first constructed de novo in 1997 by adding alpha-satellite DNA to telomeric and genomic DNA in human HT1080 cells. This resulted in an entirely new microchromosome that contained DNA of interest, as well as elements allowing it to be structurally and mitotically stable, such as telomeric and centromeric sequences. [ 6 ] Due to the difficulty of de novo HAC formation, this method has largely been abandoned.
There are currently two accepted models for the creation of human artificial chromosome vectors. The first is to create a small minichromosome by altering a natural human chromosome. This is accomplished by truncating the natural chromosome, followed by the introduction of unique genetic material via the Cre-Lox system of recombination. The second method involves the literal creation of a novel chromosome de novo . [ 7 ] Progress regarding de novo HAC formation has been limited, as many large genomic fragments will not successfully integrate into de novo vectors. [ 5 ] Another factor limiting de novo vector formation is limited knowledge of what elements are required for construction, specifically centromeric sequences. [ 2 ] Challenges involving centromeric sequences are being overcome. [ 8 ]
A 2009 study has shown additional benefits of HACs, namely their ability to stably contain extremely large genomic fragments. Researchers incorporated the 2.4 Mb dystrophin gene, in which a mutation is a key causal element of Duchenne muscular dystrophy . The resulting HAC was mitotically stable, and correctly expressed dystrophin in chimeric mice. Previous attempts at correctly expressing dystrophin have failed. Due to its large size, it has never before been successfully integrated into a vector. [ 9 ]
In 2010, a refined human artificial chromosome called 21HAC was reported. 21HAC is based on a stripped copy of human chromosome 21, producing a chromosome 5 Mb in length. Truncation of chromosome 21 resulted in a human artificial chromosome that was mitotically stable. 21HAC was also able to be transferred into cells from a variety of species (mice, chickens, humans). Using 21HAC, researchers were able to insert a herpes simplex virus thymidine kinase coding gene into tumor cells. This "suicide gene" is required to activate many antiviral medications. These targeted tumor cells were successfully and selectively killed by the antiviral drug ganciclovir in a population including healthy cells. This research opens a variety of opportunities for using HACs in gene therapy. [ 10 ]
In 2011, researchers formed a human artificial chromosome by truncating chromosome 14. Genetic material was then introduced using the Cre-Lox recombination system. This particular study focused on changes in expression levels by leaving portions of the existing genomic DNA. By leaving existing telomeric and sub-telomeric sequences, researchers were able to amplify expression levels of genes coding for erythropoietin production over 1000-fold. This work also has large gene therapy implications, as erythropoietin controls red blood cell formation. [ 11 ]
HACs have been used to create transgenic animals for use as animal models of human disease and for production of therapeutic products. [ 4 ] | https://en.wikipedia.org/wiki/Human_artificial_chromosome |
Human betaretrovirus (HBRV) , also known as Human mammary tumor virus or Mouse mammary tumor-like virus , is the human homologue of the Mouse mammary tumor virus (MMTV). The nomenclature Human betaretrovirus was introduced following characterization of infection in patients with autoimmune liver disease, suggesting the virus is neither found solely in mice nor exclusively implicated in the development of neoplastic disease. [ 1 ] [ 2 ] [ 3 ] Evidence of HBRV has been documented in humans dating back at least 4,500 years, [ 4 ] [ 5 ] and it stands as the only identified exogenous betaretrovirus affecting humans to date. [ 6 ]
The existence of this virus was suspected for decades. [ 7 ] Nucleotide sequences identifying a complete betaretrovirus provirus were first reported in human breast cancer in 2001 [ 8 ] and in lymphoid tissues of patients with autoimmune liver disease in 2003. Viral particles were isolated several years later. [ 9 ]
The HBRV encodes an approximately 9 kilobase single-stranded RNA genome , and shares significant virological similarities with MMTV. [ 1 ] [ 6 ] The human and mouse betaretrovirus are difficult to distinguish genetically, and structural proteins share 93% to 99% amino acid sequence similarity with each other and less than 35% with other betaretroviruses and the human endogenous betaretroviruses (HERV-K). [ 1 ] [ 6 ] By electron microscopy, both human betaretrovirus and MMTV have comparable morphological features and form 80-100 nm spherical and pleomorphic structures with eccentric nucleocapsid cores. [ 10 ] [ 2 ] [ 6 ]
Previously, these betaretroviruses were considered simple retroviruses encoding gag, pol and env genes, but they are now considered complex following the characterization of the regulator of MMTV expression (Rem) protein, which acts as a nuclear export factor for the unspliced RNA. [ 11 ] [ 12 ] The HBRV genome encodes five possible open reading frames (ORFs) corresponding to the Gag, protease (Pro), polymerase (Pol), envelope (Env), regulator of MMTV expression (Rem) and superantigen (Sag) proteins found in MMTV. [ 1 ] [ 6 ] [ 11 ] [ 12 ] The viral superantigen is the most variable region within the betaretrovirus genome. [ 1 ] The viral superantigen mechanism is required to stimulate lymphocyte proliferation and enable viral replication within dividing cells; demonstration of superantigen activity is used to demonstrate MMTV infection in mice. [ 13 ]
The similarity of MMTV with HBRV suggests a zoonosis from mice to humans. The discovery of HBRV in humans, dating back thousands of years, [ 5 ] indicates an interspecies transmission of the virus between mice and humans coinciding with the development of agriculture . This transmission process may have resulted in the adaptation of MMTV to humans, ultimately evolving into HBRV. [ 4 ] MMTV can infect human cells, as demonstrated in co-cultivation studies using 293 human kidney, and HeLa human cervical adenocarcinoma, and Hs578T breast epithelial cells. [ 14 ] [ 15 ]
The route of HBRV transmission in humans remains unknown. However, some evidence suggests the possibility of microdroplet transmission, as viral sequences have been found in human saliva . [ 4 ] [ 16 ] It has been suggested that HBRV may be transmitted through saliva, as the virus can potentially reach the Waldeyer's ring structures in the throat. [ 4 ] Similar to observations in mice, both betaretrovirus particles and nucleic acid have been documented in human breast milk. [ 17 ] [ 10 ] However, human milk has been shown to have a destructive effect on MMTV particles, and this route of transmission is not consistent with the epidemiological data concerning breast feeding. [ 18 ] [ 19 ] [ 20 ]
While contemporary understanding of tropism remains limited, recent studies have provided insights into HBRV's ability to infect biliary epithelial cells and replicate within lymphoid tissue. [ 6 ] [ 21 ] [ 10 ]
Human betaretrovirus has been associated with various cancers [ 4 ] and autoimmune conditions , such as primary biliary cholangitis. [ 22 ] While HBRV may be a contributing factor, it is not currently accepted as the cause of these diseases, nor as the sole agent triggering them. Other factors, such as genetic predisposition and other environmental exposures, are thought to play a contributory role in disease development. Nevertheless, several criteria used for linking environmental agents with disease have been firmly established for HBRV. [ 3 ] [ 23 ] Over-expression in human MCF7 cells of both WNT1 and FGF3 , the main integration sites (INT) of MMTV in mice, induces the synthesis of epithelial-mesenchymal transition markers, mitochondrial proteins, glycolytic enzymes, and protein-synthesis machinery. Many of these proteins are transcriptionally overexpressed in human breast cancer cells in vivo. [ 24 ]
The potential association between human mammary tumor virus (HBRV) and breast cancer has been a subject of interest for approximately 50 years since betaretrovirus particles resembling MMTV were observed in breast milk derived from close relatives of patients with breast cancer. [ 10 ] Over the past three decades, numerous studies have provided substantial support to link a human mammary tumor virus with sporadic breast cancer and more recent research has identified viral sequences of HBRV in breast cancer samples from different regions, indicating the presence of the virus in breast cancer tissues. [ 25 ] [ 26 ] [ 27 ] [ 28 ]
More than 40 studies worldwide report evidence of HBRV infection in human sporadic breast cancer tissue ranging from ~30% to 40% of patients as compared to ~2% frequency in control samples. [ 4 ] [ 23 ]
The rate of HBRV infection in ductal carcinoma in situ (DCIS) has been found to be double that in invasive forms (80%). This finding indicates that HBRV plays a role in cancer initiation rather than in cancer progression, in line with what is known from the murine model. [ 29 ] [ 30 ]
In contrast, hereditary breast carcinoma arises from etiopathogenetic factors unassociated with HBRV, and this form of cancer has a very low frequency of HBRV, ranging from 2–4%. [ 31 ] The mounting evidence regarding the potential similarity in pathogenic mechanisms between HBRV and MMTV has further strengthened the hypothesis that the virus could be relevant in understanding sporadic breast cancer development and progression. [ 4 ] [ 32 ] [ 23 ]
Human betaretrovirus (HBRV) has been extensively studied in its connection to the autoimmune liver disease, primary biliary cholangitis (PBC). [ 3 ] Various research approaches have been employed, including in vitro HBRV co-cultivation studies using biliary epithelium, the use of autoimmune biliary disease mouse models with MMTV infection and the study of patient samples. [ 3 ] These studies have provided valuable insights into the link between HBRV and PBC. For example, HBRV infection leads to the expression of autoantigens linked with the development of the anti-mitochondrial antibodies used to diagnose PBC, [ 10 ] [ 33 ] and MMTV infection in mice is also linked with mitochondrial antigen expression and antimitochondrial antibody production. [ 34 ] [ 35 ]
Using PBC patient samples, researchers have isolated HBRV and identified up to 3000 viral integration sites within the human genome, providing strong evidence of a transmissible betaretrovirus infection in patients diagnosed with PBC. [ 6 ] [ 21 ] Furthermore, HBRV insertions and betaretrovirus RNA were commonly observed at the site of disease in the biliary epithelia of patients with PBC, and also in patients with autoimmune hepatitis.
The diagnosis of human betaretrovirus infection remains a challenging task due to the lack of widely available, sensitive, and reproducible diagnostic tests. One serological ELISA assay using the HBRV Env protein was positive in 10% of breast cancer and PBC patients as compared to ~2% of healthy subjects. [ 36 ] Accordingly, this serological assay was less sensitive than the gold standard for demonstrating retroviral infection with proviral integrations. However, demonstration of genomic insertions is a research tool that is not readily adaptable for clinical use. HBRV is not readily detectable in blood by the polymerase chain reaction methodology and therefore a tissue diagnosis is required. However, this assay may be compromised by contamination. Further development of cellular immune assays using characterized HBRV Gag and Env peptides can be employed for diagnostic purposes by quantifying interferon-gamma production following stimulation of lymphocytes , providing a more sensitive assay than the ELISA. [ 37 ]
Although there is currently no approved treatment specifically targeted for human betaretrovirus infection, some studies have demonstrated efficacy of repurposed HIV antiretroviral therapy. [ 38 ] A randomized controlled trial using combination reverse transcriptase inhibitors, lamivudine and zidovudine , did not meet the study endpoints but showed a significant improvement in alkaline phosphatase , a biliary enzyme used to gauge disease activity in PBC patients. [ 39 ] Another randomized controlled trial using the combination of tenofovir, emtricitabine , and lopinavir , was stopped early due to gastrointestinal side effects. [ 40 ] However, patients able to tolerate long-term treatment demonstrated both biochemical and histological improvement. [ 41 ] [ 38 ]
The potential for immunotherapy of cancers exhibiting immunodominant betaretrovirus antigens has been studied in animal models. Using either a combination of monoclonal anti-MMTV p14 antibodies or adoptive T-cell transfer treatments, tumour growth was reduced in vivo. [ 42 ] This may have translational relevance, as related p14 antigens can be detected in benign hyperplasia patient samples predating the development of breast cancer, and in a proportion of human breast cancer samples. [ 43 ] Accordingly, the animal studies may provide a pathway for the future development of passive or active vaccination strategies to treat and possibly prevent human betaretrovirus-associated cancers. | https://en.wikipedia.org/wiki/Human_betaretrovirus |
Human biology is an interdisciplinary area of academic study that examines humans through the influences and interplay of many diverse fields such as genetics , evolution , physiology , anatomy , epidemiology , anthropology , ecology , nutrition , population genetics , and sociocultural influences. [ 1 ] [ 2 ] It is closely related to the biomedical sciences , biological anthropology and other biological fields tying in various aspects of human functionality. It was not until the 20th century that the biogerontologist Raymond Pearl , founder of the journal Human Biology , used the term "human biology" to describe a subfield distinct from biology as a whole. [ 3 ]
It is also a portmanteau term that describes all biological aspects of the human body, typically using the human body as a type organism for Mammalia , and in that context it is the basis for many undergraduate University degrees and modules. [ 4 ] [ 5 ]
Most aspects of human biology are identical or very similar to general mammalian biology. In particular, and as examples, humans:
The study of integrated human biology started in the 1920s, sparked by Charles Darwin 's theories, which were re-conceptualized by many scientists. Human attributes such as child growth and genetics came under new scrutiny, and human biology emerged as a field from these questions.
The key aspects of human biology are those ways in which humans are substantially different from other mammals. [ 6 ]
Humans have a very large brain in a head that is very large for the size of the animal. This large brain has enabled a range of unique attributes including the development of complex languages and the ability to make and use a complex range of tools . [ 7 ] [ 8 ]
The upright stance and bipedal locomotion is not unique to humans but humans are the only species to rely almost exclusively on this mode of locomotion. [ 9 ] This has resulted in significant changes in the structure of the skeleton including the articulation of the pelvis and the femur and in the articulation of the head.
In comparison with most other mammals, humans are very long lived, [ 10 ] with an average age at death in the developed world of nearly 80 years. [ 11 ] Humans also have the longest childhood of any mammal, with sexual maturity taking 12 to 16 years on average to be completed.
Humans lack fur . Although there is a residual covering of fine hair, which may be more developed in some people, and localised hair covering on the head, axillary and pubic regions, in terms of protection from cold, humans are almost naked. The reason for this development is still much debated.
The human eye can see objects in colour but is not well adapted to low light conditions. The sense of smell and of taste are present but are relatively inferior to a wide range of other mammals . Human hearing is efficient but lacks the acuity of some other mammals. Similarly human sense of touch is well developed especially in the hands where dextrous tasks are performed but the sensitivity is still significantly less than in other animals, particularly those equipped with sensory bristles such as cats .
Human biology seeks to understand, and promotes research on, humans as living beings as a scientific discipline. It makes use of various scientific methods , such as experiments and observations , to detail the biochemical and biophysical foundations of human life, and to describe and model the underlying processes. As a basic science, it provides the knowledge base for medicine. Its sub-disciplines include anatomy , cytology , histology and morphology .
The capabilities of the human brain and human dexterity in making and using tools have enabled humans to understand their own biology through scientific methods, including dissection , autopsy , and prophylactic medicine , which have, in turn, enabled humans to extend their life-span by understanding and mitigating the effects of diseases .
Understanding human biology has enabled and fostered a wider understanding of mammalian biology and by extension, the biology of all living organisms.
Human nutrition is typical of mammalian omnivorous nutrition, requiring a balanced input of carbohydrates , fats , proteins , vitamins , and minerals. However, the human diet has a few very specific requirements. These include two specific fatty acids, alpha-linolenic acid and linoleic acid , without which life is not sustainable in the medium to long term. All other fatty acids can be synthesized from dietary fats. Similarly, human life requires a range of vitamins to be present in food; if these are missing or supplied at unacceptably low levels, metabolic disorders result, which can end in death. The human metabolism is similar to that of most other mammals, except for the need for an intake of Vitamin C to prevent scurvy and other deficiency diseases. Unusually amongst mammals, a human can synthesize Vitamin D3 using natural UV light from the sun on the skin. This capability may be widespread in the mammalian world, but few other mammals share the almost naked skin of humans. The darker a human's skin, the less Vitamin D3 it can manufacture.
Human biology also encompasses all those organisms that live on or in the human body. Such organisms range from parasitic insects such as fleas and ticks , parasitic helminths such as liver flukes through to bacterial and viral pathogens . Many of the organisms associated with human biology are the specialised biome in the large intestine and the biotic flora of the skin and pharyngeal and nasal region. Many of these biotic assemblages help protect humans from harm and assist in digestion, and are now known to have complex effects on mood, and well-being.
Humans in all civilizations are social animals and use their language skills and tool making skills to communicate.
These communication skills enable civilizations to grow and allow for the production of art , literature and music , and for the development of technology . All of these are wholly dependent on the human biological specialisms.
The deployment of these skills has allowed the human race to dominate the terrestrial biome [ 12 ] to the detriment of most of the other species. | https://en.wikipedia.org/wiki/Human_biology |
A human chimera is a human with a subset of cells whose genotype is distinct from that of the body's other cells, that is, a human exhibiting genetic chimerism . In contrast, an individual where each cell contains genetic material from a human and an animal is called a human–animal hybrid , while an organism that contains a mixture of human and non-human cells would be a human-animal chimera . [ 1 ]
Some consider mosaicism to be a form of chimerism, [ 2 ] while others consider them to be distinct. [ 3 ] [ 4 ] [ 5 ]
Mosaicism involves a mutation of the genetic material in a cell, giving rise to a subset of cells that are different from the rest.
Natural chimerism results from the fusion of more than one fertilized zygote in the early stages of prenatal development . It is much rarer than mosaicism. [ 5 ]
In artificial chimerism, an individual has one cell lineage that was inherited genetically at the time of the formation of the human embryo and another that was introduced through a procedure such as organ transplantation or blood transfusion . [ 6 ] Specific types of transplants that can induce this condition include bone marrow transplants and organ transplants, as the recipient's body works to permanently incorporate the new blood stem cells.
Natural chimerism has been documented in humans in several instances.
Human-animal chimeras include humans having undergone non-human to human xenotransplantation , which is the transplantation of living cells , tissues or organs from one species to another. [ 18 ] [ 19 ]
Patient derived xenografts are created by xenotransplantation of human tumor cells into immunocompromised mice, and is a research technique frequently used in pre-clinical oncology research. [ 20 ]
Non-artificial chimerism has traditionally been considered rare due to the low number of reported cases in the medical literature. [ 28 ] However, this may be because affected individuals are often unaware of the condition to begin with. Chimerism usually produces no signs or symptoms other than a few physical ones such as hyper-pigmentation , hypo-pigmentation , Blaschko's lines , body asymmetry or heterochromia iridum (possessing two different colored eyes). [ 29 ] These signs do not necessarily mean an individual is a chimera and should be seen only as possible indications. Typically, forensic investigation or curiosity over an unexpected maternity/paternity DNA test result leads to the accidental discovery of this condition. A DNA test, usually consisting of either a swift cheek swab or a blood test, reveals the once unknown second genome, thereby identifying that individual as a chimera. [ 30 ]
The concept of a "human hermaphrodite" resulting from chimerism is largely a misconception. [ 31 ] Most intersex individuals are not chimeras, [ 32 ] [ 31 ] and most human chimeras are not observed to have intersex traits. [ 31 ] Theoretically, if a gynandromorphic human chimera were to have fully functioning male and female gonad tissue, such an individual could self-fertilize ; [ 33 ] [ 34 ] this hypothesis is backed by the fact that hermaphroditic animal species commonly reproduce in this way, and it has been observed in a rabbit. [ 35 ] However, no such case of functional self-fertilization has ever been documented in humans; [ 36 ] and it is non-existent or extremely rare in mammals, [ 37 ] [ 38 ] [ 39 ] [ 40 ] [ 41 ] especially in humans. [ 42 ] [ 43 ] [ 44 ] [ 45 ] While humans are known to have sex characteristics that diverge from typical males or typical females, these individuals fall under the social umbrella of intersex conditions and traits, and some consider the term "hermaphrodite" to be a slur when applied to them. [ 46 ] [ 47 ] [ 48 ]
On 11 July 2005, a bill known as The Human Chimera Prohibition Act was introduced into the United States Congress by Senator Samuel Brownback ; however, the bill died in Congress within the following year. The bill was introduced based on findings that science had progressed to the point where human and nonhuman species could be merged to create new forms of life. Because of this, ethical issues might arise as the line blurred between humans and other animals, and according to the bill with this blurring of lines would come a show of disrespect for human dignity. The final claim brought up in The Human Chimera Prohibition Act was that there was an increasing amount of zoonotic diseases , and that the creation of human-animal chimeras might allow these diseases to reach humans. [ 49 ]
On 22 August 2016, another bill, The Human-Animal Chimera Prohibition Act of 2016, was introduced to the United States House of Representatives by Christopher H. Smith . It identified a human-animal chimera as:
The bill would have prohibited the attempts to create a human-animal chimera, the transfer or attempt to transfer a human embryo into a nonhuman womb, the transfer or attempt to transfer a nonhuman embryo into a human womb, and the transport or receipt of an animal chimera for any purpose. Proposed penalties for violations of this bill included fines and/or imprisonment of up to 10 years. The bill was referred to the Subcommittee on Crime, Terrorism, Homeland Security, and Investigations on October 11, 2016, but died there. [ 50 ]
In the U.S., efforts into creating a chimeric entity appeared to be legal when the topic first came up. Developmental biologist Stuart Newman , a professor at New York Medical College in Valhalla, N.Y. , applied for a patent on a human-animal chimera in 1997 as a challenge to the U.S. Patent and Trademark Office and the U.S. Congress , motivated by his moral and scientific opposition to the notion that living things can be patented at all. Prior legal precedent had established that genetically engineered entities, in general, could be patented, even if they were based on beings occurring in nature. [ 51 ] After a seven-year process, Newman's patent finally received a flat rejection. The legal process had created a paper trail of arguments, giving Newman what he claimed was a victory. The Washington Post ran an article on the controversy that stated that it had raised "profound questions about the differences—and similarities—between humans and other animals, and the limits of treating animals as property." [ 51 ] | https://en.wikipedia.org/wiki/Human_chimera |
Human chorionic gonadotropin ( hCG ) is a hormone for the maternal recognition of pregnancy produced by trophoblast cells that surround a growing embryo (initially the syncytiotrophoblast ), which eventually forms the placenta after implantation . [ 1 ] [ 2 ] The presence of hCG is detected in some pregnancy tests (hCG pregnancy strip tests). Some cancerous tumors produce this hormone; therefore, elevated levels measured when the patient is not pregnant may lead to a cancer diagnosis and, if high enough, to paraneoplastic syndromes . However, it is unknown whether this production is a contributing cause or an effect of carcinogenesis . The pituitary analog of hCG, known as luteinizing hormone (LH), is produced in the pituitary gland of males and females of all ages. [ 1 ] [ 3 ]
Beta-hCG is initially secreted by the syncytiotrophoblast . [ 1 ]
Human chorionic gonadotropin is a glycoprotein composed of 237 amino acids with a molecular mass of 36.7 kDa , of which approximately 14.5 kDa is αhCG and 22.2 kDa is βhCG. [ 4 ]
It is heterodimeric , with an α (alpha) subunit identical to that of luteinizing hormone (LH), follicle-stimulating hormone (FSH), thyroid-stimulating hormone (TSH), and a β (beta) subunit that is unique to hCG.
The two subunits create a small hydrophobic core surrounded by a high surface area-to-volume ratio: 2.8 times that of a sphere. The vast majority of the outer amino acids are hydrophilic . [ 7 ]
beta-hCG is mostly similar to beta-LH , with the exception of a carboxy-terminal peptide (beta-CTP) containing four glycosylated serine residues that is responsible for hCG's longer half-life. [ 8 ]
Human chorionic gonadotropin interacts with the LHCG receptor of the ovary and promotes the maintenance of the corpus luteum for the maternal recognition of pregnancy at the beginning of pregnancy . This allows the corpus luteum to secrete the hormone progesterone during the first trimester. Progesterone enriches the uterus with a thick lining of blood vessels and capillaries so that it can sustain the growing fetus . [ 9 ]
It has been hypothesized that hCG may be a placental link for the development of local maternal immunotolerance . [ 10 ] For example, hCG-treated endometrial cells induce an increase in T cell apoptosis (dissolution of T cells ). These results suggest that hCG may be a link in the development of peritrophoblastic immune tolerance, and may facilitate the trophoblast invasion, which is known to expedite fetal development in the endometrium. [ 11 ] It has also been suggested that hCG levels are linked to the severity of morning sickness or hyperemesis gravidarum in pregnant women. [ 12 ]
Because of its similarity to LH, hCG can also be used clinically to induce ovulation in the ovaries as well as testosterone production in the testes. As the most abundant biological source is in women who are presently pregnant, some organizations collect urine from pregnant women to extract hCG for use in fertility treatment. [ citation needed ]
Human chorionic gonadotropin also plays a role in cellular differentiation/proliferation and may activate apoptosis. [ citation needed ]
Naturally, it is produced in the human placenta by the syncytiotrophoblast . [ 1 ]
Like any other gonadotropins , it can be extracted from the urine of pregnant women or produced from cultures of genetically modified cells using recombinant DNA technology.
In Pubergen , Pregnyl, Follutein, Profasi, Choragon and Novarel , it is extracted from the urine of pregnant women. In Ovidrel , it is produced with recombinant DNA technology. [ 13 ]
Three major forms of hCG are produced by humans, with each having distinct physiological roles. These include regular hCG, hyperglycosylated hCG, and the free beta-subunit of hCG. Degradation products of hCG have also been detected, including nicked hCG, hCG missing the C-terminal peptide from the beta-subunit, and free alpha-subunit, which has no known biological function. Some hCG is also made by the pituitary gland with a pattern of glycosylation that differs from placental forms of hCG. [ 1 ]
Regular hCG is the main form of hCG associated with the majority of pregnancy and in non-invasive molar pregnancies. This is produced in the trophoblast cells of the placental tissue. Hyperglycosylated hCG is the main form of hCG during the implantation phase of pregnancy, with invasive molar pregnancies, and with choriocarcinoma . [ 14 ]
Gonadotropin preparations of hCG can be produced for pharmaceutical use from animal or synthetic sources. [ citation needed ]
Blood or urine tests measure hCG. These can be pregnancy tests . hCG-positive can indicate an implanted blastocyst and mammalian embryogenesis or can be detected for a short time following childbirth or pregnancy loss. Tests can be done to diagnose and monitor germ cell tumors and gestational trophoblastic diseases .
Concentrations are commonly reported in milli-international units (thousandths of an international unit) per milliliter (mIU/mL). The international unit of hCG was originally established in 1938 and was redefined in 1964 and in 1980. [ 15 ] At the present time, 1 international unit is equal to approximately 2.35×10 −12 moles, [ 16 ] or about 6×10 −8 grams. [ 17 ]
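Using the equivalences just stated (1 IU ≈ 2.35×10⁻¹² mol ≈ 6×10⁻⁸ g of hCG), a reported concentration in mIU/mL can be converted to molar or mass units. A minimal sketch in Python (the function names are illustrative, not a standard library):

```python
MOL_PER_IU = 2.35e-12   # approximate moles of hCG per international unit
GRAM_PER_IU = 6e-8      # approximate grams of hCG per international unit

def miu_per_ml_to_molar(miu_per_ml):
    """Convert an hCG concentration from mIU/mL to mol/L.

    1 mIU/mL equals 1 IU/L (the factors of 1000 cancel),
    so the conversion is a single multiplication.
    """
    return miu_per_ml * MOL_PER_IU

def miu_per_ml_to_g_per_l(miu_per_ml):
    """Convert an hCG concentration from mIU/mL to g/L."""
    return miu_per_ml * GRAM_PER_IU
```

For example, a level of 1500 mIU/mL corresponds to roughly 3.5×10⁻⁹ mol/L, i.e., a few nanomolar.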
It is also possible to test for hCG to have an approximation of the gestational age. [ 18 ]
Most tests employ a monoclonal antibody, which is specific to the β-subunit of hCG (β-hCG). This procedure is employed to ensure that tests do not produce false positives by confusing hCG with LH and FSH. (The latter two are always present at varying levels in the body, whereas the presence of hCG almost always indicates pregnancy.) [ citation needed ]
Many hCG immunoassays are based on the sandwich principle , which uses antibodies to hCG labeled with an enzyme or a conventional or luminescent dye.
Pregnancy urine dipstick tests are based on the lateral flow technique.
The hCG levels grow exponentially after conception and implantation. [ 21 ] hCG levels typically peak around weeks 8–11 of pregnancy and are generally higher in the first trimester compared to the second trimester.
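The exponential rise can be sketched numerically. The 48-hour doubling time used below is an illustrative assumption for early pregnancy, not a figure stated in this article; real doubling times vary between individuals and slow as pregnancy progresses:

```python
def project_hcg(initial_miu_per_ml, hours_elapsed, doubling_time_h=48.0):
    """Project an hCG level under simple exponential growth.

    The 48-hour doubling time is an illustrative assumption only;
    this model does not capture the plateau and decline after the
    peak around weeks 8-11.
    """
    return initial_miu_per_ml * 2.0 ** (hours_elapsed / doubling_time_h)
```

Under this assumption, a level of 100 mIU/mL would reach about 200 mIU/mL after two days and about 400 mIU/mL after four.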
The following is a list of serum hCG levels:
LMP is the last menstrual period dated from the first day of the last menstrual period
If a pregnant woman has serum hCG levels higher than expected, she may be experiencing a multiple pregnancy or an abnormal uterine growth. Falling hCG levels may indicate the possibility of a miscarriage, while levels rising more slowly than expected may indicate an ectopic pregnancy . [ 22 ]
The ability to quantitate the βhCG level is useful in monitoring germ cell and trophoblastic tumors , follow-up care after miscarriage , and diagnosis of and follow-up care after treatment of ectopic pregnancy . The lack of a visible fetus on vaginal ultrasound after βhCG levels reach 1500 mIU/mL is strongly indicative of an ectopic pregnancy. [ 23 ] Still, even an hCG over 2000 IU/L does not necessarily exclude the presence of a viable intrauterine pregnancy in such cases. [ 24 ]
When used as pregnancy tests, quantitative blood tests and the most sensitive urine tests usually detect hCG between 6 and 12 days after ovulation. [ 25 ] It must be taken into account, however, that total hCG levels may vary over a very wide range within the first 4 weeks of gestation, which can lead to false results during this period. [ 26 ] A rise of 35% over 48 hours has been proposed as the minimal rise consistent with a viable intrauterine pregnancy. [ 24 ]
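The 35%-per-48-hours threshold implies a maximum doubling time, which a short calculation makes concrete. This is illustrative arithmetic only, under the assumption of steady exponential growth between blood draws; the helper function and its parameters are hypothetical and not a clinical tool.

```python
import math

# Illustrative only: assumes steady exponential growth between draws.
MIN_RISE_48H = 1.35   # minimal viable 48-hour rise factor (35%)

# Doubling time implied by the slowest viable rise (~111 hours):
doubling_time_h = 48 * math.log(2) / math.log(MIN_RISE_48H)

def projected_level(initial, elapsed_hours, rise_48h=MIN_RISE_48H):
    """Project a serum hCG level (hypothetical helper), assuming the
    given 48-hour rise factor applies continuously."""
    return initial * rise_48h ** (elapsed_hours / 48)
```

For example, a level of 100 mIU/mL rising at the minimal viable rate would be projected to reach about 182 mIU/mL after 96 hours; a measured value well below that projection would prompt closer follow-up.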
Gestational trophoblastic disease like hydatidiform moles ("molar pregnancy") or choriocarcinoma may produce high levels of βhCG due to the presence of syncytiotrophoblasts , part of the villi that make up the placenta, and despite the absence of an embryo. This, as well as several other conditions, can lead to elevated hCG readings in the absence of pregnancy. [ citation needed ]
hCG levels are also a component of the triple test , a screening test for certain fetal chromosomal abnormalities/birth defects. High hCG levels in the maternal serum could suggest Down syndrome , potentially due to continued hCG production by the placenta beyond the first trimester. [ 27 ]
A study of 32 normal pregnancies came to the result that a gestational sac of 1–3 mm was detected at a mean hCG level of 1150 IU/L (range 800–1500), a yolk sac was detected at a mean level of 6000 IU/L (range 4500–7500) and fetal heartbeat was visible at a mean hCG level of 10,000 IU/L (range 8650–12,200). [ 28 ]
Human chorionic gonadotropin can be used as a tumor marker , [ 29 ] as its β subunit is secreted by some cancers including seminoma , choriocarcinoma , teratoma with elements of choriocarcinoma , other germ cell tumors , hydatidiform mole , and islet cell tumor . For this reason, a positive result in males can be a test for testicular cancer . The normal range for men is 0–5 mIU/mL. Combined with alpha-fetoprotein , β-HCG is an excellent tumor marker for the monitoring of germ cell tumors . [ 30 ]
Human chorionic gonadotropin injection is extensively used for final maturation induction in lieu of luteinizing hormone . In the presence of one or more mature ovarian follicles, ovulation can be triggered by the administration of HCG. As ovulation happens between 38 and 40 hours after a single HCG injection, [ 31 ] procedures can be scheduled to take advantage of this timing, [ 32 ] [ unreliable medical source? ] such as intrauterine insemination or sexual intercourse. Patients who undergo IVF generally also receive HCG to trigger the ovulation process, but have an oocyte retrieval performed about 34 to 36 hours after injection, a few hours before the eggs would actually be released from the ovary. [ citation needed ]
As hCG supports the corpus luteum , administration of hCG is used in certain circumstances to enhance the production of progesterone .
Several vaccines against human chorionic gonadotropin (hCG) for the prevention of pregnancy are currently in clinical trials. [ 33 ]
In males, hCG injections are used to stimulate the Leydig cells to synthesize testosterone . [ 34 ] The intratesticular testosterone is necessary for spermatogenesis from the sertoli cells . Typical medical uses for hCG in males include treating certain types of hypogonadism (either as monotherapy, or, more commonly, in combination with exogenous testosterone ), as well as to either treat or prevent infertility, for example, during testosterone replacement therapy hCG is often used to restore or maintain fertility and prevent testicular atrophy. [ 35 ] [ 36 ]
In the case of female patients treated with HCG Pubergen, Pregnyl: [ citation needed ] (a) Infertile patients undergoing medically assisted reproduction (especially those requiring in vitro fertilization ) often suffer from tubal abnormalities, so after treatment with this drug they may experience more ectopic pregnancies . Early ultrasound confirmation at the beginning of a pregnancy (to see whether the pregnancy is intrauterine) is therefore crucial. Pregnancies that occur after treatment with this drug carry a higher risk of multiple pregnancy . Female patients who have thrombosis, severe obesity, or thrombophilia should not be prescribed this medicine, as they have a higher risk of arterial or venous thromboembolic events during or after treatment with HCG Pubergen, Pregnyl. (b) Female patients who have been treated with this medicine are usually more prone to pregnancy losses. [ citation needed ]
In the case of male patients: prolonged treatment with HCG Pubergen, Pregnyl is known to regularly lead to increased androgen production. Therefore, patients who have overt or latent cardiac failure, hypertension, renal dysfunction, migraines, or epilepsy might not be allowed to start using this medicine or may require a lower dose of HCG Pubergen, Pregnyl. The drug should be used with extreme caution in the treatment of prepubescent adolescents to reduce the risk of precocious sexual development or premature epiphyseal closure; the skeletal maturation of such patients should be closely and regularly monitored. [ citation needed ]
Both male and female patients who have the following medical conditions must not start a treatment with HCG Pubergen, Pregnyl: (1) Hypersensitivity to this drug or to any of its main ingredients. (2) Known or possible androgen-dependent tumors, for example male breast carcinoma or prostatic carcinoma.
HCG is included in some sports' banned substances lists.
When exogenous AAS (Anabolic Androgenic Steroids) are put into the male body, natural negative-feedback loops cause the body to shut down its own production of testosterone via shutdown of the hypothalamic-pituitary-gonadal axis ( HPGA ). This causes testicular atrophy, among other things. HCG is commonly used during and after steroid cycles to maintain and restore testicular size as well as normal testosterone production. [ 37 ]
High levels of AASs, which mimic the body's natural testosterone, trigger the hypothalamus to shut down its production of gonadotropin-releasing hormone (GnRH). Without GnRH, the pituitary gland stops releasing luteinizing hormone (LH). LH normally travels from the pituitary via the blood stream to the testes, where it triggers the production and release of testosterone. Without LH, the testes shut down their production of testosterone. [ 38 ] In males, HCG helps restore and maintain testosterone production in the testes by mimicking LH and triggering the production and release of testosterone. [ citation needed ]
Professional athletes who have tested positive for HCG have been temporarily banned from their sport, including a 50-game ban from MLB for Manny Ramirez in 2009 [ 39 ] and a 4-game ban from the NFL for Brian Cushing for a positive urine test for HCG. [ 40 ] Mixed martial arts fighter Dennis Siver was fined $19,800 and suspended for 9 months after testing positive following his bout at UFC 168 . [ 41 ] Jurickson Profar tested positive for the substance and was suspended for 80 games on March 31, 2025. [ 42 ]
British endocrinologist Albert T. W. Simeons proposed HCG as an adjunct to an ultra-low-calorie weight-loss diet (fewer than 500 calories). [ 43 ] Simeons, while studying pregnant women in India on a calorie-deficient diet, and obese boys with pituitary issues ( Frölich's syndrome ) treated with low-dose HCG, observed that both lost fat rather than lean (muscle) tissue. [ 43 ] He reasoned that in the former case HCG must be programming the hypothalamus to protect the developing fetus by promoting mobilization and consumption of abnormal, excessive adipose deposits. In 1954, Simeons published a book entitled Pounds and Inches , designed to combat obesity. Practicing at Salvator Mundi International Hospital in Rome, Italy, he recommended low-dose daily HCG injections (125 IU) in combination with a customized ultra-low-calorie (500 cal/day, high-protein, low-carbohydrate/fat) diet, which was supposed to result in a loss of adipose tissue without loss of lean tissue. [ 43 ]
Other researchers did not find the same results when attempting experiments to confirm Simeons' conclusions, and in 1976 in response to complaints the FDA required Simeons and others to include the following disclaimer on all advertisements: [ 44 ]
These weight reduction treatments include the injection of HCG, a drug which has not been approved by the Food and Drug Administration as safe and effective in the treatment of obesity or weight control. There is no substantial evidence that HCG increases weight loss beyond that resulting from caloric restriction, that it causes a more attractive or "normal" distribution of fat, or that it decreases the hunger and discomfort associated with calorie-restrictive diets.
There was a resurgence of interest in the "HCG diet" following promotion by Kevin Trudeau , who was banned from making HCG diet weight-loss claims by the U.S. Federal Trade Commission in 2008, and eventually jailed over such claims. [ 45 ] [ 46 ]
A 1976 study in the American Journal of Clinical Nutrition [ 47 ] concluded that HCG is not more effective as a weight-loss aid than dietary restriction alone. [ 48 ]
A 1995 meta-analysis found that studies supporting HCG for weight loss were of poor methodological quality and concluded that "there is no scientific evidence that HCG is effective in the treatment of obesity; it does not bring about weight-loss or fat-redistribution, nor does it reduce hunger or induce a feeling of well-being". [ 49 ]
On November 15, 2016, the American Medical Association (AMA) passed policy that "The use of human chorionic gonadotropin (HCG) for weight loss is inappropriate." [ 50 ]
There is no scientific evidence that HCG is effective in the treatment of obesity. The meta-analysis found insufficient evidence for the claims that HCG alters fat distribution, reduces hunger, or induces a feeling of well-being. The authors stated "…the use of HCG should be regarded as an inappropriate therapy for weight reduction…" In the authors' opinion, "Pharmacists and physicians should be alert on the use of HCG for Simeons therapy. The results of this meta-analysis support a firm standpoint against this improper indication. Restraints on physicians practicing this therapy can be based on our findings."
According to the American Society of Bariatric Physicians, no new clinical trials have been published since the definitive 1995 meta-analysis. [ 51 ]
The scientific consensus is that any weight loss reported by individuals on an "HCG diet" may be attributed entirely to the fact that such diets prescribe calorie intake of between 500 and 1,000 calories per day, substantially below recommended levels for an adult, to the point that this may risk health effects associated with malnutrition. [ 52 ]
Controversy about, and shortages [ 53 ] of, injected HCG for weight loss have led to substantial Internet promotion of " homeopathic HCG" for weight control. The ingredients in these products are often obscure, but if prepared from true HCG via homeopathic dilution, they contain either no HCG at all or only trace amounts. Moreover, it is highly unlikely that oral HCG is bioavailable, since digestive protease enzymes and hepatic metabolism render peptide-based molecules (such as insulin and human growth hormone) biologically inert. HCG can likely only enter the bloodstream through injection. [ citation needed ]
The United States Food and Drug Administration has stated that over-the-counter products containing HCG are fraudulent and ineffective for weight loss. They are also not protected as homeopathic drugs and have been deemed illegal substances. [ 54 ] HCG is classified as a prescription drug in the United States and it has not been approved for over-the-counter sales by the FDA as a weight loss product or for any other purposes, and therefore neither HCG in its pure form nor any preparations containing HCG may be sold legally in the country except by prescription. [ 55 ] In December 2011, FDA and FTC started to take actions to pull unapproved HCG products from the market. [ 55 ] In the aftermath, some suppliers started to switch to "hormone-free" versions of their weight loss products, where the hormone is replaced with an unproven mixture of free amino acids [ 56 ] or where radionics is used to transfer the "energy" to the final product. [ citation needed ]
As of December 6, 2011 [update] , the United States Food and Drug Administration has prohibited the sale of homeopathic and over-the-counter hCG diet products and declared them fraudulent. [ 55 ] [ 57 ] [ 58 ]
Catholic Bishops in Kenya [ 59 ] are among those who have spread a conspiracy theory [ 60 ] asserting that HCG forms part of a covert sterilization program, forcing denials from the Kenyan government. [ 59 ]
In order to induce a stronger immune response, some versions of human chorionic gonadotropin-based anti-fertility vaccines were designed as conjugates of the β subunit of HCG covalently linked to tetanus toxoid . [ 33 ] [ 61 ] It was alleged that a non-conjugated tetanus vaccine used in developing countries was laced with a human chorionic gonadotropin-based anti-fertility drug and was distributed as a means of mass sterilization . [ 62 ] This charge has been vigorously denied by the World Health Organization (WHO) and UNICEF . [ 63 ] Others have argued that an hCG-laced vaccine could not possibly be used for sterilization, since the effects of the anti-fertility vaccines are reversible (requiring booster doses to maintain infertility) and a non-conjugated vaccine is likely to be ineffective. [ 64 ] Finally, independent testing of the tetanus vaccine by Kenya's health authorities revealed no traces of the human chorionic gonadotropin hormone. [ 65 ] | https://en.wikipedia.org/wiki/Human_chorionic_gonadotropin |
The human climate niche is the ensemble of climate conditions that have sustained human life and human activities, like agriculture, around the globe for the last millennia. The human climate niche is estimated by calculating human population density with respect to mean annual temperature. [ 1 ] [ 2 ] The distribution of human population as a function of mean annual temperature is bimodal, with one mode at ~15 °C and another at ~20 to 25 °C. [ 2 ] The crops and livestock required to sustain the human population are also limited to similar niche conditions . Given the rise in mean global temperatures , the human population is projected to experience climate conditions beyond the human climate niche. Some projections show that, considering temperature and demographic changes, 2.0 and 3.7 billion people will live outside the niche by 2030 and 2090, respectively. [ 2 ]
Human cloning is the creation of a genetically identical copy of a human. The term is generally used to refer to artificial human cloning , which is the reproduction of human cells and tissue . It does not refer to the natural conception and delivery of identical twins . The possibilities of human cloning have raised controversies . These ethical concerns have prompted several nations to pass laws regarding human cloning.
Two commonly discussed types of human cloning are therapeutic cloning and reproductive cloning .
Therapeutic cloning would involve cloning cells from a human for use in medicine and transplants. It is an active area of research, but is not in medical practice anywhere in the world. Two common methods of therapeutic cloning that are being researched are somatic-cell nuclear transfer and (more recently) pluripotent stem cell induction .
Reproductive cloning would involve making an entire cloned human, instead of just specific cells or tissues.
Although the possibility of cloning humans had been the subject of speculation for much of the 20th century, scientists and policymakers began to take the prospect seriously in 1969. J. B. S. Haldane was the first to introduce the idea of human cloning, for which he used the terms "clone" and "cloning", [ 1 ] which had been used in agriculture since the early 20th century. In his speech on "Biological Possibilities for the Human Species of the Next Ten Thousand Years" at the Ciba Foundation Symposium on Man and his Future in 1963, he said: [ 2 ]
It is extremely hopeful that some human cell lines can be grown on a medium of precisely known chemical composition. Perhaps the first step will be the production of a clone from a single fertilized egg, as in Brave New World ...
Assuming that cloning is possible, I expect that most clones would be made from people aged at least fifty, except for athletes and dancers, who would be cloned younger. They would be made from people who were held to have excelled in a socially acceptable accomplishment...
Nobel Prize-winning geneticist Joshua Lederberg advocated cloning and genetic engineering in an article in The American Naturalist in 1966 and again, the following year, in The Washington Post . [ 3 ] He sparked a debate with conservative bioethicist Leon Kass , who wrote at the time that "the programmed reproduction of man will, in fact, dehumanize him." Another Nobel Laureate , James D. Watson , publicized the potential and the perils of cloning in his Atlantic Monthly essay, "Moving Toward the Clonal Man", in 1971. [ 4 ]
With the cloning of a sheep known as Dolly in 1996 by somatic cell nuclear transfer (SCNT), the idea of human cloning became a hot debate topic. [ 5 ] Many nations outlawed it, while a few scientists promised to make a clone within the next few years. The first hybrid human clone was created in November 1998, by Advanced Cell Technology . It was created using SCNT; a nucleus was taken from a man's leg cell and inserted into a cow's egg from which the nucleus had been removed, and the hybrid cell was cultured and developed into an embryo . The embryo was destroyed after 12 days. [ 6 ]
In 2004 and 2005, Hwang Woo-suk , a professor at Seoul National University , published two separate articles in the journal Science claiming to have successfully harvested pluripotent, embryonic stem cells from a cloned human blastocyst using SCNT techniques. Hwang claimed to have created eleven different patient-specific stem cell lines. This would have been the first major breakthrough in human cloning. [ 7 ] However, in 2006 Science retracted both of his articles on account of clear evidence that much of his data from the experiments was fabricated. [ 8 ]
In January 2008, Dr. Andrew French and Samuel Wood of the biotechnology company Stemagen announced that they successfully created the first five mature human embryos using SCNT. In this case, each embryo was created by taking a nucleus from a skin cell (donated by Wood and a colleague) and inserting it into a human egg from which the nucleus had been removed. The embryos were developed only to the blastocyst stage, at which point they were studied in processes that destroyed them. Members of the lab said that their next set of experiments would aim to generate embryonic stem cell lines; these are the "holy grail" that would be useful for therapeutic or reproductive cloning. [ 9 ] [ 10 ]
In 2011, scientists at the New York Stem Cell Foundation announced that they had succeeded in generating embryonic stem cell lines, but their process involved leaving the oocyte 's nucleus in place, resulting in triploid cells, which would not be useful for cloning. [ 11 ] [ 12 ] [ 13 ]
In 2013, a group of scientists led by Shoukhrat Mitalipov published the first report of embryonic stem cells created using SCNT. [ 14 ] In this experiment, the researchers developed a protocol for using SCNT in human cells, which differs slightly from the one used in other organisms. Four embryonic stem cell lines from human fetal somatic cells were derived from those blastocysts. All four lines were derived using oocytes from the same donor, ensuring that all mitochondrial DNA inherited was identical. [ 11 ] A year later, a team led by Robert Lanza at Advanced Cell Technology reported that they had replicated Mitalipov's results and further demonstrated the effectiveness by cloning adult cells using SCNT. [ 5 ] [ 15 ]
In 2018, the first successful cloning of primates using SCNT was reported with the birth of two live female clones, crab-eating macaques named Zhong Zhong and Hua Hua . [ 16 ] [ 17 ]
In somatic cell nuclear transfer ("SCNT"), the nucleus of a somatic cell is taken from a donor and transplanted into a host egg cell , which had its own genetic material removed previously, making it an enucleated egg. After the donor somatic cell genetic material is transferred into the host oocyte with a micropipette, the somatic cell genetic material is fused with the egg using an electric current. Once the two cells have fused, the new cell can be permitted to grow in a surrogate or artificially . [ 18 ] This is the process that was used to successfully clone Dolly the sheep (see § History ). [ 5 ] The technique, now refined, has indicated that it was possible to replicate cells and reestablish pluripotency, or "the potential of an embryonic cell to grow into any one of the numerous different types of mature body cells that make up a complete organism". [ 19 ]
Creating induced pluripotent stem cells ("iPSCs") is a long and inefficient process. Pluripotency refers to a stem cell that has the potential to differentiate into any of the three germ layers : endoderm (interior stomach lining, gastrointestinal tract, the lungs), mesoderm (muscle, bone, blood, urogenital), or ectoderm (epidermal tissues and nervous tissue). [ 20 ] A specific set of genes, often called "reprogramming factors", are introduced into a specific adult cell type. These factors send signals in the mature cell that cause the cell to become a pluripotent stem cell. This process is heavily studied, and new techniques to improve the induction process are discovered frequently.
Depending on the method used, reprogramming of adult cells into iPSCs for implantation could have severe limitations in humans. If a virus is used as a reprogramming factor for the cell, cancer-causing genes called oncogenes may be activated . These cells would appear as rapidly dividing cancer cells that do not respond to the body's natural cell signaling process. However, in 2008 scientists discovered a technique that could remove the presence of these oncogenes after pluripotency induction, thereby increasing the potential use of iPSC in humans. [ 21 ]
Both the processes of SCNT and iPSCs have benefits and deficiencies. Historically, reprogramming methods were better studied than SCNT-derived embryonic stem cells (ESCs). [ 11 ] However, more recent studies have put more emphasis on developing new procedures for SCNT-ESCs. The major advantage of SCNT over iPSCs at this time is the speed with which cells can be produced. iPSC derivation takes several months, while SCNT would take a much shorter time, which could be important for medical applications. New studies are working to improve the process of iPSC in terms of both speed and efficiency with the discovery of new reprogramming factors in oocytes. [ citation needed ] Another advantage SCNT could have over iPSCs is its potential to treat mitochondrial disease , as it uses a donor oocyte. [ 11 ] No other advantages are known at this time in using stem cells derived from one method over stem cells derived from the other. [ 22 ]
Work on cloning techniques has advanced understanding of developmental biology in humans. Observing human pluripotent stem cells grown in culture provides great insight into human embryo development , which otherwise cannot be seen. Scientists are now able to better define steps of early human development. Studying signal transduction along with genetic manipulation within the early human embryo has the potential to provide answers to many developmental diseases and defects. Many human-specific signaling pathways have been discovered by studying human embryonic stem cells. Studying developmental pathways in humans has given developmental biologists more evidence toward the hypothesis that developmental pathways are conserved throughout species. [ 23 ]
iPSCs and cells created by SCNT are useful for research into the causes of disease, and as model systems used in drug discovery . [ 24 ] [ 25 ]
Cells produced with SCNT, or iPSCs, could eventually be used in stem cell therapy , [ 26 ] or to create organs to be used in transplantation, known as regenerative medicine . Stem cell therapy is the use of stem cells to treat or prevent a disease or condition. Bone marrow transplantation is a widely used form of stem cell therapy. [ 27 ] No other forms of stem cell therapy are in clinical use at this time. Research is underway to potentially use stem cell therapy to treat heart disease , diabetes , and spinal cord injuries . [ 28 ] [ 29 ] Regenerative medicine is not in clinical practice, but is heavily researched for its potential uses. This type of medicine would allow for autologous transplantation, thus removing the risk of organ transplant rejection by the recipient. [ 30 ] For instance, a person with liver disease could potentially have a new liver grown using their same genetic material and transplanted to replace the damaged liver. [ 31 ] In current research, human pluripotent stem cells have shown promise as a reliable source for generating human neurons, demonstrating the potential for regenerative medicine in brain and neural injuries. [ 32 ]
In bioethics , the ethics of cloning refers to a variety of ethical positions regarding the practice and possibilities of cloning , especially human cloning. While many of these views are religious in origin, for instance relating to Christian views of procreation and personhood, [ 33 ] the questions raised by cloning engage secular perspectives as well, particularly the concept of identity. [ 34 ]
Advocates support development of therapeutic cloning in order to generate tissues and whole organs to treat patients who otherwise cannot obtain transplants, [ 35 ] to avoid the need for immunosuppressive drugs , [ 36 ] and to stave off the effects of aging. [ 37 ] Advocates for reproductive cloning believe that parents who cannot otherwise procreate should have access to the technology. [ 38 ]
Opposition to therapeutic cloning mainly centers around the status of embryonic stem cells , which has connections with the abortion debate . [ 39 ] The moral argument put forward is based on the notion that embryos deserve protection from the moment of conception, because it is at this precise moment that a new human entity emerges as a unique individual. [ 40 ] Since it is deemed unacceptable to sacrifice human lives for any purpose, the argument asserts that the destruction of embryos for research purposes is not justifiable. [ 41 ]
Some opponents of reproductive cloning have concerns that technology is not yet developed enough to be safe – for example, the position of the American Association for the Advancement of Science as of 2014, [update] [ 42 ] while others emphasize that reproductive cloning could be prone to abuse (leading to the generation of humans whose organs and tissues would be harvested), [ 43 ] [ 44 ] and have concerns about how cloned individuals could integrate with families and with society at large. [ 45 ] [ 46 ]
Members of religious groups are divided. Some Christian theologians perceive the technology as usurping God's role in creation and, to the extent embryos are used, destroying a human life; [ 33 ] others see no inconsistency between Christian tenets and cloning's positive and potentially life-saving benefits. [ 47 ] [ 48 ]
There have been consistent calls in Canada to ban human reproductive cloning since the 1993 Report of the Royal Commission on New Reproductive Technologies. Polls have indicated that an overwhelming majority of Canadians oppose human reproductive cloning, though the regulation of human cloning continues to be a significant national and international policy issue. The notion of "human dignity" is commonly used to justify cloning laws. The basis for this justification is that reproductive human cloning necessarily infringes notions of human dignity. [ 58 ] [ 59 ] [ 60 ] [ 61 ]
In the Eleventh Amendment to the Criminal Law, which came into effect on March 1, 2021, an additional provision was added to Article 336, which stipulates that "implanting gene-edited or cloned human embryos into human or animal bodies, or implanting gene-edited or cloned animal embryos into human bodies, if the circumstances are serious, shall be sentenced to fixed-term imprisonment of not more than three years or criminal detention and a fine; if the circumstances are especially serious, the sentence shall be fixed-term imprisonment of not less than three years but not more than seven years and a fine." [ 63 ]
Albania , Andorra , Bosnia and Herzegovina , Bulgaria , Croatia , Cyprus , Czech Republic , Denmark , Estonia , Finland , France , Georgia , Greece , Hungary , Iceland , Latvia , Liechtenstein , Lithuania , Moldova , Montenegro , North Macedonia , Norway , Portugal , Romania , San Marino , Serbia , Slovakia , Slovenia , Spain , Switzerland , Turkey
India has already succeeded in mammalian cloning. [ 76 ]
In Morocco, all research on human embryos or fetuses is forbidden, as is the conception of human embryos or fetuses for research or experimental purposes, in accordance with article 7 of Dahir no. 1–19–50. [ 79 ]
The first license was granted on 11 August 2004, to researchers at the University of Newcastle to allow them to investigate treatments for diabetes , Parkinson's disease and Alzheimer's disease . [ 93 ] The Human Fertilisation and Embryology Act 2008 , a major review of fertility legislation, repealed the 2001 Cloning Act by making amendments of similar effect to the 1990 Act. The 2008 Act also allows experiments on hybrid human-animal embryos. [ 94 ]
In 1998, 2001, 2004, 2005, 2007 and 2009, the United States Congress voted on whether to ban all human cloning, both reproductive and therapeutic ( Stem Cell Research Enhancement Act ). [ 99 ] Divisions in the Senate , or an eventual veto from the sitting President ( George W. Bush in 2005 and 2007), over therapeutic cloning prevented either competing proposal (a ban on both forms or on reproductive cloning only) from being passed into law. On 10 March 2010, a bill (HR 4808) was introduced with a section banning federal funding for human cloning. [ 100 ] Such a law, if passed, would not have prevented research from occurring in private institutions (such as universities) that have both private and federal funding. However, the 2010 bill was not passed.
Ten states, California, Connecticut, Illinois, Iowa, Maryland, Massachusetts, Missouri, Montana, New Jersey and Rhode Island, have "clone and kill" laws that prevent cloned embryo implantation for childbirth, but allow embryos to be destroyed. [ 101 ]
The Patients First Act of 2017 (HR 2918, 115th Congress) aims to promote stem cell research, using cells that are "ethically obtained", that could contribute to a better understanding of diseases and therapies, as well as promote the "derivation of pluripotent stem cell lines without the creation of human embryos". [ 102 ]
Science fiction has made frequent use of cloning, most commonly human cloning, because it raises controversial questions of identity. [ 112 ] [ 113 ] Humorous fiction, such as Multiplicity (1996) [ 114 ] and the Maxwell Smart feature The Nude Bomb (1980), has featured human cloning. [ 115 ] A recurring sub-theme of cloning fiction is the use of clones as a supply of organs for transplantation . Robin Cook's 1997 novel Chromosome 6 , Michael Bay's The Island , and Nancy Farmer's 2002 novel House of the Scorpion [ 116 ] are examples of this; Chromosome 6 also features genetic manipulation and xenotransplantation . [ 117 ] The Star Wars saga makes use of millions of human clones to form the Grand Army of the Republic that participated in the Clone Wars . The series Orphan Black follows human clones' stories and experiences as they deal with issues and react to being the property of a chain of scientific institutions. [ 118 ] In the 2019 horror film Us , the entirety of the United States' population is secretly cloned. Years later, these clones (known as The Tethered) reveal themselves to the world by successfully pulling off a mass genocide of their counterparts. [ 119 ] [ 120 ]
In the 2005 novel Never Let Me Go , Kazuo Ishiguro crafts a subtle exploration of the ethical complications of cloning humans for medical advancement and longevity. | https://en.wikipedia.org/wiki/Human_cloning |
Human composting is a process for the final disposition of human remains in which microbes convert a deceased body into compost . In the early 21st century, a form of human composting that contains and accelerates the process was legalized in several U.S. states as natural organic reduction . [ 1 ] [ 2 ]
In the 21st century, several factors led to development of human composting as one of several proposals for alternative deathcare . [ 3 ]
As described in the 1963 exposé The American Way of Death , the for-profit death care industry in the United States evolved after the Civil War to promote ostentatious and resource-intense funerary customs mainly for burial, including embalming with chemicals, expensive coffins , and highly decorated gravesites. [ 4 ] Following the exposé, cremation grew in popularity as a simpler alternative, outnumbering burials nationwide by 2015. [ 5 ] However, cremation itself is under scrutiny due to the use of fossil fuels in retorts and the emissions released by combustion (which may include toxic mercury from dental amalgam ). [ 6 ]
Although the natural decomposition of human corpses into soil is a long-standing practice, Katrina Spade (founder of Recompose ) is credited with pursuing research on ways to accelerate the process using methods previously used with livestock. [ 7 ] The process was the subject of scientific study at Washington State University. [ 8 ]
Composting is an aerobic method of decomposing organic solid matter to recycle it. [ 9 ] The process involves decomposing organic material into a humus-like material, known as compost, which can fertilize plants. [ 10 ] Composting organisms require four equally important ingredients to work effectively: carbon, nitrogen, oxygen and water. [ 11 ] [ 12 ]
As described in a patent application and in news reports, Recompose's method entails placement of human corpses in a container along with a composting feedstock of plant material. In reports, this is described as a mixture of woodchips , straw , and alfalfa . [ 13 ] Recompose estimates they use 729 cubic feet (20.6 m 3 ) of plant material. [ 14 ] The mixture is aerated (and optionally rotated) to encourage the temperature of the mixture to rise until thermophile microbes decompose the body and the feedstock. [ 7 ] [ 15 ] In addition to developing the composting process itself, Spade worked with engineer Oren Bernstein to design containers and frames to compost several bodies within a single complex. [ 15 ]
In this manner, the transformation can be sped up to as little as 1–2 months. [ 7 ] The soil can be returned to loved ones in containers and scattered, similar to cremains. [ 16 ] Recompose estimates that per person, their process yields soil in the amount of 27 cubic feet (0.76 m 3 ) by volume and 1,000 pounds (450 kg) by weight. [ 14 ]
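The feedstock and soil figures above can be sanity-checked against the metric equivalents the article quotes. A minimal sketch using standard unit conversion factors (the input numbers are Recompose's published estimates quoted above; note the article rounds 454 kg to 450 kg):

```python
# Verify the unit conversions quoted for Recompose's estimates.
FT3_TO_M3 = 0.0283168  # 1 cubic foot in cubic metres (standard factor)
LB_TO_KG = 0.453592    # 1 pound in kilograms (standard factor)

feedstock_ft3 = 729   # plant material input per person
soil_ft3 = 27         # finished soil output per person, by volume
soil_lb = 1000        # finished soil output per person, by weight

print(round(feedstock_ft3 * FT3_TO_M3, 1))  # → 20.6 (m³, matches the article)
print(round(soil_ft3 * FT3_TO_M3, 2))       # → 0.76 (m³, matches the article)
print(round(soil_lb * LB_TO_KG))            # → 454 (kg; article rounds to 450)
```

The check also makes the roughly 27-fold volume reduction from feedstock to finished soil explicit.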
There are various terms for specific methods of composting human remains. These include:
Private companies that perform natural organic reduction hold trademarks and patents for their specific methods; news reports have genericized these terms.
Persons with certain diseases (such as tuberculosis , Creutzfeldt–Jakob disease , and ebola ) are ineligible for human composting due to pathogens that may survive the composting process. [ 26 ] [ 27 ]
Similar to cremation, certain materials in a human body must be handled with care. Implants with batteries (such as pacemakers) or radioactive materials (such as brachytherapy seeds) present risks that require removal before a body is composted. Bone fragments may require pulverization in the middle of the composting process to decompose further. Metals (such as those from hip replacement ) must be removed from composted remains. [ 28 ] [ 29 ] [ 30 ]
In Washington, regulations require the testing of composted remains for levels of toxins including arsenic , cadmium , lead , mercury , and selenium , as well as fecal coliform and salmonella pathogens. Remains exceeding limits may not be released into the environment. [ 31 ] [ 32 ]
States that legalized natural organic reduction may have individual restrictions on the handling of organically reduced human remains. These include Colorado's prohibition on growing food with soil from human remains, [ 33 ] and California's allowing state or local agencies to prohibit scattering in specific areas. [ 34 ]
Proponents say human composting is more economical, environmentally friendly, and respectful of the body and the earth than the methods of disposal that are typically practiced in technologically advanced societies. Cremation uses fossil fuels or large amounts of wood for funeral pyres (both of which generate polluting smoke and release large amounts of carbon), and conventional burial is land-intensive, has a high carbon footprint, and frequently involves disposing of bodily fluids and liquefied organs in the sewer and injecting the body with toxic embalming chemicals. By contrast, human composting, like natural burial , is a natural process and contributes ecological value by preserving the body's nutrient material. [ 35 ]
Author and YouTuber Caitlin Doughty , writing in favor of legalization in New York state, argues that the process "fulfills many people’s desire to nurture the earth after dying." [ 36 ] An editorial in Undark Magazine argues that "natural organic reduction respects the human body and spirit, supports rather than sullies the earth, and works with nature rather than against it." [ 37 ]
Critics say the rapid decomposition process is inappropriate for human bodies. The Catholic Church in the United States , for example, has argued that it does not confer the respect due to bodily remains, [ 38 ] [ 39 ] [ 40 ] though other Catholics have maintained that human composting "fulfill[s] in a more direct way the Biblical declaration that we are dust and to dust we shall return (Genesis 3:19)." [ 41 ] Orthodox Jewish interpretations of Halakha religious law oppose the sped-up composting process, saying it lacks appropriate reverence for the dead, with the matter under debate in other branches of Judaism. [ 42 ] [ 43 ]
Composting of human remains has required explicit authorization from jurisdictions with changes to environmental and professional licensing. Washington was the first U.S. state to legalize, regulate, and license the practice. [ 35 ] [ 44 ] [ 45 ] Three burial businesses in the state of Washington offer human composting as of December 2022 [update] . [ 42 ] [ 46 ]
In the United States, rapid human composting is legal, or has been approved to become legal, in thirteen states as of 2025 [update] . [ 47 ]
The Funeral Rule ( 16 CFR 453 ) enacted by the Federal Trade Commission is a U.S. federal regulation protecting consumers by requiring that funeral providers provide information about their goods and services. In 2020, the Commission began a formal review of the Rule.
In 2022, it published the results of its review, with a section on "New Forms of Disposition" covering natural organic reduction, stating:
The Commission is considering modifying the Rule to explicitly include new methods of disposition, such as alkaline hydrolysis and human natural organic reduction. The Rule could then clarify that such providers could offer direct or immediate services with a reduced basic services fee. The Commission is also considering updating the Rule to adapt to new methods of disposition, for example the Rule requirements to offer and provide disclosures about alternative containers for direct services. The Commission wants to ensure the Rule does not stifle innovation and believes the proposed changes help level the playing field for providers of new alternative methods. [ 60 ]
In 2023 the FTC sponsored a panel to discuss natural organic reduction and other new forms of disposition. [ 61 ]
The administrator of the United States National Cemetery System has authorized the placement of "a portion of remains transformed by natural organic reduction" in in-ground burial sections (including green burial sections) and designated scatter gardens at VA national cemeteries that have these options. Those whose remains are scattered or interred in this way may be eligible for memorial markers. [ 62 ]
A National Collaborating Centre for Environmental Health study funded by the Public Health Agency of Canada notes that while Canada has yet to legalize the process, "Canadians can access the service in US states such as Washington, the first North American jurisdiction to make it legal." The study notes that the Canadian government should "consider whether inspection or restrictions on the end use of compost transported across borders is required, from jurisdictions where the process is currently permitted, to jurisdictions where it is not." [ 63 ]
A 2023 [update] Euronews report noted that within the European Union no national-level government has legalized composting of human remains. [ 64 ]
The German state of Schleswig-Holstein approved a pilot for a human composting process dubbed Reerdigung ("reburial"). [ 65 ] [ 66 ]
In 2024, a research project funded by the French National Research Agency and jointly conducted by the organization Humo Sapiens, the University of Bordeaux , and the University of Lille began, with an aim toward a working prototype process by 2026. [ 67 ] In 2023, Élodie Jacquier-Laforge authored legislation to legalize the process in the National Assembly . [ 68 ]
Groups active in France and Belgium are campaigning for legalization of the process under the name "humusation." Brussels politician Bernard Clerfayt stated his opposition to local legalization, citing a study. [ 64 ]
In May 2020, the Health Council of the Netherlands issued an advisory report on the admissibility of new techniques of disposing of the dead. It found that "the available information on human composting is, as yet, insufficient to make possible an assessment." The report reviewed existing guidance in European regulatory frameworks and reports from European institutions about animal composting. It cites a European Food Safety Authority report on the composting of dead-on-farm pigs, in which the composted remains are sent for incineration rather than released into the environment. [ 69 ] [ 70 ]
Deborah Smith of the UK's National Association of Funeral Directors noted that human composting has not been undertaken in the United Kingdom. [ 71 ]
In 2023, the Church of England stated that it is considering the theological, practical and pastoral issues of the practice. [ 72 ]
As part of its 13th Programme of Law Reform, the Law Commission for England and Wales is considering regulations for human composting among other new funerary methods. The project started at the beginning of 2024 and will run until spring 2026. It will end with a final report and draft Bill. [ 73 ] | https://en.wikipedia.org/wiki/Human_composting |
Human decontamination is the process of removing hazardous materials from the human body, including chemicals, radioactive substances , and infectious material .
People suspected of being contaminated are usually separated by sex, and led into a decontamination tent, trailer, or pod, where they shed their potentially contaminated clothes in a strip-down room. They then enter a wash-down room where they are showered. Finally, they enter a drying and re-robing room to be issued clean clothing, a jumpsuit, or other attire. Some more structured facilities include six rooms (strip-down, wash-down and examination rooms, for each of the men's and women's sides). Some facilities, such as MODEC , are remotely operable, and function like "human car washes ". Common lathering in soap removes external dust that may contain radioisotopes. [ 1 ] It is advised that when lathering, effort should be made not to spread dust deposited on exposed, unclothed areas of skin to areas that were likely clean. [ 2 ]
Mass decontamination is the decontamination of large numbers of people. The ACI World Aviation Security Standing Committee describes a decontamination process thus, specifically referring to plans for Los Angeles authorities:
The disinfection/decontamination process is akin to putting humans through a car wash after first destroying their garments. Los Angeles World Airports have put in place a contingency plan to disinfect up to 10,000 persons who might have been exposed to biological or chemical substances.
Most hospitals in the United States are prepared to handle a large influx of patients from a terrorist attack. Volunteer hospital decontamination teams are common and trained to set up showers or washing equipment, to wear personal protective equipment , and to ensure the safety of both the victims and the community during the response. From a planning perspective it must be remembered that first responders in Level A or B personal protective equipment (PPE) will have a limited working duration, typically 20 minutes to 2 hours. [ 3 ]
Typically these teams use decontamination showers built into the hospital or tents which are set up outside in order to decontaminate individuals. Beyond terrorism incidents, common exposures may be related to factory spills, agricultural incidents, and vehicle accidents. Incidents are common in both urban and rural communities. Hospital decontamination is a component of the Hospital Incident Command System and is required in the standards set forth by the Joint Commission .
Decontamination exercises are frequently used to test the preparedness of emergency plans and personnel.
Exercises are of three types:
Collaboration among various levels of authority, and among various countries, is required to address bioterror threats, because contamination knows no boundaries. Disease and contamination do not stop at the border between countries. Thus organizations such as NATO bring together member countries to practice how to contain an outbreak, set up quarantine facilities, and care for displaced persons . [ 4 ]
"Dofficers" (Decontamination officers in the "doffing" or disrobing area) are often police or military personnel, ready to handle potentially unruly persons who refuse to cooperate with first responders.
For example, the U.S. Army Soldier and Biological Chemical Command suggests that:
Paul Rega, M.D., FACEP , and Kelly Burkholder-Allen also note, in "The ABCs of Bioterrorism", an additional advantage in decontaminating everyone found at the scene of an incident, because this will help the authorities in searching through everyone's clothes to find suspicious items:
Chris Seiple, in "Another Perspective on the Domestic Role of the Military in Consequence Management", suggests that the evidence gathering process of identifying contaminated people and their belongings should also include the process of video surveillance :
Although there are the obvious privacy concerns in surveillance, one can also argue that due to the high risk nature of terrorism, such surveillance is warranted, as it is in other high risk areas like bathing complexes where surveillance is often used because of the risk of drowning . In these cases the importance of safety may often be thought to outweigh privacy concerns.
One of the elements that separates a drill from a real-life situation is dealing with panicked or uncooperative victims. Security personnel should be assigned to the area for crowd control and to ensure appropriate flow of individuals in and out of the decontamination area. [ citation needed ]
In a real attack, the perpetrators may be among the victims, or some of the victims may be in possession of contraband , or of evidence that might help law enforcement in solving the crime. [ citation needed ]
Another consideration is that perpetrators among the victims might refuse to go through decontamination because it would result in discovery of the contraband they may be hiding.
For example, a person with explosives strapped to his or her body, under their clothing, would likely not be so willing to take it off. Such a victim might try to escape, and need to be restrained for decontamination.
Separate male and female officers (decontamination officers) deal with potentially unruly patients, by restraining the hands using flex cuffs, and cutting off the shirt, then removing shoes and pants normally. This usually requires a couple of officers.
The Belfast Telegraph describes such a situation:
See also [ 5 ] Battalion Chief Michael Farri:
Radioactive contamination can enter the body through ingestion , inhalation , absorption , or injection . This will result in a committed dose of radiation. [ citation needed ]
For this reason, it is important to use personal protective equipment when working with radioactive materials. Radioactive contamination may also be ingested as the result of eating contaminated plants and animals or drinking contaminated water or milk from exposed animals. Following a major contamination incident, all potential pathways of internal exposure should be considered. [ citation needed ]
Successfully used on Harold McCluskey , chelation therapy and other treatments exist for internal radionuclide contamination. [ 6 ] | https://en.wikipedia.org/wiki/Human_decontamination |
Human ecosystems are human-dominated ecosystems of the Anthropocene era. They are viewed as complex cybernetic systems, and conceptual models of them are increasingly used by ecological anthropologists and other scholars to examine the ecological aspects of human communities in a way that integrates multiple factors, such as economics, sociopolitical organization, psychological factors, and physical factors related to the environment.
A human ecosystem has three central organizing concepts: the human environed unit (an individual or group of individuals), the environment, and the interactions and transactions between and within these components. [ 1 ] The total environment includes three conceptually distinct, but interrelated environments: the natural, human constructed, and human behavioral. These environments furnish the resources and conditions necessary for life and constitute a life-support system. [ 2 ]
Human engineered cardiac tissues (hECTs) are derived by experimental manipulation of pluripotent stem cells, such as human embryonic stem cells (hESCs) and, more recently, human induced pluripotent stem cells (hiPSCs) to differentiate into human cardiomyocytes . [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] Interest in these bioengineered cardiac tissues has risen due to their potential use in cardiovascular research and clinical therapies. These tissues provide a unique in vitro model to study cardiac physiology with a species-specific advantage over cultured animal cells in experimental studies. [ 1 ] hECTs also have therapeutic potential for in vivo regeneration of heart muscle. [ 2 ] [ 3 ] hECTs provide a valuable resource to reproduce the normal development of human heart tissue, understand the development of human cardiovascular disease (CVD), and may lead to engineered tissue-based therapies for CVD patients. [ 3 ]
hESCs and hiPSCs are the primary cells used to generate hECTs. [ 2 ] [ 3 ] [ 4 ] [ 5 ] Human pluripotent stem cells are differentiated into cardiomyocytes (hPSC-CMs) in culture through a milieu containing small-molecule mediators (e.g. cytokines, growth and transcription factors). [ 1 ] [ 6 ] [ 7 ] Transforming hPSC-CMs into hECTs incorporates the use of 3-dimensional (3D) tissue scaffolds to mimic the natural physiological environment of the heart. [ 1 ] [ 2 ] [ 3 ] [ 8 ] This 3D scaffold, along with collagen – a major component of the cardiac extracellular matrix [ 9 ] – provides the appropriate conditions to promote cardiomyocyte organization, growth and differentiation. [ 1 ] [ 2 ] [ 3 ] [ 7 ] [ 8 ]
At the intracellular level, hECTs exhibit several essential structural features of cardiomyocytes, including organized sarcomeres , gap-junctions , and sarcoplasmic reticulum structures; [ 1 ] however, the distribution and organization of many of these structures is characteristic of neonatal heart tissue rather than adult human heart muscle. [ 1 ] [ 3 ] [ 4 ] [ 8 ] Recently, the combined effects of electrical and dynamic stimulation were found to significantly enhance the functional maturation of hECTs, resulting in improved alignment, structure, and organization, enhanced calcium handling capacity, increased expression of contractile and structural protein genes, and enhanced vascular network formation, closely resembling healthy in vivo conditions. [ 10 ] hECTs also express key cardiac genes ( α-MHC , SERCA2a and ACTC1 ) nearing the levels seen in the adult heart. [ 1 ] Analogous to the characteristics of ECTs from animal models, [ 11 ] [ 12 ] hECTs beat spontaneously [ 1 ] and reconstitute many fundamental physiological responses of normal heart muscle, such as the Frank-Starling mechanism [ 1 ] [ 7 ] and sensitivity to calcium. [ 1 ] hECTs show dose-dependent responses to certain drugs, such as morphological changes in action potentials due to ion channel blockers [ 4 ] [ 13 ] and modulation of contractile properties by inotropic and lusitropic agents. [ 1 ] [ 7 ]
Even with current technologies, hECT structure and function are closer to the level of newborn heart muscle than adult myocardium. [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 8 ] Nonetheless, important advances have led to the generation of hECT patches for myocardial repair in animal models [ 14 ] [ 15 ] and their use in in vitro models of drug screening. [ 1 ] [ 3 ] [ 13 ] hECTs can also be used to experimentally model CVD using genetic manipulation and adenoviral-mediated gene transfer . [ 1 ] [ 16 ] In animal models of myocardial infarction (MI), hECT injection into the hearts of rats [ 17 ] and mice [ 18 ] reduces infarct size and improves heart function and contractility. As a proof of principle, grafts of engineered heart tissues have been implanted in rats following MI with beneficial effects on left ventricular function. [ 19 ] The use of hECTs in generating tissue engineered heart valves is also being explored to improve current heart valve constructs for in vivo animal studies. [ 20 ] As tissue engineering technology advances to overcome current limitations, hECTs are a promising avenue for experimental drug discovery, screening and disease modelling and in vivo repair.
In biology , the epigenome of an organism is the collection of chemical changes to its DNA and histone proteins that affects when, where, and how the DNA is expressed ; these changes can be passed down to an organism's offspring via transgenerational epigenetic inheritance . Changes to the epigenome can result in changes to the structure of chromatin and changes to the function of the genome . [ 1 ] The human epigenome , including DNA methylation and histone modification , is maintained through cell division (both mitosis and meiosis ). [ 2 ] The epigenome is essential for normal development and cellular differentiation , enabling cells with the same genetic code to perform different functions. The human epigenome is dynamic and can be influenced by environmental factors such as diet , stress , and toxins .
The epigenome is involved in regulating gene expression, development, tissue differentiation, and suppression of transposable elements . Unlike the underlying genome, which remains largely static within an individual, the epigenome can be dynamically altered by environmental conditions.
The main types of epigenetic changes include: [ 3 ]
Addition of a methyl group to the DNA molecule, typically at cytosine bases. This modification generally leads to gene silencing by preventing the binding of transcription factors and other proteins necessary for gene expression. [ 3 ]
DNA functionally interacts with a variety of epigenetic marks, such as cytosine methylation, also known as 5-methylcytosine (5mC). This epigenetic mark is widely conserved and plays major roles in the regulation of gene expression and in the silencing of transposable elements and repeat sequences . [ 4 ]
Individuals differ in their epigenetic profiles; for example, the variance in CpG methylation among individuals is about 42%. By contrast, the epigenetic profile (including the methylation profile) of each individual is stable over the course of a year, reflecting the constancy of our phenotype and metabolic traits. The methylation profile, in particular, is quite stable over a 12-month period and appears to change more over decades. [ 5 ]
CoRSIVs are Correlated Regions of Systemic Interindividual Variation in DNA methylation. They span only 0.1% of the human genome, so they are very rare; they can be inter-correlated over long genomic distances (>50 kbp). CoRSIVs are also associated with genes involved in many human disorders, including tumors, mental disorders and cardiovascular diseases. Disease-associated CpG sites have been observed to be 37% enriched in CoRSIVs compared to control regions and 53% enriched in CoRSIVs relative to tDMRs (tissue-specific differentially methylated regions). [ 6 ]
Most CoRSIVs are only 200–300 bp long and include 5–10 CpG dinucleotides, while the largest span several kb and involve hundreds of CpGs. These regions tend to occur in clusters, and the two genomic areas of highest CoRSIV density are observed at the major histocompatibility complex ( MHC ) locus on chromosome 6 and at the pericentromeric region on the long arm of chromosome 20. [ 6 ]
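To make the "5–10 CpG dinucleotides per 200–300 bp region" figure concrete, counting CpG sites in a sequence is straightforward. A minimal sketch; the 40 bp fragment below is invented purely for illustration, not a real CoRSIV sequence:

```python
def count_cpg(seq: str) -> int:
    """Count CpG dinucleotides (a C immediately followed by a G)."""
    seq = seq.upper()
    return sum(1 for i in range(len(seq) - 1) if seq[i:i + 2] == "CG")

# A made-up 40 bp fragment, just to show the counting.
fragment = "ATCGTTACGGATCCGATTTACGAAATCGGGTACGTTTCGA"
print(count_cpg(fragment))  # → 7
```

A real CoRSIV-density scan would apply the same count in sliding windows across a reference genome.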
CoRSIVs are enriched in intergenic and quiescent regions (e.g. subtelomeric regions) and contain many transposable elements, but few CpG islands (CGI) and transcription factor binding sites. CoRSIVs are under-represented in the proximity of genes, in heterochromatic regions, active promoters , and enhancers . They are also usually not present in highly conserved genomic regions. [ 6 ]
CoRSIVs have a useful application: measurements of CoRSIV methylation in one tissue can provide information about epigenetic regulation in other tissues; indeed, the expression of associated genes can be predicted, because systemic epigenetic variants are generally consistent across tissues and cell types. [ 7 ]
Quantification of the heritable basis underlying population epigenomic variation is also important to delineate its cis- and trans-regulatory architecture. In particular, most studies state that inter-individual differences in DNA methylation are mainly determined by cis-regulatory sequence polymorphisms , probably involving mutations in TFBSs (Transcription Factor Binding Sites) with downstream consequences on local chromatin environment. The sparsity of trans-acting polymorphisms in humans suggests that such effects are highly deleterious. Indeed, trans-acting factors are expected to be caused by mutations in chromatin control genes or other highly pleiotropic regulators. If trans-acting variants do exist in human populations, they probably segregate as rare alleles or originate from somatic mutations and present with clinical phenotypes, as is the case in many cancers. [ 4 ]
DNA methylation (in particular in CpG regions) can affect gene expression: hypermethylated regions tend to be differentially expressed. In fact, people with similar methylation profiles tend to also have similar transcriptomes . Moreover, one key observation from human methylation is that most functionally relevant changes in CpG methylation occur in regulatory elements, such as enhancers.
However, differential expression concerns only a small number of methylated genes: only one fifth of genes with CpG methylation show variable expression according to their methylation state. It is important to note that methylation is not the only factor affecting gene regulation . [ 5 ]
Immunostaining experiments have revealed a global DNA demethylation process in human preimplantation embryos. After fertilisation , the DNA methylation level decreases sharply in the early pronuclei , a consequence of active DNA demethylation at this stage. Global demethylation is not an irreversible process, however: de novo methylation occurs from the early to the mid-pronuclear stage and from the 4-cell to the 8-cell stage. [ 8 ]
The percentage of DNA methylation differs between oocytes and sperm : the mature oocyte has an intermediate level of DNA methylation (72%), whereas sperm has a high level (86%). Demethylation of the paternal genome occurs quickly after fertilisation, whereas the maternal genome is quite resistant to demethylation at this stage. Maternal differentially methylated regions (DMRs) are more resistant to the preimplantation demethylation wave. [ 8 ]
CpG methylation is similar in germinal vesicle (GV) stage, intermediate metaphase I (MI) stage and mature metaphase II (MII) stage. Non-CpG methylation continues to accumulate in these stages. [ 8 ]
Chromatin accessibility in the germline has been evaluated by different approaches, such as scATAC-seq, sciATAC-seq, scCOOL-seq, scNOMe-seq and scDNase-seq . Stage-specific proximal and distal accessible chromatin regions have been identified. Global chromatin accessibility gradually decreases from the zygote to the 8-cell stage and then increases. Parental allele-specific analysis shows that the paternal genome becomes more open than the maternal genome from the late zygote stage to the 4-cell stage, which may reflect decondensation of the paternal genome as protamines are replaced by histones . [ 8 ]
DNA methylation imbalances between homologous chromosomes show sequence-dependent behavior: differences in the methylation state of neighboring cytosines on the same chromosome arise from differences in DNA sequence between the chromosomes. Whole-genome bisulfite sequencing (WGBS) is used to explore sequence-dependent allele-specific methylation (SD-ASM) at single-chromosome resolution with comprehensive whole-genome coverage. WGBS of 49 methylomes revealed CpG methylation imbalances exceeding 30% differences at 5% of the loci. [ 9 ]
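A toy sketch of how such an allele-specific imbalance call might look, given per-allele bisulfite read counts at one heterozygous CpG. The read counts and helper functions are hypothetical; only the 30% imbalance threshold is taken from the figure quoted above:

```python
def methylation_fraction(methylated: int, unmethylated: int) -> float:
    """Fraction of bisulfite reads reporting a methylated cytosine."""
    total = methylated + unmethylated
    return methylated / total if total else 0.0

def is_sd_asm(allele_a: tuple, allele_b: tuple, threshold: float = 0.30) -> bool:
    """Flag a CpG if per-allele methylation levels differ by more than threshold."""
    diff = abs(methylation_fraction(*allele_a) - methylation_fraction(*allele_b))
    return diff > threshold

# (methylated reads, unmethylated reads) for each parental allele at one CpG
print(is_sd_asm((18, 2), (6, 14)))   # 0.90 vs 0.30 → True (imbalance > 30%)
print(is_sd_asm((10, 10), (12, 8)))  # 0.50 vs 0.60 → False
```

Real WGBS pipelines additionally phase reads to haplotypes and apply statistical tests for read-depth, which this sketch omits.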
At gene regulatory loci bound by transcription factors, random switching between methylated and unmethylated states of DNA was observed. This is also referred to as stochastic switching, and it is linked to selective buffering of gene regulatory circuits against mutations and genetic diseases. Only rare genetic variants show the stochastic type of gene regulation.
The study by Onuchic et al. aimed to construct maps of allelic imbalances in DNA methylation, gene transcription, and histone modifications. 36 cell and tissue types from 13 participant donors were used to examine 71 epigenomes. Stochastic switching occurred at thousands of heterozygous regulatory loci bound by transcription factors. The intermediate methylation state reflects the relative frequencies of methylated and unmethylated epialleles. Epiallele frequency variations correlate with the allele's affinity for transcription factors.
The analysis suggests that the human epigenome on average harbors approximately 200 adverse SD-ASM variants. The sensitivity of genes with tissue-specific expression patterns provides an opportunity for evolutionary innovation in gene regulation. [ 9 ]
A haplotype reconstruction strategy is used to trace chromatin chemical modifications (using ChIP-seq) in a variety of human tissues. Haplotype-resolved epigenomic maps can trace allelic biases in chromatin configuration, and substantial variation is observed among different tissues and individuals. This allows a deeper understanding of cis-regulatory relationships between genes and control sequences. [ 10 ]
Post-translational modifications of histone proteins include methylation, acetylation , phosphorylation , ubiquitination , and sumoylation . These modifications can either activate or repress gene expression by altering chromatin structure and the accessibility of the DNA to transcriptional machinery.
The epigenetic profiles of human tissues reveal the following distinct histone modifications in different functional areas: [ 10 ]
Histone acetylation neutralizes the positive charge on histones. This weakens the electrostatic attraction to negatively charged DNA and causes unwinding of DNA from histones, making the DNA more accessible to the transcriptional machinery and hence resulting in transcriptional activation. [ 11 ]
Histone methylation can lead to activation or repression of gene expression, depending on which specific amino acid residues are methylated.
Non-coding RNA (ncRNA) gene silencing involves various types of non-coding RNAs, such as microRNAs (miRNAs), long non-coding RNAs (lncRNAs), and small interfering RNAs (siRNAs). These RNA molecules can modulate gene expression by various mechanisms, including mRNA degradation, inhibition of translation, and chromatin remodeling. [ 3 ]
During the last few years, several methods have been developed to study the structural and consequently the functional modifications of chromatin. The first project that used epigenomic profiling to identify regulatory elements in the human genome was ENCODE (Encyclopedia of DNA Elements) that focused on profiling histone modifications on cell lines. A few years later ENCODE was included in the International Human Epigenome Consortium (IHEC), which aims to coordinate international epigenome studies. [ 12 ]
The structural modifications that these projects aim to study can be divided into five main groups:
Topologically associating domains are a level of structural organization of the genome. They are formed by regions of chromatin, from 100 kilobases up to megabases in size, which highly self-interact. The domains are linked by other genomic regions, which, based on their size, are called either “topological boundary regions” or “unorganized chromatin”. These boundary regions separate the topological domains from heterochromatin and prevent the amplification of the latter. Topological domains are widespread in mammals, although similar genome partitions have also been identified in Drosophila . [ 13 ]
Topological domains in humans, as in other mammals, have many functions related to gene expression and transcriptional control. Within these domains, the chromatin is highly intertwined, while in the boundary regions chromatin interactions are far less frequent. [ 14 ] These boundary areas in particular show peculiarities that determine the functions of the topological domains.
Firstly, they contain insulator regions and barrier elements, both of which function as inhibitors of further transcription by the RNA polymerase enzyme. [ 15 ] Such elements are characterized by the massive presence of the insulator binding protein CTCF .
Secondly, boundary regions block heterochromatin spreading, thus preventing the loss of useful genetic information. This conclusion derives from the observation that the heterochromatin mark H3K9me3 is clearly interrupted near boundary sequences. [ 16 ]
Thirdly, transcription start sites (TSS), housekeeping genes and tRNA genes are particularly abundant in boundary regions, indicating that those areas have prolific transcriptional activity thanks to structural characteristics that differ from other topological regions. [ 17 ] [ 18 ]
Finally, the border areas of the topological domains and their surroundings are enriched in Alu /B1 and B2 SINE retrotransposons . In recent years, these sequences have been reported to alter CTCF binding sites, thus interfering with the expression of some genomic regions. [ 19 ]
Further evidence for a role in genetic modulation and transcription regulation comes from the strong conservation of the boundary pattern across mammalian evolution, with a dynamic range of small differences between cell types, suggesting that these topological domains take part in cell-type-specific regulatory events. [ 14 ]
The 4D Nucleome project aims to produce 3D maps of mammalian genomes in order to develop predictive models that correlate epigenomic modifications with genetic variation. In particular, the goal is to link genetic and epigenomic modifications with the enhancers and promoters they interact with in three-dimensional space, thus discovering gene-set interactomes and pathways as new candidates for functional analysis and therapeutic targeting.
Hi-C [ 20 ] is an experimental method used to map the connections between DNA fragments in three-dimensional space on a genome-wide scale. This technique combines chemical crosslinking of chromatin with restriction enzyme digestion and next-generation DNA sequencing . [ 21 ]
Such studies are currently limited by the lack or unavailability of raw data. [ 12 ]
Epigenetics is a currently active topic in cancer research. Human tumors undergo a major disruption of DNA methylation and histone modification patterns. The aberrant epigenetic landscape of the cancer cell is characterized by a global genomic hypomethylation, CpG island promoter hypermethylation of tumor suppressor genes , an altered histone code for critical genes and a global loss of monoacetylated and trimethylated histone H4.
The idea that DNA damage drives aging by compromising transcription and DNA replication has been widely supported since it was initially developed in the 1980s. [ 22 ] In recent decades, evidence has accumulated supporting the additional idea that DNA damage and repair elicit widespread epigenome alterations that also contribute to aging (e.g. [ 23 ] [ 24 ] ). Such epigenome changes include age-related changes in the patterns of DNA methylation and histone modification. [ 23 ]
As a prelude to a potential Human Epigenome Project , the Human Epigenome Pilot Project aims to identify and catalogue Methylation Variable Positions (MVPs) in the human genome . [ 25 ] Advances in sequencing technology now allow for assaying genome-wide epigenomic states by multiple molecular methodologies. [ 26 ] Micro- and nanoscale devices have been constructed or proposed to investigate the epigenome. [ 27 ]
An international effort to assay reference epigenomes commenced in 2010 in the form of the International Human Epigenome Consortium (IHEC). [ 28 ] [ 29 ] [ 30 ] [ 31 ] IHEC members aim to generate at least 1,000 reference (baseline) human epigenomes from different types of normal and disease-related human cell types . [ 32 ] [ 33 ] [ 34 ]
One goal of the NIH Roadmap Epigenomics Project is to generate human reference epigenomes from normal, healthy individuals across a large variety of cell lines, primary cells, and primary tissues. Data produced by the project, which can be browsed and downloaded from the Human Epigenome Atlas , fall into five types that assay different aspects of the epigenome and outcomes of epigenomic states (such as gene expression):
Reference epigenomes for healthy individuals will enable the second goal of the Roadmap Epigenomics Project, which is to examine epigenomic differences that occur in disease states such as Alzheimer's disease . | https://en.wikipedia.org/wiki/Human_epigenome |
The term human equivalent is used in a number of different contexts, referring to human-equivalent measures in comparisons of various animate and inanimate things.
Animal models are used to learn more about a disease, its diagnosis and its treatment , with animal models predicting human toxicity in up to 71% of cases. [ 1 ] The human equivalent dose (HED) or human equivalent concentration (HEC) is the quantity of a chemical that, when administered to humans, produces an effect equal to that produced in test animals by a smaller dose. [ 2 ] Calculating the HED is a step in carrying out a clinical trial of a pharmaceutical drug . [ 3 ]
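One common way to calculate an HED, published in regulatory guidance, scales the animal dose by body surface area using species-specific Km factors. The sketch below assumes that method and a few widely quoted Km values (human adult ≈ 37, rat ≈ 6, mouse ≈ 3); the function name and the example dose are illustrative, not taken from the text.

```python
# Sketch of the body-surface-area method for converting an animal dose
# to a human equivalent dose (HED). Km factors are assumed illustrative
# values from published regulatory guidance.
KM = {"mouse": 3, "rat": 6, "human": 37}

def human_equivalent_dose(animal_dose_mg_per_kg, species):
    """HED (mg/kg) = animal dose (mg/kg) x (animal Km / human Km)."""
    return animal_dose_mg_per_kg * KM[species] / KM["human"]

print(round(human_equivalent_dose(50, "rat"), 2))  # 50 mg/kg in rats -> ~8.11 mg/kg
```

Under these assumptions, a dose that is tolerable per kilogram in a small animal corresponds to a much smaller per-kilogram dose in humans, which is why the HED calculation precedes first-in-human trials.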
The concept of human-equivalent energy (H-e) assists in understanding of energy flows in physical and biological systems by expressing energy units in human terms: it provides a “feel” for the use of a given amount of energy by expressing it in terms of the relative quantity of energy needed for human metabolism , [ 4 ] assuming an average human energy expenditure of 12,500 kJ per day and a basal metabolic rate of 80 watts . [ 5 ] A light bulb running at 100 watts is running at 1.25 human equivalents (100/80), i.e. 1.25 H-e. On the other hand, a human may generate as much as 1,000 watts for a task lasting a few minutes, or even more for a task of a few seconds' duration, while climbing a flight of stairs may represent work at a rate of about 200 watts. [ 6 ]
The ages of domestic cats and dogs are often referred to in terms of " cat years " or " dog years ", representing a conversion to human-equivalent years. One formula for cat years is based on a cat reaching maturity in approximately 1 year, which could be seen as 16 in human terms, then adding about 4 years for every year the cat ages. A 5-year-old cat would then be (5 − 1) × 4 + 16 = 32 "cat years" (i.e. human-equivalent years), and a 10-year-old cat (10 − 1) × 4 + 16 = 52 in human terms. [ 7 ] | https://en.wikipedia.org/wiki/Human_equivalent |
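The cat-years formula described above is simple enough to express directly; the function name is illustrative, and the formula is only the one rule of thumb cited here (others exist).

```python
# Cat-years rule of thumb from the text: maturity at ~1 year counts as
# 16 human-equivalent years, plus about 4 years per cat year thereafter.
def cat_years(age):
    """Convert a cat's age (>= 1 year) to human-equivalent years."""
    if age < 1:
        raise ValueError("formula applies from maturity at about 1 year")
    return (age - 1) * 4 + 16

print(cat_years(5))   # 32 "cat years"
print(cat_years(10))  # 52 "cat years"
```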
Human evolutionary developmental biology or informally human evo-devo is the human-specific subset of evolutionary developmental biology . Evolutionary developmental biology is the study of the evolution of developmental processes across different organisms. It is utilized within multiple disciplines, primarily evolutionary biology and anthropology . Groundwork for the theory that "evolutionary modifications in primate development might have led to … modern humans" was laid by Geoffroy Saint-Hilaire , Ernst Haeckel , Louis Bolk , and Adolph Schultz. [ 1 ] Evolutionary developmental biology is primarily concerned with the ways in which evolution affects development, [ 2 ] and seeks to unravel the causes of evolutionary innovations. [ 3 ]
The approach is relatively new, but has roots in Schultz's The physical distinctions of man, from the 1940s. Schultz urged broad comparative studies to identify uniquely human traits. [ 4 ]
Brian Hall traces the roots of evolutionary developmental biology in his 2012 paper on its past, present and future. He begins with Darwinian evolution and Mendel 's genetics, noting the tendency of the followers of both men in the early 20th century to follow separate paths and to set aside and ignore apparently inexplicable problems. [ 5 ] Greater understanding of genotypic and phenotypic structures from the 1940s enabled the unification of evolution and genetics in the modern synthesis . Molecular biology then enabled researchers to explore the mechanisms and evolution of embryonic development in molecular detail, including in humans. [ 5 ]
Many human evolutionary developmental biology studies have been modeled after primate studies and consider the two together in a comparative model. Brain ontogeny and human life history evolution were examined by Leigh in a 2006 paper. He compares brain growth patterns of Homo erectus and Homo sapiens to address the evolution of brain size and weight. Leigh found three different patterns, all of which indicated that the growth rate of H. erectus either matched or exceeded that of H. sapiens . [ 6 ] He makes the case that this finding has wide application and relevance to the overall study of human evolution, pertaining specifically to the connections between energy expenditure and brain development. These findings are of particular utility in studies of maternal energy expenditure. [ 6 ] Comparative study of nonhuman primates, fossils and modern humans is used to examine patterns of brain growth and to correlate them with human life history. [ 6 ]
Jeremy De Silva and Julie Lesnik examined chimpanzee neonatal brain size to identify implications for brain growth in Homo erectus . This changed the understanding of differences and similarities of post-natal brain growth in humans and chimpanzees . The study found that there was a distinction necessary between growth time and growth rate. The times of growth were strikingly similar, but the rates were not. The paper further advocates the use of fossils to assess brain size in general and in relation to cranial capacity. [ 7 ]
Utilization of endocranial volume as a measure of brain size has been a popular methodology with the fossil record since Darwin in the mid-1800s. This measure has been used to assess the metabolic requirements of brain growth and the subsequent trade-offs.
Some of the work on human evolutionary developmental biology has centered around the neotenous features that present in humans, but are not shared across the primate spectrum. Steven J. Gould discussed the presentation of neoteny with "terminal additions" in humans. [ 8 ] Neoteny is defined as the delayed or slowed development in humans when compared with their non-human primate counterparts. The "terminal additions" were extensions or reductions in the rate and scope of stages of development and growth. [ 8 ] [ pages needed ] Gould hypothesized that this process and the production of neoteny in humans might be the key feature that ultimately led to the emotional and communicative nature of humans. He credits this factor as an integral facet of human evolution. However, there have also been cautions against applying this aspect to group ranking, deeming it inappropriate as a measure of evolutionary achievement. [ 9 ]
Early comparative and human studies examined the fossil record to measure features like cranial sizes and capacities so as to infer brain size, growth rate, total growth and potential implications for energy expenditure. Helpful as this is, the static nature of individual fossils presents its own challenge. The phylogenetic fossil line is itself a hypothesis, so anything based upon it is equally hypothetical. [ 10 ]
Using the fossil record of Neanderthals, modern humans, and chimpanzees, Gunz et al. examined the patterns of endocranial development. [ 11 ] They found that there are common features shared between the three, and that modern humans diverge from these common patterns in the first year of life. They concluded that even though much of the developmental outcome is similar insofar as brain size, the trajectories by which they arrived are not shared. Most of the differences arise post-natally, in the first year, with cognitive development. [ 11 ]
There have been a number of studies that not only take incomplete fossil records into consideration, but have attempted to specifically identify the barriers presented by this condition. For example, Kieran McNulty covers the potential utilities and constraints of using incomplete fossil taxa to examine longitudinal development in Australopithecus africanus . [ 10 ]
Many studies on development have been human-specific. In his 2011 paper, Bernard Crespi focused on adaptation and genomic conflict in childhood diseases. He considers the evolution of childhood diseases and their risk levels, and finds that both risk and disease have evolved. [ 12 ]
Hochberg and Belsky incorporate a life-history perspective, looking at adolescence. Substantial variation in phenotypic paths and presentations suggests significant environmental influence. They focus on plasticity between stages of development and the factors that shape it. Rate of maturation, fecundity, and fertility were all impacted by environmental circumstances. They argue that early maturation can be positive, reflecting opportunistic action within specific conditions. [ 13 ]
Technological advances that have allowed better and better access to the growth of the human form in utero have proven particularly formative in studies involving focus on genetic and epigenetic development. Bakker et al. look at the interconnected nature of developmental processes and attempt to use fetal vertebral abnormalities as an indicator for other malformations. They found that the origin of the cells was not nearly as highly correlated as the observed developmental signals. [ 14 ] In utero development and malformations were correlated in severity. [ 14 ]
Frietson Galis and colleagues look at the development of ribs, digits, and mammalian asymmetry. They argue that this work is relevant for the study of disease, the consistency in evolution of body plans, and the understanding of developmental constraints. [ 15 ] Sexual dimorphism in prenatal digit ratio was found as early as 14 weeks and was maintained whether or not the fleshy finger part was included. [ 15 ]
Language and cognitive function have also been subjects of evolutionary studies. Regarding language and evolutionary developmental biology, there is tension from the outset. Much of this contention has centered on whether to view and study language as an adaptation in and of itself, or as a by-product of other adaptations. Jackendoff and Pinker have argued for language as an adaptation owing to the interdependent social nature of humans. To support these claims, they point to features like the bi-directionality of language usage and comprehension. [ 16 ] This counters the claims of theorists like Noam Chomsky , who argued against language as a human-specific adaptation. [ 17 ]
Adaptation and adaptive theory have been debated even apart from their utility in the study of language. Gould and Lewontin engage with what they saw as flaws in adaptive theory using the analogy of the spandrels of San Marco. Among the issues identified is the lack of distinction between what a trait developed for and how it is used, and the underlying reasons or forces that created the novel trait initially. [ 18 ] This is particularly difficult to assess for intangibles such as language and cognition.
This debate has continued over decades, most often in the form of published responses and dialogue between theorists. The continued debate has prompted efforts to marry the two perspectives in a useful way. Fitch argues that the two approaches can be reconciled through the study of "neural computation and mammalian brain development". [ 19 ] It may be more useful to consider specific components of neural computation and development, what has been selected for, and to what end. [ 19 ]
Ploeger and Galis tackled modular evolvability and developmental constraints in human and other primate evolutionary trajectories. They argue that these should be treated with an interdisciplinary approach across the cognitive sciences. They frame this in the context of: | https://en.wikipedia.org/wiki/Human_evolutionary_developmental_biology |
Human factors in diving equipment design are the influences of the interactions between the user and equipment in the design of diving equipment and diving support equipment . The underwater diver relies on various items of diving and support equipment to stay alive, healthy and reasonably comfortable and to perform planned tasks during a dive.
Divers vary considerably in anthropometric dimensions , physical strength , joint flexibility, and other factors. Diving equipment should be versatile and chosen to fit the diver, the environment, and the task. How well the overall design achieves a fit between equipment and diver can strongly influence its functionality. [ 1 ] Diving support equipment is usually shared by a wide range of divers and must work for them all. When correct operation of equipment is critical to diver safety, it is desirable that different makes and models should work similarly to facilitate rapid familiarisation with new equipment. When this is not possible, additional training for the required skills may be necessary.
The most difficult stages for recreational divers are out-of-water activities and transitions between the water and the surface site, such as carrying equipment on shore, exiting from water to boat and shore, swimming on the surface, and putting on equipment. Safety and reliability, adjustability to fit the individual, performance, and simplicity were rated the most important features for diving equipment by recreational divers. [ 1 ] [ 2 ]
The professional diver is supported by a surface team , who are available to assist with the out-of-water activities to the extent necessary, to reduce the risk associated with them to a level acceptable in terms of the governing occupational safety and health regulations and codes of practice. This tends to make professional diving more expensive, and the cost tends to be passed on to the client. [ 3 ] [ 4 ] [ 5 ] [ 6 ]
Human factors engineering (HFE), also known as human factors and ergonomics , is the application of psychological and physiological principles to the engineering and design of equipment, procedures, processes, and systems. Primary goals of human factors engineering are to reduce human error , increase productivity and system availability, and enhance safety, health and comfort with a specific focus on the interaction between the human and equipment. [ 7 ]
Diving equipment is used to facilitate underwater activity by the diver. The primary requirements are to keep the diver alive and healthy, while secondary requirements include providing comfort and the capacity to perform required tasks. Safe operation requires correct equipment function as well as diver competence. [ 8 ]
Fault tolerance is the property that enables a system to continue operating properly in the event of the failure of some of its components. If its operating quality decreases at all, the decrease is proportional to the severity of the failure. Diving equipment, especially pieces that are high-availability or safety-critical systems , must have a high fault tolerance. The ability to maintain functionality when portions of a system break down is referred to as ' graceful degradation ', as opposed to a small failure causing total breakdown. [ 9 ] The diver must also be fault tolerant, a state that is achieved by competence , situational awareness and fitness to dive . [ 10 ] [ 11 ]
Task loading , nitrogen narcosis , fatigue, and cold can lead to loss of concentration and focus, reducing situation awareness . Reduced situation awareness can increase the risk of a situation that should be manageable developing into an incident where damage, injury or death may occur. [ 12 ]
A diver must be able to survive any reasonably foreseeable single equipment failure long enough to reach a place where longer-term correction can be made. The solo diver can not rely on team redundancy , and must provide all the necessary emergency equipment indicated as necessary by the risk assessment. [ 13 ] [ 14 ] [ 15 ] On the other hand, a team can reduce risk to an acceptable level in most cases by distributing redundancy among its members. However, the effectiveness of this strategy is tied to team cohesion and good communication. [ 12 ]
No gender-specific traits have been identified which require design of tasks and tools exclusively for female divers. Fit of diving suits must be tailored to suit the range of human shapes and sizes, and most other equipment fits all sizes, is adjustable to suit all sizes, or is available in several sizes. A few items are designed specifically for female use, but this is often more a fine-tuning for comfort or cosmetic styling than an ergonomically functional difference. Female divers are reported, on average, to experience greater difficulty in performing five tasks of recreational diving: carrying heavy equipment on shore, putting on the scuba set, underwater orientation, underwater balance and trim, and descent. The first two are related to lifting large, heavy and bulky equipment. Balance and trim could be related to buoyancy and weight distribution, but insufficient data is available to specify a remedy. [ 1 ]
There is a relative growth in the older sector of recreational diver demographics. Some are newcomers to the activity and others are veterans continuing a long career of diving activity. They include older female divers. More research is needed to establish the implications of age and sex-related variations on human factors and safety issues. [ 1 ]
The breathing apparatus must allow the diver to breathe with minimal added work of breathing , and minimum additional dead space . It should be comfortable to wear, and not cause stress injury or allergic reactions to its materials. It must be reliable and not require constant attention or adjustment during a dive, and performance should degrade gradually in the event of malfunctions, allowing time for corrective action to be taken with minimum risk. [ 16 ] When more than one breathing gas mixture is available, the risk of selecting a gas unsuitable for the current depth must be minimised. [ 12 ]
Holding the scuba mouthpiece between the teeth can cause jaw fatigue on a long dive. The loads that cause this fatigue can be reduced by using smaller second stages, different hose lengths and routing, angled swivels, and improved mouthpiece design, which may include customised bite grips. [ 17 ] Allergic reactions to mouthpiece materials are less common with silicone rubber and other hypoallergenic materials than with natural rubber, which was commonly used in older equipment. [ 18 ] Some divers experience a gag reflex with mouthpieces that contact the roof of the mouth, but this can be corrected by fitting a different style of mouthpiece. [ 19 ]
Purging the second stage is a useful function to clear water from the interior. The purge button should function only when pressed, and should be powerful enough to sufficiently clear the chamber while not blowing its contents down the diver's throat. [ 20 ] Cracking pressure is the pressure difference over the diaphragm needed to open the second stage valve. This should be low but not excessively sensitive to water movement or orientation. Once open with gas flowing, the gas flow often produces a slight increase in the pressure drop in the demand valve. This helps hold the demand valve open during inhalation, effectively reducing the work of breathing, but making the regulator more susceptible to free-flow. In high performance models, the user can adjust these sensitivity settings. [ 20 ]
The exhaust valve should offer the minimum resistance to exhalation, including a minimum opening pressure difference, and low resistance to flow through the opening. It should not easily block or leak due to foreign matter such as vomit. Exhaust gas flow should not be too distracting or annoying to the diver in a normal diving posture. Flow should be directed away from the faceplate of the mask and bubbles should not flow directly over the ears. [ 21 ]
Breathing effort should be reasonable in all diver attitudes . The diver can rotate in three axes, and may need to do so for a significant period including several breaths from any arbitrary orientation. The DV should continue to function correctly throughout the maneuvers, though some variation in breathing effort is inevitable. Breathing performance testing for compliance to standards is generally done facing forward and facing down. Manual adjustment of the inhalation valve spring may be available, and can help if an unusual orientation must be maintained for a long period. [ 22 ]
Scuba divers must be able to easily provide emergency gas to other scuba divers diving as part of the same group, which can be made easier or more difficult by the handedness of the regulator, the hose length, and the hose routing. Different configurations are used for specific circumstances. [ 23 ] For example, the 5 to 7 feet (1.5 to 2.1 m) "long hose" arrangement is used to facilitate gas sharing when swimming in single file through a restriction. Some hose routings have been standardised into reliable systems. [ 12 ]
Rebreather equipment removes carbon dioxide from exhaled gas and replaces it with oxygen, allowing the diver to breathe the gas again. This can be done in a self-contained system carried by the diver, in a system where the scrubber is carried by the diver and gas is supplied from the surface, [ 24 ] or where the gas is returned to the surface for recycling. [ 25 ] The power to circulate gas in the loop can be the lung power of the diver, energy from the supply gas pressure, [ 24 ] or externally powered booster pumps. [ 25 ] Scuba rebreathers tend to circulate by lung power, and the work of breathing can make up a significant part of diver effort at depth; in extreme circumstances it may exceed the capacity of the diver without additional workload. [ 26 ] [ 22 ] The position of the counterlungs and the orientation of the diver in the water, can have a significant effect on work of breathing, as can the restriction of flow between the diver's teeth. [ 27 ]
Using two scrubber canisters in series can provide a level of redundancy in that a fault in one that allows carbon dioxide breakthrough will not necessarily directly affect the other. It is also possible to repack just the first scrubber after a short dive, and may be possible to also change the order so that the freshest scrubber is last in the circuit. Mounting the counterlung across the diver between the scrubbers can eliminate transverse shifts in the centre of buoyancy during the breathing cycle, and also mounting it longitudinally in line with the lungs eliminates longitudinal buoyancy shifts during the breathing cycle. The counterlung could be mounted across the back or across the chest. [ 28 ]
A wide variety of rebreather types are used in diving because of the highly variable requirements in different situations. A diving rebreather is safety-critical life-support equipment – some modes of failure can kill the diver without warning, some others require immediate appropriate response for survival. [ 29 ]
Some rebreathers have control systems which will lock out if they do not complete a satisfactory pre-dive test. Others, which may be used for cave diving, will not prevent the diver from starting a return dive, but will warn them of the problems detected: a diver who has surfaced in a space isolated from the exit by a water-filled passage must be able to dive out, and it may be possible to escape by operating the rebreather manually. [ 30 ]
Rebreather diving incidents commonly involve an inappropriate breathing gas, which can result in loss of consciousness, water aspiration and drowning. Water aspiration may be delayed by use of a full-face mask or a mouthpiece retaining strap. According to a study of military rebreather accidents, the number of fatalities was low where mouthpiece retaining straps were used. The availability of a tethered dive buddy also helped ensure timely rescue. [ 31 ]
A separate snorkel, or tube snorkel, used for freediving and swimming at the surface on scuba, typically comprises a curved tube for breathing and a means of attaching the tube to the head of the wearer. The tube has an opening at the top and a mouthpiece at the bottom. Snorkels are classified by their dimensions and by their orientation and shape. [ 32 ] The length and the inner diameter (or inner volume ) of the tube are important ergonomic considerations when matching a snorkel to the requirements of its user. The orientation and shape of the tube must also be taken into account when matching a snorkel to its use while seeking to optimise ergonomic factors such as low drag through the water, airflow , water retention, interrupting the field of vision, work of breathing , and dead space. [ 33 ] The collapsible snorkel is intended for scuba divers who do not need a snorkel on every dive, but may find it occasionally useful. It can be folded up and stored in a pocket until it is needed. [ 34 ]
Some snorkels have a sump and drain valve at the lowest point to help clear the snorkel and drain the remnant volume of water in the snorkel out of the direct air passage. The effectiveness of these has not been clearly established. These valves have a tendency to fail if infrequently used, stored for long periods, through environmental fouling, or owing to lack of maintenance. Many also slightly increase the flow resistance of the snorkel, retain a small amount of water in the tube after clearing, or both. [ 35 ]
Mechanical dead space of diving breathing apparatus is the volume in which the exhaled breathing gas is immediately inhaled on the next breath, increasing the necessary tidal volume and respiratory effort to get the same amount of usable breathing gas, increasing the accumulation of carbon dioxide from shallow breaths, and limiting the maximum volume of fresh or recycled gas [ Note 1 ] in a breath. It is in effect an external extension of the physiological dead space. [ 36 ] [ 37 ] The importance of minimising dead space volume is greater when the work of breathing is large, as work of breathing can also be a limiting factor in gas exchange. This becomes critical at high ambient pressures when the density of the breathing gas is high. Lower density breathing gas diluents help mitigate this problem. [ 26 ]
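The effect of dead space on the usable portion of each breath can be sketched with a simple calculation. This is an illustrative example only; the tidal volume and dead space figures below are assumed round numbers, not values from the cited sources.

```python
# Illustrative only: how mechanical (apparatus) dead space reduces the
# fraction of each breath that is fresh gas available for gas exchange.
# All volumes are assumed example figures.

def alveolar_fraction(tidal_volume_ml, physiological_dead_space_ml,
                      mechanical_dead_space_ml):
    """Fraction of the tidal volume that is fresh gas reaching the alveoli."""
    total_dead_space = physiological_dead_space_ml + mechanical_dead_space_ml
    if total_dead_space >= tidal_volume_ml:
        return 0.0  # the entire breath is rebreathed dead-space gas
    return (tidal_volume_ml - total_dead_space) / tidal_volume_ml

# 500 ml tidal volume, 150 ml physiological dead space:
print(alveolar_fraction(500, 150, 0))    # no apparatus dead space: 0.7
print(alveolar_fraction(500, 150, 100))  # 100 ml apparatus dead space: 0.5
```

As the example suggests, even a modest apparatus dead space forces the diver to take larger or more frequent breaths for the same effective ventilation, which is why minimising it matters most when the work of breathing is already high.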
Diving masks and helmets provide air space between the eyes and a transparent window to allow the diver a clear view underwater. [ 38 ]
The internal volume of masks and helmets affects buoyancy and trim, as well as dead space, which in turn affects gas exchange and work of breathing.
The volume of dead space is important for full-face masks and helmets, but not relevant to half masks as they are not part of the breathing passage. For breathhold diving, the mask internal volume must be equalised from the single breath in the diver's lungs, so a small volume is highly desirable, but scuba divers have sufficient air available that this is not a problem.
Large internal volume half-masks tend to float up against the nose, which is uncomfortable and becomes painful over time. The trend is towards low volumes and wide fields of vision, which requires the viewport to be close to the face. This makes it difficult to design a frame and nose pocket that will accommodate the full range of face shapes and sizes. Wide and high-bridged noses and very narrow faces are a particular problem. The clearance between the viewport and eyes should account for the eyelashes when blinking.
Full-face masks have larger internal volumes, but they are strapped on more securely and the load is carried by the neck. This load is small enough to be easily accommodated by most divers, though it may take some time to get used to it, and a lower volume is more comfortable. A large volume may adversely affect diver trim and necessitate moving or adding ballast weight to compensate. [ 39 ]
The weight of a lightweight demand helmet in air is about 15 kg. Underwater it is nearly neutrally buoyant so it is not an excessive static load on the neck. The helmet is a close fit to the head and moves with the head, allowing the diver to aim the viewport using head movement to compensate for the restricted field of vision. [ 40 ]
Free-flow helmets compensate for a potentially large dead space by a high gas flow rate, so that exhaled gas is flushed away before it can be rebreathed. [ 41 ] : Ch.3 They tend to have a large internal volume, to be heavier than demand helmets, and to rest on the shoulders, so they do not move with the head. As there is no need for an oro-nasal inner mask, they usually have a large viewport or several viewports to compensate for the fixed position. The diver can move the head inside the helmet to a limited extent, but to look around further, the diver must rotate the torso. The view downwards is particularly restricted, and requires the diver to bend over to see the area near the feet. Buoyancy may be compensated by direct weighting of the helmet and corselet, or by a jocking harness and indirect weighting.
The mask must form a watertight seal around the edges to keep water out, regardless of the attitude of the diver in the water. This seal is between the elastomer skirt of the mask and the skin of the face. The fit of a mask affects the seal and comfort and must account for the variability of face shapes and sizes. For half masks, this is achieved by the very wide range of models available, but in spite of this some faces are too narrow or noses too large to fit comfortably. This is less of a problem with full-face masks and less again with helmets. However, these are affected by other factors like overall head size and neck length and circumference, so there is still a need for adjustment and different size options. [ 41 ]
Face and neck seals may be compromised by hair passing under the seal between the rubber and skin, and the amount of leakage will depend on the amount of hair and the position of the compromised part of the seal. Divers with large amounts of facial hair can usually compensate adequately on open circuit by occasional exhalation through the nose to clear the mask, but with a rebreather the gas used for mask clearing is lost from the circuit.
Two aspects of equalising the pressure in gas spaces are influenced by mask and helmet design. These are equalising the internal space of the mask or helmet itself, and equalising the ears . Equalising the internal space of a half mask is normally achieved through the nose, and equalising the ears requires a method to block the nostrils. This is relatively easy to do with half-masks, where the diver can usually pinch the nostrils closed through the rubber of the mask skirt. [ 41 ]
Helmets and most full-face masks do not allow the diver finger access to the nose, and various mechanical aids have been tried with varying levels of comfort and convenience. [ 42 ] [ 41 ]
The field of vision of the diver is reduced by opaque parts of the helmet or mask. Peripheral vision is more reduced in the lower areas due to the size of the demand valve. Helmet design is a compromise between low mass and inertia (with a smaller interior volume and restricted field of vision), and large viewports that lead to a larger interior volume. A viewport close to the eyes provides a better view for the same area, but this is complicated by the varying nose sizes of divers and the need for clearance for the oro-nasal mask. Curved viewports can introduce visual distortions that reduce the ability to judge distance, so almost all viewports are made flat. Even a flat viewport causes some distortion, but it takes relatively little time to get used to this, as it is constant.

Spherical port surfaces are generally used in newer atmospheric suits for structural reasons, and work well when the interior volume is large enough. They can be made wide enough for adequate peripheral vision.

Field of vision in helmets is affected by the mobility of the helmet. A helmet directly supported by the head can rotate with the head, allowing the diver to aim the viewport at the target. In this case, however, peripheral vision is constrained by the dimensions of the viewport, and the weight in air, unbalanced buoyancy forces when immersed, and inertial and hydrodynamic loads must all be carried by the neck. A helmet fixed to a breastplate is supported by the torso, which can safely support much greater loads, but it cannot rotate with the head. The entire upper body must rotate to direct the field of vision. This makes it necessary to use larger viewports so the diver has an acceptable field of vision at times when rotating the body is impractical. The need to rotate the head inside the non-rotatable helmet requires internal clearance, therefore a larger volume, and consequently a greater mass of ballast.
Optical correction is another factor that is considered in mask and helmet design. Contact lenses can be worn under all types of masks and helmets. Regular spectacles can be worn in most helmets, but cannot be adjusted during the dive. Corrective lenses can be glued to the inside of half-masks and some full-face masks, but the distance from the eyes to the lenses may not be optimal, and some correction may be needed to compensate for the increased distance from the cornea to the lens. Bifocal arrangements are available, mostly for far-sightedness, and may be necessary with older divers to allow them to read their instruments. Defogging of bonded lenses is the same as for plain glass. Some dive computers have relatively large font displays, and adjustable brightness to suit the ambient lighting. [ 43 ] [ 44 ] [ 45 ]
An open circuit breathing apparatus produces exhalation gas bubbles at the exhaust ports. Free-flow systems produce the largest volumes, but the outlet can be behind the viewports so it does not obscure the diver's vision. Demand systems must have the second stage diaphragm and exhaust ports at approximately the same depth as the mouth and lungs to minimise work of breathing. To get consistent breathing effort for the range of postures the diver may need to assume, this is most practicable when the exhaust ports and valves are close to the mouth, so some form of ducting is required to direct the bubbles away from the viewports of helmet or mask. This generally diverts exhaust gases around the sides of the head, where they tend to be rather noisy as the bubbles rise past the ears. Closed circuit systems vent far less gas, which can be released behind the diver, and are significantly quieter. Diffuser systems have been tried, [ 46 ] but have not been successful for open circuit equipment, though they have been used on rebreathers, where they improve stealth characteristics. [ 47 ]
The inside surface of the viewport of a mask or helmet tends to be prone to fogging, where tiny droplets of condensed water disperse light passing through the transparent material, blurring the view. Treating the inside surface with a defogging surfactant before the dive can reduce fogging. Fogging may occur anyway, and it must be possible to actively defog, either by rinsing with water or by blowing dry air over it until it is clear. There is no supply of dry air to a half-mask, but rinsing is easy and only momentarily interrupts breathing. A spitcock may be provided on standard helmets for rinsing. Demand helmets generally have a free-flow supply valve that directs dry air over the inside of the faceplate. Full-face masks may use either rinsing or free-flow, depending on whether they are intended primarily for scuba or surface-supply diving.
Masks held in place by adjustable straps can be knocked off or moved from the correct position, allowing water to flood in. Half masks are more susceptible to this, but because the diver can still breathe with a flooded half mask this is not considered a major issue unless the mask is lost. Full-face masks are part of the breathing passage, and need to be more securely supported, usually by four or five adjustable straps connected at the back of the head. [ Note 2 ] It is still possible for these to be dislodged, so it must be possible for the diver to refit them sufficiently to continue breathing with their hands in cold-water gloves. On the other hand, the regulator is fixed to the mask so the mask is not easily lost and can be retrieved in the same way as a regulator second stage. Helmets are much more securely attached, and it is considered an emergency if they come off the head, as it is difficult for the diver to rectify the problem underwater, though it is usually still possible to breathe carefully if the free-flow valve is opened and the helmet held over the head with the bottom opening level.
When using multiple gas sources and mixtures it is important to avoid confusing the gas mix in use and the pressure remaining in the various cylinders. The cylinder arrangement must allow access to cylinder valves when in the water. Use of the wrong gas for the depth can have fatal consequences with no warning. High task loading for technical divers can distract from checking the mix when switching gas. It is important to check that each cylinder is the correct gas and is mounted in the right place, to positively identify the new gas at each gas switch, and to adjust the decompression computer to allow for each change in gas for correct decompression. Some computers automatically change based on data from integrated pressure transducers, but still require correct pre-dive setting of gas mixes. [ 48 ] [ 49 ]
A back-mounted single cylinder configuration is stable on the diver in and out of the water, and is compact and acceptably balanced. However, some divers have difficulty reaching the valve knob, which is behind the back, particularly when the cylinder is mounted relatively low on the harness, or the suit is thick or tight. Back-mounted twin cylinders with an isolation manifold are also stable in and out of the water. They are compact, heavy, and acceptably balanced for most divers. Some divers have difficulty reaching the valve knobs behind the back. This can be a problem in a free-flow or leak emergency, where a large volume of gas can be lost due to inability to access knobs quickly to shut down the cylinder. The weight and buoyancy distribution may be top heavy for some divers. [ 12 ] [ 50 ] In back-mounted independent doubles, gas is not available if a cylinder valve must be shut down. The side-mount emergency options of feather breathing and regulator swap-out are also not available. Flexible valve knob extensions on back mount sets are not very satisfactory and not very reliable, and are an additional snag risk. [ 12 ] Pony cylinders for bailout or decompression gas clamped to the main gas supply put the valve where it cannot be seen, and may be difficult to reach. They are reasonably compact and manageable out of the water. [ citation needed ] Sling mount bailout and decompression cylinders allow easy access to the valve and allow the visual checking of labels during gas switching. Up to four sling cylinders are reasonably manageable with some practice. [ 12 ] [ 50 ]
Alternative configurations include an inverted single or manifolded twin cylinders. These have valves at the bottom which are more reachable, but are more vulnerable to impact damage. Custom hose lengths are needed, and hose routing will be different. This arrangement is used by firefighters , and has also been used by military divers. Weight and buoyancy distribution may be bottom heavy for some divers, and may adversely affect trim. This arrangement is also used for the gas cylinders on some rebreather models. Side mounts provide much easier valve access, and it is possible to see the top of each cylinder to check the label when switching gas, which allows confirmation of correct gas. It is possible to hand off a cylinder when donating gas to another diver, so a long hose is not needed. [ citation needed ]
The material and pressure rating of cylinders affects convenience, ergonomics, and safety. Aluminum alloy and steel are the two commonly available materials that are most often used for scuba cylinders. Their strength-to-weight ratio allows the manufacture of scuba cylinders that are near neutral buoyancy when empty. Cylinders that require a buoyancy compensator for support when they are empty can be unsafe, since it could be necessary to ditch breathing gas to regain buoyancy in the event of a buoyancy compensator failure. Cylinders that are buoyant when full require ballasting to make them manageable underwater. These are usually fibre-wound composite cylinders, which are expensive, relatively easy to damage, and usually have a shorter service life. They tend to be used for cave diving when they must be carried through a difficult route to get to the water. Buoyancy control is easier, more stable, and safer when the gas volume needed to achieve neutral buoyancy is minimised, particularly at the end of a dive during ascent and decompression. The need for a large volume of gas in the buoyancy compensator during ascent increases the risk of an uncontrolled buoyant ascent during decompression. [ 12 ]
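The buoyancy swing of a cylinder over a dive equals the weight of the gas consumed, which an ideal-gas approximation makes easy to estimate. The sketch below is illustrative only; the cylinder size, pressures, and air density are assumed typical values, not figures from the cited sources.

```python
# Rough ideal-gas estimate of the mass of air in a cylinder, and hence the
# buoyancy change between full and nearly empty. Assumed figures throughout.

AIR_DENSITY_SURFACE = 1.225  # kg/m^3 of air at about 1 bar and 15 degC

def gas_mass_kg(water_capacity_litres, pressure_bar):
    """Approximate mass of air in a cylinder at the given pressure."""
    free_air_m3 = water_capacity_litres * pressure_bar / 1000.0  # ideal gas
    return free_air_m3 * AIR_DENSITY_SURFACE

full = gas_mass_kg(12, 232)       # a common 12-litre, 232-bar cylinder
reserve = gas_mass_kg(12, 50)     # breathed down to a 50-bar reserve
print(round(full, 2))             # about 3.41 kg of air when full
print(round(full - reserve, 2))   # about 2.68 kg more buoyant at reserve
```

A diver must therefore carry enough ballast at the start of the dive to remain neutral when a few kilograms lighter at the end, which is the weight change the buoyancy compensator has to absorb.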
For stage drops , sidemount diving where the cylinders will be pushed ahead of the diver for long distances, and where a cylinder may be handed off to another diver, it is particularly convenient if the cylinder has nearly neutral buoyancy during these maneuvers, as this has the least immediate impact on the buoyancy and trim of the diver. This convenience reduces task loading and improves safety. [ 51 ]
Diving suits are worn for protection from the environment. In most cases this is to keep the diver warm, as heat loss to water is rapid. There is a trade-off between insulation, comfort, and mobility. When diving in the presence of hazardous materials, the diving suit also serves as personal protective equipment to limit exposure to those materials. [ 39 ]
Wetsuits rely on a good fit to work effectively. They rely on the low heat conductivity of the gas bubbles in the neoprene foam of the suit, which slows heat loss. If the water inside the suit can be flushed out and replaced by cold water, this insulating function is bypassed. Movement of the diver tends to circulate the water in the suit, mostly where it is present in thick layers, and if this water is forced out it will be replaced by cold water from outside. A close fit reduces the thickness of the layer of water and makes it more resistant to flushing. [ 52 ] A suit that is too tight can also cause problems: it could restrict movement and increase the diver's work of breathing. The gas bubbles in the neoprene foam compress at depth, reducing insulation as the diver goes deeper. [ 53 ] Semi-dry suits attempt to address this issue by making it more difficult for water to enter and leave the suit, but are still most effective when they are close-fitting. [ 54 ]
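The loss of insulation with depth follows from Boyle's law: the gas bubbles in the foam shrink roughly in inverse proportion to absolute pressure. The sketch below is a simplified illustration; it ignores the foam's structural resistance to compression.

```python
# Simplified Boyle's-law model of neoprene foam insulation at depth.
# Assumes the insulating gas bubbles compress freely with absolute pressure.

def pressure_bar(depth_m):
    """Absolute pressure in seawater: roughly 1 bar per 10 m plus 1 bar at the surface."""
    return 1.0 + depth_m / 10.0

def relative_insulation(depth_m):
    """Fraction of surface insulation remaining at depth (Boyle's-law model)."""
    return 1.0 / pressure_bar(depth_m)

for depth in (0, 10, 30):
    print(depth, relative_insulation(depth))
# 0 m: 1.0, 10 m: 0.5, 30 m: 0.25 of the surface insulation
```

Even under this idealised model, a wetsuit at 30 m retains only about a quarter of its surface insulation, which is one reason semi-dry and dry suits are preferred for deeper or longer cold-water dives.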
Dry suits rely on staying dry inside and maintaining a limited volume of gas distributed through the thermal undergarments. The volume of gas needed is fairly constant, but it expands and contracts due to pressure variations as the diver changes depth. Suit squeeze is caused by insufficient gas in the suit, and will reduce flexibility of the suit and restrict the diver's freedom of motion. This could prevent the diver from reaching critical equipment in an emergency. Gas is added manually by pressing a button to open the inflation valve , which is normally in the central chest area where it can easily be reached by both hands and is clear of the harness and buoyancy compensator. High flow rates are neither necessary nor desirable, as they could lead to over-inflation, particularly if the valve sticks open due to freezing. Over-inflation causes an uncontrollable rapid ascent if not corrected. Dumping of suit gas is only possible when the dump valve is above the gas to be dumped. [ 55 ] During ascent, the diver has several things to monitor, so an adjustable automatic exhaust valve which provides hands-free operation helps reduce this task loading. [ 56 ]
If the dry suit is flooded, thermal insulation is lost, which may make it necessary to abort the dive. Buoyancy can also be lost, a problem that can be countered by ditching ballast, inflating the buoyancy compensator if it is large enough, or deploying a DSMB or small lifting bag. The extra weight of the water can make it difficult to exit the water, but this can be mitigated by having ankle dumps or cutting the suit to allow water drainage. [ 55 ]
The ability of the diver to reach the cylinder valve can be constrained by the suit and personal joint flexibility of the diver. Back-mount configurations with valves up are particularly difficult to reach. This can cause delays in reacting effectively to some emergencies. This is partly a suit issue and partly a cylinder configuration issue. [ 55 ]
The combination of suit and helmet can further constrain movement. Considerable effort may be necessary to overcome the encumbrance of the suit so it can take longer to complete complex tasks, in an environment that is already non-conducive to dexterity or heavy labour. This was particularly noticeable on the standard diving suit. [ 57 ] Wrist and neck seals are commonly available in latex rubber, silicone rubber, and expanded neoprene. Some divers are allergic to latex, and should avoid latex seals. [ 58 ]
Dry suits can be effective for protection against exposure to a wide range of hazardous materials, and the choice of suit material should take into account its resistance to the known contaminants. Hazmat diving often requires complete isolation of the diver from the environment, necessitating the use of dry glove systems and helmets sealed directly to the suit. [ 39 ]
The suit should allow sufficient freedom of movement to swim, work, and reach all necessary accessories and controls when worn over undergarments suitable for the water temperature, without having excess internal volume, particularly in the legs. Excess leg length and a loose fit can cause the boots to float off the feet, with a consequent loss of the ability to swim and orientate correctly, which can be dangerous. The seals should be tight enough to be reliable without restricting blood flow, particularly at the neck. [ 55 ]
The operation and skill requirements for the safe use of dry suits has become fairly standardised, so although initial training is considered essential, switching between makes and models does not usually require retraining. [ 55 ]
Hot water suits are often used for deep dives when breathing mixes containing helium are used. Helium has a higher heat conductivity than air, but has a lower specific heat. The expansion of gas in the diving regulator causes intense cooling, and the chilled gas is heated to body temperature and humidified in the alveoli, which causes rapid heat loss from the body by conduction and evaporation. The amount of heat loss is proportional to the mass of gas breathed, which is proportional to ambient pressure at depth. This compounds the risk of hypothermia already present in the cold temperatures found at these depths. [ 59 ]
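Because the same tidal volume contains more gas molecules at depth, the mass of gas breathed, and with it respiratory heat loss, scales with ambient pressure. The sketch below is a simplified illustration of that scaling using the usual seawater approximation.

```python
# Simplified scaling of respiratory heat loss with depth: for a fixed tidal
# volume, the mass of gas per breath is proportional to ambient pressure.

def ambient_pressure_bar(depth_m):
    """Absolute pressure in seawater: roughly 1 bar per 10 m plus 1 bar."""
    return 1.0 + depth_m / 10.0

def relative_gas_mass_per_breath(depth_m):
    """Gas mass per breath relative to the surface, for the same tidal volume."""
    return ambient_pressure_bar(depth_m) / ambient_pressure_bar(0)

print(relative_gas_mass_per_breath(0))   # 1.0 at the surface
print(relative_gas_mass_per_breath(90))  # 10.0 at 90 m: ten times the gas mass
```

Ten times the gas mass per breath means roughly ten times the heat carried away by warming and humidifying the breathing gas, which is why active heating becomes necessary on deep helium dives.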
Hot water suits are usually one piece suits made of foamed neoprene with a zipper on the front of the torso and on the lower part of each leg. They are similar to wetsuits in construction and appearance, but do not fit as closely by design; they are not as thick because they only need to temporarily retain and guide the flow of the heating water. The wrists and ankles of the suit must allow water to flush out of the suit as it is replenished with fresh hot water from the surface. [ 60 ] Gloves and boots are worn which receive hot water from the ends of the arm and leg hoses. If a full-face mask is worn, the hood may be supplied by a tube at the neck of the suit. Helmets do not require heating. [ 42 ] : ch18 Breathing gas can be heated at the helmet by using a hot water shroud over the helmet inlet piping between the valve block and the regulator, which reduces heat loss to the breathing gas. [ 61 ]
Heated water in the suit forms an active insulation barrier to heat loss, but the temperature must be regulated within fairly close limits. If the temperature falls below about 32 °C, hypothermia can result, and temperatures above 45 °C can cause burn injury to the diver. The diver may not notice a gradual change in temperature, and could enter the early stages of hypo- or hyperthermia without noticing. [ 60 ] The suit must be loose fitting to allow unimpeded water flow, but this causes a large transient volume of water (13 to 22 litres) to be held in the suit, which can impede swimming due to the added inertia in the legs. [ 60 ]
Hot water suits are an active heating system; they are very effective while they are working correctly, but if they fail, they are very ineffective. Loss of heated water supply for hot water suits can be a life-threatening emergency with a high risk of debilitating hypothermia . Just as an emergency backup source of breathing gas is required, a backup water heater is also an essential precaution whenever dive conditions warrant a hot water suit. If the heater fails and a backup unit cannot be immediately brought online, a diver in the coldest conditions can die within minutes. Depending on decompression obligations, bringing the diver directly to the surface could be equally deadly. [ 60 ]
The diver will usually wear something under a hot water suit for protection against scalding, chafe and for personal hygiene, as hot water suits may be shared by divers on different shifts, and the interior of the suit may transmit fungal infections if not sufficiently cleaned. Wetsuits can prevent scalding of the parts of the body they cover, and thermal underwear can protect against chafe and keep the standby diver warm at the surface before the dive. [ 62 ] [ 63 ] [ 64 ]
The hot water supply hose of the umbilical is connected to a supply manifold at the right hip of the suit, which has a set of valves to allow the diver to control flow to the front and back of the torso and the arms and legs, and to dump the supply to the environment if the water is too hot or too cold. The manifold distributes the water through the suit via perforated tubes. [ 42 ] : ch18
Some initial training in the safe and effective use of hot water suits is considered necessary, but the skills are quickly learned and easily transferable between makes and the arrangement is fairly standard. [ 65 ]
The physiological problems of ambient pressure diving are largely eliminated by isolating the diver from the water and hydrostatic pressure in an atmospheric suit . [ 66 ] However, dexterity problems with manipulators on atmospheric diving suits reduce their effectiveness for many tasks. The joints of atmospheric suits allow walking but are not suitable for swimming. [ 67 ]
The suit must maintain constant volume during articulation, as a variable volume would require additional effort to move from a lower-volume geometry to a higher-volume one due to the large pressure difference. [ Note 3 ] [ 67 ] A range of user sizes can be accommodated by adding spacers between components, but the extra joints increase the likelihood of leaks. Fitting alternative parts of different lengths requires high-pressure seals to be split and reconnected, after which the suit may need to be pressure tested before use. [ 67 ]
The work required to overcome friction in the pressure-resistant joint seals, inertia of the limb armour, and drag of the bulky limbs moving through the water are major constraints on agility and limit the ways the diver can move. However, buoyancy control is relatively simple, as the suit is mostly incompressible and the life support system is closed so there is no weight change due to gas consumption. [ 67 ]
Although the pressure hull of the suit is often made from metals with high heat conductivity, insulating the diver is largely a matter of wearing clothing suitable for the internal air temperature, and insulating the shell away from the moving parts of joints is fairly straightforward. The air is recycled through the scrubber, which will heat it slightly through the exothermic chemical reaction that removes carbon dioxide. [ 66 ]
The helmet is rigidly connected to the torso of the suit, which limits the field of vision. This can be partly compensated by using a nearly hemispherical dome viewport. [ 67 ]
Atmospheric diving suits are still an emerging technology, and differ considerably, so specialist training is required for each model. [ citation needed ]
The surface-supplied diver's harness is an item of strong webbing, and sometimes cloth, which is fastened around a diver over the exposure suit. It must allow the diver to be lifted without risk of falling out of the harness. [ 62 ] : ch6 It also provides support for the bailout gas cylinder, and may carry the ballast weights, a buoyancy compensator, the cutting tool, and other equipment. Several types are in use. [ 3 ] Recreational scuba harnesses are mainly used to support the gas cylinders, buoyancy compensator and often the weights and small accessories, but are not normally required to function as a lifting harness. [ 68 ] In professional diving, when the harness may also be used to lift the diver, it must be strong enough to support the diver and equipment without causing injury. Some discomfort is considered acceptable when lifting out of the water, as this is an emergency procedure. [ 3 ]
Improper distribution of weight carried by the harness can cause discomfort and nerve pressure injury out of the water, [ 69 ] and the weight of the harness including cylinders can be problematic for putting the set on for some divers. [ 1 ]
Because pressure varies rapidly with depth, buoyancy is inherently unstable and controlling it requires continuous monitoring and input from the diver. The instability is proportional to the volume of the gas required for neutral buoyancy, so the volume of gas required for neutral buoyancy should be kept as low as possible over the course of the dive. [ 8 ]
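The instability can be illustrated with Boyle's law: gas that makes a diver neutral at one depth expands on even a small ascent, producing an upward force that pushes the diver further off depth. The sketch below uses assumed values for water density and the gas volume carried.

```python
# Illustrative sketch of buoyancy instability: gas neutral at one depth
# expands or compresses when the diver moves, creating a destabilising force.
# Water density and gas volume below are assumed values.

RHO_WATER = 1025.0  # kg/m^3, typical seawater
G = 9.81            # m/s^2

def buoyancy_change_newtons(gas_volume_litres, start_depth_m, new_depth_m):
    """Change in buoyant force when gas neutral at start_depth moves to new_depth."""
    p1 = 1.0 + start_depth_m / 10.0   # absolute pressure in bar
    p2 = 1.0 + new_depth_m / 10.0
    new_volume = gas_volume_litres * p1 / p2          # Boyle's law
    delta_v_m3 = (new_volume - gas_volume_litres) / 1000.0
    return RHO_WATER * G * delta_v_m3                  # positive = upward

# 15 litres of gas, neutral at 10 m; rising just 2 m gives an upward force:
print(round(buoyancy_change_newtons(15, 10, 8), 1))   # about 16.8 N upward
```

The destabilising force grows in proportion to the gas volume carried, which is why minimising the gas needed for neutral buoyancy makes depth control easier.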
Most of the weight change in a dive is due to gas use. Unless equipment is lost or abandoned, the maximum weight change is the consumption of all the gas in all the cylinders carried. The diver needs enough buoyancy volume to remain comfortably afloat before the dive starts. At the end of the dive there will be more buoyancy in reserve as a result of the gas consumption. However, too much reserve volume in the buoyancy compensator has the potential for contributing to an uncontrolled buoyant ascent. [ 12 ] [ 8 ]
In dry suits, gas is primarily intended for thermal insulation, and the additional buoyancy it creates is undesirable. Removing excess gas is only possible when there is an upward path from the gas to the venting point.
The automatic dump valve is conventionally positioned on the upper left sleeve, clear of the harness but in easy reach of the diver at all times, and at a natural high point for the most useful and likely trim positions for swimming, work, and particularly ascents. The gas will expand as a diver ascends, increasing the need to vent it. However, a body orientation that allows for sufficient venting during an ascent is inefficient for horizontal propulsion. On the other hand, maintaining an orientation with the feet kept higher means the diver loses the ability to vent and risks losing control of buoyancy. Ankle venting points can mitigate this, but they are not fitted as standard equipment as they have proven to be a common leak point. Diving suits should not be excessively baggy, to reduce the amount of trapped gas, but must be loose enough to allow freedom of movement and access for the feet to the boots. The problem can be exacerbated if the legs are baggy at the ankles and the boots are loose, because if the boots slip off the feet, all control of the fins, and transfer of power to the fins, is lost. Gaiters and ankle straps can reduce the volume of this part of a suit, and may also reduce hydrodynamic drag, while ankle weights require acceleration with every fin stroke. [ 55 ]
Female divers are reported to have more difficulties with buoyancy and trim. This may be a consequence of a buoyancy distribution not well catered for by most harness, buoyancy compensator and weighting systems, possibly exacerbated by dry suit buoyancy distribution. Many manage with available equipment, but it may take longer to learn to effectively use less ergonomically matched equipment. A similar problem is reported with unusually small divers. [ 1 ]
The operating skills for most types of single bladder buoyancy compensator are standardised and portable between models. Familiarisation is rapid and straightforward, and retraining is generally not required, though additional training is provided for adapting to sidemount because of the associated changes in breathing apparatus management. Twin bladder units require more adaptation of procedures, and are associated with more accidents due to human error, as there are more kinds of operator errors that can be made. [ 12 ]
Weighting systems are needed to compensate for the buoyancy of the diver and buoyant equipment. The distribution of buoyancy and ballast affects diver trim, which influences propulsion efficiency and breathing gas consumption. [ 70 ]
Weight-belts of conventional design are fastened around the waist and load the lower back when the diver is trimmed horizontal. This can cause lower back pain, particularly when the weights are heavy to compensate for the buoyancy of a dry suit with thick undergarments. Weights supported by the harness distribute the load more evenly. [ 71 ]
Ankle weights, used to improve trim, add inertia to the feet, which must be accelerated and decelerated with every fin stroke, requiring additional power input for finning and reducing propulsive efficiency. [ 70 ] The ability to shed ballast weight is considered a safety feature for scuba diving. It allows the diver to achieve positive buoyancy in an emergency, but the inadvertent loss of ballast when the diver needs to control ascent rate is itself an emergency that can cause decompression illness . [ 70 ]
The need to pull weights clear of other equipment when ditching in some orientations is additional task loading in an emergency. The weight belt can become caught in the harness and compound the diver's problems if the need to establish positive buoyancy is urgent. [ citation needed ]
Fin design is a compromise between propulsive efficiency and maneuverability. Monofins are the equipment of choice for deep apnea diving and for speed and endurance competitions. Breath hold spearfishers need more maneuverability while retaining the best reasonably practicable efficiency, and they mostly choose long bifins. Professional and recreational scuba and surface-supplied divers will sacrifice more efficiency for better maneuverability. Comfort issues and muscle or joint stress, particularly among less physically fit divers, may bias the choice towards softer fins that produce less thrust and maneuverability. Divers needing maximum maneuverability will usually choose stiff paddle fins which can be effective for reversing out of a tight spot but are inefficient for cruising using flutter kick. These fins work well with the frog kick, which is also less likely to shed vortices downward and disturb silty bottoms, so this style of fin is popular for cave and wreck penetration diving. [ 12 ]
Experimental work suggests that larger fin blades are more efficient in converting diver effort to thrust, and are more economical in breathing gas for similar propulsive effect. Larger fins were perceived by the participating divers to be less fatiguing than smaller fins. [ 72 ] For each kick stroke the mass of the fin must be accelerated once in each direction, so producing more thrust per stroke will waste less work on accelerations. Inertial effects increasing the work of finning are also caused by heavier fins, boots and ankle weights.
Attachment to the foot follows two basic options: an integral foot pocket enclosing the heel or an open heeled foot pocket with an elastic heel strap. Both systems allow full mobility of the ankle joint for bi-fins, but limit the motion for monofins. Full foot-pockets are softer and more comfortable on bare feet, and spread the loads more evenly, but are often unsuited to wearing over a thick or hard-soled boot capable of crossing rough rocky shores. Fin retainers may be necessary for security if the fit is loose. Open heel foot pockets can be matched with foot width when wearing a boot, and the heel-strap is selected or adjusted to fit. Fin straps may be of fixed or adjustable length. Fixed length straps are always the right length for a single user, and have fewer snag points, moving parts, and other components that can fail. Adjustable straps are quickly adaptable to the feet of different users, a major advantage for rental equipment. [ 73 ]
Glove fit is important for several reasons. Gloves that are too tight or thick restrict movement and require more effort to grip, which causes early fatigue. Reduced blood flow may cause cramping. Loose gloves may be ineffective against heat loss due to flushing, and may reduce dexterity due to excess bulk. [ 74 ]
There is a conflict between insulation and dexterity: thick gloves or chilled hands reduce tactile sense and grip strength and cause early fatigue. The diver can tolerate greater heat loss through the hands if the rest of the body is warm, but in some cases, such as diving in near-freezing water or where the air temperature at the surface is below freezing, the risk of frostbite or non-freezing cold injury necessitates the use of gloves most of the time. For safety-critical equipment, dexterity can be the difference between managing a problem adequately and a situation deteriorating beyond recovery. Simple, large control interfaces such as oversize knobs and buttons, large clips, and tools that can be gripped by a heavily gloved hand can reduce risk significantly. [ 75 ] [ 74 ]
In very cold water there are two problems causing loss of dexterity. The chilling of hands and fingers directly causes loss of feeling and strength of the hands, and thick gloves needed to reduce chilling also reduce the sensitivity of the fingertips, making it more difficult to feel what the fingers are doing. Thick gloves also make the fingertips wider and thicker and a poorer fit to components designed to be used by gloveless hands. This is less of a problem with gloves where the fingertips have a reduced thickness of cover over the contact surface, but few neoprene gloves have this feature. The fingertips of the thumbs and forefingers are most affected, and also wear out faster than the rest of the glove. Some divers wear a thinner, tougher, work glove under the neoprene insulating glove, and cut the tips off the thumbs and forefingers of the neoprene gloves to expose the inner gloves as a workable compromise. Dry gloves allow the diver to tailor the inner insulating glove to suit the task. Insulation can be thicker where it affects dexterity least, and thinner where more sensitivity is needed. [ 74 ] [ 76 ]
Long term grip strength is reduced by fatigue. If the glove requires effort to close the hand to hold an object, this will eventually tire the hand, and grip will weaken sooner than when affected by cold alone. This is mitigated by gloves with a preform to fit a partly closed hand, and by more flexible glove materials. [ 75 ] [ 74 ]
Breathing gas supplied to divers from the surface is routed through a surface control manifold and the gas panel , and may also pass through a manifold in an open or closed diving bell. The surface gas panel may be operated by the diving supervisor or a designated gas man , and the bell panel is the responsibility of the bellman . The gas panels are arranged so that it is clear to the operator which valves and gauges serve each diver. The surface standby diver may be supplied from an independent panel with independent gas supplies, so the standby diver is isolated from gas supply problems that may affect the working divers. [ 6 ] [ 77 ] Gas panels may be integrated with voice communication equipment.
The gas panel should monitor the depth of each diver in order to provide the right supply pressure. This is done using the pneumofathometer gauge for each diver. It should control the flow rate for free-flow helmets, monitor the supply pressure of connected gases, make it clear which supply is in use when changing between main and secondary, and confirm that the gas is breathable at the current depth of each diver. Additionally, it should display which part of the system is supplying which diver. Safe and reliable gas provision to the divers depends on the panel operator having a clear and accurate knowledge of the status of the valves and pressures at the panel. This is helped by arranging the components of the panel so that it is immediately obvious which components are dedicated to each diver, what the function of each component is, and the status of each valve. Quarter turn ball valves are generally used because it is immediately obvious from the handle position whether they are open or closed. The spatial arrangement of valves and gauges on the panel is usually either the same for each diver, or mirrored. All operable valves and gauges should be labeled, and colour or shape coding may be useful. [ 78 ] [ 79 ] [ 77 ] [ 80 ] [ 81 ]
Diver communications are the methods used by divers to communicate with each other or with surface members of the dive team. In professional diving , diver communication is usually voice communication between a single working diver and the diving supervisor at the surface control point, and with the bell for bell operations. This is considered important both for managing the diving work, and as a safety measure for monitoring the condition of the diver. The traditional method of communication by line signals is now used in emergencies when voice communications have failed. Surface supplied divers also often carry a closed circuit video camera on the helmet which allows the surface team to see what the diver is doing and to be involved in inspection tasks. This can also be used to transmit hand signals to the surface if voice communications fail. [ 82 ] : Ch.429 Underwater slates may be used to write text messages which can be shown to other divers. [ 83 ] [ 82 ] : Ch.5 Voice communication is the most generally useful format underwater, as visual forms are more affected by visibility, and written communication and signing are relatively slow and restricted by diving equipment. [ 84 ] : 1–2
Diver voice communication equipment does not work with a standard scuba demand valve mouthpiece, so scuba divers generally use hand signals when visibility allows, and there is a range of commonly used signals, with some variations. [ 85 ] These signals are often also used by professional divers to communicate with other divers. [ 86 ] There is also a range of other special purpose non-verbal signals, mostly used for safety and emergency communications.
The interface between air and water is an effective barrier to direct sound transmission, [ 87 ] and the natural water surface is a barrier to visual communication across the interface due to internal reflection. Hyperbaric speech distortion also hinders sound-based communication.
The process of talking underwater is influenced by the internal geometry of the life support equipment and constraints on the communications systems as well as the physical and physiological influences of the environment on the processes of speaking and vocal sound production. [ 84 ] : 6, 16 The use of breathing gases under pressure or containing helium causes problems in intelligibility of diver speech due to distortion caused by the different speed of sound in the gas and the different density of the gas compared to air at surface pressure. These parameters induce changes in the vocal tract formants , which affect the timbre , and a slight change of pitch . Several studies indicate that the loss in intelligibility is mainly due to the change in the formants. [ 88 ]
The difference in density of the breathing gas causes a non-linear shift of low-pitch vocal resonance, due to resonance shifts in the vocal cavities, giving a nasal effect, and a linear shift of vocal resonances which is a function of the velocity of sound in the gas, known as the Donald Duck effect . Another effect of higher density is the relative increase in intensity of voiced sounds relative to unvoiced sounds. The contrast between closed and open voiced sounds and the contrast between voiced consonants and adjacent vowels decrease with increased pressure. [ 89 ] Change of the speed of sound is relatively large in relation to depth increase at shallower depths, but this effect reduces as the pressure increases, and at greater depths a change in depth makes a smaller difference. [ 88 ]
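The size of this shift can be roughed out from the ideal-gas speed of sound, since the linear formant shift scales with the sound velocity in the gas. A hedged sketch (the gas properties, temperature, and mixing rule are textbook ideal-gas assumptions, not figures from the cited studies):

```python
# Illustrative sketch (assumptions, not from the article): rough formant
# shift from the change in the speed of sound in a helium-rich mixture.
import math

R = 8.314      # J/(mol K)
T = 288.0      # K, assumed 15 degrees C

def speed_of_sound(fractions):
    """fractions: {gas: mole fraction}; ideal-gas estimate c = sqrt(gamma*R*T/M)."""
    props = {  # molar mass in kg/mol, molar Cv in units of R
        "He": (0.004, 1.5),   # monatomic
        "O2": (0.032, 2.5),   # diatomic
        "N2": (0.028, 2.5),
    }
    M = sum(f * props[g][0] for g, f in fractions.items())
    cv = sum(f * props[g][1] for g, f in fractions.items())
    gamma = (cv + 1.0) / cv   # Cp = Cv + R for an ideal gas
    return math.sqrt(gamma * R * T / M)

c_air = speed_of_sound({"N2": 0.79, "O2": 0.21})    # ~340 m/s
c_heliox = speed_of_sound({"He": 0.8, "O2": 0.2})   # ~630 m/s
print(round(c_heliox / c_air, 2))  # formants scale by roughly 1.8-1.9
```

On these assumptions, formants in an 80/20 heliox shift upward by a factor of roughly 1.8 relative to air, which illustrates the severity of the distortion that helium speech unscramblers must correct.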
Helium speech unscramblers are a partial technical solution. They improve intelligibility of transmitted speech to surface personnel. [ 89 ]
Diving instrumentation may be for safety or to facilitate the task being performed. Safety-critical information such as gas pressure and decompression status should be presented clearly and unambiguously. [ 90 ] [ 91 ]
A lack of standardised dive computer user-interfaces can cause confusion under stress. [ 90 ] Computer lock-out at times of great need is a potentially fatal design flaw. The meaning of alarms and warnings should be immediately obvious. The diver should be dealing with the problem, not trying to work out what it is. [ 90 ] [ 91 ] Displays should allow for variations in visual acuity , and be readable with colour-blindness. [ 90 ] Ideally, critical displays should be readable without a mask, or allow safe surfacing without a mask. There should not be too much distracting information on the main screen, and returning to the main screen should be automatic by default, or auxiliary screens should continue to display critical decompression data. [ 90 ] [ 91 ]
Dive computers are safety critical equipment, but there is very little formal training provided for their use. Models also vary considerably in operation, and are often not intuitive, so skills are not transferable when a new unit is used. The user manual is usually all that is available to learn from, and it cannot be taken underwater for convenient reference. Instrument consoles represent a concentrated source of information, and a large potential for operator error. [ 57 ]
Dive computers provide a variety of visual dive information to the diver, usually on a LCD or OLED display. More than one screen arrangement may be selectable during a dive; the primary screen will display by default and contains the safety-critical data. Secondary screens are usually selected by pressing one or two buttons one or more times, and may be transient or remain visible until another screen is selected. All safety-critical information should be visible on any screen that will not automatically revert within a short period, as the diver may forget how to get back to it, and this may put them at significant risk. Some computers use a scroll-through system, which tends to require more button pushes but is easier to remember, as eventually the right screen will turn up. Others may use a wider selection of buttons, which is quicker when the sequence is known, but easier to forget or confuse, and may demand more of the diver's attention. [ 49 ] [ 92 ]
Display and control units for electronically controlled closed circuit rebreathers have very similar requirements and problems to dive computers. This may be reduced when the rebreather controllers and backup dive computer are produced by the same manufacturer.
Head-up displays can be used to provide the diver with a view of critical information which is always visible. These can be mounted on the mask, or on the mouthpiece assembly. Head-up displays require special near-eye optics to allow correct focus on the display. [ 93 ] [ 94 ] [ 95 ] In conditions of very low visibility, a head-up display has the advantage that the diver's ability to see the display is not affected by turbidity. It also lets the diver monitor all displayed dive data without interrupting their work. [ 96 ]
The primary function of diver cutting tools is to deal with entanglement by lines or nets. The tool should be accessible to both hands, and should be capable of cutting the diver free from any entanglement hazard predicted at the dive site. Many divers carry a cutting tool as standard equipment, and it may be required by code of practice as default procedure. When entanglement risk is high, backup cutting tools may be required. [ 97 ]
Dive lights may be needed to compensate for insufficient natural illumination or to restore colour. [ 98 ] [ 38 ] They may be carried in several ways depending on their purpose. Head mount lights are used by divers who need to use both hands for other purposes. [ 99 ] With a head mount there is a greater risk of dazzling other divers in the vicinity, as the lights move with the diver's head. As such, this arrangement is more appropriate for divers who work or explore alone. Helmet mounts are appropriate for illuminating work which is monitored via a helmet-mounted closed circuit video camera. Hand-held lights can be directed by the diver independently of the direction the diver is facing and do not require any special mounting equipment. However, they occupy a hand and are at risk of being dropped unless they are clipped on. They are most suitable for incidental lighting, and where precise direction is useful. A glove or Goodman handle mount allows precise direction while leaving the hand free to perform some other tasks. Canister lights allow the light head to be held in either hand, on a Goodman handle, or looped over the neck to free both hands, and the cable prevents the light from falling far if dropped. It is possible and fairly common to carry more than one of these options. Where light is important for safety, the diver will carry backup lights. [ 100 ]
A buddy line is a line or strap physically tethering two scuba divers together underwater to prevent separation. They can also serve as a means of communication in low visibility conditions. [ 101 ] It is usually a short length and may be buoyant to reduce the risk of snagging on the bottom. It does not need to be particularly strong or secure, but should not pull free under moderate loads, such as when used for line signals. Divers may communicate by rope signals , but may just use the line to attract attention before moving closer and communicating by hand signals. The disadvantage of a buddy line is an increased risk of snagging and entanglement, and the risk is increased with a longer or thinner line. Divers may need to disconnect the line quickly at either end in an emergency, which can be done via a quick-release mechanism or by cutting the line, both of which require at least one free hand. A velcro strap requires no tools for release and can be released under tension. [ citation needed ]
Clips and attachment points should be reliable and must generally be operable by one hand with gloves suitable for the water temperature, without needing to see what is being done, as it may be dark, low visibility, or out of view. Single-hand operation is necessary where only one hand can reach. This is always preferable, as the other hand may be in use for something important. While unlikely, it is possible for most types of clip to become jammed closed, and if this may endanger the diver it should be possible to use an alternative method to disconnect, which does not involve special tools. Cutting loose using the diver's cutting tool is the standard. [ 12 ]
A reliable clip is one that does not allow connection or disconnection by accident, instead requiring specific action by the operator to clip or unclip. Unreliable clips may cause loss of equipment or entanglement. Bolt snaps and screw-gate carabiners are examples of clips with a reputation for reliability. [ 12 ] The carabiners are more secure, and may be load rated, but are less convenient to operate. Carabiners are approved for attaching the umbilical to a surface supplied diver's harness. [ 3 ]
There are usually several attachment points provided on the diving harness or buoyancy compensator for securing accessories and additional diving cylinders. On technical harnesses these are often in the form of stainless steel D-rings or sliders with integral rings, and may be adjustable for position. [ 12 ] Plastic D-rings are common on bulk-produced recreational buoyancy compensators, and are usually in fixed positions, held on by bar-tacked webbing straps or tabs, and are not replaceable. Professional harness is usually required to have at least one attachment point capable of lifting the diver out of the water. [ 4 ] Attachment rings that are free to swing are less prone to snagging on the surroundings in tight spaces but are more difficult to clip onto one-handed when out of view. [ citation needed ]
A diver propulsion vehicle (DPV) is a powered device with an integral thruster used by scuba divers to increase their range underwater. Range is restricted by the amount of breathing gas that can be carried, the rate at which that breathing gas is consumed, and the power endurance and speed of the DPV. Time limits imposed on the diver by decompression requirements may also limit safe range in practice. DPVs have recreational, scientific and military applications. They have been produced in a range of configurations from small, easily portable scooter units with a small range and low speed, to faired or enclosed units capable of carrying several divers longer distances at higher speeds.
The most efficient position for towing behind is when the wake of the thruster bypasses the diver. This is usually achieved by using a tow leash from the DPV to a D-ring on the lower front of the diver's harness. The diver also holds a handle on top of the DPV with a dead-man switch that turns off power to the DPV as soon as the diver lets go of the handle. The DPV is commonly steered by one hand, leaving the other hand free for other tasks. This requires good static and dynamic balance of the DPV and diver to avoid excessive diver fatigue. Lights, cameras, navigation, and other instruments may be mounted on a DPV for convenience, but the diver should also carry backups for essential instruments in case the DPV must be abandoned in an emergency. Control of the DPV is additional task loading and can distract the diver. [ 102 ] A DPV can increase the risk of a silt-out if the thrust is allowed to wash over the bottom. [ 103 ]
DPV operation requires greater situational awareness than simply swimming, as some changes can happen much faster. Operating a DPV requires simultaneous depth control, buoyancy adjustment, monitoring of breathing gas, and navigation. Buoyancy control is vital for diver safety. The DPV has the capacity to dynamically compensate for poor buoyancy control by thrust vectoring while moving, but once it stops the diver may turn out to be dangerously positively or negatively buoyant if adjustments were not made to suit the changes in depth while moving. If the diver does not control the DPV properly, a rapid ascent or descent under power can result in barotrauma or decompression sickness. [ 104 ]
Underwater cameras are usually popular models encased in a watertight pressure housing. There are a few notable exceptions, such as the Nikonos and Sea & Sea ranges, in which the camera body was the pressure housing. [ 105 ] Controls are generally operated by movable links penetrating the watertight case, each requiring reliable seals because they represent a possible leak. Compact and lightweight camera bodies with multiple controls packed into a small space tend to transform into bulky, heavy and expensive units when housed for moderately deep diving. The camera's controls must be operable using thick gloves in cold water. For most underwater photography, a camera that is close to neutral buoyancy will be easier to handle and have less disruptive effect on diver trim. Strobe arms incorporating incompressible buoyancy compartments are the preferred system, as they do not need to be adjusted for changes of depth. [ citation needed ]
Internal flash is problematic at anything except very close range, as it can cause backscatter in cloudy water, and is the major consumer of battery power at full power. External flash using optical coupling avoids hull penetrations and potential leaks, and video lights give a good preview of exposure, and also provide the diver with a high-power dive light that is already pointing in the right direction to record the scene. With more powerful video lights and low-light sensitivity cameras, flash may not be necessary. [ citation needed ]
A surface marker buoy is towed to indicate the position of the diver. It should have sufficient buoyancy to reliably remain at the surface so it can be seen. If it is actively towed, it should not develop so much drag that the diver is unable to manage it effectively. The tow line may be a major source of drag (roughly proportional to its diameter); as such, a smaller, smooth line is preferable, and also fits on a more compact reel or spool. Smaller line may need to be of stronger and more abrasion-resistant material like ultra-high-molecular-weight polyethylene for acceptable durability. [ 106 ]
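The proportionality of towline drag to diameter can be illustrated with the standard drag equation, treating the towed line as a cylinder. A simplified sketch (the drag coefficient, dimensions, and speed are assumptions; a real towed line trails at an angle to the flow, so this overestimates absolute drag but shows the scaling):

```python
# Illustrative sketch (assumed coefficients): drag on a towed buoy line,
# modelled as a cylinder of diameter d over a towed length L.
RHO = 1025.0   # kg/m^3, seawater

def line_drag_newtons(d_m, length_m, speed_m_s, cd=1.2):
    """Drag = 0.5 * rho * Cd * frontal area * v^2; Cd is an assumption."""
    frontal_area = d_m * length_m   # projected area of the cylinder
    return 0.5 * RHO * cd * frontal_area * speed_m_s ** 2

thin = line_drag_newtons(0.002, 20.0, 0.5)   # 2 mm line
thick = line_drag_newtons(0.004, 20.0, 0.5)  # 4 mm line: double the drag
print(round(thin, 1), round(thick, 1))
```

Halving the diameter halves the frontal area and therefore the drag at a given speed, which is why thin, strong, smooth line is preferred for actively towed buoys.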
A decompression buoy is deployed towards the end of the dive as a signal to the surface that the diver has started to ascend. [ 107 ] This kind of buoy is not usually towed, so drag is not a problem. Visibility to a surface observer depends on colour, reflectivity, length above the water, and diameter. A low waterplane area has the advantage of reducing the variation of line tension as waves pass overhead, making it easier to maintain accurate depth under large swells during decompression stops. A larger buoy is more visible at the surface but will pull upward harder if the reel jams during deployment. [ citation needed ]
Distance lines are used for underwater navigation where it is either essential to mark the route out of the overhead environment, or necessary or desirable to return to a specific point. Lines are deployed from reels or spools, and may be left in place or recovered on the return. [ 108 ] [ 109 ] The design and construction of the reel have a large influence on handling during both deployment and recovery of line, which are major parts of the task loading of one of the divers in a wreck or cave penetration team. Good design can minimise effort of winding in the line, and an adjustable brake reduces risk of overruns and loose line in the water while laying the line, which is an entanglement hazard. Highly visible line helps reduce the risk of losing the line in bad visibility, and a near neutral buoyancy of the reel minimises the fatigue caused by carrying it in the hand for long periods. Matching the size of a reel or spool to its intended use allows easy recognition by feel and efficient storage. [ 110 ] [ 111 ]
Line markers are generally used on permanent guidelines to provide critical information to divers following the line. Slots and notches are used to wrap the line and secure the marker in place. Passing the line through the enlarged area at the base of the two slots allows the marker to slide along the line, or even fall off if brushed by a diver. To more securely fasten the marker, an extra wrap may be added at each slot. It must be possible to fit, interpret, and remove a line marker by feel in total darkness with the line under moderate tension. All of this must happen without dislodging the line. The basic function of these markers is fairly consistent internationally, but procedures may differ by region or team. The protocol for placement and removal should be well understood by the members of a specific team. [ 112 ]
A dive reel comprises a spool to hold the line, coupled with a winding knob which rotates on an axle attached to a frame, with a handle to hold the assembly in position while in use. [ 109 ] A line guide is almost always present to prevent line from unwinding unintentionally, and there is usually a method of clipping the reel to the diver's harness when not in use. Reels may be made from a wide variety of materials, but near neutral buoyancy and resistance to impact damage are desirable features. Reels may also be open or closed: this refers to the presence of a cover around the spool, which is intended to reduce the risk of line tangles on the spool, or of line flipping over the side and causing a jam. To some extent this works, but if there is a jam the cover effectively prevents the diver from correcting it. Open reels allow easy access to free jams caused by overwinds or line getting caught between spool and handle, and allow visual checks on the line while winding it in. Reels should be easy to use, lockable to prevent unintentional unrolling, and have sufficient friction to prevent overruns. Reels used for deploying DSMBs usually have a thumb release ratchet to allow free running deployment and to prevent unwinding when there is tension on the line at other times. A reel with a closed handle is less tiring to hold for long periods, particularly when wearing thick or stiff gloves. [ 111 ] [ 106 ]
Finger spools are a simple, compact, and low tech alternative to reels that are best suited to relatively short lengths of line. They are a pair of circular flanges with a hole in the middle, connected by a tubular hub, which is sized to use a finger as an axle when unrolling the line. The line is secured by clipping a bolt snap through a hole on one of the flanges and over the line as it leaves the reel. It is reeled in by holding the spool with one hand and simply winding the line onto the spool by hand. Spools are most suitable for reasonably short lines, up to about 50 m, as it becomes tedious to roll up longer lengths. [ 109 ] The double end bolt snap for locking the line may also be used as an aid for winding it in, to avoid line abrasion of the fingers or gloves.
While it is possible for a diver to put on and take off some items of equipment in the water, there is a greater risk of fitting them incorrectly or losing them, particularly when the water is rough. Doing this in the surf is even more risky, and delays at the surface on a boat dive can let the divers drift off site. When possible, kit-up and pre-dive checks should be completed on shore or on the boat, and the kit-up area should facilitate this, or at least make it possible. For recreational diving charter boats, this favours arrangements where each diver can safely and securely stow all their personal dive gear at the same place where they will put it on, and where it is not handled by anyone else except at the diver's request, as unauthorised handling of another person's life-support equipment could have legal consequences if something goes wrong.
Boarding the boat after a dive may require equipment to be removed in the water, which presents another set of hazards, with associated risks of injury and of damage to or loss of equipment. Some of these risks can be avoided if the diver does not have to take off equipment in the water, and if heavy equipment does not have to be lifted over the side of the boat with fragile dangling components exposed to snagging, impact, and crushing hazards by crew or passengers. The necessity to remove fins before climbing some ladders reduces the diver's ability to swim back to the boat if they drift away. When boarding an anchored boat, some way of keeping within reach of the boarding area while removing equipment is required, and it may be necessary to use both hands to ensure secure removal and hand-over of some equipment. Suitable handholds, clip-off points and trailing lines can facilitate this activity.
Design and construction of pressure vessels for human occupancy are regulated by law, safety standards, and codes of practice . These specify safety and ergonomic requirements, airlock opening sizes, internal dimensions, valve types and arrangement, safety interlocks, pressure gauge types and arrangements, gas inlet silencers, outlet safety covers, seating, illumination, breathing gas supply and monitoring, climate control and communications systems. Other requirements are also specified for structural strength, permitted materials, over-pressure relief, testing, fire suppression and periodical inspection. [ 113 ]
A closed bell design must allow access by divers wearing bulky diving suits and bailout sets appropriate for the depth. The amount of gas in the bailout set is calculated for a return rate of 10 metres per minute from the reach of the excursion umbilical . At greater depths, this may require twin sets of high pressure cylinders. It must also be possible for the bellman to hoist an unconscious diver through the lock. A flood-up valve may be provided to allow partial flooding of the bell, so that an unresponsive diver is partially supported by buoyancy while being maneuvered through the opening. Once suspended inside the bell, the water can be blown back down by adding gas. The internal volume must include enough space for divers and equipment including racks for the excursion umbilicals and the bell gas panel. On-board gas cylinders, emergency power packs, tools and hydraulic power supply lines do not have to be stored inside. Access while underwater is through a lock at the bottom, so that the internal gas pressure can keep the water out. This lock can be used for transfer to the saturation habitat, or a side lock can be provided. The side lock does not need to allow passage with harness and bailout cylinders as these are not carried into the habitat area and are serviced at atmospheric pressure. [ 114 ]
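The gas-planning rule above — a return at 10 metres per minute from the limit of the excursion umbilical — lends itself to a simple calculation. The sketch below is a hedged illustration only: the breathing rate, umbilical length, reserve factor, and fill pressure are hypothetical example values, not figures from any diving standard.

```python
# Hypothetical illustration of bailout gas sizing for a closed-bell excursion.
# All parameter values are example assumptions, not from any standard or code.

def bailout_gas_litres(depth_m, excursion_m,
                       rmv_surface_lpm=40.0,        # assumed breathing rate at 1 bar
                       return_rate_m_per_min=10.0,  # return rate cited in the text
                       reserve_factor=1.5):         # arbitrary safety margin
    """Free gas (surface litres) to swim back along the umbilical to the bell."""
    ambient_bar = 1.0 + depth_m / 10.0              # absolute pressure at depth
    return_minutes = excursion_m / return_rate_m_per_min
    return rmv_surface_lpm * ambient_bar * return_minutes * reserve_factor

def cylinder_litres_needed(gas_litres, fill_pressure_bar=300.0):
    """Water capacity of cylinder(s) to hold that gas, ignoring residual gas."""
    return gas_litres / fill_pressure_bar

# At 120 m with a 30 m excursion umbilical, the requirement already implies
# a large set, consistent with the remark about twin cylinders at depth:
gas = bailout_gas_litres(120, 30)        # 40 * 13 bar * 3 min * 1.5 = 2340 L
print(gas, cylinder_litres_needed(gas))  # 2340.0 7.8
```

Because gas consumption scales linearly with absolute pressure, the same excursion at twice the depth roughly doubles the bailout requirement, which is why high-pressure twin sets become necessary.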
The splash zone is the region where the bell passes through the surface of the water and where wave action and platform movement can cause the bell to swing around, which can be uncomfortable and dangerous to the occupants. To limit this motion a bell cursor may be used. This is a device used to guide and control the motion of the bell through the air and the splash zone near the surface, where waves can move the bell significantly. It can either be a passive system that relies on additional ballast weight or an active system that uses a controlled drive system to provide vertical motion. The cursor has a cradle which locks onto the bell and moves vertically on rails to constrain lateral movement. The bell is released and locked onto the cursor in the relatively still water below the splash zone. [ 60 ] [ 115 ]
A bell stage is a rigid frame that may be fitted below a closed bell to ensure that even if the bell is lowered so far as to contact the clump weight or the seabed, there is enough space under the bell for the divers to get in and out through the bottom lock. If all the lifting arrangements fail, the divers must be able to shelter inside the bell while awaiting rescue, and must be able to get out if the rescue is to another bell when the bell is resting on the seabed. [ 114 ]
Each compartment of a hyperbaric system for human occupation has an independent externally mounted pressure gauge so that it is not possible to confuse which compartment pressure is being displayed. Where physically practicable, lock doors open towards the side where pressure is normally higher, so that a higher internal pressure will hold them closed and sealed. [ 116 ]
Medical and supply lock outer doors generally open outwards due to space constraints, and therefore are fitted with safety interlock systems which prevent them from being opened with internal pressure above atmospheric. This helps avoid the possibility of human error allowing them to be opened while the inner lock is not sealed, as the uncontrolled decompression that would ensue would probably kill the occupants, and possibly also the lock operator. [ 117 ]
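The interlock rule described above — an outward-opening lock door must not be releasable while the chamber is above atmospheric pressure — can be expressed as a simple guard condition. This is a hedged sketch, not a real control-system implementation; the differential threshold is an arbitrary example value.

```python
# Hypothetical sketch of a medical/supply lock door interlock: the door
# release is permitted only when internal pressure has equalised with the
# outside. The threshold is an arbitrary example, not a regulatory figure.

ATMOSPHERIC_BAR = 1.013

def door_release_permitted(internal_bar, external_bar=ATMOSPHERIC_BAR,
                           max_differential_bar=0.01):
    """True only when the pressure differential across the door is negligible."""
    return abs(internal_bar - external_bar) <= max_differential_bar

print(door_release_permitted(1.013))  # True  - equalised, safe to open
print(door_release_permitted(2.8))    # False - chamber pressurised, stay locked
```

A real system would implement this in hardware (a pressure-balanced latch or pilot-operated lock) rather than software alone, so that the guard cannot be bypassed by operator error.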
Internal diameter of hyperbaric living compartments and deck decompression chambers is constrained by codes of practice for reasonable comfort for the occupants. For emergency transfer chambers, there may be overriding logistical constraints on size and mass. [ citation needed ]
A hyperbaric stretcher is a lightweight pressure vessel for human occupancy (PVHO) designed to accommodate one person undergoing initial hyperbaric treatment during or while awaiting transport or transfer to a treatment chamber . The stretcher should accommodate most divers without being excessively claustrophobic, be conveniently portable by a reasonable number of bearers, should fit into the available space in the transport likely to be used, and fit through the entry opening of the treatment chamber or lock onto the chamber for transfer under pressure. It should be possible to see and communicate with the person in the chamber, and the occupant should be able to breathe oxygen which is vented to the exterior to keep a constant internal pressure and limit the fire hazard. Breathing gas supplies should also be portable, and it should be possible to disconnect them for a short period when maneuvering in tight spaces. [ 118 ]
A saturated diver who needs to be evacuated should be transported without a significant change in ambient pressure. Hyperbaric evacuation requires pressurised transportation equipment, and could be required in a range of situations. The pressure rating and locking mechanism of the evacuation chamber must be compatible with the saturation system it is to serve and the reception facility. This is because both transfers must be under pressure, and it may not be safe to start decompression during the evacuation. [ 119 ]
Access equipment is the gear needed to get into and out of the water. In most cases, it refers to diving from a floating platform, but also applies to shore dives where access requires equipment.
Diving stages and wet bells are open platforms used to lower the divers to the work site and to control the ascent and in-water decompression, and to provide safe and easy entry and exit from the water. Design must provide space for the working diver and possibly the bellman. They must be in positions where they are protected from impact during transit and prevented from falling out when above the water. The divers may be seated, but standing during transit is more common. [ 120 ]
A stage must have a way to guide the umbilical from the surface tending point to the diver so the diver can be sure of finding the right way back to the stage. This can be provided by having the diver exit the stage on the opposite side to boarding, with the umbilical passing through the frame, but this is not infallible in bad visibility, and a closed fairlead is more reliable. Running the umbilical via the stage may also be needed to ensure the diver cannot approach known hazards, such as the thrusters of a dynamically positioned vessel. [ 120 ]
A wet bell has an open-bottomed air space at the top, large enough for the diver and bellman's heads. This space can be used as a place of refuge in an emergency, where some breathing problems can be managed. The air space must be large enough for an unresponsive diver to be suspended by their harness with their head in the air space, as it may be necessary to remove an unresponsive diver's helmet or full-face mask to provide first aid. The bell is also provided with an on-board emergency gas supply, sufficient for any planned or reasonably foreseeable decompression, and a means of safely switching between surface and on-board gas supply. This necessitates an on-board gas distribution manifold and divers' umbilicals that are deployed from and stored on the bell, and someone to operate the panel and tend the working diver's excursion umbilical. The bellman does this, and also serves as standby diver. The buoyancy of the air space may have to be compensated by ballast, as the bell must be negatively buoyant during normal operation. [ 120 ]
For some applications, dive boat ladders that allow the diver to ascend without removing the fins are preferred. When there is a lot of relative motion between the diver and ladder, it can become difficult to safely remove fins, then get onto the ladder, and not lose the fins. A ladder that can be climbed with fins on the feet avoids this problem. A ladder that slopes at an angle of about 15° from the vertical reduces the load on the arms.
If a ladder must be climbed in full equipment, suitable handholds to brace the diver while climbing are necessary for safety. This also applies if the divers need to climb down a ladder wearing dive gear, and they may need to turn around at the top of the ladder. In the general case, the vessel will be moving in a seaway while the diver is boarding. [ 121 ]
A dive platform, or swim platform, is a near horizontal surface on a dive boat that gives more convenient access to the water than the deck. It may be large enough for several divers to use simultaneously, or just enough for a single diver. The platform may be fixed, folding, or arranged to lower divers into the water and lift them out again, in which case it is known as a diver lift. [ 122 ] Most dive platforms are mounted at the stern, usually on the transom, at a height a short distance above the waterline. They are easily flooded by a following sea, and are self-draining.
Fixed and folding platforms are generally provided with ladders which can be folded or lifted out of the water when not in use. They are also equipped with steps or ladders from the platform to the deck, while lifting platforms may be sufficiently immersible for the divers to swim directly over the platform and stand up to be lifted to a level where they can walk off onto the deck. Lifts are commonly mounted on the transom, [ 123 ] [ 122 ] or on the side of the boat. [ 124 ] Handrails are a valuable safety adjunct when using steps, ladders and lifts, when crossing or waiting on the platform, and when making adjustments to equipment, as the platform will often be moving when in use, and the divers will usually be encumbered by heavy and bulky diving equipment. [ 125 ] Barriers to protect occupants from pinch point hazards may be necessary when there are moving parts. [ 126 ] [ 127 ] The utility of a lift is enhanced if the diver can use it without having to remove any equipment in the water or on the platform, so an upper position level with the working deck and sufficient space to walk onto the deck fully kitted is preferable. [ 122 ]
Professional divers may be required to wear a harness suitable for lifting the diver out of the water in an emergency, and there will usually be an emergency recovery plan and the necessary extraction equipment and personnel available. Recreational divers are not usually required to make any special provisions for an emergency, but recreational diving service providers may have a duty of care to their customers to provide for reasonably foreseeable emergencies with practicable facilities. [ 128 ] There may be a regional or membership organisation standard or code of practice. Recovering an incapacitated diver from the water and providing first aid on the boat would usually be considered an expected level of care from a professional service provider.
Recreational divers are not required to wear a lifting-rated harness, so other plans should be in place. These often necessitate removing equipment from the diver in the water, with the attendant risk of losing it. Details of methods to recover a diver into a boat will vary depending on the geometry of the boat. [ 129 ] [ 130 ] Simply dragging a diver over the pontoon of an inflatable hull may work in many cases. Larger boats with higher freeboard may have lifting gear that can be used with a rescue sling .
Tools that are intended for use by divers should take into account the handicaps of the underwater environment on operator stability, mobility, and control, within the full range of conditions in which they are likely to be used. Buoyancy effects on tool and operator, water movement, and reduced sensory input can complicate underwater tool use. Use with gloves is common, and can be a problem when controls are small and clustered.
Lanyards and clipping points can prevent the loss of tools and equipment like cameras, lights and cutting tools in mid-water or poor visibility, but can increase entanglement risk. Carrying heavy tools can compromise the diver's ability to accurately control ascent and descent rates, so it is common practice for professional divers to have their tools delivered in a bag lowered from the surface, or to transport them in a basket on the stage or bell which transports the diver to the underwater workplace. Tools do not have to be carried inside the pressurised volume of a closed bell, so the basket or rack can be on the bell stage or clump weight .
Pockets for small accessories are common on jacket-style buoyancy compensators. Wing buoyancy compensators generally do not have pockets, as the wing is behind the diver and the harness is usually fairly minimal, but pockets can be added to the waistbelt if there is space. They are supported by the webbing at the top and may be strapped around the thigh to prevent flapping. Cargo pockets on the diving suit are more popular with technical divers, and may be glued to the front or side of the thighs of the suit, or attached in similar positions to wetsuit shorts or a tunic worn over the main suit. Occasionally a chest pocket or internal key pocket may be provided. Sidemount divers may use a butt-pack, a clip-on bag worn behind the diver below the harness and buoyancy compensator, that is unclipped and brought forward for access. Position, size, shape, closures, and accessibility are important for the function of carrying equipment, and possible interference with other equipment should also be considered.
Tool bags serve a similar purpose and are available in forms which can be clipped to the diver's harness in a position where access is relatively convenient. Tool bags are used by technical divers for similar purposes to pockets, and professional divers use them to carry small sets of relatively light tools and components in clip-on bags, and heavier tools and components in independent bags which are set down when not being used for the carrying function. Lifting bags of appropriate size may be used to support part of the weight of a bag, heavy tool or installation component.
Checklists for preparation of the dive and diving equipment are regarded as important safety tools, and are mandatory in some circumstances. There are several design factors which affect the effectiveness of checklists. [ 131 ]
The design of a checklist should fit the purpose of the list. If a checklist is perceived as a top-down means for the organisational hierarchy to control behaviour, it is more likely to be rejected and fail in its purpose. A checklist perceived as helping the operator to save time and reduce error is likely to be better accepted. This is more likely to happen when the user is involved in the development of the checklist. [ 132 ]
Rae et al. (2018) define safety clutter as "the accumulation and persistence of 'safety' work that does not contribute to operational safety", and state that "when 'safety' rules impose a significant and unnecessary burden on the performance of everyday activities, both work and safety suffer". [ 132 ]
An objective in checklist design is that it should promote a positive attitude towards the use of the checklist by the operators. For this to happen it must be realistic, convenient and not be regarded as a nuisance. A checklist should be designed to describe and facilitate a physical procedure that is accepted by the operators as necessary, effective, efficient and convenient. [ 131 ] | https://en.wikipedia.org/wiki/Human_factors_in_diving_equipment_design
Human Factors Integration (HFI) is the process adopted by a number of key industries (notably defence and hazardous industries like oil & gas) in Europe to integrate human factors and ergonomics into the systems engineering process. Although each industry has a slightly different domain, the underlying approach is the same.
In essence HFI tries to reconcile the top down nature of system engineering with the iterative nature of a user centred design approach (e.g. ISO 6385 or ISO 9241-210 [ note 1 ] ). It often does this by creating a Human Factors Integration Plan (HFIP) that sits alongside the system development plan . The purpose of the HFIP is to define how the Human Factors Engineering activities necessary for the successful delivery of a particular system will be conducted.
It establishes the guiding principles to be followed by the project to implement the best-practice Human Factors methods. As well as the principles involved, the Plan normally describes the organisation, processes and controls necessary over the entire life cycle of the system from the concept phase through to decommissioning.
HFI undertakes this by conducting a formal process that identifies and reconciles human related issues. These issues are split for convenience into domains. The seven domains defined by the US Army under its MANPRINT [ 1 ] programme are:
Manpower - The number of military and civilian personnel required and potentially available to operate, maintain, sustain and provide training for systems
Personnel - The cognitive and physical capabilities required to be able to train for, operate, maintain and sustain systems.
Training - The instruction or education, and on-the-job or unit training required to provide personnel their essential job skills, knowledge, values and attributes.
Human Factors Engineering - The integration of human characteristics into system definition, design, development, and evaluation to optimise human-machine performance under operational conditions.
Health Hazard Assessment - Short or long term hazards to health occurring as a result of normal operation of the system.
System safety - Safety risks occurring when the system is functioning in an abnormal manner.
Soldier Survivability - The characteristics of a system that can reduce fratricide, detectability and probability of being attacked and minimize system damage, soldier injury and cognitive and physical fatigue.
The UK Ministry of Defence (MoD) adopted a similar HFI approach to MANPRINT in the early 1990s, but excluded Soldier Survivability. [ 2 ] Subsequently the MoD added a seventh 'Social & Organisational' domain. [ 3 ] Some industries also include habitability as a separate domain. [ 4 ]
The HFI plan scope defines the relationship between all the activities and the Human Factors domains and provides a systematic approach to ensure that: | https://en.wikipedia.org/wiki/Human_factors_integration |
Human genetic enhancement or human genetic engineering refers to human enhancement by means of a genetic modification . This could be done in order to cure diseases ( gene therapy ), prevent the possibility of getting a particular disease [ 1 ] (similarly to vaccines), to improve athlete performance in sporting events ( gene doping ), or to change physical appearance, metabolism, and even improve physical capabilities and mental faculties such as memory and intelligence.
These genetic enhancements may or may not be done in such a way that the change is heritable (which has raised concerns within the scientific community). [ 2 ]
Genetics is the study of genes and inherited traits, and while ongoing advancements in this field have improved healthcare at multiple levels, ethical considerations have become increasingly crucial alongside them. Genetic engineering has always been a topic of moral debate among bioethicists. [ 3 ] Even though the technological advancements in this field present exciting prospects for biomedical improvement, they also prompt the need for ethical, societal, and practical assessments to understand their impact on human biology, evolution, and the environment. [ 4 ] Genetic testing , genetic engineering , and stem cell research are often discussed together due to the interrelated moral arguments surrounding these topics. The distinction between repairing genes and enhancing genes is central to many moral debates surrounding genetic enhancement, because some argue that repairing genes is morally permissible but that genetic enhancement is not, due to its potential to lead to social injustice through discriminatory eugenics initiatives. [ 5 ]
Moral questions related to genetic testing are often related to duty to warn family members if an inherited disorder is discovered, how physicians should navigate patient autonomy and confidentiality with regard to genetic testing, the ethics of genetic discrimination, and the moral permissibility of using genetic testing to avoid causing seriously disabled persons to exist, such as through selective abortion. [ 5 ] [ 6 ] [ 7 ]
The responsibility of public health professionals is to determine potential exposures and suggest testing for communicable diseases that require reporting. Public health professionals may encounter disclosure concerns if the extension of obligatory screening results in genetic abnormalities being classified as reportable conditions. [ 8 ] Genetic data is personal and closely linked to a person's identity. Confidentiality concerns extend beyond work, health care, and insurance coverage: a family's entire set of genetic test results can be affected. If a condition is genetically dominant or carried, an affected individual's parents, children, siblings, and even extended relatives may also be affected. Moreover, a person's decisions could change their entire life depending on the outcome of a genetic test. Results of genetic testing may need to be disclosed in all facets of a person's life. [ 8 ] [ 9 ]
Non-invasive prenatal testing (NIPT) can accurately determine the sex of the fetus at an early stage of gestation, raising concerns about the potential facilitation of sex-selective termination of pregnancy (TOP) due to its ease, timing, and precision. Even though ultrasound can also determine fetal sex, NIPT is being explored due to its capability to accurately identify the fetus's sex at an early stage in pregnancy, with increasing precision as early as 7 weeks' gestation. This timeframe precedes the typical timing for other sex determination techniques, such as ultrasound or chorionic villus sampling (CVS). [ 10 ] [ 11 ] The high early accuracy of NIPT reduces the uncertainty associated with these other methods, leading to more informed decisions and eliminating the risk of inaccurate results that could influence decision-making regarding sex-selective TOP. Additionally, NIPT enables sex-selective TOP in the first trimester, which is more practical, and allows pregnant women to postpone maternal-fetal bonding. These considerations may significantly facilitate the pursuit of sex-selective TOP when NIPT is utilized. Therefore, it is crucial to examine these ethical concerns within the framework of NIPT adoption. [ 12 ]
Ethical issues related to gene therapy and human genetic enhancement concern the medical risks and benefits of the therapy, the duty to use the procedures to prevent suffering, reproductive freedom in genetic choices, and the morality of practicing positive genetics, which includes attempts to improve normal functions. [ 5 ]
In every genetics-based study conducted on humans, research must be carried out in accordance with ethics committee approval, ethical and legal norms, and human morality. CAR T cell therapy, an emerging treatment, aims to change the genetics of T cells, transforming immune system cells that do not recognize cancer into cells that recognize and fight it. It uses gene editing based on CRISPR , clustered regularly interspaced short palindromic repeats. [ 13 ]
All research involving human subjects in healthcare settings must be registered in a public database before the recruitment of the first trial participant. The informed consent statement should include adequate information about possible conflicts of interest, the expected benefits of the study, its potential risks, and other issues related to the discomfort it may involve. [ 14 ]
Technological advancements play an integral role in new forms of human enhancement. While phenotypic and somatic interventions for human enhancement provide noteworthy ethical and sociological dilemmas, germline heritable genetic intervention necessitates even more comprehensive deliberations at the individual and societal levels. [ 15 ]
Moral judgments are empirically based and entail evaluating prospective risk-benefit ratios particularly in the field of biomedicine. The technology of CRISPR genome editing raises ethical questions for several reasons. To be more specific, concerns exist regarding the capabilities and technological constraints of CRISPR technology. Furthermore, the long-term effects of the altered organisms and the possibility of the edited genes being passed down to succeeding generations and having unanticipated effects are two further issues to be concerned about. Decision-making on morality becomes more difficult when uncertainty from these circumstances prevents appropriate risk/benefit assessments. [ 16 ]
The potential benefits of revolutionary tools like CRISPR are far-reaching. For example, because it can be applied directly in the embryo, CRISPR/Cas9 reduces the time required to modify target genes compared to gene targeting technologies that rely on the use of embryonic stem (ES) cells. Bioinformatics tools developed to identify the optimal sequences for designing guide RNAs, together with optimization of experimental conditions, have provided very robust procedures that help ensure the successful introduction of the desired mutation. [ 17 ] Major benefits are likely to develop from the use of safe and effective human germline genetic modification (HGGM), making a precautionary stance against HGGM unethical. [ 18 ]
Going forward, many people support the establishment of an organization that would provide guidance on how best to control the ethical complexities mentioned above. Recently, a group of scientists founded the Association for Responsible Research and Innovation in Genome Editing (ARRIGE) to study and provide guidance on the ethical use of genome editing. [ 19 ] [ 20 ]
In addition, Jasanoff and Hurlbut have recently advocated for the establishment and international development of an interdisciplinary "global observatory for gene regulation". [ 21 ]
Researchers proposed that debates in gene editing should not be controlled by the scientific community. The network is envisioned to focus on gathering information from dispersed sources, bringing to the fore perspectives that are often overlooked, and fostering exchange across disciplinary and cultural divides. [ 22 ]
Interventions aimed at enhancing human traits from a genetic perspective are contingent upon the understanding of genetic engineering, and comprehending the outcomes of these interventions requires an understanding of the interactions between humans and other living beings. Therefore, the regulation of genetic engineering underscores the significance of examining the relationship between humans and the environment. [ 15 ]
To address the ethical challenges and uncertainties arising from genetic advancements, the development of comprehensive guidelines based on universal principles has been emphasized as essential. The importance of adopting a cautious approach to safeguard fundamental values such as autonomy, global well-being, and individual dignity has been elucidated when overcoming these challenges. [ 23 ]
When contemplating genetic enhancement, genetic technologies should be approached from a broad perspective, using a definition that encompasses not only direct genetic manipulation but also indirect technologies such as biosynthetic drugs. It has been emphasized that attention should be given to expectations that can shape the marketing and availability of these technologies, anticipating the allure of new treatments. These expectations have been noted to potentially signify the encouragement of appropriate public policies and effective professional regulations. [ 24 ]
Clinical stem cell research must be conducted in accordance with ethical values. This entails full respect for ethical principles, including accurate assessment of the balance between risks and benefits, as well as obtaining informed and voluntary participant consent. The design of research should be strengthened, scientific and ethical reviews should be effectively coordinated, assurance should be provided that participants understand the fundamental features of the research, and full compliance with additional ethical requirements, such as the disclosure of negative findings, should be ensured. [ 25 ]
It has been emphasized that clinicians should understand the role of genomic medicine in accurately diagnosing patients and guiding treatment decisions. Detailed clinical information and expert opinions are crucial for the accurate interpretation of genetic variants. While personalized medicine applications are exciting, the impact and evidence base of each intervention should be carefully evaluated. The human genome contains millions of genetic variants, so caution should be exercised and expert opinions sought when analyzing genomic results. [ 26 ]
With the discovery of various types of immune-related disorders, there is a need for diversification in prevention and treatment. Developments in the field of gene therapy are being studied for inclusion in the scope of this treatment, but more research is needed to increase the positive results and minimize the negative effects of gene therapy applications. [ 27 ] The CRISPR/Cas9 system has also been designed as a gene editing technology for the treatment of HIV-1/AIDS. CRISPR/Cas9 has been developed as the latest gene editing technique that allows the insertion, deletion and modification of DNA sequences, and provides advantages in the disruption of the latent HIV-1 virus. However, the production of some vectors for HIV-1-infected cells is still limited and further studies are needed. [ 28 ] Being an HIV carrier also plays an important role in the incidence of cervical cancer. While there are many personal and biological factors that contribute to the development of cervical cancer, HIV carriage is correlated with its occurrence. However, long-term research on the effectiveness of preventive treatment is still ongoing. Early education, accessible worldwide, will play an important role in prevention. [ 29 ] When medications and treatment methods are consistently adhered to, safe sexual practices are maintained, and healthy lifestyle changes are implemented, the risk of transmission is reduced in most people living with HIV. Consistently implemented proactive prevention strategies can significantly reduce the incidence of HIV infections. Education on safe sex practices and risk-reducing changes for everyone, whether they are HIV carriers or not, is critical to preventing the disease. [ 30 ] However, controlling the HIV epidemic and eliminating the stigma associated with the disease may not be possible through a general AIDS awareness campaign alone.
It is observed that HIV awareness, especially among individuals in low socio-economic regions, is considerably lower than in the general population. Although there is no clear-cut solution to prevent the transmission of HIV and the spread of the disease through sexual contact, a combination of preventive measures can help to control the spread of HIV. Increasing knowledge and awareness plays an important role in preventing the spread of HIV by contributing to better behavioral decisions informed by risk perception. [ 31 ] Genetics plays a pivotal role in disease prevention, offering insights into an individual's predisposition to certain conditions and paving the way for personalized strategies to mitigate disease risk. The burgeoning field of genetic testing and analysis has provided valuable tools for identifying genetic markers associated with various diseases, allowing proactive measures to be taken in disease prevention. [ 32 ] Genetic testing can unveil an individual's genetic susceptibility to certain diseases, enabling early detection and intervention, which can be crucial in heritable cancers such as breast [ 33 ] [ 34 ] and ovarian cancer. [ 35 ] [ 36 ] Genetic information can also inform the development of precision medicine approaches and targeted therapies for disease prevention in general. By identifying genetic factors contributing to disease susceptibility, such as specific gene mutations associated with autoimmune disorders, researchers can develop targeted therapies to modulate the immune response and prevent the onset or progression of these conditions. [ 37 ] [ 38 ] [ 39 ]
There are many types of neurodegenerative diseases. Alzheimer's disease is one of the most common of these and affects millions of people worldwide. CRISPR-Cas9 techniques could be used to help prevent Alzheimer's disease: for example, they have the potential to correct autosomal dominant mutations in affected neurons, restoring the associated electrophysiological deficits and decreasing Aβ peptide levels. [ 40 ] Amyotrophic lateral sclerosis (ALS) is another highly lethal neurodegenerative disease, and CRISPR-Cas9 technology offers a simple and effective way of correcting specific point mutations associated with ALS. Using this technology, Chen and colleagues also found improvements in major indicators of ALS, such as decreases in RNA foci and polypeptides, and mitigation of haploinsufficiency. [ 41 ] [ 40 ]
Some individuals experience immunocompromise , a condition in which their immune systems are weakened and less effective at defending against various diseases, including but not limited to influenza . This susceptibility to infections can be attributed to a range of factors, including genetic flaws and genetic diseases such as Severe Combined Immunodeficiency (SCID). Some gene therapies have already been developed, or are being developed, to correct these genetic flaws and diseases, thereby making these people less susceptible to catching additional diseases such as influenza. [ 42 ] These genetic flaws and diseases can significantly impact the body's ability to mount an effective immune response, leaving individuals vulnerable to a wide array of pathogens. However, advancements in gene therapy research and development have shown promising potential in addressing these genetic deficiencies, though not without associated challenges. [ 43 ] [ 44 ]
CRISPR technology is a promising tool not only for genetic disease corrections but also for the prevention of viral and bacterial infections. Utilizing CRISPR–Cas therapies, researchers have targeted viral infections like HSV-1, EBV, HIV-1, HBV, HPV, and HCV, with ongoing clinical trials for an HIV-clearing strategy named EBT-101 . Additionally, CRISPR has demonstrated efficacy in preventing viral infections such as IAV and SARS-CoV-2 by targeting viral RNA genomes with Cas13d, and it has been used to sensitize antibiotic-resistant S. aureus to treatment through Cas9 delivered via bacteriophages. [ 45 ]
Advancements in gene editing and gene therapy hold promise for disease prevention by addressing genetic factors associated with certain conditions. Techniques like CRISPR-Cas9 offer the potential to correct genetic mutations associated with hereditary diseases, thereby preventing their manifestation in future generations and reducing disease burden. In November 2018, the gene-edited twins Lulu and Nana were created. [ 46 ] Using clustered regularly interspaced short palindromic repeat (CRISPR)-Cas9, a gene editing technique, researchers disabled a gene called CCR5 in the embryos, aiming to close the protein doorway that allows HIV to enter a cell and thereby make the subjects immune to HIV.
Despite the existing limitations of CRISPR technology, advancements in the field continue to reduce them. Researchers have developed a new, gentler gene editing method for embryos using nanoparticles and peptide nucleic acids. By delivering editing tools without harsh injections, the method successfully corrected genes in mice without harming development. While ethical and technical questions remain, this study paves the way for potential future use in improving livestock and research animals, and perhaps even in human embryos for disease prevention or therapy. [ 47 ]
Informing prospective parents about their susceptibility to genetic diseases is crucial. Pre-implantation genetic diagnosis also holds significance for disease prevention by inheritance, as whole genome amplification and analysis help select a healthy embryo for implantation, preventing the transmission of a fatal metabolic disorder in the family. [ 48 ]
Genetic human enhancement emerges as a potential frontier in disease prevention by precisely targeting genetic predispositions to various illnesses. Through techniques like CRISPR, specific genes associated with diseases can be edited or modified, offering the prospect of reducing the hereditary risk of conditions such as cancer, cardiovascular disorders, or neurodegenerative diseases. This approach not only holds the potential to break the cycle of certain genetic disorders but also to influence the health trajectories of future generations.
Furthermore, genetic enhancement can extend its impact by focusing on fortifying the immune system and optimizing overall health parameters. By enhancing immune responses and fine-tuning genetic factors related to general well-being, the susceptibility to infectious diseases can be minimized. This proactive approach to health may contribute to a population less prone to ailments and more resilient in the face of environmental challenges.
However, the ethical dimensions of genetic manipulation cannot be overstated. Striking a delicate balance between scientific progress and ethical considerations is imperative. Robust regulatory frameworks and transparent guidelines are crucial to ensuring that genetic human enhancement is utilized responsibly, avoiding unintended consequences or potential misuse. As the field advances, the integration of ethical, legal, and social perspectives becomes paramount to harness the full potential of genetic human enhancement for disease prevention while respecting individual rights and societal values. [ 49 ]
Overall, the technology requires improvements in effectiveness, precision, and applications. Immunogenicity, off-target effects, mutations, delivery systems, and ethical issues are the main challenges that CRISPR technology faces. The safety concerns, ethical considerations, and potential for misuse underscore the need for careful and responsible exploration of these technologies. [ 50 ] CRISPR-Cas9 technology offers much for disease prevention and treatment, yet its long-term effects, especially those affecting the next generations, should be investigated rigorously.
Modification of human genes in order to treat genetic diseases is referred to as gene therapy . Gene therapy is a medical procedure that involves inserting genetic material into a patient's cells to repair or fix a malfunctioning gene in order to treat hereditary illnesses. Between 1989 and December 2018, over 2,900 clinical trials of gene therapies were conducted, with more than half of them in phase I . [ 51 ] Since that time, many gene therapy based drugs became available, such as Zolgensma and Patisiran . Most of these approaches utilize viral vectors , such as adeno-associated viruses (AAVs), adenoviruses (AV) and lentiviruses (LV), for inserting or replacing transgenes in vivo or ex vivo . [ 52 ] [ 53 ]
In 2023, nanoparticles that act similarly to viral vectors were created. These nanoparticles, called bioorthogonal engineered virus-like recombinant biosomes , display strong and rapid binding capabilities to LDL receptors on cell surfaces, allowing them to enter cells efficiently and deliver genes to specific target areas, such as tumor and arthritic tissues . [ 54 ]
RNA interference -based agents, such as zilebesiran , contain siRNA which binds with mRNA of the target cells, modifying gene expression. [ 55 ]
Many diseases are complex and cannot be effectively treated by simple coding sequence-targeting strategies. CRISPR/Cas9 is one technology that creates targeted double-strand breaks in the human genome, modifying genes and providing a relatively quick route to treating genetic disorders. Gene treatment employing the CRISPR/Cas genome editing method is known as CRISPR/Cas-based gene therapy. Mammalian cells can be genetically modified using the straightforward, affordable, and highly specific CRISPR/Cas method. It enables single-base exchanges, homology-directed repair, and non-homologous end joining. The primary application is targeted gene knockouts, involving the disruption of coding sequences to silence deleterious proteins. Since the development of the CRISPR-Cas9 gene editing method between 2010 and 2012, scientists have been able to alter genes by making specific breaks in their DNA. This technology has many uses, including genome editing and molecular diagnosis.
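The "specific breaks" described above depend on Cas9 recognizing a short protospacer-adjacent motif (PAM; "NGG" for the commonly used SpCas9) immediately 3' of a roughly 20-nucleotide target sequence. A minimal sketch of scanning one DNA strand for candidate sites follows; the function name and the approximate cut-site offset are illustrative simplifications, and real guide design also scores off-target matches and scans the reverse strand:

```python
def find_cas9_targets(seq, guide_len=20):
    """Scan one DNA strand for candidate SpCas9 target sites.

    SpCas9 requires an 'NGG' PAM immediately 3' of a ~20-nt
    protospacer, and cuts about 3 bp upstream of the PAM.
    Returns (protospacer, pam, approximate_cut_index) tuples.
    """
    seq = seq.upper()
    hits = []
    for i in range(guide_len, len(seq) - 2):
        pam = seq[i:i + 3]
        if pam[1:] == "GG":  # 'N' may be any base
            protospacer = seq[i - guide_len:i]
            hits.append((protospacer, pam, i - 3))
    return hits
```

For example, a sequence of twenty adenines followed by "TGG" yields exactly one candidate site whose protospacer is the poly-A run.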
Genetic engineering has undergone a revolution thanks to CRISPR/Cas technology, which provides a flexible framework for building disease models in larger animals. This breakthrough has created new opportunities to evaluate possible therapeutic strategies and understand the genetic foundations of different diseases. But in order to fully realize the promise of CRISPR/Cas-based gene therapy, a number of obstacles must be overcome. Improving the editing precision and efficiency of CRISPR/Cas systems is one of the main issues. Although this technology makes precise gene editing possible, minimizing off-target consequences remains a major challenge: unintentional off-target modifications may have unanticipated effects or complications. Using enhanced guide RNA designs, updated Cas proteins, and cutting-edge bioinformatics tools, researchers are actively attempting to improve the specificity and reduce the off-target effects of CRISPR/Cas procedures. Moreover, the efficient and specific delivery of CRISPR components to target tissues presents another obstacle. Delivery systems must be developed or optimized to ensure the CRISPR machinery reaches the intended cells or organs efficiently and safely. This includes exploring various delivery methods, such as viral vectors, nanoparticles, or lipid-based carriers, to transport CRISPR components accurately to the target tissues while minimizing potential toxicity or immune responses.
Despite recent progress, further research is needed to develop safe and effective CRISPR therapies. CRISPR/Cas9 technology is not yet in routine clinical use; however, there are ongoing clinical trials of its use in treating various disorders, including sickle cell disease, human papillomavirus (HPV)-related cervical cancer, COVID-19 respiratory infection, renal cell carcinoma, and multiple myeloma. [ 56 ]
Gene therapy has emerged as a promising field in medical science , aiming to address and treat various genetic diseases by modifying human genes . The process involves the introduction of genetic material into a patient's cells, with the primary goal of repairing or correcting malfunctioning genes that contribute to hereditary illnesses . This innovative medical procedure has seen significant advancements and a growing number of clinical trials since its inception.
Between 1989 and December 2018, more than 2,900 clinical trials of gene therapies were conducted, with over half of them reaching the phase I stage. Over the years, several gene therapy-based drugs have been developed and made available to the public, marking important milestones in the treatment of genetic disorders . Examples include Zolgensma and Patisiran, which have demonstrated efficacy in addressing specific genetic conditions.
The majority of gene therapy approaches leverage viral vectors, such as adeno-associated viruses (AAVs), adenoviruses (AV), and lentiviruses (LV), to facilitate the insertion or replacement of transgenes either in vivo or ex vivo. These vectors serve as delivery vehicles for introducing the therapeutic genetic material into the patient's cells.
A notable development in 2023 was the creation of nanoparticles designed to function similarly to viral vectors. These bioorthogonal engineered virus-like recombinant biosomes represent a novel approach to gene delivery. They exhibit robust and rapid binding capabilities to low-density lipoprotein (LDL) receptors on cell surfaces, enhancing their efficiency in entering cells. This capability enables the targeted delivery of genes to specific areas, such as tumor and arthritic tissues. This advancement holds the potential to enhance the precision and effectiveness of gene therapy, minimizing off-target effects and improving overall therapeutic outcomes.
In addition to viral vector and nanoparticle-based approaches, RNA interference (RNAi) has emerged as another strategy in gene therapy. Agents like zilebesiran utilize small interfering RNA (siRNA) that binds with the messenger RNA ( mRNA ) of target cells, effectively modifying gene expression. This RNA interference-based approach provides a targeted and specific method for regulating gene activity, presenting further opportunities for treating genetic disorders .
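The base pairing at the heart of RNA interference can be sketched in a few lines: the siRNA guide strand is simply the reverse complement of the mRNA segment it silences. The function name and target sequence below are illustrative, not drawn from any actual siRNA design pipeline:

```python
def sirna_guide_for(mrna_target):
    """Return the antisense (guide) strand for an mRNA target site.

    siRNA silencing relies on Watson-Crick base pairing, so the
    guide strand is the reverse complement of the target segment
    (written 5'->3', with U pairing against A in RNA).
    """
    pair = {"A": "U", "U": "A", "G": "C", "C": "G"}
    return "".join(pair[b] for b in reversed(mrna_target.upper()))
```

Real therapeutic siRNA design additionally weighs strand selection, chemical modifications, and off-target seed matches; this sketch only shows the complementarity rule itself.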
The continuous evolution of gene therapy techniques , along with the development of innovative delivery systems and therapeutic agents, underscores the ongoing commitment of the scientific and medical communities to advance the field and provide effective treatments for a wide range of genetic diseases. [ 57 ]
Athletes might adopt gene therapy technologies to improve their performance. [ 58 ] Gene doping is not known to occur, but multiple gene therapies may have such effects. Kayser et al. argue that gene doping could level the playing field if all athletes receive equal access. Critics claim that any therapeutic intervention for non-therapeutic/enhancement purposes compromises the ethical foundations of medicine and sports. [ 59 ] Therefore, this subfield of genetic engineering, commonly referred to in sports as gene doping, has been prohibited due to its potential risks. [ 60 ] The underlying technologies were developed primarily to aid individuals with medical conditions; however, some athletes, even when cognizant of the associated health risks, resort to them in pursuit of enhanced athletic performance. Gene doping in sports has been prohibited since 2003, pursuant to a decision by the World Anti-Doping Agency (WADA). [ 61 ] A study conducted in 2011 underscored the significance of addressing issues related to gene doping and highlighted the importance of promptly understanding how gene doping in sports and exercise medicine could impact healthcare services, given its potential to enhance athletic performance. The article explains, according to WADA, how gene doping poses a threat to the fairness of sports, and delves into health concerns that may arise from the use of gene doping solely to enhance sports performance. [ 62 ] The misuse of gene doping to enhance athletic performance constitutes an unethical practice and entails significant health risks, including but not limited to cancer, viral infections, myocardial infarction, skeletal damage, and autoimmune complications.
In addition, gene doping may give rise to various health issues, such as excessive muscle development leading to conditions like hypertrophic cardiomyopathy, and may render bones and tendons more susceptible to injury. [ 63 ] Several genes, such as EPO, IGF1, VEGFA, GH, HIFs, PPARD, PCK1, and myostatin, are prominent targets for gene doping. In particular, athletes employ substances such as antibodies against myostatin or myostatin blockers; these substances contribute to increased muscle mass and enhanced strength. However, the genes commonly targeted for gene doping in humans may lead to complications such as excessive muscle growth, which can adversely impact the cardiovascular system and increase the likelihood of injuries. [ 64 ] Due to insufficient awareness of these risks, numerous athletes resort to gene doping for purposes divergent from its genuine intent. In the interest of athlete health, sports ethics and fair play, scientists have developed various technologies for the detection of gene doping. Although the technology used in the early years was not reliable, more extensive research has produced better, more successful techniques for uncovering instances of gene doping. In the beginning, scientists resorted to techniques such as PCR in its various forms. This was not successful because such methods rely on detecting exon-exon junctions in the DNA, which lacks precision: results can be easily tampered with using misleading primers, and gene doping would go undetected. [ 65 ] With the emergence of new technologies, more recent studies have utilized Next Generation Sequencing (NGS) as a method of detection. With the help of bioinformatics, this technology surpassed previous sequencing techniques in its in-depth analysis of DNA makeup.
Next Generation Sequencing (NGS) uses an elaborate method of analyzing a sample sequence and comparing it to a pre-existing reference sequence from a gene database. This way, primer tampering is not possible, as the detection occurs at the genomic level. Using bioinformatic visualization tools, data can be easily read, and sequences that do not align with the reference sequence can be highlighted. [ 66 ] [ 67 ] Most recently, a high-efficiency gene doping analysis method developed in 2023, leveraging cutting-edge technology, is HiGDA (High-efficiency Gene Doping Analysis), which employs CRISPR/deadCas9 technology. [ 68 ]
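The transgene signature that such detection methods exploit can be illustrated simply: a doping construct is typically intron-free cDNA, so a sequencing read in which the end of one exon is joined directly to the start of the next, with no intervening intron, is suspicious. The toy sequences and function name below are invented for illustration:

```python
def has_intronless_junction(read, exon1, exon2, junction_len=8):
    """Flag reads that span an exon-exon junction with no intron.

    In genomic DNA the exons are separated by an intron, so a read
    containing the last bases of exon1 immediately followed by the
    first bases of exon2 suggests an intron-free transgene (cDNA).
    """
    junction = exon1[-junction_len:] + exon2[:junction_len]
    return junction in read
```

A read copied from a cDNA transgene contains the fused junction, while a read copied from the natural genomic locus does not. Real pipelines align reads against a full reference genome rather than matching a single junction string.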
The ethical issues concerning gene doping have been present long before its discovery. Although gene doping is relatively new, the concept of genetic enhancement of any kind has always been subject to ethical concerns. Even when used in a therapeutic manner, gene therapy poses many risks due to its unpredictability among other reasons. Factors other than health issues have raised ethical questions as well. These are mostly concerned with the hereditary factor of these therapies, where gene editing in some cases can be transmitted to the next generation with higher rates of unpredictability and risks in outcomes. [ 69 ] For this reason, non-therapeutic application of gene therapy can be seen as a riskier approach to a non-medical concern. [ 70 ]
As one study observes, human beings have always been in competition throughout history. While in the past warriors competed to be stronger in war, today there is competition to succeed in every field; this psychology is a phenomenon that has existed throughout human history. It is known that, although an athlete may have genetic potential, he cannot become a champion without the necessary training and lifestyle. However, as competition increases, both greater physical training and greater mental performance are needed. Just as warriors in history used herbal cures to appear stronger and more aggressive, today some athletes resort to doping methods to increase their performance. This, however, is against sports ethics because it does not comply with the morality and spirit of the game. [ 71 ]
One negative effect is the risk of cancer; a potential positive effect is protection against certain pathological conditions. Altering genes could lead to unintended and unpredictable changes in the body, potentially causing unforeseen health issues. Further consequences of gene doping in sports include the constant fight against drugs not approved by the World Anti-Doping Agency and the unfairness between athletes who take drugs and those who do not. The long-term health consequences of gene doping may not be fully understood, and athletes may face health problems later in life. [ 72 ]
Other hypothetical gene therapies could include changes to physical appearance, metabolism, mental faculties such as memory and intelligence, and well-being (by increasing resistance to depression or relieving chronic pain , for example). [ 73 ] [ 74 ]
The exploration of challenges in understanding the effects of gene alterations on phenotypes, particularly within natural genetic diversity, is highlighted. Emphasis is placed on the potential of systems biology and advancements in genotyping / phenotyping technologies for studying complex traits. Despite progress, persistent difficulties in predicting the influence of gene alterations on phenotypic changes are acknowledged, emphasizing the ongoing need for research in this area. [ 75 ]
Some congenital disorders (such as those affecting the musculoskeletal system ) may affect physical appearance, and in some cases may also cause physical discomfort. Modifying the genes causing these congenital diseases (in those diagnosed with mutations of a gene known to cause them) may prevent this.
- Phenotypic Impacts of CRISPR-Cas9 Editing in Mice Targeting the Tyr Gene:
In a comprehensive CRISPR - Cas9 study on gene editing, the Tyr gene in mice was targeted, seeking to instigate genetic alterations. The analysis found no off-target effects across 42 subjects, observing modifications exclusively at the intended Tyr locus. Though specifics were not explicitly discussed, these alterations may potentially influence non-defined aspects, such as coat color, emphasizing the broader potential of gene editing in inducing diverse phenotype changes. [ 76 ]
Also changes in the myostatin gene [ 77 ] may alter appearance.
Significant quantitative genetic discoveries were made in the 1970s and 1980s, going beyond estimating heritability. However, controversies such as those surrounding The Bell Curve resurfaced, and by the 1990s, scientists recognized the importance of genetics for behavioral traits such as intelligence . The American Psychological Association 's Centennial Conference in 1992 chose behavioral genetics as a theme for the past, present, and future of psychology . As quantitative genetic discoveries slowed, a synthesis with molecular genetics produced the DNA revolution and behavioral genomics . Individual behavioral differences can now be predicted early thanks to the behavioral sciences' DNA revolution. The first law of behavioral genetics was established in 1978, after a review of thirty twin studies revealed that the average heritability estimate for intelligence was 46%. [ 78 ] Behavior may also be modified by genetic intervention. [ 79 ] Some people may be aggressive or selfish and may not be able to function well in society. Mutations in GLI3 and other patterning genes have been linked to the etiology of hypothalamic hamartoma (HH), according to genetic research. Approximately 50%-80% of children with HH have episodes of acute rage and violence, and the majority of patients have externalizing problems. Behavioral instability and intellectual disability may precede epilepsy. [ 80 ] There is currently ongoing research on genes that are or may be (in part) responsible for selfishness (e.g. the ruthlessness gene ), aggression (e.g. the warrior gene ), and altruism (e.g. OXTR , CD38 , COMT , DRD4 , DRD5 , IGF2 , GABRB2 [ 81 ] )
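Twin-study heritability estimates like the 46% figure cited above are classically computed with Falconer's formula, which doubles the difference between identical (MZ) and fraternal (DZ) twin correlations. A minimal sketch follows; the correlation values in the example are illustrative, not taken from the 1978 review:

```python
def falconer_heritability(r_mz, r_dz):
    """Estimate broad heritability from twin correlations.

    Falconer's formula: h^2 = 2 * (r_MZ - r_DZ), where r_MZ and
    r_DZ are trait correlations for identical and fraternal twin
    pairs. MZ twins share ~100% of genes and DZ twins ~50%, so
    doubling the gap attributes it to the extra genetic sharing.
    """
    return 2.0 * (r_mz - r_dz)
```

With illustrative correlations of 0.85 for MZ pairs and 0.62 for DZ pairs, the formula yields a heritability of about 0.46.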
There has been a great anticipation of gene editing technology to modify genes and regulate our biology since the invention of recombinant DNA technology. These expectations, however, have mostly gone unmet. Evaluation of the appropriate uses of germline interventions in reproductive medicine should not be based on concerns about enhancement or eugenics, despite the fact that gene editing research has advanced significantly toward clinical application. [ 82 ]
Cystic fibrosis (CF) is a hereditary disease caused by mutations in the cystic fibrosis transmembrane conductance regulator (CFTR) gene. While 90% of CF patients can be treated, current treatments are not curative and do not address the entire spectrum of CFTR mutations. Therefore, a comprehensive, long-term therapy is needed to treat all CF patients once and for all. CRISPR/Cas gene editing technologies are being developed as a viable platform for genetic treatment. [ 83 ] However, the difficulties of delivering enough of the CFTR gene and sustaining its expression in the lungs have hampered gene therapy's efficacy. Recent technical breakthroughs, such as viral and non-viral vector delivery, alternative nucleic acid technologies, and new technologies like mRNA and CRISPR gene editing , have taken advantage of our understanding of CF biology and the airway epithelium. [ 84 ]
Human gene transfer has held the promise of a lasting remedy for hereditary illnesses such as cystic fibrosis (CF) since its conception. The emergence of sophisticated technologies that allow site-specific alteration with programmable nucleases has greatly revitalized the field of gene therapy . [ 85 ] There is also ongoing research into the hypothetical treatment of psychiatric disorders by means of gene therapy. It is assumed that, with gene-transfer techniques, it is possible (in experimental settings using animal models) to alter CNS gene expression and thereby the intrinsic generation of molecules involved in neural plasticity and neural regeneration, ultimately modifying behaviour. [ 86 ]
In recent years, it has become possible to modify ethanol intake in animal models. Specifically, targeting the expression of the aldehyde dehydrogenase gene (ALDH2) led to significantly altered alcohol-drinking behaviour. [ 87 ] Reduction of p11, a serotonin receptor binding protein, in the nucleus accumbens led to depression-like behaviour in rodents, while restoration of p11 gene expression in this anatomical area reversed this behaviour. [ 73 ]
Recently, it was also shown that the gene transfer of CBP (CREB (c-AMP response element binding protein) binding protein) improves cognitive deficits in an animal model of Alzheimer's dementia via increasing the expression of BDNF (brain-derived neurotrophic factor). [ 88 ] The same authors were also able to show in this study that accumulation of amyloid-β (Aβ) interfered with CREB activity which is physiologically involved in memory formation.
In another study, it was shown that Aβ deposition and plaque formation can be reduced by sustained expression of the neprilysin (an endopeptidase) gene which also led to improvements on the behavioural (i.e. cognitive) level. [ 89 ]
Similarly, the intracerebral gene transfer of ECE (endothelin-converting enzyme) via a virus vector stereotactically injected into the right anterior cortex and hippocampus has also been shown to reduce Aβ deposits in a transgenic mouse model of Alzheimer's dementia. [ 90 ]
There is also ongoing research on genoeconomics , a protoscience based on the idea that a person's financial behavior can be traced to their DNA and that genes are related to economic behavior . As of 2015 [update] , the results have been inconclusive, though some minor correlations have been identified. [ 91 ] [ 92 ]
Some studies show that our genes may affect some of our behaviors. Some genes may influence our mood states, while others may be responsible for our bad habits. For example, the MAOA (monoamine oxidase A) gene affects the breakdown of neurotransmitters such as serotonin, epinephrine and dopamine, suppressing their levels. It can prevent us from reacting in some situations, or from stopping and making quick decisions in others, which can lead to wrong decisions in potentially bad situations. Research has observed mood states such as aggression, feelings of compassion and irritability in people carrying variants of this gene. Additionally, research on carriers of the MAOA gene shows that the gene can be passed on from parents, and mutations can also develop later for epigenetic reasons. Regarding epigenetic factors, children growing up in bad environments begin to imitate whatever they see from their parents; for this reason, such children may later exhibit bad habits or behaviors such as irritability and aggression. [ 93 ]
In December 2020, then- Director of National Intelligence John Ratcliffe said in an editorial for The Wall Street Journal that US intelligence shows China had conducted human testing on People's Liberation Army soldiers with the aim of creating "biologically enhanced" soldiers. [ 94 ] [ 95 ]
In 2022, the People's Liberation Army Academy of Military Sciences reported a notable experiment where military scientists inserted a gene from the tardigrade into human embryonic stem cells . This experiment aimed to explore the potential enhancement of soldiers' resistance to acute radiation syndrome , thereby increasing their ability to survive nuclear fallout. This development reflects the intersection of genetic engineering and military research, with a focus on bioenhancement for military personnel. [ 96 ]
CRISPR/Cas9 technologies have garnered attention for their potential applications in military contexts. Various projects are underway, including those focused on protecting soldiers from specific challenges. For instance, researchers are exploring the use of CRISPR/Cas9 to provide protection from frostbite , reduce stress levels, alleviate sleep deprivation , and enhance strength and endurance. The Defense Advanced Research Projects Agency ( DARPA ) is actively involved in researching and developing these technologies. One of their projects aims to engineer human cells to function as nutrient factories, potentially optimizing soldiers' performance and resilience in challenging environments. [ 97 ]
Additionally, military researchers are conducting animal trials to explore the prophylactic treatment for long-term protection against chemical weapons of mass destruction. This involves using non-pathogenic AAV8 vectors to deliver a candidate catalytic bioscavenger, PON1-IF11, into the bloodstream of mice . These initiatives underscore the broader exploration of genetic and molecular interventions to enhance military capabilities and protect personnel from various threats. [ 98 ]
In the realm of bioenhancement, concerns have been raised about the use of dietary supplements and other biomedical enhancements by military personnel. A significant portion of American special operations forces reportedly use dietary supplements to enhance performance, but the extent of the use of other bioenhancement methods, such as steroids, human growth hormone, and erythropoietin, remains unclear. The lack of completed safety and efficacy testing for these bioenhancements raises ethical and regulatory questions. This concern is not new, as issues surrounding the off-label use of products like pyridostigmine bromide and botulinum toxoid vaccine during the Gulf War , as well as the DoD's Anthrax Vaccine Immunization Program in 1998, have prompted discussions about the need for thorough FDA approval for specific military applications. [ 99 ]
The intersection of genetic engineering , CRISPR/Cas9 technologies, and military research introduces complex ethical considerations regarding the potential augmentation of human capabilities for military purposes. Striking a balance between scientific advancement, ethical standards, and regulatory oversight of classified projects remains crucial as these technologies continue to evolve. [ 100 ]
George Church has compiled a list of potential genetic modifications based on scientific studies for possibly advantageous traits such as less need for sleep , cognition-related changes that protect against Alzheimer's disease, disease resistances, higher lean muscle mass and enhanced learning abilities along with some of the associated studies and potential negative effects. [ 101 ] [ 102 ] | https://en.wikipedia.org/wiki/Human_genetic_enhancement |
Human genetic resistance to malaria refers to inherited changes in the DNA of humans which increase resistance to malaria and result in increased survival of individuals with those genetic changes. The existence of these genotypes is likely due to evolutionary pressure exerted by parasites of the genus Plasmodium which cause malaria. Since malaria infects red blood cells , these genetic changes are most commonly alterations to molecules essential for red blood cell function (and therefore parasite survival), such as hemoglobin or other cellular proteins or enzymes of red blood cells. These alterations generally protect red blood cells from invasion by Plasmodium parasites or from replication of parasites within the red blood cell.
These inherited changes to hemoglobin or other characteristic proteins, which are critical and rather invariant features of mammalian biochemistry, usually cause some kind of inherited disease. Therefore, they are commonly referred to by the names of the blood disorders associated with them, including sickle-cell disease , thalassemia , glucose-6-phosphate dehydrogenase deficiency , and others. These blood disorders cause increased morbidity and mortality in areas of the world where malaria is less prevalent.
Microscopic parasites , like viruses, protozoans that cause malaria, and others, cannot replicate on their own and rely on a host to continue their life cycles. They replicate by invading the hosts' cells and usurping the cellular machinery to replicate themselves. Eventually, unchecked replication causes the cells to burst, killing them and releasing the infectious organisms into the bloodstream where they can infect other cells. As cells die and toxic products of invasive organism replication accumulate, disease symptoms appear. Because this process involves specific proteins produced by the infectious organism as well as the host cell, even a very small change in a critical protein may render infection difficult or impossible. Such changes might arise by a process of mutation in the gene that codes for the protein. If the change is in the gamete, that is, the sperm or egg that join to form a zygote that grows into a human being, the protective mutation will be inherited. Because lethal diseases disproportionately kill those who lack protective mutations, protective mutations become common over time in regions where such diseases are endemic. [ citation needed ]
When the P. falciparum parasite infects a host cell, it alters the characteristics of the red blood cell membrane, making it "stickier" to other cells. Clusters of parasitized red blood cells can exceed the size of the capillary circulation, adhere to the endothelium , and block circulation. When these blockages form in the blood vessels surrounding the brain, they cause cerebral hypoxia , resulting in neurological symptoms known as cerebral malaria . This condition is characterized by confusion, disorientation, and often terminal coma . It accounts for 80% of malaria deaths. Therefore, mutations that protect against malaria infection and lethality pose a significant advantage. [ citation needed ]
Malaria has placed the strongest known selective pressure on the human genome since the origin of agriculture within the past 10,000 years. [ 1 ] [ 2 ] Plasmodium falciparum was probably not able to gain a foothold among African populations until larger sedentary communities emerged in association with the evolution of domestic agriculture in Africa (the agricultural revolution ). Several inherited variants in red blood cells have become common in parts of the world where malaria is frequent as a result of selection exerted by this parasite . [ 3 ] This selection was historically important as the first documented example of disease as an agent of natural selection in humans . It was also the first example of genetically controlled innate immunity that operates early in the course of infections, preceding adaptive immunity which exerts effects after several days. In malaria, as in other diseases, innate immunity leads into, and stimulates, adaptive immunity . [ citation needed ]
Mutations may have detrimental as well as beneficial effects, and any single mutation may have both. Infectiousness of malaria depends on specific proteins present in the cell membranes and elsewhere in red blood cells. Protective mutations alter these proteins in ways that make them inaccessible to malaria organisms. However, these changes also alter the functioning and form of red blood cells that may have visible effects, either overtly, or by microscopic examination of red blood cells. These changes may impair the function of red blood cells in various ways that have a detrimental effect on the health or longevity of the individual. However, if the net effect of protection against malaria outweighs the other detrimental effects, the protective mutation will tend to be retained and propagated from generation to generation. [ citation needed ]
These alterations which protect against malarial infections but impair red blood cells are generally considered blood disorders since they tend to have overt and detrimental effects. Their protective function has only recently been discovered and acknowledged. Some of these disorders are known by fanciful and cryptic names like sickle-cell anemia, thalassaemia, glucose-6-phosphate dehydrogenase deficiency, ovalocytosis, elliptocytosis and loss of the Gerbich antigen and the Duffy antigen. These names refer to various proteins, enzymes, and the shape or function of red blood cells. [ citation needed ]
The potent effect of genetically controlled innate resistance is reflected in the probability of survival of young children in areas where malaria is endemic. It is necessary to study innate immunity in the susceptible age group (younger than four years) because, in older children and adults, the effects of innate immunity are overshadowed by those of adaptive immunity. It is also necessary to study populations in which random use of antimalarial drugs does not occur. Some early contributions on innate resistance to infections of vertebrates, including humans, are summarized in Table 1.
It is remarkable that two of the pioneering studies were on malaria. The classical studies on the Toll receptor in Drosophila fruit fly [ 6 ] were rapidly extended to Toll-like receptors in mammals [ 7 ] and then to other pattern recognition receptors , which play important roles in innate immunity. However, the early contributions on malaria remain as classical examples of innate resistance, which have stood the test of time. [ citation needed ]
The mechanisms by which erythrocytes that contain abnormal hemoglobins, or are G6PD deficient, are partially protected against P. falciparum infections are not fully understood, although there has been no shortage of suggestions. During the peripheral blood stage of replication malaria parasites have a high rate of oxygen consumption [ 8 ] and ingest large amounts of hemoglobin. [ 9 ] It is likely that HbS in endocytic vesicles is deoxygenated, polymerizes and is poorly digested. In red cells containing abnormal hemoglobins, or which are G6PD deficient, oxygen radicals are produced, and malaria parasites induce additional oxidative stress. [ 10 ] This can result in changes in red cell membranes, including translocation of phosphatidylserine to their surface [ jargon ] , followed by macrophage recognition and ingestion. [ 11 ] The authors suggest that this mechanism is likely to occur earlier in abnormal than in normal red cells, thereby restricting multiplication in the former. In addition, binding of parasitized sickle cells to endothelial cells is significantly decreased because of an altered display of P. falciparum erythrocyte membrane protein-1 (PfEMP-1). [ 12 ] This protein is the parasite's main cytoadherence ligand and virulence factor on the cell surface. During the late stages of parasite replication red cells are adherent to venous endothelium, and inhibiting this attachment could suppress replication. [ citation needed ]
Sickle hemoglobin induces the expression of heme oxygenase-1 in hematopoietic cells. Carbon monoxide , a byproduct of heme catabolism by heme oxygenase-1 (HO-1), prevents an accumulation of circulating free heme after Plasmodium infection, suppressing the pathogenesis of experimental cerebral malaria. [ 13 ] Other mechanisms, such as enhanced tolerance to disease mediated by HO-1 and reduced parasitic growth due to translocation of host micro-RNA into the parasite, have been described. [ 14 ]
The first line of defense against malaria is mainly exerted by abnormal hemoglobins and glucose-6-phosphate dehydrogenase deficiency. The three major types of inherited genetic resistance – sickle cell disease , thalassemias , and G6PD deficiency – were present in the Mediterranean world by the time of the Roman Empire . [ citation needed ]
Malaria does not occur in the cooler, drier climates of the highlands in the tropical and subtropical regions of the world.
Tens of thousands of individuals have been studied, and high frequencies of abnormal hemoglobins have not been found in any population that was malaria-free. The frequencies of abnormal hemoglobins in different populations vary greatly, but some are undoubtedly polymorphic, having frequencies higher than expected by recurrent mutation. There is no longer doubt that malarial selection played a major role in the distribution of all these polymorphisms, all of which occur in malarious areas. [ citation needed ]
The thalassemias have a high incidence in a broad band extending from the Mediterranean basin and parts of Africa, throughout the Middle East, the Indian subcontinent, Southeast Asia, Melanesia, and into the Pacific Islands.
Sickle-cell disease was the first genetic disorder to be linked to a mutation of a specific protein. Pauling introduced his fundamentally important concept of sickle cell anemia as a genetically transmitted molecular disease. [ 20 ]
The molecular basis of sickle cell anemia was finally elucidated in 1959 when Ingram perfected the technique of tryptic peptide fingerprinting. In the mid-1950s, one of the newest and most reliable ways of separating peptides and amino acids was by means of the enzyme trypsin, which splits polypeptide chains by specifically degrading the chemical bonds formed by the carboxyl groups of two amino acids, lysine and arginine. Small differences between hemoglobin A and S result in small changes in one or more of these peptides . [ 21 ] To detect these small differences, Ingram combined paper electrophoresis and paper chromatography into a two-dimensional method that enabled him to comparatively "fingerprint" the hemoglobin S and A fragments he obtained from the trypsin digest. The fingerprints revealed approximately 30 peptide spots; one peptide spot was clearly visible in the digest of hemoglobin S that was not obvious in the hemoglobin A fingerprint. The HbS gene defect is a mutation of a single nucleotide (A to T) of the β-globin gene, replacing the amino acid glutamic acid with the less polar amino acid valine at the sixth position of the β chain. [ 22 ]
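The single-nucleotide change described above can be sketched in a few lines of Python. This is purely an illustration: the codon table is limited to the two codons involved (GAG for glutamic acid, GTG for valine), and the `mutate` helper is a hypothetical convenience function, not part of any bioinformatics library.

```python
# Illustration of the HbS point mutation: an A -> T change at the
# second position of beta-globin codon 6 turns GAG (glutamic acid)
# into GTG (valine).
CODON_TABLE = {"GAG": "Glu", "GTG": "Val"}  # only the codons needed here

def mutate(codon: str, position: int, new_base: str) -> str:
    """Return the codon with a single-base substitution (0-indexed)."""
    bases = list(codon)
    bases[position] = new_base
    return "".join(bases)

normal_codon6 = "GAG"                          # beta-globin codon 6 in HbA
sickle_codon6 = mutate(normal_codon6, 1, "T")  # the A -> T mutation

print(normal_codon6, "->", CODON_TABLE[normal_codon6])  # GAG -> Glu
print(sickle_codon6, "->", CODON_TABLE[sickle_codon6])  # GTG -> Val
```

A glutamic acid residue is negatively charged at physiological pH while valine is hydrophobic and neutral, which is why this one-base change has such far-reaching consequences for the behavior of the hemoglobin molecule.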
HbS has a lower negative charge at physiological pH than does normal adult hemoglobin. The consequences of the simple replacement of a charged amino acid with a hydrophobic, neutral amino acid are far-ranging. Recent studies in West Africa suggest that the greatest impact of HbS is to protect against either death or severe disease—that is, profound anemia or cerebral malaria—while having less effect on infection per se. Children who are heterozygous for the sickle cell gene have only one-tenth the risk of death from falciparum malaria as those who are homozygous for the normal hemoglobin gene. Binding of parasitized sickle erythrocytes to endothelial cells and blood monocytes is significantly reduced due to an altered display of Plasmodium falciparum erythrocyte membrane protein 1 (PfEMP-1), the parasite's major cytoadherence ligand and virulence factor on the erythrocyte surface. [ 23 ]
Protection also derives from the instability of sickle hemoglobin, which clusters the predominant integral red cell membrane protein (called band 3) and triggers accelerated removal by phagocytic cells. Natural antibodies recognize these clusters on senescent erythrocytes. Protection by HbAS involves the enhancement not only of innate but also of acquired immunity to the parasite. [ 24 ] Prematurely denatured sickle hemoglobin results in an upregulation of natural antibodies which control erythrocyte adhesion in both malaria and sickle cell disease. [ 25 ] Targeting the stimuli that lead to endothelial activation would constitute a promising therapeutic strategy to inhibit sickle red cell adhesion and vaso-occlusion. [ 26 ]
This has led to the hypothesis that while homozygotes for the sickle cell gene suffer from disease, heterozygotes might be protected against malaria. [ 27 ] Malaria remains a selective factor for the sickle cell trait. [ 28 ]
It has long been known that a kind of anemia, termed thalassemia , has a high frequency in some Mediterranean populations, including Greeks and southern Italians. The name is derived from the Greek words for sea ( thalassa ), meaning the Mediterranean Sea , and blood ( haima ). Vernon Ingram deserves the credit for explaining the genetic basis of different forms of thalassemia as an imbalance in the synthesis of the two polypeptide chains of hemoglobin. [ 29 ]
In the common Mediterranean variant, mutations decrease production of the β-chain (β-thalassemia). In α-thalassemia, which is relatively frequent in Africa and several other countries, production of the α-chain of hemoglobin is impaired, and there is relative over-production of the β-chain. Individuals homozygous for β-thalassemia have severe anemia and are unlikely to survive and reproduce, so selection against the gene is strong. Those homozygous for α-thalassemia also suffer from anemia and there is some degree of selection against the gene. [ citation needed ]
The lower Himalayan foothills and Inner Terai or Doon Valleys of Nepal and India are highly malarial due to a warm climate and marshes sustained during the dry season by groundwater percolating down from the higher hills. Malarial forests were intentionally maintained by the rulers of Nepal as a defensive measure. Humans attempting to live in this zone suffered much higher mortality than at higher elevations or below on the drier Gangetic Plain . However, the Tharu people had lived in this zone long enough to evolve resistance via multiple genes. Medical studies among the Tharu and non-Tharu populations of the Terai showed that the prevalence of residual malaria is nearly seven times lower among Tharus. The basis for this resistance has been established to be homozygosity for the α-thalassemia gene within the local population. [ 30 ] Endogamy along caste and ethnic lines appears to have prevented these genes from spreading to neighboring populations. [ 31 ]
There is evidence that persons with α-thalassemia, HbC and HbE have some degree of protection against the parasite. [ 17 ] [ 32 ] Hemoglobin C (HbC) is an abnormal hemoglobin with substitution of a lysine residue for the glutamic acid residue of the β-globin chain, at exactly the same β-6 position as the HbS mutation. People who have this disease, particularly children, may have episodes of abdominal and joint pain, an enlarged spleen, and mild jaundice, but they do not have severe crises, as occur in sickle cell disease. Hemoglobin C is common in malarious areas of West Africa, especially in Burkina Faso. In a large case–control study of 4,348 Mossi subjects in Burkina Faso, HbC was associated with a 29% reduction in risk of clinical malaria in HbAC heterozygotes and a 93% reduction in HbCC homozygotes. HbC represents a 'slow but gratis' genetic adaptation to malaria through a transient polymorphism, compared to the polycentric 'quick but costly' adaptation through balanced polymorphism of HbS. [ 33 ] [ 34 ] HbC modifies the quantity and distribution of the variant antigen P. falciparum erythrocyte membrane protein 1 (PfEMP1) on the infected red blood cell surface, and the modified display of malaria surface proteins reduces parasite adhesiveness (thereby avoiding clearance by the spleen) and can reduce the risk of severe disease. [ 35 ] [ 36 ]
Hemoglobin E is due to a single point mutation in the gene for the beta chain with a glutamate-to-lysine substitution at position 26. It is one of the most prevalent hemoglobinopathies with 30 million people affected. Hemoglobin E is very common in parts of Southeast Asia. HbE erythrocytes have an unidentified membrane abnormality that renders the majority of the RBC population relatively resistant to invasion by P falciparum . [ 37 ]
Other genetic mutations besides hemoglobin abnormalities that confer resistance to Plasmodia infection involve alterations of the cellular surface antigenic proteins, cell membrane structural proteins, or enzymes involved in glycolysis . [ citation needed ]
Glucose-6-phosphate dehydrogenase (G6PD) is an important enzyme in red cells, metabolizing glucose through the pentose phosphate pathway , an anabolic alternative to catabolic oxidation (glycolysis), while maintaining a reducing environment. [ 38 ] G6PD is present in all human cells but is particularly important to red blood cells. Since mature red blood cells lack nuclei and cytoplasmic RNA , they cannot synthesize new enzyme molecules to replace genetically abnormal or ageing ones. All proteins, including enzymes, have to last for the entire lifetime of the red blood cell, which is normally 120 days. [ citation needed ]
In 1956 Alving and colleagues showed that in some African Americans the antimalarial drug primaquine induces hemolytic anemia, and that those individuals have an inherited deficiency of G6PD in erythrocytes. [ 39 ] G6PD deficiency is sex-linked, and common in Mediterranean, African and other populations. In Mediterranean countries such individuals can develop a hemolytic diathesis ( favism ) after consuming fava beans . G6PD deficient persons are also sensitive to several drugs in addition to primaquine. [ citation needed ]
G6PD deficiency is the second most common enzyme deficiency in humans (after ALDH2 deficiency), estimated to affect some 400 million people. [ 40 ] There are many mutations at this locus, two of which attain frequencies of 20% or greater in African and Mediterranean populations; these are termed the A- and Med mutations. [ 41 ] Mutant varieties of G6PD can be more unstable than the naturally occurring enzyme, so that their activity declines more rapidly as red cells age.
Whether G6PD deficiency protects against malaria has been studied in isolated populations where antimalarial drugs were not used, in Tanzania, East Africa [ 42 ] and in the Republic of the Gambia , West Africa, following children during the period when they are most susceptible to falciparum malaria. [ 43 ] In both cases parasite counts were significantly lower in G6PD-deficient persons than in those with normal red cell enzymes. The association has also been studied in individuals, which is possible because the enzyme deficiency is sex-linked and female heterozygotes are mosaics due to lyonization , where random inactivation of an X-chromosome in certain cells creates a population of G6PD deficient red blood cells coexisting with normal red blood cells. Malaria parasites were significantly more often observed in normal red cells than in enzyme-deficient cells. [ 44 ] An evolutionary genetic analysis of malarial selection of G6PD deficiency genes has been published by Tishkoff and Verelli. [ 41 ] The enzyme deficiency is common in many countries that are, or were formerly, malarious, but not elsewhere. [ citation needed ]
Pyruvate kinase (PK) deficiency, also called erythrocyte pyruvate kinase deficiency, is an inherited metabolic disorder of the enzyme pyruvate kinase. In this condition, a lack of pyruvate kinase slows down the process of glycolysis. This effect is especially devastating in cells that lack mitochondria, which must rely on anaerobic glycolysis as their sole source of energy since the TCA cycle is not available to them. One example is red blood cells, which in a state of pyruvate kinase deficiency rapidly become deficient in ATP and can undergo hemolysis. Therefore, pyruvate kinase deficiency can cause hemolytic anemia. [ citation needed ]
There is a significant correlation between severity of PK deficiency and extent of protection against malaria. [ 45 ]
Elliptocytosis is a blood disorder in which an abnormally large number of the patient's erythrocytes are elliptical. There is much genetic variability amongst those affected. There are three major forms of hereditary elliptocytosis: common hereditary elliptocytosis, spherocytic elliptocytosis and southeast Asian ovalocytosis . [ citation needed ]
Ovalocytosis is a subtype of elliptocytosis, and is an inherited condition in which erythrocytes have an oval instead of a round shape. In most populations ovalocytosis is rare, but South-East Asian ovalocytosis (SAO) occurs in as many as 15% of the indigenous people of Malaysia and of Papua New Guinea . Several abnormalities of SAO erythrocytes have been reported, including increased red cell rigidity and reduced expression of some red cell antigens. [ 47 ] SAO is caused by a mutation in the gene encoding the erythrocyte band 3 protein. There is a deletion of codons 400–408 in the gene, leading to a deletion of nine amino acids at the boundary between the cytoplasmic and transmembrane domains of band 3 protein. [ 48 ] Band 3 serves as the principal binding site for the membrane skeleton, a submembrane protein network composed of ankyrin , spectrin , actin , and band 4.1 . Ovalocyte band 3 binds more tightly than normal band 3 to ankyrin, which connects the membrane skeleton to the band 3 anion transporter. These qualitative defects create a red blood cell membrane that is less tolerant of shear stress and more susceptible to permanent deformation. [ citation needed ]
SAO is associated with protection against cerebral malaria in children because it reduces sequestration of erythrocytes parasitized by P. falciparum in the brain microvasculature. [ 49 ] Adhesion of P. falciparum -infected red blood cells to CD36 is enhanced by the cerebral malaria-protective SAO trait. Higher efficiency of sequestration via CD36 in SAO individuals could determine a different organ distribution of sequestered infected red blood cells. These provide a possible explanation for the selective advantage conferred by SAO against cerebral malaria. [ 50 ]
Plasmodium vivax has a wide distribution in tropical countries, but is absent or rare in a large region in West and Central Africa, as recently confirmed by PCR species typing. [ 51 ] This gap in distribution has been attributed to the lack of expression of the Duffy antigen receptor for chemokines (DARC) on the red cells of many sub-Saharan Africans. Duffy negative individuals are homozygous for a DARC allele, carrying a single nucleotide mutation (DARC 46 T → C), which impairs promoter activity by disrupting a binding site for the hGATA1 erythroid lineage transcription factor. [ jargon ] [ 52 ] In widely cited in vitro and in vivo studies, Miller et al. reported that the Duffy blood group is the receptor for P. vivax and that the absence of the Duffy blood group on red cells is the resistance factor to P. vivax in persons of African descent. [ 5 ] This has become a well-known example of innate resistance to an infectious agent because of the absence of a receptor for the agent on target cells. [ citation needed ]
However, observations have accumulated showing that the original Miller report needs qualification. In human studies of P. vivax transmission, there is evidence for the transmission of P. vivax among Duffy-negative populations in Western Kenya, [ 53 ] the Brazilian Amazon region, [ 54 ] and Madagascar . [ 55 ] The Malagasy people on Madagascar have an admixture of Duffy-positive and Duffy-negative people of diverse ethnic backgrounds. [ 56 ] 72% of the island population were found to be Duffy-negative. P. vivax positivity was found in 8.8% of 476 asymptomatic Duffy-negative people, and clinical P. vivax malaria was found in 17 such persons. Genotyping indicated that multiple P. vivax strains were invading the red cells of Duffy-negative people. The authors suggest that among Malagasy populations there are enough Duffy-positive people to maintain mosquito transmission and liver infection. More recently, Duffy negative individuals infected with two different strains of P. vivax were found in Angola and Equatorial Guinea ; further, P. vivax infections were found both in humans and mosquitoes, which means that active transmission is occurring. The frequency of such transmission is still unknown. [ 57 ] Because of these several reports from different parts of the world it is clear that some variants of P. vivax are being transmitted to humans who are not expressing DARC on their red cells. The same phenomenon has been observed in New World monkeys. [ Note 1 ] However, DARC still appears to be a major receptor for human transmission of P. vivax .
The distribution of Duffy negativity in Africa does not correlate precisely with that of P. vivax transmission. [ 51 ] Frequencies of Duffy negativity are as high in East Africa (above 80%), where the parasite is transmitted, as they are in West Africa, where it is not. The potency of P. vivax as an agent of natural selection is unknown and may vary from location to location. DARC negativity remains a good example of innate resistance to an infection, but it produces a relative and not an absolute resistance to P. vivax transmission. [ citation needed ]
The Gerbich antigen system is an integral membrane protein of the erythrocyte and plays a functionally important role in maintaining erythrocyte shape. It also acts as the receptor for the P. falciparum erythrocyte binding protein. There are four alleles of the gene which encodes the antigen, Ge-1 to Ge-4. Three types of Ge antigen negativity are known: Ge-1,-2,-3, Ge-2,-3 and Ge-2,+3. Persons with the relatively rare phenotype Ge-1,-2,-3 are less susceptible (~60% of the control rate) to invasion by P. falciparum . Such individuals have a subtype of a condition called hereditary elliptocytosis , characterized by oval or elliptical shape erythrocytes. [ citation needed ]
Rare mutations of glycophorin A and B proteins are also known to mediate resistance to P. falciparum .
Human leucocyte antigen (HLA) polymorphisms common in West Africans but rare in other racial groups are associated with protection from severe malaria. This group of genes encodes cell-surface antigen-presenting proteins and has many other functions. In West Africa, they account for as great a reduction in disease incidence as the sickle-cell hemoglobin variant. The studies suggest that the unusual polymorphism of major histocompatibility complex genes has evolved primarily through natural selection by infectious pathogens. [ citation needed ]
Polymorphisms at the HLA loci, which encode proteins that participate in antigen presentation, influence the course of malaria. In West Africa an HLA class I antigen (HLA Bw53) and an HLA class II haplotype (DRB1*1302-DQB1*0501) are independently associated with protection against severe malaria. [ 60 ] However, HLA correlations vary, depending on the genetic constitution of the polymorphic malaria parasite, which differs in different geographic locations. [ 61 ] [ 62 ]
Some studies suggest that high levels of fetal hemoglobin (HbF) confer some protection against falciparum malaria in adults with Hereditary persistence of fetal hemoglobin . [ 63 ]
Evolutionary biologist J.B.S. Haldane was the first to propose a hypothesis on the relationship between malaria and genetic disease. He first presented his hypothesis at the Eighth International Congress of Genetics, held in 1948 in Stockholm, in a talk on "The Rate of Mutation of Human Genes". [ 64 ] He formalised it in a technical paper published in 1949, in which he made a prophetic statement: "The corpuscles of the anaemic heterozygotes are smaller than normal, and more resistant to hypotonic solutions. It is at least conceivable that they are also more resistant to attacks by the sporozoa which cause malaria." [ 65 ] This became known as 'Haldane's malaria hypothesis', or concisely, the 'malaria hypothesis'. [ 66 ]
Detailed study of a cohort of 1022 Kenyan children living near Lake Victoria , published in 2002, confirmed this prediction. [ 67 ] Many SS children still died before they attained one year of age. Between 2 and 16 months the mortality in AS children was found to be significantly lower than that in AA children. This well-controlled investigation shows the ongoing action of natural selection through disease in a human population. [ citation needed ]
Genome-wide association (GWA) analysis and fine-resolution association mapping are powerful methods for establishing the inheritance of resistance to infections and other diseases. Two independent preliminary GWA analyses of severe falciparum malaria in Africans have been carried out, one by the Malariagen Consortium in a Gambian population and the other by Rolf Horstmann (Bernhard Nocht Institute for Tropical Medicine, Hamburg) and his colleagues in a Ghanaian population. In both cases the only signal of association reaching genome-wide significance was with the HBB locus encoding the β-chain of hemoglobin, which is abnormal in HbS. [ 68 ] This does not imply that HbS is the only gene conferring innate resistance to falciparum malaria; there could be many such genes exerting more modest effects that are challenging to detect by GWA because of the low levels of linkage disequilibrium in African populations. However, the same GWA association in two populations is powerful evidence that the single gene conferring the strongest innate resistance to falciparum malaria is that encoding HbS. [ citation needed ]
The fitnesses of different genotypes in an African region where there is intense malarial selection were estimated by Anthony Allison in 1954. [ 69 ] In the Baamba population living in the Semliki Forest region in Western Uganda the sickle-cell heterozygote (AS) frequency is 40%, which means that the frequency of the sickle-cell gene is 0.255 and 6.5% of children born are SS homozygotes. [ Note 2 ] It is a reasonable assumption that until modern treatment was available three-quarters of the SS homozygotes failed to reproduce. To balance this loss of sickle-cell genes, a mutation rate of 1:10.2 per gene per generation would be necessary. This is about 1000 times greater than mutation rates measured in Drosophila and other organisms and much higher than recorded for the sickle-cell locus in Africans. [ 70 ] To balance the polymorphism, Anthony Allison estimated that the fitness of the AS heterozygote would have to be 1.26 times that of the normal homozygote. Later analyses of survival figures have given similar results, with some differences from site to site. In Gambians, it was estimated that AS heterozygotes have 90% protection against P. falciparum -associated severe anemia and cerebral malaria, [ 60 ] whereas in the Luo population of Kenya it was estimated that AS heterozygotes have 60% protection against severe malarial anemia. [ 67 ] These differences reflect the intensity of transmission of P. falciparum malaria from locality to locality and season to season, so fitness calculations will also vary. In many African populations the AS frequency is about 20%, and a fitness superiority over those with normal hemoglobin of the order of 10% is sufficient to produce a stable polymorphism. [ citation needed ] | https://en.wikipedia.org/wiki/Human_genetic_resistance_to_malaria |
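Allison's figures can be checked against the textbook one-locus model of balanced polymorphism. The sketch below assumes relative fitnesses AA = 1, AS = 1 + t, SS = 1 - s and the standard equilibrium condition q = t/(s + t); it is a simplified reconstruction, not Allison's original calculation.

```python
# Sketch of the balanced-polymorphism arithmetic behind Allison's estimate,
# assuming the standard model: fitness of AA = 1, AS = 1 + t, SS = 1 - s,
# with equilibrium sickle-allele frequency q = t / (s + t).
q = 0.255   # sickle-cell allele frequency in the Baamba population
s = 0.75    # ~three-quarters of SS homozygotes failed to reproduce

# Hardy-Weinberg genotype frequencies expected at birth
het_AS = 2 * q * (1 - q)   # ~0.38, close to the observed 40% AS frequency
hom_SS = q ** 2            # ~0.065, i.e. the 6.5% of children born SS

# Heterozygote advantage needed to hold q at equilibrium:
# q = t / (s + t)  =>  t = q * s / (1 - q)
t = q * s / (1 - q)
print(f"AS frequency at birth: {het_AS:.3f}")
print(f"SS frequency at birth: {hom_SS:.3f}")
print(f"Relative fitness of AS heterozygote: {1 + t:.2f}")  # ~1.26
```

Under these assumptions the required heterozygote fitness comes out at about 1.26 times that of the normal homozygote, matching the figure quoted above.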
Human genetic variation is the genetic differences in and among populations . There may be multiple variants of any given gene in the human population ( alleles ), a situation called polymorphism .
No two humans are genetically identical. Even monozygotic twins (who develop from one zygote) have infrequent genetic differences due to mutations occurring during development and gene copy-number variation . [ 1 ] Differences between individuals, even closely related individuals, are the key to techniques such as genetic fingerprinting .
The human genome has a total length of approximately 3.2 billion base pairs (bp) in 46 chromosomes of DNA as well as slightly under 17,000 bp of DNA in cellular mitochondria . In 2015, the typical difference between an individual's genome and the reference genome was estimated at 20 million base pairs (or 0.6% of the total). [ 2 ] As of 2017, there were a total of 324 million known variants from sequenced human genomes . [ 3 ]
Comparatively speaking, humans are a genetically homogeneous species. Although a small number of genetic variants are found more frequently in certain geographic regions or in people with ancestry from those regions, this variation accounts for a small portion (~15%) of human genome variability. The majority of variation exists within the members of each human population. For comparison, rhesus macaques exhibit 2.5-fold greater DNA sequence diversity than humans. [ 4 ] These rates differ depending on which macromolecules are analyzed: chimpanzees have more genetic variance than humans at the level of nuclear DNA, but humans have more genetic variance at the level of proteins. [ 5 ]
The lack of discontinuities in genetic distances between human populations, absence of discrete branches in the human species, and striking homogeneity of human beings globally, imply that there is no scientific basis for inferring races or subspecies in humans, and for most traits , there is much more variation within populations than between them. [ 6 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ] [ 11 ] [ 12 ] [ 13 ] Despite this, modern genetic studies have found substantial average genetic differences across human populations in traits such as skin colour, bodily dimensions, lactose and starch digestion, high-altitude adaptations, drug response, taste receptors, and predisposition to developing particular diseases. [ 14 ] [ 12 ] The greatest diversity is found within and among populations in Africa , [ 15 ] and gradually declines with increasing distance from the African continent, consistent with the Out of Africa theory of human origins. [ 15 ]
The study of human genetic variation has evolutionary significance and medical applications. It can help scientists reconstruct and understand patterns of past human migration. In medicine, study of human genetic variation may be important because some disease-causing alleles occur more often in certain population groups. For instance, the mutation for sickle-cell anemia is more often found in people with ancestry from certain sub-Saharan African, south European, Arabian, and Indian populations, due to the evolutionary pressure from mosquitoes carrying malaria in these regions.
New findings show that each human has on average 60 new mutations compared to their parents. [ 16 ] [ 17 ]
Causes of differences between individuals include independent assortment , the exchange of genes (crossing over and recombination) during reproduction (through meiosis ) and various mutational events.
There are at least three reasons why genetic variation exists between populations. Natural selection may favor an allele that provides a competitive advantage to individuals in a specific environment. Alleles under selection are likely to occur only in those geographic regions where they confer an advantage. A second important process is genetic drift , which is the effect of random changes in the gene pool, under conditions where most mutations are neutral (that is, they appear to have no positive or negative selective effect on the organism). Finally, small migrant populations have statistical differences – called the founder effect – from the overall populations where they originated; when these migrants settle new areas, their descendant population typically differs from their population of origin: different genes predominate and it is less genetically diverse.
In humans, the main cause is genetic drift . [ 18 ] Serial founder effects and past small population size (increasing the likelihood of genetic drift) may have had an important influence on neutral differences between populations. [ citation needed ] A second major factor is the high degree of neutrality of most mutations . A small but significant number of genes appear to have undergone recent natural selection, and these selective pressures are sometimes specific to one region. [ 19 ] [ 20 ]
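The strength of genetic drift in small (founder) populations can be illustrated with a toy Wright–Fisher simulation; the population sizes, starting frequency, replicate counts, and seed below are arbitrary choices for illustration, not estimates for real human populations:

```python
import random

def wright_fisher(n_individuals, p0, generations, rng):
    """One run of neutral drift: each generation, the allele frequency is
    resampled over the population's 2N gene copies."""
    copies = 2 * n_individuals
    p = p0
    for _ in range(generations):
        # number of copies of the allele among 2N independent draws
        count = sum(rng.random() < p for _ in range(copies))
        p = count / copies
    return p

rng = random.Random(42)
# 30 replicate populations each: a small founder group vs a large population
small = [wright_fisher(20, 0.5, 50, rng) for _ in range(30)]
large = [wright_fisher(2000, 0.5, 50, rng) for _ in range(30)]

spread = lambda xs: max(xs) - min(xs)
print(f"spread after 50 generations: small N = {spread(small):.2f}, "
      f"large N = {spread(large):.2f}")
```

In the small populations, replicate runs scatter widely (many alleles drift to fixation or loss), while in the large populations the frequencies stay near the starting value — the drift and founder effects described above.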
Genetic variation among humans occurs on many scales, from gross alterations in the human karyotype to single nucleotide changes. [ 21 ] Chromosome abnormalities are detected in 1 of 160 live human births. Apart from sex chromosome disorders , most cases of aneuploidy result in death of the developing fetus ( miscarriage ); the most common extra autosomal chromosomes among live births are 21 , 18 and 13 . [ 22 ]
Nucleotide diversity is the average proportion of nucleotides that differ between two individuals. As of 2004, the human nucleotide diversity was estimated to be 0.1% [ 23 ] to 0.4% of base pairs . [ 24 ] In 2015, the 1000 Genomes Project , which sequenced one thousand individuals from 26 human populations, found that "a typical [individual] genome differs from the reference human genome at 4.1 million to 5.0 million sites … affecting 20 million bases of sequence"; the latter figure corresponds to 0.6% of total number of base pairs. [ 2 ] Nearly all (>99.9%) of these sites are small differences, either single nucleotide polymorphisms or brief insertions or deletions ( indels ) in the genetic sequence, but structural variations account for a greater number of base-pairs than the SNPs and indels. [ 2 ] [ 25 ]
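The 0.6% figure quoted from the 1000 Genomes Project follows directly from the numbers involved; a quick arithmetic check, using the approximate genome length given earlier in the article:

```python
genome_bp = 3.2e9      # approximate length of the human genome (base pairs)
bases_affected = 20e6  # bases affected by a typical genome's variants (1000 Genomes, 2015)

fraction = bases_affected / genome_bp
print(f"{fraction:.2%}")  # 0.62%, i.e. ~0.6% of the genome
```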
As of 2017, the Single Nucleotide Polymorphism Database ( dbSNP ), which lists SNPs and other variants, listed 324 million variants found in sequenced human genomes. [ 3 ]
A single nucleotide polymorphism (SNP) is a difference in a single nucleotide between members of one species that occurs in at least 1% of the population. The 2,504 individuals characterized by the 1000 Genomes Project had 84.7 million SNPs among them. [ 2 ] SNPs are the most common type of sequence variation, estimated in 1998 to account for 90% of all sequence variants. [ 26 ] Other sequence variations are single base exchanges, deletions and insertions. [ 27 ] SNPs occur on average about every 100 to 300 bases [ 28 ] and so are the major source of heterogeneity.
A functional, or non-synonymous, SNP is one that affects some factor such as gene splicing or messenger RNA , and so causes a phenotypic difference between members of the species. About 3% to 5% of human SNPs are functional (see International HapMap Project ). Neutral, or synonymous, SNPs are still useful as genetic markers in genome-wide association studies , because of their sheer number and their stable inheritance over generations. [ 26 ]
A coding SNP is one that occurs inside a gene. There are 105 Human Reference SNPs that result in premature stop codons in 103 genes. This corresponds to 0.5% of coding SNPs. They occur due to segmental duplication in the genome. These SNPs result in loss of protein, yet all these SNP alleles are common and have not been purged by negative selection . [ 29 ]
Structural variation is the variation in structure of an organism's chromosome . Structural variations, such as copy-number variation and deletions , inversions , insertions and duplications , account for much more human genetic variation than single nucleotide diversity. This was concluded in 2007 from analysis of the diploid full sequences of the genomes of two humans: Craig Venter and James D. Watson . This added to the two haploid sequences which were amalgamations of sequences from many individuals, published by the Human Genome Project and Celera Genomics respectively. [ 30 ]
According to the 1000 Genomes Project, a typical human has 2,100 to 2,500 structural variations, which include approximately 1,000 large deletions, 160 copy-number variants, 915 Alu insertions, 128 L1 insertions, 51 SVA insertions, 4 NUMTs , and 10 inversions. [ 2 ]
A copy-number variation (CNV) is a difference in the genome due to deleting or duplicating large regions of DNA on some chromosome. It is estimated that 0.4% of the genomes of unrelated humans differ with respect to copy number. When copy number variation is included, human-to-human genetic variation is estimated to be at least 0.5% (99.5% similarity). [ 31 ] [ 32 ] [ 33 ] [ 34 ] Copy number variations are inherited but can also arise during development. [ 35 ] [ 36 ] [ 37 ] [ 38 ]
A visual map of the regions of high genomic variation of the modern-human reference assembly relative to a 50,000-year-old Neanderthal [ 39 ] has been built by Pratas et al. [ 40 ]
Epigenetic variation is variation in the chemical tags that attach to DNA and affect how genes get read. The tags, "called epigenetic markings, act as switches that control how genes can be read." [ 41 ] At some alleles, the epigenetic state of the DNA, and associated phenotype, can be inherited across generations of individuals . [ 42 ]
Genetic variability is a measure of the tendency of individual genotypes in a population to vary (become different) from one another. Variability is different from genetic diversity , which is the amount of variation seen in a particular population. The variability of a trait is how much that trait tends to vary in response to environmental and genetic influences.
In biology , a cline is a continuum of species , populations, varieties, or forms of organisms that exhibit gradual phenotypic and/or genetic differences over a geographical area, typically as a result of environmental heterogeneity. [ 43 ] [ 44 ] [ 45 ] In the scientific study of human genetic variation, a gene cline can be rigorously defined and subjected to quantitative metrics.
In the study of molecular evolution , a haplogroup is a group of similar haplotypes that share a common ancestor with a single nucleotide polymorphism (SNP) mutation. The study of haplogroups provides information about ancestral origins dating back thousands of years. [ 46 ]
The most commonly studied human haplogroups are Y-chromosome (Y-DNA) haplogroups and mitochondrial DNA (mtDNA) haplogroups , both of which can be used to define genetic populations. Y-DNA is passed solely along the patrilineal line, from father to son, while mtDNA is passed down the matrilineal line, from mother to daughter and son alike. The Y-DNA and mtDNA may change by chance mutation at each generation.
A variable number tandem repeat (VNTR) is the variation of length of a tandem repeat . A tandem repeat is the adjacent repetition of a short nucleotide sequence . Tandem repeats exist on many chromosomes , and their length varies between individuals. Each variant acts as an inherited allele , so they are used for personal or parental identification. Their analysis is useful in genetics and biology research, forensics , and DNA fingerprinting .
Short tandem repeats (about 5 base pairs) are called microsatellites , while longer ones are called minisatellites .
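The repeat counting at the heart of VNTR analysis can be sketched in a few lines; the motif and sequences below are hypothetical, chosen only to illustrate how two individuals can carry different repeat counts at the same locus:

```python
def tandem_repeats(sequence, motif):
    """Return the longest run of back-to-back copies of `motif` in `sequence`."""
    best = run = 0
    i = 0
    while i + len(motif) <= len(sequence):
        if sequence[i:i + len(motif)] == motif:
            run += 1
            best = max(best, run)
            i += len(motif)  # stay in phase within a run
        else:
            run = 0
            i += 1
    return best

# Two hypothetical individuals differing in repeat count at the same locus
print(tandem_repeats("ACGATAGATAGATAGATACC", "GATA"))  # → 4
print(tandem_repeats("ACGATAGATACC", "GATA"))          # → 2
```

Because the repeat count at each such locus is inherited, a panel of loci like this can distinguish individuals, which is the basis of DNA fingerprinting.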
The recent African origin of modern humans paradigm assumes the dispersal of non-African populations of anatomically modern humans after 70,000 years ago. Dispersal within Africa occurred significantly earlier, at least 130,000 years ago. The "out of Africa" theory originates in the 19th century, as a tentative suggestion in Charles Darwin's Descent of Man , [ 47 ] but remained speculative until the 1980s when it was supported by the study of present-day mitochondrial DNA, combined with evidence from physical anthropology of archaic specimens .
According to a 2000 study of Y-chromosome sequence variation, [ 48 ] human Y-chromosomes trace ancestry to Africa, and the descendants of the derived lineage left Africa and eventually replaced the archaic human Y-chromosomes in Eurasia. The study also shows that a minority of contemporary populations in East Africa and the Khoisan are the descendants of the most ancestral patrilineages of anatomically modern humans that left Africa 35,000 to 89,000 years ago. [ 48 ] Other evidence supporting the theory is that variations in skull measurements decrease with distance from Africa at the same rate as the decrease in genetic diversity. Human genetic diversity decreases in native populations with migratory distance from Africa, and this is thought to be due to bottlenecks during human migration, which are events that temporarily reduce population size. [ 49 ] [ 50 ]
A 2009 genetic clustering study, which genotyped 1327 polymorphic markers in various African populations, identified six ancestral clusters. The clustering corresponded closely with ethnicity, culture and language. [ 51 ] A 2018 whole genome sequencing study of the world's populations observed similar clusters among the populations in Africa. At K=9, distinct ancestral components defined the Afroasiatic -speaking populations inhabiting North Africa and Northeast Africa ; the Nilo-Saharan -speaking populations in Northeast Africa and East Africa ; the Ari populations in Northeast Africa; the Niger-Congo -speaking populations in West-Central Africa, West Africa , East Africa and Southern Africa ; the Pygmy populations in Central Africa ; and the Khoisan populations in Southern Africa. [ 52 ]
In May 2023, scientists reported, based on genetic studies, a more complicated pathway of human evolution than previously understood. According to the studies, humans evolved from different places and times in Africa, instead of from a single location and period of time. [ 53 ] [ 54 ]
Because of the common ancestry of all humans, only a small number of variants have large differences in frequency between populations. However, some variants that are rare in the human population as a whole are much more frequent in at least one population (above 5%). [ 55 ]
It is commonly assumed that early humans left Africa, and thus must have passed through a population bottleneck before their African-Eurasian divergence around 100,000 years ago (ca. 3,000 generations). The rapid expansion of a previously small population has two important effects on the distribution of genetic variation. First, the so-called founder effect occurs when founder populations bring only a subset of the genetic variation from their ancestral population. Second, as founders become more geographically separated, the probability that two individuals from different founder populations will mate becomes smaller. The effect of this assortative mating is to reduce gene flow between geographical groups and to increase the genetic distance between groups. [ citation needed ]
The expansion of humans from Africa affected the distribution of genetic variation in two other ways. First, smaller (founder) populations experience greater genetic drift because of increased fluctuations in neutral polymorphisms. Second, new polymorphisms that arose in one group were less likely to be transmitted to other groups as gene flow was restricted. [ citation needed ]
Populations in Africa tend to have lower amounts of linkage disequilibrium than do populations outside Africa, partly because of the larger size of human populations in Africa over the course of human history and partly because the number of modern humans who left Africa to colonize the rest of the world appears to have been relatively low. [ 57 ] In contrast, populations that have undergone dramatic size reductions or rapid expansions in the past and populations formed by the mixture of previously separate ancestral groups can have unusually high levels of linkage disequilibrium. [ 57 ]
The distribution of genetic variants within and among human populations is impossible to describe succinctly because of the difficulty of defining a "population," the clinal nature of variation, and heterogeneity across the genome (Long and Kittles 2003). In general, however, an average of 85% of genetic variation exists within local populations, ~7% is between local populations within the same continent, and ~8% of variation occurs between large groups living on different continents. [ 58 ] [ 59 ] The recent African origin theory for humans would predict that in Africa there exists a great deal more diversity than elsewhere and that diversity should decrease the further from Africa a population is sampled.
Sub-Saharan Africa has the most human genetic diversity and the same has been shown to hold true for phenotypic variation in skull form. [ 49 ] [ 60 ] Phenotype is connected to genotype through gene expression . Genetic diversity decreases smoothly with migratory distance from that region, which many scientists believe to be the origin of modern humans, and that decrease is mirrored by a decrease in phenotypic variation. Skull measurements are an example of a physical attribute whose within-population variation decreases with distance from Africa.
The distribution of many physical traits resembles the distribution of genetic variation within and between human populations ( American Association of Physical Anthropologists 1996; Keita and Kittles 1997). For example, ~90% of the variation in human head shapes occurs within continental groups, and ~10% separates groups, with a greater variability of head shape among individuals with recent African ancestors (Relethford 2002).
A prominent exception to the common distribution of physical characteristics within and among groups is skin color . Approximately 10% of the variance in skin color occurs within groups, and ~90% occurs between groups (Relethford 2002). This distribution of skin color and its geographic patterning – with people whose ancestors lived predominantly near the equator having darker skin than those with ancestors who lived predominantly in higher latitudes – indicate that this attribute has been under strong selective pressure . Darker skin appears to be strongly selected for in equatorial regions to prevent sunburn, skin cancer, the photolysis of folate , and damage to sweat glands. [ 61 ]
Understanding how genetic diversity in the human population impacts various levels of gene expression is an active area of research. While earlier studies focused on the relationship between DNA variation and RNA expression, more recent efforts are characterizing the genetic control of various aspects of gene expression including chromatin states, [ 62 ] translation, [ 63 ] and protein levels. [ 64 ] A study published in 2007 found that 25% of genes showed different levels of gene expression between populations of European and Asian descent. [ 65 ] [ 66 ] [ 67 ] [ 68 ] [ 69 ] The primary cause of this difference in gene expression was thought to be SNPs in gene regulatory regions of DNA. Another study published in 2007 found that approximately 83% of genes were expressed at different levels among individuals and about 17% between populations of European and African descent. [ 70 ] [ 71 ]
The population geneticist Sewall Wright developed the fixation index (often abbreviated to F ST ) as a way of measuring genetic differences between populations. This statistic is often used in taxonomy to compare differences between any two given populations by measuring the genetic differences among and between populations for individual genes, or for many genes simultaneously. [ 72 ] It is often stated that the fixation index for humans is about 0.15. This means that an estimated 85% of the variation measured in the overall human population is found within individuals of the same population, while about 15% of the variation occurs between populations. These estimates imply that any two individuals from different populations may be more similar to each other than either is to a member of their own group. [ 73 ] [ 74 ] "The shared evolutionary history of living humans has resulted in a high relatedness among all living people, as indicated for example by the very low fixation index (F ST ) among living human populations." Richard Lewontin , who affirmed these ratios, thus concluded that neither "race" nor "subspecies" were appropriate or useful ways to describe human populations. [ 58 ]
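For a single biallelic locus, Wright's fixation index can be computed as the proportional reduction in heterozygosity, F ST = (H T − H S ) / H T . A minimal sketch, with invented allele frequencies and equally sized populations assumed for illustration:

```python
def fst_biallelic(freqs):
    """Wright's F_ST for one biallelic locus across equally sized populations.
    freqs: frequency of one allele in each population."""
    p_bar = sum(freqs) / len(freqs)
    h_t = 2 * p_bar * (1 - p_bar)  # expected heterozygosity of the pooled population
    h_s = sum(2 * p * (1 - p) for p in freqs) / len(freqs)  # mean within-population
    return (h_t - h_s) / h_t

# Strongly differentiated populations give a high F_ST ...
print(f"{fst_biallelic([0.2, 0.8]):.2f}")    # → 0.36
# ... while mildly differentiated ones give a low F_ST
print(f"{fst_biallelic([0.45, 0.55]):.2f}")  # → 0.01
```

Note that even moderately divergent allele frequencies (0.2 vs 0.8) already give F ST = 0.36, well above the ~0.15 typically cited for human populations as a whole.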
Wright himself believed that values >0.25 represent very great genetic variation and that an F ST of 0.15–0.25 represents great variation. However, only about 5% of human variation occurs between populations within continents, and F ST values between continental groups of humans (or races) as low as 0.1 (or possibly lower) have been found in some studies, suggesting more moderate levels of genetic variation. [ 72 ] Graves (1996) has countered that F ST should not be used as a marker of subspecies status, as the statistic is used to measure the degree of differentiation between populations, [ 72 ] although see also Wright (1978). [ 75 ]
Jeffrey Long and Rick Kittles give a long critique of the application of F ST to human populations in their 2003 paper "Human Genetic Diversity and the Nonexistence of Biological Races". They find that the figure of 85% is misleading because it implies that all human populations contain on average 85% of all genetic diversity. They argue the underlying statistical model incorrectly assumes equal and independent histories of variation for each large human population. A more realistic approach is to understand that some human groups are parental to other groups and that these groups represent paraphyletic groups to their descent groups. For example, under the recent African origin theory the human population in Africa is paraphyletic to all other human groups because it represents the ancestral group from which all non-African populations derive, but more than that, non-African groups only derive from a small non-representative sample of this African population. This means that all non-African groups are more closely related to each other and to some African groups (probably east Africans) than they are to others, and further that the migration out of Africa represented a genetic bottleneck , with much of the diversity that existed in Africa not being carried out of Africa by the emigrating groups. Under this scenario, human populations do not have equal amounts of local variability, but rather diminished amounts of diversity the further from Africa any population lives. Long and Kittles find that rather than 85% of human genetic diversity existing in all human populations, about 100% of human diversity exists in a single African population, whereas only about 70% of human genetic diversity exists in a population derived from New Guinea. Long and Kittles argued that this still produces a global human population that is genetically homogeneous compared to other mammalian populations. [ 76 ]
Anatomically modern humans interbred with Neanderthals during the Middle Paleolithic . In May 2010, the Neanderthal Genome Project presented genetic evidence that interbreeding took place and that a small but significant portion, around 2–4%, of Neanderthal admixture is present in the DNA of modern Eurasians and Oceanians, and nearly absent in sub-Saharan African populations. [ 77 ] [ 78 ]
Between 4% and 6% of the genome of Melanesians (represented by the Papua New Guinean and Bougainville Islander) appears to derive from Denisovans – a previously unknown hominin that is more closely related to Neanderthals than to modern humans. It was possibly introduced during the early migration of the ancestors of Melanesians into Southeast Asia. This history of interaction suggests that Denisovans once ranged widely over eastern Asia. [ 79 ]
Thus, Melanesians emerge as one of the most archaic-admixed populations, having Denisovan/Neanderthal-related admixture of ~8%. [ 79 ]
In a study published in 2013, Jeffrey Wall from University of California studied whole sequence-genome data and found higher rates of introgression in Asians compared to Europeans. [ 80 ] Hammer et al. tested the hypothesis that contemporary African genomes have signatures of gene flow with archaic human ancestors and found evidence of archaic admixture in the genomes of some African groups, suggesting that modest amounts of gene flow were widespread throughout time and space during the evolution of anatomically modern humans. [ 81 ]
A study published in 2020 found that the Yoruba and Mende populations of West Africa derive between 2% and 19% of their genome from an as-yet unidentified archaic hominin population that likely diverged before the split of modern humans and the ancestors of Neanderthals and Denisovans, [ 82 ] potentially making these groups the most archaic-admixed human populations identified yet.
New data on human genetic variation has reignited the debate about a possible biological basis for categorization of humans into races. Most of the controversy surrounds the question of how to interpret the genetic data and whether conclusions based on it are sound. Some researchers argue that self-identified race can be used as an indicator of geographic ancestry for certain health risks and medications .
Although the genetic differences among human groups are relatively small, differences in certain genes such as Duffy , ABCC11 , and SLC24A5 , called ancestry-informative markers (AIMs), nevertheless can be used to reliably situate many individuals within broad, geographically based groupings. For example, computer analyses of hundreds of polymorphic loci sampled in globally distributed populations have revealed the existence of genetic clustering that roughly is associated with groups that historically have occupied large continental and subcontinental regions (Rosenberg et al. 2002; Bamshad et al. 2003).
Some commentators have argued that these patterns of variation provide a biological justification for the use of traditional racial categories. They argue that the continental clusterings correspond roughly with the division of human beings into sub-Saharan Africans ; Europeans , Western Asians , Central Asians , Southern Asians and Northern Africans ; Eastern Asians , Southeast Asians , Polynesians and Native Americans ; and other inhabitants of Oceania (Melanesians, Micronesians & Australian Aborigines) (Risch et al. 2002). Other observers disagree, saying that the same data undercut traditional notions of racial groups (King and Motulsky 2002; Calafell 2003; Tishkoff and Kidd 2004 [ 24 ] ). They point out, for example, that major populations considered races or subgroups within races do not necessarily form their own clusters.
Racial categories are also undermined by findings that genetic variants which are limited to one region tend to be rare within that region, variants that are common within a region tend to be shared across the globe, and most differences between individuals, whether they come from the same region or different regions, are due to global variants. [ 85 ] No genetic variants have been found which are fixed within a continent or major region and found nowhere else. [ 86 ]
Furthermore, because human genetic variation is clinal, many individuals affiliate with two or more continental groups. Thus, the genetically based "biogeographical ancestry" assigned to any given person generally will be broadly distributed and will be accompanied by sizable uncertainties (Pfaff et al. 2004).
In many parts of the world, groups have mixed in such a way that many individuals have relatively recent ancestors from widely separated regions. Although genetic analyses of large numbers of loci can produce estimates of the percentage of a person's ancestors coming from various continental populations (Shriver et al. 2003; Bamshad et al. 2004), these estimates may assume a false distinctiveness of the parental populations, since human groups have exchanged mates from local to continental scales throughout history (Cavalli-Sforza et al. 1994; Hoerder 2002). Even with large numbers of markers, information for estimating admixture proportions of individuals or groups is limited, and estimates typically will have wide confidence intervals (Pfaff et al. 2004).
Genetic data can be used to infer population structure and assign individuals to groups that often correspond with their self-identified geographical ancestry. Jorde and Wooding (2004) argued that "Analysis of many loci now yields reasonably accurate estimates of genetic similarity among individuals, rather than populations. Clustering of individuals is correlated with geographic origin or ancestry." [ 23 ] However, identification by geographic origin may quickly break down when considering historical ancestry shared between individuals back in time. [ 87 ]
An analysis of autosomal SNP data from the International HapMap Project (Phase II) and CEPH Human Genome Diversity Panel samples was published in 2009.
The study of 53 populations taken from the HapMap and CEPH data (1138 unrelated individuals) suggested that natural selection may shape the human genome much more slowly than previously thought, with factors such as migration within and among continents more heavily influencing the distribution of genetic variations. [ 88 ] A similar study published in 2010 found strong genome-wide evidence for selection due to changes in ecoregion, diet, and subsistence, particularly in connection with polar ecoregions, with foraging, and with a diet rich in roots and tubers. [ 89 ] In a 2016 study, principal component analysis of genome-wide data was capable of recovering previously-known targets for positive selection (without prior definition of populations) as well as a number of new candidate genes. [ 90 ]
Forensic anthropologists can assess the ancestry of skeletal remains by analyzing skeletal morphology as well as using genetic and chemical markers, when possible. [ 91 ] While these assessments are never certain, the accuracy of skeletal morphology analyses in determining true ancestry has been estimated at 90%. [ 92 ]
Gene flow between two populations reduces the average genetic distance between them. Only totally isolated human populations experience no gene flow; most populations exchange genes continuously with neighboring populations, which creates the clinal distribution observed for most genetic variation. When gene flow takes place between well-differentiated genetic populations the result is referred to as "genetic admixture".
Admixture mapping is a technique used to study how genetic variants cause differences in disease rates between populations. [ 93 ] Recent admixture populations that trace their ancestry to multiple continents are well suited for identifying genes for traits and diseases that differ in prevalence between parental populations. African-American populations have been the focus of numerous population genetic and admixture mapping studies, including studies of complex genetic traits such as white cell count, body-mass index, prostate cancer and renal disease. [ 94 ]
An analysis of phenotypic and genetic variation including skin color and socio-economic status was carried out in the population of Cape Verde which has a well documented history of contact between Europeans and Africans. The studies showed that the pattern of admixture in this population has been sex-biased (involving mostly matings between European men and African women) and there is a significant interaction between socioeconomic status and skin color, independent of ancestry. [ 95 ] Another study shows an increased risk of graft-versus-host disease complications after transplantation due to genetic variants in human leukocyte antigen (HLA) and non-HLA proteins. [ 96 ]
Given that each individual has millions of genetic variants (compared to the reference genome ), it is an important question what impact these variants have on human health or gene function. Most genetic variants have only small to moderate effects, if any. Frequently cited examples of diseases whose incidence differs between population groups include hypertension (Douglas et al. 1996), diabetes , [ 97 ] obesity (Fernandez et al. 2003), and prostate cancer (Platz et al. 2000). However, the role of genetic factors in generating these differences remains uncertain. [ 98 ]
The human genome encodes about 20,000 protein-coding genes with about 550 amino acids each. [ 99 ] Hence, human proteins span about 11 million amino acids (22 million per diploid genome). The median number of missense mutations in individual human genomes is about 8600, that is, two individuals differ by 1 in about 2600 amino acids or in about 20% of their proteins. The average individual has about 137 (predicted) loss of function mutations, including 71 frameshift and 148 in-frame deletions or insertions . [ 100 ] Mutations at 32.2% and 9.5% of all possible genomic positions, respectively, can lead to missense and stop-gained variants (i.e., truncated proteins). [ 100 ] In a sample of almost 1 million people, almost 5000 genes were identified that had loss-of-function variants in both alleles of the same individual. That is, if these 5000 genes can tolerate homozygous loss of function mutations, they are unlikely to be essential. [ 100 ]
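The figures above follow from simple arithmetic, which can be checked with a short script (the inputs are the rounded values quoted in the text, so the results are approximate):

```python
# Rough arithmetic behind the protein-variation figures quoted above.
genes = 20_000            # approximate number of protein-coding genes
aa_per_protein = 550      # approximate average protein length (amino acids)

haploid_aa = genes * aa_per_protein   # amino acids per haploid genome
diploid_aa = 2 * haploid_aa           # per diploid genome

missense_per_genome = 8_600           # median missense variants per individual
aa_per_difference = diploid_aa // missense_per_genome

print(haploid_aa)         # 11000000 (the "about 11 million" above)
print(diploid_aa)         # 22000000 (the "22 million per diploid genome")
print(aa_per_difference)  # 2558, i.e. roughly 1 in 2600 amino acids
```

The ratio of diploid amino-acid positions to median missense variants reproduces the "1 in about 2600" figure cited in the text.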
Differences in allele frequencies contribute to group differences in the incidence of some monogenic diseases , and they may contribute to differences in the incidence of some common diseases. [ 101 ] For the monogenic diseases, the frequency of causative alleles usually correlates best with ancestry, whether familial (for example, Ellis–Van Creveld syndrome among the Pennsylvania Amish ), ethnic ( Tay–Sachs disease among Ashkenazi Jewish populations), or geographical (hemoglobinopathies among people with ancestors who lived in malarial regions). To the extent that ancestry corresponds with racial or ethnic groups or subgroups, the incidence of monogenic diseases can differ between groups categorized by race or ethnicity, and health-care professionals typically take these patterns into account in making diagnoses. [ 102 ]
Other variations, on the other hand, are beneficial to humans, as they prevent certain diseases or improve adaptation to the environment. An example is the Δ32 mutation in the CCR5 gene, which protects against AIDS . The mutation prevents the CCR5 receptor from being expressed on the cell surface; without CCR5 on the surface, HIV has nothing to grab onto and bind to. The mutation therefore decreases an individual's risk of AIDS. It is also quite common in certain areas, with more than 14% of the population carrying it in Europe and about 6–10% in Asia and North Africa . [ 103 ]
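As a hedged illustration of what a carrier figure like the 14% above implies: assuming Hardy–Weinberg equilibrium (random mating, no selection), the share of people carrying at least one mutant allele determines the allele frequency and the frequency of homozygotes, who carry two copies:

```python
import math

# Illustrative Hardy-Weinberg calculation (an idealized assumption, not a
# claim about any real population's mating structure).
carrier_freq = 0.14                  # share carrying at least one mutant allele
q = 1 - math.sqrt(1 - carrier_freq)  # implied mutant allele frequency
homozygote_freq = q ** 2             # share carrying two copies

print(round(q, 3))                 # ~0.073
print(round(homozygote_freq, 4))   # ~0.0053, i.e. about 1 in 190 people
```

Under this idealized model, a 14% carrier rate corresponds to an allele frequency of roughly 7%, with only about half a percent of the population homozygous for the variant.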
Many genetic variants may have aided humans in ancient times but plague us today. For example, genes that allow humans to more efficiently process food also make people susceptible to obesity and diabetes today. [ 104 ]
Human genome projects are scientific endeavors that determine or study the structure of the human genome . The Human Genome Project was a landmark genome project.
There are numerous related projects that deal with genetic variation (or variation in the encoded proteins), e.g. organized by the following organizations: | https://en.wikipedia.org/wiki/Human_genetic_variation |
Human geography or anthropogeography is the branch of geography which studies spatial relationships between human communities, cultures, economies, and their interactions with the environment, examples of which include urban sprawl and urban redevelopment . [ 1 ] It analyzes spatial interdependencies between social interactions and the environment through qualitative and quantitative methods. [ 2 ] [ 3 ] This multidisciplinary approach draws from sociology, anthropology, economics, and environmental science, contributing to a comprehensive understanding of the intricate connections that shape lived spaces. [ 4 ]
The Royal Geographical Society was founded in England in 1830. [ 5 ] The first professor of geography in the United Kingdom was appointed in 1883, [ 6 ] and the first major geographical intellect to emerge in the UK was Halford John Mackinder , appointed professor of geography at the London School of Economics in 1922. [ 6 ]
The National Geographic Society was founded in the United States in 1888 and began publication of the National Geographic magazine which became, and continues to be, a great popularizer of geographic information. The society has long supported geographic research and education on geographical topics.
The Association of American Geographers was founded in 1904 and was renamed the American Association of Geographers in 2016 to better reflect the increasingly international character of its membership.
One of the first examples of geographic methods being used for purposes other than to describe and theorize the physical properties of the earth is John Snow's map of the 1854 Broad Street cholera outbreak . Though Snow was primarily a physician and a pioneer of epidemiology rather than a geographer, his map is probably one of the earliest examples of health geography .
The now fairly distinct differences between the subfields of physical and human geography developed at a later date. The connection between both physical and human properties of geography is most apparent in the theory of environmental determinism , made popular in the 19th century by Carl Ritter and others, and has close links to the field of evolutionary biology of the time. Environmental determinism is the theory that people's physical, mental and moral habits are directly due to the influence of their natural environment. However, by the mid-19th century, environmental determinism was under attack for lacking methodological rigor associated with modern science, and later as a means to justify racism and imperialism .
A similar concern with both human and physical aspects is apparent during the later 19th and first half of the 20th centuries focused on regional geography . The goal of regional geography, through something known as regionalisation , was to delineate space into regions and then understand and describe the unique characteristics of each region through both human and physical aspects. With links to possibilism and cultural ecology , some of the same notions of the causal effect of the environment on society and culture carried over from environmental determinism.
By the 1960s, however, the quantitative revolution led to strong criticism of regional geography. Due to a perceived lack of scientific rigor in an overly descriptive nature of the discipline, and a continued separation of geography from its two subfields of physical and human geography and from geology , geographers in the mid-20th century began to apply statistical and mathematical models in order to solve spatial problems. [ 1 ] Much of the development during the quantitative revolution is now apparent in the use of geographic information systems ; the use of statistics, spatial modeling, and positivist approaches are still important to many branches of human geography. Well-known geographers from this period are Fred K. Schaefer , Waldo Tobler , William Garrison , Peter Haggett , Richard J. Chorley , William Bunge , and Torsten Hägerstrand .
From the 1970s, a number of critiques of the positivism now associated with geography emerged. Known under the term ' critical geography ,' these critiques signaled another turning point in the discipline. Behavioral geography emerged for some time as a means to understand how people made perceived spaces and places and made locational decisions. The more influential 'radical geography' emerged in the 1970s and 1980s. It draws heavily on Marxist theory and techniques and is associated with geographers such as David Harvey and Richard Peet . Radical geographers seek to say meaningful things about problems recognized through quantitative methods, [ 7 ] provide explanations rather than descriptions, put forward alternatives and solutions, and be politically engaged, [ 8 ] rather than using the detachment associated with positivists. (The detachment and objectivity of the quantitative revolution was itself critiqued by radical geographers as being a tool of capital). Radical geography and the links to Marxism and related theories remain an important part of contemporary human geography (See: Antipode ). Critical geography also saw the introduction of 'humanistic geography', associated with the work of Yi-Fu Tuan , which pushed for a much more qualitative approach in methodology.
The changes under critical geography have led to contemporary approaches in the discipline such as feminist geography , new cultural geography , settlement geography , and the engagement with postmodern and post-structural theories and philosophies.
The primary fields of study in human geography focus on the core fields of:
Cultural geography is the study of cultural products and norms – their variation across spaces and places, as well as their relations. It focuses on describing and analyzing the ways language, religion, economy, government, and other cultural phenomena vary or remain constant from one place to another and on explaining how humans function spatially. [ 9 ]
Development geography is the study of the Earth's geography with reference to the standard of living and the quality of life of its human inhabitants, as well as the location, distribution and spatial organization of economic activities across the Earth. The subject matter investigated is strongly influenced by the researcher's methodological approach.
Economic geography examines relationships between human economic systems, states, and other factors, and the biophysical environment.
Emotional geography is a subtopic within human geography, more specifically cultural geography , which applies psychological theories of emotion . It is an interdisciplinary field relating emotions, geographic places and their contextual environments. These subjective feelings can be applied to individual and social contexts. Emotional geography specifically focuses on how human emotions relate to, or affect, the environment around them. [ 10 ] [ 11 ] [ 12 ] [ 13 ]
There is a difference between emotional and affectual geography, and each has its own geographical sub-field. The former refers to theories of expressed feelings and the social constructs of expressed feelings, which can be generalisable and understood globally. The latter refers to theories underlying inexpressible feelings that are independent, embodied, and hard to understand. [ 14 ]
Emotional geography approaches geographical concepts and research from an expressed and generalisable perspective. Historically, emotions have an ultimate adaptive significance by accentuating a non-verbal form of communication that is universal. [ 15 ] This dates back to Darwin's theory of emotion , which explains the evolutionary development of expressed emotion. This aids individual and societal relationships as there is the presence of emotional communication. For example, when studying social phenomena, individuals' emotions can connect and create a social emotion which can define the event happening. [ 16 ]
Medical or health geography is the application of geographical information, perspectives, and methods to the study of health , disease , and health care . Health geography deals with the spatial relations and patterns between people and the environment. This is a sub-discipline of human geography, researching how and why diseases are spread and contained. [ 17 ]
Historical geography is the study of the human, physical, fictional, theoretical, and "real" geographies of the past. Historical geography studies a wide variety of issues and topics. A common theme is the study of the geographies of the past and how a place or region changes through time. Many historical geographers study geographical patterns through time, including how people have interacted with their environment, and created the cultural landscape.
Political geography is concerned with the study of both the spatially uneven outcomes of political processes and the ways in which political processes are themselves affected by spatial structures.
Subfields include: Electoral geography , Geopolitics , Strategic geography and Military geography .
Population geography is the study of ways in which spatial variations in the distribution, composition, migration, and growth of populations are related to their environment or location.
Settlement geography , including urban geography , is the study of urban and rural areas with specific regards to spatial, relational and theoretical aspects of settlement. That is the study of areas which have a concentration of buildings and infrastructure . These are areas where the majority of economic activities are in the secondary sector and tertiary sectors .
Urban geography is the study of cities, towns, and other areas of relatively dense settlement. Two main interests are site (how a settlement is positioned relative to the physical environment) and situation (how a settlement is positioned relative to other settlements). Another area of interest is the internal organization of urban areas with regard to different demographic groups and the layout of infrastructure. This subdiscipline also draws on ideas from other branches of Human Geography to see their involvement in the processes and patterns evident in an urban area . [ 18 ] [ 19 ] Subfields include: Economic geography , Population geography , and Settlement geography . These are clearly not the only subfields that could be used to assist in the study of Urban geography , but they are some major players. [ 18 ]
Within each of the subfields, various philosophical approaches can be used in research; therefore, an urban geographer could be a Feminist or Marxist geographer, etc.
Such approaches are: | https://en.wikipedia.org/wiki/Human_geography |
Human germline engineering (HGE) is the process by which the genome of an individual is modified in such a way that the change is heritable. This is achieved by altering the genes of the germ cells , which mature into eggs and sperm. HGE is prohibited by law in more than 70 countries [ 1 ] and by a binding international treaty of the Council of Europe .
In April 2015, a group of Chinese researchers used CRISPR / Cas9 to edit single-celled, non-viable embryos to assess the technique's effectiveness. This attempt was unsuccessful: only a small fraction of the embryos successfully incorporated the genetic material, and many of the embryos contained a large number of random mutations. The non-viable embryos that were used contained an extra set of chromosomes, which may have been problematic. In 2016, a similar study was performed in China on non-viable embryos with extra sets of chromosomes. This study showed similar results to the first, except that no embryos adopted the desired gene.
In November 2018, researcher He Jiankui created the first human babies from genetically edited embryos, known by their pseudonyms, Lulu and Nana . In May 2019, lawyers in China reported that regulations had been drafted under which anyone manipulating the human genome would be held responsible for any related adverse consequences. [ 2 ]
The CRISPR-Cas9 system consists of an enzyme called Cas9 and a special piece of guide RNA (gRNA). Cas9 acts as a pair of 'molecular scissors' that can cut the DNA at a specific location in the genome so that genes can be added or removed. The guide RNA has complementary bases to those at the target location, so it binds only there. Once bound, Cas9 makes a cut across both DNA strands, allowing base pairs to be inserted or removed. Afterwards, the cell recognizes that the DNA is damaged and tries to repair it. [ 3 ]
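The targeting logic can be sketched in code. This is a deliberately simplified toy (the sequences are made up, and real Cas9 binding involves RNA–DNA hybridization and tolerates some mismatches): it searches for a 20-nucleotide target followed by an NGG PAM motif and reports the cut position, which for Cas9 lies about 3 bp upstream of the PAM:

```python
import re

def find_cut_sites(genome: str, protospacer: str) -> list[int]:
    """Return positions where Cas9 would cut: ~3 bp upstream (5') of an NGG PAM
    that immediately follows an exact protospacer match (simplified model)."""
    sites = []
    # search for the protospacer immediately followed by an NGG PAM
    for m in re.finditer(protospacer + "[ACGT]GG", genome):
        pam_start = m.start() + len(protospacer)
        sites.append(pam_start - 3)  # blunt cut ~3 bp upstream of the PAM
    return sites

# Toy sequence: 20-nt target flanked by a TGG PAM (not from any real genome).
genome = "TTACGATCCGATTACGCATGCAAGCTGGCCTA"
guide_target = "ATCCGATTACGCATGCAAGC"  # the 20-nt site the gRNA matches
print(find_cut_sites(genome, guide_target))  # [22]
```

A site with no adjacent NGG PAM is not reported, mirroring the requirement that Cas9 needs both gRNA complementarity and a PAM to cut.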
Although CRISPR/Cas9 can be used in humans, [ 4 ] it is more commonly used in other species or cell culture systems, including in experiments to study genes potentially involved in human diseases.
Genetic engineering is in widespread use, particularly in agriculture. Human germline engineering has two potential applications: preventing genetic disorders from passing to descendants, and modifying traits, such as height, that are not disease related. For example, the Berlin Patient has a genetic mutation in the CCR5 gene that suppresses the expression of CCR5, conferring innate resistance to HIV . Modifying human embryos to give them the CCR5 Δ32 allele would protect them from the disease.
Another use would be to cure genetic disorders. In the first study published regarding human germline engineering, the researchers attempted to edit the HBB gene, which codes for the human β-globin protein. HBB mutations produce β-thalassaemia , which can be fatal. [ 5 ] Genome editing in patients who have these HBB mutations would restore functional copies of the gene, effectively curing the disease. If the germline could be edited, this normal copy of the HBB gene could be passed on to future generations.
Eugenic modifications to humans yield " designer babies " with deliberately selected traits, possibly extending to the entire genome. [ 6 ] HGE potentially allows for enhancement of these traits. [ 6 ] The concept has produced strong objections, particularly among bioethicists. [ 7 ]
In a 2019 animal study with Liang Guang small spotted pigs, precise editing of the myostatin signal peptide yielded increased muscle mass. Myostatin is a negative regulator of muscle growth, so mutating the gene's signal peptide region can promote growth. One study mutated myostatin genes in 955 embryos at several locations with CRISPR/Cas9 and implanted them into five surrogates, resulting in 16 piglets. Only specific mutations to the myostatin signal peptide increased muscle mass, mainly due to an increase in muscle fibers. [ 8 ] A similar study in mice knocked out the myostatin gene, which also increased their muscle mass. [ 9 ] This showed that muscle mass can be increased with germline editing, which is likely applicable to humans because the myostatin gene also regulates human muscle growth. [ 10 ]
HGE is widely debated, and more than 40 countries formally outlaw it. [ 11 ] No legislation explicitly prohibits germline engineering in the United States. The Consolidated Appropriations Act of 2016 bans the use of US FDA funds for human germline modification research. [ 12 ] In April 2015, a research team published an unsuccessful experiment in which they used CRISPR to edit a gene associated with blood disease in non-living human embryos.
Researchers using CRISPR/Cas9 have run into issues in mammals due to their complex diploid cells . Studies in microorganisms have examined loss-of-function genetic screening, and some studies have used mice as subjects. Because RNA processing differs between bacteria and mammalian cells, researchers have had difficulty encoding the mRNA's translated data without RNA interference. Studies have successfully used a Cas9 nuclease with a single guide RNA to allow for larger knockout regions in mice. [ 13 ]
The lack of international regulation led researchers to attempt to create an international framework of ethical guidelines. The framework lacks the requisite international treaties for enforcement. At the first International Summit on Human Gene Editing in December 2015, researchers issued the first international guidelines. [ 14 ] These guidelines allowed pre-clinical research into gene editing in human cells as long as the embryos were not used to establish a pregnancy. Genetic alteration of somatic cells for therapeutic purposes was considered ethically acceptable, in part because somatic cells cannot pass modifications to subsequent generations. However, the lack of consensus and the risks of inaccurate editing led the conference to call for restraint on germline modifications.
On March 13, 2019, researchers Eric Lander , Françoise Baylis , Feng Zhang , Emmanuelle Charpentier , Paul Berg and others called for a framework that did not foreclose any outcome, but included a voluntary pledge and a call for a coordinating body to monitor the HGE moratorium, with an attempt to reach social consensus before furthering research. [ 15 ] The World Health Organization announced on December 18, 2018 plans to convene an international committee on the topic. [ 16 ]
The He Jiankui genome editing incident is a scientific and bioethical controversy concerning the use of genome editing following its first use on humans by Chinese scientist He Jiankui , who edited the genomes of human embryos in 2018. [ 17 ] [ 18 ] He became widely known on 26 November 2018 [ 19 ] after he announced that he had created the first human genetically edited babies. He was listed in Time magazine's 100 most influential people of 2019. [ 20 ] The affair led to ethical and legal controversies, resulting in the indictment of He and two of his collaborators, Zhang Renli and Qin Jinzhou. He eventually received widespread international condemnation.
He Jiankui, working at the Southern University of Science and Technology (SUSTech) in Shenzhen , China, started a project to help people with HIV-related fertility problems , specifically involving HIV-positive fathers and HIV-negative mothers. The subjects were offered standard in vitro fertilisation services and, in addition, use of CRISPR gene editing ( CRISPR/Cas9 ), a technology for modifying DNA . The embryos' genomes were edited to remove the CCR5 gene in an attempt to confer genetic resistance to HIV . [ 21 ] The clinical project was conducted secretly until 25 November 2018, when MIT Technology Review broke the story of the human experiment based on information from the Chinese clinical trials registry. Compelled by the situation, He immediately announced the birth of the genome-edited babies in a series of five YouTube videos the same day. [ 22 ] [ 23 ] The first babies, known by their pseudonyms Lulu ( Chinese : 露露 ) and Nana ( 娜娜 ), are twin girls born in October 2018; a third baby, named Amy, was born in 2019. [ 24 ] [ 25 ] [ 26 ] He reported that the babies were born healthy. [ 27 ]
His actions received widespread criticism, [ 28 ] [ 29 ] and included concern for the girls' well-being. [ 21 ] [ 30 ] [ 31 ] After his presentation on the research at the Second International Summit on Human Genome Editing at the University of Hong Kong on 28 November 2018, Chinese authorities suspended his research activities the following day. [ 32 ] On 30 December 2019, a Chinese district court found He Jiankui guilty of illegal practice of medicine, sentencing him to three years in prison with a fine of 3 million yuan. [ 33 ] [ 34 ] Zhang Renli and Qin Jinzhou received an 18-month prison sentence and a 500,000-yuan fine, and were banned from working in assisted reproductive technology for life. [ 35 ]
As early in the history of biotechnology as 1990, there have been researchers opposed to attempts to modify the human germline using these new tools, [ 48 ] and such concerns have continued as technology progressed. [ 49 ] [ 50 ] In March 2015, with the advent of new techniques like CRISPR , researchers urged a worldwide moratorium on clinical use of gene editing technologies to edit the human genome in a way that can be inherited. [ 51 ] In April 2015, researchers reported results of basic research to edit the DNA of non-viable human embryos using CRISPR, creating controversy. [ 52 ]
A committee of the American National Academy of Sciences and National Academy of Medicine gave support to human genome editing in 2017 [ 53 ] [ 54 ] once answers have been found to safety and efficiency problems "but only for serious conditions under stringent oversight." [ 55 ] The American Medical Association 's Council on Ethical and Judicial Affairs stated that "genetic interventions to enhance traits should be considered permissible only in severely restricted situations: (1) clear and meaningful benefits to the fetus or child; (2) no trade-off with other characteristics or traits; and (3) equal access to the genetic technology, irrespective of income or other socioeconomic characteristics." [ 56 ]
Several religious positions have been published with regard to human germline engineering. Many of them see germline modification as more moral than the alternatives, which would be either discarding the embryo or the birth of a diseased human. The main conditions for whether it is morally and ethically acceptable lie in the intent of the modification and the conditions under which the engineering is done. [ 57 ]
Ethical claims about germline engineering include beliefs that every fetus has a right to remain genetically unmodified, that parents hold the right to genetically modify their offspring, and that every child has the right to be born free of preventable diseases. [ 58 ] [ 59 ] [ 60 ] For parents, genetic engineering could be seen as another child enhancement technique to add to diet, exercise, education, training, cosmetics, and plastic surgery. [ 61 ] [ 62 ] Another theorist claims that moral concerns limit but do not prohibit germline engineering. [ 63 ]
One issue related to human genome editing relates to the impact of the technology on future individuals whose genes are modified without their consent. Clinical ethics accepts the idea that parents are, almost always, the most appropriate surrogate medical decision makers for their children until the children develop their own autonomy and decision-making capacity. This is based on the assumption that, except under rare circumstances, parents have the most to lose or gain from a decision and will ultimately make decisions that reflect the future values and beliefs of their children. According to this assumption, it could be assumed that parents are the most appropriate decision makers for their future children as well. However, there are anecdotal reports of children and adults who disagree with the medical decisions made by a parent during pregnancy or early childhood, such as when death was a possible outcome. There are also published patient stories by individuals who feel that they would not wish to change or remove their own medical condition if given the choice, and individuals who disagree with medical decisions made by their parents during childhood. [ 64 ]
Other researchers and philosophers have noted that the issue of the lack of prior consent applies as well to individuals born via traditional sexual reproduction. [ 65 ] [ 66 ] Philosopher David Pearce further argues that “old-fashioned sexual reproduction is itself an untested genetic experiment”, often compromising a child's wellbeing and pro-social capacities even if the child grows in a healthy environment. According to Pearce, “the question of [human germline engineering] comes down to an analysis of risk-reward ratios – and our basic ethical values, themselves shaped by our evolutionary past.” [ 67 ] Bioethicist Julian Savulescu in turn proposes the principle of procreative beneficence , according to which “couples (or single reproducers) should select the child, of the possible children they could have, who is expected to have the best life, or at least as good a life as the others, based on the relevant, available information”. [ 68 ] Some ethicists argue that the principle of procreative beneficence would justify or even require genetically enhancing one's children. [ 69 ] [ 70 ]
A relevant issue concerns “off-target effects”: large genomes may contain identical or homologous DNA sequences, and the enzyme complex CRISPR/Cas9 may unintentionally cleave these DNA sequences, causing mutations that may lead to cell death. The mutations can turn important genes on or off, such as genetic anti-cancer mechanisms, which could accelerate disease progression. [ 64 ] [ 71 ] [ 72 ] [ 73 ] [ 74 ]
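The idea behind off-target site discovery can be sketched as a mismatch scan. This is a minimal, assumed illustration (real off-target prediction tools also weigh mismatch position, PAM variants, and bulges; the sequences here are made up): slide the target along the genome and report every window within a small edit distance:

```python
def hamming(a: str, b: str) -> int:
    """Number of mismatched positions between two equal-length sequences."""
    return sum(x != y for x, y in zip(a, b))

def off_target_candidates(genome: str, target: str, max_mismatches: int = 1):
    """Slide the target along the genome; return (position, mismatches)
    for every window within max_mismatches of the target."""
    k = len(target)
    hits = []
    for i in range(len(genome) - k + 1):
        d = hamming(genome[i:i + k], target)
        if d <= max_mismatches:
            hits.append((i, d))
    return hits

# Made-up toy sequences: one exact on-target site (position 8) and one
# single-mismatch off-target site (position 20).
genome = "AAAATTTTCCCCGGGGAAAACCCCGAGG"
target = "CCCCGGGG"
print(off_target_candidates(genome, target))  # [(8, 0), (20, 1)]
```

The single-mismatch hit at position 20 is exactly the kind of homologous sequence the text describes: similar enough to the intended target that the nuclease might cleave it unintentionally.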
The other ethical concern is the potential for “designer babies”, the creation of humans with "perfect" or "desirable" traits. There is debate as to whether this is morally acceptable as well. Such debate ranges from the ethical obligation to use safe and efficient technology to prevent disease, to seeing some actual benefit in genetic disabilities.
There are concerns that the introduction of desirable traits in a certain part of the population (instead of the entire population) could cause economic inequalities (a “positional” good) [ clarification needed ] . [ 75 ] However, this would not be the case if the same desirable trait were introduced across the entire population (similar to vaccines). [ citation needed ]
Another ethical concern pertains to potential unequal distribution of benefits, even in the case of genome editing being inexpensive. For example, corporations may be able to take unfair advantage of patent law or other ways of restricting access to genome editing and thereby may increase the inequalities. There are already disputes in the courts where CRISPR-Cas9 patents and access issues are being negotiated. [ 76 ]
There remains debate on whether the permissibility of human germline engineering for reproduction depends on the use, being either a therapeutic or non-therapeutic application. In a survey by the UK's Royal Society, 76% of participants in the UK supported therapeutic human germline engineering to prevent or correct disease; for non-therapeutic edits such as enhancing intelligence or altering eye or hair color in embryos, support was only 40% and 31%, respectively. [ 77 ] There was a similar result in a study at the University of Bogota , Colombia, where students as well as professors generally agreed that therapeutic genome editing is acceptable, while non-therapeutic genome editing is not. [ 78 ]
There is also debate on whether a clear distinction can be drawn between therapeutic and non-therapeutic germline editing. An example would be two embryos predicted to grow up to be very short in height. Boy 1 will be short because of a mutation in his human growth hormone gene, while boy 2 will be short because his parents are very short. Editing the embryo of boy 1 to make him of average height would be a therapeutic germline edit, while editing the embryo of boy 2 to the same end would be a non-therapeutic germline edit. In both cases, without editing, the boys would grow up to be very short, which would decrease their wellbeing in life; likewise, editing both of the boys' genomes would allow them to grow up to be of average height. In this scenario, editing for the same phenotype of average height falls under both therapeutic and non-therapeutic germline engineering. [ 79 ]
There is distinction in some country policies, including but not limited to official regulation and legislation, between human germline engineering for reproductive use and for laboratory research. As of October 2020, there are 96 countries that have policies involving the use of germline engineering in human cells. [ 1 ]
Reproductive use of human germline engineering involves implanting the edited embryo to be born. 70 countries currently explicitly prohibit the use of human germline engineering for use in reproduction, while 5 countries prohibit it for reproduction with exceptions. No countries permit the use of human germline engineering for reproduction. [ 1 ]
Countries that explicitly prohibit any use of human germline engineering for reproduction are: Albania , Argentina , Australia , Austria , Bahrain , Belarus , Benin , Bosnia and Herzegovina , Brazil , Bulgaria , Burundi , Canada , Chile , China , Congo , Costa Rica , Croatia , Cyprus , Czech Republic , Denmark , Estonia , Finland , France , Georgia , Germany , Greece , Hungary , Iceland , India , Iran , Ireland , Israel , Japan , Kenya , Latvia , Lebanon , Lithuania , Malaysia , Malta , Mexico , Moldova , Montenegro , Netherlands , New Zealand , Nigeria , North Macedonia , Norway , Oman , Pakistan , Poland , Portugal , Qatar , Romania , Russia , San Marino , Saudi Arabia , Serbia , Slovakia , Slovenia , South Korea , Spain , Sweden , Switzerland , Thailand , Tunisia , Turkey , the United Kingdom , the United States , Uruguay , and the Vatican [ 1 ]
Countries that explicitly prohibit (with exceptions) the use of human germline engineering for reproduction are: Belgium , Colombia , Italy , Panama , and the United Arab Emirates [ 1 ]
Laboratory research use restricts human germline engineering to in vitro work, in which edited cells will not be implanted or carried to term. 19 countries currently explicitly prohibit any in vitro use of human germline engineering, while 4 prohibit it with exceptions, and 11 permit it. [ 1 ]
Countries that explicitly prohibit any use of germline engineering for in vitro use are: Albania , Austria , Bahrain , Belarus , Brazil , Canada , Costa Rica , Croatia , Germany , Greece , Lebanon , Malaysia , Malta , Pakistan , Saudi Arabia , Sweden , Switzerland , Uruguay , and the Vatican [ 1 ]
Countries that explicitly prohibit (with exceptions) the use of germline engineering for in vitro use are: Colombia , Finland , Italy , and Panama [ 1 ]
Countries that explicitly permit the use of germline engineering for in vitro use are: Burundi , China , Congo , India , Iran , Ireland , Japan , Norway , Thailand , the United Kingdom , and the United States [ 1 ] | https://en.wikipedia.org/wiki/Human_germline_engineering |
Human activities affect marine life and marine habitats through overfishing , habitat loss , the introduction of invasive species , ocean pollution , ocean acidification and ocean warming . These impact marine ecosystems and food webs and may result in consequences as yet unrecognised for the biodiversity and continuation of marine life forms. [ 3 ]
The ocean can be described as the world's largest ecosystem and is home to many species of marine life. Human-driven pressures such as global warming, ocean acidification, and pollution affect marine life and its habitats. Over the past 50 years, more than 90 percent of the warming resulting from human activity has been absorbed by the ocean. The resulting rise in ocean temperatures and ocean acidification is harmful to many fish species and damages habitats such as coral . [ 4 ] By producing materials such as carbonate rock and calcareous sediment, coral creates a unique and valuable ecosystem that not only provides food and shelter for marine creatures but also delivers many benefits to humans. Ocean acidification caused by rising levels of carbon dioxide lowers the rate of calcification, impairing coral growth and contributing to coral bleaching . [ 5 ] Marine plastic pollution is a further human-caused threat to marine life. [ 6 ] According to the IPCC (2019), since 1950 "many marine species across various groups have undergone shifts in geographical range and seasonal activities in response to ocean warming, sea ice change and biogeochemical changes, such as oxygen loss, to their habitats." [ 7 ]
It has been estimated only 13% of the ocean area remains as wilderness , mostly in open ocean areas rather than along the coast. [ 8 ]
Overfishing is occurring in one third of world fish stocks, according to a 2018 report by the Food and Agriculture Organization of the United Nations. [ 9 ] In addition, industry observers believe illegal, unreported and unregulated fishing occurs in most fisheries, and accounts for up to 30% of total catches in some important fisheries. [ 10 ] In a phenomenon called fishing down the foodweb , the mean trophic level of world fisheries has declined because of overfishing high trophic level fish. [ 11 ]
"It is almost as though we use our military to fight the animals in the ocean. We are gradually winning this war to exterminate them."
Coastal ecosystems are being particularly damaged by humans. [ 13 ] Significant habitat loss is occurring particularly in seagrass meadows, mangrove forests and coral reefs, all of which are in global decline due to human disturbances.
Coral reefs are among the more productive and diverse ecosystems on the planet, but one-fifth of them have been lost in recent years due to anthropogenic disturbances. [ 14 ] [ 15 ] Coral reefs are microbially driven ecosystems that rely on marine microorganisms to retain and recycle nutrients in order to thrive in oligotrophic waters. However, these same microorganisms can also trigger feedback loops that intensify declines in coral reefs, with cascading effects across biogeochemical cycles and marine food webs . A better understanding of the complex microbial interactions within coral reefs is needed if reef conservation is to succeed in the future. [ 16 ]
Seagrass meadows have lost 30,000 km 2 (12,000 sq mi) during recent decades. Seagrass ecosystem services , currently worth about $US1.9 trillion per year, include nutrient cycling , the provision of food and habitats for many marine animals, including the endangered dugongs , manatee and green turtles , and major facilitations for coral reef fish . [ 13 ]
One-fifth of the world's mangrove forests have also been lost since 1980. [ 17 ] The most pressing threat to kelp forests may be the overfishing of coastal ecosystems, which by removing higher trophic levels facilitates their shift to depauperate urchin barrens . [ 18 ]
An invasive species is a species not native to a particular location which can spread to a degree that causes damage to the environment, human economy or human health. [ 19 ] In 2008, Molnar et al. documented the pathways of hundreds of marine invasive species and found shipping was the dominant mechanism for the transfer of invasive species in the ocean. The two main maritime mechanisms of transporting marine organisms to other ocean environments are via hull fouling and the transfer of ballast water . [ 20 ]
Ballast water taken up at sea and released in port is a major source of unwanted exotic marine life. The invasive freshwater zebra mussels, native to the Black, Caspian, and Azov seas, were probably transported to the Great Lakes via ballast water from a transoceanic vessel. [ 21 ] Meinesz believes that one of the worst cases of a single invasive species causing harm to an ecosystem can be attributed to a seemingly harmless jellyfish . Mnemiopsis leidyi , a species of comb jellyfish that has spread so widely it now inhabits estuaries in many parts of the world, was first introduced in 1982, and is thought to have been transported to the Black Sea in a ship's ballast water. The population of the jellyfish grew exponentially and, by 1988, it was wreaking havoc upon the local fishing industry . "The anchovy catch fell from 204,000 tons in 1984 to 200 tons in 1993; sprat from 24,600 tons in 1984 to 12,000 tons in 1993; horse mackerel from 4,000 tons in 1984 to zero in 1993." [ 22 ] Now that the jellyfish have exhausted the zooplankton , including fish larvae, their numbers have fallen dramatically, yet they continue to maintain a stranglehold on the ecosystem .
Invasive species can take over once-occupied areas, facilitate the spread of new diseases, introduce new genetic material, alter underwater seascapes, and jeopardize the ability of native species to obtain food. Invasive species are responsible for about $138 billion annually in lost revenue and management costs in the US alone. [ 23 ]
Marine pollution occurs when substances used or spread by humans, such as industrial , agricultural , and residential waste ; particles ; noise ; excess carbon dioxide ; or invasive organisms enter the ocean and cause harmful effects there. The majority of this waste (80%) comes from land-based activity, although marine transportation contributes significantly as well. [ 24 ] This pollution, a combination of chemicals and trash, damages the environment , the health of all organisms, and economic structures worldwide. [ 25 ] Because most inputs arrive from land via rivers , sewage , or the atmosphere , continental shelves are especially vulnerable. Air pollution is also a contributing factor, carrying iron, carbonic acid, nitrogen , silicon, sulfur, pesticides , and dust particles into the ocean. [ 26 ] Pollution often comes from nonpoint sources such as agricultural runoff , wind-blown debris , and dust; these pollutants largely reach the ocean through river runoff, but wind-blown debris and dust can also settle directly into waterways and oceans. [ 27 ] Pathways of pollution include direct discharge, land runoff, ship pollution , bilge pollution , dredging (which can create dredge plumes ), atmospheric pollution and, potentially, deep sea mining .
The types of marine pollution can be grouped as pollution from marine debris , plastic pollution , including microplastics , ocean acidification , nutrient pollution , toxins, and underwater noise. Plastic pollution in the ocean is a type of marine pollution by plastics , ranging in size from large original material such as bottles and bags, down to microplastics formed from the fragmentation of plastic materials. Marine debris is mainly discarded human rubbish which floats on, or is suspended in, the ocean. Plastic pollution is harmful to marine life .
Nutrient pollution is a primary cause of eutrophication of surface waters, in which excess nutrients, usually nitrates or phosphates , stimulate algae growth. This algae then dies, sinks, and is decomposed by bacteria in the water. This decomposition process consumes oxygen, depleting the supply for other marine life and creating what is referred to as a "dead zone." Dead zones are hypoxic, meaning the water has very low levels of dissolved oxygen, which kills marine life or forces it to leave the area. Hypoxic zones or dead zones can occur naturally, but nutrient pollution from human activity has turned this natural process into an environmental problem. [ 29 ]
There are five main sources of nutrient pollution. The most common source of nutrient runoff is municipal sewage. This sewage can reach waterways through storm water, leaks, or direct dumping of human sewage into bodies of water. The next biggest sources come from agricultural practices. Chemical fertilizers used in farming can seep into ground water or be washed away in rainwater, entering water ways and introducing excess nitrogen and phosphorus to these environments. Livestock waste can also enter waterways and introduce excess nutrients. Nutrient pollution from animal manure is most intense from industrial animal agriculture operations, in which hundreds or thousands of animals are raised in one concentrated area. Stormwater drainage is another source of nutrient pollution. Nutrients and fertilizers from residential properties and impervious surfaces can be picked up in stormwater, which then runs into nearby rivers and streams that eventually lead to the ocean. The fifth main source of nutrient runoff is aquaculture, in which aquatic organisms are cultivated under controlled conditions. The excrement, excess food, and other organic wastes created by these operations introduce excess nutrients into the surrounding water. [ 30 ]
Toxic chemicals can adhere to tiny particles which are then taken up by plankton and benthic animals , most of which are either deposit feeders or filter feeders . In this way, toxins are concentrated upward within ocean food chains . Many particles combine chemically in a manner which depletes oxygen, causing estuaries to become anoxic . Pesticides and toxic metals are similarly incorporated into marine food webs, harming the biological health of marine life. Many animal feeds have a high fish meal or fish hydrolysate content. In this way, marine toxins are transferred back to farmed land animals, and then to humans.
Phytoplankton concentrations have increased over the last century in coastal waters, and more recently have declined in the open ocean. Increases in nutrient runoff from land may explain the rise in coastal phytoplankton, while warming surface temperatures in the open ocean may have strengthened stratification in the water column, reducing the flow of nutrients from the deep on which open-ocean phytoplankton depend. [ 31 ]
Over 300 million tons of plastic are produced every year, half of which are used in single-use products like cups, bags, and packaging. At least 14 million [ 32 ] tons of plastic enter the oceans every year. It is impossible to know for sure, but it is estimated that about 150 million metric tons of plastic exist in our oceans. Plastic pollution makes up 80% of all marine debris from surface waters to deep-sea sediments. Because plastics are light, much of this pollution is seen in and around the ocean surface, but plastic trash and particles are now found in most marine and terrestrial habitats, including the deep sea , Great Lakes, coral reefs, beaches, rivers, and estuaries. The most eye-catching evidence of the ocean plastic problem is the garbage patches that accumulate in gyre regions . A gyre is a circular ocean current formed by the Earth's wind patterns and the forces created by the rotation of the planet. [ 33 ] There are five main ocean gyres: the North and South Pacific Subtropical Gyres , the North and South Atlantic Subtropical Gyres , and the Indian Ocean Subtropical Gyre . There are significant garbage patches in each of these. [ 34 ]
Larger plastic waste can be ingested by marine species, filling their stomachs and leading them to believe they are full when in fact they have taken in nothing of nutritional value. As a result, seabirds , whales , fish , and turtles can starve to death with plastic-filled stomachs. Marine species can also be suffocated by or entangled in plastic garbage. [ 35 ]
The biggest threat of ocean plastic pollution comes from microplastics . These are small fragments of plastic debris, some of which were produced to be this small, such as microbeads. Other microplastics come from the weathering of larger plastic waste . Once larger pieces of plastic waste enter the ocean, or any waterway, the sunlight exposure, temperature, humidity, waves, and wind begin to break the plastic down into pieces smaller than five millimeters long. Plastics can also be broken down by smaller organisms that eat plastic debris, break it down into small pieces, and either excrete the resulting microplastics or spit them out. In lab tests, it was found that amphipods of the species Orchestia gammarellus could quickly devour pieces of plastic bags, shredding a single bag into 1.75 million microscopic fragments. [ 36 ] Although the plastic is broken down, it is still an artificial material that does not biodegrade. It is estimated that approximately 90% of the plastics in the pelagic marine environment are microplastics. [ 33 ] These microplastics are frequently consumed by marine organisms at the base of the food chain, like plankton and fish larvae, which leads to a concentration of ingested plastic up the food chain . Plastics are produced with toxic chemicals which then enter the marine food chain, including the fish that some humans eat. [ 37 ]
There is a natural soundscape to the ocean that organisms have evolved around for tens of thousands of years. However, human activity has disrupted this soundscape, largely drowning out sounds organisms depend on for mating, warding off predators, and travel. Ship and boat propellers and engines, industrial fishing, coastal construction, oil drilling, seismic surveys, warfare, sea-bed mining and sonar-based navigation have all introduced noise pollution to ocean environments. Shipping alone has contributed an estimated 32-fold increase of low-frequency noise along major shipping routes in the past 50 years, driving marine animals away from vital breeding and feeding grounds. [ 41 ] Sound is the sensory cue that travels the farthest through the ocean, and anthropogenic noise pollution disrupts organisms' ability to utilize sound. This creates stress for the organisms that can affect their overall health, disrupting their behavior, physiology, and reproduction, and even causing mortality. [ 42 ] Sound blasts from seismic surveys can damage the ears of marine animals and cause serious injury. Noise pollution is especially damaging for marine mammals that rely on echolocation, such as whales and dolphins. These animals use echolocation to communicate, navigate, feed, and find mates, but excess sound interferes with their ability to use echolocation and, therefore, perform these vital tasks. [ 43 ]
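The estimated 32-fold increase in low-frequency shipping noise can be put in the decibel terms acousticians use. This is a minimal arithmetic sketch, assuming the 32-fold figure refers to a ratio of acoustic intensities (an interpretation not stated in the source):

```python
import math

# Hedged sketch: convert the estimated 32-fold rise in low-frequency
# shipping noise intensity into decibels. The dB formula is standard
# acoustics; treating "32-fold" as an intensity ratio is an assumption.

def intensity_ratio_to_db(ratio: float) -> float:
    """Convert an acoustic intensity ratio to a level change in decibels."""
    return 10.0 * math.log10(ratio)

print(f"{intensity_ratio_to_db(32):.1f} dB")  # about a 15 dB rise
```

On this reading, shipping has raised background noise levels along major routes by roughly 15 decibels over 50 years.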
The prospect of deep sea mining has led to concerns from scientists and environmental groups over the impacts on fragile deep sea ecosystems and wider impacts on the ocean's biological pump . [ 44 ] [ 45 ]
Rapid change to ocean environments allows disease to flourish. Disease-causing microbes can change and adapt to new ocean conditions much more quickly than other marine life, giving them an advantage in ocean ecosystems. This group of organisms includes viruses, bacteria, fungi, and protozoans. While these pathogenic organisms can quickly adapt, other marine life is weakened by rapid changes to their environment. In addition, microbes are becoming more abundant due to aquaculture, the farming of aquatic life, and human waste polluting the ocean. These practices introduce new pathogens and excess nutrients into the ocean, further encouraging the survival of microbes. [ 46 ]
Some of these microbes have wide host ranges and are referred to as multi-host pathogens. This means that the pathogen can infect, multiply, and be transmitted from different, unrelated species. Multi-host pathogens are especially dangerous because they can infect many organisms, but may not be deadly to all of them. This means the microbes can exist in species that are more resistant and use these organisms as vessels for continuously infecting a susceptible species. In this case, the pathogen can completely wipe out the susceptible species while maintaining a supply of host organisms. [ 46 ]
In marine environments, microbial primary production contributes substantially to CO 2 sequestration . Marine microorganisms also recycle nutrients for use in the marine food web and in the process release CO 2 to the atmosphere. Microbial biomass and other organic matter (remnants of plants and animals) are converted to fossil fuels over millions of years. By contrast, burning of fossil fuels liberates greenhouse gases in a small fraction of that time. As a result, the carbon cycle is out of balance, and atmospheric CO 2 levels will continue to rise as long as fossil fuels continue to be burnt. [ 47 ]
Most heat energy from global warming goes into the ocean, and not into the atmosphere or warming up the land. [ 49 ] [ 50 ] Scientists realized over 30 years ago the ocean was a key fingerprint of human impact on climate change and "the best opportunity for major improvement in our understanding of climate sensitivity is probably monitoring of internal ocean temperature". [ 51 ]
Marine organisms are moving to cooler parts of the ocean as global warming proceeds. For example, a group of 105 marine fish and invertebrate species were monitored along the US Northeast coast and in the eastern Bering Sea. During the period from 1982 to 2015, the average center of biomass for the group shifted northward about 10 miles as well as moving about 20 feet deeper. [ 52 ] [ 53 ]
There is evidence increasing ocean temperatures are taking a toll on marine ecosystems. For example, a study on phytoplankton changes in the Indian Ocean indicates a decline of up to 20% in marine phytoplankton during the past six decades. [ 55 ] During summer, the western Indian Ocean is home to one of the largest concentrations of marine phytoplankton blooms in the world. Increased warming in the Indian Ocean enhances ocean stratification, which prevents nutrient mixing in the euphotic zone where ample light is available for photosynthesis. Thus, primary production is constrained and the region's entire food web is disrupted. If rapid warming continues, the Indian Ocean could transform into an ecological desert and cease being productive. [ 55 ]
The Antarctic oscillation (also called the Southern Annular Mode ) is a belt of westerly winds or low pressure surrounding Antarctica which moves north or south according to which phase it is in. [ 58 ] In its positive phase, the westerly wind belt that drives the Antarctic Circumpolar Current intensifies and contracts towards Antarctica , [ 59 ] while in its negative phase the belt moves towards the Equator. Winds associated with the Antarctic oscillation cause oceanic upwelling of warm circumpolar deep water along the Antarctic continental shelf. [ 60 ] [ 61 ] This has been linked to ice shelf basal melt , [ 62 ] representing a possible wind-driven mechanism that could destabilize large portions of the Antarctic Ice Sheet. [ 63 ] The Antarctic oscillation is currently in the most extreme positive phase that has occurred for over a thousand years. This positive phase has recently been intensifying further, which has been attributed to increasing greenhouse gas levels and, later, stratospheric ozone depletion. [ 64 ] [ 65 ] These large-scale alterations in the physical environment are "driving change through all levels of Antarctic marine food webs". [ 56 ] [ 57 ] Ocean warming is also changing the distribution of Antarctic krill . [ 56 ] [ 57 ] Antarctic krill is the keystone species of the Antarctic ecosystem beyond the coastal shelf, and is an important food source for marine mammals and birds . [ 66 ]
The IPCC (2019) says marine organisms are being affected globally by ocean warming with direct impacts on human communities, fisheries, and food production. [ 67 ] It is likely there will be a 15% decrease in the number of marine animals and a decrease of 21% to 24% in fisheries catches by the end of the 21st century because of climate change. [ 68 ]
A 2020 study reports that by 2050 global warming could be spreading in the deep ocean seven times faster than it is now, even if emissions of greenhouse gases are cut. Warming in mesopelagic and deeper layers could have major consequences for the deep ocean food web , since ocean species will need to move to stay at survival temperatures. [ 69 ] [ 70 ]
Coastal ecosystems are facing further changes because of rising sea levels . Some ecosystems can move inland with the high-water mark, but others are prevented from migrating due to natural or artificial barriers. This coastal narrowing, called coastal squeeze if human-made barriers are involved, can result in the loss of habitats such as mudflats and marshes . [ 72 ] [ 73 ] Mangroves and tidal marshes adjust to rising sea levels by building vertically using accumulated sediment and organic matter . If sea level rise is too rapid, they will not be able to keep up and will instead be submerged. [ 74 ]
Coral, important for bird and fish life, also needs to grow vertically to remain close to the sea surface in order to get enough energy from sunlight. So far it has been able to keep up, but might not be able to do so in the future. [ 77 ] These ecosystems protect against storm surges, waves and tsunamis. Losing them makes the effects of sea level rise worse. [ 78 ] [ 79 ] Human activities, such as dam building, can prevent natural adaptation processes by restricting sediment supplies to wetlands, resulting in the loss of tidal marshes . [ 80 ] When seawater moves inland, the coastal flooding can cause problems with existing terrestrial ecosystems, such as contaminating their soils. [ 81 ] The Bramble Cay melomys is the first known land mammal to go extinct as a result of sea level rise. [ 82 ] [ 83 ]
Ocean salinity is a measure of how much dissolved salt is in the ocean. The salts come from erosion and transport of dissolved salts from the land. The surface salinity of the ocean is a key variable in the climate system when studying the global water cycle , ocean–atmosphere exchanges and ocean circulation , all vital components transporting heat, momentum, carbon and nutrients around the world. [ 84 ] Cold water is more dense than warm water and salty water is more dense than freshwater. This means the density of ocean water changes as its temperature and salinity changes. These changes in density are the main source of the power that drives the ocean circulation. [ 84 ]
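The dependence of density on temperature and salinity described above can be sketched with a simple linearized equation of state. The reference values and expansion coefficients below are typical textbook approximations, not figures from this article's sources:

```python
# Hedged sketch of a linearized equation of state for seawater,
# illustrating the statement above: colder and saltier water is denser.
# All constants are approximate textbook values, chosen for illustration.

RHO0 = 1027.0         # reference density (kg/m^3)
T0, S0 = 10.0, 35.0   # reference temperature (deg C) and salinity (psu)
ALPHA = 1.7e-4        # thermal expansion coefficient (1/deg C), approximate
BETA = 7.6e-4         # haline contraction coefficient (1/psu), approximate

def seawater_density(temp_c: float, salinity_psu: float) -> float:
    """Approximate seawater density (kg/m^3) from a linear equation of state."""
    return RHO0 * (1.0 - ALPHA * (temp_c - T0) + BETA * (salinity_psu - S0))

# Cold, salty polar water is denser than warm, fresher surface water,
# so it sinks -- the density contrast that helps drive ocean circulation.
print(seawater_density(2.0, 36.0) > seawater_density(25.0, 33.0))  # True
```

Precise oceanographic work uses the full nonlinear equation of state (TEOS-10) rather than this linear approximation, but the sign of each effect is the same.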
Surface ocean salinity measurements taken since the 1950s indicate an intensification of the global water cycle, with high-salinity areas becoming more saline and low-salinity areas becoming less saline. [ 85 ] [ 86 ]
Ocean acidification is the ongoing decrease in the pH of the oceans, caused mainly by the uptake of carbon dioxide from the atmosphere . [ 88 ] The rise in atmospheric carbon dioxide due to the burning of fossil fuels leads to more carbon dioxide dissolving in the ocean. When carbon dioxide dissolves in seawater it forms carbonic acid, which dissociates into hydrogen and bicarbonate ions. The added hydrogen ions increase the acidity of the ocean and reduce the availability of carbonate ions, making survival increasingly harder for microorganisms, shellfish and other marine organisms that depend on calcium carbonate to form their shells. [ 89 ]
Increasing acidity also has the potential to harm marine organisms in other ways, such as by depressing metabolic rates and immune responses in some organisms, and by causing coral bleaching . [ 90 ] Ocean acidity, measured as hydrogen ion concentration, has increased by about 26% since the beginning of the industrial era. [ 91 ] Ocean acidification has been compared to anthropogenic climate change and called the "evil twin of global warming " [ 92 ] and "the other CO 2 problem". [ 93 ]
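Because pH is the negative base-10 logarithm of hydrogen ion concentration, a roughly 26% rise in acidity corresponds to a pH drop of only about 0.1 units. A minimal arithmetic sketch, assuming the commonly cited approximate values of pH 8.2 pre-industrial and 8.1 today (figures not taken from this article's sources):

```python
# Hedged arithmetic sketch: the ~26% increase in ocean acidity equals a
# pH drop of about 0.1 units, since pH is logarithmic. The pH values
# 8.2 and 8.1 are commonly cited approximations, used here as assumptions.

def h_ion(ph: float) -> float:
    """Hydrogen ion concentration (mol/L) for a given pH."""
    return 10.0 ** (-ph)

pre_industrial = h_ion(8.2)   # assumed pre-industrial surface ocean pH
today = h_ion(8.1)            # assumed present-day surface ocean pH
percent_increase = (today / pre_industrial - 1.0) * 100.0
print(f"{percent_increase:.0f}%")  # about 26%
```

The logarithmic scale is why a seemingly small pH change represents a substantial chemical shift for shell-forming organisms.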
Ocean deoxygenation is an additional stressor on marine life. Ocean deoxygenation is the expansion of oxygen minimum zones in the oceans as a consequence of burning fossil fuels . The change has been fairly rapid and poses a threat to fish and other types of marine life, as well as to people who depend on marine life for nutrition or livelihood. [ 94 ] [ 95 ] [ 96 ] [ 97 ] Ocean deoxygenation poses implications for ocean productivity , nutrient cycling, carbon cycling , and marine habitats . [ 98 ] [ 99 ]
Ocean warming exacerbates ocean deoxygenation and further stresses marine organisms, limiting nutrient availability by increasing ocean stratification through density and solubility effects while at the same time increasing metabolic demand. [ 100 ] [ 101 ] According to the IPCC 2019 Special Report on the Ocean and Cryosphere in a Changing Climate , the viability of species is being disrupted throughout the ocean food web due to changes in ocean chemistry . As the ocean warms, mixing between water layers decreases, resulting in less oxygen and nutrients being available for marine life . [ 102 ]
Until recently, ice sheets [ 104 ] were viewed as inert components of the carbon cycle and largely disregarded in global models. Research in the past decade has transformed this view, demonstrating the existence of uniquely adapted microbial communities, high rates of biogeochemical/physical weathering in ice sheets and storage and cycling of organic carbon in excess of 100 billion tonnes, as well as nutrients. [ 105 ]
The diagram on the right shows some human impacts on the marine nitrogen cycle . Bioavailable nitrogen (Nb) is introduced into marine ecosystems by runoff or atmospheric deposition, causing eutrophication , the formation of dead zones and the expansion of the oxygen minimum zones (OMZs). The release of nitrogen oxides (N 2 O, NO) from anthropogenic activities and oxygen-depleted zones causes stratospheric ozone depletion, leading to higher UV-B exposure that damages marine life; nitrogen oxides also contribute to acid rain and ocean warming . Ocean warming causes water stratification, deoxygenation, and the formation of dead zones. Dead zones and OMZs are hotspots for anammox and denitrification , causing nitrogen loss (N 2 and N 2 O). Elevated atmospheric carbon dioxide acidifies seawater, decreasing pH-dependent N-cycling processes such as nitrification, and enhancing N 2 fixation . [ 106 ]
Aragonite is a form of calcium carbonate many marine animals use to build carbonate skeletons and shells. The lower the aragonite saturation level , the more difficult it is for the organisms to build and maintain their skeletons and shells. The map below shows changes in the aragonite saturation level of ocean surface waters between 1880 and 2012. [ 107 ]
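The aragonite saturation level can be sketched numerically as the ion product of dissolved calcium and carbonate divided by aragonite's solubility product; values above 1 favour shell building, while values below 1 mean shells tend to dissolve. The solubility product and concentrations below are illustrative order-of-magnitude assumptions, not measured values:

```python
# Hedged sketch of the aragonite saturation state (omega): the ion
# product of calcium and carbonate over aragonite's solubility product.
# Omega > 1 favours shell building; omega < 1 promotes dissolution.
# All numeric values here are illustrative assumptions.

KSP_ARAGONITE = 6.5e-7  # approximate solubility product (mol^2/kg^2)

def aragonite_saturation(ca_mol_kg: float, co3_mol_kg: float) -> float:
    """Saturation state (omega) from calcium and carbonate concentrations."""
    return (ca_mol_kg * co3_mol_kg) / KSP_ARAGONITE

ca = 0.0103  # seawater calcium (mol/kg), roughly constant
omega_now = aragonite_saturation(ca, 2.0e-4)        # supersaturated (> 1)
omega_acidified = aragonite_saturation(ca, 6.0e-5)  # undersaturated (< 1)
print(omega_now > 1.0, omega_acidified < 1.0)  # True True
```

Because calcium concentration is nearly constant in seawater, the saturation state is effectively controlled by the carbonate ion concentration, which acidification steadily erodes.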
To pick one example, pteropods are a group of widely distributed swimming sea snails . To create their shells, pteropods require aragonite, which is produced from carbonate ions and dissolved calcium. Pteropods are severely affected because increasing acidification has steadily decreased the amount of water supersaturated with the carbonate needed for aragonite formation. [ 108 ]
When the shell of a pteropod was immersed in water with a pH level the ocean is projected to reach by the year 2100, the shell almost completely dissolved within six weeks. [ 109 ] Likewise corals , [ 110 ] coralline algae , [ 111 ] coccolithophores, [ 112 ] foraminifera , [ 113 ] as well as shellfish generally, [ 114 ] all experience reduced calcification or enhanced dissolution as an effect of ocean acidification.
Pteropods and brittle stars together form the base of the Arctic food webs and both are seriously damaged by acidification. Pteropod shells dissolve with increasing acidification and brittle stars lose muscle mass when re-growing appendages. [ 115 ] Additionally the brittle star's eggs die within a few days when exposed to expected conditions resulting from Arctic acidification. [ 116 ] Acidification threatens to destroy Arctic food webs from the base up. Arctic waters are changing rapidly and are advanced in the process of becoming undersaturated with aragonite. [ 108 ] Arctic food webs are considered simple, meaning there are few steps in the food chain from small organisms to larger predators. For example, pteropods are "a key prey item of a number of higher predators – larger plankton, fish, seabirds, whales". [ 117 ]
The rise in agriculture of the past 400 years has increased the exposure of rocks and soils, which has resulted in increased rates of silicate weathering. In turn, the leaching of amorphous silica stocks from soils has also increased, delivering higher concentrations of dissolved silica in rivers. [ 118 ] Conversely, increased damming has led to a reduction in silica supply to the ocean due to uptake by freshwater diatoms behind dams. The dominance of non-siliceous phytoplankton due to anthropogenic nitrogen and phosphorus loading and enhanced silica dissolution in warmer waters has the potential to limit silicon ocean sediment export in the future. [ 118 ]
In 2019 a group of scientists suggested acidification is reducing diatom silica production in the Southern Ocean . [ 119 ] [ 120 ]
As the technical and political challenges of land-based carbon dioxide removal approaches become more apparent, the oceans may be the new "blue" frontier for carbon drawdown strategies in post-Paris climate governance, from nature-based ecosystem management to industrial-scale technological interventions in the Earth system. [ 128 ] Marine carbon dioxide removal approaches are diverse, [ 129 ] [ 130 ] although several resemble key terrestrial proposals: [ 128 ] ocean alkalinisation (adding silicate minerals such as olivine to coastal seawater to increase CO 2 uptake through chemical reactions) is the marine variant of enhanced weathering; blue carbon (enhancing natural biological CO 2 drawdown by coastal vegetation) parallels reforestation; and cultivating marine biomass (i.e., seaweed) for coupling with carbon capture and storage is the marine variant of bioenergy with carbon capture and storage. Wetlands , coasts , and the open ocean are being conceived of and developed as managed carbon removal-and-storage sites, with practices expanded from the use of soils and forests. [ 128 ]
If more than one stressor is present the effects can be amplified. [ 133 ] [ 134 ] For example, the combination of ocean acidification and an elevation of ocean temperature can have a compounded effect on marine life far exceeding the individual harmful impact of either. [ 135 ] [ 136 ] [ 137 ]
While the full implications of elevated CO 2 on marine ecosystems are still being documented, there is a substantial body of research showing that a combination of ocean acidification and elevated ocean temperature, driven mainly by CO 2 and other greenhouse gas emissions , have a compounded effect on marine life and the ocean environment. This effect far exceeds the individual harmful impact of either. [ 135 ] [ 138 ] [ 137 ] In addition, ocean warming exacerbates ocean deoxygenation , which is an additional stressor on marine organisms, by increasing ocean stratification, through density and solubility effects, thus limiting nutrients, [ 139 ] [ 140 ] while at the same time increasing metabolic demand.
The direction and magnitude of the effects of ocean acidification, warming and deoxygenation on the ocean have been quantified by meta-analyses , [ 136 ] [ 142 ] [ 143 ] and have been further tested by mesocosm studies. The mesocosm studies simulated the interaction of these stressors and found a catastrophic effect on the marine food web: the increase in consumption caused by thermal stress more than negates any increase in production passed from primary producers to herbivores under higher carbon dioxide availability. [ 144 ] [ 145 ]
Changes in marine ecosystem dynamics are influenced by socioeconomic activities (for example, fishing, pollution) and human-induced biophysical change (for example, temperature, ocean acidification) and can interact and severely impact marine ecosystem dynamics and the ecosystem services they generate to society. Understanding these direct—or proximate—interactions is an important step towards sustainable use of marine ecosystems. However, proximate interactions are embedded in a much broader socioeconomic context where, for example, economy through trade and finance, human migration and technological advances, operate and interact at a global scale, influencing proximate relationships. [ 146 ]
In 2024 a study [ 147 ] was released on the impact of fishing and non-fishing vessels on the coastal waters of the ocean, where 75% of industrial activity occurs. According to the study: "A third of fish stocks are operated beyond biologically sustainable levels and an estimated 30–50% of critical marine habitats have been lost owing to human industrialization". It notes that, in addition to traditional impacts such as fishing , maritime trade and oil extraction, new ones are emerging, including mining , aquaculture and offshore wind turbines . The study used satellite data to monitor vessels and found that 72–76% of fishing ships and 21–30% of energy and transport ships are "missing from public tracking systems ". When these data were added to previously existing information about publicly tracked ships, several discoveries followed, including:
The study found a significant increase in offshore wind turbines, which had already outnumbered oil platforms by 2021. Fishing increased only slightly in recent years and may begin to decline as fisheries are exhausted. It concluded that "transport and energy vessel traffic may continue to expand, following trends in global trade and the rapid development of renewable energy infrastructure. In this scenario, changes to marine ecosystems brought by infrastructure and vessel traffic may rival fishing in impact".
"Application of the physical and biological sciences has made today arguably the best of times: we live longer and healthier lives, food production has doubled in the past 35 years and energy subsidies have substituted for human labour, washing away hierarchies of servitude. But the unintended consequences of these well-intentioned actions — climate change, biodiversity loss, inadequate water supplies, and much else — could well make tomorrow the worst of times."
Shifting baselines arise in research on marine ecosystems because changes must be measured against some previous reference point (baseline), which in turn may represent significant changes from an even earlier state of the ecosystem. [ 149 ] For example, radically depleted fisheries have been evaluated by researchers who used the state of the fishery at the start of their careers as the baseline, rather than the fishery in its unexploited or untouched state. Areas that swarmed with a particular species hundreds of years ago may have experienced long-term decline, but it is the level a few decades previously that is used as the reference point for current populations. In this way large declines in ecosystems or species over long periods of time were, and are, masked. There is a loss of perception of change that occurs when each generation redefines what is natural or untouched. [ 149 ] | https://en.wikipedia.org/wiki/Human_impact_on_marine_life |
Many river systems are shaped by human activity and anthropogenic forces. [ 1 ] The onset of pervasive human influence on nature, including rivers, is identified with the beginning of the Anthropocene , the proposed successor epoch to the Holocene . [ citation needed ] This long-term impact is analyzed and explained by a wide range of sciences and stands in an interdisciplinary context. The natural water cycle and stream flow are influenced globally and linked to global interconnections. [ 2 ] Rivers are an essential component of the terrestrial realm and have been a preferred location for human settlements throughout history. "River" is the main expression used for the river channels themselves, riparian zones , floodplains and terraces, adjoining uplands dissected by lower channels, and river deltas . [ 3 ]
The relationship between humans and rivers, which represent freshwater environments, is complicated. Rivers serve primarily as a freshwater resource and as sinks for domestic and industrial waste water . The consequences from this usage occur from diverse activities and root themselves in complex, interdisciplinary systems and practices. [ 4 ]
Environmental changes in rivers usually result from human development, such as population growth , the dependence on fossil resources , urbanization , global commerce, and industrial and agricultural emissions . [ 4 ] Anthropogenic activities also include discrete elements like the use of fire, domestication of plants and animals, soil development, the establishment of settlements, and irrigation. [ 3 ] River ecosystems have been transformed downstream from the point of pollution. Active human transformations, such as river engineering , have altered river systems and ecosystems. [ 4 ]
River engineering, a branch of civil engineering , deals with the process of planned human intervention to improve and restore rivers for human and environmental needs. With modern technologies, data collection and modelling, navigation can be improved, dredging reduced and new habitats can be created. River engineering also handles sediment and erosion control , which can be a threat to humankind by destroying infrastructure, hindering water supply and causing major river cutoffs . River training structures will help to modify the hydraulic flow and the sediment response of a river. [ 5 ]
Humans have modified the natural behavior of rivers since before recorded history. The management of water resources , protection against floods and hydropower are not new concepts. Regardless, river engineering has changed in the past century because of environmental concerns. The amount and type of available data about rivers have increased, providing more useful information about the behaviour of rivers and their ecosystems. Engineering experts can therefore analyse and adapt in a more environmentally conscious way. Renaturalisation projects raise awareness for the environment; however, a rapidly growing and urbanizing population needs to be supplied with enough water resources and hydropower energy, which calls for more sustainable solutions. [ 6 ]
Water pollution occurs when water bodies , such as rivers, lakes and oceans are contaminated with harmful substances. These substances degrade the water quality and are toxic to humans as consumers and to the environment. [ 7 ] The contamination in a river can come from a point source or non-point source pollution . [ 8 ] The most common types of surface water pollution are agriculture, sewage and waste water (including stormwater runoff ), oil pollution and radioactive substances . [ 8 ] The agricultural sector consumes a lot of fresh water and is the leading source for water degradation . [ 8 ]
Most settlements in human history were placed along rivers, developing into riverine cities and traceable by their considerable environmental footprint. [ 3 ] The human influence on rivers can be divided into six chronological stages:
While river engineering can improve a river's behaviour or constrain it to fit human infrastructure, and may therefore be judged a positive or negative impact, pollution undoubtedly has a negative impact on the environment. The consequences are complex and difficult to measure and classify, as benefits for humankind often imply drawbacks for the environment and vice versa. [ citation needed ]
Indicators that make the human impact measurable and quantitatively assessable are: artificial water surface ratio, artificial water surface density ratio, disruption of longitudinal connectivity ratio, artificial river ratio, sinuosity of artificial cutoff, channelization ratio, artificial levee ratio, road along river ratio, artificial sediment transport ratio and the integrated river structure impact index. [ 9 ]
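Several of these indicators are simple length ratios along a surveyed reach. The sketch below (in Python) illustrates the idea with two hypothetical examples; the function names, definitions and figures here are simplified assumptions for illustration, not the precise definitions used in the cited study.

```python
# Illustrative length-ratio indicators of human impact on a river reach.
# Definitions are simplified assumptions; the cited study defines each
# indicator (and the integrated index) precisely.

def artificial_river_ratio(artificial_km: float, total_km: float) -> float:
    """Share of channel length that is artificial (channelized or relocated)."""
    return artificial_km / total_km

def artificial_levee_ratio(leveed_km: float, total_km: float) -> float:
    """Share of channel length lined with artificial levees."""
    return leveed_km / total_km

# A hypothetical 120 km reach with 30 km of channelized course and 48 km of levees:
print(artificial_river_ratio(30, 120))  # 0.25
print(artificial_levee_ratio(48, 120))  # 0.4
```

Ratios like these are dimensionless and range from 0 (untouched reach) to 1 (fully artificial), which is what makes them comparable across rivers and suitable for aggregation into an integrated impact index.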
Through anthropogenic impact the material flux of rivers has changed, which enters the sea and has a strong effect on coastal and shelf environments . [ 10 ]
Alternate land use, deforestation , afforestation and different types of river engineering have also led to changes in hydrologic processes, such as runoff . Mushrooming illegal mining activity can, for example, change the soil structure , the pressure gradient between stream flow and groundwater , and the vegetation cover, and therefore lead to increased or decreased runoff. [ citation needed ] In the Lower Pra River Basin in southern Ghana, the share of runoff change linked to human activity is up to approximately 66%. [ 11 ] Human presence and infrastructure have benefited from river management , through the changing and straightening of rivers to make the valuable land around them more livable. [ 12 ]
The consumption of polluted water leads to many deaths. In 2015, 1.8 million people worldwide died because of water pollution and over 1 billion became ill. [ 8 ] Low-income communities in developing countries are especially endangered, because they often live close to industries with high emissions. [ 8 ] Hazards like waterborne pathogens and diseases spread fast in surface water bodies like rivers and are especially threatening in countries without sewage and wastewater treatment systems. [ 8 ]
Large dams and the production of hydropower are an important part of today's energy supply and cover a broad part of river engineering. [ citation needed ] The approach of releasing small quantities of water through turbines responds to the growing power demand of rapidly growing cities; however, it also flattens the rivers' hydrographs and is responsible for a decline in seasonal hydraulic variability and for the loss of delta-building dynamics, as sediments are stored in the reservoir . Small-scale users of the deltas lose the biodiversity and ecosystem productivity on which they depend. [ 13 ] The aquatic ecosystem consists of a chain of organisms which depend on each other. Even when pollution harms only one organism, it can start a chain reaction that endangers the entire aquatic habitat. When newly introduced nutrients provoke plant and algae growth, oxygen levels in the water decrease. [ citation needed ] This process, known as eutrophication , suffocates plants and animals and leads to dead zones , i.e. water habitats without any life. Chemicals and heavy metals from industrial wastewater are also toxic to aquatic life. They can shorten an organism's life span and reduce its ability to reproduce, and they also endanger humans, who may feed on these organisms and take up the toxins. [ 8 ]
Rivers have always been a reliable source for human communities. They have been a preferable place for settlements in early history and still provide a rich environment for big cities. Many trade routes lead along rivers and build global connections. [ 3 ] | https://en.wikipedia.org/wiki/Human_impact_on_river_systems |
Human impact on the nitrogen cycle is diverse. Agricultural and industrial nitrogen (N) inputs to the environment currently exceed inputs from natural N fixation . [ 1 ] As a consequence of anthropogenic inputs, the global nitrogen cycle (Fig. 1) has been significantly altered over the past century. Global atmospheric nitrous oxide (N 2 O) mole fractions have increased from a pre-industrial value of ~270 nmol/mol to ~319 nmol/mol in 2005. [ 2 ] Human activities account for over one-third of N 2 O emissions, most of which are due to the agricultural sector. [ 2 ] This article is intended to give a brief review of the history of anthropogenic N inputs, and reported impacts of nitrogen inputs on selected terrestrial and aquatic ecosystems .
Approximately 78% of Earth's atmosphere is N gas (N 2 ), which is an inert compound and biologically unavailable to most organisms. In order to be utilized in most biological processes, N 2 must be converted to reactive nitrogen (Nr), which includes inorganic reduced forms (NH 3 and NH 4 + ), inorganic oxidized forms (NO, NO 2 , HNO 3 , N 2 O, and NO 3 − ), and organic compounds ( urea , amines , and proteins ). [ 1 ] N 2 has a strong triple bond, and so a significant amount of energy (226 kcal mol −1 ) is required to convert N 2 to Nr. [ 1 ] Prior to industrial processes, the only sources of such energy were solar radiation and electrical discharges. [ 1 ] Utilizing a large amount of metabolic energy and the enzyme nitrogenase , some bacteria and cyanobacteria convert atmospheric N 2 to NH 3 , a process known as biological nitrogen fixation (BNF). [ 4 ] The anthropogenic analogue to BNF is the Haber-Bosch process, in which H 2 is reacted with atmospheric N 2 at high temperatures and pressures to produce NH 3 . [ 5 ] Lastly, N 2 is converted to NO by energy from lightning , which is negligible in current temperate ecosystems, or by fossil fuel combustion. [ 1 ]
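The bond-energy figure quoted above can be cross-checked with a simple unit conversion. The sketch below (in Python, using the standard thermochemical factor of 4.184 kJ per kcal) converts it to kJ/mol, the unit in which the N≡N triple-bond energy is more commonly reported.

```python
# Convert the N2 triple-bond dissociation energy from kcal/mol to kJ/mol.
KCAL_TO_KJ = 4.184  # thermochemical calorie, in kJ

def kcal_per_mol_to_kj(kcal_per_mol: float) -> float:
    """Convert an energy in kcal/mol to kJ/mol."""
    return kcal_per_mol * KCAL_TO_KJ

energy_kj = kcal_per_mol_to_kj(226)
print(round(energy_kj, 1))  # 945.6
```

The result, roughly 946 kJ/mol, is consistent with the commonly cited N≡N bond energy of about 945 kJ/mol, among the strongest of any diatomic molecule, which is why nitrogen fixation is so energetically costly.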
Until 1850, natural BNF, cultivation-induced BNF (e.g., planting of leguminous crops), and incorporated organic matter were the only sources of N for agricultural production. [ 5 ] Near the turn of the century, Nr from guano and sodium nitrate deposits was harvested and exported from the arid Pacific islands and South American deserts. [ 5 ] By the late 1920s, early industrial processes, albeit inefficient, were commonly used to produce NH 3 . [ 1 ] Due to the efforts of Fritz Haber and Carl Bosch , the Haber-Bosch process became the largest source of nitrogenous fertilizer after the 1950s, and replaced BNF as the dominant source of NH 3 production. [ 5 ] From 1890 to 1990, anthropogenically created Nr increased almost ninefold. [ 1 ] During this time, the human population more than tripled, partly due to increased food production.
Since the Industrial Revolution , an additional source of anthropogenic N input has been fossil fuel combustion, which is used to release energy (e.g., to power automobiles). As fossil fuels are burned, high temperatures and pressures provide energy to produce NO from N 2 oxidation. [ 1 ] Additionally, when fossil fuel is extracted and burned, fossil N may become reactive (i.e., NO x emissions). [ 1 ] During the 1970s scientists began to recognize that N inputs were accumulating in the environment and affecting ecosystems. [ 1 ]
Between 1600 and 1990, global reactive nitrogen (Nr) creation had increased nearly 50%. [ 6 ] During this period, atmospheric emissions of Nr species reportedly increased 250% and deposition to marine and terrestrial ecosystems increased over 200%. [ 6 ] Additionally, there was a reported fourfold increase in riverine dissolved inorganic N fluxes to coasts. [ 6 ] Nitrogen is a critical limiting nutrient in many systems, including forests, wetlands, and coastal and marine ecosystems; therefore, this change in emissions and distribution of Nr has resulted in substantial consequences for aquatic and terrestrial ecosystems. [ 7 ] [ 8 ]
Atmospheric N inputs mainly include oxides of N (NO x ), ammonia (NH 3 ), and nitrous oxide (N 2 O) from aquatic and terrestrial ecosystems, [ 4 ] and NO x from fossil fuel and biomass combustion. [ 1 ]
In agroecosystems , fertilizer application has increased microbial nitrification (aerobic process in which microorganisms oxidize ammonium [NH 4 + ] to nitrate [NO 3 − ]) and denitrification (anaerobic process in which microorganisms reduce NO 3 − to atmospheric nitrogen gas [N 2 ]). Both processes naturally leak nitric oxide (NO) and nitrous oxide (N 2 O) to the atmosphere. [ 4 ] Of particular concern is N 2 O, which has an average atmospheric lifetime of 114–120 years, [ 10 ] and is 300 times more effective than CO 2 as a greenhouse gas . [ 4 ] NO x produced by industrial processes, automobiles and agricultural fertilization and NH 3 emitted from soils (i.e., as an additional byproduct of nitrification) [ 4 ] and livestock operations are transported to downwind ecosystems, influencing N cycling and nutrient losses. Six major effects of NO x and NH 3 emissions have been cited: [ 1 ] 1) decreased atmospheric visibility due to ammonium aerosols (fine particulate matter [PM]); 2) elevated ozone concentrations; 3) ozone and PM effects on human health (e.g. respiratory diseases , cancer ); 4) increases in radiative forcing and global climate change ; 5) decreased agricultural productivity due to ozone deposition; and 6) ecosystem acidification [ 11 ] and eutrophication .
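The relative climate impact of such emissions is usually expressed in CO 2 equivalents. A minimal sketch, taking the ~300× figure above as the global warming potential (GWP) of N 2 O; note that IPCC assessments give horizon-dependent values near this figure, so the exact multiplier is an assumption:

```python
# CO2-equivalent of an N2O emission, using GWP ~ 300 (figure from the text;
# published GWP values for N2O vary with the time horizon chosen).
GWP_N2O = 300

def n2o_to_co2e(n2o_tonnes: float) -> float:
    """Convert tonnes of N2O to tonnes of CO2-equivalent."""
    return n2o_tonnes * GWP_N2O

# One tonne of N2O has roughly the warming effect of 300 tonnes of CO2:
print(n2o_to_co2e(1.0))  # 300.0
```

This linear scaling is the standard way greenhouse-gas inventories aggregate different gases, and it is why even the comparatively small mass of agricultural N 2 O emissions carries a disproportionate climate weight.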
Terrestrial and aquatic ecosystems receive Nr inputs from the atmosphere through wet and dry deposition. [ 1 ] Atmospheric Nr species can be deposited to ecosystems in precipitation (e.g., NO 3 − , NH 4 + , organic N compounds), as gases (e.g., NH 3 and gaseous nitric acid [HNO 3 ]), or as aerosols (e.g., ammonium nitrate [NH 4 NO 3 ]). [ 1 ] Aquatic ecosystems receive additional nitrogen from surface runoff and riverine inputs. [ 8 ]
Increased N deposition can acidify soils, streams, and lakes and alter forest and grassland productivity. In grassland ecosystems, N inputs have produced initial increases in productivity followed by declines as critical thresholds are exceeded. [ 1 ] Nitrogen effects on biodiversity , carbon cycling , and changes in species composition have also been demonstrated. In highly developed areas of near shore coastal ocean and estuarine systems, rivers deliver direct (e.g., surface runoff ) and indirect (e.g., groundwater contamination) N inputs from agroecosystems. [ 8 ] Increased N inputs can result in freshwater acidification and eutrophication of marine waters.
Much of terrestrial growth in temperate systems is limited by N; therefore, N inputs (i.e., through deposition and fertilization) can increase N availability, which temporarily increases N uptake, plant and microbial growth, and N accumulation in plant biomass and soil organic matter . [ 12 ] Incorporation of greater amounts of N in organic matter decreases C:N ratios, increasing mineral N release (NH 4 + ) during organic matter decomposition by heterotrophic microbes (i.e., ammonification ). [ 13 ] As ammonification increases, so does nitrification of the mineralized N. Because microbial nitrification and denitrification are "leaky", N deposition is expected to increase trace gas emissions. [ 14 ] Additionally, with increasing NH 4 + accumulation in the soil, nitrification processes release hydrogen ions, which acidify the soil. NO 3 − , the product of nitrification, is highly mobile and can be leached from the soil, along with positively charged alkaline minerals such as calcium and magnesium. [ 4 ] In acid soils, mobilized aluminium ions can reach toxic concentrations, negatively affecting both terrestrial and adjacent aquatic ecosystems.
Anthropogenic sources of N generally reach upland forests through deposition . [ 15 ] A potential concern of increased N deposition due to human activities is altered nutrient cycling in forest ecosystems. Numerous studies have demonstrated both positive and negative impacts of atmospheric N deposition on forest productivity and carbon storage. Added N is often rapidly immobilized by microbes , [ 16 ] and the effect of the remaining available N depends on the plant community's capacity for N uptake. [ 17 ] In systems with high uptake, N is assimilated into the plant biomass, leading to enhanced net primary productivity (NPP) and possibly increased carbon sequestration through greater photosynthetic capacity. However, ecosystem responses to N additions are contingent upon many site-specific factors including climate, land-use history, and amount of N additions. For example, in the Northeastern United States, hardwood stands receiving chronic N inputs have demonstrated greater capacity to retain N and increase annual net primary productivity (ANPP) than conifer stands. [ 18 ] Once N input exceeds system demand, N may be lost via leaching and gas fluxes. When available N exceeds the ecosystem's (i.e., vegetation, soil, and microbes , etc.) uptake capacity, N saturation occurs and excess N is lost to surface waters, groundwater, and the atmosphere. [ 12 ] [ 17 ] [ 18 ] N saturation can result in nutrient imbalances (e.g., loss of calcium due to nitrate leaching) and possible forest decline. [ 13 ]
A 15-year study of chronic N additions at the Harvard Forest Long Term Ecological Research ( LTER ) program has elucidated many impacts of increased nitrogen deposition on nutrient cycling in temperate forests. It found that chronic N additions resulted in greater leaching losses, increased pine mortality, and cessation of biomass accumulation. [ 18 ] Another study reported that chronic N additions resulted in accumulation of non-photosynthetic N and subsequently reduced photosynthetic capacity, apparently leading to severe carbon stress and mortality. [ 17 ] These findings contradict previous hypotheses that increased N inputs would increase NPP and carbon sequestration .
Many plant communities have evolved under low nutrient conditions; therefore, increased N inputs can alter biotic and abiotic interactions, leading to changes in community composition. Several nutrient addition studies have shown that increased N inputs lead to dominance of fast-growing plant species, with associated declines in species richness. [ 19 ] [ 20 ] [ 21 ] Fast growing species have a greater affinity for nitrogen uptake, and will crowd out slower growing plant species by blocking access to sunlight with their higher above ground biomass. [ 22 ] Other studies have found that secondary responses of the system to N enrichment, including soil acidification and changes in mycorrhizal communities have allowed stress-tolerant species to out-compete sensitive species. [ 11 ] [ 23 ] Trees that have arbuscular mycorrhizal associations are more likely to benefit from an increase in soil nitrogen, as these fungi are unable to break down soil organic nitrogen. [ 24 ] Two other studies found evidence that increased N availability has resulted in declines in species-diverse heathlands . Heathlands are characterized by N-poor soils, which exclude N-demanding grasses; however, with increasing N deposition and soil acidification , invading grasslands replace lowland heath. [ 25 ] [ 26 ]
In a more recent experimental study of N fertilization and disturbance (i.e., tillage) in old field succession, it was found that species richness decreased with increasing N, regardless of disturbance level. Competition experiments showed that competitive dominants excluded competitively inferior species between disturbance events. With increased N inputs, competition shifted from belowground to aboveground (i.e., to competition for light), and patch colonization rates significantly decreased. These internal changes can dramatically affect the community by shifting the balance of competition-colonization tradeoffs between species. [ 21 ] In patch-based systems, regional coexistence can occur through tradeoffs in competitive and colonizing abilities given sufficiently high disturbance rates. [ 27 ] That is, with inverse ranking of competitive and colonizing abilities, plants can coexist in space and time as disturbance removes superior competitors from patches, allowing for establishment of superior colonizers. However, as demonstrated by Wilson and Tilman, increased nutrient inputs can negate tradeoffs, resulting in competitive exclusion of these superior colonizers/poor competitors. [ 21 ]
Aquatic ecosystems also exhibit varied responses to nitrogen enrichment. NO 3 − loading from N saturated, terrestrial ecosystems can lead to acidification of downstream freshwater systems and eutrophication of downstream marine systems. Freshwater acidification can cause aluminium toxicity and mortality of pH-sensitive fish species. Because marine systems are generally nitrogen-limited, excessive N inputs can result in water quality degradation due to toxic algal blooms, oxygen deficiency, habitat loss, decreases in biodiversity , and fishery losses. [ 8 ]
Atmospheric N deposition in terrestrial landscapes can be transformed through soil microbial processes to biologically available nitrogen, which can result in surface-water acidification and loss of biodiversity . NO 3 − and NH 4 + inputs from terrestrial systems and the atmosphere can acidify freshwater systems when there is little buffering capacity due to soil acidification . [ 8 ] N pollution in Europe, the Northeastern United States, and Asia is a current concern for freshwater acidification . [ 28 ] Lake acidification studies in the Experimental Lakes Area (ELA) in northwestern Ontario clearly demonstrated the negative effects of increased acidity on a native fish species: lake trout ( Salvelinus namaycush ) recruitment and growth dramatically decreased due to extirpation of its key prey species during acidification. [ 29 ] Reactive nitrogen from agriculture, animal-raising, fertilizer, septic systems, and other sources has raised nitrate concentrations in the waterways of most industrialized nations. Nitrate concentrations in 1,000 Norwegian lakes doubled in less than a decade. Nitrate concentrations in rivers of the northeastern United States and most of Europe have increased ten- to fifteen-fold over the last century. Reactive nitrogen can contaminate drinking water through runoff into streams, lakes, rivers, and groundwater. In the United States alone, as much as 20% of groundwater sources exceed the World Health Organization's limit for nitrate concentration in potable water. These high concentrations can cause "blue baby disease" ( methemoglobinemia ), in which nitrate ions weaken the blood's capacity to carry oxygen. Studies have also linked high nitrate concentrations to reproductive issues and an increased risk of some cancers, such as bladder and ovarian cancer. [ 30 ]
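A hedged sketch of the drinking-water check implied above: it assumes the WHO guideline value of 50 mg/L for nitrate (as NO 3 − ), the figure commonly cited for drinking-water quality; the article itself does not state the number, so treat the threshold as an assumption.

```python
# Flag water samples exceeding the WHO nitrate guideline for drinking water.
# 50 mg/L (as NO3-) is the commonly cited WHO guideline value; it is used
# here as an assumption, not a figure taken from this article.
WHO_NITRATE_LIMIT_MG_L = 50.0

def exceeds_who_limit(nitrate_mg_l: float) -> bool:
    """True if a sample's nitrate concentration exceeds the guideline."""
    return nitrate_mg_l > WHO_NITRATE_LIMIT_MG_L

# Hypothetical groundwater samples (mg/L as nitrate):
samples = {"well_A": 12.0, "well_B": 63.5, "well_C": 50.0}
flagged = [name for name, conc in samples.items() if exceeds_who_limit(conc)]
print(flagged)  # ['well_B']
```

Note that some jurisdictions express the same limit as nitrate-nitrogen (roughly 10–11 mg/L as N), so the units of a measurement must be checked before comparing against any threshold.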
Urbanization, deforestation, and agricultural activities largely contribute sediment and nutrient inputs to coastal waters via rivers. [ 8 ] Increased nutrient inputs to marine systems have shown both short-term increases in productivity and fishery yields, and long-term detrimental effects of eutrophication . Tripling of NO 3 − loads in the Mississippi River in the last half of the 20th century have been correlated with increased fishery yields in waters surrounding the Mississippi delta; [ 31 ] however, these nutrient inputs have produced seasonal hypoxia (oxygen concentrations less than 2–3 mg L −1 , " dead zones ") in the Gulf of Mexico . [ 1 ] [ 8 ] In estuarine and coastal systems, high nutrient inputs increase primary production (e.g., phytoplankton , sea grasses, macroalgae), which increase turbidity with resulting decreases in light penetration throughout the water column. Consequently, submerged vegetation growth declines, which reduces habitat complexity and oxygen production. The increased primary (i.e., phytoplankton, macroalgae, etc.) production leads to a flux of carbon to bottom waters when decaying organic matter (i.e., senescent primary production) sinks and is consumed by aerobic bacteria lower in the water column. As a result, oxygen consumption in bottom waters is greater than diffusion of oxygen from surface waters. Additionally, certain algal blooms termed harmful algal blooms (HABs) produce toxins that can act as neuromuscular or organ damaging compounds. These algal blooms can be harmful to other marine life as well as to humans. [ 32 ] [ 33 ]
The above system responses to reactive nitrogen (Nr) inputs have almost exclusively been studied separately; however, research increasingly indicates that nitrogen loading problems are linked by multiple pathways transporting nutrients across system boundaries. [ 1 ] This sequential transfer between ecosystems is termed the nitrogen cascade. [ 6 ] ( see illustration from United Nations Environment Programme ). During the cascade, some systems accumulate Nr, which results in a time lag in the cascade and enhanced effects of Nr on the environment in which it accumulates. Ultimately, anthropogenic inputs of Nr are either accumulated or denitrified; however, little progress has been made in determining the relative importance of Nr accumulation and denitrification , which has been mainly due to a lack of integration among scientific disciplines. [ 1 ] [ 34 ]
Most Nr applied to global agroecosystems cascades through the atmosphere and aquatic and terrestrial ecosystems until it is converted to N 2 , primarily through denitrification . [ 1 ] Although terrestrial denitrification produces gaseous intermediates (nitric oxide [NO] and nitrous oxide [N 2 O]), the last step—microbial production of N 2 — is critical because atmospheric N 2 is a sink for Nr. [ 34 ] Many studies have clearly demonstrated that managed buffer strips and wetlands can remove significant amounts of nitrate (NO 3 − ) from agricultural systems through denitrification . [ 35 ] Such management may help attenuate the undesirable cascading effects and eliminate environmental Nr accumulation. [ 1 ]
Human activities dominate the global and most regional N cycles. [ 36 ] N inputs have shown negative consequences for both nutrient cycling and native species diversity in terrestrial and aquatic systems. In fact, due to long-term impacts on food webs, Nr inputs are widely considered the most critical pollution problem in marine systems. [ 8 ] In both terrestrial and aquatic ecosystems, responses to N enrichment vary; however, a general re-occurring theme is the importance of thresholds (e.g., nitrogen saturation ) in system nutrient retention capacity. In order to control the N cascade, there must be integration of scientific disciplines and further work on Nr storage and denitrification rates. [ 34 ] | https://en.wikipedia.org/wiki/Human_impact_on_the_nitrogen_cycle |
Human-information interaction or HII is the formal term for information behavior research in archival science ; the term was coined by Nahum Gershon in 1995. [ 1 ] HII findings are not directly transferable from analog to digital research, because nonprofessional researchers greatly emphasize the need for further elaboration of the context and scope elements of finding aids. Researchers in HII take on many tasks, including helping to design information systems, from a biological perspective, that conform to the requirements of different segments of society, along with other work intended to improve interaction between humans and information systems. HII is generally considered multi-disciplinary, as different disciplines have different viewpoints on these interactions and their consequences. [ 2 ] HII is considered especially important due to humanity's dependence on information and the technology needed to access it. [ 3 ]
This article relating to library science or information science is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Human_information_interaction |
Various cultures throughout Africa have developed unique interactions with insects : as food sources, for sale or trade in markets, or for use in traditional practices and rituals, as ethnomedicine or as part of their traditional ecological knowledge . As food (a practice known as entomophagy ), a variety of insects are collected as a protein-rich source of nutrition for marginal communities. [ 1 ] Entomophagy had been part of traditional culture throughout Africa, though this activity has been diminishing gradually with the influx of Western culture and market economies. [ 1 ] [ 2 ] [ 3 ] Often the collection of insects for food has been the activity of children, both boys and girls.
Within Southern Africa different communities have established practices for regulating and maintaining their insect harvests. Some groups, through taboos , ritual , and hierarchical organizational structures acting as regulating bodies, have maintained their traditional practice for centuries. [ 3 ] They monitor the development of certain caterpillar species' life cycles to ensure proper time frame for harvesting and sustainability. [ 3 ]
Understanding the diversity of relationships to nature is a crucial aspect of fully grasping and contending with the challenges of modernity and ecology. A January 2012 report from the Food and Agriculture Organization of the United Nations recommended that insects be utilized both for human consumption and for animal feed. [ 4 ] However, as the climate changes, many agencies are reporting on the risk of decline in insect populations within the larger ongoing phenomenon of biodiversity loss , and how it may affect the world's ecology . [ 5 ] [ 6 ]
Maize is a staple crop of Blouberg , Limpopo . Yet because processing removes the germ and pericarp , maize is a poor source of protein and often requires supplementation. [ 1 ] Within the Blouberg region of Limpopo there are some 30 species of insect considered edible, and of those, the caterpillar Hemijana variegata Rothschild ( Lepidoptera : Eupterotidae ) is considered a delicacy while being nutritionally sound. [ 1 ] Depending on how it is prepared, its nutritional values of protein, carbohydrate, fat, and essential vitamins vary. According to B.A. Egan et al. (2014), the fortification of staple cereals with insects would positively affect the protein content of the community's diet, and should be promoted as a healthy alternative to beef. [ 1 ]
Hemijana variegata Rothschild are sold in local markets in the village of Ga Manaka. The caterpillars are collected by locals in the forests surrounding Blouberg Mountain and transported back for preparation. Local residents report that it is important to wash the caterpillars after collection: they are washed three times and purged, then boiled in salty water for an hour. They are then sun-dried until brittle and the hairs are "shaken off by ' winnowing ' in a basket or bucket." [ 1 ]
Hemijana variegata has a protein content per gram that exceeds that of more common livestock such as cattle or chickens. [ 1 ] The energy and protein content of traditionally sun-dried caterpillars is lower than that of oven-dried ones. [ 1 ] The energy content of the caterpillars (552 kcal /100 g) is greater than that of beef (112 to 115 kcal/100 g), goat meat (96.36 to 101.47 kcal/100 g), and chicken (144 kcal/100 g). [ 1 ] The fat content, at 20%, is higher than that of beef or chicken . [ 1 ] The vitamin C content was measured at 14.15 mg/100 g, compared to 30 mg/100 g in peas and over 90 mg/100 g in broccoli . [ 1 ]
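The energy figures quoted above can be cross-checked with simple arithmetic. The following sketch is illustrative only: the values are taken directly from the cited ranges (using the upper end of each range), and the dictionary labels are informal shorthand rather than anything from the source.

```python
# Energy densities quoted in the text, in kcal per 100 g.
# Upper ends of the quoted ranges are used for beef and goat meat.
energy_kcal_per_100g = {
    "H. variegata caterpillar": 552,
    "beef": 115,
    "goat meat": 101.47,
    "chicken": 144,
}

caterpillar = energy_kcal_per_100g["H. variegata caterpillar"]

# How many times more energy-dense the caterpillar is than each meat:
ratios = {
    food: round(caterpillar / kcal, 1)
    for food, kcal in energy_kcal_per_100g.items()
    if food != "H. variegata caterpillar"
}
print(ratios)  # the caterpillar supplies several times the energy of common meats by mass
```

On these figures the caterpillar is roughly four to five times as energy-dense as the meats listed, which is consistent with the text's claim that it exceeds common livestock nutritionally.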
Caterpillars such as Gonimbrasia belina , or mopane, are a staple protein source for the communities of the Northern Province of South Africa (formerly Venda ). [ 2 ] Caterpillars are one of the many insects traded in wide-reaching markets (southern Zimbabwe , eastern Botswana and the northern provinces of South Africa , formerly known as Transvaal ); other traded species include soldier termites ( Macrotermes : Termitidae , Isoptera ), green bugs ( Encosternum : Tessaratomidae , Hemiptera ) and flying termites ( Isoptera ). [ 2 ] Within rural communities still practicing traditional diets, grasshoppers and mopane worms are considered vital to the subsistence economy and the most important insects for nutrition. [ 2 ] The catch per unit of trapping time varies, depending predominantly on the level of rainfall but also on other environmental conditions. [ 2 ] Within rural communities, grasshoppers and locusts are often trapped for personal consumption rather than for sale in a market. [ 2 ]
In a 1996 survey of the community, upwards of 70% of rural households reported consuming grasshoppers regularly, with an estimated daily intake of 14 grams . [ 2 ] Collecting grasshoppers for consumption is considered a common activity for young boys and girls, as well as older women, but not for older men. [ 2 ] Grasshoppers are a free source of nutritious food and as such are important for the sustenance of communities marginal to market economies; as much as 2350 tons of grasshoppers were estimated to be harvested over a period of eight months. [ 2 ]
Within the Venda language, Tshivenda , locusts and grasshoppers generally share a name, nzie . The stages of the insect's life are also distinctly named: nymphs as vhulka , and the pre-adult stages as thathakubi or dengulamukumbi . [ 2 ] Researchers documented approximately 155 vernacular names for varieties of grasshoppers, which varied based on the local communities queried, of which most respondents were children. [ 2 ] Overall, the vernacular names represented 42 species of grasshoppers. [ 2 ] Many varied species of grasshopper have vivid linguistic descriptors, based on appearance, behaviors, habits, where they are found, or the sound they make. [ 2 ]
Grasshoppers , or bapu , are used for a variety of ailments, and different preparations have different medicinal properties according to the ethnomedicine of the communities studied. Some examples: when bapu is fried, it is eaten as a treatment for young children who wet the bed; dried and ground bapu in warm water is used to treat nightmares; boiled bapu is for hyperactive children; ground and then burnt bapu mixed with petroleum jelly is applied to the fontanel of newborns to strengthen them; and the ashes of roasted bapu are rubbed onto women's breasts to alleviate pain. [ 2 ]
Some species of grasshopper are, for various reasons, thought to be inedible or dangerous. Beyond inedibility, there are beliefs associated with the consumption of certain grasshoppers: eating those attracted to fire, for example, is said to lead to madness or the loss of one's hearing. [ 2 ] Losing one's sanity is a persistent fear associated with eating grasshoppers that live near one's house. [ 2 ] Other forbidden species are silivhindi and banzi ( Pyrgomorphidae ), which have a distinctly bad odor and are thought to be toxic to both humans and dogs. [ 2 ] Within Zionist African Churches, many insects such as grasshoppers and locusts are thought of as unclean, which translates into a stigma against eating them for fear of association. [ 2 ]
Several species are believed to turn into a snake if certain practices are not followed: mutotombudzi ( Acrida spp., Truxalis spp.) requires removal of the antennae, and nzie-luvhele ( Cyrtacanthracris fatarica ) must be squashed in a specific manner. [ 2 ] The folklore associated with nyammbeulwana is that it can cause one to lose one's hair or blood if it lands on one's head. [ 2 ] Because of the belief that tshikwandavhokopfu ("powder eater") often eats human and cow feces, some do not eat it. [ 2 ] Other species have foul tastes or are associated with snakes, which often leads to their not being eaten.
The Bisa people inhabit the Kopa area of Mpika district of northern Zambia ( latitude 11° 00'–13° 30' south; longitude 29° 45'–32° 30' east). [ 3 ] They practice traditional subsistence farming , hunting and caterpillar collection, which is essential to their culture. K.J. Mbata et al. (2002) conducted a household survey in 2000 to better understand their customs and knowledge concerning caterpillar harvesting; 89.1% of respondents practiced caterpillar harvesting in the surrounding miombo woodlands . [ 3 ] Of the eight edible species said to live in the region, the two best known for harvesting were Gynanisa maja Strand ( chipumi ) and Gonimbrasia zambesina Walker ( mumpa ). Owing mostly to its size, flavor, common lack of thorns or urticating hairs , and its market value, Gynanisa maja is the most popular. [ 3 ] The Bisa believe that the caterpillars have been with them since time immemorial, as gifts from God, and this respectful belief has helped them formulate sustainable traditional management systems. [ 3 ]
The traditional ecological knowledge of the life cycles and harvesting practices has been taught through oral education and shared experiences over centuries, developed in interaction with the local environment. The Bisa identify caterpillar species in various ways, among them the sound the caterpillars make while eating and the plants on which they feed. [ 3 ] They understand the life cycles of the harvested caterpillars, recognizing the stages: egg , larva , pupa , and then adult . From early September to late October the caterpillars oviposit , and harvesting is done during the rainy season between November and April. [ 3 ] Taboos and specific seasonal management for harvest are some of the regulatory mechanisms practiced by the Bisa to teach proper traditional hunting behavior, to protect the maturation process and life cycle of the caterpillar, and to ensure the sustainability of the caterpillar and the health of the ecosystem. [ 3 ]
Traditional technologies protect the habitat of the caterpillars, such as the use of fire to prevent natural blazes from consuming the host trees. [ 3 ]
The Bisa people's monitoring of the caterpillars is often reproduced and learned through ritual behavior, performed by members of senior chief Kopa's royal establishment. [ 3 ] These rituals act as a regulator for the harvesting of the caterpillar and involve many layers of the community. Village scouts walk through the woodlands daily and report the location of eggs within the chiefdom back to the senior chief. In one such ritual, performed to thank the ancestral Bisa spirits for the edible caterpillars, the senior chief's assistant ( chilukuta ) places a white cloth in the shrine at the burial site of the senior chiefs ( chaipinda ). [ 3 ] The white cloth is cut into two parts: half stays at the shrine while the other half is cut into smaller pieces. Believed to bless the developing caterpillars, the smaller pieces are used by the chief's male grandchildren to mark the host plants . [ 3 ]
As the eggs begin to hatch, the monitors gather several to present to the chief, who convenes a meeting of himself, his adviser and sub-chiefs, and his senior wife. [ 3 ] The chief's wife ( mukolo-wa-chalo or "mother-of-the-land") will offer the young caterpillars to the ancestral Bisa spirits at the shrine ( babenye ) in a ritual known locally as Ukuposela. [ 3 ] Once the caterpillars have begun reaching maturity and samples have been brought by monitors to the senior chief, another meeting is called and more caterpillars are offered up by the senior wife, who, following the offering, eats the caterpillars that were not offered. [ 3 ] A third meeting, in which the wife does not participate, is called to set a harvesting date; representatives of buyers from outside the chiefdom may be invited. In a further meeting, at which no outside representative participates, a price is set for caterpillar harvests. [ 3 ]
The Bisa people have established rules and taboos for harvesting, such as a stoppage directive issued by the senior chief. [ 3 ] The signal to begin harvesting is generally given at the beginning of November and the signal to stop around mid-December. [ 3 ] Other taboos and associated beliefs include: collecting caterpillars before or after the signals is believed to lead to those involved getting lost; it is forbidden to roast the caterpillars in an open fire or eviscerate them with a knife; noisy or sexual behavior is forbidden while harvesting; and consuming young caterpillars would make people go insane . [ 3 ]
Food and Agriculture Organization of the United Nations (January 2012). Expert consultation: "Assessing the Potential of Insects as Food and Feed in Assuring Food Security." http://www.fao.org/3/an233e/an233e00.pdf [ 4 ]
The human interactome is the set of protein–protein interactions (the interactome ) that occur in human cells. [ 1 ] [ 2 ] The sequencing of reference genomes, in particular the Human Genome Project , has revolutionized human genetics , molecular biology , and clinical medicine . Genome-wide association study (GWAS) results have led to the association of genes with most Mendelian disorders, [ 3 ] and over 140 000 germline mutations have been associated with at least one genetic disease. [ 4 ] However, it became apparent that inherent to these studies is an emphasis on clinical outcome rather than a comprehensive understanding of human disease; indeed, to date the most significant contributions of GWAS have been restricted to the “low-hanging fruit” of direct single-mutation disorders, prompting a systems biology approach to genomic analysis. [ 5 ] [ 6 ] The connection between genotype and phenotype (how variation in genotype affects disease or the normal functioning of the cell and the human body) remains elusive, especially in the context of multigenic complex traits and cancer. [ 7 ] To assign functional context to genotypic changes, much recent research effort has been devoted to mapping the networks formed by interactions of cellular and genetic components in humans, as well as how these networks are altered by genetic and somatic disease.
With the sequencing of the genomes of a diverse array of model organisms, it became clear that the number of genes does not correlate with the human perception of relative organism complexity – the human genome contains some 20 000 protein-coding genes, [ 8 ] fewer than in some species such as corn. A statistical approach to calculating the number of interactions in humans gives an estimate of around 650 000, one order of magnitude larger than in Drosophila and three times larger than in C. elegans . [ 2 ] As of 2008, less than 0.3% of all estimated interactions among human proteins had been identified, [ 9 ] although recent years have seen exponential growth in discovery – as of 2015, [ 10 ] over 210 000 unique positive human protein–protein interactions are catalogued, and the BioGRID database contains almost 750 000 literature-curated PPIs for 30 model organisms, 300 000 of which are verified or predicted human physical or genetic protein–protein interactions, a 50% increase from 2013. [ 11 ] The currently available information on the human interactome network originates from literature-curated interactions, [ 12 ] high-throughput experiments , [ 10 ] or potential interactions predicted from interactome data, whether through phylogenetic profiling (evolutionary similarity), statistical network inference, [ 13 ] or text/literature mining methods. [ 14 ]
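The scale of the mapping problem follows from simple combinatorics. The sketch below is illustrative only, using the estimates quoted above; it compares the ~650 000 statistically estimated interactions with the full pairwise search space over ~20 000 proteins.

```python
from math import comb

# Rough scale of the human interactome search space (estimates from the text).
n_proteins = 20_000                   # ~protein-coding genes in the human genome
possible_pairs = comb(n_proteins, 2)  # every possible binary pair: n*(n-1)/2
estimated_interactions = 650_000      # statistical estimate of real interactions

# Real interactions are a tiny sliver of the pairwise search space,
# which is why unbiased all-by-all screens are so laborious.
fraction_of_space = estimated_interactions / possible_pairs
print(f"{possible_pairs:,} possible pairs; estimated interactions "
      f"cover {fraction_of_space:.4%} of the space")

# As of 2008, <0.3% of the ~650,000 estimated interactions had been identified:
identified_2008 = 0.003 * estimated_interactions
print(f"under ~{identified_2008:,.0f} interactions identified by 2008")
```

With ~200 million candidate pairs against ~650 000 expected true interactions, any screening method with even a modest false-positive rate will report far more spurious pairs than real ones, which is why the multi-source validation described below is essential.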
Protein–protein interactions are only the raw material for networks. To form useful interactome databases and create integrated networks, other types of data that can be combined with protein–protein interactions include information on gene expression and co-expression, cellular co-localization of proteins (based on microscopy ), genetic information, metabolic and signalling pathways , and more. [ 15 ] The end goal of unravelling human protein interactomes is ultimately to understand mechanisms of disease and uncover previously unknown disease genes. It has been found that proteins with a high number of interactions (outward edges) are significantly more likely to be hubs in modules that correlate with disease, [ 10 ] [ 16 ] probably because proteins with more interactions are involved in more biological functions. By mapping disease alterations to the human interactome, we can gain a much better understanding of the pathways and biological processes of disease. [ 17 ]
Analysis of metabolic networks of proteins hearkens back to the 1940s, but it was not until the late 1990s and early 2000s that computational data-driven genomic analyses to predict functional context and networks of genetic associations appeared in earnest. [ 8 ] Since then, the interactomes of many model organisms are considered to have been well characterized, notably the Saccharomyces cerevisiae Interactome [ 18 ] and the Drosophila interactome. [ 19 ]
High-throughput experimental approaches for discovering protein–protein interactions typically perform a version of the two-hybrid screening approach or tandem affinity purification followed by mass spectrometry . [ 12 ] Information from experiments and literature curation is compiled into databases of protein interactions, such as DIP [ 20 ] and BioGRID . [ 11 ] A more recent effort, HINT-KB, [ 10 ] attempts to amalgamate most of the current PPI databases, while filtering out systematically erroneous interactions and correcting for inherent sociological sampling biases in literature-curated datasets.
Smaller human interactome networks have been described in the specific context of important drivers of many different disorders, including neurodegenerative disorders , [ 21 ] autism and other psychiatric disorders, [ 22 ] and cancer. Cancer gene networks have been particularly well studied, due in part to large genome initiatives such as The Cancer Genome Atlas (TCGA). [ 23 ] A large portion of the mutational landscape including intra-tumoural heterogeneity has been mapped for most common types of cancers [ 24 ] (for example, breast cancer has been well studied), [ 25 ] and many studies have also investigated the difference between active driver genes and passive passenger mutations in the context of cancer interaction networks. [ 16 ]
The first attempts at large-scale integrative human interactome mapping occurred around 2005. Stelzl et al. [ 26 ] used a protein matrix of 4500 baits and 5600 preys in a yeast two-hybrid system to piece together the interactome, and Rual et al. performed a similar yeast two-hybrid study verified with co-affinity purification and correlation with other biological attributes, revealing more than 300 connections to 100 disease-associated proteins. [ 12 ] Since those pioneering efforts, hundreds of similar studies have been conducted. Compiled databases such as UniHI [ 27 ] provide a single point of entry. Futschik et al. [ 28 ] performed a meta-analysis of eight interactome maps and found that, of 57 000 interacting proteins in total, there was only a small (albeit statistically significant) overlap between the different databases, indicating considerable selection and detection biases.
In 2010, around 130 000 binary interactions in the interactome were described in the most popular databases, but many were verified by only one source. [ 15 ] Despite the rapid development of high-throughput methods, datasets still suffer from high rates of false positives and low coverage of the interactome. Tyagi et al. [ 29 ] described a novel framework for incorporating structural complexes and binding interfaces for verification. This was part of a much larger effort toward PPI verification; interaction networks are typically validated further by using a combination of coexpression profiles, protein structural information, Gene Ontology terms, topological considerations , and colocalization [ 26 ] [ 30 ] before being considered “high-confidence”.
A November 2014 resource paper [ 17 ] attempted to provide a more comprehensive proteome-level map of the human interactome. It found vast uncharted territory in the human interactome, and used diverse methods to build a new interactome map correcting for curation bias, including probing all pairwise combinations of 13 000 protein products for interaction using yeast two-hybrid screening and co-affinity purification, in a massive coordinated effort across research labs in Canada and the United States. However, this still represents confirmation of but a fraction of expected interactions – around 30 000 of high confidence. Despite the coordinated efforts of many, the human interactome is still very much a work in progress. [ 17 ] [ 30 ]
Human iron metabolism is the set of chemical reactions that maintain human homeostasis of iron at the systemic and cellular level. Iron is both necessary to the body and potentially toxic. Controlling iron levels in the body is a critically important part of many aspects of human health and disease. Hematologists have been especially interested in systemic iron metabolism , because iron is essential for red blood cells , where most of the human body's iron is contained. Understanding iron metabolism is also important for understanding diseases of iron overload , such as hereditary hemochromatosis , and iron deficiency , such as iron-deficiency anemia .
Iron is an essential bioelement for most forms of life, from bacteria to mammals . Its importance lies in its ability to mediate electron transfer. In the ferrous state (Fe 2+ ), iron acts as an electron donor , while in the ferric state (Fe 3+ ) it acts as an acceptor . Thus, iron plays a vital role in the catalysis of enzymatic reactions that involve electron transfer (reduction and oxidation, redox ). Proteins can contain iron as part of different cofactors , such as iron–sulfur clusters (Fe-S) and heme groups, both of which are assembled in mitochondria .
Human cells require iron in order to obtain energy as ATP from a multi-step process known as cellular respiration, more specifically from oxidative phosphorylation at the mitochondrial cristae . Iron is present in the iron–sulfur cluster and heme groups of the electron transport chain proteins that generate a proton gradient that allows ATP synthase to synthesize ATP ( chemiosmosis ).
Heme groups are part of hemoglobin , a protein found in red blood cells that serves to transport oxygen from the lungs to other tissues. Heme groups are also present in myoglobin to store and diffuse oxygen in muscle cells.
The human body needs iron for oxygen transport. Oxygen (O 2 ) is required for the functioning and survival of nearly all cell types. Oxygen is transported from the lungs to the rest of the body bound to the heme group of hemoglobin in red blood cells. In muscle cells, iron binds oxygen to myoglobin , which regulates its release.
Iron is also potentially toxic. Its ability to donate and accept electrons means that it can catalyze the conversion of hydrogen peroxide into free radicals . Free radicals can cause damage to a wide variety of cellular structures, and ultimately kill the cell. [ 1 ]
Iron bound to proteins or cofactors such as heme is safe. Also, there are virtually no truly free iron ions in the cell, since they readily form complexes with organic molecules. However, some of the intracellular iron is bound to low-affinity complexes, and is termed labile iron or "free" iron. Iron in such complexes can cause damage as described above. [ 2 ]
To prevent that kind of damage, all life forms that use iron bind the iron atoms to proteins . This binding allows cells to benefit from iron while also limiting its ability to do harm. [ 1 ] [ 3 ] Typical intracellular labile iron concentrations in bacteria are 10–20 micromolar, [ 4 ] though they can be 10-fold higher in anaerobic environments, [ 5 ] where free radicals and reactive oxygen species are scarcer. In mammalian cells, intracellular labile iron concentrations are typically below 1 micromolar, less than 5 percent of total cellular iron. [ 2 ]
In response to a systemic bacterial infection, the immune system initiates a process known as " iron withholding ". If bacteria are to survive, they must obtain iron from their environment. Disease-causing bacteria do this in many ways, including releasing iron-binding molecules called siderophores and then reabsorbing them to recover iron, or scavenging iron from hemoglobin and transferrin . The harder the bacteria have to work to get iron, the greater the metabolic price they must pay. That means iron-deprived bacteria reproduce more slowly, so control of iron levels appears to be an important defense against many bacterial infections. Certain bacterial species have developed strategies to circumvent that defense: the bacteria that cause tuberculosis can reside within macrophages , which present an iron-rich environment, and Borrelia burgdorferi uses manganese in place of iron. People with increased amounts of iron, as for example in hemochromatosis, are more susceptible to some bacterial infections. [ 6 ]
Although this mechanism is an elegant response to short-term bacterial infection, it can cause problems when it goes on so long that the body is deprived of needed iron for red cell production. Inflammatory cytokines stimulate the liver to produce the iron metabolism regulator protein hepcidin , that reduces available iron. If hepcidin levels increase because of non-bacterial sources of inflammation, like viral infection, cancer, auto-immune diseases or other chronic diseases, then the anemia of chronic disease may result. In this case, iron withholding actually impairs health by preventing the manufacture of enough hemoglobin-containing red blood cells. [ 3 ]
Most well-nourished people in industrialized countries have 4 to 5 grams of iron in their bodies (~38 mg iron/kg body weight for women and ~50 mg iron/kg body for men). [ 7 ] Of this, about 2.5 g is contained in the hemoglobin needed to carry oxygen through the blood (around 0.5 mg of iron per mL of blood), [ 8 ] and most of the rest (approximately 2 grams in adult men, and somewhat less in women of childbearing age) is contained in ferritin complexes that are present in all cells, but most common in bone marrow, liver , and spleen . The liver stores of ferritin are the primary physiologic source of reserve iron in the body. The reserves of iron in industrialized countries tend to be lower in children and women of child-bearing age than in men and in the elderly. Women who must use their stores to compensate for iron lost through menstruation , pregnancy or lactation have lower non-hemoglobin body stores, which may consist of 500 mg , or even less.
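The whole-body bookkeeping above can be checked with simple arithmetic. The sketch below is illustrative only; the 5 L blood volume is an assumed typical value for an adult man (not from the text), while the per-mL iron content and store sizes come from the figures quoted above.

```python
# Cross-check of whole-body iron distribution for an adult man.
blood_volume_ml = 5_000            # assumed typical adult blood volume (~5 L)
iron_per_ml_blood_mg = 0.5         # ~0.5 mg of iron per mL of blood (in hemoglobin)

# Iron carried in circulating hemoglobin, in grams:
hemoglobin_iron_g = blood_volume_ml * iron_per_ml_blood_mg / 1000

ferritin_stores_g = 2.0            # reserve iron in ferritin (marrow, liver, spleen)
total_g = hemoglobin_iron_g + ferritin_stores_g

print(f"hemoglobin iron: {hemoglobin_iron_g} g; "
      f"plus ~{ferritin_stores_g} g stores gives {total_g} g, "
      f"consistent with the quoted 4-5 g total")
```

The ~2.5 g recovered for hemoglobin matches the figure in the text, and adding the ~2 g of ferritin stores lands within the quoted 4 to 5 g whole-body range, with the remainder accounted for by myoglobin, cytochromes, and transferrin-bound iron described below.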
Of the body's total iron content, about 400 mg is devoted to cellular proteins that use iron for important cellular processes like storing oxygen (myoglobin) or performing energy-producing redox reactions ( cytochromes ). A relatively small amount (3–4 mg) circulates through the plasma , bound to transferrin. [ 9 ] Because of its toxicity, free soluble iron is kept in low concentration in the body.
Iron deficiency first affects the storage of iron in the body, and depletion of these stores is thought to be relatively asymptomatic, although some vague and non-specific symptoms have been associated with it. Since iron is primarily required for hemoglobin, iron deficiency anemia is the primary clinical manifestation of iron deficiency. Iron-deficient people will suffer or die from organ damage well before their cells run out of the iron needed for intracellular processes like electron transport.
Macrophages of the reticuloendothelial system store iron as part of the process of breaking down and processing hemoglobin from engulfed red blood cells. Iron is also stored as a pigment called hemosiderin , which is an ill-defined deposit of protein and iron, created by macrophages where excess iron is present, either locally or systemically, e.g., among people with iron overload due to frequent blood cell destruction and the necessary transfusions their condition calls for. If systemic iron overload is corrected, over time the hemosiderin is slowly resorbed by the macrophages.
Human iron homeostasis is regulated at two different levels. Systemic iron levels are balanced by the controlled absorption of dietary iron by enterocytes , the cells that line the interior of the intestines , and the uncontrolled loss of iron from epithelial sloughing, sweat, injuries and blood loss. In addition, systemic iron is continuously recycled. Cellular iron levels are controlled differently by different cell types due to the expression of particular iron regulatory and transport proteins.
The absorption of dietary iron is a variable and dynamic process. The amount of iron absorbed compared to the amount ingested is typically low, but may range from 5% to as much as 35% depending on circumstances and type of iron. The efficiency with which iron is absorbed varies depending on the source. Generally, the best-absorbed forms of iron come from animal products. Absorption of dietary iron in iron salt form (as in most supplements) varies somewhat according to the body's need for iron, and is usually between 10% and 20% of iron intake. Absorption of iron from animal products, and some plant products, is in the form of heme iron, and is more efficient, allowing absorption of from 15% to 35% of intake. Heme iron in animals is from blood and heme-containing proteins in meat and mitochondria, whereas in plants, heme iron is present in mitochondria in all cells that use oxygen for respiration.
Like most mineral nutrients, the majority of the iron absorbed from digested food or supplements is absorbed in the duodenum by enterocytes of the duodenal lining. These cells have special molecules that allow them to move iron into the body. To be absorbed, dietary iron must either be part of a protein such as a heme protein or be in its ferrous Fe 2+ form. A ferric reductase enzyme on the enterocytes' brush border , duodenal cytochrome B ( Dcytb ), reduces ferric Fe 3+ to Fe 2+ . [ 10 ] A protein called divalent metal transporter 1 ( DMT1 ), which can transport several divalent metals across the plasma membrane, then transports iron across the enterocyte's cell membrane into the cell. If the iron is bound to heme, it is instead transported across the apical membrane by heme carrier protein 1 (HCP1). [ 11 ] Heme is then catabolized by microsomal heme oxygenase into biliverdin , releasing Fe 2+ . [ 12 ]
These intestinal lining cells can then either store the iron as ferritin , which is accomplished by Fe 2+ binding to apoferritin (in which case the iron will leave the body when the cell dies and is sloughed off into feces ), or release it into the body via the only known iron exporter in mammals, ferroportin . Hephaestin , a ferroxidase that can oxidize Fe 2+ to Fe 3+ and is found mainly in the small intestine, helps ferroportin transfer iron across the basolateral end of the intestinal cells. Upon release into the bloodstream, Fe 3+ binds transferrin and circulates to tissues. Ferroportin itself is post-translationally repressed by hepcidin , a 25-amino-acid peptide hormone. The body regulates iron levels by regulating each of these steps; for instance, enterocytes synthesize more Dcytb, DMT1 and ferroportin in response to iron deficiency anemia. [ 13 ] Iron absorption from the diet is enhanced in the presence of vitamin C and diminished by excess calcium, zinc, or manganese. [ 14 ]
The human body's rate of iron absorption appears to respond to a variety of interdependent factors, including total iron stores, the extent to which the bone marrow is producing new red blood cells, the concentration of hemoglobin in the blood, and the oxygen content of the blood. The body also absorbs less iron during times of inflammation , in order to deprive bacteria of iron. Recent discoveries demonstrate that hepcidin regulation of ferroportin is responsible for the syndrome of anemia of chronic disease.
Most of the iron in the body is hoarded and recycled by the reticuloendothelial system, which breaks down aged red blood cells. In contrast to iron uptake and recycling, there is no physiologic regulatory mechanism for excreting iron. People lose a small but steady amount by gastrointestinal blood loss, sweating and by shedding cells of the skin and the mucosal lining of the gastrointestinal tract . The total amount of loss for healthy people in the developed world amounts to an estimated average of 1 mg a day for men, and 1.5–2 mg a day for women with regular menstrual periods. [ 15 ] People with gastrointestinal parasitic infections, more commonly found in developing countries, often lose more. [ 1 ] Those who cannot regulate absorption well enough get disorders of iron overload. In these diseases, the toxicity of iron starts overwhelming the body's ability to bind and store it. [ 16 ]
Most cell types take up iron primarily through receptor-mediated endocytosis via transferrin receptor 1 (TFR1), transferrin receptor 2 (TFR2) and GAPDH . TFR1 has a 30-fold higher affinity for transferrin-bound iron than TFR2 and thus is the main player in this process. [ 17 ] [ 18 ] The higher order multifunctional glycolytic enzyme glyceraldehyde-3-phosphate dehydrogenase (GAPDH) also acts as a transferrin receptor. [ 19 ] [ 20 ] Transferrin-bound ferric iron is recognized by these transferrin receptors, triggering a conformational change that causes endocytosis. Iron then enters the cytoplasm from the endosome via importer DMT1 after being reduced to its ferrous state by a STEAP family reductase. [ 21 ]
Alternatively, iron can enter the cell directly via plasma membrane divalent cation importers such as DMT1 and ZIP14 (Zrt-Irt-like protein 14). [ 22 ] Again, iron enters the cytoplasm in the ferrous state after being reduced in the extracellular space by a reductase such as STEAP2, STEAP3 (in red blood cells), Dcytb (in enterocytes) and SDR2. [ 21 ]
Iron can also enter cells via CD44 in complexes bound to hyaluronic acid during epithelial–mesenchymal transition (EMT). In this process, epithelial cells transform into mesenchymal cells with detachment from the basement membrane , to which they are normally anchored, paving the way for the newly differentiated motile mesenchymal cells to begin migration away from the epithelial layer. [ 23 ] [ 24 ]
While EMT plays a crucial role in physiological processes like implantation , where it enables the embryo to invade the endometrium to facilitate placental attachment, its dysregulation can also fuel the malignant spread of tumors, empowering them to invade surrounding tissues and establish distant colonies ( metastasis ). [ 24 ]
Malignant cells often exhibit a heightened demand for iron, fueling their transition towards a more invasive mesenchymal state. This iron is necessary for the expression of mesenchymal genes, like those encoding transforming growth factor beta (TGF-β), crucial for EMT. Notably, iron’s unique ability to catalyze protein and DNA demethylation plays a vital role in this gene expression process. [ 23 ]
Conventional iron uptake pathways, such as those using the transferrin receptor 1 (TfR1), often prove insufficient to meet these elevated iron demands in cancer cells. As a result, various cytokines and growth factors trigger the upregulation of CD44, a surface molecule capable of internalizing iron bound to the hyaluronan complex. This alternative pathway, relying on CD44-mediated endocytosis, becomes the dominant iron uptake mechanism compared to the traditional TfR1-dependent route. [ 23 ] [ 24 ]
In the cytoplasm, ferrous iron is found in a soluble, chelatable state which constitutes the labile iron pool (~0.001 mM). [ 25 ] In this pool, iron is thought to be bound to low-mass compounds such as peptides, carboxylates and phosphates, although some might be in a free, hydrated form ( aqua ions ). [ 25 ] Alternatively, iron ions might be bound to specialized proteins known as metallochaperones . [ 26 ] Specifically, poly-r(C)-binding proteins PCBP1 and PCBP2 appear to mediate transfer of free iron to ferritin (for storage) and non-heme iron enzymes (for use in catalysis). [ 22 ] [ 27 ] The labile iron pool is potentially toxic due to iron's ability to generate reactive oxygen species. Iron from this pool can be taken up by mitochondria via mitoferrin to synthesize Fe-S clusters and heme groups. [ 21 ]
Iron can be stored in ferritin as ferric iron due to the ferroxidase activity of the ferritin heavy chain. [ 28 ] Dysfunctional ferritin may accumulate as hemosiderin , which can be problematic in cases of iron overload. [ 29 ] The ferritin storage iron pool is much larger than the labile iron pool, ranging in concentration from 0.7 mM to 3.6 mM. [ 25 ]
Iron export occurs in a variety of cell types, including neurons , red blood cells, hepatocytes, macrophages and enterocytes. The latter two are especially important since systemic iron levels depend upon them. There is only one known iron exporter, ferroportin . [ 30 ] It transports ferrous iron out of the cell, generally aided by ceruloplasmin and/or hephaestin (mostly in enterocytes), which oxidize iron to its ferric state so it can bind ferritin in the extracellular medium. [ 21 ] Hepcidin causes the internalization of ferroportin, decreasing iron export. In addition, hepcidin appears to downregulate both TFR1 and DMT1 through an unknown mechanism. [ 31 ] Another player assisting ferroportin in effecting cellular iron export is GAPDH. [ 32 ] A specific post-translationally modified isoform of GAPDH is recruited to the surface of iron-loaded cells, where it recruits apo-transferrin in close proximity to ferroportin so as to rapidly chelate the extruded iron. [ 33 ]
The expression of hepcidin, which only occurs in certain cell types such as hepatocytes , is tightly controlled at the transcriptional level and it represents the link between cellular and systemic iron homeostasis due to hepcidin's role as "gatekeeper" of iron release from enterocytes into the rest of the body. [ 21 ] Erythroblasts produce erythroferrone , a hormone which inhibits hepcidin and so increases the availability of iron needed for hemoglobin synthesis. [ 35 ]
Although some control exists at the transcriptional level, the regulation of cellular iron levels is ultimately controlled at the translational level by iron-responsive element-binding proteins IRP1 and especially IRP2. [ 36 ] When iron levels are low, these proteins are able to bind to iron-responsive elements (IREs). IREs are stem loop structures in the untranslated regions (UTRs) of mRNA. [ 21 ]
Both ferritin and ferroportin contain an IRE in their 5' UTRs, so that under iron deficiency their translation is repressed by IRP2, preventing the unnecessary synthesis of storage protein and the detrimental export of iron. In contrast, TFR1 and some DMT1 variants contain 3' UTR IREs, which bind IRP2 under iron deficiency, stabilizing the mRNA, which guarantees the synthesis of iron importers. [ 21 ]
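The regulatory logic described above can be summarized qualitatively: under iron deficiency, IRP binding to a 5' UTR IRE represses translation, while binding to a 3' UTR IRE stabilizes the mRNA. This minimal Python sketch encodes just that mapping, with gene names and IRE locations taken from the text:

```python
# Qualitative sketch of the IRE/IRP logic described above. Under low iron,
# IRPs bind 5' UTR IREs (repressing translation of storage/export proteins)
# and 3' UTR IREs (stabilizing the mRNAs of iron importers).

IRE_LOCATION = {
    "ferritin": "5'UTR",      # storage protein: repressed when iron is low
    "ferroportin": "5'UTR",   # exporter: repressed when iron is low
    "TFR1": "3'UTR",          # importer: mRNA stabilized when iron is low
    "DMT1": "3'UTR",          # importer (some variants): mRNA stabilized
}

def effect_under_low_iron(gene):
    """Return the qualitative IRP effect on a gene's mRNA at low iron."""
    return ("translation repressed" if IRE_LOCATION[gene] == "5'UTR"
            else "mRNA stabilized")
```

The net effect in both branches is the same: at low iron, the cell stops making storage and export machinery and keeps making importers.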
Functional or actual iron deficiency can result from a variety of causes. These causes can be grouped into several categories.
The body is able to substantially reduce the amount of iron it absorbs across the mucosa, but it does not seem able to shut down the iron transport process entirely. Also, in situations where excess iron damages the intestinal lining itself (for instance, when children eat a large quantity of iron tablets produced for adult consumption), even more iron can enter the bloodstream and cause a potentially deadly syndrome of iron overload. Large amounts of free iron in the circulation will cause damage to critical cells in the liver, the heart and other metabolically active organs.
Iron toxicity results when the amount of circulating iron exceeds the amount of transferrin available to bind it, but the body is able to vigorously regulate its iron uptake. Thus, iron toxicity from ingestion is usually the result of extraordinary circumstances like iron tablet over-consumption [ 42 ] rather than variations in diet . Acute toxicity from iron ingestion causes severe mucosal damage in the gastrointestinal tract, among other problems.
Excess iron has been linked to higher rates of disease and mortality. For example, breast cancer patients with low ferroportin expression (leading to higher concentrations of intracellular iron) survive for a shorter period of time on average, while high ferroportin expression predicts 90% 10-year survival in breast cancer patients. [ 43 ] Similarly, genetic variations in iron transporter genes known to increase serum iron levels also reduce lifespan and the average number of years spent in good health. [ 44 ] It has been suggested that mutations that increase iron absorption, such as the ones responsible for hemochromatosis (see below), were selected for during Neolithic times as they provided a selective advantage against iron-deficiency anemia. [ 45 ] The increase in systemic iron levels becomes pathological in old age, which supports the notion that antagonistic pleiotropy or "hyperfunction" drives human aging. [ 44 ]
Chronic iron toxicity is usually the result of more chronic iron overload syndromes associated with genetic diseases, repeated transfusions or other causes. In such cases the iron stores of an adult may reach 50 grams (10 times normal total body iron) or more. The most common diseases of iron overload are hereditary hemochromatosis (HH), caused by mutations in the HFE gene, and the more severe disease juvenile hemochromatosis (JH), caused by mutations in either hemojuvelin ( HJV ) [ 46 ] or hepcidin ( HAMP ). The exact mechanisms of most of the various forms of adult hemochromatosis, which make up most of the genetic iron overload disorders, remain unsolved. Thus, while researchers have been able to identify genetic mutations causing several adult variants of hemochromatosis, they now must turn their attention to the normal function of these mutated genes.
The human microbiome is the aggregate of all microbiota that reside on or within human tissues and biofluids along with the corresponding anatomical sites in which they reside, [ 1 ] [ 2 ] including the gastrointestinal tract , skin , mammary glands , seminal fluid , uterus , ovarian follicles , lung , saliva , oral mucosa , conjunctiva , and the biliary tract . Types of human microbiota include bacteria , archaea , fungi , protists , and viruses . Though micro-animals can also live on the human body, they are typically excluded from this definition. In the context of genomics , the term human microbiome is sometimes used to refer to the collective genomes of resident microorganisms; [ 3 ] however, the term human metagenome has the same meaning. [ 1 ]
The human body hosts many microorganisms, with approximately the same order of magnitude of non-human cells as human cells. [ 4 ] Some microorganisms that humans host are commensal , meaning they co-exist without harming humans; others have a mutualistic relationship with their human hosts. [ 3 ] : 700 [ 5 ] Conversely, some non-pathogenic microorganisms can harm human hosts via the metabolites they produce, like trimethylamine , which the human body converts to trimethylamine N-oxide via FMO3 -mediated oxidation. [ 6 ] [ 7 ] Certain microorganisms perform tasks that are known to be useful to the human host, but the role of most of them is not well understood. Those that are expected to be present, and that under normal circumstances do not cause disease, are sometimes deemed normal flora or normal microbiota . [ 3 ]
During early life, the establishment of a diverse and balanced human microbiota plays a critical role in shaping an individual's long-term health. [ 8 ] Studies have shown that the composition of the gut microbiota during infancy is influenced by various factors, including mode of delivery, breastfeeding, and exposure to environmental factors. [ 9 ] There are several beneficial species of bacteria and potential probiotics present in breast milk . [ 10 ] Research has highlighted the beneficial effects of a healthy microbiota in early life, such as the promotion of immune system development, regulation of metabolism, and protection against pathogenic microorganisms. [ 11 ] Understanding the complex interplay between the human microbiota and early life health is crucial for developing interventions and strategies to support optimal microbiota development and improve overall health outcomes in individuals. [ 12 ]
The Human Microbiome Project (HMP) took on the project of sequencing the genome of the human microbiota, focusing particularly on the microbiota that normally inhabit the skin, mouth, nose, digestive tract, and vagina. [ 3 ] It reached a milestone in 2012 when it published its initial results. [ 13 ]
Though widely known as flora or microflora , this is a misnomer in technical terms, since the word root flora pertains to plants, and biota refers to the total collection of organisms in a particular ecosystem. Recently, the more appropriate term microbiota has come into use, though it has not eclipsed the entrenched use and recognition of flora with regard to bacteria and other microorganisms. Both terms are used in different literature. [ 5 ]
The number of bacterial cells in the human body is estimated to be around 38 trillion, while the estimate for human cells is around 30 trillion. [ 14 ] [ 15 ] [ 16 ] [ 17 ] The number of bacterial genes is estimated to be 2 million, 100 times the number of approximately 20,000 human genes . [ 18 ] [ 19 ] [ 20 ]
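The ratios implied by the estimates above are easy to check; this short Python snippet simply restates the quoted figures as back-of-the-envelope arithmetic:

```python
# Back-of-the-envelope ratios from the estimates quoted above:
# ~38 trillion bacterial cells vs ~30 trillion human cells, and
# ~2 million bacterial genes vs ~20,000 human genes.
bacterial_cells = 38e12
human_cells = 30e12
bacterial_genes = 2_000_000
human_genes = 20_000

cell_ratio = bacterial_cells / human_cells
gene_ratio = bacterial_genes / human_genes
print(f"bacterial:human cells ~ {cell_ratio:.2f}:1")  # ~1.27:1
print(f"bacterial:human genes ~ {gene_ratio:.0f}:1")  # 100:1
```

In other words, bacterial cells only slightly outnumber human cells, but the microbiome's collective gene catalogue dwarfs the human one by two orders of magnitude.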
The problem of elucidating the human microbiome is essentially identifying the members of a microbial community, which includes bacteria, eukaryotes, and viruses. [ 21 ] This is done primarily using deoxyribonucleic acid (DNA)-based studies, though ribonucleic acid (RNA), protein and metabolite based studies are also performed. [ 21 ] [ 22 ] DNA-based microbiome studies typically can be categorized as either targeted amplicon studies or, more recently, shotgun metagenomic studies. The former focuses on specific known marker genes and is primarily informative taxonomically, while the latter is an entire metagenomic approach which can also be used to study the functional potential of the community. [ 21 ] One of the challenges that is present in human microbiome studies, but not in other metagenomic studies, is to avoid including the host DNA in the study. [ 23 ]
Aside from simply elucidating the composition of the human microbiome, one of the major questions involving the human microbiome is whether there is a "core", that is, whether there is a subset of the community that is shared among most humans. [ 24 ] [ 25 ] If there is a core, then it would be possible to associate certain community compositions with disease states, which is one of the goals of the HMP. It is known that the human microbiome (such as the gut microbiota) is highly variable both within a single subject and among different individuals, a phenomenon which is also observed in mice. [ 5 ]
On 13 June 2012, a major milestone of the HMP was announced by the National Institutes of Health (NIH) director Francis Collins . [ 13 ] The announcement was accompanied by a series of coordinated articles published in Nature [ 26 ] [ 27 ] and several journals in the Public Library of Science (PLoS) on the same day. By mapping the normal microbial make-up of healthy humans using genome sequencing techniques, the researchers of the HMP created a reference database and established the boundaries of normal microbial variation in humans. From 242 healthy U.S. volunteers, more than 5,000 samples were collected from tissues from 15 (men) to 18 (women) body sites such as mouth, nose, skin, lower intestine (stool), and vagina. All the DNA, human and microbial, was analyzed with DNA sequencing machines. The microbial genome data were extracted by identifying the bacteria-specific ribosomal RNA, 16S rRNA . The researchers calculated that more than 10,000 microbial species occupy the human ecosystem, and they have identified 81–99% of the genera . [ 28 ]
Statistical analysis is essential to validate the obtained results ( ANOVA can be used to assess the size of the differences between groups); when paired with graphical tools, the outcome is easily visualized and understood. [ 29 ]
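As a concrete illustration of the ANOVA mentioned above, the one-way F statistic can be computed by hand; the abundance values below are invented purely for illustration (real analyses would use a statistics package such as SciPy's `f_oneway`):

```python
# One-way ANOVA F statistic computed from scratch on toy data, illustrating
# the group-comparison step mentioned above. The data values are invented.

def one_way_anova_F(groups):
    """F = (between-group mean square) / (within-group mean square)."""
    k = len(groups)                                  # number of groups
    n = sum(len(g) for g in groups)                  # total observations
    grand = sum(sum(g) for g in groups) / n          # grand mean
    means = [sum(g) / len(g) for g in groups]        # per-group means
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Three hypothetical sample groups (e.g., relative abundances of one taxon)
F = one_way_anova_F([[1, 2, 3], [2, 3, 4], [6, 7, 8]])
print(round(F, 2))  # 21.0
```

A large F (relative to the F distribution with k-1 and n-k degrees of freedom) indicates that between-group differences exceed within-group variation.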
Once a metagenome is assembled, it is possible to infer the functional potential of the microbiome. The computational challenges for this type of analysis are greater than for single genomes, because metagenome assemblies are usually of poorer quality, and many recovered genes are incomplete or fragmented. After the gene identification step, the data can be used to carry out a functional annotation by means of multiple alignment of the target genes against ortholog databases. [ 30 ]
Targeted amplicon sequencing exploits primers to amplify a specific genetic region and makes it possible to determine microbial phylogenies . The targeted region contains a highly variable segment, which can confer detailed identification; it is delimited by conserved regions, which function as binding sites for the primers used in PCR . The main gene used to characterize bacteria and archaea is the 16S rRNA gene, while fungal identification is based on the Internal Transcribed Spacer (ITS). The technique is fast and relatively inexpensive, and yields a low-resolution classification of a microbial sample; it is well suited to samples that may be contaminated by host DNA. Primer affinity varies among DNA sequences, which may introduce biases during the amplification reaction; in particular, low-abundance samples are susceptible to over-amplification errors, since contaminating microorganisms become over-represented as the number of PCR cycles increases. Optimizing primer selection can help to reduce such errors, although it requires complete knowledge of the microorganisms present in the sample and their relative abundances. [ 31 ]
Marker gene analysis can be influenced by the choice of primers; in this kind of analysis it is desirable to use a well-validated protocol (such as the one used in the Earth Microbiome Project). The first step in a marker gene amplicon analysis is to remove sequencing errors; many sequencing platforms are very reliable, but most of the apparent sequence diversity is still due to errors introduced during the sequencing process. One approach to reducing this phenomenon is to cluster sequences into Operational taxonomic units (OTUs): this process consolidates similar sequences (a 97% similarity threshold is usually adopted) into a single feature that can be used in further analysis steps; this method, however, discards SNPs, because they get clustered into a single OTU. Another approach is Oligotyping , which includes position-specific information from 16S rRNA sequencing to detect small nucleotide variations and to discriminate between closely related distinct taxa. These methods output a table of DNA sequences and counts of each sequence per sample, rather than OTUs. [ 31 ]
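The OTU-clustering step described above can be sketched as a greedy, seed-based procedure. This simplified Python version assumes pre-aligned, equal-length reads and omits the abundance sorting and alignment heuristics that production tools use:

```python
# Toy greedy OTU clustering at a 97% identity threshold, as described above.
# Assumes equal-length, pre-aligned reads; real pipelines use alignment-based
# identity and typically sort reads by abundance before choosing seeds.

def identity(a, b):
    """Fraction of identical positions between two equal-length sequences."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def greedy_otus(reads, threshold=0.97):
    """Assign each read to the first OTU seed it matches at >= threshold."""
    seeds, assignments = [], []
    for read in reads:
        for i, seed in enumerate(seeds):
            if identity(read, seed) >= threshold:
                assignments.append(i)
                break
        else:                          # no seed is close enough: new OTU
            seeds.append(read)
            assignments.append(len(seeds) - 1)
    return seeds, assignments

reads = [
    "ACGTACGTACGTACGTACGTACGTACGTACGTACGTACGT",  # seed of OTU 0
    "ACGTACGTACGTACGTACGTACGTACGTACGTACGTACGA",  # 1 mismatch in 40 -> OTU 0
    "TTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTT",  # clearly distinct -> OTU 1
]
seeds, assignments = greedy_otus(reads)
print(assignments)  # [0, 0, 1]
```

The second read differs from the first at a single position (39/40 = 97.5% identity), so it collapses into the same OTU; note that this is exactly how the method discards SNP-level variation.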
Another important step in the analysis is to assign a taxonomic name to the microbial sequences in the data. This can be done using machine learning approaches, which can reach a genus-level accuracy of about 80%. Other popular analysis packages support taxonomic classification using exact matches to reference databases, which should provide greater specificity but poorer sensitivity. Unclassified microorganisms should be checked further for organelle sequences. [ 31 ]
Many methods that exploit phylogenetic inference use the 16S rRNA gene for Archaea and Bacteria and the 18S rRNA gene for Eukaryotes. Phylogenetic comparative methods (PCS) are based on the comparison of multiple traits among microorganisms; the principle is that the more closely related two microorganisms are, the more traits they share. Usually PCS are coupled with phylogenetic generalized least squares (PGLS) or other statistical analyses to obtain more significant results. Ancestral state reconstruction is used in microbiome studies to impute trait values for taxa whose traits are unknown. This is commonly performed with PICRUSt , which relies on available databases. Phylogenetic variables are chosen by researchers according to the type of study: by selecting variables that carry significant biological information, it is possible to reduce the dimensionality of the data to analyze. [ 32 ]
Phylogeny-aware distance calculations are usually performed with UniFrac or similar tools, such as Sørensen's index or Rao's D, to quantify the differences between communities. All these methods are negatively affected by horizontal gene transfer (HGT), since it can generate errors and lead to the correlation of distant species. There are different ways to reduce the negative impact of HGT: using multiple genes, or using computational tools to assess the probability of putative HGT events. [ 32 ]
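Of the indices mentioned above, Sørensen's is the simplest to state: twice the number of shared taxa divided by the summed taxon counts of the two communities. A short Python sketch on hypothetical genus sets (note that, unlike UniFrac, this presence/absence form ignores phylogeny entirely):

```python
# Sorensen (Dice) similarity on presence/absence of taxa, as mentioned above:
# S = 2|A ∩ B| / (|A| + |B|). The genus lists below are hypothetical examples.

def sorensen(a, b):
    """Sorensen-Dice similarity between two communities given as taxon sets."""
    a, b = set(a), set(b)
    return 2 * len(a & b) / (len(a) + len(b))

community_a = {"Bacteroides", "Faecalibacterium", "Escherichia"}
community_b = {"Bacteroides", "Faecalibacterium", "Lactobacillus"}
print(round(sorensen(community_a, community_b), 3))  # 0.667
```

UniFrac refines this idea by weighting shared membership by shared branch length on a phylogenetic tree, which is why it is described as phylogeny-aware.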
Microbial communities develop in a very complex dynamic, which can be viewed and analyzed as an ecosystem. The ecological interactions between microbes govern its change, equilibrium and stability, and can be represented by a population dynamic model. [ 33 ] The ongoing study of the ecological features of the microbiome is growing rapidly and helps to reveal the fundamental properties of the microbiome. Understanding the underlying rules of microbial communities could help with treating diseases related to unstable microbial communities.
A very basic question is whether different humans, who host different microbial communities, share the same underlying microbial dynamics. [ 34 ] Increasing evidence indicates that the dynamics are indeed universal. [ 35 ] Answering this question is a basic step toward developing treatment strategies based on the complex dynamics of human microbial communities.
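A common way to formalize such population dynamics is the generalized Lotka-Volterra model, dx_i/dt = x_i (r_i + sum_j A_ij x_j), where r holds intrinsic growth rates and A the pairwise interaction coefficients. The source does not specify a particular model, so the two-species Euler-integration sketch below is purely illustrative, with made-up parameters:

```python
# Illustrative generalized Lotka-Volterra sketch for a two-species community:
# dx_i/dt = x_i * (r_i + sum_j A[i][j] * x_j).
# Growth rates r and interaction matrix A are invented, not fitted to data.

def glv_step(x, r, A, dt=0.01):
    """One forward-Euler step of the generalized Lotka-Volterra equations."""
    return [xi + dt * xi * (ri + sum(a * xj for a, xj in zip(Ai, x)))
            for xi, ri, Ai in zip(x, r, A)]

r = [1.0, 0.8]            # intrinsic growth rates
A = [[-1.0, -0.5],        # self-limitation (diagonal) and competition
     [-0.4, -1.0]]
x = [0.1, 0.1]            # initial abundances
for _ in range(10_000):   # integrate to t = 100; settles near (0.75, 0.5)
    x = glv_step(x, r, A)
```

With these competition coefficients the interior equilibrium (solving r + Ax = 0 gives x = (0.75, 0.5)) is stable, so both species coexist; changing the signs or magnitudes in A produces exclusion or, for mutualistic signs, runaway growth, which is what makes such models useful for reasoning about community stability.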
There are further properties that should be taken into account when developing intervention strategies for controlling human microbial dynamics. [ 36 ] Controlling microbial communities could help in treating serious and harmful diseases.
Populations of microbes (such as bacteria and yeasts ) inhabit the skin and mucosal surfaces in various parts of the body. Their role forms part of normal, healthy human physiology; however, if microbe numbers grow beyond their typical ranges (often due to a compromised immune system), or if microbes colonize (such as through poor hygiene or injury) areas of the body that are normally sterile or uncolonized (such as the blood, the lower respiratory tract, or the abdominal cavity), disease can result (causing, respectively, bacteremia/sepsis, pneumonia, and peritonitis). [ 37 ]
The Human Microbiome Project found that individuals host thousands of bacterial types, different body sites having their own distinctive communities. Skin and vaginal sites showed smaller diversity than the mouth and gut, which showed the greatest richness. The bacterial makeup for a given site on a body varies from person to person, not only in type, but also in abundance. Bacteria of the same species found throughout the mouth are of multiple subtypes, preferring to inhabit distinctly different locations in the mouth. Even the enterotypes in the human gut, previously thought to be well understood, are from a broad spectrum of communities with blurred taxon boundaries. [ 38 ] [ 39 ]
It is estimated that 500 to 1,000 species of bacteria live in the human gut but belong to just a few phyla: Bacillota and Bacteroidota dominate but there are also Pseudomonadota , Verrucomicrobiota , Actinobacteriota , Fusobacteriota , and " Cyanobacteria ". [ 40 ]
A number of types of bacteria, such as Actinomyces viscosus and A. naeslundii , live in the mouth, where they are part of a sticky substance called plaque . If this is not removed by brushing, it hardens into calculus (also called tartar). The same bacteria also secrete acids that dissolve tooth enamel , causing tooth decay . [ citation needed ]
The vaginal microflora consist mostly of various lactobacillus species. It was long thought that the most common of these species was Lactobacillus acidophilus , but it has later been shown that L. iners is in fact most common, followed by L. crispatus . Other lactobacilli found in the vagina are L. jensenii , L. delbrueckii and L. gasseri . Disturbance of the vaginal flora can lead to infections such as bacterial vaginosis and candidiasis . [ 41 ]
Archaea are present in the human gut, but, in contrast to the enormous variety of bacteria in this organ, the numbers of archaeal species are much more limited. [ 42 ] The dominant group are the methanogens , particularly Methanobrevibacter smithii and Methanosphaera stadtmanae . [ 43 ] However, colonization by methanogens is variable, and only about 50% of humans have easily detectable populations of these organisms. [ 44 ]
As of 2007, no clear examples of archaeal pathogens were known, [ 45 ] [ 46 ] although a relationship has been proposed between the presence of some methanogens and human periodontal disease . [ 47 ] Methane-dominant small intestinal bacterial overgrowth (SIBO) is also predominantly caused by methanogens, and Methanobrevibacter smithii in particular. [ 48 ]
Fungi, in particular yeasts , are present in the human gut. [ 49 ] [ 50 ] [ 51 ] [ 52 ] The best-studied of these are Candida species due to their ability to become pathogenic in immunocompromised and even in healthy hosts. [ 50 ] [ 51 ] [ 52 ] Yeasts are also present on the skin, [ 49 ] such as Malassezia species, where they consume oils secreted from the sebaceous glands . [ 53 ] [ 54 ]
Viruses, especially bacterial viruses ( bacteriophages ), colonize various body sites. These colonized sites include the skin, [ 55 ] gut, [ 56 ] lungs, [ 57 ] and oral cavity. [ 58 ] Virus communities have been associated with some diseases, and do not simply reflect the bacterial communities. [ 59 ] [ 60 ] [ 61 ]
In January 2024, biologists reported the discovery of " obelisks ", a new class of viroid-like elements , and "oblins", their related group of proteins, in the human microbiome. [ 62 ] [ 63 ]
A study of 20 skin sites on each of ten healthy humans found 205 identified genera in 19 bacterial phyla, with most sequences assigned to four phyla: Actinomycetota (51.8%), Bacillota (24.4%), Pseudomonadota (16.5%), and Bacteroidota (6.3%). [ 64 ] A large number of fungal genera are present on healthy human skin, with some variability by region of the body; however, during pathological conditions, certain genera tend to dominate in the affected region. [ 49 ] For example, Malassezia is dominant in atopic dermatitis and Acremonium is dominant on dandruff-affected scalps. [ 49 ]
The skin acts as a barrier to deter the invasion of pathogenic microbes. The human skin contains microbes that reside either in or on the skin and can be residential or transient. Resident microorganism types vary in relation to skin type on the human body. A majority of microbes reside on superficial cells of the skin or prefer to associate with glands. These glands, such as oil or sweat glands, provide the microbes with water, amino acids, and fatty acids. In addition, resident bacteria that associate with oil glands are often Gram-positive and can be pathogenic. [ 3 ]
A small number of bacteria and fungi are normally present in the conjunctiva . [ 49 ] [ 65 ] Bacterial classes present include Gram-positive cocci (e.g., Staphylococcus and Streptococcus ) and Gram-negative rods and cocci (e.g., Haemophilus and Neisseria ). [ 65 ] Fungal genera include Candida , Aspergillus , and Penicillium . [ 49 ] The lachrymal glands continuously secrete, keeping the conjunctiva moist, while intermittent blinking lubricates the conjunctiva and washes away foreign material. Tears contain bactericides such as lysozyme , so that microorganisms have difficulty surviving and settling on the epithelial surfaces.
In humans, the composition of the gastrointestinal microbiome is established during birth. [ 70 ] Whether birth occurs by Cesarean section or vaginal delivery also influences the gut's microbial composition. Babies born through the vaginal canal have non-pathogenic, beneficial gut microbiota similar to those found in the mother. [ 71 ] However, the gut microbiota of babies delivered by C-section harbors more pathogenic bacteria such as Escherichia coli and Staphylococcus , and it takes longer to develop non-pathogenic, beneficial gut microbiota. [ 72 ]
The relationship between some gut microbiota and humans is not merely commensal (a non-harmful coexistence), but rather a mutualistic relationship. [ 3 ] Some human gut microorganisms benefit the host by fermenting dietary fiber into short-chain fatty acids (SCFAs), such as acetic acid and butyric acid , which are then absorbed by the host. [ 5 ] [ 73 ] Intestinal bacteria also play a role in synthesizing vitamin B and vitamin K as well as metabolizing bile acids , sterols , and xenobiotics . [ 3 ] [ 73 ] The SCFAs and other compounds that gut flora produce act systemically like hormones , and the gut flora itself appears to function like an endocrine organ ; [ 73 ] dysregulation of the gut flora has been correlated with a host of inflammatory and autoimmune conditions. [ 5 ] [ 74 ]
The composition of human gut microbiota changes over time, when the diet changes, and as overall health changes. [ 5 ] [ 74 ] A systematic review of 15 human randomized controlled trials from July 2016 found that certain commercially available strains of probiotic bacteria from the Bifidobacterium and Lactobacillus genera ( B. longum , B. breve , B. infantis , L. helveticus , L. rhamnosus , L. plantarum , and L. casei ), when taken by mouth in daily doses of 10 9 –10 10 colony forming units (CFU) for 1–2 months, possess treatment efficacy (i.e., improves behavioral outcomes) in certain central nervous system disorders – including anxiety , depression , autism spectrum disorder , and obsessive–compulsive disorder – and improves certain aspects of memory . [ 75 ]
The genitourinary system appears to have a microbiota, [ 76 ] [ 77 ] which is an unexpected finding in light of the long-standing use of standard clinical microbiological culture methods to detect bacteria in urine when people show signs of a urinary tract infection ; it is common for these tests to show no bacteria present. [ 78 ] It appears that common culture methods do not detect many kinds of bacteria and other microorganisms that are normally present. [ 78 ] As of 2017, sequencing methods were used to identify these microorganisms to determine if there are differences in microbiota between people with urinary tract problems and those who are healthy. [ 76 ] [ 77 ] [ 79 ] To properly assess the microbiome of the bladder as opposed to the genitourinary system, the urine specimen should be collected directly from the bladder, which is often done with a catheter . [ 80 ]
Vaginal microbiota refers to those species and genera that colonize the vagina. These organisms play an important role in protecting against infections and maintaining vaginal health. [ 81 ] The most abundant vaginal microorganisms found in premenopausal women are from the genus Lactobacillus , which suppress pathogens by producing hydrogen peroxide and lactic acid. [ 51 ] [ 81 ] [ 82 ] Bacterial species composition and ratios vary depending on the stage of the menstrual cycle . [ 83 ] [ 84 ] [ needs update ] Ethnicity also influences vaginal flora: the occurrence of hydrogen peroxide-producing lactobacilli is lower in African American women, and vaginal pH is higher. [ 85 ] Other influential factors such as sexual intercourse and antibiotics have been linked to the loss of lactobacilli. [ 82 ] Moreover, studies have found that sexual intercourse with a condom does appear to change lactobacilli levels, and does increase the level of Escherichia coli within the vaginal flora. [ 82 ] Changes in the normal, healthy vaginal microbiota are an indication of infections, [ 86 ] such as candidiasis or bacterial vaginosis . [ 82 ] Candida albicans inhibits the growth of Lactobacillus species, while Lactobacillus species which produce hydrogen peroxide inhibit the growth and virulence of Candida albicans in both the vagina and the gut. [ 49 ] [ 51 ] [ 52 ]
Fungal genera that have been detected in the vagina include Candida , Pichia , Eurotium , Alternaria , Rhodotorula , and Cladosporium , among others. [ 49 ]
Until recently the placenta was considered to be a sterile organ, but commensal, nonpathogenic bacterial species and genera have been identified that reside in the placental tissue. [ 87 ] [ 88 ] [ 89 ] However, the existence of a placental microbiome is controversial and has been criticized in several studies. The so-called "placental microbiome" is likely derived from contamination of reagents, because low-biomass samples are easily contaminated. [ 90 ] [ 91 ] [ 92 ]
Until recently, the upper reproductive tract of women was considered to be a sterile environment. A variety of microorganisms inhabit the uterus of healthy, asymptomatic women of reproductive age. The microbiome of the uterus differs significantly from that of the vagina and gastrointestinal tract. [ 93 ]
The environment present in the human mouth allows the growth of characteristic microorganisms found there. It provides a source of water and nutrients, as well as a moderate temperature. [ 3 ] Resident microbes of the mouth adhere to the teeth and gums to resist mechanical flushing from the mouth to stomach where acid-sensitive microbes are destroyed by hydrochloric acid . [ 3 ] [ 51 ]
Anaerobic bacteria in the oral cavity include: Actinomyces , Arachnia , Bacteroides , Bifidobacterium , Eubacterium , Fusobacterium , Lactobacillus , Leptotrichia , Peptococcus , Peptostreptococcus , Propionibacterium , Selenomonas , Treponema , and Veillonella . [ 94 ] [ needs update ] Genera of fungi that are frequently found in the mouth include Candida , Cladosporium , Aspergillus , Fusarium , Glomus , Alternaria , Penicillium , and Cryptococcus , among others. [ 49 ]
Bacteria accumulate on both the hard and soft oral tissues in biofilms , allowing them to adhere and thrive in the oral environment while protected from environmental factors and antimicrobial agents. [ 95 ] Saliva plays a key homeostatic role in the biofilm, allowing recolonization of bacteria for biofilm formation and controlling growth by detaching biofilm buildup. [ 96 ] It also provides a source of nutrients and temperature regulation. The location of the biofilm determines the type of nutrients it is exposed to. [ 97 ]
Oral bacteria have evolved mechanisms to sense their environment and evade or modify the host. However, a highly efficient innate host defense system constantly monitors the bacterial colonization and prevents bacterial invasion of local tissues. A dynamic equilibrium exists between dental plaque bacteria and the innate host defense system. [ 98 ]
This dynamic between the host oral cavity and oral microbes plays a key role in health and disease, as it provides entry into the body. [ 99 ] A healthy equilibrium presents a symbiotic relationship in which oral microbes limit the growth and adherence of pathogens while the host provides an environment for them to flourish. [ 99 ] [ 95 ] Ecological changes such as a change of immune status, a shift of resident microbes, or altered nutrient availability can shift the relationship from mutual to parasitic, leaving the host prone to oral and systemic disease. [ 95 ] Systemic diseases such as diabetes and cardiovascular disease have been correlated with poor oral health. [ 99 ] Of particular interest is the role of oral microorganisms in the two major dental diseases: dental caries and periodontal disease . [ 98 ] Pathogen colonization at the periodontium causes an excessive immune response, resulting in a periodontal pocket, a deepened space between the tooth and gingiva. [ 95 ] This acts as a protected, blood-rich reservoir with nutrients for anaerobic pathogens. [ 95 ] Systemic disease at various sites of the body can result from oral microbes entering the blood through periodontal pockets and oral membranes. [ 99 ]
Persistent, proper oral hygiene is the primary method for preventing oral and systemic disease. [ 99 ] It reduces the density of the biofilm and the overgrowth of potentially pathogenic bacteria that result in disease. [ 97 ] However, proper oral hygiene may not be enough, as the oral microbiome, genetics, and changes to the immune response play a factor in developing chronic infections. [ 97 ] Use of antibiotics can treat an already spreading infection but is ineffective against bacteria within biofilms. [ 97 ]
The healthy nasal microbiome is dominated by Corynebacterium and Staphylococcus species. The mucosal microbiome plays a critical role in modulating viral infection. [ 100 ]
Much like the oral cavity, the upper and lower respiratory tracts possess mechanical deterrents to remove microbes. Goblet cells produce mucus which traps microbes and moves them out of the respiratory system via continuously moving ciliated epithelial cells. [ 3 ] In addition, a bactericidal effect is generated by nasal mucus, which contains the enzyme lysozyme. [ 3 ] The upper and lower respiratory tracts appear to have their own sets of microbiota. [ 101 ] Pulmonary bacterial microbiota belong to 9 major bacterial genera: Prevotella , Sphingomonas , Pseudomonas , Acinetobacter , Fusobacterium , Megasphaera , Veillonella , Staphylococcus , and Streptococcus . Some of the bacteria considered "normal biota" in the respiratory tract can cause serious disease, especially in immunocompromised individuals; these include Streptococcus pyogenes , Haemophilus influenzae , Streptococcus pneumoniae , Neisseria meningitidis , and Staphylococcus aureus . [ citation needed ] Fungal genera that compose the pulmonary mycobiome include Candida , Malassezia , Neosartorya , Saccharomyces , and Aspergillus , among others. [ 49 ]
Unusual distributions of bacterial and fungal genera in the respiratory tract are observed in people with cystic fibrosis . [ 49 ] [ 102 ] Their bacterial flora often contains antibiotic-resistant and slow-growing bacteria, and the frequency of these pathogens changes in relation to age. [ 102 ]
Traditionally the biliary tract has been considered to be normally sterile, and the presence of microorganisms in bile has been taken as a marker of a pathological process. This assumption was supported by the failure to isolate bacterial strains from the normal bile duct. Papers began emerging in 2013 showing that the normal biliary microbiota is a separate functional layer which protects the biliary tract from colonization by exogenous microorganisms. [ 103 ]
Human bodies rely on the innumerable bacterial genes as the source of essential nutrients. [ 104 ] Both metagenomic and epidemiological studies indicate vital roles for the human microbiome in preventing a wide range of diseases, from type 2 diabetes and obesity to inflammatory bowel disease, Parkinson's disease, and even mental health conditions like depression. [ 105 ] A symbiotic relationship between the gut microbiota and different bacteria may influence an individual's immune response. [ 106 ] Metabolites generated by gut microbes appear to be causative factors in type 2 diabetes. [ 107 ] Although in its infancy, microbiome-based treatment is also showing promise, most notably for treating drug-resistant C. difficile infection [ 108 ] and in diabetes treatment. [ 109 ]
An overwhelming presence of the bacterium C. difficile leads to an infection of the gastrointestinal tract, normally associated with dysbiosis of the microbiota believed to have been caused by the administration of antibiotics. Use of antibiotics eradicates the beneficial gut flora within the gastrointestinal tract, which normally prevents pathogenic bacteria from establishing dominance. [ 110 ] Traditional treatment for C. difficile infections includes an additional regimen of antibiotics; however, efficacy rates average between 20 and 30%. [ 111 ] Recognizing the importance of healthy gut bacteria, researchers turned to a procedure known as fecal microbiota transplant (FMT), in which patients experiencing gastrointestinal diseases, such as C. difficile infection (CDI), receive fecal content from a healthy individual in hopes of restoring a normally functioning intestinal microbiota. [ 112 ] Fecal microbiota transplant is approximately 85–90% effective in people with CDI for whom antibiotics have not worked or in whom the disease recurs following antibiotics. [ 113 ] [ 114 ] Most people with CDI recover with one FMT treatment. [ 115 ] [ 110 ] [ 116 ]
Although cancer is generally a disease of host genetics and environmental factors, microorganisms are implicated in some 20% of human cancers. [ 117 ] Particularly for potential factors in colon cancer , bacterial density is one million times higher than in the small intestine , and approximately 12-fold more cancers occur in the colon compared to the small intestine, possibly establishing a pathogenic role for microbiota in colon and rectal cancers. [ 118 ] Microbial density may be used as a prognostic tool in assessment of colorectal cancers. [ 118 ]
The microbiota may affect carcinogenesis in three broad ways: (i) altering the balance of tumor cell proliferation and death, (ii) regulating immune system function, and (iii) influencing metabolism of host-produced factors, foods and pharmaceuticals. [ 117 ] Tumors arising at boundary surfaces, such as the skin, oropharynx and respiratory, digestive and urogenital tracts, harbor a microbiota. Substantial microbe presence at a tumor site does not establish association or causal links. Instead, microbes may find tumor oxygen tension or nutrient profile supportive. Decreased populations of specific microbes or induced oxidative stress may also increase risks. [ 117 ] [ 118 ] Of the around 10³⁰ microbes on earth, ten are designated by the International Agency for Research on Cancer as human carcinogens. [ 117 ] Microbes may secrete proteins or other factors that directly drive cell proliferation in the host, or may up- or down-regulate the host immune system, including driving acute or chronic inflammation in ways that contribute to carcinogenesis. [ 117 ]
Concerning the relationship of immune function and development of inflammation, mucosal surface barriers are subject to environmental risks and must rapidly repair to maintain homeostasis . Compromised host or microbiota resiliency also reduces resistance to malignancy, possibly inducing inflammation and cancer. Once barriers are breached, microbes can elicit proinflammatory or immunosuppressive programs through various pathways. [ 117 ] For example, cancer-associated microbes appear to activate NF-κΒ signaling within the tumor microenvironment. Other pattern recognition receptors, such as nucleotide-binding oligomerization domain–like receptor (NLR) family members NOD-2 , NLRP3 , NLRP6 and NLRP12 , may play a role in mediating colorectal cancer. [ 117 ] Likewise Helicobacter pylori appears to increase the risk of gastric cancer, due to its driving a chronic inflammatory response in the stomach. [ 117 ] [ 118 ]
Inflammatory bowel disease consists of two different diseases, ulcerative colitis and Crohn's disease , both of which present with disruptions in the gut microbiota (also known as dysbiosis ). This dysbiosis presents itself in the form of decreased microbial diversity in the gut, [ 119 ] [ 120 ] and is correlated with defects in host genes that change the innate immune response in individuals. [ 119 ]
In patients with irritable bowel syndrome and other functional gastrointestinal disorders such as abdominal bloating, studies have noted alterations in the abundance of specific bacterial groups: in particular, decreased levels of beneficial bacteria such as Bifidobacterium and Lactobacillus , and increased levels of potentially harmful bacteria such as Bacteroides and Proteobacteria . These microbial alterations can contribute to increased intestinal permeability ("leaky gut"), visceral hypersensitivity (increased sensitivity of the gut to stimuli), altered gut motility, and immune system activation. [ 121 ] [ 122 ] [ 123 ] [ 124 ]
HIV disease progression influences the composition and function of the gut microbiota, with notable differences between HIV-negative, HIV-positive, and post- ART HIV-positive populations. [ citation needed ] HIV decreases the integrity of the gut epithelial barrier function by affecting tight junctions . This breakdown allows for translocation across the gut epithelium, which is thought to contribute to increases in inflammation seen in people with HIV. [ 125 ]
Vaginal microbiota plays a role in the infectivity of HIV, with an increased risk of infection and transmission when the woman has bacterial vaginosis , a condition characterized by an abnormal balance of vaginal bacteria. [ 126 ] The enhanced infectivity is seen with the increase in pro-inflammatory cytokines and CCR5 + CD4+ cells in the vagina. However, a decrease in infectivity is seen with increased levels of vaginal Lactobacillus, which promotes an anti-inflammatory condition. [ 125 ]
Humans who are 100 years old or older, called centenarians , have a distinct gut microbiome. This microbiome is characteristically enriched in microorganisms that are able to synthesize novel secondary bile acids . [ 127 ] These secondary bile acids include various isoforms of lithocholic acid that may contribute to healthy aging. [ 127 ]
With death, the microbiome of the living body collapses and a different composition of microorganisms, known as the necrobiome , establishes itself as an important active constituent of the complex physical decomposition process. Its predictable changes over time are thought to be useful in helping to determine the time of death. [ 128 ] [ 129 ]
Studies in 2009 questioned whether the decline in biota (including microfauna ) as a result of human intervention might impede human health, hospital safety procedures, food product design, and treatments of disease. [ 130 ]
Hygiene , [ 132 ] probiotics , [ 131 ] prebiotics , [ 133 ] synbiotics , [ 134 ] light therapy , [ 135 ] microbiota transplants ( fecal [ 136 ] or skin [ 137 ] ), antibiotics , [ 138 ] exercise , [ 139 ] [ 140 ] diet , [ 141 ] breastfeeding , [ 142 ] and aging [ 143 ] can change the human microbiome across various anatomical systems or regions such as the skin and gut.
The human microbiome is transmitted between a mother and her children , as well as between people living in the same household . [ 144 ] [ 145 ]
Primary research indicates that immediate changes in the microbiota may occur when a person migrates from one country to another, such as when Thai immigrants settled in the United States [ 146 ] or when Latin Americans immigrated into the United States. [ 147 ] Losses of microbiota diversity were greater in obese individuals and children of immigrants . [ 146 ] [ 147 ]
A 2024 study suggests that gut microbiota capable of digesting cellulose can be found in the human microbiome, and they are less abundant in people living in industrialized societies . [ 148 ] [ 149 ]
The sexome refers to microbes left on genitalia after penetrative sex. In the context of forensic science , the sexome can potentially aid in sexual assault casework for perpetrator identification when human male DNA is absent. [ 150 ] | https://en.wikipedia.org/wiki/Human_microbiome |
Human milk immunity is the protection provided to the immune system of an infant via the biologically active components in human milk . Human milk was previously thought to only provide passive immunity primarily through Secretory IgA , but advances in technology have led to the identification of various immune-modulating components. [ 1 ] [ 2 ] [ 3 ] Human milk constituents provide nutrition and protect the immunologically naive infant as well as regulate the infant's own immune development and growth. [ 4 ]
Immune factors and immune-modulating components in human milk include cytokines , growth factors , proteins , microbes , and human milk oligosaccharides . [ 5 ] [ 6 ] Immune factors in human milk are categorized mainly as anti-inflammatory [ 2 ] primarily working without inducing inflammation or activating the complement system . [ 7 ]
Bio-active constituents of human milk that have been cataloged to possess immune-modulating capabilities include immunoglobulins , Lactoferrin , Lysozyme , oligosaccharides , lipids , cytokines , hormones , and growth factors . [ 7 ] [ 8 ] Some of the roles of bio-actives in human milk are theorized based on their function in other parts of the body, but the mechanisms and function of their activities remain to be discovered. [ 9 ]
Immunoglobulin A is the best-known immune factor in human milk. [ 2 ] In its secretory form, SIgA , it is the most plentiful antibody in human milk, [ 2 ] [ 8 ] constituting 80–90% of all immunoglobulins present in milk. [ 8 ] SIgA provides adaptive immunity by directly targeting specific pathogens that both infant and mother have been exposed to in their environments. [ 2 ] [ 10 ]
Lactoferrin is an immune protein with strong anti-microbial function in human milk. [ 11 ] Lactoferrin protects the infant intestine by binding to iron to prevent pathogens from utilizing it as a resource. It also modulates immunity by blocking inflammatory signaling cytokines. [ 7 ]
Cytokines are pluripotent signaling molecules with the ability to bind to specific receptors. [ 3 ] They can cross the intestinal barrier and mediate immune activity. [ 12 ] Their presence in human milk may stimulate lymphocytes responsible for the development of the infant's specific immunity. [ 7 ] Cytokines present in human milk include IL-1β , IL-6, IL-8, IL-10, TNFα , and IFN-γ . [ 3 ]
Bio-active components in human milk are speculated to colonize in human milk in several ways including secretion by the mammary gland , epithelium cells, and by milk cells. [ 3 ] [ 12 ] Maternal immune factors are transferred by lymphocytes traveling from the mother's gut to the mammary gland [ 8 ] where the secretory cells of the breast produce antibodies . [ 10 ]
The origin of the human milk microbiota, including those with immune-modulating functions, are not well established. However, several theories including skin-to-skin contact , [ 2 ] the entero-mammary pathway, [ 13 ] and retrograde back-flow hypothesis [ 14 ] [ 15 ] have been put forth to explain the microbial composition of human milk.
Human milk immune composition is known to change over the course of lactation. [ 12 ] Most notably, antibody levels are lower in mature milk than in colostrum , [ 7 ] with SIgA measuring at up to 12 grams per liter in colostrum and decreasing to 1 gram per liter in mature milk. [ 8 ] Studies find time postpartum to be most influential on the presence of immune factors, including growth factors [ 16 ] and lactoferrin. [ 11 ]
The exposure to microbiota through mother's milk is the primary stimulus for immune development in infants. [ 8 ] The microbiota interacts with the infant's immune system by stimulating the mucous layer, down-regulating the inflammatory response, producing antibodies, and helping initiate oral tolerance . [ 17 ] The mucosal layer's protection comes from its ability to limit pathogens from attaching to the infant intestinal tract . [ 8 ]
Human milk oligosaccharides (HMOs) are carbohydrate components in human milk. [ 12 ] They are mostly indigestible and work as a prebiotic to feed commensal bacteria in the infant gut. [ 9 ] [ 18 ] Studies show that HMOs also function as immune-modulators by blocking receptors that allow pathogenic bacteria to attach to the infant intestinal epithelium. [ 19 ]
There are observed differences in immune factor composition in the milk of mothers who delivered by cesarean section versus vaginally. [ 20 ] A study of 82 women saw an increase in the levels of IgA in the colostrum of women who had cesarean births after experiencing labor when compared to women who delivered vaginally or had elective cesareans . [ 21 ]
Milk immunity levels are observably lower in women with higher parity . [ 22 ] A study among the Ariaal women of Kenya saw that milk IgA decreased drastically only in women who had given birth to eight or more children. [ 23 ]
Human milk composition remains relatively stable despite maternal dietary changes, except in cases of extreme maternal depletion. Seasonal changes and malnutrition influence the concentration of immune factors. [ 22 ]
In addition, intervention studies have confirmed that both fish oil [ 24 ] and fish consumption during pregnancy can alter immune-modulating components in human milk. [ 25 ]
Differences in the maternal environment, such as rural and urban environments, [ 26 ] including exposure to farming [ 27 ] and exposure to pathogens, [ 28 ] have been shown to affect human milk immune factor variation. [ 2 ]
Geographic location is known to play a role in human milk variation, with country of residence specifically linked with immune factor variation. [ 29 ] A study found a variation in levels of growth factor in both mature milk and colostrum to be correlated with geographic location. [ 16 ] However, a larger study found support for consistency in the presence of a small group of immunological factors in mature milk independent of geographic location. [ 26 ]
Over the last century, breastfeeding has been consistently shown to reduce infant mortality and morbidity, particularly from infectious disease. [ 8 ] Comparative research between human milk and formula has pointed towards the bio-active components in human milk as potential contributors to its immunological protection. [ 9 ] Studies have shown that breastfed infants respond better to vaccines, [ 30 ] and are better protected against diarrhea , otitis media , sepsis , necrotizing enterocolitis , [ 7 ] celiac disease , obesity , and inflammatory bowel disease than formula-fed infants. [ 1 ] Human breast milk is seen as particularly beneficial to infants born before full term and those that are underweight at birth, who are at a higher risk of infectious diseases such as sepsis and meningitis . [ 7 ] [ 30 ] Also, there is a lower chance of contamination acquired through direct breastfeeding than with mixing formula with water or other animal milks, which may also help explain why human milk is more protective for the infant. [ 31 ]
Because various components present in human breast milk stimulate the growth of the immune system, there is a growing interest in whether breastfeeding provides a long term protective effect against auto-immune and inflammatory diseases . [ 7 ]
The WHO infant feeding guidelines advise the use of donor milk when the mother's milk is not available. [ 32 ] With the understanding that breast milk provides immune protection that is absent in formula, mothers have turned to milk sharing options in order to give formula alternatives to their infants. [ 33 ] Milk sharing is defined as a donation of milk without monetary benefit. [ 32 ] In addition, milk banks have emerged to regulate and pasteurize donated milk to be sold in the legal market. [ 33 ] The main concern with bank milk is that it loses many immune cells, commensal microbiota, and bio-active proteins during the pasteurization process. [ 34 ] Donor milk is in high demand for infants in the neonatal intensive care unit ( NICU ), [ 33 ] who have been shown to benefit most from access to human milk. [ 35 ]
Immunological consequences or benefits of milk sharing are not well documented, but it has been speculated that allo-nursing, or nursing from multiple females, may provide infants with an immune boost. [ 33 ] The reported risks associated with unregulated milk sharing include the possible transmission of drugs, toxins , pathogenic bacteria , HIV and other viruses . [ 33 ] However, some researchers believe that allo-nursing and milk sharing may have been part of our evolutionary past. Evidence of a milk sharing history includes the wet nursing practices of the 20th century, [ 33 ] milk kinship in Islamic tradition, [ 36 ] and documentation of allo-nursing in primate species. [ 33 ] [ 37 ]
There is evidence of a relationship between the microbes that have co-evolved with humans as their host and the human immune system. [ 38 ] The transfer of microorganisms from mother to offspring is universal in animals. In humans, microbial exchange occurs primarily through placental transfer and breast milk. [ 39 ] The presence of these complex microbial communities in the human body suggests that the immune system has been selected to remember and mediate the colonization of these microorganisms within the human host. [ 40 ] Further, microbial dysbiosis in infants is strongly associated with immune-mediated diseases such as allergies and necrotizing enterocolitis . [ 17 ]
In early life, an infant's immune system is considered immature due to its lack of resources necessary for defense against infection. [ 7 ] An infant is not able to produce specific cytokines [ 30 ] or IgA, [ 7 ] and is limited to producing mostly IgM antibodies. [ 30 ] The human infant is unable to adequately protect itself without the immune-stimulating and immune-modulating components present in human milk. This dynamic affirms the consensus among researchers that human milk evolved to provide not only nutritional but immunological benefits to the infant. [ 23 ] Some researchers have proposed that the mammary gland and milk production evolved as a part of the human innate immune system, [ 41 ] with its immunological protective role predating its nutritional role. [ 42 ] | https://en.wikipedia.org/wiki/Human_milk_immunity
Human milk oligosaccharides ( HMOs ), also known as human milk glycans , are short polymers of simple sugars that can be found in high concentrations in human breast milk . [ 1 ] Human milk oligosaccharides promote the development of the immune system, can reduce the risk of pathogen infections and improve brain development and cognition. [ 1 ] The HMO profile of human breast milk shapes the gut microbiota of the infant by selectively stimulating bifidobacteria and other bacteria. [ 2 ]
In contrast to the other components of breast milk that are absorbed by the infant through breastfeeding, HMOs are indigestible for the nursing child. However, they have a prebiotic effect and serve as food for intestinal bacteria, especially bifidobacteria . [ 3 ] The dominance of these intestinal bacteria in the gut reduces the colonization with pathogenic bacteria (probiosis) and thereby promotes a healthy intestinal microbiota and reduces the risk of dangerous intestinal infections. Recent studies suggest that HMOs significantly lower the risk of viral and bacterial infections and thus diminish the chance of diarrhoea and respiratory diseases.
This protective function of the HMOs is activated when in contact with specific pathogens , such as certain bacteria or viruses . These have the ability to bind themselves to the glycan receptors (receptors for long chains of connected sugar molecules on the surface of human cells) located on the surface of the intestinal cells and can thereby infect the cells of the intestinal mucosa . Researchers have discovered that HMOs mimic these glycan receptors so the pathogens bind themselves to the HMOs rather than the intestinal cells. This reduces the risk of an infection with a pathogen. [ 1 ] [ 4 ] It has also been demonstrated that HMOs can bind to several intestinal viruses, such as norovirus and Norwalk virus , moreover they can reduce the virus load from influenza and RSV . [ 5 ]
In addition to this, HMOs seem to influence the reaction of specific cells of the immune system in a way that reduces inflammatory responses. [ 1 ] [ 6 ] It is also suspected that HMOs reduce the risk of premature infants becoming infected with the potentially life-threatening disease necrotizing enterocolitis (NEC). [ 1 ]
Some of the metabolites directly affect the nervous system or the brain and can sometimes influence the development and behavior of children in the long term. There are studies that indicate certain HMOs supply the child with sialic acid residues. Sialic acid is an essential nutrient for the development of the child’s brain and mental abilities. [ 1 ] [ 6 ]
In experiments designed to test the suitability of HMOs as a prebiotic source of carbon for intestinal bacteria, it was discovered that they are highly selective for a commensal bacterium known as Bifidobacterium longum biovar infantis . The presence of genes unique to B. infantis , including co-regulated glycosidases, and its efficiency at using HMOs as a carbon source may imply a co-evolution of HMOs and the genetic capability of select bacteria to utilize them. [ 7 ]
Milk oligosaccharides seem to be more abundant in humans than in other animals and to be more complex and varied. [ 8 ] Oligosaccharides in primate milk are generally more complex and diverse than in non-primates. [ 1 ]
Human milk oligosaccharides (HMOs) form the third most abundant solid component (dissolved, emulsified or suspended in water) of human milk, after lactose and fat . [ 9 ] HMOs are present in a concentration of 11.3 – 17.7 g/L (1.5 oz/gal – 2.36 oz/gal) in human milk, depending on lactation stages. [ 10 ] Approximately 200 structurally different human milk oligosaccharides are known, and they can be categorized into fucosylated, sialylated and neutral core HMOs. The composition of human milk oligosaccharides in breast milk is individual to each mother and varies over the period of lactation . The dominant oligosaccharide in 80% of all women is 2′-fucosyllactose , which is present in human breast milk at a concentration of approximately 2.5 g/L; [ 4 ] other abundant oligosaccharides include lacto- N -tetraose , lacto- N -neotetraose, and lacto- N -fucopentaose. [ 11 ] It has been found by numerous studies that the concentration of each individual human milk oligosaccharide changes throughout the different periods of lactation ( colostrum , transitional, mature and late milk) and depends on various factors such as the mother's genetic secretor status and length of gestation. [ 10 ]
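As a quick sanity check, the g/L range quoted above converts to the quoted oz/gal range using standard unit factors. A minimal sketch (the conversion constants are standard avoirdupois-ounce and US-gallon values, not taken from the source):

```python
# Standard conversion constants (assumed: avoirdupois ounce, US liquid gallon).
G_PER_OZ = 28.3495   # grams per ounce
L_PER_GAL = 3.78541  # litres per US gallon

def g_per_l_to_oz_per_gal(concentration_g_per_l):
    """Convert a concentration from grams per litre to ounces per US gallon."""
    return concentration_g_per_l / G_PER_OZ * L_PER_GAL

low = g_per_l_to_oz_per_gal(11.3)   # ~1.51 oz/gal, matching the quoted 1.5
high = g_per_l_to_oz_per_gal(17.7)  # ~2.36 oz/gal, matching the quoted 2.36
```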
All HMOs derive from lactose, which can be decorated by four monosaccharides ( N-acetyl-D-glucosamine , D-galactose , sialic acid and/or L-fucose ) to form an oligosaccharide. [ 10 ] The HMO variability in human mothers depends on two specific enzymes , the α1-2-fucosyltransferase ( FUT2 ) and the α1-3/4-fucosyltransferase ( FUT3 ). [ 16 ] The milk of mothers with an inactivated FUT2 enzyme does not contain α1-2-fucosylated HMOs, and likewise with an inactivated FUT3 enzyme almost no α1-4-fucosylated HMOs can be found. Typically 20% of the global population of mothers do not have an active FUT2 enzyme but still have an active FUT3 enzyme, whereas 1% of mothers express neither FUT2 nor FUT3 enzymes. [ 17 ]
Human milk oligosaccharides can be synthesized in large quantities using precision industrial fermentation methods, e.g. by the commonly used, non-pathogenic bacterium Escherichia coli . [ 18 ] During the fermentation process the bacteria are fed with a carbon source (e.g. glucose), salts, minerals and trace elements under aseptic conditions in a stainless steel bioreactor , while lactose is added to the process as a precursor molecule. The bacteria then convert the lactose into human milk oligosaccharides by decorating it with other sugar monomers. After the fermentation process the HMOs are completely separated from the bacteria, proteins and DNA using different filtration techniques. [ 18 ] Subsequently the HMOs are purified, crystallized , dried, packaged and delivered to infant formula manufacturers, where they are mixed with other components of infant formula. [ 18 ]
Enzymatic synthesis of HMOs through transgalactosylation is an efficient route for their production. Various donors, including p -nitrophenyl-β-galactopyranoside, uridine diphosphate galactose and lactose, can be used in transgalactosylation. In particular, lactose may act as either a donor or an acceptor in a variety of enzymatic reactions and is available in large quantities from the whey produced as a co-product of cheese production. There is a lack of published data, however, describing the large-scale production of such galacto-oligosaccharides. [ 19 ] | https://en.wikipedia.org/wiki/Human_milk_oligosaccharide
The human mitochondrial molecular clock is the rate at which mutations have been accumulating in the mitochondrial genome of hominids during the course of human evolution . The archeological record of human activity from early periods in human prehistory is relatively limited and its interpretation has been controversial. Because of the uncertainties from the archeological record, scientists have turned to molecular dating techniques in order to refine the timeline of human evolution. A major goal of scientists in the field is to develop an accurate hominid mitochondrial molecular clock which could then be used to confidently date events that occurred during the course of human evolution.
Estimates of the mutation rate of human mitochondrial DNA (mtDNA) vary greatly depending on the available data and the method used for estimation. The two main methods of estimation, phylogeny-based methods and pedigree-based methods, have produced mutation rates that differ by almost an order of magnitude . Current research has been focused on resolving the high variability obtained from different rate estimates.
A major assumption of the molecular clock theory is that mutations within a particular genetic system occur at a statistically uniform rate, and that this uniform rate can be used for dating genetic events. In practice, the assumption of a single uniform rate is an oversimplification. Though a single mutation rate is often applied, it is frequently a composite or an average of several different mutation rates. [ 1 ] Many factors influence observed mutation rates, including the type of samples, the region of the genome studied and the time period covered.
The rate at which mutations occur during reproduction, the germline mutation rate, is thought to be higher than all observed mutation rates, because not all mutations are successfully passed down to subsequent generations. [ 2 ] mtDNA is only passed down along the matrilineal line, and therefore mutations passed down to sons are lost. Random genetic drift may also cause the loss of mutations. For these reasons, the actual mutation rate will not be equivalent to the mutation rate observed from a population sample. [ 2 ]
Population dynamics are believed to influence observed mutation rates. When a population is expanding, more germline mutations are preserved in the population. As a result, observed mutation rates tend to increase in an expanding population. When populations contract, as in a population bottleneck , more germline mutations are lost. Population bottlenecks thus tend to slow down observed mutation rates. Since the emergence of Homo sapiens about 200,000 years ago, the human population has expanded from a few thousand individuals living in Africa to over 8 billion worldwide. However, the expansion has not been uniform, so the history of human populations may consist of both bottlenecks and expansions. [ 3 ]
The mutation rate across the mitochondrial genome is not uniformly distributed. Certain regions of the genome are known to mutate more rapidly than others. The hypervariable regions in particular are highly polymorphic relative to other parts of the genome.
The rate at which mutations accumulate in coding and non-coding regions of the genome also differs, as mutations in the coding region are subject to purifying selection . For this reason, some studies avoid coding-region or nonsynonymous mutations when calibrating the molecular clock. Loogvali et al. (2009) consider only synonymous mutations and have recalibrated the molecular clock of human mtDNA as 7,990 years per synonymous mutation over the mitochondrial genome. [ 1 ] Soares et al. (2009) consider both coding- and non-coding-region mutations to arrive at a single mutation rate, but apply a correction factor to account for selection in the coding region.
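A synonymous-mutation clock of this kind converts directly into dates: multiply a lineage's mean count of synonymous mutations by 7,990 years. A minimal sketch, where the mutation count is a hypothetical example, not a figure from any cited study:

```python
# Dating with the synonymous clock of Loogvali et al. (2009): on average, one
# synonymous mutation accumulates per 7,990 years over the whole mitochondrial
# genome. The sample count below is hypothetical, for illustration only.

YEARS_PER_SYNONYMOUS_MUTATION = 7990

def estimate_age_years(mean_synonymous_mutations):
    """Age of a clade, from the mean number of synonymous mutations its
    members carry relative to the reconstructed founder haplotype."""
    return mean_synonymous_mutations * YEARS_PER_SYNONYMOUS_MUTATION

# e.g. a clade whose members average 5.5 synonymous mutations from the root:
print(estimate_age_years(5.5))
```

The same arithmetic underlies any rate-based clock; only the calibration constant changes.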
The mutation rate has been observed to vary with time. Mutation rates within the human species are faster than those observed along the human-ape lineage. The mutation rate is also thought to be faster in recent times, since the beginning of the Holocene 11,000 years ago. [ 1 ] [ 3 ] [ 4 ]
Parallel mutation (sometimes referred to as homoplasy ) or convergent evolution occurs when separate lineages independently acquire the same mutation at the same site in the genome. Saturation occurs when a single site experiences multiple mutations. Parallel mutations and saturation result in the underestimation of the mutation rate because they are likely to be overlooked. [ 2 ]
Individuals affected by heteroplasmy have a mixture of mtDNA types, some with new mutations and some without. The new mutations may or may not be passed down to subsequent generations. Thus the presence of heteroplasmic individuals in a sample may complicate the calculation of mutation rates. [ 2 ] [ 5 ]
Pedigree methods estimate the mutation rate by comparing the mtDNA sequences of a sample of parent/offspring pairs or analyzing mtDNA sequences of individuals from a deep-rooted genealogy. The number of new mutations in the sample is counted and divided by the total number of parent-to-child DNA transmission events to arrive at a mutation rate. [ 3 ] [ 5 ]
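The pedigree calculation reduces to a single division. A minimal sketch, with hypothetical sample numbers (the counts and genome length shown are illustrative, not from any cited pedigree study):

```python
# Pedigree-based mutation rate: count new mutations observed across
# parent-to-child mtDNA transmission events. Numbers below are hypothetical.

def pedigree_rate(new_mutations, transmissions, sites):
    """Mutation rate per site per generation (per transmission event)."""
    return new_mutations / (transmissions * sites)

# e.g. 10 new mutations seen in 300 transmissions of a 16,569-site genome:
rate = pedigree_rate(10, 300, 16569)
print(f"{rate:.2e} mutations per site per generation")
```

Dividing further by an assumed generation time (in years) would convert this into a per-year rate comparable with phylogenetic estimates.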
Phylogeny-based methods estimate the mutation rate by first reconstructing the haplotype of the most recent common ancestor (MRCA) of a sample of two or more genetic lineages. A requirement is that the time to the most recent common ancestor ( TMRCA ) of the sampled lineages must already be known from other independent sources, usually the archeological record. The average number of mutations that have accumulated since the MRCA is then computed and divided by the TMRCA to arrive at the mutation rate. The human mutation rate is usually estimated by comparing the sequences of modern humans and chimpanzees and then reconstructing the ancestral haplotype of the chimpanzee–human common ancestor. According to the paleontological record, the last common ancestor of humans and chimpanzees may have lived around 6 million years ago. [ 3 ]
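The phylogeny-anchored calculation can be sketched the same way; the mutation count and TMRCA below are hypothetical placeholders, not figures from any particular study:

```python
# Phylogeny-based mutation rate: average mutations accumulated since the
# MRCA, divided by an externally dated TMRCA. Numbers are hypothetical.

def phylogenetic_rate(mean_mutations_since_mrca, sites, tmrca_years):
    """Substitutions per site per year, anchored on a known TMRCA."""
    return mean_mutations_since_mrca / (sites * tmrca_years)

# e.g. lineages averaging 50 mutations over 16,569 sites since a
# 200,000-year-old MRCA:
rate = phylogenetic_rate(50, 16569, 200_000)
print(f"{rate:.2e} substitutions per site per year")
```

Note that the whole estimate scales inversely with the assumed TMRCA, which is why uncertainty in the anchor date propagates directly into the rate.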
Rates obtained by pedigree methods are about 10 times faster than those obtained by phylogenetic methods. Several factors acting together may be responsible for this difference. Because pedigree methods record mutations in living subjects, the mutation rates from pedigree studies are closer to the germline mutation rate. Pedigree studies use genealogies that are only a few generations deep, whereas phylogeny-based methods use timescales that are thousands or millions of years deep. According to Henn et al. 2009, phylogeny-based methods take into account events that occur over long time scales and are thus less affected by stochastic fluctuations. Howell et al. 2003 suggest that selection, saturation, parallel mutations and genetic drift are responsible for the differences observed between pedigree-based and phylogeny-based methods.
Anatomically modern humans (AMH) spread out of Africa over a large area of Eurasia, leaving artifacts across Southwest, South, Southeast and East Asia. Cann, Stoneking & Wilson (1987) did not rely on a predicted T CHLCA to estimate single-nucleotide polymorphism (SNP) rates. Instead, they used evidence of colonization in Southeast Asia and Oceania to estimate mutation rates, and they used RFLP technology ( restriction fragment length polymorphism ) to examine differences between DNA samples. Using these techniques, the group arrived at a T MRCA of 140,000 to 290,000 years. Cann et al. (1987) estimated the TMRCA of humans to be approximately 210,000 years; the most recent estimates, Soares et al. 2009 (using a 7-million-year chimpanzee–human mtDNA MRCA), differ by only 9%, which is relatively close considering the wide confidence ranges of both estimates, and argues for a more ancient T CHLCA .
Endicott & Ho (2008) have reevaluated the predicted migrations globally and compared them to the actual evidence. This group used the coding regions of sequences. They postulate that the molecular clock based on chimpanzee–human comparisons is not reliable, particularly for predicting recent migrations, such as the founding migrations into Europe, Australia, and the Americas. With this technique the group arrived at a T MRCA of 82,000 to 134,000 years.
Because chimpanzees and humans share a matrilineal ancestor, establishing the geological age of that last ancestor allows the estimation of the mutation rate. The chimpanzee–human last common ancestor (CHLCA) is frequently applied as an anchor for mt-T MRCA studies, with ranges between 4 and 13 million years cited in the literature. [ 6 ] This is one source of variation in the time estimates. The other weakness is the non-clocklike accumulation of SNPs, which tends to make more recent branches look older than they actually are. [ 7 ]
These two sources may balance each other or amplify each other depending on the direction of the T CHLCA error. There are two major reasons why this method is widely employed. First, pedigree-based rates are inappropriate for estimates over very long periods of time. Second, while archaeology-anchored rates represent the intermediate range, archaeological evidence for human colonization often postdates the colonization itself. For example, colonization of Eurasia from west to east is believed to have occurred along the Indian Ocean. However, the oldest archaeological sites that also demonstrate anatomically modern humans (AMH) are in China and Australia, greater than 42,000 years in age, whereas the oldest Indian site with AMH remains is 34,000 years old, and another site with AMH-compatible archaeology is in excess of 76,000 years in age. [ 7 ] Therefore, application of the anchor is a subjective interpretation of when humans were first present.
A simple measure of the sequence divergence between humans and chimpanzees can be obtained by counting SNPs. Given that the mitogenome is about 16,553 base pairs in length (each base pair which can be aligned with known references is called a site), [ 8 ] the formula is:

mutation rate per site per year = (number of SNPs / 16,553 sites) / (2 × T CHLCA )
The '2' in the denominator reflects the two lineages, human and chimpanzee, that split from the CHLCA. Ideally it represents the accumulation of mutations on both lineages but at different positions (SNPs). As long as the number of SNPs observed approximates the number of mutations, this formula works well. However, at rapidly evolving sites, mutations are obscured by saturation effects. Sorting positions within the mitogenome by rate and compensating for saturation are alternative approaches. [ 9 ]
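The role of the factor of 2 can be sketched numerically. In this illustration the SNP count is hypothetical, while the T CHLCA values span the 4–13 million year range cited earlier:

```python
# Rate from human-chimpanzee mtDNA differences: SNPs per site, divided by
# 2 x T_CHLCA because mutations accumulated along both lineages since the
# split. The SNP count (1500) is hypothetical, for illustration only.

SITES = 16553  # alignable mitogenome positions

def rate_from_chlca(snps, t_chlca_years):
    """Substitutions per site per year, anchored on the CHLCA."""
    return (snps / SITES) / (2 * t_chlca_years)

for t in (4e6, 6e6, 13e6):
    print(f"T_CHLCA = {t/1e6:g} Ma -> {rate_from_chlca(1500, t):.2e} per site per year")
```

As the loop makes explicit, an older assumed CHLCA yields a proportionally slower inferred rate, and hence older TMRCA estimates downstream.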
Because the T CHLCA is subject to change with more paleontological information, the equation described above allows the comparison of TMRCA from different studies.
To overcome the effects of saturation , HVR analysis relied on the transversional distance between humans and chimpanzees. [ 10 ] A transition to transversion ratio was applied to this distance to estimate sequence divergence in the HVR between chimpanzees and humans, and divided by an assumed T CHLCA of 4 to 6 million years. [ 11 ] Based on 26.4 substitutions between chimpanzee and human and 15:1 ratio, the estimated 396 transitions over 610 base-pairs demonstrated sequence divergence of 69.2% (rate * T CHLCA of 0.369), producing divergence rates of roughly 11.5% to 17.3% per million years .
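The arithmetic described above can be reproduced step by step; all figures (26.4 transversions, the assumed 15:1 ratio, 610 base pairs, and the 4–6 Ma anchor) are taken from the text:

```python
# Reproducing the HVR transversion-distance arithmetic described above.
transversions = 26.4
ts_tv_ratio = 15           # assumed transitions per transversion (15:1)
hvr_sites = 610

transitions = transversions * ts_tv_ratio               # ~396 transitions
divergence = (transversions + transitions) / hvr_sites  # ~0.692 per site

for t_chlca in (4e6, 6e6):  # assumed CHLCA age in years
    pct_per_myr = divergence / t_chlca * 1e6 * 100
    print(f"T_CHLCA = {t_chlca/1e6:g} Ma -> {pct_per_myr:.1f}% per million years")
```

Running this recovers the quoted range of roughly 11.5% to 17.3% per million years, confirming the figures are internally consistent.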
Vigilant et al. (1991) also estimated the sequence divergence rate for the sites in the rapidly evolving HVR I and HVR II regions. As noted in the table above, the rate of evolution is so high that site saturation occurs in direct chimpanzee–human comparisons. Consequently, this study used transversions, which evolve at a slower rate than the more common transition polymorphisms. Comparing chimpanzee and human mitogenomes, they noted 26.4 transversions within the HVR regions; however, they made no correction for saturation. As more HVR sequence was obtained following this study, it was noted that the dinucleotide site CRS:16181-16182 experienced numerous transversions in parsimony analysis, many of which were considered to be sequencing errors. However, the sequencing of the Feldhofer I Neanderthal revealed that there was also a transversion between humans and Neanderthals at this site. [ 12 ] In addition, Soares et al. (2009) noted three sites at which recurrent transversions had occurred in human lineages, two of which are in HVR I: 16265 (12 occurrences) and 16318 (8 occurrences). [ note 1 ] Therefore, 26.4 transversions was an underestimate of the likely number of transversion events. The 1991 study also used a transition-to-transversion ratio of 15:1 from a study of Old World monkeys. [ citation needed ] However, examination of chimpanzee and gorilla HVRs reveals a lower ratio, and examination of humans places the ratio at 34:1. [ 6 ] Therefore, this study underestimated the level of sequence divergence between chimpanzee and human. The estimated sequence divergence of 0.738 per site (including transversions) is significantly lower than the ~2.5 per site suggested by Soares et al. (2009). These two errors would result in an overestimate of the human mitochondrial TMRCA. However, the study failed to detect the basal L0 lineage in the analysis and also failed to detect recurrent transitions in many lineages, both of which lead to an underestimate of the TMRCA. In addition, Vigilant et al. (1991) used a more recent CHLCA anchor of 4 to 6 million years.
(Phylogenetic tree of basal mtDNA haplogroups: L0d, L0k, L0f, L0b, L0a, L1b, L1c, L5, L2, L6, L3, L4)
Partial coding-region sequence originally supplemented HVR studies because complete coding-region sequence was uncommon. There were suspicions that the HVR studies had missed major branches, based on some earlier RFLP and coding-region studies. Ingman et al. (2000) was the first study to compare genomic sequences for coalescence analysis. Coding-region sequence discriminated the M and N haplogroups and the L0 and L1 macrohaplogroups. Because genomic DNA sequencing resolved the two deepest branches, it improved some aspects of estimating TMRCA over HVR sequence alone. Excluding the D-loop and using a 5-million-year T CHLCA , Ingman et al. (2000) estimated the mutation rate to be 1.70 × 10 −8 per site per year (rate × T CHLCA = 0.085, 15,435 sites).
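The quoted figures are internally consistent and can be checked in one line:

```python
# Consistency check of the Ingman et al. (2000) figures quoted above.
rate = 1.70e-8        # substitutions per site per year
t_chlca = 5_000_000   # assumed chimpanzee-human split, in years

# rate x T_CHLCA should reproduce the quoted 0.085 substitutions per site
print(round(rate * t_chlca, 3))
```

This product (rate × T CHLCA ) is the expected number of substitutions per site accumulated along one lineage since the split, which is why it appears as a dimensionless calibration figure in the text.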
However, coding region DNA has come under question because coding sequences are either under purifying selection to maintain structure and function, or under regional selection to evolve new capacities. [ 13 ] The problem with mutations in the coding region has been described as such: mutations occurring in the coding region that are not lethal to the mitochondria can persist but are negatively selective to the host; over a few generations these will persist, but over thousands of generations these slowly are pruned from the population, leaving SNPs. [ 6 ] However, over thousands of generations regionally selective mutations may not be discriminated from these transient coding region mutations. The problem with rare mutations in the human mitogenomes is significant enough to prompt a half-dozen recent studies on the matter.
Ingman et al. (2000) estimated non-D-loop evolution at 1.7 × 10 −8 per site per year based on 53 non-identical genomic sequences, overrepresenting Africa in a global sample. Despite this overrepresentation, the resolution of the L0 subbranches was lacking, and one other deep L1 branch has since been found. Despite these limitations, the sampling was adequate for this hallmark study. Today, L0 is restricted to African populations, whereas L1 is the ancestral haplogroup of all non-Africans, as well as most Africans. Mitochondrial Eve's sequence can be approximated by comparing a sequence from L0 with a sequence from L1 and reconciling the mutations in each. The mtDNA sequences of contemporary human populations will generally differ from Mitochondrial Eve's sequence by about 50 mutations. [ 14 ] [ 15 ] Mutation rates were not classified according to site (other than excluding the HVR regions). The T CHLCA of 5 Ma used in the 2000 study was also lower than the values used in the most recent studies.
Since it has become possible to sequence large numbers of ancient mitogenomes, several studies have estimated the mitochondrial mutation rate by measuring how many more mutations on average have accumulated in modern (or later) genomes compared to ancient (or earlier) ones descending from the same phylogenetic node. These studies have obtained similar results: central estimates for the whole chromosome, in substitutions per site per year: 2.47 × 10 −8 ; [ 16 ] 2.14 × 10 −8 ; [ 17 ] 2.53 × 10 −8 ; [ 18 ] and 2.74 × 10 −8 . [ 19 ]
Molecular clocking of mitochondrial DNA has been criticized because of its inconsistent molecular clock. [ 20 ] [ 21 ] [ 22 ] A retrospective analysis of any pioneering process will reveal inadequacies. With mitochondrial DNA, the inadequacies are the argument from ignorance of rate variation and overconfidence concerning the T CHLCA of 5 Ma. Lack of historical perspective might explain the second issue; the problem of rate variation is something that could only be resolved by the massive study of mitochondria that followed. The number of HVR sequences accumulated between 1987 and 2000 increased by orders of magnitude. Soares et al. (2009) used 2,196 mitogenomic sequences and uncovered 10,683 substitution events within these sequences. Eleven of 16,560 sites in the mitogenome produced greater than 11% of all substitutions, with statistically significant rate variation within the 11 sites. [ note 2 ] They argue that there is a neutral-site mutation rate which is an order of magnitude slower than the rate observed for the fastest site, CRS 16519. Consequently, purifying selection aside, the rate of mutation itself varies between sites, with a few sites much more likely to undergo new mutations than others. [ 23 ] Soares et al. (2009) noted two spans of DNA, CRS 2651-2700 and 3028-3082, that had no SNPs within the 2,196 mitogenomic sequences.
Phylogenetic tree of human mitochondrial DNA (mtDNA) haplogroups | https://en.wikipedia.org/wiki/Human_mitochondrial_molecular_clock |
Human papillomavirus infection ( HPV infection ) is caused by a DNA virus from the Papillomaviridae family. [ 5 ] Many HPV infections cause no symptoms and 90% resolve spontaneously within two years. [ 1 ] In some cases, an HPV infection persists and results in either warts or precancerous lesions . [ 2 ] All warts are caused by HPV. These lesions, depending on the site affected, increase the risk of cancer of the cervix , vulva , vagina , penis , anus , mouth, tonsils, or throat . [ 1 ] [ 2 ] [ 3 ] Nearly all cervical cancer is due to HPV, and two strains – HPV16 and HPV18 – account for 70% of all cases. [ 1 ] [ 7 ] HPV16 is responsible for almost 90% of HPV-positive oropharyngeal cancers . [ 3 ] Between 60% and 90% of the other cancers listed above are also linked to HPV. [ 7 ] HPV6 and HPV11 are common causes of genital warts and laryngeal papillomatosis . [ 1 ]
An HPV infection is caused by the human papillomavirus , a DNA virus from the papillomavirus family. [ 8 ] [ 9 ] Over 200 types have been described. [ 10 ] [ 11 ] An individual can become infected with more than one type of HPV, [ 12 ] and the disease is only known to affect humans. [ 5 ] [ 13 ] More than 40 types may be spread through sexual contact and infect the anus and genitals . [ 4 ] Risk factors for persistent infection by sexually transmitted types include early age of first sexual intercourse , multiple sexual partners, smoking, and poor immune function . [ 1 ] These types are typically spread by sustained direct skin-to-skin contact, with vaginal and anal sex being the most common methods. [ 4 ] HPV infection can also spread from mother to baby during pregnancy . [ 12 ] There is no evidence that HPV can spread via common items like toilet seats, [ 14 ] but the types that cause warts may spread via surfaces such as floors. [ 15 ] HPV is not killed by common hand sanitizers and disinfectants, increasing the possibility of the virus being transferred via contaminated inanimate objects known as fomites . [ 16 ]
HPV vaccines can prevent the most common types of infection. [ 4 ] To be most effective, inoculation should occur before the onset of sexual activity; vaccination is therefore recommended between the ages of 9 and 13 years. [ 1 ] For children aged 9–14 years, vaccination effectiveness is reported to range between 74% and 93%, decreasing to 12% to 90% for adolescents aged 15–18 years. [ 17 ] Cervical cancer screening , such as the Papanicolaou test ("pap smear") or examination of the cervix after applying acetic acid , can detect both early cancer and abnormal cells that may develop into cancer. [ 1 ] Screening allows for early treatment, which results in better outcomes. [ 1 ] Screening has reduced both the number of cases and the number of deaths from cervical cancer. [ 18 ] Genital warts can be removed by freezing . [ 5 ]
Nearly every sexually active individual is infected by HPV at some point in their lives. [ 4 ] HPV is the most common sexually transmitted infection (STI), globally. [ 5 ] High-risk HPVs cause about 5% of all cancers worldwide and about 37,300 cases of cancer in the United States each year. [ 11 ] Cervical cancer is among the most common cancers worldwide, causing an estimated 604,000 new cases and 342,000 deaths in 2020. [ 1 ] About 90% of these new cases and deaths of cervical cancer occurred in low- and middle-income countries . [ 1 ] Roughly 1% of sexually active adults have genital warts. [ 12 ] Cases of skin warts have been described since the time of ancient Greece , but it was not until 1907 that they were determined to be caused by a virus. [ 19 ]
HPV is a group of more than 200 related viruses, which are designated by a number for each virus type. [ 11 ] Some HPV types, such as HPV5, may establish infections that persist for the lifetime of the individual without ever manifesting any clinical symptoms. HPV types 1 and 2 can cause common warts in some infected individuals. [ 20 ] HPV types 6 and 11 can cause genital warts and laryngeal papillomatosis . [ 1 ]
Many HPV types are carcinogenic . [ 21 ] About twelve HPV types (including types 16, 18, 31, and 45) are called "high-risk" types because persistent infection has been linked to cancer of the oropharynx , [ 3 ] larynx , [ 3 ] vulva , vagina , cervix , penis , and anus . [ 11 ] [ 22 ] [ 23 ] These cancers all involve sexually transmitted infection of HPV to the stratified epithelial tissue . [ 1 ] [ 2 ] HPV type 16 is the strain most likely to cause cancer and is present in about 47% of all cervical cancers, [ 24 ] [ 25 ] and in many vaginal and vulvar cancers, [ 26 ] penile cancers, anal cancers, and cancers of the head and neck.
The table below lists common symptoms of HPV infection and the associated types of HPV.
Available HPV vaccines protect against either two, four, or nine types of HPV. [ 30 ] There are six prophylactic HPV vaccines licensed for use: the bivalent vaccines Cervarix , Cecolin , and Walrinvax ; the quadrivalent vaccines Cervavax and Gardasil ; and the nonavalent vaccine Gardasil 9 . [ 30 ] All HPV vaccines protect against at least HPV types 16 and 18, which cause the greatest risk of cervical cancer. The quadrivalent vaccines also protect against HPV types 6 and 11. The nonavalent vaccine Gardasil 9 provides protection against those four types (6, 11, 16, and 18), along with five other high-risk HPV types responsible for 20% of cervical cancers (types 31, 33, 45, 52, and 58). [ 30 ]
Skin infection (" cutaneous " infection) with HPV is very widespread. [ 31 ] Skin infections with HPV can cause noncancerous skin growths called warts (verrucae). Warts are caused by the rapid growth of cells on the outer layer of the skin. [ 32 ] While cases of warts have been described since the time of ancient Greece, their viral cause was not known until 1907. [ 19 ]
Skin warts are most common in childhood and typically appear and regress spontaneously over weeks to months. Recurring skin warts are common. [ 33 ] All HPVs are believed to be capable of establishing long-term "latent" infections in small numbers of stem cells present in the skin. Although these latent infections may never be fully eradicated, immunological control is thought to block the appearance of symptoms such as warts. Immunological control is HPV type-specific, meaning an individual may become resistant to one HPV type while remaining susceptible to other types. [ citation needed ]
Types of warts include:
Common, flat, and plantar warts are much less likely to spread from person to person.
HPV infection of the skin in the genital area is the most common sexually transmitted infection worldwide. [ 36 ] Such infections are associated with genital or anal warts (medically known as condylomata acuminata or venereal warts), and these warts are the most easily recognized sign of genital HPV infection. [ citation needed ]
The strains of HPV that can cause genital warts are usually different from those that cause warts on other parts of the body, such as the hands or feet, or even the inner thighs. A wide variety of HPV types can cause genital warts, but types 6 and 11 together account for about 90% of all cases. [ 37 ] [ 38 ] However, in total more than 40 types of HPV are transmitted through sexual contact and can infect the skin of the anus and genitals. [ 4 ] Such infections may cause genital warts, although they may also remain asymptomatic. [ citation needed ]
The great majority of genital HPV infections never cause any overt symptoms and are cleared by the immune system in a matter of months. Moreover, people may transmit the virus to others even if they do not display overt symptoms of infection. Most people acquire genital HPV infections at some point in their lives, and about 10% of women are currently infected. [ 36 ] A large increase in the incidence of genital HPV infection occurs at the age when individuals begin to engage in sexual activity. As with cutaneous HPVs, immunity to genital HPV is believed to be specific to a specific strain of HPV. [ citation needed ]
In addition to genital warts, infection by HPV types 6 and 11 can cause a rare condition known as recurrent laryngeal papillomatosis , in which warts form on the larynx [ 39 ] or other areas of the respiratory tract. [ 40 ] [ 41 ] These warts can recur frequently, may interfere with breathing, and in extremely rare cases can progress to cancer. For these reasons, repeated surgery to remove the warts may be advisable. [ 40 ] [ 42 ]
Cervical cancer is among the most common cancers worldwide, causing an estimated 604,000 new cases and 342,000 deaths in 2020. [ 1 ] About 90% of these new cases and deaths of cervical cancer occurred in low- and middle-income countries , where screening tests and treatment of early cervical cell changes are not readily available. [ 1 ]
In the United States, about 37,300 cases of cancer due to HPV occur each year. [ 11 ]
In some infected individuals, their immune systems may fail to control HPV. Lingering infection with high-risk HPV types, such as types 16, 18, 31, and 45, can favor the development of cancer. [ 44 ] Co-factors such as cigarette smoke can also enhance the risk of HPV-related cancers. [ 45 ] [ 46 ]
HPV is believed to cause cancer by integrating its genome into nuclear DNA . Some of the early genes expressed by HPV, such as E6 and E7, act as oncogenes that promote tumor growth and malignant transformation . [ 19 ] HPV genome integration can also cause carcinogenesis by promoting genomic instability associated with alterations in DNA copy number. [ 47 ]
E6 produces a protein (also called E6) that simultaneously binds to two host cell proteins called p53 and E6-associated protein ( E6AP ). E6AP is an E3 ubiquitin ligase , an enzyme whose purpose is to tag proteins with a post-translational modification called ubiquitin. By binding both proteins, E6 induces E6AP to attach a chain of ubiquitin molecules to p53, thereby flagging p53 for proteasomal degradation. [ 48 ] [ 49 ] Normally, p53 acts to prevent cell growth and promotes cell death in the presence of DNA damage. p53 also upregulates the p21 protein, which blocks the formation of the cyclin D/Cdk4 complex, thereby preventing the phosphorylation of retinoblastoma protein (RB) and, in turn, halting cell cycle progression by preventing the activation of E2F . In short, p53 is a tumor-suppressor protein that arrests the cell cycle and prevents cell growth and survival when DNA damage occurs. [ 50 ] Thus, the degradation of p53, induced by E6, promotes unregulated cell division, growth and survival, all characteristics of cancer. [ 51 ]
It is important to note that, while the interaction between E6, E6AP, and p53 was the first to be characterized, there are multiple other proteins in the host cell that interact with E6 and assist in the induction of cancer. [ 52 ]
Studies have also shown a link between a wide range of HPV types and squamous cell carcinoma of the skin . In such cases, in vitro studies suggest that the E6 protein of the HPV virus may inhibit apoptosis induced by ultraviolet light . [ 53 ]
Nearly all cases of cervical cancer are associated with HPV infection, with two types, HPV16 and HPV18, present in 70% of cases. [ 1 ] [ 7 ] [ 24 ] [ 54 ] [ 55 ] [ 56 ] In 2012, twelve HPV types were considered carcinogenic for cervical cancer by the International Agency for Research on Cancer : 16, 18, 31, 33, 35, 39, 45, 51, 52, 56, 58, and 59. [ 57 ] One study found that 74% of squamous cell carcinomas and 78% of adenocarcinomas tested positive for HPV types 16 or 18. [ 58 ] Persistent HPV infection increases the risk for developing cervical carcinoma. Individuals who have an increased incidence of these types of infection are women with HIV/AIDS, who are at a 22-fold increased risk of cervical cancer. [ 59 ] [ 60 ]
The carcinogenic HPV types in cervical cancer belong to the alphapapillomavirus genus and can be grouped further into HPV clades . [ 61 ] The two major carcinogenic HPV clades, alphapapillomavirus-9 (A9) and alphapapillomavirus-7 (A7), contain HPV16 and HPV18 , respectively. [ 62 ] These two HPV clades were shown to have different effects on tumour molecular characteristics and patient prognosis, with clade A7 being associated with more aggressive pathways and an inferior prognosis. [ 63 ]
In 2020, about 604,000 new cases and 342,000 deaths from cervical cancer occurred worldwide. Around 90% of these occurred in the developing world . [ 1 ]
Most HPV infections of the cervix are cleared rapidly by the immune system and do not progress to cervical cancer (see below the Clearance subsection in Virology ). Because the process of transforming normal cervical cells into cancerous ones is slow, cancer occurs in people having been infected with HPV for a long time, usually over a decade or more (persistent infection). [ 40 ] [ 64 ] Furthermore, both the HPV infection and cervical cancer drive metabolic modifications that may be correlated with the aberrant regulation of enzymes related to metabolic pathways. [ 65 ]
Non-European (NE) HPV16 variants are significantly more carcinogenic than European (E) HPV16 variants. [ 66 ]
The risk of anal cancer is 17 to 31 times higher among HIV-positive individuals coinfected with high-risk HPV, and 80 times higher among HIV-positive men who have sex with men. [ 67 ]
Anal Pap smear screening for anal cancer might benefit some subpopulations of men or women engaging in anal sex. [ 68 ] No consensus exists, though, that such screening is beneficial, or who should get an anal Pap smear. [ 69 ] [ 70 ]
HPV is associated with approximately 50% of penile cancers . In the United States, penile cancer accounts for about 0.5% of all cancer cases in men. HPV16 is the most commonly associated type detected. The risk of penile cancer increases 2- to 3-fold for individuals who are infected with HIV as well as HPV. [ 67 ]
Oral infection with high-risk carcinogenic HPV types (most commonly HPV 16) [ 43 ] is associated with an increasing number of head and neck cancers . [ 71 ] [ 55 ] [ 72 ] [ 73 ] This association is independent of tobacco and alcohol use. [ 73 ] [ 74 ] [ 75 ]
The percentage of head and neck cancers attributable to HPV varies widely by region, from 70% in the United States [ 76 ] to 4% in Brazil. [ 77 ] Engaging in anal or oral sex with an HPV-infected partner may increase the risk of developing these types of cancers. [ 72 ]
In the United States, the number of newly diagnosed, HPV-associated head and neck cancers has surpassed that of cervical cancer cases. [ 71 ] The rate of such cancers has increased from an estimated 0.8 cases per 100,000 people in 1988 [ 78 ] to 4.5 per 100,000 in 2012, [ 43 ] and, as of 2021, the rate has continued to increase. [ 79 ] Researchers attribute this increase to a rise in the practice of oral sex. This type of cancer is more common in men than in women. [ 80 ]
The mutational profile of HPV-positive and HPV-negative head and neck cancer has been reported, further demonstrating that they are fundamentally distinct diseases. [ 81 ]
Some evidence links HPV to benign and malignant tumors of the upper respiratory tract. The International Agency for Research on Cancer has found that people with lung cancer were significantly more likely than those without lung cancer to carry antibodies against several high-risk forms of HPV. [ 82 ] Researchers looking for HPV among 1,633 lung cancer patients and 2,729 people without the lung disease found that people with lung cancer had more types of HPV than noncancer patients did, and among lung cancer patients, the chances of having eight types of serious HPV were significantly increased. [ 83 ] In addition, expression of HPV structural proteins by immunohistochemistry and in vitro studies suggest HPV presence in bronchial cancer and its precursor lesions. [ 84 ] Another study detected HPV in the exhaled breath condensate (EBC), bronchial brushings and neoplastic lung tissue of cases, finding HPV infection in 16.4% of subjects affected by non-small cell lung cancer, but in none of the controls. [ 85 ] The reported average frequencies of HPV in lung cancers were 17% and 15% in Europe and the Americas, respectively, while the mean frequency in Asian lung cancer samples was 35.7%, with considerable heterogeneity between certain countries and regions. [ 86 ]
In very rare cases, HPV may cause epidermodysplasia verruciformis (EV) in individuals with a weakened immune system . The virus, unchecked by the immune system, causes the overproduction of keratin by skin cells , resulting in lesions resembling warts or cutaneous horns which can ultimately transform into skin cancer , but the development is not well understood. [ 87 ] [ 88 ] The specific types of HPV that are associated with EV are HPV5, HPV8, and HPV14. [ 88 ]
Sexually transmitted HPV is divided into two categories: low-risk and high-risk. Low-risk HPVs cause warts on or around the genitals. Types 6 and 11 cause 90% of all genital warts, as well as recurrent respiratory papillomatosis, which causes benign tumors in the air passages. High-risk HPVs cause cancer and consist of about twelve identified types. [ 11 ] Types 16 and 18 are responsible for most HPV-caused cancers. These high-risk HPVs cause 5% of the cancers in the world. In the United States, high-risk HPVs cause 3% of all cancer cases in women and 2% in men. [ 89 ]
Risk factors for persistent genital HPV infections, which increase the risk of developing cancer, include early age of first sexual intercourse, multiple partners, smoking, and immunosuppression. [ 1 ] Genital HPV is spread by sustained direct skin-to-skin contact, with vaginal, anal, and oral sex being the most common methods. [ 4 ] [ 22 ] Occasionally, it can spread from manual sex or from a mother to her baby during pregnancy . [ 90 ] [ 91 ] HPV is difficult to remove via standard hospital disinfection techniques and may be transmitted in a healthcare setting on re-usable gynecological equipment, such as vaginal ultrasound transducers. The period of communicability is still unknown, but probably at least as long as visible HPV lesions persist. HPV may still be transmitted even after lesions are treated and no longer visible or present. [ 92 ]
Although genital HPV types can be transmitted from mother to child during birth, the appearance of genital HPV-related diseases in newborns is rare. However, the lack of appearance does not rule out asymptomatic latent infection, as the virus has proven to be capable of hiding for decades. Perinatal transmission of HPV types 6 and 11 can result in the development of juvenile-onset recurrent respiratory papillomatosis (JORRP). JORRP is very rare, with rates of about 2 cases per 100,000 children in the United States. [ 40 ] Although JORRP rates are substantially higher if a woman presents with genital warts at the time of giving birth, the risk of JORRP in such cases is still less than 1%. [ citation needed ]
Genital HPV infections are transmitted primarily by contact with the genitals, anus, or mouth of an infected sexual partner. [ 93 ]
Of the 120 known human papillomaviruses, 51 species and three subtypes infect the genital mucosa. [ 94 ] Fifteen are classified as high-risk types (16, 18, 31, 33, 35, 39, 45, 51, 52, 56, 58, 59, 68, 73, and 82), three as probable high-risk (26, 53, and 66), and twelve as low-risk (6, 11, 40, 42, 43, 44, 54, 61, 70, 72, 81, and 89). [ 21 ]
Condoms do not completely protect from the virus because the areas around the genitals including the inner thigh area are not covered, thus exposing these areas to the infected person's skin. [ 95 ]
Studies have shown HPV transmission between the hands and genitals of the same person and of sexual partners. Hernandez tested the genitals and dominant hand of each person in 25 heterosexual couples every other month for an average of seven months. She found two couples in which the man's genitals infected the woman's hand with high-risk HPV, two in which her hand infected his genitals, one in which her genitals infected his hand, and two each in which he infected his own hand and she infected her own hand. [ 96 ] Hands were not the main source of transmission in these 25 couples, but they were significant. [ citation needed ]
Partridge reports that men's fingertips became positive for high-risk HPV at more than half the rate of their genitals (26% versus 48% over two years). [ 97 ] Winer reports that 14% of fingertip samples from sexually active women were positive. [ 98 ]
Non-sexual hand contact seems to have little or no role in HPV transmission. Winer found all fourteen fingertip samples from virgin women negative at the start of her fingertip study. [ 98 ] In a separate report on genital HPV infection, 1% of virgin women (1 of 76) with no sexual contact tested positive for HPV, while 10% of virgin women reporting non-penetrative sexual contact were positive (7 of 72). [ 99 ]
Sharing of possibly contaminated objects, for example, razors, [ 92 ] may transmit HPV. [ 100 ] [ 101 ] [ 102 ] Although possible, transmission by routes other than sexual intercourse is less common for female genital HPV infection. [ 93 ] Fingers-genital contact is a possible way of transmission but unlikely to be a significant source. [ 98 ] [ 103 ]
Though it has traditionally been assumed that HPV is not transmissible via blood – as it is thought to only infect cutaneous and mucosal tissues – recent studies have called this notion into question. Historically, HPV DNA has been detected in the blood of cervical cancer patients. [ 104 ] In 2005, a group reported that, in frozen blood samples of 57 sexually naive pediatric patients who had vertical or transfusion-acquired HIV infection, 8 (14.0%) of these samples also tested positive for HPV-16. [ 105 ] This seems to indicate that it may be possible for HPV to be transmitted via blood transfusion . However, as non-sexual transmission of HPV by other means is not uncommon, this could not be definitively proven. In 2009, a group tested Australian Red Cross blood samples from 180 healthy male donors for HPV, and subsequently found DNA of one or more strains of the virus in 15 (8.3%) of the samples. [ 106 ] However, it is important to note that detecting the presence of HPV DNA in blood is not the same as detecting the virus itself in blood, and whether or not the virus itself can or does reside in blood in infected individuals is still unknown. As such, it remains to be determined whether HPV can or cannot be transmitted via blood. [ 104 ] This is of concern, as blood donations are not currently screened for HPV, and at least some organizations such as the American Red Cross and other Red Cross societies do not presently appear to disallow HPV-positive individuals from donating blood. [ 107 ]
Hospital transmission of HPV, especially to surgical staff, has been documented. Surgeons, including urologists, and anyone else in the room are subject to HPV infection by inhalation of aerosolized viral particles during electrocautery or laser ablation of a condyloma (wart). [ 108 ] There has been a case report of a laser surgeon who developed extensive laryngeal papillomatosis after performing laser ablation on patients with anogenital condylomata. [ 108 ]
HPV infection is limited to the basal cells of stratified epithelium , the only tissue in which the virus replicates. [ 110 ] The virus cannot bind to live tissue; instead, it infects epithelial tissues through micro-abrasions or other epithelial trauma that exposes segments of the basement membrane . [ 110 ] The infectious process is slow, taking 12–24 hours for initiation of transcription. It is believed that antibodies play a major neutralizing role while the virions still reside on the basement membrane and cell surfaces. [ 110 ]
HPV lesions are thought to arise from the proliferation of infected basal keratinocytes . Infection typically occurs when basal cells in the host are exposed to the infectious virus through a disturbed epithelial barrier as would occur during sexual intercourse or after minor skin abrasions. HPV infections have not been shown to be cytolytic ; rather, viral particles are released as a result of degeneration of desquamating cells. HPV can survive for many months and at low temperatures without a host; therefore, an individual with plantar warts can spread the virus by walking barefoot. [ 38 ]
HPV is a small double-stranded circular DNA virus with a genome of approximately 8000 base pairs. [ 22 ] [ 111 ] The HPV life cycle strictly follows the differentiation program of the host keratinocyte. It is thought that the HPV virion infects epithelial tissues through micro-abrasions, whereby the virion associates with putative receptors such as alpha integrins , laminins , and annexin A2 [ 112 ] leading to the entry of the virions into basal epithelial cells through clathrin - mediated endocytosis and/or caveolin -mediated endocytosis depending on the type of HPV. [ 113 ] At this point, the viral genome is transported to the nucleus by unknown mechanisms and establishes itself at a copy number of 10-200 viral genomes per cell. A sophisticated transcriptional cascade then occurs as the host keratinocyte begins to divide and become increasingly differentiated in the upper layers of the epithelium. [ citation needed ]
The phylogeny of the various strains of HPV generally reflects the migration patterns of Homo sapiens and suggests that HPV may have diversified along with the human population. Studies suggest that HPV evolved along five major branches that reflect the ethnicity of human hosts, and diversified along with the human population. [ 114 ]
Researchers initially identified two major variants of HPV16, European (HPV16-E) and Non-European (HPV16-NE). [ 115 ] More recent analyses based on thousands of HPV16 genomes confirm that two major clades exist, which are further subdivided into four lineages (designated A–D) and then into 16 sublineages (A1–4, B1–4, C1–4 and D1–4). [ 116 ] [ 117 ] The A1–A3 sublineages constitute the European variant, A4 the Asian variant, B1–B4 the African type I variant, C1–C4 the African type II variant, D1 the North American variant, D2 the Asian American type I variant, and D3 the Asian American type II variant. [ 116 ] The various lineages and sublineages differ in oncogenic capacity; overall, the non-European lineages are considered to carry a higher risk for cancer. [ 118 ] Although HPV16 is a DNA virus, there are signs of recombination among the different lineages. [ 117 ] [ 119 ] Based on an analysis of more than 3600 genomes, between 0.3 and 1.2% of them could be recombinant. [ 117 ] Thus, ideally, genotyping (for cancer-risk assessment) of HPV16 should be based not only on certain genes, but on all genes from the entire genome. [ 117 ]
A bioinformatics tool named HPV16-Genotyper performs HPV16 lineage genotyping, detects potential recombination events, and identifies, within the submitted sequences, mutations/SNPs that have been reported in the literature to increase the risk for cancer. [ 117 ]
The two primary oncoproteins of high-risk HPV types are E6 and E7. The "E" designation indicates that these two proteins are early proteins (expressed early in the HPV life cycle), while the "L" designation indicates late proteins (late expression). [ 55 ] The HPV genome is composed of six early (E1, E2, E4, E5, E6, and E7) open reading frames (ORFs), two late (L1 and L2) ORFs, and a non-coding long control region (LCR). [ 121 ] After the host cell is infected, the viral early promoter is activated and a polycistronic primary RNA containing all six early ORFs is transcribed. This polycistronic RNA then undergoes active RNA splicing to generate multiple isoforms of mRNAs . [ 122 ] One of the spliced isoform RNAs, E6*I, serves as an E7 mRNA to translate E7 protein. [ 123 ] However, viral early transcription is subject to regulation by viral E2, and high E2 levels repress the transcription. HPV genomes integrate into the host genome by disruption of the E2 ORF, preventing E2 repression of E6 and E7. Thus, viral genome integration into the host DNA genome increases E6 and E7 expression, promoting cellular proliferation and the chance of malignancy. The degree to which E6 and E7 are expressed is correlated with the type of cervical lesion that can ultimately develop. [ 111 ]
Sometimes papillomavirus genomes are found integrated into the host genome, and this is especially noticeable with oncogenic HPVs. [ 124 ] The E6/E7 proteins inactivate two tumor suppressor proteins, p53 (inactivated by E6) and pRb (inactivated by E7). [ 125 ] The viral oncogenes E6 and E7 [ 126 ] are thought to modify the cell cycle so as to retain the differentiating host keratinocyte in a state that is favourable to the amplification of viral genome replication and consequent late gene expression. E6, in association with the host E6-associated protein, which has ubiquitin ligase activity, acts to ubiquitinate p53, leading to its proteasomal degradation. E7 (in oncogenic HPVs) acts as the primary transforming protein. E7 competes for retinoblastoma protein (pRb) binding, freeing the transcription factor E2F to transactivate its targets, thus pushing the cell cycle forward. All HPV can induce transient proliferation, but only strains 16 and 18 can immortalize cell lines in vitro . It has also been shown that HPV 16 and 18 cannot immortalize primary rat cells alone; activation of the ras oncogene is also required. In the upper layers of the host epithelium, the late genes L1 and L2 are transcribed/translated and serve as structural proteins that encapsidate the amplified viral genomes. Once the genome is encapsidated, the capsid appears to undergo a redox-dependent assembly/maturation event, which is tied to a natural redox gradient that spans both suprabasal and cornified epithelial tissue layers. This assembly/maturation event stabilizes virions and increases their specific infectivity. [ 127 ] Virions can then be sloughed off in the dead squames of the host epithelium and the viral lifecycle continues. [ 128 ] A 2010 study found that E6 and E7 are involved in beta-catenin nuclear accumulation and activation of Wnt signaling in HPV-induced cancers. [ 129 ]
Once an HPV virion invades a cell, an active infection occurs, and the virus can be transmitted. Several months to years may elapse before squamous intraepithelial lesions (SIL) develop and can be clinically detected. The time from active infection to clinically detectable disease may make it difficult for epidemiologists to establish which partner was the source of infection. [ 108 ]
Most HPV infections are cleared without medical intervention or consequences. The table provides data for high-risk types (i.e. the types found in cancers). [ citation needed ]
Clearing an infection does not always create immunity if there is a new or continuing source of infection. Hernandez' 2005-6 study of 25 couples reports "A number of instances indicated apparent reinfection [from partner] after viral clearance." [ 96 ]
Over 200 types of HPV have been identified, and they are designated by numbers. [ 11 ] [ 8 ] [ 125 ] They may be divided into "low-risk" and "high-risk" types. Low-risk types cause warts and high-risk types can cause lesions or cancer. [ 132 ] [ 133 ]
Guidelines from the American Cancer Society recommend different screening strategies for cervical cancer based on a woman's age, screening history, risk factors, and choice of tests. [ 134 ] Because of the link between HPV and cervical cancer, the ACS currently recommends early detection of cervical cancer in average-risk asymptomatic adults primarily with cervical cytology by Pap smear, regardless of HPV vaccination status. Women aged 30–65 should preferably be tested every 5 years with both the HPV test and the Pap test. In other age groups, a Pap test alone can suffice unless they have been diagnosed with atypical squamous cells of undetermined significance (ASC-US). [ 135 ] Co-testing with a Pap test and HPV test is recommended because it decreases the rate of false-negatives. According to the National Cancer Institute, "The most common test detects DNA from several high-risk HPV types, but it cannot identify the types that are present. Another test is specific for DNA from HPV types 16 and 18, the two types that cause most HPV-associated cancers. A third test can detect DNA from several high-risk HPV types and can indicate whether HPV-16 or HPV-18 is present. A fourth test detects RNA from the most common high-risk HPV types. These tests can detect HPV infections before cell abnormalities are evident. [ citation needed ]
"Theoretically, the HPV DNA and RNA tests could be used to identify HPV infections in cells taken from any part of the body. However, the tests are approved by the FDA for only two indications: for follow-up testing of women who seem to have abnormal Pap test results and for cervical cancer screening in combination with a Pap test among women over age 30." [ 136 ]
Guidelines for oropharyngeal cancer screening by the Preventive Services Task Force and American Dental Association in the U.S. suggest conventional visual examination, but because some parts of the oropharynx are hard to see, this cancer is often only detected in later stages. [ 67 ]
The diagnosis of oropharyngeal cancer occurs by biopsy of exfoliated cells or tissues. The National Comprehensive Cancer Network and College of American Pathologists recommend testing for HPV in oropharyngeal cancer. [ 67 ] However, while testing is recommended, there is no specific type of test used to detect HPV from oral tumors that is currently recommended by the FDA in the United States. Because HPV type 16 is the most common type found in oropharyngeal cancer, p16 immunohistochemistry is one test option used to determine if HPV is present, [ 137 ] which can help determine the course of treatment, since tumors that are positive for p16 have better outcomes. Another option that has emerged as reliable is HPV DNA in situ hybridization (ISH), which allows for visualization of the HPV. [ 67 ]
There is not a wide range of tests available even though HPV is common; most studies of HPV have used tools and custom analyses not available to the general public. [ 138 ] [ needs update ] Clinicians often rely on vaccination among young people and on high clearance rates (see Clearance subsection in Virology ) to keep the risk of disease and mortality low, and treat cancers when they appear. Others believe that reducing HPV infection in more men and women, even when it has no symptoms, is important (herd immunity) to prevent more cancers rather than just treating them. [ 139 ] [ 140 ] [ needs update ] Where tests are used, negative test results show safety from transmission, and positive test results show where shielding (condoms, gloves) is needed to prevent transmission until the infection clears. [ 141 ]
Studies have tested for and found HPV in men, including high-risk types (i.e. the types found in cancers), on fingers, mouth, saliva, anus, urethra, urine, semen, blood, scrotum and penis. [ 138 ]
The aforementioned Qiagen/Digene kit was successfully used off-label to test the penis, scrotum, and anus [ 142 ] of men in long-term relationships with women who were positive for high-risk HPV. Of these men, 60% were found to carry the virus, primarily on the penis. [ 142 ] [ needs update ] Similar studies have been conducted on women using cytobrushes - an endocervical brush for sampling the cervix in females - and custom analysis. [ 143 ] [ 144 ] [ needs update ]
In one study researchers sampled subjects' urethra, scrotum, and penis. [ 143 ] [ 144 ] [ needs update ] Samples taken from the urethra added less than 1% to the HPV rate. Studies like this led Giuliano to recommend sampling the glans, shaft, and crease between them, along with the scrotum, since sampling the urethra or anus added very little to the diagnosis. [ 97 ] Dunne recommends the glans, shaft, their crease, and the foreskin. [ 138 ]
In one study the subjects were asked not to wash their genitals for 12 hours before sampling, including the urethra as well as the scrotum and the penis. [ 143 ] Other studies are silent on washing – a particular gap in studies of the hands. [ citation needed ]
One small study used wet cytobrushes, rather than wetting the skin. [ 144 ] It found a higher proportion of men to be HPV-positive when the skin was rubbed with 600-grit emery paper before being swabbed with the brush, rather than swabbed with no preparation. It is unclear whether the emery paper collected the virions or simply loosened them for the swab to collect. [ citation needed ]
Studies have found self-collection (with emery paper and Dacron swabs) as effective as collection done by a clinician, and sometimes more so, since patients were more willing than a clinician to scrape vigorously. [ 145 ] [ needs update ] [ 146 ] Women had similar success in self-sampling using tampons, swabs, cytobrushes, and lavage. [ 147 ] [ needs update ]
Several studies used cytobrushes to sample fingertips and under fingernails, without wetting the area or the brush. [ 98 ] [ 103 ] [ 148 ] [ needs update ]
Other studies analyzed urine, semen, and blood and found varying amounts of HPV, [ 138 ] but there is not a publicly available test for those yet.
Although it is possible to test for HPV DNA in other kinds of infections, [ 138 ] there are no FDA-approved tests for general screening in the United States [ 149 ] or tests approved by the Canadian government, [ 150 ] since the testing is inconclusive and considered medically unnecessary. [ 151 ]
Genital warts are the only visible sign of low-risk genital HPV and can be identified with a visual check. These visible growths, however, are the result of non-carcinogenic HPV types. Five percent acetic acid (vinegar) is used to identify both warts and squamous intraepithelial neoplasia (SIL) lesions with limited success [ citation needed ] by causing abnormal tissue to appear white, but most doctors have found this technique helpful only in moist areas, such as the female genital tract. [ citation needed ] At this time, HPV tests for males are used only in research. [ citation needed ]
Research into testing for HPV by antibody presence has been done. The approach looks for an immune response in blood, which would contain antibodies to HPV if the patient is HPV-positive. [ 152 ] [ 153 ] [ 154 ] [ 155 ] The reliability of such tests has not been proven, as no FDA-approved product existed as of August 2018; [ 156 ] blood testing would be a less invasive option for screening purposes.
The HPV vaccines can prevent the most common types of infection. [ 4 ] Cervical cancer screening , such as with the Papanicolaou test (pap) or looking at the cervix after using acetic acid , can detect early cancer or abnormal cells that may develop into cancer. [ 1 ] Screening has reduced both the number and deaths from cervical cancer in the developed world. [ 18 ] Warts can be removed by freezing . [ 5 ]
Three vaccines are available to prevent infection by some HPV types: Gardasil , Gardasil 9 and Cervarix ; all three protect against initial infection with HPV types 16 and 18, which cause most of the HPV-associated cancer cases. Gardasil also protects against HPV types 6 and 11, which cause 90% of genital warts. Gardasil is a recombinant quadrivalent vaccine, whereas Cervarix is bivalent, and is prepared from virus-like particles (VLPs) of the L1 capsid protein . Gardasil 9 is nonavalent and has the potential to prevent about 90% of cervical, vulvar, vaginal, and anal cancers. It protects against HPV types 6, 11, 16, 18, 31, 33, 45, 52, and 58; the latter five cause up to 20% of cervical cancers that were not covered by the earlier vaccines. [ 157 ]
The vaccines provide little benefit to women already infected with HPV types 16 and 18. [ 158 ] For this reason, the vaccine is recommended primarily for women who have not yet been exposed to HPV through sex. The World Health Organization position paper on HPV vaccination clearly outlines appropriate, cost-effective strategies for using HPV vaccine in public sector programs. [ 159 ]
There is high-certainty evidence that HPV vaccines protect against precancerous cervical lesions in young women, particularly those vaccinated aged 15 to 26. [ 160 ] HPV vaccines do not increase the risk of serious adverse events. [ 160 ] Longer follow-up is needed to monitor the impact of HPV vaccines on cervical cancer. [ 160 ]
The CDC recommends the vaccines be delivered in two shots at an interval of at least 6 months for those aged 11–12, and three doses for those 13 and older. [ 161 ] In most countries, they are funded only for female use, but are approved for male use in many countries, and funded for teenage boys in Australia. The vaccine does not have any therapeutic effect on existing HPV infections or cervical lesions. [ 162 ] In 2010, 49% of teenage girls in the US received the HPV vaccine. [ citation needed ]
Following studies suggesting that the vaccine is more effective in younger girls [ 163 ] than in older teenagers, the United Kingdom, Switzerland, Mexico, the Netherlands, and Quebec began offering the vaccine in a two-dose schedule for girls aged under 15 in 2014. [ citation needed ]
Cervical cancer screening recommendations have not changed for females who receive the HPV vaccine. It remains a recommendation that women continue cervical screening, such as Pap smear testing, even after receiving the vaccine, since it does not prevent all types of cervical cancer. [ 162 ] [ 164 ]
Both men and women are carriers of HPV. [ 165 ] The Gardasil vaccine also protects men against anal cancer and genital warts. [ 166 ]
Duration of both vaccines' efficacy has been observed since they were first developed, and is expected to be long-lasting. [ 167 ]
In December 2014, the FDA approved a nine-valent Gardasil-based vaccine, Gardasil 9, to protect against infection with the four strains of HPV covered by the first generation of Gardasil as well as five other strains responsible for 20% of cervical cancers (HPV-31, HPV-33, HPV-45, HPV-52, and HPV-58). [ 168 ]
The Centers for Disease Control and Prevention says that male " condom use may reduce the risk for genital human papillomavirus (HPV) infection" but provides a lesser degree of protection compared with other sexually transmitted infections "because HPV also may be transmitted by exposure to areas (e.g., infected skin or mucosal surfaces) that are not covered or protected by the condom." [ 169 ]
The virus is unusually hardy and is immune to most common disinfectants. It is the first virus ever shown to be resistant to inactivation by glutaraldehyde , which is among the most common strong disinfectants used in hospitals. [ 170 ] Diluted sodium hypochlorite bleach is effective, [ 170 ] but cannot be used on some types of re-usable equipment, such as ultrasound transducers. [ 90 ] As a result of these difficulties, there is developing concern about the possibility of transmitting the virus on healthcare equipment, particularly reusable gynecological equipment that cannot be autoclaved . [ 171 ] [ 172 ] For such equipment, some health authorities encourage use of UV disinfection [ 173 ] or a non-hypochlorite "oxidizing‐based high‐level disinfectant [bleach] with label claims for non‐enveloped viruses", [ 174 ] such as a strong hydrogen peroxide solution [ 175 ] [ 173 ] or chlorine dioxide wipes. [ 173 ] Such disinfection methods are expected to be relatively effective against HPV. [ citation needed ]
There is currently no specific treatment for HPV infection. [ 176 ] [ 177 ] [ 178 ] However, the viral infection is usually cleared to undetectable levels by the immune system. [ 179 ] According to the Centers for Disease Control and Prevention , the body's immune system clears HPV naturally within two years for 90% of cases (see Clearance subsection in Virology for more detail). [ 176 ] However, experts do not agree on whether the virus is eliminated or reduced to undetectable levels, and it is difficult to know when it is contagious. [ 180 ] [ needs update ]
Follow-up care is usually recommended and practiced by many health clinics. [ 181 ] Follow-up is sometimes not successful because a portion of those treated do not return to be evaluated. In addition to the normal methods of phone calls and mail, text messaging and email can improve the number of people who return for care. [ 182 ] As of 2015, the best method of follow-up after treatment of cervical intraepithelial neoplasia was unclear. [ 183 ]
Globally, 12% of women are positive for HPV DNA, with rates varying by age and country. [ 184 ] The highest rates of HPV are in younger women, with a rate of 24% in women under 25 years. [ 185 ] Rates decline in older age groups in Europe and the Americas, but less so in Africa and Asia. The rates are highest in Sub-Saharan Africa (24%) and Eastern Europe (21%) and lowest in North America (5%) and Western Asia (2%). [ 184 ]
The most common types of HPV worldwide are HPV16 (3.2%), HPV18 (1.4%), HPV52 (0.9%), HPV31 (0.8%), and HPV58 (0.7%). High-risk types of HPV are also distributed unevenly, with HPV16 having a rate of around 13% in Africa and 30% in West and Central Asia. [ 185 ]
Like many diseases, HPV disproportionately affects low-income and resource-poor countries. The higher rates of HPV in Sub-Saharan Africa, for example, may be related to high exposure to human immunodeficiency virus (HIV) in the region. Other factors that impact the global spread of the disease are sexual behavior - including age of sexual debut and number of sexual partners - and ease of access to barrier contraception, all of which vary globally. [ 184 ] [ 186 ]
The papilloma virus is not only widespread among women, but is also behind most cases of oropharyngeal cancer , which is the fastest growing cancer among young adults in Western countries. [ 187 ] Moreover, as of 2025, papilloma virus is the most prevalent sexually transmitted infection in the world. [ 187 ]
HPV is estimated to be the most common sexually transmitted infection in the United States. [ 188 ] Most sexually active men and women will probably acquire genital HPV infection at some point in their lives. [ 24 ] The American Social Health Association estimates that about 75–80% of sexually active Americans will be infected with HPV at some point in their lifetime. [ 189 ] [ 190 ] By the age of 50, more than 80% of American women will have contracted at least one strain of genital HPV. [ 188 ] [ 191 ] It was estimated that, in the year 2000, there were approximately 6.2 million new HPV infections among Americans aged 15–44; of these, an estimated 74% occurred in people between the ages of 15 and 24. [ 192 ] Of the STIs studied, genital HPV was the most commonly acquired. [ 192 ] In the United States, it is estimated that 10% of the population has an active HPV infection, 4% has an infection that has caused cytological abnormalities, and an additional 1% has an infection causing genital warts. [ 193 ]
Estimates of HPV prevalence vary from 14% to more than 90%. [ 194 ] One reason for the difference is that some studies report women who currently have a detectable infection, while other studies report women who have ever had a detectable infection. [ 195 ] [ 196 ] Another cause of discrepancy is the difference in strains that were tested for. [ citation needed ]
One study found that, during 2003–2004, at any given time , 26.8% of women aged 14 to 59 were infected with at least one type of HPV. This was higher than previous estimates; 15.2% were infected with one or more of the high-risk types that can cause cancer. [ 188 ] [ 197 ]
The prevalence for high-risk and low-risk types is roughly similar over time. [ 188 ]
Human papillomavirus is not included among the diseases that are typically reportable to the CDC as of 2011. [ 198 ] [ 199 ]
On average, 538 cases of HPV-associated cancers were diagnosed per year in Ireland during the period 2010 to 2014. [ 200 ] Cervical cancer was the most frequent HPV-associated cancer, with on average 292 cases per year (74% of the female total, and 54% of the overall total of HPV-associated cancers). [ 200 ] A study of 996 cervical cytology samples in an Irish urban female, opportunistically screened population found an overall HPV prevalence of 19.8%; HPV 16 (at 20%) and HPV 18 (at 12%) were the commonest high-risk types detected. In Europe, types 16 and 18 are responsible for over 70% of cervical cancers. [ 201 ] Overall rates of HPV-associated invasive cancers may be increasing. Between 1994 and 2014, there was a 2% increase per year in the rate of HPV-associated invasive cancers for both sexes in Ireland. [ 200 ]
As HPV is known to be associated with anogenital warts, these are notifiable to the Health Protection Surveillance Centre (HPSC). Genital warts are the second most common STI in Ireland. [ 202 ] There were 1,281 cases of anogenital warts notified in 2017, a decrease from the 2016 figure of 1,593. [ 203 ] The highest age-specific rate for both males and females was in the 25–29 year age range; 53% of cases were among males. [ 203 ]
In Sri Lanka, the prevalence of HPV is 15.5% regardless of cytological abnormalities. [ 204 ]
In the Autonomous Region of Inner Mongolia, overall HPV prevalence is 14.5% but shows substantial ethnic disparity, the prevalence in Mongolian women (14.9%) being much higher than that of Han participants (4.3%). [ 205 ] Urbanization, the number of sex partners, and Pap smear history appear as risk factors for HPV infection in Han, but not in Mongolian, women. The region is thus an important example that the epidemiology of HPV is related more to cultural and ethnic factors than to geography per se. [ citation needed ]
One of the first studies linking the risk of uterine carcinoma to sexual activity was performed in 1842 in Verona. Dr. Domenico Rigoni-Stern observed that the incidence of uterine cancer among Catholic nuns living in convents in the countryside was lower than in women living in the city. The highest incidence was seen in prostitutes, linking uterine cancer prevalence to the number of sexual partners and suggesting that this disease might have a sexually transmissible component. [ 206 ]
In 1972, the association of the human papillomaviruses with skin cancer in epidermodysplasia verruciformis was proposed by Stefania Jabłońska in Poland. In 1976 Harald zur Hausen published the hypothesis that human papillomavirus plays an important role in the cause of cervical cancer . In 1978, Jabłońska and Gerard Orth at the Pasteur Institute discovered HPV-5 in skin cancer . [ 207 ] In 1983 and 1984 zur Hausen and his collaborators identified HPV16 and HPV18 in cervical cancer. [ 208 ]
The HeLa cell line contains extra DNA in its genome that originated from HPV type 18. [ 209 ]
The Ludwig-McGill HPV Cohort is one of the world's largest longitudinal studies of the natural history of human papillomavirus (HPV) infection and cervical cancer risk. It was established in 1993 by Ludwig Cancer Research and McGill University in Montreal, Canada. [ 210 ] | https://en.wikipedia.org/wiki/Human_papillomavirus_infection |
A human pathogen is a pathogen ( microbe or microorganism such as a virus , bacterium , prion , or fungus ) that causes disease in humans .
The human physiological defense against common pathogens (such as Pneumocystis ) is mainly the responsibility of the immune system , with the help of some of the body's normal microbiota . However, if the immune system or "good" microbiota are damaged in any way (such as by chemotherapy , human immunodeficiency virus (HIV), or antibiotics being taken to kill other pathogens), pathogenic bacteria that were being held at bay can proliferate and cause harm to the host. Such cases are called opportunistic infections .
Some pathogens (such as the bacterium Yersinia pestis , which may have caused the Black Death ; the variola virus; and the malaria protozoa) have been responsible for massive numbers of casualties and have had numerous effects on affected groups. Of particular note in modern times is HIV, which is known to have infected several million humans globally, along with the influenza virus. Today, while many medical advances have been made to safeguard against infection by pathogens, through the use of vaccination , antibiotics , and fungicides , pathogens continue to threaten human life. Social advances such as food safety , hygiene , and water treatment have reduced the threat from some pathogens.
Pathogenic viruses are mainly those of the families of: Adenoviridae , Picornaviridae , Herpesviridae , Hepadnaviridae , Coronaviridae , Flaviviridae , Retroviridae , Orthomyxoviridae , Paramyxoviridae , Papovaviridae , Polyomavirus , Poxviridae , Rhabdoviridae , and Togaviridae . Some notable pathogenic viruses cause smallpox , influenza , mumps , measles , chickenpox , ebola , and rubella. Viruses typically range between 20 and 300 nanometers in length. [ 1 ]
This type of pathogen is not cellular; it is composed of either RNA ( ribonucleic acid ) or DNA ( deoxyribonucleic acid ) within a protein shell, the capsid . Pathogenic viruses infiltrate host cells and manipulate organelles within the cell, such as the ribosomes , Golgi apparatus , and endoplasmic reticulum , in order to multiply, which commonly results in the death of the host cell via cellular decay. The new viruses contained within the lipid bilayer of the cell membrane are then released into the intercellular matrix to infect neighboring cells and continue the viral life cycle .
White blood cells surround and consume the virus using a mechanism known as phagocytosis [ 2 ] (a type of endocytosis ) [ 3 ] within the extracellular matrix to reduce and fight the infection. The components within the white blood cell are responsible for destroying the virus and recycling its components for the body to use. [ 4 ]
Although the vast majority of bacteria are harmless or beneficial to one's body, a few pathogenic bacteria can cause infectious diseases . The most common bacterial disease is tuberculosis , caused by the bacterium Mycobacterium tuberculosis , which affects about 2 million people mostly in sub-Saharan Africa. Pathogenic bacteria contribute to other globally important diseases, such as pneumonia , which can be caused by bacteria such as Streptococcus and Pseudomonas , and foodborne illnesses , which can be caused by bacteria such as Shigella , Campylobacter , and Salmonella . Pathogenic bacteria also cause infections such as tetanus , typhoid fever , diphtheria , syphilis , and Hansen's disease . They typically range between 1 and 5 micrometers in length. [ citation needed ]
Fungi are a eukaryotic kingdom of microbes that are usually saprophytes , but can cause diseases in humans. Life-threatening fungal infections in humans most often occur in immunocompromised patients or vulnerable people with a weakened immune system, although fungi are common problems in the immunocompetent population as the causative agents of skin, nail, or yeast infections. Most antibiotics that function on bacterial pathogens cannot be used to treat fungal infections because fungi and their hosts both have eukaryotic cells. Most clinical fungicides belong to the azole group . The typical fungal spore size is 1–40 micrometers in length. [ 5 ]
Protozoans are single-celled eukaryotes that feed on microorganisms and organic tissues. They are often considered "one-celled animals", as they have animal-like behaviors such as motility and predation and lack a cell wall. Many protozoan pathogens are considered human parasites, as they cause a variety of diseases such as: malaria , amoebiasis , babesiosis , giardiasis , toxoplasmosis , cryptosporidiosis , trichomoniasis , Chagas disease , leishmaniasis , African trypanosomiasis (sleeping sickness), Acanthamoeba keratitis , and primary amoebic meningoencephalitis (naegleriasis).
Parasitic worms ( helminths ) are macroparasites that can be seen with the naked eye. Worms live and feed in their living host, receiving nourishment and shelter while affecting the host's digestion of nutrients. They also manipulate the host's immune system by secreting immunomodulatory products, [ 6 ] which allows them to live in their host for years. Many parasitic worms are intestinal, soil-transmitted, and infect the digestive tract; other parasitic worms are found in the host's blood vessels. Parasitic worms living in the host can cause weakness and disease in both humans and animals. Helminthiasis (worm infection), ascariasis , and enterobiasis (pinworm infection) are a few of the diseases caused by various parasitic worms. [ citation needed ]
Prions are misfolded proteins that are transmissible and can induce abnormal folding of normal proteins in the brain. They do not contain any DNA or RNA and cannot replicate other than by converting already existing normal proteins to the misfolded state. These abnormally folded proteins are found characteristically in many neurodegenerative diseases, as they aggregate in the central nervous system and create plaques that damage the tissue structure, essentially creating "holes" in the tissue. Prions have been found to be transmitted in three ways: acquired, familial, and sporadic. It has also been found that plants can act as vectors for prions. Eight prion diseases affect mammals, such as scrapie , bovine spongiform encephalopathy (mad cow disease), and feline spongiform encephalopathy (FSE) ; ten affect humans, such as Creutzfeldt–Jakob disease (CJD) [ 7 ] and fatal familial insomnia (FFI).
Animal pathogens are disease-causing agents of wild and domestic animal species, at times including humans. [ 8 ]
Virulence (the tendency of a pathogen to cause damage to a host's fitness) evolves when that pathogen can spread from a diseased host, despite that host being very debilitated. An example is the malaria parasite, which can spread from a person near death, by hitching a ride to a healthy person on a mosquito that has bitten the diseased person. This is called horizontal transmission in contrast to vertical transmission , which tends to evolve symbiosis (after a period of high morbidity and mortality in the population) by linking the pathogen's evolutionary success to the evolutionary success of the host organism.
Evolutionary medicine has found that under horizontal transmission, the host population might never develop tolerance to the pathogen .
Transmission of pathogens occurs through many different routes, including airborne, direct or indirect contact, sexual contact, through blood, breast milk, or other body fluids, and through the fecal-oral route. One of the primary pathways by which food or water become contaminated is from the release of untreated sewage into a drinking water supply or onto cropland, with the result that people who eat or drink contaminated sources become infected. In developing countries , most sewage is discharged into the environment or on cropland; even in developed countries , some locations have periodic system failures that result in sanitary sewer overflows . [ 9 ] | https://en.wikipedia.org/wiki/Human_pathogen |
Human presence in space (also anthropogenic presence in space or humanity in space ) is the direct and mediated presence or telepresence of humans in outer space , [ 1 ] and in an extended sense across space including astronomical bodies . Human presence in space, particularly through mediation, can take many physical forms from space debris , uncrewed spacecraft , artificial satellites , space observatories , crewed spacecraft , art in space , to human outposts in outer space such as space stations .
While human presence in space, particularly its continuation and permanence, can be a goal in itself, [ 1 ] human presence can have a range of purposes [ 2 ] and modes, from space exploration and commercial use of space to extraterrestrial settlement or even space colonization and the militarisation of space . Human presence in space is realized and sustained through the advancement and application of space sciences , particularly astronautics in the form of spaceflight and space infrastructure .
Humans have achieved some mediated presence throughout the Solar System , but the most extensive presence has been in orbit around Earth . Humans reached outer space through mediation in 1944 ( MW 18014 ) and have sustained a mediated presence since 1958 ( Vanguard 1 ). [ a ] Humans reached space directly for the first time on 12 April 1961 ( Yuri Gagarin ) and have been present continuously since the year 2000 with the crewed International Space Station (ISS), or since the late 1980s, with a few interruptions, through the crewing of its predecessor, the space station Mir . [ 4 ] The increasing and extensive human presence in orbital space around Earth, besides its benefits, has also produced a threat by carrying with it space debris, potentially cascading into the so-called Kessler syndrome . [ 5 ] This has raised the need for regulation and mitigation to secure sustainable access to outer space .
Securing the access to space and human presence in space has been pursued and allowed by the establishment of space law and space industry , creating a space infrastructure . But sustainability has remained a challenging goal, with the United Nations seeing the need to advance long-term sustainability of outer space activities in space science and application, [ 6 ] and the United States having it as a crucial goal of its contemporary space policy and space program . [ 7 ] [ 8 ]
Because outer space is the dominant expanse of space , "space" is often used synonymously with outer space, so that "human presence in space" refers to human presence across all of space, including the astronomical bodies that outer space surrounds.
The United States has been using the term " human presence " to identify one of the long-term goals of its space program and its international cooperation. [ 1 ] [ 9 ] While it traditionally means and is used to name direct human presence, it is also used for mediated presence. [ 1 ] Differentiating human presence in space between direct and mediated human presence, meaning human or non-human presence, such as with crewed or uncrewed spacecraft, is rooted in a history of how human presence is to be understood (see dedicated chapter ).
Human, particularly direct, presence in space is sometimes replaced with "boots on the ground" [ 1 ] or equated with space colonization. But such terms, particularly colonization [ 9 ] and even settlement, have been avoided [ 1 ] and questioned as descriptions of human presence in space, since they employ very particular concepts of appropriation , with historical baggage, [ 10 ] [ 11 ] [ 12 ] addressing the forms of human presence in a particular rather than general way.
Alternatively some have used the term " humanization of space ", [ 13 ] [ 14 ] [ 15 ] which differs in focusing on the general development, impact and structure of human presence in space.
On an international level the United Nations uses the phrase of " outer space activity " for the activity of its member states in space. [ 6 ]
Human presence in outer space began with the first launches of artificial objects in the mid-20th century, and has increased to the point where Earth is orbited by a vast number of artificial objects and the far reaches of the Solar System have been visited and explored by a range of space probes. Human presence throughout the Solar System is continued by different contemporary and future missions, most of them mediating human presence through robotic spaceflight .
First a realized project of the Soviet Union and followed in competition by the United States , human presence in space is now an increasingly international and commercial field.
Participation and representation of humanity in space is an issue of human access to and presence in space ever since the beginning of spaceflight. [ 16 ] Different space agencies , space programs and interest groups such as the International Astronomical Union have been formed supporting or producing humanity's or a particular human presence in space. Representation has been shaped by the inclusiveness, scope and varying capabilities of these organizations and programs.
Some rights of non-spacefaring countries to partake in spaceflight have been secured through international space law , declaring space the " province of all mankind " and understanding spaceflight as its resource, though the sharing of space among all humanity is still criticized as imperialist and lacking, [ 16 ] [ 9 ] particularly regarding the regulation of private spaceflight. [ 17 ]
In addition to international inclusion, the inclusion of women , [ 18 ] people of colour, and people with disabilities has also been lacking. [ 19 ] [ 20 ] [ 21 ] To reach a more inclusive spaceflight, some organizations like the Justspace Alliance [ 16 ] and IAU featured Inclusive Astronomy [ 22 ] have been formed in recent years.
Space activity is legally based on the Outer Space Treaty , the main international treaty, though there are other international agreements, such as the significantly less ratified Moon Treaty .
The Outer Space Treaty established the basic ramifications for space activity in article one:
" The exploration and use of outer space, including the Moon and other celestial bodies, shall be carried out for the benefit and in the interests of all countries, irrespective of their degree of economic or scientific development, and shall be the province of all mankind. "
And continued in article two by stating:
" Outer space, including the Moon and other celestial bodies, is not subject to national appropriation by claim of sovereignty, by means of use or occupation, or by any other means. " [ 23 ]
The development of international space law has revolved much around outer space being defined as common heritage of mankind . The Magna Carta of Space presented by William A. Hyman in 1966 framed outer space explicitly not as terra nullius but as res communis , which subsequently influenced the work of the United Nations Committee on the Peaceful Uses of Outer Space (COPUOS). [ 16 ] [ 24 ]
The United Nations Office for Outer Space Affairs and the International Telecommunication Union are international organizations central for facilitating space regulation, such as space traffic management .
Humans have been producing a range of radiation which has reached space unintentionally as well as intentionally, well before any direct human presence in space. Human-made electromagnetic radiation, such as light, has been reaching stars as many light-years away as the radiation is old. [ 25 ]
Beginning in the 20th century, humans have been sending radiation significantly into space. Nuclear explosions , especially high-altitude ones, have at times introduced strong and broad human-made radiation into space, starting in 1958, just a year after the first satellite, Sputnik , was launched; they produce electromagnetic pulses and orbital radiation belts , adding to an explosion's destructive potential on the ground and in orbit.
While Earth's and humanity's radiation profile is the main material for space-based remote Earth observation , radiation from human activity on Earth and in space has also been an obstacle to human activities, such as spiritual life [ 26 ] [ 27 ] or astronomy, through light pollution [ 28 ] and radio-spectrum pollution from Earth and space. In the case of radio astronomy, radio quiet zones have been kept and sought out, with the far side of the Moon being the most pristine, facing away from human-made electromagnetic interference .
Space junk, as a product and form of human presence in space, has existed ever since the first orbital spaceflights and comes mostly in the form of space debris in outer space. Space debris, for example, may include the first human objects present in space beyond Earth, having reached escape velocity after being ejected purposefully from an exploded Aerobee rocket in 1957. [ 3 ] Most space debris is in orbit around Earth; it can stay there for years to centuries, at altitudes from hundreds to thousands of kilometers, before it falls to Earth. [ 29 ] Space debris is a hazard since it can hit and damage spacecraft. With debris having reached considerable amounts around Earth, policies have been put into place to prevent space debris and its hazards, such as international regulation to prevent nuclear hazards in Earth's orbit and the Registration Convention as part of space traffic management.
But space junk can also come as a result of human activity on astronomical bodies, such as the remains of space missions, like the many artificial objects left behind on the Moon [ 30 ] and on other bodies .
Human presence in space has been strongly based on the many robotic spacecraft , particularly as the many artificial satellites in orbit around Earth.
Many firsts of human presence in space have been achieved by robotic missions. The first artificial object to reach space, above the 100 km altitude Kármán line , and therefore to perform the first sub-orbital flight, was MW 18014 in 1944. But the first sustained presence in space was established by the orbital flight of Sputnik in 1957. This was followed by a great number of robotic space probes achieving human presence and exploration throughout the Solar System for the first time.
Human presence at the Moon was established by the Luna programme starting in 1959, with a first flyby and heliocentric orbit ( Luna 1 ), a first arrival of an artificial object on the surface with an impactor ( Luna 2 ), and, for the first time, a successful flight to the far side of the Moon ( Luna 3 ). The Moon was then, in 1966, visited for the first time by a lander ( Luna 9 ) as well as an orbiter ( Luna 10 ), and in 1970, for the first time, a rover ( Lunokhod 1 ) landed on an extraterrestrial body. Interplanetary presence was established at Venus by the Venera program , with a flyby in 1961 ( Venera 1 ) and a crash in 1966 ( Venera 3 ). [ 31 ] [ 32 ]
Presence in the outer Solar System was achieved by Pioneer 10 in 1972 [ 33 ] and presence in interstellar space by Voyager 1 in 2012. [ 34 ]
The 1958 Vanguard 1 is the fourth artificial satellite and the oldest spacecraft still in space and orbit around Earth, though inactive. [ 35 ]
Since the very beginning of human outer space activities in 1944, and possibly before that, [ 37 ] life has been present in space, with microscopic life as a space contaminant and, after 1960, as a space research subject . Prior to crewed spaceflight, non-human animals had been subjects of space research , specifically bioastronautics and astrobiology , being exposed to ever-higher test flights. The first animals (including humans) and plant seeds in space above the 100 km Kármán line were corn seeds and fruit flies , launched for the first time on 9 July 1946, [ 38 ] with the first fruit flies launched and returned alive in 1947. [ 39 ] In 1949 Albert II became the first mammal and first primate to reach the 100 km Kármán line, and in 1957 the dog Laika became the first animal in orbit, with the two also becoming the first fatalities of spaceflight and in space, respectively. In 1968, on Zond 5 , tortoises, insects, and plants became the first animals (including humans) and plants to fly to and return safely from the Moon, or from any extraterrestrial flight. In 2019 Chang'e 4 landed fruit flies on the Moon, the first extraterrestrial stay of non-human animals. [ 40 ]
Visits of organisms to extraterrestrial bodies have been a significant issue of planetary protection , as with the crash of tardigrades on the Moon in 2019.
Plants were first grown in space in 1966 with Kosmos 110 [ 41 ] and in 1971 on Salyut 1 , with the first seeds produced on August 4, 1982, on Salyut 7 . [ 42 ] The first plant to sprout on the Moon, and on any extraterrestrial body, grew in 2019 on the Chang'e 4 lander. [ 43 ]
Plants and growing them in space and places such as the Moon have been important subjects of space research, but also as psychological support and possibly nutrition during continuous crewed presence in space. [ 42 ]
Direct human presence in space was achieved in 1961, when Yuri Gagarin flew a space capsule for one orbit around Earth for the first time. Direct human presence in open space, by exiting a spacecraft in a spacesuit in a so-called extravehicular activity , has been achieved since Alexei Leonov became the first person to do so in 1965.
Though Valentina Tereshkova became the first woman in space in 1963, women saw no further presence in space until the 1980s and are still underrepresented, e.g. with no women ever having been present on the Moon. [ 18 ] An internationalization of direct human presence in space started with the first space rendezvous of two crews of different human spaceflight programs , the Apollo–Soyuz mission in 1975, and at the end of the 1970s with the Interkosmos program.
Space stations have so far harboured the only long-duration direct human presence in space. After the first station, Salyut 1 (1971), and its tragic Soyuz 11 crew, space stations have been operated consecutively since Skylab (1973), allowing a progression of long-duration direct human presence in space. Long-duration crews have been joined by visiting crews since 1977 ( Salyut 6 ). Consecutive direct human presence in space has been achieved since the Salyut successor Mir , starting in 1987, and was continued through the operational transition from Mir to the ISS , whose first occupation began an uninterrupted direct human presence in space in 2000. [ 4 ] Human population records in orbit grew from 1 in 1961, 2 in 1962, 4–7 in 1969, and 7–11 in 1984 to 13 in 1995, [ 44 ] 14 in 2021, 17 in 2023, [ 45 ] and 19 in 2024, [ 46 ] developing into a continuous population of no fewer than 10 people on two space stations since 5 June 2022 (as of 2024). [ 47 ] The ISS has hosted the most people in space at the same time, reaching 13 for the first time during the eleven-day docking of STS-127 in 2009. [ 48 ]
Beyond Earth, the Moon has been the only astronomical object which so far has seen direct human presence, through the week-long Apollo missions between 1968 and 1972, beginning with the first orbit by Apollo 8 in 1968 and the first landing by Apollo 11 in 1969. The longest extraterrestrial human stay was three days, by Apollo 17 .
While most persons who have been to space are astronauts , professional members of human spaceflight programs , particularly governmental ones, the few others, starting in the 1980s, have been trained and gone to space as spaceflight participants , with the first space tourist staying in space in 2001.
By the end of the 2010s, several hundred people from more than 40 countries had gone into space, most of them reaching orbit. 24 people have traveled to the Moon and 12 of them have walked on the Moon . [ 50 ] By 2007, space travelers had spent over 29,000 person-days (a cumulative total of over 77 years) in space, including over 100 person-days of spacewalks . [ 51 ] Usual durations for individuals to inhabit space on long-duration stays are six months, [ 52 ] with the longest stays on record being about a year.
A permanent human presence in space depends on an established space infrastructure which harbours, supplies, and maintains human presence. Such infrastructure has originally been Earth ground-based , but with increased numbers of satellites and long-duration missions beyond the near side of the Moon, space-to-space infrastructure is being used. The first simple interplanetary infrastructures have been created by space probes, particularly when employing a system that combines a lander with a relaying orbiter .
Space stations are space habitats which have provided a crucial infrastructure for sustaining a continuous direct human, including non-human, presence in space. Space stations have been continuously present in orbit around Earth from Skylab in 1973, to the Salyut stations , Mir and eventually ISS.
The planned Artemis program includes the Lunar Gateway , a future space station around the Moon, as a multimission waystation. [ 53 ]
Human presence has also been expressed through spiritual and artistic installations in outer space or on the Moon . Apollo 15 Mission Commander David Scott , for example, left a Bible on the Lunar Roving Vehicle during an extravehicular activity on the Moon. Space has furthermore been the site of people taking part in religious festivities, such as Christmas on the International Space Station .
Human presence in Earth orbit and heliocentric orbit has been maintained by a range of artificial objects since the beginning of spaceflight (possibly with debris since 1957, [ 3 ] but certainly since 1958 with Sputnik 1 and since 1959 with Luna 1 , respectively), and in more distant interplanetary heliocentric orbits since 1961 with Venera 1 . Extraterrestrial orbits other than heliocentric orbit have been achieved since 1966, starting with Luna 10 around the Moon, with several spacecraft orbiting the Moon at the same time that year starting with Lunar Orbiter 1 , and since 1971 with Mariner 9 around another planet (Mars).
Humans have also used and occupied co-orbital configurations , particularly halo orbits at different libration points , to harness the benefits of those so-called Lagrange points .
Some interplanetary missions, particularly the Ulysses solar polar probe and considerably Voyager 1 and 2 , as well as others like Pioneer 10 and 11 , have entered trajectories taking them out of the ecliptic plane .
Humanity has reached different types of astronomical bodies, but the longest and most diverse presence (including non-human, e.g. sprouting plants [ 54 ] ) has been on the Moon , particularly because it is the first and only extraterrestrial body having been directly visited by humans.
Space probes have been establishing and mediating human presence interplanetarily since their first visits to Venus . Mars has seen a continuous presence since 1997 , [ 55 ] after being first flown by in 1964 and landed on in 1971 . A group of missions have been present on Mars since 2001 , including continuous presence by a series of rovers since 2003 .
Besides having reached some planetary-mass objects (that is, planets , dwarf planets , or the largest, so-called planetary-mass moons ), humans have also reached, landed on, and in some cases even returned robotic probes from some small Solar System bodies , such as asteroids and comets , with a range of space probes .
The Solar System region near the Sun 's corona , inside Mercury 's orbit, with its high gravitational potential difference from Earth and the correspondingly high delta-v needed to reach it, has only been pierced to any depth on highly elliptical orbits by a few solar probes, such as Helios 1 & 2 and the more recent Parker Solar Probe . The latter has come closest to the Sun, breaking speed records at its very low solar altitudes at perihelion apsis .
Direct human presence beyond Earth orbit may be re-established if current plans for crewed research stations on the Moon and Mars continue to be developed.
Human presence in the outer Solar System was established by the first visit to Jupiter in 1973 by Pioneer 10 . [ 33 ] Fifty years later nine probes had traveled to the outer Solar System, and the first such probe by a space agency other than NASA (JUICE, the Jupiter Icy Moons Explorer ) had just been launched on its way. Jupiter and Saturn are the only outer Solar System bodies which have been orbited by probes (Jupiter: Galileo in 1995 and Juno in 2016; Saturn: Cassini–Huygens in 2004), with all other outer Solar System probes performing flybys.
The Saturn moon Titan , with its substantial atmosphere, has so far been the only body in the outer Solar System to be landed on, by the Cassini–Huygens lander Huygens in 2005.
Several probes have reached Solar escape velocity , with Voyager 1 being the first to cross the heliopause and enter interstellar space, on August 25, 2012, after 35 years of flight, at a distance of 121 AU from the Sun. [ 34 ]
Living in outer space is fundamentally different from living on Earth. It is shaped by the characteristic environment of outer space, particularly its microgravity (producing weightlessness) and its near-perfect vacuum (supplying few resources and allowing unhindered exposure to radiation and material from far away). Mundane needs such as air, pressure, temperature, and light have to be met entirely by life support systems . Furthermore, movement , food intake and hygiene all face challenges.
Long-duration stays are particularly endangered by the prevalent radiation exposure and the health effects of microgravity . Human fatalities have occurred in spaceflight accidents, particularly at launch and reentry . With the most recent fatal in-flight accident, the Columbia disaster of 2003, the total of in-flight fatalities rose to 15 astronauts and 4 cosmonauts , across five separate incidents. [ 57 ] [ 58 ] Over 100 others have died in accidents during activity directly related to spaceflight or testing.
None of those who died remained in space, but small portions of the remains of deceased people have been taken as space burials to orbital space since 1992 and, controversially, even to the Moon since 1999. [ 59 ]
Bioastronautics , space medicine , space technology and space architecture are fields concerned with alleviating the effects of space on humans and non-humans.
Research has begun into the culture and "microsocieties" that are formed in space, with space archeologists analyzing residue from space environments to learn about astronaut life. [ 60 ] A few incidents of astronauts from different countries having difficulties in getting along have also been studied. [ 61 ]
Human space activity, and the presence that follows from it, has had and continues to have an impact on space as well as on the capacity to access it. This impact, or its potential, has created the need to address issues regarding planetary protection, space debris, nuclear hazards , radio pollution and light pollution , as well as the reusability of launch systems , so that space does not become a sacrifice zone . [ 62 ]
Sustainability has been a goal of space law, space technology and space infrastructure, with the United Nations seeing the need to advance long-term sustainability of outer space activities in space science and application, [ 6 ] and the United States having it as a crucial goal of its contemporary space policy and space program. [ 7 ] [ 8 ]
Human presence in space is felt particularly in orbit around Earth. The orbital space around Earth has seen increasing and extensive human presence; beside its benefits, it has also produced a threat in the form of space debris, potentially cascading into the so-called Kessler syndrome . [ 5 ] This has raised the need for regulation and debris mitigation to secure sustainable access to outer space .
Individually and as societies, humans have engaged since pre-history in developing their perception of the space above the ground, or the cosmos at large , and their place in it.
Social sciences have studied such works, from pre-history to the contemporary era, through fields ranging from archaeoastronomy to cultural astronomy . With actual human activity and presence in space, the need for fields like astrosociology and space archaeology has been added.
Earth observation has been one of the first missions of spaceflight, resulting in a dense contemporary presence of Earth observation satellites , having a wealth of uses and benefits for life on Earth.
Viewing human presence from space, particularly by humans directly, has been reported by some astronauts to cause a cognitive shift in perception, especially while viewing the Earth from outer space; this effect has been called the overview effect .
Parallel to the overview effect, the term "ultraview effect" has been introduced for a subjective response of intense awe that some astronauts have experienced while viewing large "starfields" in space. [ 66 ]
Space observatories like the Hubble Space Telescope have been present in Earth's orbit, benefiting from advantages from being outside Earth's atmosphere and away from its radio noise , resulting in less distorted observation results.
Related to the long-running discussion of what human presence constitutes and how it should be lived, the debate between direct (e.g. crewed) and mediated (e.g. uncrewed) human presence has been decisive for how space policy makers have chosen human presence and its purposes. [ 67 ]
The relevance of this issue for space policy has risen with the advancement and resulting possibilities of telerobotics , [ 1 ] to the point where most human presence in space has been realized robotically, leaving direct human presence behind.
The location of human presence has been studied throughout history by astronomy and was significant in order to relate to the heavens, that is to outer space and its bodies.
The historic argument between geocentrism and heliocentrism is one example about the location of human presence.
Realizations of the scale of space have been taken as occasions to discuss humanity's and life's existence and relations to space and time beyond them, with some understanding humanity's or life's presence as a singularity or as existing in isolation , pondering the Fermi paradox .
A diverse range of arguments about how to relate to space beyond human presence has been raised, with some seeing space beyond humans as a reason to venture out and explore it, some aiming for contact with extraterrestrial life , and others arguing for the protection of humanity or life from its possibilities. [ 68 ] [ 69 ]
Considerations about the ecological integrity [ 70 ] and independence of celestial bodies counter exploitative understandings of space as dead, particularly in the sense of terra nullius , and have raised issues such as rights of nature .
Space and human presence in it has been the subject of different agendas. [ 2 ]
Human presence in space was, at its beginnings, fueled by the Cold War and the Space Race that grew out of it. During this time technological, nationalist, ideological and military competition were the dominant driving factors of space policy [ 71 ] and of the resulting activity and, particularly direct human, presence in space.
With the waning of the Space Race, concluded by cooperation in human spaceflight , focus shifted in the 1970s further to space exploration and telerobotics , yielding a range of achievements and technological advances. [ 72 ] By then, space exploration also meant government engagement in the search for extraterrestrial life .
Since human activity and presence in space produce spin-off benefits beyond the above purposes, such as Earth observation and communication satellites for civilian use, international cooperation to advance such benefits of human presence in space grew with time. [ 73 ] Particularly to continue the benefits of space infrastructure and space science, the United Nations has been pushing to safeguard human activity in outer space in a sustainable way . [ 6 ]
With the contemporary so-called NewSpace , the aim of commercializing space has grown, along with a narrative of space habitation for the survival of some humans away from and without Earth , which in turn has been critically analyzed and found to highlight colonialist purposes for human activity and presence in space. [ 74 ] This has given rise to a deeper engagement in the fields of space environment and space ethics . [ 75 ] | https://en.wikipedia.org/wiki/Human_presence_in_space
Since the discovery of ionizing radiation , a number of human radiation experiments have been performed to understand the effects of ionizing radiation and radioactive contamination on the human body, specifically with the element plutonium .
Numerous human radiation experiments have been performed in the United States, many of which were funded by various U.S. government agencies [ 3 ] such as the United States Department of Defense , the United States Atomic Energy Commission , and the United States Public Health Service . Also involved were several universities, most notably Vanderbilt University , which participated in several of the experiments. The experiments included:
In 1927, five-year-old Vertus Hardiman and nine other children from Lyles Station, Indiana , were severely irradiated during a medical experiment conducted at the local county hospital. To get parental consent, the experiment was misrepresented as a new therapy for the scalp fungus known as ringworm . [ 7 ] [ 8 ] Many of the children suffered long-term effects, but Hardiman's were the most pronounced. [ 9 ] The radiation disfigured his head and left a large, open wound on the side of his skull. [ 10 ] The parents of the children met with a local lawyer and filed a lawsuit against the hospital, but the hospital was found not liable. [ 9 ]
On January 15, 1994, President Bill Clinton formed the Advisory Committee on Human Radiation Experiments (ACHRE), chaired by Ruth Faden [ 11 ] [ 12 ] of the Johns Hopkins Berman Institute of Bioethics. One of the primary motivating factors behind his decision to create ACHRE was a step taken by his newly appointed Secretary of Energy, Hazel O'Leary , one of whose first actions on taking the helm of the United States Department of Energy was to announce a new openness policy for the department. The new policy led almost immediately to the release of over 1.6 million pages of classified records.
These records made clear that since the 1940s, the Atomic Energy Commission had been sponsoring tests on the effects of radiation on the human body. American citizens who had checked into hospitals for a variety of ailments were secretly injected, without their knowledge, with varying amounts of plutonium and other radioactive materials.
Ebb Cade was an unwilling participant in medical experiments that involved injection of 4.7 micrograms of plutonium on 10 April 1945 at Oak Ridge, Tennessee . [ 13 ] [ 14 ] This experiment was under the supervision of Harold Hodge . [ 15 ] Most patients thought it was "just another injection," but the secret studies left enough radioactive material in many of the patients' bodies to induce life-threatening conditions.
Such experiments were not limited to hospital patients, but included other populations such as those set out above, e.g., orphans fed irradiated milk, children injected with radioactive materials, and prisoners in Washington and Oregon state prisons. Much of the experimentation was carried out in order to assess how the human body metabolizes radioactive materials, information that could be used by the Departments of Energy and Defense in Cold War defense and attack planning.
ACHRE's final report was also a factor in the Department of Energy establishing an Office of Human Radiation Experiments (OHRE) that assured publication of DOE's involvement, by way of its predecessor, the AEC, in Cold War radiation research and experimentation on human subjects. The final report issued by the ACHRE can be found at the Department of Energy's website.
The Soviet nuclear program involved human experiments on a large scale, including most notably the Totskoye nuclear exercise of 1954 and the experiments conducted at the Semipalatinsk Test Site (1949–1989). As of 1950, there were around 700,000 participants at different levels of the program, half of whom were Gulag prisoners used for radioactivity experiments, as well as the excavation of radioactive ores. Information about the scale, conditions and lethality of those involved in the program is still kept classified by the Russian government and the Rosatom agency. [ 16 ] [ 17 ]
In the Marshall Islands , indigenous residents and crewmembers of the fishing boat Lucky Dragon No. 5 were exposed to radioactive fallout from the high-yield Castle Bravo test conducted at Bikini Atoll . Researchers subsequently exploited this ostensibly "unexpected" turn of events by conducting research on the onset of effects from radiation poisoning as part of Project 4.1 , raising ethical questions about both the specific incident and the broader phenomenon of testing in populated areas. [ 18 ]
Likewise, the Venezuelan geneticist Marcel Roche was implicated in Patrick Tierney's 2000 publication, Darkness in El Dorado , for allegedly administering radioactive iodine to indigenous peoples in the Orinoco basin of Venezuela , such as the Yanomami and Ye'Kwana peoples, in cooperation with the US Atomic Energy Commission (AEC) , possibly with no apparent benefit for the test group and without obtaining proper informed consent. This corresponded to similar administrations of iodine-124 by the French anthropologist Jacques Lizot in cooperation with the French Atomic Energy Commission (CEA) . [ 19 ] [ 20 ] | https://en.wikipedia.org/wiki/Human_radiation_experiments |
Human reproductive ecology is a subfield of evolutionary biology concerned with human reproductive processes and responses to ecological variables. [ 1 ] It is grounded in the natural and social sciences , drawing on theory and models from human and animal biology, evolutionary theory , and ecology . It is associated with fields such as evolutionary anthropology and seeks to explain human reproductive variation and adaptations. [ 2 ] The theoretical orientation of reproductive ecology applies the theory of natural selection to reproductive behaviors, and the field has also been referred to as the evolutionary ecology of human reproduction. [ 3 ]
Multiple theoretical foundations from evolutionary biology and evolutionary anthropology are important to human reproductive ecology. Notably, reproductive ecology relies heavily on life history theory , energetics, fitness theories , kin selection , and theories based on the study of animal evolution.
Life history theory is a prominent analytical framework used in evolutionary anthropology, biology, and reproductive ecology that seeks to explain the growth and development of an organism through the various life history stages of the entire lifespan. The life history stages include early growth and development, puberty, sexual development, the reproductive career, and the post-reproductive stage. Life history theory is based in evolutionary theory and suggests that natural selection operates on the allocation of different types of resources (material and metabolic) to meet the competing demands of growth, maintenance, and reproduction at the various life stages. [ 4 ] Life history theory is applied in reproductive ecology to theoretical understandings of puberty, sexual growth and maturation, fertility, parenting, and senescence, because at every life stage organisms encounter and cope with unconscious and conscious decisions that involve trade-offs. [ 5 ] Reproductive ecologists have contributed specifically to life history theory by improving its energetic models, which are complicated in humans and involve many causal factors. They draw on classical life history theory, behavioral ecology , and reproductive ecology to make predictions about reproductive behavior and growth. [ 6 ]
Analytical frameworks that explore problems relevant to reproductive ecology, such as age at menarche or lactational amenorrhea , often employ understandings of energetics in their hypotheses and models. [ 7 ] Energetics in this context refers to energy allocation, under the assumption that natural selection favors the optimal allocation and use of energy, but also that trade-offs often impose energetic constraints. Because energy allocations are evolved, they are to some extent predictable, but they also vary with ecological constraints.
The assumption that energy measured in calories can serve as a universal measure of nutritional cost has been criticized by a number of scientists on the basis of essential nutrients : nutrients that the body cannot synthesize regardless of calorie availability, and which must therefore be present in the diet. Critics argue that no universal ranking of the costs of different aspects of reproduction can be made, for several reasons: different essential nutrients are scarcest under different regional dietary conditions; the few foods containing the scarcest nutrients needed to avoid deficiency diseases are therefore the most expensive there (the cost may be paid in other goods and services in societies without money); and different functions in the body primarily consume different essential nutrients. For example, if the few micronutrients that men consume in greater quantities when producing more sperm, but that women do not consume in greater quantities during pregnancy or lactation, happened to be the scarcest nutrients contained in the most expensive local foods, sperm production would effectively be more expensive than pregnancy and lactation under local food prices in such societies. It is also argued that this variability in which foods are most valuable, due to containing the rarest essential nutrients, extends to the economic significance ratio between hunting and gathering in hunter-gatherer societies. Therefore any attempt to circumvent the evolutionary psychology paradox of a man being unable to be in two places at once, hunting while protecting his family, by positing guards hired with bartered meat would fail to make sex roles universal, because of the difference between regions where the rarest essential nutrients were contained in one or more types of meat and regions where they were contained in some types of plants.
It is cited in this context that humans evolved over relatively large parts of Africa with different food ecologies, making it impossible for humans to have specialized evolutionarily for one specific food cost ratio. This variability of food value ratios within Africa may have prepared humans evolutionarily to be able to leave Africa. [ 8 ] [ 9 ]
Researchers in human reproductive ecology use the combined approach of demography and evolutionary biology to explain reproductive phenomena. Biodemography is the study of demography in relation to biology and evolutionary biology . [ 10 ] Biodemographers research demographic outcomes such as conception , spontaneous abortion , births, marriage, divorce, menarche , menopause , aging, and mortality, using mathematical models, statistical estimates, and biomarkers to analyze demographic data. [ 11 ] The field of biodemography often explores scientific questions concerning fertility and mortality across cultures, the determinants of reproductive senescence, mortality and sex differences, low fertility in humans, and the long post-reproductive lifespan of women. [ 12 ]
In human reproductive ecology, the study of pregnancy focuses primarily on variation in pregnancy and on rates of pregnancy loss.
Pregnancy varies person-to-person and across cultural and socioeconomic lines. Human gestation is between 30 and 40 weeks long. [ 13 ] The dynamic between the mother and the fetus is one of conflict: it is in the best interest of the fetus to gestate as long as possible to continue receiving the nutritional and developmental benefits of being physically attached to the mother. For the mother, however, pregnancy is a highly demanding and risky time. Earlier births avoid complications in the birth of a too-large infant. The length of the pregnancy is a compromise between these two demands, and is influenced by factors such as socioeconomic status, health, and fetal development. Women of lower socioeconomic status have been shown to deliver their babies earlier on average than women of higher socioeconomic status. [ 14 ] Research has also shown that stress, especially during early pregnancy, can cause shorter gestation length and increase premature births. [ 15 ]
The rate of embryo loss changes throughout pregnancy. Before implantation in the uterine wall, the rate of loss is undetectable, as the hormone hCG is not secreted until implantation . [ 16 ] There is currently no way to detect pregnancy or pregnancy loss at this stage. After implantation, the rate of loss is highest in the first trimester of a pregnancy. [ 14 ] The chance of pregnancy loss lowers the further into gestation a woman is. [ 14 ]
Pregnancies may be unsuccessful for multiple reasons. The maternal immune system, though suppressed during ovulation, views the fertilized egg as a foreign body and will attack it. [ 17 ] Defective embryos may also be spontaneously aborted, or miscarried, whether due to chromosomal abnormality or developmental defects. Endometrial or placental development issues may also cause a pregnancy to fail. Additionally, the frequency of spontaneous abortion increases with the mother's age. [ 18 ] Older mothers have a higher rate of genetic abnormalities that can trigger pregnancy loss. [ 19 ]
Because human pregnancy is so costly, and human offspring so dependent on their mothers, early spontaneous abortion is high to ensure that energy of a pregnancy is spent on developing a fetus with a high chance of survival.
Human reproductive ecology considers fecundity and fertility from a demographic perspective. In this view, fecundity is the reproductive potential of an individual and fertility is the actual reproductive output of an individual.
Fecundity is determined by the biological limitations of the individual and can be reduced when biological and ecological factors impact an individual's reproductive capabilities. The key components of fecundity are a person's reproductive maturation and the maintenance of their reproductive system. In humans, the timing of female reproductive maturation is particularly variable and is heavily influenced by ecological considerations. In addition, the age at menarche has decreased over time in many global populations. [ 20 ] This phenomenon is referred to as the secular trend. Age at menarche is one measure of the fecundity of an individual female. Male reproductive maturity is less subject to environmental and ecological factors, and does not follow the secular trend that female puberty does.
In adults, fecundity is determined by the biological processes of reproduction. Female fecundity is heavily influenced by reproduction and energetics. The ovarian cycle limits the potential of conception to a brief period of fertility roughly once a month. Successful egg maturation, fertilization, and implantation must be able to occur for a reproductively mature female to be fecund. Changes in energy levels, diet, and hormones can all interfere in this process. During breastfeeding, a period of lactational infertility also reduces female fecundity. The metabolic load hypothesis in human reproductive ecology describes how the energetic expenditure of lactation acts to inhibit ovarian cycling. With the majority of available energy going towards milk production, energy is not expended on reproductive effort.
Male fecundity is primarily determined by the quality of sperm and the availability of fertile female mates. Individual variation in sperm load, pH, lifespan, and morphology creates varying fecundity in males. As males do not gestate, their contribution to fecundity is less well established post-reproduction.
A lack of fecundity in adults can be described as infecundity or infertility . Infertility occurs in about 10–15% of couples, [ 21 ] with the causes of infertility shared equally between males and females.
Fertility is the measure of an individual's actual reproductive output, rather than just their potential for reproductive success . Fertility rates vary both inter- and intra-culturally. Fertility for both males and females is dependent not just on biology but on cultural, religious, economic, and other sociological factors as well.
Natural fertility is emphasized in the study of human reproductive ecology. Natural fertility is the measure of human fertility in populations without birth control. Research on natural fertility populations seeks to understand the evolutionary context, ecological constraints, and predict outcomes for human fertility.
Fertility is influenced by fecundity, but has additional factors that can increase or decrease an individual's lifetime reproductive success. The inter-birth interval, the amount of time between a woman's births, impacts a woman's total fertility. This amount of time varies cross-culturally, as well as varies with different environmental constraints. Many cultures practice conscious birth spacing to adhere to the desired length of time between pregnancies, or desired number of children. Environmental concerns like fetal loss, lack of resource access, and disease may all impact fertility for females or males.
Fertility rates across the globe have steadily declined. [ 22 ] This trend, known as the demographic transition , began in the 1700s and continues today. It is strongly correlated with increased industrialization in a society. This trend is now seen in almost all cultures, resulting in some societies with below replacement fertility. Below replacement fertility is when the rate of childbirth in a society is less than the amount needed for each woman to have at least one daughter. Since the chance of having a daughter is 50/50, there must be at least two children for every adult woman in the population.
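The replacement-level arithmetic above can be made explicit with a short numerical sketch. This is an illustrative calculation, not from the article's sources; the slightly male-skewed birth sex ratio and the survival figure used below are assumptions for demonstration only.

```python
# Illustrative sketch: replacement-level total fertility rate (TFR).
# Replacement requires each woman to average one daughter who survives
# to reproductive age. All parameter values below are assumptions.

def replacement_tfr(prop_female=0.5, survival_to_repro_age=1.0):
    """TFR at which each woman averages one surviving daughter.

    prop_female: fraction of births that are girls.
    survival_to_repro_age: probability a girl survives to reproductive age.
    """
    return 1.0 / (prop_female * survival_to_repro_age)

def is_below_replacement(tfr, **kwargs):
    return tfr < replacement_tfr(**kwargs)

# With a 50/50 sex ratio and no mortality, replacement is exactly two
# children per woman, matching the text's back-of-the-envelope figure.
print(replacement_tfr())                          # 2.0
# An assumed realistic sex ratio (~48.8% girls) plus some childhood
# mortality pushes the threshold slightly above two.
print(round(replacement_tfr(0.488, 0.975), 2))    # 2.1
print(is_below_replacement(1.6))                  # True
```

This makes clear why demographers commonly quote a threshold slightly above 2.0 (around 2.1 in low-mortality populations) rather than exactly two children per woman.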
In 1961, French demographer Louis Henry introduced the term "natural fertility". [ 23 ] Natural fertility is defined as uncontrolled fertility when the couples do not control the number of children and the family size. Controlled fertility populations use controlled methods to stop having children after reaching a certain number of children.
In natural fertility populations, the parity-related controls of fertility are not influenced by modern birth control. Studying and understanding age-related changes in fecundity is therefore easier in natural fertility populations than in controlled fertility populations. Natural fertility populations also provide an easier platform to study reproductive behaviors that may affect levels of fertility, such as pregnancy loss, time to conception, and length of breastfeeding. [ 24 ] In the U.S. states of Pennsylvania and Ohio, Amish settlements, a natural fertility population owing to their religious beliefs, have been studied to understand age of marriage, age of first birth, birth intervals, age at last birth, and total fertility rate. [ 25 ] The Dogon population of Mali, West Africa, a natural fertility population with a high fertility rate, has been studied to understand the roles of the wife's age, the husband's age, nutritional status, breastfeeding status, sex of the last child, economic status, and polygyny in the waiting time to conception. [ 26 ] A natural fertility population in rural Bangladesh has been studied to predict the roles of parity, pregnancy loss, mother's age, economic status, child's sex, and husband's migration in the distribution of postpartum amenorrhea. [ 27 ]
The number of children in any family is associated with the quality of those children. [ 28 ] There is a trade-off between reproduction and the survival of the children, which influences the total fertility rate in humans globally. [ 29 ] In sub-Saharan African countries, child survival is negatively associated with the number of children in the family due to competition among children for parental investment . A decrease in birth interval length can also endanger the life of the child. In the Hungarian population, a shorter birth interval is associated with less maternal investment, which results in small body size and low birth weight of children at birth. [ 30 ] In historical Ireland (1700–1919) the number of children in the family was negatively associated with the lifespan and reproductive success of the children. [ 31 ] In various natural fertility populations, shorter birth intervals may cause higher infant mortality. Hunter-gatherer !Kung mothers must carry a greater load of food and babies on foraging trips, and shorter birth intervals result in higher infant mortality among them. [ 32 ] A 4-year birth interval is the optimum for !Kung women to maximize reproductive success. The total fertility of women is also related to their post-reproductive survival, [ 33 ] and in the pre-industrial (1766–1895) Swedish population, the number of children was found to be negatively associated with the longevity of mothers. [ 34 ]
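The quantity–quality trade-off described above, including the existence of an intermediate optimal birth interval, can be illustrated with a toy model. Everything here is a hypothetical sketch: the 20-year reproductive span and the logistic child-survival curve with its midpoint are assumed parameters, not estimates from the cited !Kung studies.

```python
import math

REPRO_SPAN_YEARS = 20      # assumed length of the reproductive career
SURVIVAL_MIDPOINT = 3.0    # assumed interval (years) at which survival is 50%

def child_survival(interval_years):
    # Hypothetical logistic curve: longer intervals -> better child survival.
    return 1.0 / (1.0 + math.exp(-(interval_years - SURVIVAL_MIDPOINT)))

def expected_surviving_children(interval_years):
    # Shorter intervals allow more births but lower per-child survival;
    # fitness here is the product of the two.
    births = REPRO_SPAN_YEARS / interval_years
    return births * child_survival(interval_years)

best_interval = max(range(1, 11), key=expected_surviving_children)
print(best_interval)  # -> 4: an intermediate interval maximizes the product
```

With these assumed parameters the optimum happens to land at four years; the point is not the specific number but that neither the shortest nor the longest interval maximizes expected surviving offspring.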
Puberty is the transitory stage in human development in which a person goes from a child to a reproductively mature adult; in other words, puberty is the process of sexual maturation in humans. The onset of puberty varies between boys and girls, with boys usually starting around 11–12 years of age and ending by 16–17, [ 35 ] [ 36 ] [ 37 ] [ 38 ] [ 39 ] and girls starting around 10–11 and ending at 15–17. [ 40 ] [ 20 ] [ 41 ] Activity in the hypothalamic–pituitary–gonadal axis (HPG axis) initiates puberty with the secretion of gonadotropin-releasing hormone (GnRH) from the hypothalamus to the anterior pituitary . The anterior pituitary releases the gonadotropins luteinizing hormone (LH) and follicle-stimulating hormone (FSH), which act on the gonads: the ovaries , which produce estrogen , and the testes , which produce testosterone . The central event of puberty for females is menarche , the first menstrual bleeding; for males, it is the first ejaculation . The onset of menarche is easier to determine due to the evidence of menstrual bleeding, while the first ejaculation in males is usually self-reported. In an evolutionary context, it is assumed that human physiology has been shaped by natural selection to maximize reproductive success by allotting energy and resources through trade-offs . [ 14 ]
This period of reproductive maturation sees the onset of primary sexual characteristics , the production of gametes and hormones by the gonads , and secondary sexual characteristics . Secondary sexual characteristics include adolescent growth spurt , pubic and axillary hair, genital enlargement, breast development in girls, beard growth in boys, increase in subcutaneous fat , increase in muscle mass, and widening of the pelvis in girls. While there is variation among individuals, secondary sexual characteristics tend to develop in a sequence. For girls, breast development is followed by the appearance of pubic hair, followed by menarche, and fat deposition and broadening of the hips occurring as the completion of breast development approaches. For boys, enlargement of the penis and testicles occurs, followed by pubic and axillary hair growth, voice change, facial hair growth, and muscle mass increase. [ 42 ] This period is also a time of cognitive and psychosocial development where social relationships, skills, and experiences outside of the core family are explored. [ 14 ]
While puberty is a consistent progression of events culminating in reproductive maturity, there is wide variation in the age of onset of puberty and the magnitude of the changes, which can be caused by a variety of different influences. Since the mid-19th century the global age of menarche has significantly decreased. [ 43 ] [ 44 ] [ 45 ] Dietary composition, disease, psychosocial circumstances, developmental conditions, genetics and epigenetics, and other environmental factors can all affect the age of onset of puberty. [ 46 ] In terms of evolutionary trade-offs, these factors can together alter the allocation of energy into growth, maintenance, or reproduction, as best needed for survival. Most research focuses on female puberty because its onset is easier to determine due to menarche. While there is variation in onset time and magnitude, the sequence of events stays more or less consistent; variations in the sequence can indicate a pathological condition.
Differences in the quality and quantity of nutrition account for one of the strongest environmental factors altering the onset of puberty. [ 47 ] Evidence has linked childhood obesity in girls with early pubertal timing, referencing an increased amount of body fat as a signal to the brain to initiate puberty, given an excess of available energetic resources, since developing a fetus is very energetically demanding. [ 48 ]
Disease and chronic illness in childhood can lead to a delay in pubertal timing in boys and girls. Inflammatory diseases , parasitic infections , and other illnesses that affect nutritional intake, especially chronic ones, are energetically costly; energy and resources have to be allocated to maintenance and health, sometimes diverting energy from growth or reproduction and thereby stunting or delaying them.
Variation in pubertal timing has been found to be due to direct genetic association between mothers and daughters in 46% of the population studied. An androgen receptor gene is believed to be involved, but the specific gene has not been identified. [ 47 ] [ 49 ] [ 50 ] [ 51 ] [ 52 ] Chemicals and hormones found in the environment [ 53 ] and in plastics, such as Bisphenol A (BPA), [ 54 ] have been thought to affect sexual development in humans at the prenatal or postnatal stage. According to the Centers for Disease Control and Prevention (CDC), BPA found in plastic bottles and containers leaches into foods and liquids when warmed up, as in the case of plastic baby bottles, and traces of the chemical were found in more than 90% of the U.S. population studied. BPA is of concern because it interferes with the actions of estrogen , which is needed as a developmental and reproductive regulator.
Most of the studies have reported that menarche may occur a few months earlier in girls in high-stress households, whose fathers are absent during their early childhood, who have a stepfather in the home, who are subjected to prolonged sexual abuse in childhood, or who are adopted from a developing country at a young age. Conversely, menarche may be slightly later when a girl grows up in a large family with a biological father present. However, when the stress is severely high and potentially life-threatening such as in times of war, the onset of puberty has been delayed.
Mate choice in human reproductive ecology is the process by which individuals selectively partner with others. Mate choice practices, like many of the topics in human reproductive ecology, vary greatly between individuals and between cultures.
Culture heavily influences mate choice, but there are evolutionary concepts that underpin research into mate choice. Honest signals are characteristics of an individual that are assumed to be true indicators of health and fecundity. Honest signals guide sexual selection , the process by which certain traits are picked by the potential mate and then proliferate throughout a species. Human cultures vary on what is considered to be a desirable honest signal. Emphasis on wealth, aesthetics, religious affiliation, and lineage, to name a few examples, are all used in different cultures as ways to choose a mate.
Monogamy is the mating strategy of two individuals partnering exclusively with each other for a period of time or for life. Monogamy in humans is generally accompanied by selective mate choice and mating, cohabitation, and bi-parental care for children. Humans may practice lifelong monogamy, as well as serial monogamy. Serial monogamy is the mating strategy of having sequential, non-overlapping partners.
Polygamy is the practice of having multiple partners at the same time. The composition of the relationship will determine which type of polygamy is being practiced. Polygyny is the practice of a male partnering with multiple females. It is a fairly common mating strategy in humans, as well as in many other animals. Polygyny often occurs in agricultural societies and is often paired with male wealth or land access. When males are able to disproportionately control resources, they may be able to support more than one female partner. Polyandry is the practice of a female partnering with multiple males. It is not as common in humans as polygyny, due in part to the constraints of female reproduction. While a female may only reproduce once at a time, a male may be able to contribute to multiple concurrent pregnancies. Polyandry is often seen in cases when there are more males in a society than females, or when males are considered to be unavailable.
In reproductive ecology, concepts related to parenting, social organization, and development are discussed. The concept of parental investment defined by Trivers and Willard [ 55 ] in the 1970s is used widely in reproductive ecology to analyze and understand provisioning strategies and how they relate to life history trade-offs. Trivers' parental investment [ 56 ] is defined as investment in offspring that benefits their survival and ability to reproduce, at the expense of the parent's ability to invest in other offspring. Inherent in these strategies is an underlying trade-off between energy and investment allocation to oneself as a parent and to each offspring.
Paternal investment is more variable than maternal investment worldwide, and compared to other primates [ 57 ] paternal investment is more robust in humans. Mating and pair-bonding include trade-offs such as choosing between investing in current offspring and investing in future mating opportunities. Over the course of human evolution, there is evidence of reduced sexual dimorphism in humans compared to other primates. This suggests that there was less male-male competition for female mates, which led to more male investment in offspring rather than in the pursuit of additional mates. [ 55 ] Paternal investment strategies vary facultatively based on alloparental care, the costs and benefits of offspring investment, societal pressures, divisions of labor, cultural expectations and norms, and the individual qualities of males in any given society. [ 58 ] In the field of reproductive ecology, there has been recent interest in exploring the endocrinology of social relationships, including the relation of paternal investment to endocrine function. [ 59 ] It has been shown that fatherhood in general reduces testosterone levels, while competition for mates increases testosterone. It has also been shown that male endocrine function is mediated by interactions with children. [ 60 ]
Maternal investment is widespread and less variable than paternal investment, but there has been recent evidence in the evolutionary literature supporting multiple mating systems for females as well. [ 61 ] This could suggest that mating systems may influence how maternal investment is given and the trade-offs posed both biologically and socially. Maternal investment is almost always necessary for the survival of offspring, because compared to other primates, human infants are highly altricial. [ 62 ] Human offspring also take longer to wean, remain dependent after weaning, and have a longer juvenile period. [ 63 ]
Parent–offspring conflict is a theory synthesized by Trivers in the 1970s alongside parental investment. Parent–offspring conflict is well documented and develops in tandem with the process of reproduction and parenting. It occurs in the relationship between parent and fetus (in the case of striking a balance between allocating placental energy stores to the growing fetus and maintaining the metabolic balance of the mother's biology), and between parent and offspring. Parent–offspring conflict is expected to be highest during the parental investment period. [ 64 ] The theory assumes there will be "disagreements" between parents and offspring about how long parental investment lasts, how resources are allocated, and how the life history trade-offs are maintained in the process. [ 64 ]
Parental investment provided by individuals other than mothers and fathers is considered allocare. Both paternal care and allocare can reduce the energetic costs of parenting for mothers. Allocare is often referred to as allomaternal care or allomothering if it is provided by anyone other than the mother. Based on kin selection theory , it is usually assumed that mothers have been ancestrally necessary to ensure offspring survival and reproduction. It is less clear to what extent paternal investment or other types of allocare are a necessity for offspring survival and reproduction. Maternal care is typically defined at its most basic level as pregnancy, birth, and lactation, but it includes other things such as provisioning, learning (in humans), mirroring (the mirroring of the mother's behavior), and holding, carrying, and touching. Various studies have shown that allocare can take many forms, such as provisioning, providing food, reducing costs for parents, time investments, economic investments, and other types of care such as holding. Studies in traditional societies and natural fertility populations have produced different results from those in industrialized societies. [ 65 ] Allomaternal care has been hypothesized to have influenced ancestral evolution through its association with increased brain size. [ 66 ] Allomaternal care is also part of a larger hypothesis of humans as cooperative breeders, whereby allocare discounts the individual costs of parenting, especially when sets of parents have children around the same time as each other, or have other kin or community members to provide care (see grandmother hypothesis ). Cooperative breeding is a social system that has conferred some advantage over time; it is much more common in humans and relatively rare in other mammalian species. Traits in our species that favor cooperative breeding evolved over time through altruism, within the context of kin selection and reciprocity.
Lactation is one of the costliest forms of parental investment: it is taxing at a metabolic and physiological level, but also in terms of time and emotion. There are many trade-offs regarding lactation, and recent work has explored cost-benefit models and thresholds for breastfeeding. [ 67 ] From a biological and evolutionary perspective, breastfeeding is biologically superior for infants, as breast milk contains various bioconstituents that provide nutrition, hydration, immune factors, hormones, and other components necessary to aid infant survival and growth. Lactational strategies vary cross-culturally, but can typically be characterized by sibling sets and sex ratios, frequency of nursing, total lactational duration, and milk composition. [ 16 ] Milk is composed of many bioconstituents, but only a few are outlined here. In the first days of the puerperium, the first milk is thick and yellowish; it is called colostrum . [ 16 ] For weeks after that, mature milk is expressed, and it has been shown that fetal-mammary gland signaling occurs even before birth in determining milk type and concentrations based on the fetus's sex. Colostrum plays an important role in establishing the infant gut microbiome , as it contains important immunoglobulins and is high in protein and low in fat and milk sugars such as lactose. [ 16 ] While breast milk is extremely important for infants' health outcomes, human mature milk is fairly dilute, which affects infant suckling behavior and in many cases has implications for the contraceptive properties of lactation. [ 16 ]
Post-partum infecundability, also referred to as lactational infecundability or lactational amenorrhea , refers to the section of the human birth interval from parturition to the first post-partum ovulation. [ 16 ] This period varies widely across the globe and between societies. The length of post-partum infecundability is heavily influenced by breastfeeding because breastfeeding has some contraceptive physiological effects. [ 16 ] Lactational amenorrhea has been shown to be important for infant survival as a mechanism to delay the next pregnancy, giving infants a longer period in which to gain the nutritional and immunological benefits of breast milk. [ 16 ] Post-partum hormonal levels change so that both estrogen and progesterone are "cleared from the maternal circulation" [ 16 ] and, without breastfeeding, levels of plasma FSH and LH gradually increase and lead to the return of regular menses within 2 months. [ 16 ] With breastfeeding, the resumption of normal menses occurs many months later, and the overall effect of lactational amenorrhea is influenced by the intensity of infant suckling. [ 16 ]
Ovarian aging is characterized by the gradual decline in the number of ovarian follicles and the decreasing quality of oocytes. Menopause is considered the final stage of ovarian aging. [ 68 ] Menopause is clinically defined as the absence of menstruation for more than a year. It marks the cessation of the reproductive phase of a woman's life. The biology of menopause is associated with the depletion of the ovarian follicular pool. By the fourth month of fetal life, the number of ovarian follicles reaches 6–7 million. [ 69 ] At birth, the number of ovarian follicles in the ovary declines to 1–2 million, and it decreases further to 300,000–400,000 by the age of menarche. Throughout the reproductive years, these follicles undergo atresia , and at the time of menopause the ovaries are left with approximately 1000 follicles. [ 70 ] Below this threshold, regular ovarian cycles cannot be maintained. The quality of ovarian follicles declines with age due to increased meiotic non-disjunction. After age 31, fecundity decreases and the rate of aneuploidy in the early embryo increases.
The regular menstrual cycle is associated with hormonal regulation by the hypothalamic–pituitary–ovarian axis. Gonadotropin-releasing hormone (GnRH) is secreted from the hypothalamus. The hypothalamic GnRH pulse drives the pulsatile secretion of follicle-stimulating hormone (FSH) and luteinizing hormone (LH) from the pituitary gland. [ 71 ] During the menstrual cycle , the level of FSH increases due to decreased levels of inhibin-A and steroid hormones. [ 72 ] With these hormonal changes, the corpus luteum degenerates. The elevated level of FSH helps recruit a cohort of FSH-sensitive antral follicles in that cycle. [ 73 ] During this phase, the elevated FSH level stimulates the production of estradiol and inhibins A and B. As the levels of estradiol and inhibin-B rise, negative feedback causes the FSH level to decline, which helps select the dominant follicle. During the menopausal transition, the FSH level is elevated in the early follicular phase, and because of the increased FSH level, the number of FSH-sensitive follicles decreases. [ 74 ] This series of events leads to irregular menstrual cycles, and the cycle length starts to shorten. FSH, inhibin-B, and anti-Müllerian hormone (AMH) are used as biomarkers of ovarian aging. [ 75 ]
Various genetic and endocrine factors influence the aging of the ovaries and the age at menopause. In some women, the ovaries age faster and the follicle pool diminishes before the age of 40. [ 76 ] This phenomenon is known as premature ovarian failure (POF), and it is used as a model for studying the genetics of ovarian aging. Genes such as GDF9 and BMP15 have been identified as candidate genes for POF. [ 77 ] POF shows genome-wide linkage to the chromosomal regions 9q21.3 and Xp21.3. [ 78 ] Several genes related to mitochondrial function, such as mt-Atp6, Sod1, Hspa4, and Nfkbia, are also associated with the aging of the ovary. [ 79 ] In addition, the 4977-bp mtDNA deletion in granulosa cells is associated with fertility in older women. [ 80 ]
A human resources management system ( HRMS ), also human resources information system ( HRIS ) or human capital management ( HCM ) system , is a form of human resources (HR) software that combines a number of systems and processes to ensure the easy management of human resources, business processes and data. Human resources software is used by businesses to combine a number of necessary HR functions, such as storing employee data, managing payroll, recruitment, benefits administration (total rewards), time and attendance, employee performance management, and tracking competency and training records.
A human resources management system ensures everyday human resources processes are manageable and easy to access. The field merges human resources as a discipline and, in particular, its basic HR activities and processes with the information technology field. This software category is analogous to how data processing systems evolved into the standardized routines and packages of enterprise resource planning (ERP) software. On the whole, these ERP systems have their origin from software that integrates information from different applications into one universal database. The linkage of financial and human resource modules through one database creates the distinction that separates an HRMS, HRIS, or HCM system from a generic ERP solution.
Structured management of human resource information, especially through human resource information systems, began with payroll systems in the late 1950s and continued into the 1960s, when employee data was first automated. [ citation needed ]
The first enterprise resource planning (ERP) system that integrated human resources functions was SAP R/2 (later replaced by R/3 and S/4HANA), introduced in 1979. This system gave users the ability to combine corporate data in real time and regulate processes from a single mainframe environment. Many of today's popular HR systems still offer considerable ERP and payroll functionality.
The first completely HR-centered client-server system for the enterprise market was PeopleSoft , released in 1987 and later bought by Oracle in 2005. Hosted and updated by clients, PeopleSoft overtook the mainframe environment concept in popularity. Oracle has also developed multiple similar BPM systems to automate corporate operations, including Oracle Cloud HCM . [ 1 ] [ 2 ]
Beginning in the late 1990s, HR vendors started offering cloud-hosted HR services to make this technology more accessible to small and remote teams. Instead of client-server installations, companies began using online accounts on web-based portals to access their employees' data. Mobile applications have also become more common.
HRIS and HRMS technologies have allowed HR functions to focus more on strategic assets to an organisation rather than the more traditional administrative function. For example, these roles include employee development, as well as analyzing the workforce to target talent-rich areas.
The function of human resources departments is administrative and common to all organizations. Organizations may have formalized selection, evaluation, and payroll processes. Management of " human capital " has progressed to an imperative and complex process. The HR function consists of tracking existing employee data, which traditionally includes personal histories, skills, capabilities, accomplishments, and salary. To reduce the manual workload of these administrative activities, organizations began to electronically automate many of these processes by introducing specialized human resource management systems.
HR executives rely on internal or external IT professionals to develop and maintain an integrated HRMS. Before client–server architectures evolved in the late 1980s, many HR automation processes were relegated to mainframe computers that could handle large amounts of data transactions. In consequence of the high capital investment necessary to buy or program proprietary software, these internally developed HRMS were limited to organizations that possessed a large amount of capital. The advent of client-server, application service provider , and software as a service (SaaS) human resource management systems enabled higher administrative control of such systems. Currently, human resource management systems tend to encompass:
The payroll module automates the pay process by gathering data on employee time and attendance, calculating various deductions and taxes, and generating periodic pay cheques and employee tax reports. Data is generally fed from human resources and timekeeping modules to calculate automatic deposit and manual cheque writing capabilities. This module can encompass all employee-related transactions as well as integrate with existing financial management systems.
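The calculation described above is not tied to any particular HRMS product, but its core can be sketched in a few lines. This is a minimal illustration: the 40-hour overtime threshold, flat tax rate, and deduction amounts are hypothetical, since real payroll modules apply jurisdiction-specific tax tables and pay rules.

```python
# Minimal sketch of a payroll module's core calculation (hypothetical rules):
# gross pay from time-and-attendance data, then taxes and fixed deductions.

def gross_pay(hours, hourly_rate, overtime_multiplier=1.5):
    """Regular pay for the first 40 hours, overtime pay beyond that."""
    regular = min(hours, 40.0) * hourly_rate
    overtime = max(hours - 40.0, 0.0) * hourly_rate * overtime_multiplier
    return regular + overtime

def net_pay(gross, tax_rate, fixed_deductions):
    """Apply a flat tax rate, then subtract fixed deductions (e.g. benefits)."""
    return gross * (1.0 - tax_rate) - sum(fixed_deductions)

# 45 hours at 20/hour: 40*20 + 5*20*1.5 = 950 gross
g = gross_pay(45, 20.0)
# 20% tax and a 50 benefits deduction: 950*0.8 - 50 = 710 net
n = net_pay(g, 0.20, [50.0])
```

In a deployed system, `hours` would be fed from the timekeeping module and the results passed on to direct-deposit and tax-reporting functions, as described above.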
The time and attendance module gathers standardized time and work-related data. The most advanced modules provide broad flexibility in data collection methods, labor distribution capabilities, and data analysis features. Cost analysis and efficiency metrics are the primary functions.
The benefits administration module provides a system for organizations to administer and track employee participation in benefits programs. These typically encompass insurance, compensation, profit sharing, and retirement.
The HR management module is a component covering many other HR aspects from application to retirement. The system records basic demographic and address data, selection, training and development, capabilities and skills management, compensation planning records and other related activities. Leading edge systems provide the ability to "read" applications and enter relevant data to applicable database fields, notify employers and provide position management and position control. Human resource management function involves the recruitment, placement, evaluation, compensation, and development of the employees of an organization. Initially, businesses used computer-based information systems to:
Online recruiting has become one of the primary methods employed by HR departments to garner potential candidates for available positions within an organization. Talent management systems , or recruitment modules, [ 3 ] offer an integrated hiring solution for HRMS which typically encompass:
The significant cost incurred in maintaining an organized recruitment effort, cross-posting within and across general or industry-specific job boards and maintaining a competitive exposure of availabilities has given rise to the development of a dedicated applicant tracking system (ATS) module.
The training module provides a system for organizations to administer and track employee training and development efforts. The system, normally called a "learning management system" (LMS) if a standalone product, allows HR to track education, qualifications, and skills of the employees, as well as outlining what training courses, books, CDs, web-based learning or materials are available to develop which skills. Courses can then be offered in date specific sessions, with delegates and training resources being mapped and managed within the same system. Sophisticated LMSs allow managers to approve training, budgets, and calendars alongside performance management and appraisal metrics. [ 4 ]
The employee self-service module allows employees to query HR-related data and perform some HR transactions through the system. Employees may query their attendance records without requesting the information from HR personnel. The module also lets supervisors approve overtime requests from their subordinates through the system without routing the task through the HR department.
Many organizations have gone beyond the traditional functions and developed human resource management information systems, which support recruitment, selection, hiring, job placement, performance appraisals, employee benefit analysis, health, safety, and security, while others integrate an outsourced applicant tracking system that encompasses a subset of the above.
The analytics module enables organizations to extend the value of an HRMS implementation by extracting HR related data for use with other business intelligence platforms. For example, organizations combine HR metrics with other business data to identify trends and anomalies in headcount in order to better predict the impact of employee turnover on future output.
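As an illustrative sketch of the kind of projection described above (the department names and turnover rates are hypothetical, and a real deployment would pull these figures from the HRMS database into a business intelligence platform), headcount can be projected forward from an observed annual turnover rate:

```python
# Hypothetical sketch: project future headcount per department by
# compounding an observed annual turnover (attrition) rate.

def project_headcount(headcount, turnover_rate, years):
    """Each year a department retains (1 - rate) of its staff."""
    return {dept: count * (1.0 - turnover_rate[dept]) ** years
            for dept, count in headcount.items()}

current = {"engineering": 120, "support": 80}
rates = {"engineering": 0.10, "support": 0.25}  # hypothetical annual rates

# After two years: engineering 120 * 0.9**2 = 97.2, support 80 * 0.75**2 = 45.0
projected = project_headcount(current, rates, years=2)
```

Comparing such projections against hiring plans is one simple way HR metrics combined with business data can anticipate the impact of turnover on future output.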
There are now many types of HRMS or HRIS, some of which are typically local-machine-based software packages; the other main type is an online cloud-based system that can be accessed via a web browser.
The staff training module enables organizations to enter, track, and manage employee and staff training. Each type of activity can be recorded together with additional data. The performance of each employee or staff member is then stored and can be accessed via the analytics module.
The employee reassignment module is a recent addition to HRMS functionality. This module handles transfers, promotions, pay revisions, re-designations, deputations, confirmations, and pay mode changes, along with the associated letter forms.
Employee self-service (ESS) provides employees access to their personal records and details. ESS features include allowing employees to change their contact details, banking information, and benefits. ESS also allows for administrative tasks such as applying for leave, seeing absence history, reviewing timesheets and tasks, inquiring about available loan programs, requesting overtime payment, viewing compensation history, and submitting reimbursement slips. With the emergence of ESS, employees are able to transact with their Human Resources office remotely.
With ESS features, employees can take more responsibility for their present job, skill development, and career planning . As part of HRIS, feedback is given for skill profiles, training and learning, objective setting, appraisals, and reporting/analytics. [ 5 ] These systems are especially useful for businesses with remote workers, where employees are highly mobile, have flexible working arrangements, or are not collocated with their manager. [ 5 ]
Human satellite II (HSAT-II) is an exceptionally high-copy but largely unexplored sequence of the human genome. Long thought of as junk DNA , it has a surprising ability to affect master regulators of the genome, and it goes awry in 50 percent of tumors. [ 1 ]
Because HSAT-II DNA is normally methylated (a form of gene regulation ), it remains dormant in healthy cells. For this reason, HSAT-II has not been extensively studied and has not been thought to have a function. Due to its similarity to Human Satellite 3, the primary sequence component of the traditional human satellite fraction II (also known as Human Satellite 2 or HSat2) is sometimes incorrectly annotated by RepeatMasker. In RepeatMasker annotations, both repeats frequently appear as a mixed pattern of "HSATII" and "(CATTC)n simple repeats." Because of this problem, Oxford Nanopore Technologies researchers used their own characterization of these sequences in the CHM13 genome to further classify each HSat2 array into its previously identified subfamilies. [ 2 ]
In fact, standard genomic experiments intentionally screen HSAT-II out of the results. Both herpes viruses and cancer manipulate this same pathway, causing genetic instability and disease. [ 3 ]
This genetics article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Human_satellite_II |
Human somatic variations are somatic mutations ( mutations that occur in somatic cells ) both at early stages of development and in adult cells. These variations can lead either to pathogenic phenotypes or not, even if their function in healthy conditions is not completely clear yet. [ 1 ]
The term mosaic (from medieval Latin musaicum , meaning "work of the Muses") [ 2 ] has been used since antiquity to refer to an artistic patchwork of ornamental stones, glass, gems, or other precious material. At a distance, the collective image appears as it would in a painting; only on close inspection do the individual components become recognizable. [ 2 ] In biological systems, mosaicism implies the presence of more than one genetically distinct cell line in a single organism . Occurrence of this phenomenon not only can result in major phenotypic changes but also reveal the expression of otherwise lethal genetic mutations . [ 3 ]
Genetic mutations involved in mosaicism may be due to endogenous factors, such as transposons and ploidy changes, or exogenous factors, such as UV radiation and nicotine. [ 4 ]
Somatic mosaicism arises as a result of somatic mutations : genomic (or even mitochondrial ) alterations of different sizes, ranging from a single nucleotide to chromosome gains or losses, within somatic cells. These alterations begin at an early stage (pre- implantation or conception) and continue during aging, giving rise to phenotypic heterogeneity within cells, which may lead to the development of diseases such as cancer. [ 4 ] Novel array-based techniques for screening genome-wide copy number variants and loss of heterozygosity in single cells have shown that chromosome aneuploidies , uniparental disomies, segmental deletions , duplications , and amplifications frequently occur during embryogenesis . [ 5 ] Yet not all somatic mutations are propagated to the adult individual, due to the phenomenon of cell competition . [ 6 ]
Genetic alterations involving gains or losses of entire chromosomes predominantly occur during the anaphase stage of cell division. These are uncommon in somatic cells because they are usually selected against due to their deleterious consequences. [ 7 ] Somatic variation during embryonic development is illustrated by monozygotic twins, who carry different copy number profiles and epigenetic marks that keep increasing with age. [ 8 ]
Early research on somatic mutations in aging showed that deletions, inversions, and translocations of genetic material are common in aging mice, and that aging genomes tend to contain visible chromosomal changes, mitotic recombination , whole-gene deletions, intragenic deletions, and point mutations . Other factors include the loss of methylation , increasing gene-expression heterogeneity correlating with genomic abnormalities, [ 4 ] and telomere shortening. [ 9 ] It is uncertain whether transcription-based DNA repair plays a part in maintaining somatic mutations in aging tissues. [ 4 ]
In some cells, the somatically acquired alterations can be reverted back to wild-type alleles by reversion mosaicism. This can be due to endogenous mechanisms such as homologous recombination , codon substitution, second-site suppressor mutations, DNA slippage, and mobile elements . [ 10 ]
The advent of Next-Generation Sequencing technologies has increased the resolution of mutation detection and has led to the revelation that older individuals not only accumulate chromosomal alterations but also abundant mutations in cancer driver genes. [ 11 ]
Age-associated accumulation of chromosomal alterations has been documented with a variety of cytogenetic approaches, from chromosome painting to single nucleotide polymorphism ( SNP ) arrays. [ 12 ]
Numerous studies have demonstrated that clonal populations may lead to loss of organismal health through the functional decline of tissue and/or the promotion of disease processes such as cancer. Aberrant clonal expansions (ACEs) resulting from cancer-associated mutations are accordingly common in noncancerous tissue and accumulate with age. This is universal in most organisms and affects multiple tissues. [ 11 ]
In the hematopoietic compartment, mutations include both large structural chromosomal alterations and point mutations affecting cancer-associated genes. Some translocations appear to occur very early in life. The frequency of these events is low in people younger than 50 years (<0.5%) but rises rapidly to 2% to 3% of individuals in their 70s and 80s. This phenomenon was termed clonal hematopoiesis . A number of environmental factors, such as smoking, viral infections, and pesticide exposure, may contribute not only through mutation induction but also by modulation of clonal expansion. [ 13 ]
In contrast, the detection of somatic variants in normal solid tissues has historically proved difficult. The main reasons are the generally slower replicative index, clonally restrictive tissue architecture, difficulty of tissue access, and low frequency of mutation occurrence. Recently, the analysis of somatic mutations in benign tissues adjacent to tumors revealed that 80% of samples harbor clonal mutations, with increased frequency associated with older age, smoking, and concurrent mutations in DNA repair genes. With the advent of NGS, it has become increasingly clear that somatic mutations accumulate with aging in normal tissue, even in individuals who are cancer -free. [ 11 ]
This suggested that clonal expansions driven by cancer genes are a near-universal feature of aging. NGS technologies revealed that clonal expansions of cancer-associated mutations are very common in somatic tissues. [ 11 ]
Several recent studies have highlighted the prevalence of somatic variations in both pathological and healthy nervous systems. [ 14 ] [ 15 ]
Somatic variations such as SNVs ( single-nucleotide variations ) and CNVs ( copy number variations ) have been particularly observed and linked to brain dysfunctions when arising in prenatal brain development. Somatic aneuploidies, however, have been observed at rates of 1.3–40%, potentially increasing with age, and for this reason they have been proposed as a mechanism for generating normal genetic diversity among neurons . [ 16 ]
This hypothesis has been confirmed through single-cell sequencing studies, which allow a direct assessment of single neuronal genomes and thus a systematic characterization of somatic aneuploidies and subchromosomal CNVs in these cells. Using postmortem brains of both healthy and diseased humans, it has been possible to study how CNVs differ between the two groups. It emerged that somatic aneuploidies in healthy brains are quite rare, whereas somatic CNVs are not. [ 17 ]
These studies also showed that clonal CNVs exist in both pathological and healthy brains. This means that some CNVs can arise in early development without causing disease, even though, compared to the CNVs arising in other cell types such as lymphoblasts , those in the brain are more often private. This may be explained by the fact that, while lymphoblasts can generate clonal CNVs over a long period as they continue to proliferate, adult neurons no longer replicate, so the clonal CNVs they carry must have been generated at an early developmental stage. [ 17 ]
Data highlighted a tendency in neurons toward the loss, rather than the gain, of copies when compared to lymphoblasts. These differences could suggest that the molecular mechanisms by which CNVs arise in these two cell types are completely different. [ 17 ]
The retrotransposon LINE-1 (long interspersed element 1, L1) is a transposable element that has colonized the mammalian germline. L1 retrotransposition can also occur in somatic cells, causing mosaicism (SLAVs – somatic L1-associated variations), and in cancer. Retrotransposition is a copy-and-paste process in which the RNA template is reverse-transcribed into DNA and integrated randomly into the genome. In humans there are around 500,000 copies of L1, occupying about 17% of the genome. Its mRNA encodes two proteins; one of them in particular has reverse transcriptase and endonuclease activity that allows retrotransposition in cis. However, most of these copies are rendered immobile by mutations or 5’ truncation, leaving only about 80–100 mobile L1s per human genome, of which only about 10 are considered "hot" L1s, i.e. able to mobilize efficiently. [ 18 ]
L1 transposes using a mechanism called TPRT (target-primed reverse transcription), which inserts an L1 endonuclease motif, target site duplications (TSDs), and a poly-A tail, with a cis preference. [ 19 ]
L1 mobilization has previously been observed in neural progenitors during foetal and adult neurogenesis, suggesting that the brain may be an L1 mosaicism hotspot. Moreover, some studies suggested that non-dividing neurons can also support L1 mobilization. This has been confirmed by single-cell genomic studies. [ 18 ]
Single-cell paired-end sequencing experiments found that SLAVs are present in both neurons and glia of the hippocampus and frontal cortex . Every neural cell has a similar probability of containing a SLAV, suggesting that somatic variation is a random phenomenon rather than one focused on a specific group of cells. SLAV occurrence in the brain is estimated at 0.58–1 SLAVs per cell, involving 44–63% of brain cells. [ citation needed ]
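The two figures quoted above are mutually consistent under a simple Poisson model of random, independent insertion (an interpretive assumption for illustration, not a claim made by the cited studies): if SLAVs hit cells at a mean rate λ per cell, the fraction of cells carrying at least one SLAV is 1 − e^(−λ).

```python
import math

# If SLAVs accrue randomly and independently, the number of SLAVs per cell
# follows a Poisson distribution with mean rate lam. The fraction of cells
# carrying at least one SLAV is then 1 - P(0 insertions) = 1 - exp(-lam).
def fraction_with_slav(lam: float) -> float:
    return 1.0 - math.exp(-lam)

for lam in (0.58, 1.0):
    print(f"{lam:.2f} SLAVs/cell -> {fraction_with_slav(lam):.0%} of cells affected")
# 0.58 SLAVs/cell -> 44% of cells affected
# 1.00 SLAVs/cell -> 63% of cells affected
```

Note how the 0.58–1 SLAVs-per-cell range maps directly onto the 44–63% of affected cells reported above, which is what the Poisson assumption predicts.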
Since experiments showed that half of the analyzed SLAVs lack target site duplications (TSDs), another kind of L1-associated variant might occur. Those sequences lack endonuclease activity but still carry endonuclease motifs, so they can be retrotransposed in trans . [ citation needed ]
An application of the study of somatic mosaicism in the brain could be the tracing of specific brain cells. Indeed, if a somatic L1 insertion occurs in a progenitor cell , the unique variant can be used to trace the progenitor cell's development, localization, and spread through the brain. Conversely, if the somatic L1 insertion occurs late in development, it will be present in just a single cell or a small group of cells. Therefore, tracing somatic variations could be useful for understanding at which point of development they occurred. Further experiments are necessary to understand the role of somatic mosaicism in brain function, since small groups of cells, or even single cells, can affect network activity. [ 20 ]
Human somatic mutations (HSMs) are intensively exploited by the immune system for the production of antibodies. HSMs, recombination in particular, are indeed the reason why antibodies can identify an epitope with such high specificity and sensitivity. [ 21 ]
Antibodies are produced by B cells. Each antibody is composed of two heavy chains (IgH, encoded by the IGH gene) and two light chains (IgL, encoded by either the IGL or the IGK gene). Each chain is in turn composed of a constant region (C) and a variable region (V). The constant region on the heavy chain is important in BCR signaling and determines the class of immunoglobulin (IgA, IgD, IgE, IgG, or IgM). The variable region is responsible for recognition of the target epitope and is the product of recombination processes at the related loci. [ 22 ]
After exposure to an antigen , B cells start developing. The B cell genome undergoes repeated recombination processing of the Ig genes until recognition of the epitope is perfected. The recombination involves the IGH locus first and then the IGL and IGK loci. All IGL, IGK, and IGH genes are products of the V(D)J recombination process, which involves the variable (V), diversity (D), and joining (J) segments. All three segments (V, D, J) are involved in the formation of the heavy chain, while only V and J recombination products encode the light chain. [ 23 ]
The recombination between these regions allows the formation of 10¹²–10¹⁸ potential different sequences. However, this number is an overestimate, since many factors limit the diversity of the B cell repertoire, first among them the actual number of B cells in the organism. [ 23 ]
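A rough back-of-the-envelope sketch shows where estimates of this magnitude come from. The segment counts below are approximate published figures for the human loci, used here only for illustration, and the junctional-diversity multiplier (random nucleotide insertion/deletion at segment joints, which accounts for most of the 10¹²–10¹⁸ figure) is an assumed placeholder rather than a measured value.

```python
# Approximate functional gene-segment counts for the human Ig loci
# (illustrative figures, not exact counts).
IGH = {"V": 65, "D": 27, "J": 6}   # heavy chain: V, D, and J segments
IGK = {"V": 40, "J": 5}            # kappa light chain: V and J only
IGL = {"V": 30, "J": 4}            # lambda light chain: V and J only

def combinations(locus: dict) -> int:
    """Number of distinct segment joins for one locus (product of counts)."""
    n = 1
    for count in locus.values():
        n *= count
    return n

heavy = combinations(IGH)                      # 65 * 27 * 6 = 10,530
light = combinations(IGK) + combinations(IGL)  # 200 + 120 = 320

# Pairing any heavy chain with any light chain:
combinatorial_diversity = heavy * light        # 3,369,600

# Junctional diversity multiplies this enormously; the factor below is
# an assumed illustrative value, not a measured one.
JUNCTIONAL_FACTOR = 1e7
theoretical_repertoire = combinatorial_diversity * JUNCTIONAL_FACTOR

print(f"combinatorial: {combinatorial_diversity:,}")
print(f"with junctional diversity: ~{theoretical_repertoire:.1e}")
```

Even with a modest assumed junctional factor, the estimate lands inside the 10¹²–10¹⁸ range, which is why the actual B cell count in the organism, not the recombination machinery, is the binding constraint on repertoire size.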
Somatic mosaicism has been noted in the heart. Sequencing suggested that mosaic variation in the gap junction protein connexin in three of 15 patients might contribute to atrial fibrillation , [ 24 ] although subsequent reports in larger numbers of patients found no examples among a large panel of genes. [ 25 ] At Stanford , a team led by Euan Ashley demonstrated somatic mosaicism in the heart of a newborn presenting with a life-threatening arrhythmia. Family-based genome sequencing, as well as tissue RNA sequencing and single-cell genomics techniques, were used to verify the finding. A model combining partial and ordinary differential equations with inputs from heterologous single-channel electrophysiology experiments on the genetic variant recapitulated certain aspects of the clinical presentation. [ 26 ]
Human spaceflight (also referred to as manned spaceflight or crewed spaceflight ) is spaceflight with a crew or passengers aboard a spacecraft , often with the spacecraft being operated directly by the onboard human crew. Spacecraft can also be remotely operated from ground stations on Earth, or autonomously , without any direct human involvement. People trained for spaceflight are called astronauts (American or other), cosmonauts (Russian), or taikonauts (Chinese); and non-professionals are referred to as spaceflight participants or spacefarers . [ 1 ]
The first human in space was Soviet cosmonaut Yuri Gagarin , who launched as part of the Soviet Union's Vostok program on 12 April 1961 at the beginning of the Space Race . On 5 May 1961, Alan Shepard became the first American in space, as part of Project Mercury . Humans traveled to the Moon nine times between 1968 and 1972 as part of the United States' Apollo program , and have had a continuous presence in space for 24 years and 196 days on the International Space Station (ISS). [ 2 ] On 15 October 2003, the first Chinese taikonaut, Yang Liwei , went to space as part of Shenzhou 5 , the first Chinese human spaceflight. As of March 2025, humans have not traveled beyond low Earth orbit since the Apollo 17 lunar mission in December 1972.
Currently, the United States , Russia, and China are the only countries with public or commercial human spaceflight-capable programs . Non-governmental spaceflight companies have been working to develop human space programs of their own, e.g. for space tourism or commercial in-space research . The first private human spaceflight launch was a suborbital flight on SpaceShipOne on June 21, 2004. The first commercial orbital crew launch was by SpaceX in May 2020, transporting NASA astronauts to the ISS under United States government contract. [ 3 ]
Human spaceflight capability was first developed during the Cold War between the United States and the Soviet Union (USSR). These nations developed intercontinental ballistic missiles for the delivery of nuclear weapons , producing rockets large enough to be adapted to carry the first artificial satellites into low Earth orbit .
After the first satellites were launched in 1957 and 1958 by the Soviet Union, the US began work on Project Mercury , with the aim of launching men into orbit. The USSR was secretly pursuing the Vostok program to accomplish the same thing, and launched the first human into space, the cosmonaut Yuri Gagarin . On 12 April 1961, Gagarin was launched aboard Vostok 1 on a Vostok 3KA rocket and completed a single orbit. On 5 May 1961, the US launched its first astronaut , Alan Shepard , on a suborbital flight aboard Freedom 7 on a Mercury-Redstone rocket . Unlike Gagarin, Shepard manually controlled his spacecraft's attitude . [ 4 ] On 20 February 1962, John Glenn became the first American in orbit, aboard Friendship 7 on a Mercury-Atlas rocket . The USSR launched five more cosmonauts in Vostok capsules , including the first woman in space, Valentina Tereshkova , aboard Vostok 6 on 16 June 1963. Through 1963, the US launched a total of two astronauts in suborbital flights and four into orbit. The US also made two North American X-15 flights ( 90 and 91 , piloted by Joseph A. Walker ), that exceeded the Kármán line , the 100 kilometres (62 mi) altitude used by the Fédération Aéronautique Internationale (FAI) to denote the edge of space.
In 1961, US President John F. Kennedy raised the stakes of the Space Race by setting the goal of landing a man on the Moon and returning him safely to Earth by the end of the 1960s. [ 5 ] That same year, the US began the Apollo program of launching three-man capsules atop the Saturn family of launch vehicles . In 1962, the US began Project Gemini , which flew 10 missions with two-man crews launched by Titan II rockets in 1965 and 1966. Gemini's objective was to support Apollo by developing American orbital spaceflight experience and techniques to be used during the Moon mission. [ 6 ]
Meanwhile, the USSR remained silent about their intentions to send humans to the Moon and proceeded to stretch the limits of their single-pilot Vostok capsule by adapting it into a two- or three-person Voskhod capsule to compete with Gemini. They were able to launch two orbital flights in 1964 and 1965 and achieved the first spacewalk , performed by Alexei Leonov on Voskhod 2 , on 18 March 1965. However, the Voskhod did not have Gemini's capability to maneuver in orbit, and the program was terminated. The US Gemini flights did not achieve the first spacewalk, but overcame the early Soviet lead by performing several spacewalks, solving the problem of astronaut fatigue caused by compensating for the lack of gravity, demonstrating the ability of humans to endure two weeks in space, and performing the first space rendezvous and docking of spacecraft.
The US succeeded in developing the Saturn V rocket necessary to send the Apollo spacecraft to the Moon, and sent Frank Borman , James Lovell , and William Anders into 10 orbits around the Moon in Apollo 8 in December 1968. In 1969, Apollo 11 accomplished Kennedy's goal by landing Neil Armstrong and Buzz Aldrin on the Moon on 20 July and returning them safely on 24 July, along with Command Module pilot Michael Collins . Through 1972, a total of six Apollo missions landed 12 men on the Moon, half of whom drove electric-powered vehicles on the surface. The crew of Apollo 13 — Jim Lovell , Jack Swigert , and Fred Haise —survived an in-flight spacecraft failure; they flew by the Moon without landing and returned safely to Earth.
During this time, the USSR secretly pursued crewed lunar orbiting and landing programs . They successfully developed the three-person Soyuz spacecraft for use in the lunar programs, but failed to develop the N1 rocket necessary for a human landing, and discontinued their lunar programs in 1974. [ 7 ] Upon losing the Moon race they concentrated on the development of space stations , using the Soyuz as a ferry to take cosmonauts to and from the stations. They started with a series of Salyut sortie stations from 1971 to 1986.
In 1969, Nixon appointed his vice president, Spiro Agnew , to head a Space Task Group to recommend follow-on human spaceflight programs after Apollo. The group proposed an ambitious Space Transportation System based on a reusable Space Shuttle , which consisted of a winged, internally fueled orbiter stage burning liquid hydrogen, launched with a similar, but larger kerosene -fueled booster stage, each equipped with airbreathing jet engines for powered return to a runway at the Kennedy Space Center launch site. Other components of the system included a permanent, modular space station; reusable space tug ; and nuclear interplanetary ferry, leading to a human expedition to Mars as early as 1986 or as late as 2000, depending on the level of funding allocated. However, Nixon knew the American political climate would not support congressional funding for such an ambition, and killed proposals for all but the Shuttle, possibly to be followed by the space station. Plans for the Shuttle were scaled back to reduce development risk, cost, and time, replacing the piloted fly-back booster with two reusable solid rocket boosters , and the smaller orbiter would use an expendable external propellant tank to feed its hydrogen-fueled main engines . The orbiter would have to make unpowered landings.
In 1973, the US launched the Skylab sortie space station and inhabited it for 171 days with three crews ferried aboard an Apollo spacecraft. During that time, President Richard Nixon and Soviet general secretary Leonid Brezhnev were negotiating an easing of Cold War tensions known as détente . During the détente, they negotiated the Apollo–Soyuz program, in which an Apollo spacecraft carrying a special docking adapter module would rendezvous and dock with Soyuz 19 in 1975. The American and Soviet crews shook hands in space, but the purpose of the flight was purely symbolic.
The two nations continued to compete rather than cooperate in space, as the US turned to developing the Space Shuttle and planning the space station, which was dubbed Freedom . The USSR launched three Almaz military sortie stations from 1973 to 1977, disguised as Salyuts. They followed Salyut with the development of Mir , the first modular, semi-permanent space station, the construction of which took place from 1986 to 1996. Mir orbited at an altitude of 354 kilometers (191 nautical miles), at an orbital inclination of 51.6°. It was occupied for 4,592 days and made a controlled reentry in 2001.
The Space Shuttle started flying in 1981, but the US Congress failed to approve sufficient funds to make Space Station Freedom a reality. A fleet of four shuttles was built: Columbia , Challenger , Discovery , and Atlantis . A fifth shuttle, Endeavour , was built to replace Challenger , which was destroyed in an accident during launch that killed 7 astronauts on 28 January 1986. From 1983 to 1998, twenty-two Shuttle flights carried components for a European Space Agency sortie space station called Spacelab in the Shuttle payload bay. [ 8 ]
The USSR developed its own counterpart to the US's reusable Space Shuttle orbiter, the Buran -class orbiter (or simply Buran ), which was designed to be launched into orbit by the expendable Energia rocket and was capable of robotic orbital flight and landing. Unlike the Space Shuttle, Buran had no main rocket engines, but like the Space Shuttle orbiter, it used smaller rocket engines to perform its final orbital insertion. A single uncrewed orbital test flight took place in November 1988. A second test flight was planned by 1993, but the program was canceled due to lack of funding and the dissolution of the Soviet Union in 1991. Two more orbiters were never completed, and the one that performed the uncrewed flight was destroyed in a hangar roof collapse in May 2002.
The dissolution of the Soviet Union in 1991 brought an end to the Cold War and opened the door to true cooperation between the US and Russia. The Soviet Soyuz and Mir programs were taken over by the Russian Federal Space Agency, which became known as the Roscosmos State Corporation . The Shuttle-Mir Program included American Space Shuttles visiting the Mir space station, Russian cosmonauts flying on the Shuttle, and an American astronaut flying aboard a Soyuz spacecraft for long-duration expeditions aboard Mir .
In 1993, President Bill Clinton secured Russia's cooperation in converting the planned Space Station Freedom into the International Space Station (ISS). Construction of the station began in 1998. The station orbits at an altitude of 409 kilometers (221 nmi) and an orbital inclination of 51.65°. Several of the Space Shuttle's 135 orbital flights were to help assemble, supply, and crew the ISS. Russia has built half of the International Space Station and has continued its cooperation with the US.
China was the third nation in the world, after the USSR and US, to send humans into space. During the Space Race between the two superpowers, which culminated with Apollo 11 landing humans on the Moon, Mao Zedong and Zhou Enlai decided on 14 July 1967 that China should not be left behind, and initiated their own crewed space program: the top-secret Project 714, which aimed to put two people into space by 1973 with the Shuguang spacecraft. Nineteen PLAAF pilots were selected for this goal in March 1971. The Shuguang-1 spacecraft, to be launched with the CZ-2A rocket, was designed to carry a crew of two. The program was officially canceled on 13 May 1972 for economic reasons.
In 1992, under the China Manned Space Program (CMS), also known as "Project 921", authorization and funding were given for the first phase of a third, successful attempt at crewed spaceflight. To achieve independent human spaceflight capability, China developed the Shenzhou spacecraft and the Long March 2F rocket dedicated to human spaceflight over the following years, along with critical infrastructure such as a new launch site and a flight control center. The first uncrewed spacecraft, Shenzhou 1 , was launched on 20 November 1999 and recovered the next day, marking the first step in the realization of China's human spaceflight capability. Three more uncrewed missions were conducted in the following years to verify the key technologies. On 15 October 2003, Shenzhou 5 , China's first crewed spaceflight mission, put Yang Liwei in orbit for 21 hours and returned him safely to Inner Mongolia , making China the third nation to launch a human into orbit independently. [ 9 ]
The goal of the second phase of CMS was to make technology breakthroughs in extravehicular activities (EVA, or spacewalk), space rendezvous , and docking to support short-term human activities in space. [ 10 ] On 25 September 2008 during the flight of Shenzhou 7 , Zhai Zhigang and Liu Boming completed China's first EVA. [ 11 ] In 2011, China launched the Tiangong 1 target spacecraft and Shenzhou 8 uncrewed spacecraft. The two spacecraft completed China's first automatic rendezvous and docking on 3 November 2011. [ 12 ] About 9 months later, Tiangong 1 completed the first manual rendezvous and docking with Shenzhou 9 , which carried China's first female astronaut Liu Yang . [ 13 ]
In September 2016, Tiangong 2 was launched into orbit. It was a space laboratory with more advanced functions and equipment than Tiangong 1 . A month later, Shenzhou 11 was launched and docked with Tiangong 2 . Two astronauts entered Tiangong 2 and were stationed for about 30 days, verifying the viability of astronauts' medium-term stay in space. [ 14 ] In April 2017, China's first cargo spacecraft, Tianzhou 1 docked with Tiangong 2 and completed multiple in-orbit propellant refueling tests, which marked the successful completion of the second phase of CMS. [ 14 ]
The third phase of CMS began in 2020. The goal of this phase is to build China's own space station, Tiangong . [ 15 ] The first module of Tiangong , the Tianhe core module , was launched into orbit by China's most powerful rocket Long March 5B on 29 April 2021. [ 16 ] It was later visited by multiple cargo and crewed spacecraft and demonstrated China's capability of sustaining Chinese astronauts' long-term stay in space.
According to a CMS announcement, all missions of the Tiangong Space Station are scheduled to be carried out by the end of 2022. [ 17 ] Once construction is completed, Tiangong will enter the application and development phase, which is poised to last no less than 10 years. [ 17 ]
The European Space Agency began development of the Hermes shuttle spaceplane in 1987, to be launched on the Ariane 5 expendable launch vehicle. It was intended to dock with the European Columbus space station . The projects were canceled in 1992 when it became clear that neither cost nor performance goals could be achieved. No Hermes shuttles were ever built. The Columbus space station was reconfigured as the European module of the same name on the International Space Station. [ 18 ]
Japan ( NASDA ) began the development of the HOPE-X experimental shuttle spaceplane in the 1980s, to be launched on its H-IIA expendable launch vehicle. A string of failures in 1998 led to funding reductions, and the project's cancellation in 2003 in favor of participation in the International Space Station program through the Kibō Japanese Experiment Module and H-II Transfer Vehicle cargo spacecraft. As an alternative to HOPE-X, NASDA in 2001 proposed the Fuji crew capsule for independent or ISS flights, but the project did not proceed to the contracting stage. [ citation needed ]
From 1993 to 1997, the Japanese Rocket Society [ ja ] , Kawasaki Heavy Industries , and Mitsubishi Heavy Industries worked on the proposed Kankoh-maru vertical-takeoff-and-landing single-stage-to-orbit reusable launch system. In 2005, this system was proposed for space tourism. [ 19 ]
According to a press release from the Iraqi News Agency dated 5 December 1989, there was only one test of the Al-Abid space launcher, which Iraq intended to use to develop its own crewed space facilities by the end of the century. These plans were put to an end by the Gulf War of 1991 and the economic hardships that followed. [ citation needed ]
Under the George W. Bush administration, the Constellation program included plans for retiring the Space Shuttle program and replacing it with the capability for spaceflight beyond low Earth orbit. In the 2011 United States federal budget , the Obama administration canceled Constellation for being over budget and behind schedule, while not innovating and investing in critical new technologies. [ 20 ] As part of the Artemis program , NASA is developing the Orion spacecraft to be launched by the Space Launch System . Under the Commercial Crew Development plan, NASA relies on transportation services provided by the private sector to reach low Earth orbit, such as SpaceX Dragon 2 , the Boeing Starliner or Sierra Nevada Corporation 's Dream Chaser . The period between the retirement of the Space Shuttle in 2011 and the first launch into space of SpaceShipTwo Flight VP-03 on 13 December 2018 is similar to the gap between the end of Apollo in 1975 and the first Space Shuttle flight in 1981, and is referred to by a presidential Blue Ribbon Committee as the U.S. human spaceflight gap.
Since the early 2000s, a variety of private spaceflight ventures have been undertaken. As of November 2024, SpaceX [ 21 ] and Boeing [ 22 ] have launched humans to orbit, [ note 1 ] while Blue Origin has launched 8 crewed flights, six of which crossed the Kármán line . [ 23 ] [ note 2 ] Virgin Galactic has launched crew to a height above 80 km (50 mi) on a suborbital trajectory. [ 25 ] Several other companies, including Sierra Nevada and Copenhagen Suborbitals , have developed crewed spacecraft. [ 26 ] [ 27 ] SpaceX, Boeing, Blue Origin, and Virgin Galactic plan to fly commercial passengers in the emerging space tourism market. [ 28 ]
SpaceX has developed Crew Dragon flying on Falcon 9 . It first launched astronauts to orbit and to the ISS in May 2020 as part of the Demo-2 mission. Developed as part of NASA's Commercial Crew Development program, the capsule is also available for flights with other customers. A first tourist mission, Inspiration4 , launched in September 2021. [ 29 ]
Boeing developed the Starliner capsule as part of NASA's Commercial Crew Development program, which is launched on a United Launch Alliance Atlas V launch vehicle. [ 30 ] Starliner made an uncrewed flight in December 2019. A second uncrewed flight attempt was launched in May 2022. [ 31 ] A crewed flight to fully certify Starliner was launched in June 2024. [ 32 ] Similar to SpaceX, development funding has been provided by a mix of government and private funds. [ 33 ] [ 34 ]
Virgin Galactic is developing SpaceShipTwo , a commercial suborbital spacecraft aimed at the space tourism market. It reached space in December 2018. [ 25 ]
Blue Origin is in a multi-year test program of its New Shepard vehicle and has carried out thirty-one launches as of May 2025, including twenty uncrewed test flights and eleven crewed flights. The first crewed flight, carrying founder Jeff Bezos , his brother Mark Bezos , aviator Wally Funk , and 18-year-old Oliver Daemen , launched on July 20, 2021. [ 35 ]
Over the decades, a number of spacecraft have been proposed for spaceliner passenger travel. Somewhat analogous to travel by airliner after the middle of the 20th century, these vehicles are proposed to transport large numbers of passengers to destinations in space, or on Earth via suborbital spaceflights . To date, none of these concepts have been built, although a few vehicles that carry fewer than 10 persons are currently in the test flight phase of their development process. [ citation needed ]
One large spaceliner concept currently in early development is the SpaceX Starship , which, in addition to replacing the Falcon 9 and Falcon Heavy launch vehicles in the legacy Earth-orbit market after 2020, has been proposed by SpaceX for long-distance commercial travel on Earth, flying 100+ people suborbitally between two points in under one hour, also known as "Earth-to-Earth". [ 36 ] [ 37 ] [ 38 ]
Small spaceplane or small capsule suborbital spacecraft have been under development for the past decade or so; as of 2017, at least one of each type is under development. Both Virgin Galactic and Blue Origin have craft in active development : the SpaceShipTwo spaceplane and the New Shepard capsule, respectively. Both would carry approximately a half-dozen passengers up to space for a brief period of zero gravity before returning to the launch location. XCOR Aerospace had been developing the Lynx single-passenger spaceplane since the 2000s, [ 39 ] [ 40 ] but development was halted in 2017. [ 41 ]
Participation and representation of humanity in space has been an issue ever since the first phase of space exploration. [ 42 ] Some rights of non-spacefaring countries have been secured through international space law , declaring space the " province of all mankind ", though the sharing of space by all humanity is sometimes criticized as imperialist and lacking. [ 42 ] In addition to the lack of international inclusion, the inclusion of women and people of color has also been lacking. To make spaceflight more inclusive, organizations such as the Justspace Alliance [ 42 ] and IAU -featured Inclusive Astronomy [ 43 ] have been formed in recent years.
The first woman to enter space was Valentina Tereshkova . She flew in 1963, but it was not until the 1980s that another woman entered space. At the time, all astronauts were required to be military test pilots, a career closed to women, which is one reason for the delay in allowing women to join space crews. [ 44 ] After the rules were changed, Svetlana Savitskaya , also from the Soviet Union, became the second woman to enter space. Sally Ride became the next woman to enter space and the first woman to do so through the United States program. Since then, eleven other countries have flown women astronauts. The first all-female spacewalk was carried out in 2019 by Christina Koch and Jessica Meir , both of whom had previously participated in separate spacewalks with NASA. The first mission to the Moon with a woman aboard is planned for 2024.
Despite these developments, women are still underrepresented among astronauts and especially cosmonauts. More than 600 people have flown in space, but only 75 have been women. [ 45 ] Issues that block potential applicants from the programs, and that limit the missions they are able to fly, include:
Sally Ride became the first American woman in space, in 1983. Eileen Collins was the first female Shuttle pilot, and with Shuttle mission STS-93 in 1999 she became the first woman to command a U.S. spacecraft.
For many years, the USSR (later Russia) and the United States were the only countries whose astronauts flew in space. That ended with the 1978 flight of Czechoslovakia's Vladimir Remek. As of 2010, citizens from 38 nations (including space tourists ) have flown in space aboard Soviet, American, Russian, and Chinese spacecraft.
Human spaceflight programs have been conducted by the Soviet Union–Russian Federation, the United States, Mainland China , and by American private spaceflight companies.
The following space vehicles and spaceports are currently used for launching human spaceflights:
The following space stations are currently maintained in Earth orbit for human occupation:
Most of the time, the only humans in space are those aboard the ISS, which generally has a crew of 7, and those aboard Tiangong, which generally has a crew of 3.
NASA and ESA use the term "human spaceflight" to refer to their programs of launching people into space. These endeavors have also formerly been referred to as "manned space missions", though this is no longer official parlance according to NASA style guides, which call for gender-neutral language . [ 52 ]
Under the Indian Human Spaceflight Program , India planned to send humans into space on its orbital vehicle Gaganyaan before August 2022, but the mission has been delayed to 2024, due to the COVID-19 pandemic. The Indian Space Research Organisation (ISRO) began work on the project in 2006. [ 53 ] [ 54 ] The initial objective is to carry a crew of two or three to low Earth orbit (LEO) for a 3-to-7-day flight in a spacecraft on an LVM3 rocket and return them safely for a water landing at a predefined landing zone. On 15 August 2018, Indian Prime Minister Narendra Modi declared that India would independently send humans into space before the 75th anniversary of independence in 2022. [ 55 ] In 2019, ISRO revealed plans for a space station by 2030, followed by a crewed lunar mission. The program envisages the development of a fully autonomous orbital vehicle capable of carrying two or three crew members to an approximately 300 km (190 mi) low Earth orbit and bringing them safely back home. [ 56 ]
Since 2008, the Japan Aerospace Exploration Agency has been developing a crewed spacecraft based on the H-II Transfer Vehicle cargo spacecraft and a small space laboratory based on the Kibō Japanese Experiment Module .
NASA is developing a plan to land humans on Mars by the 2030s. The first step has begun with Artemis I in 2022, sending an uncrewed Orion spacecraft to a distant retrograde orbit around the Moon and returning it to Earth after a 25-day mission.
SpaceX is developing Starship , a fully reusable two-stage system, with near-Earth and cislunar applications and an ultimate goal of landing on Mars. The upper stage of the Starship system, also called Starship, has had 9 atmospheric test flights as of September 2021. The first test flight of the fully integrated two-stage system occurred in April 2023. A modified version of Starship is being developed for the Artemis program .
Several other countries and space agencies have announced and begun human spaceflight programs using natively developed equipment and technology, including Japan ( JAXA ), Iran ( ISA ), and North Korea ( NADA ). The plans for the Iranian crewed spacecraft are for a small spacecraft and space laboratory. North Korea 's space program has plans for crewed spacecraft and small shuttle systems.
There are two main sources of hazard in space flight: those due to the hostile space environment, and those due to possible equipment malfunctions. Addressing these issues is of great importance for NASA and other space agencies before conducting the first extended crewed missions to destinations such as Mars. [ 64 ]
Planners of human spaceflight missions face a number of safety concerns.
The basic needs for breathable air and drinkable water are addressed by the life support system of the spacecraft.
Astronauts may not be able to quickly return to Earth or receive medical supplies, equipment, or personnel if a medical emergency occurs. The astronauts may have to rely for long periods on limited resources and medical advice from the ground.
The possibility of blindness and of bone loss have been associated with human space flight . [ 65 ] [ 66 ]
On 31 December 2012, a NASA -supported study reported that spaceflight may harm the brains of astronauts and accelerate the onset of Alzheimer's disease . [ 67 ] [ 68 ] [ 69 ]
In October 2015, the NASA Office of Inspector General issued a health hazards report related to space exploration , which included the potential hazards of a human mission to Mars . [ 70 ] [ 71 ]
On 2 November 2017, scientists reported, based on MRI studies , that significant changes in the position and structure of the brain have been found in astronauts who have taken trips in space . Astronauts on longer space trips were affected by greater brain changes. [ 72 ] [ 73 ]
Researchers in 2018 reported, after detecting the presence on the International Space Station (ISS) of five Enterobacter bugandensis bacterial strains, none pathogenic to humans, that microorganisms on ISS should be carefully monitored to assure a healthy environment for astronauts . [ 74 ] [ 75 ]
In March 2019, NASA reported that latent viruses in humans may be activated during space missions, possibly adding more risk to astronauts in future deep-space missions. [ 76 ]
On 25 September 2021, CNN reported that an alarm had sounded during the Inspiration4 Earth-orbital journey on the SpaceX Dragon 2 . The alarm signal was found to be associated with an apparent toilet malfunction. [ 77 ]
Medical data from astronauts in low Earth orbits for long periods, dating back to the 1970s, show several adverse effects of a microgravity environment: loss of bone density, decreased muscle strength and endurance, postural instability, and reductions in aerobic capacity. Over time these deconditioning effects can impair astronauts' performance or increase their risk of injury. [ 78 ]
In a weightless environment, astronauts put almost no weight on the back muscles or leg muscles used for standing up, which causes the muscles to weaken and get smaller. Astronauts can lose up to twenty per cent of their muscle mass on spaceflights lasting five to eleven days. The consequent loss of strength could be a serious problem in case of a landing emergency. [ 79 ] Upon returning to Earth from long-duration flights, astronauts are considerably weakened and are not allowed to drive a car for twenty-one days. [ 80 ]
Astronauts experiencing weightlessness will often lose their orientation, get motion sickness , and lose their sense of direction as their bodies try to get used to a weightless environment. When they get back to Earth, they have to readjust and may have problems standing up, focusing their gaze, walking, and turning. Importantly, those motor disturbances only get worse the longer the exposure to weightlessness. [ 81 ] These changes can affect the ability to perform tasks required for approach and landing, docking, remote manipulation, and emergencies that may occur while landing. [ 82 ]
In addition, after long space flight missions, male astronauts may experience severe eyesight problems, which may be a major concern for future deep space flight missions, including a crewed mission to the planet Mars . [ 83 ] [ 84 ] [ 85 ] [ 86 ] [ 87 ] [ 88 ] Long space flights can also alter a space traveler's eye movements. [ 89 ]
Without proper shielding, the crews of missions beyond low Earth orbit might be at risk from high-energy protons emitted by solar particle events (SPEs) associated with solar flares . The radiation dose from a solar storm comparable to the most powerful in recorded history, the Carrington Event , would cause at least acute radiation sickness and could even be fatal "in a poorly shielded spacecraft". [ 91 ] [ better source needed ] Another storm that could have inflicted a potentially lethal radiation dose on astronauts outside Earth's protective magnetosphere occurred in August 1972 , shortly after Apollo 16 landed and before Apollo 17 launched. [ 92 ] It could have caused acute radiation sickness in any exposed astronauts, and might even have been lethal for those engaged in extravehicular activity or on the lunar surface. [ 93 ]
Another type of radiation, galactic cosmic rays , presents further challenges to human spaceflight beyond low Earth orbit. [ 94 ]
There is also some scientific concern that extended spaceflight might slow down the body's ability to protect itself against diseases, [ 95 ] resulting in a weakened immune system and the activation of dormant viruses in the body. Radiation can cause both short- and long-term consequences to the bone marrow stem cells from which blood and immune-system cells are created. Because the interior of a spacecraft is so small, a weakened immune system and more active viruses in the body can lead to a fast spread of infection. [ 96 ]
During long missions, astronauts are isolated and confined in small spaces. Depression , anxiety, cabin fever , and other psychological problems may occur more than for an average person and could impact the crew's safety and mission success. [ 97 ] NASA spends millions of dollars on psychological treatments for astronauts and former astronauts. [ 98 ] To date, there is no way to prevent or reduce mental problems caused by extended periods of stay in space.
These mental disorders impair the efficiency of astronauts' work, and sometimes force them to return to Earth, at the considerable expense of an aborted mission. [ 99 ] A Soviet expedition in 1976 was returned to Earth after the cosmonauts reported a strong odor that raised fears of a fluid leak, but a thorough investigation found no leak or technical malfunction. NASA concluded that the cosmonauts had most likely hallucinated the smell .
It is possible that astronauts' mental health is affected by changes in their sensory systems during prolonged space travel.
During spaceflight, astronauts are in an extreme environment in which little changes from day to day; this weakens the input to the astronauts' seven senses.
Space flight requires much higher velocities than ground or air transportation, and consequently requires the use of high energy density propellants for launch, and the dissipation of large amounts of energy, usually as heat, for safe reentry through the Earth's atmosphere.
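As an illustrative back-of-the-envelope calculation (the specific numbers below are standard physical constants, not taken from this article), the velocity needed for a circular low Earth orbit and the corresponding kinetic energy per kilogram — which must be dissipated, mostly as heat, during reentry — can be sketched as:

```python
import math

# Standard gravitational parameter of Earth (m^3/s^2) and mean radius (m)
MU_EARTH = 3.986004418e14
R_EARTH = 6.371e6

def circular_orbit_velocity(altitude_m: float) -> float:
    """Speed needed for a circular orbit at the given altitude (m/s)."""
    return math.sqrt(MU_EARTH / (R_EARTH + altitude_m))

v = circular_orbit_velocity(400e3)   # ~400 km, roughly ISS altitude
specific_ke = 0.5 * v ** 2           # kinetic energy per kilogram (J/kg)

print(f"orbital velocity ≈ {v / 1000:.1f} km/s")          # ≈ 7.7 km/s
print(f"kinetic energy ≈ {specific_ke / 1e6:.0f} MJ/kg")  # ≈ 29 MJ/kg
```

For comparison, a jet airliner cruises at about 0.25 km/s, which is why launch demands such energy-dense propellants and reentry demands heat shielding.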
Since rockets have the potential for fire or explosive destruction, space capsules generally employ some sort of launch escape system , consisting either of a tower-mounted solid-fuel rocket to quickly carry the capsule away from the launch vehicle (employed on Mercury , Apollo , and Soyuz , the escape tower being discarded at some point after launch, at a point where an abort can be performed using the spacecraft's engines), or else ejection seats (employed on Vostok and Gemini ) to carry astronauts out of the capsule and away for individual parachute landings.
Such a launch escape system is not always practical for multiple-crew-member vehicles (particularly spaceplanes ), depending on the location of egress hatch(es). When the single-hatch Vostok capsule was modified to become the 2 or 3-person Voskhod , the single-cosmonaut ejection seat could not be used, and no escape tower system was added. The two Voskhod flights in 1964 and 1965 avoided launch mishaps. The Space Shuttle carried ejection seats and escape hatches for its pilot and copilot in early flights; but these could not be used for passengers who sat below the flight deck on later flights, and so were discontinued.
There have been only two in-flight launch aborts of a crewed flight. The first occurred on Soyuz 18a on 5 April 1975. The abort occurred after the launch escape system had been jettisoned when the launch vehicle's spent second stage failed to separate before the third stage ignited and the vehicle strayed off course. The crew finally managed to separate the spacecraft, firing its engines to pull it away from the errant rocket, and both cosmonauts landed safely. The second occurred on 11 October 2018 with the launch of Soyuz MS-10 . Again, both crew members survived.
The first use of a launch escape system on the launchpad, before the start of a crewed flight, happened during the planned Soyuz T-10a launch on 26 September 1983, which was aborted by a launch vehicle fire 90 seconds before liftoff. Both cosmonauts aboard landed safely.
The only crew fatality during launch occurred on 28 January 1986, when the Space Shuttle Challenger broke apart 73 seconds after liftoff, due to the failure of a solid rocket booster seal, which caused the failure of the external fuel tank , resulting in an explosion of the fuel and separation of the boosters. All seven crew members were killed.
Tasks outside a spacecraft require the use of a space suit . Despite the risk of mechanical failures while working in open space, there have been no spacewalk fatalities. Spacewalking astronauts routinely remain attached to the spacecraft with tethers and sometimes supplementary anchors. Untethered spacewalks were performed on three missions in 1984 using the Manned Maneuvering Unit , and on a flight test in 1994 of the Simplified Aid For EVA Rescue (SAFER) device.
The single pilot of Soyuz 1 , Vladimir Komarov , was killed when his capsule's parachutes failed during an emergency landing on 24 April 1967, causing the capsule to crash.
On 1 February 2003, the crew of seven aboard the Space Shuttle Columbia were killed on reentry after completing a successful mission in space . A wing-leading-edge reinforced carbon-carbon heat shield had been damaged by a piece of frozen external-tank foam insulation that had broken off and struck the wing during launch. Hot reentry gases entered and destroyed the wing structure, leading to the breakup of the orbiter vehicle .
There are two basic choices for an artificial atmosphere: either an Earth-like mixture of oxygen and an inert gas such as nitrogen or helium, or pure oxygen, which can be used at lower than standard atmospheric pressure. A nitrogen–oxygen mixture is used in the International Space Station and Soyuz spacecraft, while low-pressure pure oxygen is commonly used in space suits for extravehicular activity .
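The trade-off above hinges on oxygen partial pressure. As a hedged illustration (the ~30 kPa suit pressure below is an illustrative figure, not taken from this article), a pure-oxygen suit at a fraction of sea-level pressure can supply roughly the same oxygen partial pressure as ordinary air:

```python
# Sea-level air: ~21% oxygen at ~101.3 kPa total pressure
SEA_LEVEL_TOTAL_KPA = 101.3
O2_FRACTION_AIR = 0.21
p_o2_air = O2_FRACTION_AIR * SEA_LEVEL_TOTAL_KPA  # ≈ 21.3 kPa

# A pure-oxygen EVA suit at ~30 kPa total pressure (illustrative value)
SUIT_TOTAL_KPA = 30.0
p_o2_suit = 1.0 * SUIT_TOTAL_KPA  # pure O2: partial pressure equals total

print(f"O2 partial pressure, sea-level air: {p_o2_air:.1f} kPa")
print(f"O2 partial pressure, pure-O2 suit:  {p_o2_suit:.1f} kPa")
# Comparable oxygen availability despite the suit's far lower total pressure,
# which keeps the suit flexible enough for the astronaut to work in.
```

The lower total pressure is what makes a flexible suit practical, at the cost of the fire and decompression-sickness risks discussed below.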
The use of a gas mixture carries the risk of decompression sickness (commonly known as "the bends") when transitioning to or from the pure oxygen space suit environment. There have been instances of injury and fatalities caused by suffocation in the presence of too much nitrogen and not enough oxygen.
A pure oxygen atmosphere carries the risk of fire. The original design of the Apollo spacecraft used pure oxygen at greater than atmospheric pressure prior to launch. An electrical fire started in the cabin of Apollo 1 during a ground test at Cape Kennedy Air Force Station Launch Complex 34 on 27 January 1967, and spread rapidly. The high pressure, increased by the fire, prevented removal of the plug door hatch cover in time to rescue the crew. All three astronauts— Gus Grissom , Ed White , and Roger Chaffee —were killed. [ 103 ] This led NASA to use a nitrogen–oxygen atmosphere before launch, and low-pressure pure oxygen only in space.
The March 1966 Gemini 8 mission was aborted in orbit when an attitude control system thruster stuck in the on position, sending the craft into a dangerous spin that threatened the lives of Neil Armstrong and David Scott . Armstrong had to shut the control system off and use the reentry control system to stop the spin. The craft made an emergency reentry and the astronauts landed safely. The most probable cause was determined to be an electrical short due to a static electricity discharge, which caused the thruster to remain powered even when switched off. The control system was modified to put each thruster on its own isolated circuit.
The third lunar landing expedition, Apollo 13 , in April 1970, was aborted and the lives of the crew— James Lovell , Jack Swigert , and Fred Haise —were threatened after the failure of a cryogenic liquid oxygen tank en route to the Moon. The tank burst when electrical power was applied to its internal stirring fans, causing the immediate loss of all of its contents and damaging the second tank, which gradually lost its remaining oxygen over a period of 130 minutes. This in turn caused a loss of the electrical power provided by fuel cells to the command spacecraft . The crew managed to return to Earth safely by using the lunar landing craft as a "life boat". The tank failure was traced to two mistakes: the tank's drain fitting had been damaged when it was dropped during factory testing, necessitating the use of its internal heaters to boil out the oxygen after a pre-launch test; and, because the heaters' thermostats did not meet the required voltage rating due to a vendor miscommunication, that procedure damaged the fan wiring's electrical insulation.
The crew of Soyuz 11 were killed on 30 June 1971 by a combination of mechanical malfunctions; the crew were asphyxiated due to cabin decompression following the separation of their descent capsule from the service module. A cabin ventilation valve had been jolted open at an altitude of 168 kilometres (104 mi) by the stronger-than-expected shock of explosive separation bolts, which were designed to fire sequentially, but in fact had fired simultaneously. The loss of pressure became fatal within about 30 seconds. [ 104 ]
As of December 2015, 23 crew members have died in accidents aboard spacecraft. Over 100 others have died in accidents during activities directly related to spaceflight or testing.
Human spaceflight programs have been conducted, started, or planned by multiple countries and companies. Until the 21st century, human spaceflight programs were sponsored exclusively by governments, through either the military or civilian space agencies. With the launch of the privately funded SpaceShipOne in 2004, a new category of human spaceflight programs – commercial human spaceflight – arrived. By the end of 2022, three countries (Soviet Union/Russia, United States and China) and one private company (SpaceX) had successfully launched humans to Earth orbit, and two private companies (Scaled Composites and Blue Origin) had launched humans on a suborbital trajectory.
The criteria for what constitutes human spaceflight vary. The Fédération Aéronautique Internationale defines spaceflight as any flight over 100 kilometers (62 mi). In the United States, professional, military, and commercial astronauts who travel above an altitude of 80 kilometers (50 mi) are awarded the United States Astronaut Badge . This article follows the FAI definition of spaceflight.
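The two altitude thresholds mentioned above can be captured in a trivial helper (a sketch; the thresholds come from the text, while the function name is hypothetical):

```python
FAI_KARMAN_LINE_KM = 100    # Fédération Aéronautique Internationale definition
US_ASTRONAUT_BADGE_KM = 80  # US threshold for awarding astronaut wings

def qualifies_as_spaceflight(apogee_km: float) -> dict:
    """Classify a flight's apogee under the two definitions in the text."""
    return {
        "fai": apogee_km > FAI_KARMAN_LINE_KM,
        "us_badge": apogee_km > US_ASTRONAUT_BADGE_KM,
    }

# An X-15-style flight to 90 km earns US astronaut wings
# but falls short of the FAI line:
print(qualifies_as_spaceflight(90))   # {'fai': False, 'us_badge': True}
```

This gap between definitions is why some X-15 pilots, discussed below, are counted as astronauts in the United States but not under the FAI criterion.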
Programs in this section are sorted by the years when the first successful crewed spaceflight took place.
The Vostok program was the project that first succeeded in putting a person into orbit. Sergei Korolev and Konstantin Feoktistov began crewed spacecraft research in June 1956. [ 1 ] The program developed the Vostok spacecraft from the Zenit spy satellite project and adapted the Vostok rocket from an existing ICBM design. The name Vostok itself was classified until its first release to the press. By August/September 1958 a division had been formed devoted to producing the first Vostok craft. The official approval (decree) for Vostok was delayed until 22 May 1959 by competition with photo-reconnaissance programs.
Vostok 1 was the first human spaceflight . The Vostok 3KA spacecraft was launched on April 12, 1961, taking into space Yuri Gagarin , a cosmonaut from the Soviet Union . The Vostok 1 mission was the first time anyone had journeyed into outer space and the first time anyone had entered into orbit .
There were six Vostok flights in total, including the June 1963 Vostok 6 mission flown by Valentina Tereshkova , the first woman in space. Another seven Vostok flights (Vostok 7 to 13) were originally planned, running through to April 1966, but these were canceled and the components recycled into the Voskhod program , which was intended to achieve more Soviet firsts in space .
Project Mercury was the first human spaceflight program of the United States. It ran from 1959 through 1963 with the goal of putting a human in orbit around the Earth. John Glenn 's Mercury-Atlas 6 flight on 20 February 1962 was the first Mercury flight to achieve this goal. Before that, the Mercury-Redstone 3 mission had carried the first American into space, Alan Shepard ; it featured the first manual pilot control of a spacecraft and the first landing with the pilot still inside it. [ 2 ] [ 3 ]
Early planning and research was carried out by the National Advisory Committee for Aeronautics , and the program was officially conducted by the newly created NASA .
Because of their small size it was said that the Mercury spacecraft capsules were worn, not ridden. With 1.7 cubic metres (60 cu ft) of habitable volume, the capsule was just large enough for the single crew member. Inside were 120 controls: 55 electrical switches, 30 fuses and 35 mechanical levers. The spacecraft was designed by Max Faget and NASA's Space Task Group.
NASA ordered 20 production spacecraft, numbered 1 through 20, from McDonnell Aircraft Company , St. Louis, Missouri . Five of the twenty spacecraft, #10, 12, 15, 17, and 19, were not flown. Spacecraft #3 and #4 were destroyed during uncrewed test flights. Spacecraft #11 sank and was recovered from the bottom of the Atlantic Ocean after 38 years. Some spacecraft were modified after initial production (refurbished after launch abort, modified for longer missions, etc.) and received a letter designation after their number, examples 2A, 15B. Some spacecraft were modified twice; for example, spacecraft 15 became 15A and then 15B.
The North American X-15 rocket-powered aircraft was part of the X-series of experimental aircraft , initiated with the Bell X-1 , that were made for the USAF , NASA, and the USN . The X-15 set speed and altitude records in the early 1960s, reaching the edge of outer space and returning with valuable data used in aircraft and spacecraft design. It currently holds the world record for the fastest speed ever reached by a crewed aircraft. [ 4 ]
During the X-15 program, 13 of the flights (by eight pilots) met the USAF spaceflight criteria by exceeding the altitude of 50 miles (80 km), thus qualifying the pilots for astronaut status; some pilots also qualified for NASA astronaut wings . [ 5 ] [ 6 ]
The Voskhod program ( Russian : Восход , "ascent" , "dawn" ) was a Soviet human spaceflight project. Voskhod development was a follow-on to the Vostok program , recycling components left over from that program's cancellation following its first six flights. The two missions flown used the Voskhod spacecraft and rocket .
The Voskhod spacecraft was basically a Vostok spacecraft with a backup solid-fuel retrorocket added to the top of the descent module. The heavier weight of the craft was made possible by improvements to the R-7 Semyorka -derived booster. The ejection seat was removed and two or three crew couches were added to the interior at a 90-degree angle to the Vostok crew position. However, the position of the in-flight controls was not changed, so the crew had to crane their heads 90 degrees to see the instruments.
While the Vostok program was dedicated towards understanding the effects of space travel and microgravity on the human body, Voskhod's two flights were aimed towards spectacular "firsts". Cosmonaut Alexei Leonov made the first EVA ("spacewalk") during Voskhod 2 , which became the main success of the program, while putting the first multi-person crew into orbit during Voskhod 1 was the objective that initially motivated it. Once both goals were realized, the program was abandoned. This followed the change in Soviet leadership, which was less concerned about stunt and prestige flights, and allowed the Soviet designers to concentrate on the Soyuz program .
Project Gemini was the second human spaceflight program conducted by NASA. It operated between Projects Mercury and Apollo, with 10 crewed flights occurring in 1965 and 1966. Its objective was to develop techniques for advanced space travel, notably those necessary for Project Apollo, whose objective was to land humans on the Moon. Gemini missions included the first American extravehicular activity , and new orbital maneuvers including rendezvous and docking .
Gemini was originally seen as a simple extrapolation of the Mercury program, and thus early on was called Mercury Mark II . The actual program had little in common with Mercury and was superior to even Apollo in some ways . This was mainly a result of its late start date, which allowed it to benefit from much that had been learned during the early stages of the Apollo project (which, despite its later launch dates, actually began before Gemini).
The Soyuz program ( Russian : Союз , pronounced [sɐˈjus] , meaning "Union") is a human spaceflight program that was initiated by the Soviet Union in early 1967. It was originally part of a Moon landing program intended to put a Soviet cosmonaut on the Moon. Experimental or unsuccessful launches were designated as satellites in the Kosmos series, while flights of the lunar orbital ships around the Moon were given the name Zond . Both the Soyuz spacecraft and the Soyuz rocket are part of this program, which is now the responsibility of the Russian Federal Space Agency .
The basic Soyuz spacecraft design was the basis for many projects, many of which never came to light. Its earliest form was intended to travel to the Moon without employing a huge booster like the Saturn V or the Soviet N-1 by repeatedly docking with upper stages that had been put in orbit using the same rocket as the Soyuz. This and the initial civilian designs were done under the Soviet Chief Designer Sergei Pavlovich Korolev , who did not live to see the craft take flight. Several military derivatives actually took precedence in the Soviet design process, though they never came to pass.
The launch vehicles used in the Soyuz expendable launch system are manufactured at the Progress State Research and Production Rocket Space Center (TsSKB-Progress) in Samara, Russia . As well as serving as the launcher for the crewed Soyuz spacecraft , Soyuz launch vehicles are now also used to launch robotic Progress supply spacecraft to the International Space Station and commercial payloads marketed and operated by TsSKB-Progress and the Starsem company. There were 11 Soyuz launches in 2001 and 9 in 2002. Currently, Soyuz vehicles are launched from the Baikonur Cosmodrome in Kazakhstan and the Plesetsk Cosmodrome in northwest Russia. Since 2009, Soyuz launch vehicles have also been launched from the Guiana Space Centre in French Guiana . [ 7 ]
The Apollo Program was undertaken by NASA during the years 1961–1975 with the goal of conducting crewed Moon landing missions. In 1961, President John F. Kennedy announced a goal of landing a man on the Moon by the end of the decade. It was accomplished on July 20, 1969, by the landing of astronauts Neil Armstrong and Buzz Aldrin , with Michael Collins orbiting above during the Apollo 11 mission. Five other Apollo missions also landed astronauts on the Moon , the last one in 1972. These six Apollo spaceflights are the only times humans have landed on another celestial body . [ 8 ]
Apollo was the third human spaceflight program undertaken by NASA, the space agency of the United States. It used Apollo spacecraft and Saturn launch vehicles, which were later used for the Skylab program and the joint American-Soviet Apollo-Soyuz Test Project . These later programs are thus often considered to be part of the overall Apollo program.
The goal of the program, as articulated by President Kennedy, was accomplished with only two major failures. The first failure resulted in the deaths of three astronauts, Gus Grissom , Ed White and Roger Chaffee , in the Apollo 1 launchpad fire. The second was an in-space explosion on Apollo 13 , which badly damaged the spacecraft on the moonward leg of its journey. The three astronauts aboard narrowly escaped with their lives, thanks to the efforts of flight controllers, project engineers, backup crew members and the skills of the astronauts themselves.
NASA's Space Shuttle , officially called the "Space Transportation System" (STS), was a United States government crewed launch vehicle, retired from service in 2011. The winged Space Shuttle orbiter was launched vertically, usually carrying five to seven astronauts (although eight have been carried) and up to 50,000 lb (23,000 kg) of payload into low Earth orbit . When its mission was complete, the shuttle could independently move itself out of orbit by turning tail-first and firing its orbital maneuvering engines to slow down, and then re-enter the Earth's atmosphere. During descent and landing, the orbiter acted as a glider and made a completely unpowered runway landing.
The Space Shuttle was the only winged spacecraft to achieve orbit and land with crew aboard, and the first of a small number of reusable space vehicles to make multiple flights into orbit (subsequently followed by the X-37B , Cargo Dragon , and Crew Dragon ). Its missions involved carrying large payloads to various low-Earth orbits (including segments to be added to the International Space Station ), providing crew rotation for the International Space Station, and performing service missions to the Hubble Space Telescope . The orbiter could also recover satellites and other payloads from orbit and return them to Earth , but its use in this capacity was rare. However, the Space Shuttle was used to return large payloads from the ISS to Earth, as the Russian Soyuz spacecraft has limited capacity for return payloads. Each vehicle was designed with a projected lifespan of 100 launches, or 10 years' operational life.
China was the first Asian country and third nation in the world, after the USSR and USA, to send humans into space. During the Space Race between the two superpowers, which culminated with Apollo 11 landing humans on the Moon, Mao Zedong and Zhou Enlai decided on 14 July 1967 that China should not be left behind, and initiated their own crewed space program: the top-secret Project 714, which aimed to put two people into space by 1973 with the Shuguang spacecraft. Nineteen PLAAF pilots were selected for this goal in March 1971. The Shuguang-1 spacecraft, to be launched with the CZ-2A rocket, was designed to carry a crew of two. The program was officially cancelled on 13 May 1972 for economic reasons.
A second, short-lived crewed program was based on the successful implementation of landing technology by FSW satellites . It was announced a few times in 1978 with the publishing of some details, including photos, but then was abruptly canceled in 1980. It has been argued that the second crewed program was created solely for propaganda purposes, and was never intended to produce results. [ 9 ]
In 1992, under China Manned Space Program (CMS), also known as "Project 921", authorization and funding was given for the first phase of a third, successful attempt at crewed spaceflight. To achieve independent human spaceflight capability, China developed Shenzhou spacecraft and Long March 2F rocket dedicated for human spaceflight in the next few years, along with critical infrastructures like new launch site and flight control center being built. The first uncrewed spacecraft, Shenzhou 1 , was launched on 20 November 1999 and recovered the next day, marking the first step of the realization of China's human spaceflight capability. Three more uncrewed missions were conducted in the next few years in order to verify the key technologies. On 15 October 2003 Shenzhou 5 , China's first crewed spaceflight mission, put Yang Liwei in orbit for 21 hours and returned safely back to Inner Mongolia , making China the third nation to launch a human into orbit independently. [ 10 ]
Virgin Galactic is a company within Sir Richard Branson 's Virgin Group that is developing privately funded spacecraft, SpaceShipOne and SpaceShipTwo , in conjunction with Scaled Composites , to offer sub-orbital spaceflights, and later orbital spaceflights, to the paying public. SpaceShipOne reached space with a pilot on three test flights in 2004.
Tier One is Scaled Composites ' program of suborbital human spaceflight using the reusable spacecraft SpaceShipOne and its launcher White Knight . The craft were designed by Burt Rutan , and the project was funded with US$20 million by Paul Allen . In 2004 it made the first privately funded human spaceflight and won the US$10 million Ansari X Prize for the first non-governmental reusable crewed spacecraft.
The objective of the project is to develop technology for low-cost routine access to space. Tier One is not itself intended to carry paying passengers, but it is envisioned that there will be commercial spinoffs, initially in space tourism . The company Mojave Aerospace Ventures was formed to manage commercial exploitation of the technology. A deal with Virgin Galactic could see routine space tourism, using a spacecraft based on Tier One technology.
The model finally developed into SpaceShipTwo , Virgin Galactic 's second-generation suborbital vehicle. On 10 October 2010, VSS Enterprise , the first SpaceShipTwo spaceplane, made its first crewed gliding test flight. By October 2014 SpaceShipTwo had conducted 54 test flights. [ 11 ] On October 31, 2014, SpaceShipTwo VSS Enterprise suffered an in-flight breakup during a powered flight test, [ 12 ] [ 13 ] resulting in a crash that killed one pilot and injured the other. The second SpaceShipTwo, VSS Unity , made its first flight tests in 2016. [ 14 ] VSS Unity made its first spaceflight (according to the U.S. definition of space) on December 13, 2018, marking the end of the "shuttle gap". VSS Unity made its second spaceflight on February 22, 2019.
The Commercial Crew Program is an economic stimulus program that funds technology development related to human spaceflight by private companies. In September 2014 NASA awarded contracts to SpaceX and Boeing to build crewed spacecraft for low Earth orbit operations. Dragon 2 , the capsule developed by SpaceX, is listed under "successful programs" as it first launched humans to space in May 2020.
The SpaceX Dragon 2 is a development of the robotic Dragon cargo spacecraft which has been re-supplying the International Space Station since 2010. The spacecraft is able to carry a crew of four astronauts to the International Space Station, with a planned maximum capacity of seven. [ 15 ] It includes a set of four side-mounted thruster pods with two SuperDraco engines each as Launch Abort System (LAS).
To develop Dragon 2, SpaceX conducted a "pad abort" test in May 2015. A one-week uncrewed orbital flight to the ISS occurred in March 2019, [ 16 ] and an in-flight abort test was successfully conducted on 19 January 2020. A crewed demonstration mission to the ISS launched on 30 May 2020. [ 17 ] The first operational crewed mission, Crew-1 , flew to the ISS in November 2020 for a six-month stay. [ 18 ] Dragon 2 has also flown Inspiration4 , the first purely private mission to Earth orbit.
The Boeing Starliner is a class of space capsules under construction by Boeing to transport crew to the International Space Station , [ 19 ] and to private space stations such as the proposed Bigelow Aerospace Commercial Space Station . [ 20 ] The Starliner is to support larger crews of up to seven people. The Starliner is designed to be able to remain on-orbit for up to seven months and for reusability for up to ten missions.
Starliner made an uncrewed test flight in December 2019 but failed to reach the ISS. Another uncrewed flight was launched in May 2022, [ 21 ] followed by a final crewed certification demonstration flight, intended to make Starliner operational, in June 2024.
The New Shepard is a reusable, vertical-takeoff, vertical-landing (VTVL) suborbital launch system and crewed spacecraft developed by Blue Origin , a company owned by Amazon.com founder Jeff Bezos , which has flown humans to space since 2021. It is a commercial system for suborbital space tourism . [ 22 ] The name New Shepard references the first United States astronaut in space, Alan Shepard . [ 23 ]
The first flight of the New Shepard vehicle was conducted on 29 April 2015 during which an altitude of 93,500 meters (307,000 ft) was attained. While the test itself was deemed a success and the capsule was correctly recovered via parachute landing, the booster stage landing failed because hydraulic pressure was lost during the descent. [ 24 ] [ 25 ] Twelve subsequent flights (through January 2019), including two in-flight abort tests, took place with safe landings of both capsule and booster with two additional vehicles. New Shepard first flew humans to space on 20 July 2021 with the NS-16 mission.
(Dates refer to periods when stations were inhabited by crews.)
The Salyut program was the world's first space station program undertaken by the Soviet Union , which consisted of a series of four crewed scientific research space stations and two crewed military reconnaissance space stations over a period of 15 years from 1971 to 1986. Two other Salyut launches failed. Salyut was, on the one hand, designed to carry out long-term research into the problems of living in space and a variety of astronomical, biological and Earth-resources experiments, and on the other hand this civilian program was used as a cover for the highly secretive military Almaz stations, which flew under the Salyut designation. Salyut 1 , the first station in the program, became the world's first crewed space station. Salyut broke several spaceflight records , including several mission duration records, the first ever orbital handover of a space station from one crew to another, and various spacewalk records. The program went through various changes.
Skylab was launched and operated by NASA and was the United States ' first space station. Skylab orbited Earth from 1973 to 1979, and included a workshop, a solar observatory, and other systems. It was launched uncrewed by a modified Saturn V rocket, with a weight of 169,950 pounds (77,090 kg). Three crewed missions to the station, conducted between 1973 and 1974 using the Apollo command and service module (CSM) atop the smaller Saturn IB , each delivered a three-astronaut crew. On the last two crewed missions, an additional Apollo / Saturn IB stood by ready to rescue the crew in orbit if it was needed.
Mir was the first modular space station and was assembled in orbit from 1986 to 1996. It had a greater mass than any previous spacecraft. Until 21 March 2001 it was the largest artificial satellite in orbit, succeeded by the International Space Station after Mir 's orbit decayed . The station served as a microgravity research laboratory in which crews conducted experiments in biology , human biology , physics , astronomy , meteorology and spacecraft systems with a goal of developing technologies required for permanent occupation of space.
Mir was the first continuously inhabited long-term research station in orbit and set the record for the longest continuous human presence in space at 3,644 days until 23 October 2010 when it was surpassed by the ISS . [ 26 ] It holds the record for the longest single human spaceflight, with Valeri Polyakov spending 437 days and 18 hours on the station between 1994 and 1995. Mir was occupied for a total of twelve and a half years out of its fifteen-year lifespan, having the capacity to support a resident crew of three, or larger crews for short term visits. Mir had 28 long duration crews .
The International Space Station (ISS) is a space station in low Earth orbit . Its first component launched into orbit in 1998, and the ISS is now the largest artificial body in orbit and can often be seen with the naked eye from Earth. [ 27 ] The ISS consists of pressurized modules, external trusses, solar arrays and other components. ISS components have been launched by Russian Proton and Soyuz rockets as well as American Space Shuttles . [ 28 ]
The ISS program is a joint project among five participating space agencies: NASA , Roscosmos , JAXA , ESA , and CSA . [ 29 ] [ 30 ] The ownership and use of the space station is established by intergovernmental treaties and agreements. [ 31 ] The station is divided into two sections, the Russian Orbital Segment (ROS) and the United States Orbital Segment (USOS), which is shared by many nations. The American portion of ISS was funded until 2024. [ 32 ] [ 33 ] [ 34 ] Roscosmos has also endorsed the continued operation of ISS through 2024, [ 35 ] but has proposed subsequently using elements of the Russian Orbital Segment to construct a new Russian space station called OPSEK . [ 36 ]
As of May 2022 there have been 66 long duration crews .
In 2011, China launched the Tiangong 1 target spacecraft and Shenzhou 8 uncrewed spacecraft. The two spacecraft completed China's first automatic rendezvous and docking on 3 November 2011. [ 37 ] About 9 months later, Tiangong 1 completed the first manual rendezvous and docking with Shenzhou 9 , which carried China's first female astronaut Liu Yang . [ 38 ]
In September 2016, Tiangong 2 was launched into the orbit. It was a space laboratory with more advanced functions and equipment than Tiangong 1 . A month later, Shenzhou 11 was launched and docked with Tiangong 2 . Two astronauts entered Tiangong 2 and stationed for about 30 days and verified the viability of astronauts' medium-term stay in space. [ 39 ] In April 2017, China's first cargo spacecraft, Tianzhou 1 docked with Tiangong 2 and completed multiple in-orbit propellant refueling tests. [ 39 ]
The goal of the next phase of China Manned Space Program is to build China's own space station, Tiangong . [ 40 ] The first module of Tiangong , the Tianhe core module , was launched into orbit by China's most powerful rocket Long March 5B on 29 April 2021. [ 41 ] It was later visited by multiple cargo and crewed spacecraft and demonstrated China's capability of sustaining Chinese astronauts' long-term stay in space.
According to CMS announcement, all missions of Tiangong Space Station are scheduled to be carried out by the end of 2022. [ 42 ] Once the construction is completed, Tiangong will enter the application and development phase, which is poised to last for no less than 10 years. [ 42 ]
Programs in this section are sorted by the years when their development started.
The Dream Chaser was originally intended to be an American reusable crewed suborbital and orbital lifting-body spaceplane, developed and privately funded by Sierra Nevada Corporation (SNC) Space Systems. It is now planned to be a robotic cargo transport to the ISS. Prior to the decision to transition to a robotic platform, the Dream Chaser was designed to carry up to seven people to and from low Earth orbit. The vehicle would launch vertically on an Atlas V rocket and land horizontally on conventional runways. On 26 October 2013, the first glide flight occurred. An initial orbital test flight of the Dream Chaser orbital test vehicle was planned for 1 November 2016, [ 43 ] a date that was not met. On 3 February 2015, Sierra Nevada Corporation's (SNC) Space Systems and OHB System AG (OHB) in Germany announced the completion of the initial Dream Chaser for European Utilization (DC4EU) study. [ 44 ]
The Indian Human Spaceflight Programme (HSP) of the Indian Space Research Organisation (ISRO) plans to develop and launch a crewed spacecraft, named Gaganyaan , to low Earth orbit no earlier than 2025. [ 45 ] [ 46 ]
Copenhagen Suborbitals is an amateur, crowd-funded human space programme. Since its beginning in 2008, Copenhagen Suborbitals has flown five home-built rockets and two mock-up space capsules. Its stated goal is to have one of its members fly into space (above 100 km) on a sub-orbital spaceflight , in a space capsule on the Spica rocket.
HEAT 1X Tycho Brahe was the first rocket and spacecraft combination built by Copenhagen Suborbitals , a Danish organization attempting to perform the first amateur suborbital crewed spaceflight. The vehicle consisted of a motor named HEAT-1X and a spacecraft Tycho Brahe. [ 47 ]
In 2014, Copenhagen Suborbitals settled on the basic design for their first crewed rocket and space capsule. The rocket will be named Spica, and will stand 12–14 m tall with a diameter of 950 mm. It will be powered by the BPM-100 engine class, using liquid oxygen as oxidizer and ethanol as fuel, producing 100 kilonewtons of thrust.
Formerly called PPTS (Prospective Piloted Transport System) and Federation ( Russian : Федерация , Federatsiya ), Orel is a new multi-purpose Russian spacecraft for LEO, ISS and lunar missions. The spacecraft, when revealed in 2015, resembled NASA's Orion capsule and had a set of soft-landing legs similar to the plans for Dragon 2 at that time. An uncrewed flight is planned in 2024. [ 48 ]
New Glenn is an orbital launch vehicle under development by Blue Origin . The company expects a first flight no earlier than 2023. [ 49 ] Like New Shepard , the first stage is designed to land vertically so it can be reused. It can launch either a cargo or a crew capsule to space. [ 50 ]
The SpaceX Starship is a fully reusable super heavy-lift launch vehicle [ 51 ] under development by SpaceX since 2012, as a self-funded private spaceflight project. [ 52 ] [ 53 ] [ 54 ]
The second stage of the Starship [ 55 ] : 16:20–16:48 is designed as a long-duration cargo and passenger-carrying spacecraft. In 2020 and 2021 it was tested without a booster stage as part of the development program to get launch and landing working and iterate on a variety of design details, particularly with respect to the vehicle's atmospheric reentry . [ 54 ] [ 56 ]
The Iranian crewed spacecraft is a proposal by the Iranian Aerospace Research Institute of the Iranian Space Research Center (ISRC) to put an astronaut into space. The details of the design were published by the institute in its "Astronaut" publication in February 2015. [ 57 ] A mock-up of the spaceship was displayed on 17 February 2015 during the ceremony of the national day of space of Iran. [ 58 ] The head of the institute announced that the spaceship would be launched to space in about a year. [ 59 ] [ 60 ] The spaceship is supposed to be able to carry a single astronaut to an altitude of 175 km and return him to Earth. The spaceship is designed under the code name "Class E Kavoshgar" project. Through December 2022, no further details have been published and no crewed launches have occurred.
The Artemis program is an ongoing crewed spaceflight program carried out by NASA , U.S. commercial spaceflight companies , and international partners such as ESA , [ 61 ] with the goal of landing "the first woman and the next man" on the lunar south pole region by 2025. Artemis would be the first step towards the long-term goal of establishing a sustainable presence on the Moon, laying the foundation for private companies to build a lunar economy, and eventually sending humans to Mars .
Artemis I was the first mission of the Artemis Program and was the first integrated flight of the Space Launch System and the Orion spacecraft. During the mission, an uncrewed Orion capsule spent 10 days in a 40,000 mi (64,000 km) distant retrograde orbit around the Moon before returning to Earth. [ 62 ]
Artemis II , the first crewed mission of the program, is planned to launch four astronauts in May 2024 [ 63 ] on a free-return flyby of the Moon at a distance of 4,000 miles (6,400 km). [ 64 ] [ 65 ]
After Artemis II, the Power and Propulsion Element of the Lunar Gateway and three components of an expendable lunar lander are planned to be delivered on multiple launches from commercial launch service providers . [ 66 ]
Artemis III is planned to be the maiden flight of the SLS Block 1B and will use the minimalist Gateway and expendable lander to achieve the first crewed lunar landing of the program. The flight is planned to touch down on the lunar south pole region, with two astronauts staying there for about one week. [ 67 ] [ 68 ] [ 69 ] [ 70 ]
Programs in this section are sorted by the years when their development started.
Man In Space Soonest was a United States Air Force program to put an American astronaut in orbit. It was canceled when NASA was formed in August 1958.
The X-20 Dyna-Soar (Dynamic Soarer) was a United States Air Force program to develop a crewed spaceplane that could be used for a variety of military missions, including reconnaissance, bombing, space rescue, satellite maintenance, and sabotage of enemy satellites. The program ran from 24 October 1957 to 10 December 1963 and was canceled just after spacecraft construction had begun.
The Manned Orbital Development System (MODS) was a project by the Air Force Space Systems Division (SSD), which began working on plans to use Gemini hardware as the first step in a new US Air Force man-in-space program: a type of military space station that used Gemini spacecraft as ferry vehicles. MODS was effectively superseded when the Manned Orbiting Laboratory was announced in December 1963.
Nicknamed "Battlestar Khrushchev" in the West, this was a nuclear-armed monolithic station, about five times the volume of Salyut 1 and as heavy as Skylab. The station was designed for a crew of six and proceeded to the mock-up stage before cancellation.
The Manned Orbiting Laboratory ( MOL ) was part of the United States Air Force 's crewed spaceflight program, a successor to the canceled X-20 Dyna-Soar project. It was announced to the public on the same day that the Dyna-Soar program was canceled, 10 December 1963. The program was redirected in the mid-1960s and developed as a space station used for reconnaissance purposes. The space station used a spacecraft derived from NASA 's Gemini program . The project was canceled on 10 June 1969 before there were any crewed flights.
In accordance with the five-year plan of the Soviet air forces, the Spiral program to develop a two-stage launcher plane began in 1965 and was entrusted to OKB-155 of A.I. Mikoyan, whose chief of the engineering and design department was Gleb Lozino-Lozinskiy. The project received the name Spiral and was to prepare the Soviet Union for a war in space. [ 73 ]
The TKS spacecraft (Russian: Транспортный корабль снабжения, Transportnyi Korabl Snabzheniia, Transport Supply Spacecraft, GRAU index 11F72) was a Soviet spacecraft conceived in the late 1960s for resupply flights to the military Almaz space station. The spacecraft was designed for both crewed and autonomous uncrewed cargo resupply flights, but was never used operationally in its intended role – only four test missions were flown (including three that docked to Salyut space stations) during the program. The Functional Cargo Block (FGB) of the TKS spacecraft later formed the basis of several space station modules, including the Zarya FGB module on the International Space Station .
The Soviet Buran program was a reusable spaceplane project begun in 1976 at TsAGI as a response to the United States Space Shuttle program . It had only one orbital flight, an uncrewed test, before cancellation. In the process it became the first spaceplane to land autonomously. [ 74 ]
The Shuguang program was the first Chinese crewed space program with plans to launch two astronauts by 1973.
The Piloted FSW program was the second Chinese crewed space program based on the successful achievement of landing technology (third in the world after USSR and USA) by FSW satellites .
The Saenger was a proposed two stage to orbit vehicle. Air-breathing hypersonic first stage and delta wing second stage. The German Hypersonics Program and its Saenger II reference vehicle received most of the domestic funding for spaceplane development in the late 1980s and early 1990s. [ 75 ] In 1995, the project was discontinued primarily due to concerns of development costs and limited gains in price and performance compared to the existing space launch systems such as the Ariane 5 rocket. [ 76 ]
HOTOL , for Horizontal Take-Off and Landing, was a 1980s British design for a single-stage-to-orbit (SSTO) spaceplane that was to be powered by an airbreathing jet engine. Development was being conducted by a consortium led by Rolls-Royce and British Aerospace (BAe).
The Zarya spacecraft was a secret Soviet project of the late 1980s aiming to design and build a large, crewed, vertical takeoff, vertical landing (VTVL) reusable space capsule, [ 77 ] a much larger replacement for the Soyuz (spacecraft) . The project was shelved in 1989, shortly before the Soviet Union's collapse.
The Rockwell X-30 was an advanced technology demonstrator project for the National Aero-Space Plane (NASP), part of a United States project to create a single-stage-to-orbit (SSTO) spacecraft and passenger spaceliner. See also List of X-planes .
Hermes was a proposed spaceplane designed by the French Centre National d'Études Spatiales (CNES) in 1975, and later by the European Space Agency (ESA). It was superficially similar to the American Boeing X-20 Dyna-Soar and the larger Space Shuttle .
The MAKS (Russian: МАКС (Многоцелевая авиационно-космическая система), Multipurpose aerospace system) was a Soviet air-launched reusable launch system project with orbiter that was proposed in 1988 but canceled in 1991.
HOPE-X was a Japanese experimental spaceplane project designed by a partnership between NASDA and NAL (both now part of JAXA), started in the 1980s. It was positioned for most of its lifetime as one of the main Japanese contributions to the International Space Station, the other being the Japanese Experiment Module. The project was eventually canceled in 2003, by which point test flights of a sub-scale testbed had flown successfully.
The Russian Aerospace Aircraft ( RAKS ) is being created within the framework of the research work (SRW) "Orel" commissioned by the Russian Aerospace Agency since 1993. [ 78 ] [ needs update ]
The Kankoh-maru (観光丸, Kankōmaru ) is the name of a proposed vertical takeoff and landing (VTVL), single-stage-to-orbit (SSTO), reusable launch system (rocket-powered spacecraft).
The Ansari X Prize was a space competition in which the X Prize Foundation offered a US$10,000,000 prize for the first non-government organization to launch a reusable crewed spacecraft into space twice within two weeks. Twenty-six teams from around the world participated, ranging from volunteer hobbyists to large corporate-backed operations. The prize was won by Scaled Composites ' Tier One project. The other teams stopped work or, like ARCA Space Corporation , switched to other, more immediate goals.
VentureStar was a single-stage-to-orbit reusable launch system proposed by Lockheed Martin and funded by the U.S. government. The goal was to replace the Space Shuttle by developing a reusable spaceplane that could launch satellites into orbit at a fraction of the cost.
Fuji (ふじ) was a crewed spacecraft of the space capsule kind, proposed by Japan's National Space Development Agency (NASDA) Advanced Mission Research Center in December 2001. The Fuji design was ultimately not developed.
Hopper was a proposed European Space Agency orbital and reusable launch vehicle. The prototype spaceplane was one of several proposals for a European reusable launch vehicle (RLV) planned to ferry satellites into orbit cheaply by 2015. There were no launches.
Kliper (Russian: Клипер, Clipper) was a partly reusable crewed spacecraft concept, proposed in the early 2000s by RSC Energia. Due to lack of funding from the ESA and RSA, the project was indefinitely postponed by 2006.
Project Constellation , NASA 's intended successor to the Space Shuttle, was a program to develop new craft and their respective delivery systems for increased operation in space. It was primarily intended to facilitate missions for International Space Station resupply, lunar landing , and related objectives.
The Constellation program was canceled in 2010 and replaced with the Artemis program based on the Space Launch System . [ 79 ] [ 80 ] [ 81 ] [ 82 ] [ 83 ]
The XCOR Lynx is a suborbital horizontal-takeoff, horizontal-landing (HTHL), rocket-powered spaceplane under development by the California-based company XCOR Aerospace to compete in the emerging suborbital spaceflight market. The Lynx is projected to carry one pilot, a ticketed passenger, and/or a payload above 100 km altitude. The Mark I test model will reach only 200,000 feet (61 km); the Mark II production model will be sub-orbital.
According to a September 2015 report, the first flight of the Lynx spaceplane was proposed to be in the second quarter of 2016 from Midland, Texas, [ 84 ] but the company halted spaceplane development in May 2016 and refocused on its LOX/H2 engine technology. [ 85 ]
Unilever 's Axe Apollo Space Academy marketing campaign which was launched in 2013 was also affected by the cancellation of the XCOR Lynx. The campaign included an astronaut selection contest where 23 winners would be given suborbital spaceflights on board the Lynx.
The Orbital Piloted Assembly and Experiment Complex (abbreviated OPSEK) [ 86 ] [ 87 ] was a proposed third-generation modular space station in Low Earth orbit . OPSEK would initially consist of modules from the Russian Orbital Segment of the International Space Station (ISS) from 2024. It would then add new modules to it. It was canceled in 2017.
Human systems engineering ( HSE ) is a field based on systems theory intended as a structured approach to influencing the intangible reality in organizations in a desirable direction. HSE claims to turn complexity into an advantage, to ease innovation processes in organizations and to master problems rooted in negative emotions and a lack of motivation. It is taught in the Master of Advanced Studies program of the University of Applied Sciences Western Switzerland (HES-SO) as a complementary and postgraduate program for students who have already achieved a bachelor level or an MBA.
Recently, after the crisis of the Swiss banking system due to whistle-blowing and the theft and sale of sensitive data to intelligence services by bank personnel, numerous articles featured "human risks" as a major problem in organisations. According to de:Lutz von Rosenstiel , [ 1 ] the "lack of meaning" and conflicts between personal and organisational value systems are increasingly becoming a problem; people no longer feel that they "belong" to an organization if every relation is seen as a commercial interaction. Chris Argyris sees the same problem from the point of view of learning interactions between the organization and its personnel: the organization expects its personnel to learn in order to fulfil their jobs, but is not prepared to learn from its personnel through double-loop learning . [ 2 ]
To handle these issues, in HSE the organization is seen as a living system according to J.G. Miller's theory of open and self-organizing systems. [ 3 ] In HSE, the three systemic levels "individual", "group" and "organization" are considered the main entities and targets to influence, whereas the levels "society" and "supranational system" supply the criteria for a positive insertion of the organization into its environment. This approach is intended to help managers understand the organization as a complex and organic system in which functional relations, hierarchy and processes are only the visible and tangible part of the "iceberg". HSE claims the invisible part is as important as the tangible and structural aspects of organizations, and sees the invisible as the unconscious part of both the individual and the organization as a collective entity. Fritjof Capra describes the subtle interactions between the tangible and the invisible in one of his books. [ 4 ]
From an epistemological point of view HSE refers explicitly to Edgar Morin 's proposal to link sciences and practices [ 5 ] and to Jean Piaget 's concept of " transdisciplinarity ".
As a result of the program, human risks and the resources deriving from a positive interaction are now better understood. [ 6 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ] Because master's theses in this part of the Swiss academic system ( University of Applied Sciences ) include applications of course content and learned skills, HSE is becoming increasingly popular among private and public organizations when resolving problems on the intangible side becomes critical for survival or success. Further, seven cohorts of students who achieved their master's degree "teach the gospel" in and around Switzerland.
HSE was first launched in French in 2002 at the University of Applied Sciences Western Switzerland and in German in 2004 at the Zurich University of Applied Sciences .
The program was founded in 2002 by two professors of the university.
The program in Zürich was abandoned after a few years. In 2018, the French program was renamed "Développement Humain dans les Organisations" (human development in organizations). [ 13 ]
Recently, the Massachusetts Institute of Technology started using the term "Human Systems Engineering" in its Engineering Systems Division, putting the focus on how people and organizations conceive, develop and handle technological systems. Specialised courses focus on topics such as "The Human Side of Technology" or more human-risk-oriented subjects such as "Managing Nuclear Technology".
A similar program exists at Concordia University in Canada: Human Systems Intervention . [ 14 ]
Several departments and academic programs at the Georgia Institute of Technology focus on Human Systems Engineering and related theory, namely their master's program in Human-Computer Interaction, which explores intersections of industrial design, psychology, interactive computing, and media, [ 15 ] and the professional education short course "Introduction to Human Systems Integration", [ 16 ] which considers Human Systems Engineering as it relates to addressing human-related issues in design. The Georgia Tech Research Institute , the applied research arm of the Georgia Institute of Technology, houses a Human Systems Engineering branch that focuses primarily on applications of Human Systems Engineering in the defense domain. [ 17 ]
Human systems integration ( HSI ) is an interdisciplinary managerial and technical approach to developing and sustaining systems which focuses on the interfaces between humans and modern technical systems. [ 1 ] [ 2 ] The objective of HSI is to provide equal weight to human, hardware, and software elements of system design throughout systems engineering and lifecycle logistics management activities across the lifecycle of a system. The end goal of HSI is to optimize total system performance and minimize total ownership costs. [ 3 ] The field of HSI integrates work from multiple human-centered domains of study, including training, manpower (the number of people), personnel (the qualifications of people), human factors engineering, safety, occupational health, survivability and habitability. [ 4 ]
HSI is a total systems approach that focuses on the comprehensive integration across the HSI domains, and across systems engineering and logistics support processes. [ 2 ] [ 3 ] The domains of HSI are interrelated: a focus on integration allows tradeoffs between domains, resulting in improved manpower utilization, reduced training costs, reduced maintenance time, improved user acceptance, decreased overall lifecycle costs, and a decreased need for redesigns and retrofits. [ 3 ] An example of a tradeoff is the increased training costs that might result from reducing manpower or increasing the necessary skills for a specific maintenance task. [ 3 ] HSI is most effective when it is initiated early in the acquisition process, when the need for a new or modified capability is identified. Application of HSI should continue throughout the lifecycle of the system, integrating HSI processes alongside the evolution of the system. [ 3 ] [ 4 ] [ 5 ]
HSI is an important part of systems engineering projects. [ 2 ] [ 6 ]
The US Navy initiated the Military Manpower versus Hardware (HARDMAN) Methodology in 1977 to address problems with manpower, personnel and training in the service. [ 7 ] In 1980, the National Academies of Sciences, Engineering, and Medicine established the Committee on Human Factors, which was later renamed the Committee on Human Systems Integration. [ 8 ] The modern concept of human systems integration in the United States originated in 1986 as a US Army program called the Manpower and Personnel Integration (MANPRINT) program. [ 9 ] With ties to the academic fields of industrial engineering and experimental psychology, MANPRINT incorporated human factors engineering with the manpower, personnel and training domains into an integrated discipline. [ 10 ] MANPRINT focused on the needs and capabilities of the soldier during the development of military systems, and framed a human-centered focus in six domains: human factors engineering, manpower, personnel, training, health hazards and system safety. [ 11 ] The US Marine Corps, a component of the Navy, implemented aspects of both the HARDMAN and MANPRINT programs to achieve HSI objectives, issuing a formal HSI policy in Marine Corps Order 5000.22 in 1994. [ 12 ] The US Air Force began an HSI program in 1982 as "IMPACTS". [ 10 ] Modern HSI programs abandoned early acronyms such as HARDMAN, MANPRINT and IMPACTS as their programs developed. [ 13 ] [ 14 ] For example, the Air Force currently manages HSI through the Air Force Office of Human Systems Integration (AFHSIO). The US Coast Guard implemented an HSI program in 2000 [ 15 ] in the strategy and HR capability division (CG-1B) of the human resources directorate. The US Department of Homeland Security initiated an HSI program under the Science and Technology Directorate in 2007, and the Transportation Security Administration (TSA) initiated a focused HSI effort under the umbrella of DHS S&T in 2018.
[ 15 ] The Federal Railroad Administration and NASA Ames Research Center also address HSI. The United Kingdom, Canada, [ 16 ] Australia and New Zealand have HSI programs similarly rooted in human factors and modeled after the Army MANPRINT program. [ 10 ] In Europe, HSI is known as Human Factors Integration .
DoD acquisition policy to formalize manpower, personnel, training and safety processes started in 1988. [ 12 ] HSI as a distinct focus area was first addressed in the Operation of the Defense Acquisition System (DODINST 5000.02) issued in 2003. [ 17 ] Updated in 2008, this policy expanded the six domains in the MANPRINT program to seven, re-focusing system safety as safety and occupational health, and adding habitability and survivability to the list. [ 18 ] In 2010, the National Academy of Sciences Committee on Human Systems Integration was transitioned to a board under the Division of Behavioral and Social Sciences and Education. The Board on Human Systems Integration (BOHSI) issues consensus studies, reports and proceedings on HSI research and application. [ 8 ] A 2013 update of the DODINST 5000.02 added force protection to the survivability domain. [ 19 ] In 2020, the DODINST 5000.02 title and content shifted to the "Operation of the Adaptive Acquisition Framework", which describes HSI activities tailored to each acquisition pathway, according to the unique characteristics of the capability being acquired. [ 20 ]
The Defense Acquisition Guidebook, first published in 2002, [ 21 ] devotes an entire chapter to manpower planning and HSI. In addition to focused discussion on each domain, the DAG emphasizes viewing HSI from a total system perspective, viewing the human components of a system as integral to the total system as any other component or subsystem. The DAG emphasizes the importance of representing HSI in all aspects of programmatic Integrated Product and Process Development, strategic planning and risk management. [ 3 ]
The Standard Practice for Human Systems Integration ( SAE 6906 ) was issued in 2019 and defines standard practices for procurement activities related to HSI. The standard is provided for industry to apply HSI from system design through disposal and all related activities. It includes an overview of HSI and its domains, the domain relationships and tradeoffs, systems development process requirements, and a number of technical standard references. [ 22 ]
ASTM F1337-10 Standard Practice for Human Systems Integration Program Requirements for Ships and Marine Systems, Equipment and Facilities
DI-HFAC 81743 Human Systems Integration Program Plan
The INCOSE Systems Engineering Handbook provides an authoritative reference to understand the discipline of systems engineering for students and practicing professionals. [ 6 ] The human part of the system is associated with systems engineering activities from start to finish: from requirements development, to architectural design processes, verification, validation and operation. [ 6 ] HSI is integral to the systems engineering process, and must be addressed in all program-level integrated development product teams at program, technical, design, and decision reviews throughout the lifecycle of the system. [ 1 ] The guidebook focuses on the integration of HSI into SE processes, and notes that an intuitive understanding of the important role of the human as an element of a system is not enough to achieve HSI-related cost and performance objectives. HSI assists engineers through the addition of human-centered domain specialists and integrators who ensure that human considerations such as usability, safety and health, maintainability and trainability are accounted for using systematic methodologies grounded in each human-centered domain. [ 6 ]
HSI trade studies and analyses are key methods of HSI that often result in insights not otherwise realized in systems engineering. [ 6 ] The INCOSE Systems Engineering Guidebook recommends a number of steps to effectively incorporate HSI into systems engineering processes. [ 6 ]
HSI interacts with a number of SE activities: [ 1 ] [ 6 ]
Planning and management for cost and performance across the lifecycle of a system are accomplished through lifecycle logistics and integrated product support. [ 23 ] These activities ensure that the system will meet sustainment objectives and satisfy user requirements. [ 24 ] Product support management covers three focus areas: lifecycle management, technical management and infrastructure management. The HSI domains of training, manpower and personnel fall under infrastructure management and are among the twelve elements of logistics / product support. [ 25 ] Design interface, another of the twelve elements of logistics / product support, is a subcategory of technical management and includes multiple domains of HSI, including human factors, personnel, habitability, training, safety and occupational health. [ 26 ]
Design interface (including HSI) is the integration of quantitative systems design characteristics with functional integrated product support elements. In this element of logistics, the systems design parameters drive product support resource requirements. Product support requirements are derived to ensure the system meets availability goals, balancing design and support costs. Design interface is a leading activity that impacts all other logistics / product support elements. [ 23 ] Reliability and maintainability are aspects of design interface that have ties to manpower, personnel and training. Maintainability is a measure of the ease and speed with which a piece of equipment or system can be restored to full functionality after a failure; it is a function of design, personnel availability and skill levels, maintenance procedures, training and test equipment. Low maintainability may increase manpower, personnel and training costs over the lifecycle of the system. [ 23 ] Human factors engineering and usability play an important role in requirements development, definition, design development and evaluation of system support for reliability and maintainability in the operational environment. Safety and occupational health are important aspects of product support: injury, accidental equipment damage, chronic injuries and long-term health problems reduce supportability, reliability and availability. [ 23 ]
Human Factors Engineering (HFE) is an engineering discipline that ensures human capabilities and limitations in areas such as perception, cognition, and sensory and physical attributes are incorporated into requirements and design. [ 3 ] Effective HFE ensures that system design capitalizes on, and does not exceed, the abilities of the human user population. [ 3 ] HFE can reduce the scope of manpower and training requirements, and ensure the system can be operated, maintained and supported by users in a habitable, safe and survivable manner. [ 3 ] HFE is concerned with designing human-systems interfaces such as: [ 3 ]
Technical standards and requirements:
HFE Data Information Descriptions :
Manpower focuses on evaluating and defining the right mix of personnel (sometimes referred to as "spaces") for people to operate, maintain and support a system. Manpower requirements should be based on task analysis and consider workload, fatigue, physical and sensory overload, environmental conditions (heat/cold) and reduced visibility. Manpower requirements are the highest cost driver for a system, and can account for up to 70% of the total lifecycle cost. [ 3 ] Requirements are based on the full range of operations from a low operational tempo, peacetime scenario to continuous sustained operations, and should include consideration for surge operations capacity. In the manpower analysis process, labor-intensive "high driver tasks" should be examined, and targeted for engineering design changes to reduce the manpower requirement through automation, or improved usability in design. A top down functional analysis can be the basis for determinations of which functions can be eliminated, consolidated, or simplified to control manpower costs. [ 3 ]
DoD manpower policy comes from DoD Directive 1100.4, Guidance for Manpower Management
The personnel domain is concerned with the human performance characteristics of the user population (cognitive, sensory and physical skills, knowledge, experience and abilities) of operators, maintainers and support staff required for a system. [ 4 ] Cost-effective engineering designs minimize personnel requirements and keep them consistent with the user population. Systems that require new or advanced personnel qualifications will experience cost increases in other domains, such as training. [ 3 ] The user group identified for a system may be referred to as the "target audience". The target audience is situated within a larger organizational structure, and recruitment, retention and personnel policies that may impact or be impacted by the new system should be considered. HSI and the personnel domain may impact policy, or policy may impact HSI. For example, the system may require additional recruitment to sustain the organizational workforce while employing the new system. An example of policy impacting HSI is increased diversity in the user population, which may alter anthropometric requirements for the system and impact requirements in the HFE domain. [ 3 ]
Manpower and personnel standards include:
Standard Practice for Manpower and Personnel SAE1010
The training domain is concerned with giving the target audience the opportunity to acquire, gain or enhance the knowledge, skills and abilities needed to operate, maintain and support a system. [ 3 ] [ 26 ] The target audience may be individuals or groups; training in a systems engineering / acquisition setting is focused on job-relevant knowledge, skills and abilities aimed at satisfying performance levels specific to the system being designed. [ 23 ] Training the operators, maintainers and support personnel to conduct their respective tasks is a component of the total system and a part of delivering the intended capability of the system. [ 3 ] This includes the integration of training concepts and strategies with elements of logistics support, including technical manuals and procedures, interactive electronic technical manuals, job performance aids, computer based interactive courseware, simulators, and actual equipment, including embedded training capabilities on actual equipment. [ 23 ] Training is an important aspect of configuration management: it is critical that training impacts of any and all changes to the system are evaluated. [ 23 ] The objective of training is to develop and sustain ready, well trained personnel while reducing lifecycle costs, contributing to a positive readiness outcome. [ 3 ] The industry standard practice to develop cost effective training is instructional systems design .
Training standards include:
USA:
Guidance for the Acquisition of Training Data Products and Services (Part 1 of 5) MIL-HDBK 29612/1
Instructional Systems Development/Systems Approach to Training and Education (Part 2 of 5) MIL-HDBK 29612/2
UK
JSP 882 Defence Direction and Guidance for Training and Education
The environment, safety and occupational health domain is focused on determining system design characteristics that minimize risks to human health and physical wellbeing, such as acute or chronic illness, disability, death, or injury. [ 4 ] In a physical system design, systems safety works closely with systems engineers to identify, document, design out, or mitigate system hazards and reduce residual risk from those hazards. [ 5 ] The three areas that must be considered are: [ 3 ]
A health hazard analysis should be performed periodically during the system lifecycle to identify risks, initiating the risk management process. [ 3 ] In DoD programs, program managers must prepare a Programmatic Environmental, Safety and Occupational Health Evaluation ( PESHE ) which is an overall evaluation of ESOH risks for the program, and documents the progress of HHA program monitoring. [ 4 ]
Systems safety is grounded in a risk management process, but safety risk management has a unique set of processes and procedures. For example, identified hazards should be designed out of the system whenever possible, either through selecting a different design or altering the design to eliminate the hazard. If a design change isn't feasible, engineered features or devices should be added to interrupt the hazard and prevent a mishap. Warnings (devices, signs or signals) are the next best mitigation, but are considered to be far less effective at preventing mishaps. The last resort is personal protective equipment to protect people from the hazard, and training (knowledge, skills and abilities to protect against the hazard and prevent a mishap). HFE review of, and involvement with, design interventions introduced to address hazards is an important connection between the systems safety and HFE domain specialists. Design interventions may have manpower and personnel implications, and training mitigations for hazards must be incorporated into continued operator and maintainer training in order to sustain the training intervention.
Systems safety standards include:
USA:
MIL-STD 882 System Safety
UK:
Defence Policy for Health, Safety and Environmental Protection (DSA 01.1)
Survivability consists of design features that reduce the risk of fratricide , detection and the probability of an attack, and enable the crew to continue the mission and avoid acute or chronic illness, severe injury, disability or death in hostile environments. [ 3 ] [ 4 ] Elements of survivability include reducing susceptibility to a mishap or attack (protection against detection, for example) and minimizing potential wounds or injury to personnel operating and maintaining the system. Survivability also includes protection from chemical, biological, radiological and nuclear (CBRN) threats, and should include requirements to preserve the integrity of the crew compartment, rapid egress in case of system destruction, and emergency systems for contingency management, escape, survival and rescue. [ 3 ]
Survivability is often categorized in the following topics: [ 3 ]
Habitability is the application of human-centered design to the physical environment (living areas, personal hygiene facilities, working areas, and personnel support areas) to sustain and optimize morale, safety, health, comfort and quality of life of personnel. [ 4 ] Design aspects such as lighting; space; ventilation and sanitation; noise and temperature control; religious, medical and food services availability; and berthing, bathing and personal hygiene are all aspects of habitability, and directly contribute to personnel effectiveness and mission accomplishment. [ 3 ]
Habitability standards include:
Color Coordination Manual for Habitability DI-MISC 81123
Design Criteria Limits Noise Standards MIL-STD 1474
In the context of human evolution , vestigiality involves those traits occurring in humans that have lost all or most of their original function through evolution . Although structures called vestigial often appear functionless, they may retain lesser functions or develop minor new ones. In some cases, structures once identified as vestigial simply had an unrecognized function. Vestigial organs are sometimes called rudimentary organs . [ 1 ] Many human characteristics are also vestigial in other primates and related animals.
Charles Darwin listed a number of putative human vestigial features, which he termed rudimentary, in The Descent of Man (1871). These included the muscles of the ear ; wisdom teeth ; the appendix ; the tail bone ; body hair ; and the semilunar fold in the corner of the eye . Darwin also commented on the sporadic nature of many vestigial features, particularly musculature. Making reference to the work of the anatomist William Turner , Darwin highlighted a number of sporadic muscles that he identified as vestigial remnants of the panniculus carnosus , particularly the sternalis muscle . [ 2 ] [ 3 ]
In 1893, Robert Wiedersheim published The Structure of Man , a book on human anatomy and its relevance to evolutionary history. This book contains a list of 86 human organs he considered vestigial, which he called "wholly or in part functionless, some appearing in the Embryo alone, others present during Life constantly or inconstantly. For the greater part Organs which may be rightly termed Vestigial." [ 4 ] His list of supposedly vestigial organs included many of the examples on this page as well as others then mistakenly believed to be purely vestigial, such as the pineal gland , the thymus gland , and the pituitary gland . Some of these organs that had lost their obvious, original functions later turned out to have retained functions that had gone unrecognized before the discovery of hormones or many of the functions and tissues of the immune system. [ 5 ] [ 6 ] Examples included:
Historically, there was a trend not only to dismiss the appendix as being uselessly vestigial, but an anatomical hazard liable to dangerous inflammation . As late as the mid-20th century, many reputable authorities conceded it no beneficial function. [ 7 ] This was a view supported, or perhaps inspired, by Darwin himself in the 1874 edition of his book The Descent of Man, and Selection in Relation to Sex . The organ's patent liability to appendicitis and poorly understood role left it open to blame for a number of possibly unrelated conditions. For example, in 1916, a surgeon claimed that removal of the appendix had cured several cases of trifacial neuralgia and other nerve pain about the head and face, even though he said the evidence for appendicitis in those patients was inconclusive. [ 8 ] The discovery of hormones and hormonal principles, notably by Bayliss and Starling , argued against these views, but in the early 20th century, a great deal of fundamental research remained to be done on the functions of large parts of the digestive tract. In 1916, an author found it necessary to argue against the idea that the colon had no important function and that "the ultimate disappearance of the appendix is a coordinate action and not necessarily associated with such frequent inflammations as we are witnessing in the human". [ 9 ]
There had been a long history of doubt about such dismissive views. Around 1920, the surgeon Kenelm Hutchinson Digby documented previous observations, going back more than 30 years, that suggested lymphatic tissues, such as the tonsils and appendix, might have substantial immunological functions.
The appendix was once believed to be a vestige of a redundant organ that in ancestral species had digestive functions, much as it still does in extant species in which intestinal flora hydrolyze cellulose and similar indigestible plant materials. [ 10 ] This view has changed in recent decades, [ 11 ] with research suggesting that the appendix may serve an important purpose. In particular, it may serve as a reservoir for beneficial gut bacteria , possibly to allow the bacteria to reestablish in the colon during recovery from diarrhea or other illnesses. [ 12 ]
Some herbivorous animals, such as rabbits, have a terminal vermiform appendix and cecum that apparently bear patches of tissue with immune functions and that may also be important in maintaining the composition of intestinal flora . It does not seem to have much digestive function, if any, and is not present in all herbivores, even those with large caeca. [ 13 ] As shown in the accompanying pictures, the human appendix is typically about comparable in size to that of the rabbit, though the caecum is reduced to a single bulge where the ileum empties into the colon. [ 7 ] Some carnivorous animals have appendices too, but few have more than vestigial caeca. [ 14 ] In line with the possibility that vestigial organs develop new functions, some research suggests that the appendix may guard against the loss of symbiotic bacteria that aid in digestion, though that is unlikely to be a novel function, given the presence of vermiform appendices in many herbivores. [ 15 ] [ 16 ] Intestinal bacterial populations entrenched in the appendix may support quick reestablishment of the flora of the large intestine after an illness, poisoning, or antibiotic treatment depletes or otherwise causes harmful changes to the bacterial population of the colon. [ 17 ]
A 2013 study refutes the idea of an inverse relationship between cecum size and appendix size and presence. The appendix is widely present in Euarchontoglires (a superorder of mammals that includes rodents, lagomorphs and primates), has also evolved independently in the diprotodont marsupials and monotremes , and is highly diverse in size and shape, which could suggest it is not vestigial. Researchers deduce that the appendix has the ability to protect good bacteria in the gut: when the gut is affected by diarrhea or another illness that cleans out the intestines, the good bacteria in the appendix can repopulate the digestive system and keep the person healthy. [ 18 ]
The coccyx , or tailbone, is the remnant of a lost tail . [ 19 ] All mammals have a tail at some point in their development; in humans, it is present for a period of 4 weeks, during stages 14 to 22 of human embryogenesis . [ 20 ] This tail is most prominent in human embryos 31–35 days old. [ 21 ] The tailbone, at the end of the spine, has lost its original function in assisting balance and mobility, though it still serves some secondary functions, such as being an attachment point for muscles, which explains why it has not degraded further.
In rare cases, a congenital defect results in a short tail-like structure being present at birth. Twenty-three cases of human babies born with such a structure have been reported in the medical literature since 1884. [ 22 ] [ 23 ] In these cases, the spine and skull were determined to be entirely normal. The only abnormality was a tail approximately 12 centimeters long. These tails, though of no deleterious effect, were almost always surgically removed. [ 24 ]
Wisdom teeth are vestigial third molars that human ancestors used to help in grinding down plant tissue. The common postulation is that their skulls had larger jaws with more teeth, which were possibly used to help chew down foliage to compensate for a lack of ability to efficiently digest the cellulose that makes up a plant cell wall. As human diets changed, smaller jaws were naturally selected , but the third molars, or "wisdom teeth", still commonly develop in human mouths. [ 25 ]
Agenesis (failure to develop) of wisdom teeth in human populations ranges from zero in Tasmanian Aboriginals to nearly 100% in indigenous Mexicans . [ 26 ] The difference is related to the PAX9 gene (and perhaps other genes). [ 27 ]
In some animals, the vomeronasal organ (VNO) is part of a second, completely separate sense of smell, known as the accessory olfactory system . Many studies have been performed to determine whether a VNO is actually present in adult human beings. Trotier et al. [ 28 ] estimate that around 92% of their subjects who had not had septal surgery had at least one intact VNO. Kjaer and Fisher Hansen, on the other hand, [ 29 ] found that the VNO structure disappeared during fetal development, as it does for some primates. [ 30 ] Smith and Bhatnagar (2000) [ 31 ] asserted that Kjaer and Fisher Hansen simply missed the structure in older fetuses. Won (2000) found evidence of a VNO in 13 of his 22 cadavers (59.1%) and in 22 of his 78 living patients (28.2%). [ 32 ] Given these findings, some scientists have argued that there is a VNO in adult human beings. [ 33 ] [ 34 ] Most have sought to identify the opening of the vomeronasal organ in humans, rather than identify the tubular epithelial structure itself. [ 35 ] Thus it has been argued that such studies, employing macroscopic observational methods, have sometimes missed or even misidentified the vomeronasal organ. [ 36 ]
Among studies that use microanatomical methods, there is no reported evidence that human beings have active sensory neurons like those in other animals' working vomeronasal systems. [ 36 ] [ 37 ] Furthermore, no evidence suggests there are nerve and axon connections between any existing sensory receptor cells in the adult human VNO and the brain. [ 38 ] Likewise, there is no evidence of any accessory olfactory bulb in adult human beings, [ 36 ] and the key genes involved in other mammals' VNO function have become pseudogenes in human beings. Therefore, while the presence of a structure in adult human beings is debated, a review of the scientific literature by Tristram Wyatt concluded, "most in the field ... are sceptical about the likelihood of a functional VNO in adult human beings on current evidence." [ 39 ]
The ears of a macaque monkey and most other monkeys have far more developed muscles than those of humans, and therefore have the capability to move their ears to better hear potential threats. [ 40 ] Humans and other primates, such as the orangutan and chimpanzee, however, have ear muscles that are minimally developed and non-functional, yet still large enough to be identifiable. [ 10 ] A muscle attached to the ear that cannot move the ear, for whatever reason, can no longer be said to have any biological function. In humans, there is variability in these muscles, such that some people are able to move their ears in various directions, and it may be possible for others to gain such movement through repeated trials. [ 10 ] [ 41 ] In such primates, the inability to move the ear is compensated mainly by the ability to turn the head on a horizontal plane, an ability not common to most monkeys—a function once provided by one structure is now replaced by another. [ 42 ]
The outer structure of the ear also shows some vestigial features, such as the node or point on the helix of the ear known as Darwin's tubercle which is found in around 10% of the population.
The plica semilunaris is a small fold of tissue on the inside corner of the eye. It is the vestigial remnant of the nictitating membrane , i.e., third eyelid, an organ that is fully functional in some other species of mammals. [ 43 ] Its associated muscles are also vestigial. [ 10 ] Only one species of primate , the Calabar angwantibo , is known to have a functioning nictitating membrane. [ 44 ]
The orbitalis muscle is a vestigial or rudimentary nonstriated muscle (smooth muscle) of the eye that crosses from the infraorbital groove and sphenomaxillary fissure and is intimately united with the periosteum of the orbit. It was described by Johannes Peter Müller and is often called Müller's muscle. The muscle forms an important part of the lateral orbital wall in some animals, but in humans it is not known to have any significant function. [ 45 ] [ 46 ]
In the internal genitalia of each human sex, there are some residual organs of mesonephric and paramesonephric ducts during embryonic development:
Human vestigial structures also include leftover embryological remnants that once served a function during development, such as the belly button, and analogous structures between biological sexes. For example, males are born with two nipples, which, unlike in females, are not known to serve a function. [ 47 ] In regard to genitourinary development, both the internal and external genitalia of male and female fetuses have the ability to fully or partially form the analogous phenotype of the opposite biological sex if exposed to a lack or overabundance of androgens or the SRY gene during fetal development. [ 48 ] [ 49 ] Examples of vestigial remnants of genitourinary development include the hymen , a membrane that surrounds or partially covers the external vaginal opening, which derives from the sinus tubercle during fetal development and is homologous to the male seminal colliculus . [ 50 ] Other examples include the glans penis and the clitoris , the labia minora and the ventral penis, and the ovarian follicles and the seminiferous tubules. [ 50 ] Some researchers [ who? ] have hypothesized that the persistence of the hymen may be to provide temporary protection from infection , as it separates the vaginal lumen from the urogenital sinus cavity during development. [ 51 ]
A number of muscles in the human body are thought to be vestigial, either by virtue of being greatly reduced in size compared to homologous muscles in other species, by having become principally tendonous, or by being highly variable in their frequency within or between populations.
The occipitalis minor is a muscle in the back of the head which normally joins to the auricular muscles of the ear. This muscle is very sporadic in frequency—always present in Malays, present in 56% of Africans, 50% of Japanese, and 36% of Europeans, and nonexistent in the Khoikhoi people of southwestern Africa and in Melanesians . [ 52 ] Other small muscles in the head associated with the occipital region and the post-auricular muscle complex are often variable in their frequency. [ 53 ]
The platysma , a quadrangular (four-sided) muscle in a sheet-like configuration, is a vestigial remnant of the panniculus carnosus of other animals. In horses, the panniculus carnosus is the muscle that allows them to flick a fly off their back. [ citation needed ]
In many animals, the upper lip and sinus area is associated with whiskers, or vibrissae , which serve a sensory function. In humans, these whiskers do not exist, but there are still sporadic cases in which elements of the associated vibrissal capsular muscles or sinus hair muscles can be found. Based on histological studies of the upper lips of 20 cadavers, Tamatsu et al. found that structures resembling such muscles were present in 35% (7/20) of their specimens. [ 54 ]
The palmaris longus muscle is seen as a small tendon between the flexor carpi radialis and the flexor carpi ulnaris , although it is not always present. The muscle is absent in about 14% of the population, though this varies greatly with ethnicity. It is believed that this muscle actively participated in the arboreal locomotion of primates, but it currently serves no function, as it does not provide additional grip strength. [ 55 ] One study found the prevalence of palmaris longus agenesis in 500 Indian patients to be 17.2% (8% bilateral and 9.2% unilateral). [ 56 ] The palmaris is a popular source of tendon material for grafts, and this has prompted studies showing that the absence of the palmaris does not have any appreciable effect on grip strength. [ 57 ]
The levator claviculae muscle in the posterior triangle of the neck is a supernumerary muscle present in only 2–3% of all people [ 58 ] but nearly always present in other mammalian species, including gibbons and orangutans . [ 59 ]
The pyramidalis muscle of the abdomen is a small, triangular muscle, anterior to the rectus abdominis and contained in the rectus sheath . It is absent in 20% of humans; when absent, the lower end of the rectus abdominis becomes proportionately larger. Anatomical studies suggest that the forces generated by the pyramidalis muscles are relatively small. [ 60 ]
The latissimus dorsi muscle of the back has several sporadic variations . One particular variant is the existence of the dorsoepitrochlearis or latissimocondyloideus muscle, which passes from the tendon of the latissimus dorsi to the long head of the triceps brachii . It is notable for its well-developed character in other apes and monkeys, where it is an important climbing muscle, namely the dorsoepitrochlearis brachii. [ 61 ] [ 62 ] This muscle is found in ≈5% of humans. [ 63 ]
The plantaris muscle is composed of a thin muscle belly and a long thin tendon. The muscle belly is approximately 5–10 centimetres (2–4 inches) long, and is absent in 7–10% of the human population. It has some weak functionality in moving the knee and ankle but is generally considered redundant and is often used as a source of tendon for grafts. The long, thin tendon of the plantaris is humorously called "the freshman's nerve", as it is often mistaken for a nerve by new medical students.
Another example of human vestigiality occurs in the tongue, specifically the chondroglossus muscle . In a morphological study of 100 Japanese cadavers, it was found that 86% of the fibers identified were solid and bundled in the appropriate way to facilitate speech and mastication. The other 14% of fibers were short, thin and sparse – nearly useless – and were thus concluded to be of vestigial origin. [ 64 ]
Extra nipples or breasts sometimes appear along the mammary lines of humans, appearing as a remnant of mammalian ancestors who possessed more than two nipples or breasts. [ 65 ] [ 66 ] One 2021 report demonstrated that all healthy young men and women who participated in an anatomic study of the front surface of the body exhibited 8 pairs of focal fat mounds running along the embryological mammary ridges from axillae to the upper inner thighs. These were always located in the same relative anatomic sites – analogous to the loci of breasts in other placental mammals – and often had nipple-like moles or extra hairs located atop the mounds. Therefore, focal fatty prominences on the fronts of human torsos likely represent chains of vestigial breasts composed of primordial breast fat. [ 67 ]
Humans also bear some vestigial behaviors and reflexes. [ 68 ]
The formation of goose bumps in humans under stress is a vestigial reflex ; a possible function in the distant evolutionary ancestors of humanity was to raise the body's hair, making the ancestor appear larger and scaring off predators. [ 69 ] [ 68 ] Raising the hair is also used to trap an extra layer of air, keeping an animal warm. [ 68 ] Due to the diminished amount of hair in humans, the reflex formation of goose bumps when cold is also vestigial. [ 68 ]
The palmar grasp reflex is thought to be a vestigial behavior in human infants. When a finger or object is placed in an infant's palm, the infant grasps it securely; this grasp is found to be rather strong. [ 70 ] Some infants (37% according to a 1932 study) are able to support their own weight from a rod, [ 71 ] although there is no way they could cling to their mother. The grasp is also evident in the feet: when a baby is sitting down, its prehensile feet assume a curled-in posture, similar to that observed in an adult chimp. [ 72 ] [ 73 ] An ancestral primate, unlike modern humans, would have had sufficient body hair for an infant to cling to, allowing its mother to escape from danger, such as by climbing a tree in the presence of a predator, without having to occupy her hands holding her baby.
It has been proposed that the hiccup is an evolutionary remnant of earlier amphibian respiration . [ 74 ] Amphibians such as tadpoles gulp air and water across their gills via a rather simple motor reflex akin to mammalian hiccuping. The motor pathways that enable hiccuping form early during fetal development, before the motor pathways that enable normal lung ventilation. Additionally, hiccups and amphibian gulping are inhibited by elevated CO 2 and may be stopped by GABA B receptor agonists, illustrating a possible shared physiology and evolutionary heritage. These proposals may explain why premature infants spend 2.5% of their time hiccuping: possibly they are gulping like amphibians, as their lungs are not yet fully formed. Fetal intrauterine hiccups are of two types. The physiological type occurs before 28 weeks after conception and tends to last five to ten minutes. These hiccups are part of fetal development and are associated with the myelination of the phrenic nerve , which primarily controls the thoracic diaphragm. The phylogeny hypothesis thus explains the hiccup reflex as an evolutionary remnant, held over from our amphibious ancestors.
This hypothesis has been questioned because of the existence of the afferent loop of the reflex, the fact that it does not explain the reason for glottic closure, and because the very short contraction of the hiccup is unlikely to have a significant strengthening effect on the slow-twitch muscles of respiration. [ citation needed ]
There are many pseudogenes present in the human genome . One example of this is L-gulonolactone oxidase , a gene that is functional in most other mammals and produces an enzyme that synthesizes vitamin C . [ 75 ] In humans and other members of the suborder Haplorrhini , a mutation disabled the gene and made it unable to produce the enzyme. However, the remains of the gene are still present in the human genome. [ 76 ] | https://en.wikipedia.org/wiki/Human_vestigiality |
Viruses are a major cause of human waterborne and water-related diseases. Waterborne diseases are caused by water contaminated with human and animal urine and feces that contain pathogenic microorganisms . A subject can become infected through contact with or consumption of the contaminated water. Viruses affect all living organisms, from single-celled plants, bacteria and animals to the highest forms of plants and animals, including human beings. Within a specific kingdom ( Plantae, Animalia, Fungi, etc.), the localization of viruses colonizing the host can vary: some human viruses, for example HIV, colonize only the immune system, while influenza viruses can colonize either the upper or the lower respiratory tract depending on the type (human influenza virus or avian influenza viruses, respectively). [ 1 ] Different viruses can have different routes of transmission; for example, HIV is transferred directly by contaminated body fluids from an infected host into the tissue or bloodstream of a new host, while influenza is airborne and transmitted when a new host inhales contaminated air containing viral particles. Research has also suggested that solid surfaces play a role in the transmission of waterborne viruses. Experiments using the E. coli phages Qβ, fr, T4, and MS2 confirmed that viruses survive longer on a solid surface than in water. Because of this ability to survive longer on solid surfaces, viruses have a prolonged opportunity to infect humans. [ 2 ] Enteric viruses primarily infect the intestinal tract through ingestion of food and water contaminated with viruses of fecal origin. Some viruses can be transmitted through all three routes of transmission.
Water virology started about half a century ago when scientists attempted to detect the polio virus in water samples. [ 3 ] Since then, other pathogenic viruses responsible for gastroenteritis, hepatitis, and many other diseases have replaced enteroviruses as the main targets for detection in the water environment. [ 3 ]
Water virology was born after a large hepatitis outbreak transmitted through water was confirmed in New Delhi between December 1955 and January 1956. [ 4 ]
Viruses can cause massive human mortality. The smallpox virus killed an estimated 10 to 15 million people per year until 1967. [ 3 ] Smallpox was finally eliminated in 1977 by extinction of the virus through vaccination, and the impact of viruses such as influenza, poliovirus and measles is mainly controlled by vaccination. [ 4 ]
Despite advances in vaccination and prevention of viral diseases, the WHO estimated that in the 1980s a child died approximately every six seconds from diarrhea. [ citation needed ] Many cases of hepatitis A and/or E, both of which are enteric viruses, are typically transmitted by food and water. Extreme examples include the 1988 outbreak in Shanghai of 300,000 cases of hepatitis A and 25,000 cases of gastroenteritis, caused by shellfish harvested from a sewage-polluted estuary. [ 5 ] In 1991, an outbreak of 79,000 cases of hepatitis E in Kanpur was ascribed to drinking polluted water. [ 3 ]
A more recent outbreak of hepatitis E in South Sudan killed 88 people. Medecins Sans Frontieres (MSF) said it had treated almost 4,000 patients since the outbreak was identified in South Sudan in July 2012. In this outbreak, hepatitis E , which causes liver infection, was thought to be spread by drinking water contaminated with feces. [ 6 ] In 2014, another hepatitis E outbreak occurred in a South Sudanese refugee camp situated in Ethiopia. The outbreak, which began in April 2014 and ended in January 2015, claimed a total of twenty-one lives. [ 7 ]
Sewage-contaminated water contains many viruses; over one hundred species have been reported, and they can lead to diseases that affect human beings. For example, hepatitis , gastroenteritis , meningitis , fever , rash , and conjunctivitis can all be spread through contaminated water. More viruses are being discovered in water because of new detection and characterization methods, although only some of these viruses are human pathogens. [ 4 ]
Viruses need a suitable environment to survive in. There are many characteristics that control the survival of viruses in water such as temperature, light, pH, salinity, organic matter, suspended solids or sediments, and air–water interfaces.
Temperature has the greatest effect on virus survival in water, since lower temperatures are the key to longer virus survival. For instance, an article published in 2018 noted that it takes one year for certain viruses, including poliovirus and echovirus, to decrease by 5 log units at a temperature of 4 °C, while it takes only a week to obtain the same result at 37 °C (human body temperature). The rates of protein and nucleic acid denaturation and of the chemical reactions that destroy the viral capsid increase at higher temperatures; thus, viruses survive best at low temperatures. Hepatitis A, adenoviruses and parvoviruses have the highest survival rates at low temperatures among enteric viruses. [ 3 ] [ 8 ]
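The figures above imply roughly log-linear (first-order) inactivation kinetics, often summarized by a D-value: the time needed for a one-log (90%) reduction in infectious titer. The sketch below is illustrative only; the D-values are back-calculated from the approximate "5-log in one year at 4 °C versus one week at 37 °C" figures cited above, not measured constants.

```python
def time_for_log_reduction(d_value_days: float, log_units: float) -> float:
    """Time needed for a given log10 reduction, assuming log-linear
    inactivation: N(t) = N0 * 10**(-t / D), where D is the time
    for a one-log (90%) reduction."""
    return d_value_days * log_units

def surviving_fraction(d_value_days: float, t_days: float) -> float:
    """Fraction of infectious virus remaining after t days."""
    return 10 ** (-t_days / d_value_days)

# Illustrative D-values back-calculated from the figures in the text:
# a 5-log reduction in ~365 days at 4 degrees C, and in ~7 days at 37 degrees C.
d_cold = 365 / 5   # ~73 days per log at 4 degrees C
d_warm = 7 / 5     # ~1.4 days per log at 37 degrees C

print(time_for_log_reduction(d_cold, 5))   # ~365 days
print(time_for_log_reduction(d_warm, 5))   # ~7 days
print(surviving_fraction(d_warm, 7))       # ~1e-05, i.e. 1 in 100,000 remains
```

The roughly 50-fold difference in D-values between the two temperatures is what makes cold water such an effective reservoir for enteric viruses.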
Ultraviolet (UV) light, a component of sunlight, can inactivate viruses by causing cross-linking of the nucleotides in the viral genome. Many viruses in water are exterminated in the presence of sunlight. The combination of higher temperatures and more UV in the summertime corresponds to shorter viral survival in summer compared to winter. Double-stranded DNA viruses like adenoviruses are more resistant to UV inactivation than enteroviruses because they can use their host cell to repair the damage caused by the UV light. [ 3 ]
Visible light can also affect virus survival by a process called photodynamic inactivation but the length and intensity of the light exposure can change the inactivation rate. [ 3 ]
The pH of most natural water is between 5 and 9, and enteric viruses are stable under these conditions. Many enteric viruses are, however, more stable at pH 3–5 than at pH 9–12. Enteroviruses can survive at pH 11–11.5 and 1–2, but only for short periods. Adenoviruses and rotaviruses are sensitive to a pH of 10 or greater, which leads to their inactivation. [ 3 ]
In general, viruses do not survive in areas with a high concentration of salt. Thus, viruses can live longer in a freshwater habitat than in water bodies with high salt concentrations. It is also known that certain heavy metals are toxic to viruses. [ 9 ]
Some types of coliphages (a type of bacteriophage) are inactivated at air–water–solid interfaces. This is due to the unfolding of the virus's protein capsid (a crucial component for infecting the host). This effect is aggravated when the ionic strength of the solution increases. [ 4 ]
Aggregation is one of the best-known mechanisms for the survival of viruses. In a liquid environment, viruses tend to form clumps (aggregates). This aggregation results in a reduced rate of virus inactivation, showing that viral particles that do not aggregate are more easily destroyed. It has also been shown that aggregates may form spontaneously or may result from nucleation on particles in the water. [ 8 ]
Water that is intended for drinking should go through treatment to reduce pathogenic viral and bacterial concentrations. As the density of the human population has increased, the incidence of sewage contamination of water has increased as well; thus, the risk to humans from pathogenic viruses will increase if precautions are not taken. [ 3 ]
Scientific studies suggest that the most common viruses found are caliciviruses, astroviruses and enteric viruses. Laboratories are still looking for improved methods to detect these pathogenic viruses. Reducing the amount of viruses in drinking water is accomplished by various treatments that are typically part of drinking water treatment systems in developed countries. [ 3 ] [ 10 ]
Water purification of surface water (water from lakes, rivers, or reservoirs) typically utilizes four treatment stages: coagulation and flocculation, sedimentation, filtration, and disinfection. The first three stages remove mainly dirt and larger particles; although filtration does reduce the number of viruses and bacteria in the water, the number of pathogens present after filtration is still considered too high for drinking water. Purification of water from underground aquifers, called groundwater, may skip some of these steps, as groundwater tends to have fewer contaminants than surface water. The last step, disinfection, is primarily responsible for the reduction of pathogenic viruses to safe levels in all drinking water sources. The most common disinfectants used are chlorine and chloramine. Ozone and UV light can also be used to treat large volumes of water to remove pathogens. [ 11 ] [ 10 ]
In an article published in 2010, it was determined that silver nanoparticles could significantly inactivate some water viruses. When 5.4 ml of the silver nanoparticles was added to a water virus, its activity decreased by 4 log units. [ 12 ]
The quality of drinking water is ensured through a framework of water safety plans that ensure the safe disposal of human waste so that drinking water supplies are not contaminated. Improving the water supply, sanitation, hygiene and management of water resources could prevent ten percent of the total global disease burden. [ 13 ]
Half of the occupied hospital beds in the world are filled by patients with illnesses related to the lack of safe drinking water. Unsafe water leads to 88% of the global cases of diarrhea and 90% of the deaths from diarrheal diseases in children under five years old. Most of these deaths occur in developing countries due to poverty and the high cost of safe water. [ 13 ] An article published in 2003 by the CDC concluded that the number of deaths of children (less than five years of age) caused by rotavirus on a global scale ranges between 352,000 and 592,000. [ 14 ]
Approximately 1.1 billion people do not have access to improved water and 2.4 billion people do not have access to sanitation facilities. This situation leads to 2 million preventable deaths each year. [ 15 ] | https://en.wikipedia.org/wiki/Human_viruses_in_water |
Human waste (or human excreta ) refers to the waste products of the human digestive system , menses , and human metabolism, including urine and feces . As part of a sanitation system that is in place, human waste is collected, transported, treated and disposed of or reused by one method or another, depending on the type of toilet being used, the users' ability to pay for services, and other factors. Fecal sludge management is used to deal with fecal matter collected in on-site sanitation systems such as pit latrines and septic tanks .
The sanitation systems in place differ vastly around the world, with many people in developing countries having to resort to open defecation , in which human waste is deposited in the environment for lack of other options. Improvement of " water , sanitation and hygiene " (WASH) around the world is a key public health issue within international development and is the focus of Sustainable Development Goal 6 .
People in developed countries tend to use flush toilets where the human waste is mixed with water and transported to sewage treatment plants .
Children's excreta can be disposed of in diapers and mixed with municipal solid waste . Diapers are also sometimes dumped directly into the environment, leading to public health risks.
The term "human waste" is used in the general media to mean several things, such as sewage , sewage sludge , blackwater - in fact anything that may contain some human feces . [ 1 ] In the stricter sense of the term, human waste is in fact human excreta, i.e. urine and feces , with or without water being mixed in. For example, dry toilets collect human waste without the addition of water.
Human waste is considered a biowaste , as it is a vector for both viral and bacterial diseases. It can be a serious health hazard if it gets into sources of drinking water. The World Health Organization (WHO) reports that nearly 2.2 million people die annually from diseases caused by contaminated water, such as cholera or dysentery. A major accomplishment of human civilization has been the reduction of disease transmission via human waste through the practice of hygiene and sanitation , which can employ a variety of different technologies.
Even high mountains are not free from human waste. Each year, millions of mountaineers visit high-mountain areas, generating tons of feces and urine annually, which cause environmental pollution. Human feces pose a greater threat to the mountain environment than the uncontrolled deposit of urine, due to the higher pathogen content of feces. [ 2 ]
Methods of processing depend on the type of human waste:
The amount of water mixed with human waste can be reduced by the use of waterless urinals and composting toilets and by recycling greywater . The most common method of human waste treatment in rural areas where municipal sewage systems are unavailable is the use of septic tank systems. In remote rural places without sewage or septic systems, small populations allow for the continued use of honey buckets and sewage lagoons (see anaerobic lagoon ) without the threat of disease presented by places with denser populations. Bucket toilets are used by rural villages in Alaska where, due to permafrost , conventional waste treatment systems cannot be utilized.
Human waste in the form of wastewater (sewage) is used to irrigate and fertilize fields in many parts of the developing world where fresh water is unavailable. There is great potential for wastewater agriculture to produce more food for consumers in urban areas, as long as there is sufficient education about the dangers of eating such food uncooked. [ 3 ] | https://en.wikipedia.org/wiki/Human_waste |
The Humanities Advanced Technology and Information Institute ( HATII ) was a research and teaching institute at the University of Glasgow in Scotland . It was established in 1997 with Professor Seamus Ross as Founding Director until 2009. HATII led research in archival and library science and in information/knowledge management. Research strengths were in the areas of humanities computing, digitisation, digital curation and preservation, and archives and records management.
HATII partnered in research initiatives including AHDS Performing Arts , 3D-COFORM (Tools and Expertise for 3D Collection Formation), [ 1 ] [ 2 ] SHAMAN (Sustaining Heritage Access through Multivalent ArchiviNg), [ 3 ] DigiCULT, [ 4 ] CASPAR (Cultural, Artistic and Scientific knowledge Preservation, for Access and Retrieval), [ 5 ] the DELOS Digital Library Network of Excellence Preservation Cluster, [ 6 ] Planets (Preservation and Long-term Access to our Cultural and Scientific Heritage), [ 7 ] Primarily History, Mapping the Practice and Profession of Sculpture in Britain and Ireland 1851-1951, and TheGlasgowStory. Its Electronic Research Preservation and Access NETwork (ERPANET) had a broad impact on developing the preservation research community ethos in Europe. [ 8 ] [ 9 ] It was followed by DigitalPreservationEurope (DPE), [ 10 ] which produced the research outputs DRAMBORA and PLATTER and experimented with animation as a mechanism for the dissemination of scholarship. [ 11 ] HATII was a founding partner of the UK's Digital Curation Centre in 2004. [ 12 ]
Before launching its first degree programmes in the early 2000s, HATII taught multimedia (from 1997), digitisation (from 1998), and cyberspace studies (from 2000). HATII founded the UK's first postgraduate programme in digital preservation/curation, the MSc in Information Management and Preservation, in 2001. [ 13 ] In 2003, it launched a joint honours MA in Arts and Media Informatics, which eventually became a single honours MA in Digital Media and Information Studies. [ 14 ] Both the undergraduate MA and the MSc were accredited by CILIP (Chartered Institute of Library and Information Professionals), and the MSc was also accredited by the UK Archives and Records Association . In 2010, HATII established an MSc programme in Museum Studies. [ citation needed ]
After twenty years, HATII became Information Studies in September 2017. Lorna Hughes was appointed the first head of Information Studies in 2016. [ citation needed ]
The Humanities Indicators is a project of the American Academy of Arts and Sciences that provides statistical tools for answering questions about humanities education in the United States. Researchers use the Indicators to analyze primary and secondary humanities education, undergraduate and graduate education in the humanities , the humanities workforce, levels and sources of program funding, public understanding and impact of the humanities, and other areas of concern. [ 1 ] [ 2 ]
Data from the Humanities Indicators has been used in discussions about the decline in the number of humanities college majors in the US. [ 3 ] [ 4 ] To address questions about the workforce outcomes of humanities graduates, the Indicators issued the report State of the Humanities 2021: Workforce & Beyond .
The Humanities Indicators report examined not only graduates' employment and earnings relative to other fields, but also their satisfaction with their work after graduation and their lives more generally. The data reveal that despite disparities in median earnings, humanities majors are quite similar to graduates from other fields with respect to their perceived well-being. The report was widely cited in the media as an important intervention in the discussion. [ 5 ] [ 6 ]
In 2019, the Humanities Indicators also administered the first national survey on public attitudes about the humanities, finding wide engagement with the field (though often under different names) and substantial support for the field. [ 7 ] [ 8 ]
This article related to a non-profit organization is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Humanities_Indicators |
Humanium Metal is a brand of metal made by melting down illegal firearms seized in conflict zones . The creation and distribution of this metal is done through a marketing campaign called "The Humanium Metal Initiative", started in 2016 by Swedish nonprofit organization IM Swedish Development Partner. The stated objective of the program is to draw attention to issues of gun violence and contribute toward the ending of illegal firearms trade . Humanium Metal is used for the creation of non-lethal commodities, such as wristwatches , buttons , and spinning tops , with proceeds returning to violence prevention efforts and support for gun-violence survivors in the areas from which the firearms were seized.
The Humanium Metal Initiative was developed by Peter Brune of IM Swedish Development Partner in partnership with designer Johan Pihl. [ 1 ] The objective of Humanium Metal is "to spread awareness of the devastating impact of illegal firearms and armed violence, as well as generate funds urgently needed to empower people living in conflict-torn societies." [ 2 ] The campaign is implemented in conjunction with Swedish advertising agencies Great Works and Akestam Holst. [ 3 ]
Humanium Metal was first produced in November 2016 in El Salvador , where firearms seized by the Salvadoran government were converted into one ton of metal. [ 1 ] [ 4 ] The project has since expanded to Guatemala , and, as of 2018, it plans to expand to Honduras and Colombia . [ 3 ]
The program has received endorsements from the Dalai Lama , former director general of the International Atomic Energy Agency Hans Blix , and Nobel Peace Prize winner Desmond Tutu . The program has also partnered with the Swedish Ministry for Foreign Affairs . [ 3 ]
As of the end of 2022, the program had destroyed more than 12,000 firearms in El Salvador, Zambia and the United States. More than US$1.2 million has already been channeled to civil society interventions in violence-affected areas.
The most common method for producing Humanium Metal is for governments to seize illegal firearms and melt them down, turning the metal into ingots , wire, or pellets. [ 1 ] The metal is 95% iron and is sent to Sweden, where it is reduced to a powder that can be used in the production of metal objects. As of 2018, Humanium Metal was priced at about $6.60 per ounce . [ 5 ] [ better source needed ] In 2018, Stockholm -based watchmaker TRIWA began to market wristwatches 3D-printed with Humanium Metal. [ 4 ] In 2019, the Humanium Metal Initiative partnered with The Non-Violence Project Foundation to produce small-scale replicas of Swedish artist Carl Fredrik Reuterswärd 's 1985 sculpture Non-Violence . Other companies have produced spinning tops, buttons, and bracelets made from Humanium Metal. [ 6 ] A Good Company has made a limited-edition A Good Humanium Metal pen, 25% of the sales of which goes to support projects tackling violent crime and rebuilding conflict-afflicted communities in El Salvador. [ 7 ]
In 2020, Scottish artist Frank To created paintings using powdered Humanium Metal mixed with paint. [ 8 ]
In December 2020, IM partnered with the Zambia Police Service to destroy more than 6,000 firearms and turn them into Humanium Metal. [ 9 ]
In 2021, the police department of Falmouth, Maine publicly destroyed a set of illegal weapons and announced their intention to turn them into Humanium Metal. [ 10 ]
In 2017, the Humanium Metal Initiative won the Grand Prix for Innovation at the Cannes Lions Festival for Creativity . [ 11 ] In 2018, the program won the advertising category of Fast Company 's 2018 World Changing Ideas Awards. [ 3 ] | https://en.wikipedia.org/wiki/Humanium_Metal |
A human–animal hybrid (also called an animal–human hybrid) is an organism that incorporates elements from both humans and non-human animals . Technically, in a human–animal hybrid , each cell has both human and non-human genetic material. This is in contrast to an individual in which some cells are human and some are derived from a different organism, called a human-animal chimera . [ 1 ] (A human chimera , on the other hand, consists only of human cells, from different zygotes.)
Examples of human–animal hybrids mainly include humanized mice that have been genetically modified by xenotransplantation of human genes. [ 2 ] Humanized mice are commonly used as small animal models in biological and medical research for human therapeutics.
Human–animal hybrids are the subject of legal, moral, and technological debate in the context of recent advances in genetic engineering . [ 3 ] [ 4 ] [ 5 ]
Human–animal hybrids have long existed in cultures around the world, particularly in mythology , being a part of storytelling across multiple continents , and in recent decades they have also been incorporated into comic books , films , video games , and other mass media. [ 6 ] [ 3 ] [ 7 ] [ 4 ] [ 8 ]
Defined by the magazine H+ as "genetic alterations that are blendings [sic] of animal and human forms", such hybrids may occasionally be referred to by other names, such as "para-humans". [ 6 ] [ 3 ] They may additionally be called "humanized animals". [ 5 ] Technically speaking, they are also related to "cybrids" ( cytoplasmic hybrids ); "cybrid" cells, which contain foreign human nuclei, have been a topic of interest. Possibly, a real-world human–animal hybrid may be an entity formed from either a human egg fertilized by a nonhuman sperm or a nonhuman egg fertilized by a human sperm. [ 3 ]
Artificially created human-animal hybrids include humanized mice that have been xenotransplanted with human gene products, so that they can be used to gain insights, in an in vivo context, into human-specific physiology and pathologies. [ 2 ] Humanized mice are commonly used as small animal models in biological and medical research for human therapeutics, including infectious diseases and cancer. For example, genetically modified mice may be born with human leukocyte antigen genes to provide a more realistic environment when human white blood cells are introduced into them to study immune system responses. [ 9 ]
Advances in genetic engineering have generally caused a large number of debates and discussions in the fields related to bioethics , including research relating to the creation of human-animal hybrids. Although the two topics are not strictly related, the debates involving the creation of human-animal hybrids have paralleled the debates around the stem-cell research controversy. [ 3 ]
The question of what line exists between a "human" being and a "non-human" being has been a difficult one for many researchers to answer. While animals having one percent or less of their cells originally coming from humans may clearly appear to be in the same category as other animals, no consensus exists on how to think about beings in a genetic middle ground that have something like an even mix. "I don't think anyone knows in terms of crude percentages how to differentiate between humans and nonhumans," U.S. patent office official John Doll has stated. [ 5 ] Critics of increased government restrictions include scientists such as Dr. Douglas Kniss, head of the Laboratory of Perinatal Research at Ohio State University , who has remarked that formal laws are not the best option since the "notion of animal-human hybrids is very complex." He has also argued that their creation is inherently "not the kind of thing we support" in his kind of research, since scientists should "want to respect human life". [ 3 ]
In contrast, notable socio-economic theorist Jeremy Rifkin has expressed opposition to research that creates beings crossing species boundaries, arguing that it interferes with the fundamental 'right to exist' possessed by each animal species. "One doesn't have to be religious or into animal rights to think this doesn't make sense," he has argued when expressing support for anti-chimera and anti-hybrid legislation. As well, William Cheshire, associate professor of neurology at the Mayo Clinic 's Florida branch, has called the issue "unexplored biologic territory" and advocated for a "moral threshold of human neural development" to restrict the destruction of a human embryo to obtain cell material and/or the creation of an organism that is partly human and partly animal. He has said, "We must be cautious not to violate the integrity of humanity or of animal life over which we have a stewardship responsibility". [ 4 ]
While laws against the creation of hybrid beings have been proposed in U.S. states and in the U.S. Congress , several scientists have argued that legal barriers might go too far and prohibit medically beneficial studies into human modification. [ 3 ] [ 4 ] [ 5 ]
In terms of scientific ethics , restrictions on the creation of human–animal hybrids have proved a controversial matter in multiple countries. While the state of Arizona banned the practice altogether in 2010, a proposal on the subject that sparked some interest in the United States Senate from 2011 to 2012 ended up going nowhere. Although the two concepts are not strictly related, discussions of experimentation into blended human and animal creatures have paralleled the discussions around embryonic stem-cell research (the ' stem cell controversy '). [ 3 ] The creation of genetically modified organisms for a multitude of purposes has taken place in the modern world for decades, examples being specifically designed foodstuffs made to have features such as higher crop yields through better disease resistance. [ 10 ]
President George W. Bush brought up the topic in his 2006 State of the Union Address , in which he called for the prohibition of "human cloning in all its forms", "creating or implanting embryos for experiments", "creating human-animal hybrids ", and also "buying, selling, or patenting human embryos". He argued, "A hopeful society has institutions of science and medicine that do not cut ethical corners and that recognize the matchless value of every life." He also stated that humanity "should never be discarded, devalued or put up for sale." [ 11 ]
A 2005 appropriations bill passed by the U.S. Congress and signed into law by President Bush contained specific wording forbidding any patents on humans or human embryos. [ 5 ] In terms of outright bans on hybrid research in the first place, a measure came up in the 110th Congress entitled the Human-Animal Hybrid Prohibition Act of 2008 . Congressman Chris Smith ( R , NJ-4 ) introduced it on April 24, 2008. The text of the proposed act stated that "human dignity and the integrity of the human species are compromised" if such hybrids exist and set up the punishment of imprisonment for up to ten years as well as a fine of over one million dollars. Though attracting support from many co-sponsors such as then Representatives Mary Fallin , Duncan Hunter , Joseph R. Pitts , and Rick Renzi among others, the Act failed to get through Congress. [ 12 ]
A related proposal had come up in the U.S. Senate the prior year, the Human-Animal Hybrid Prohibition Act of 2007 , and it also had failed. That effort was proposed by then-Senator Sam Brownback ( R , KS ) on November 15, 2007. Featuring the same language as the later measure in the House, its bipartisan group of cosponsors included then Senators Tom Coburn , Jim DeMint , and Mary Landrieu . [ 13 ]
A localized measure designed to ban the creation of hybrid entities came up in the state of Arizona in 2010. The proposal was signed into law by then Governor Jan Brewer . Its sponsor stated that it was needed to clarify important "ethical boundaries" in research. [ 3 ]
For thousands of years, these hybrids have been one of the most common themes in storytelling about animals throughout the world. The lack of a strong divide between humanity and animal nature in multiple traditional and ancient cultures has provided the underlying historical context for the popularity of tales where humans and animals have mingling relationships, such as in which one turns into the other or in which some mixed being goes through a journey. [ 14 ] Interspecies friendships within the animal kingdom, as well as between humans and their pets, additionally provide an underlying root for the popularity of such beings. [ 6 ]
In various mythologies throughout history, many particularly famous hybrids have existed, including as a part of Egyptian and Indian spirituality. [ 14 ] The entities have also been characters in fictional media such as in H. G. Wells ' work The Island of Doctor Moreau , adapted into the popular 1932 film Island of Lost Souls . [ 7 ] In legendary terms, the hybrids have played varying roles from that of trickster and/or villain to serving as divine heroes in very different contexts, depending on the given culture. [ 14 ]
Beings displaying a mixture of human and animal traits while also having a similarly blended appearance have played a vast and varied role in multiple traditions around the world. [ 14 ] Artist and scholar Pietro Gaietto has written that "representations of human-animal hybrids always have their origins in religion". In "successive traditions they may change in meaning but they still remain within spiritual culture", Gaietto has argued, when looking back from an evolution -minded point of view. The beings show up in both Greek and Roman mythology , with various elements of ancient Egyptian society ebbing and flowing into those cultures in particular. Prominent examples in ancient Egyptian religion , featuring some of the earliest such hybrid beings, include the canine -like god of death known as Anubis and the lion-like Sphinx . [ 15 ] [ unreliable source? ] Other instances of these types of characters include figures within both Chinese and Japanese mythology . [ 14 ] [ 16 ]
A prominent hybrid figure that is internationally known is the mythological Greek figure of Pan. A deity that rules over and symbolizes the untamed wild, he helps express the inherent beauty of the natural world as the Greeks saw it. He was specifically revered by ancient hunters , fishermen, shepherds, and other groups with a close connection to nature. Pan is a satyr who possesses the hindquarters, legs, and horns of a goat while otherwise being essentially human in appearance; stories of his encounters with different gods, humans, and others have been a part of popular culture in several different cultures for many years. [ 17 ] The human-animal hybrid has appeared in acclaimed works of art by figures such as Francis Bacon , [ 8 ] and has also been mentioned in poetic pieces such as John Fletcher's writings. [ 17 ] Additional famous mythological hybrids include the Egyptian god of death , named Anubis , and the fox-like Japanese beings that are called Kitsune . [ 14 ]
In Chinese mythology , the figure of Chu Pa-chieh ( Chinese : 豬八戒 ; pinyin : Zhūbājiè ) undergoes a personal journey in which he gives up wickedness for virtue. After causing a disturbance in heaven from his licentious actions, he is exiled to Earth. By mistake, he enters the womb of a sow and ends up being born as a half-man/half-pig entity. With the head and ears of a pig coupled with a human body, his already animal-like sense of selfishness from his past life remains. Killing and eating his mother as well as devouring his brothers , he makes his way to a mountain hideout, spending his days preying on unwary travelers unlucky enough to cross his path. However, the exhortations of the kind goddess Kuan Yin , journeying in China, persuade him to seek a nobler path, and his journey toward goodness proceeds to the point that he is ordained a priest by the goddess herself. [ 18 ] Remarking on the character's role in the religious novel Journey to the West , where the being first appears, professor Victor H. Mair has commented that "[p]ig-human hybrids represent descent and the grotesque, a capitulation to the basest appetites" rather than "self-improvement". [ 16 ]
Several hybrid entities have long played a major role in Japanese media and in traditional beliefs within the country. For example, a warrior god known as Amida received worship as a part of Japanese mythology for many years; he possessed a generally humanoid appearance while having a canine-like head. However, the god's devotional popularity fell in about the middle of the 19th century. [ 15 ] [ unreliable source? ] A Tanuki resembles a raccoon dog , but its shape-shifting talents allow it to turn into humans for the purposes of trickery, such as impersonating Buddhist monks . The fox-like creatures known as Kitsune also possess similar powers, and stories abound of them tricking human men into marriage by turning into seductive women. [ 14 ]
Other examples include characters in ancient Anatolia and Mesopotamia . The latter region has had the tradition of a malevolent human-animal hybrid deity in Pazuzu , the demon featuring a humanoid shape yet having grotesque features such as sharp talons . [ 15 ] [ unreliable source? ] The character picked up revived attention when an interpretation of it appeared in William Peter Blatty 's 1971 novel The Exorcist and the Academy Award winning 1973 film adaptation of the same name , with the demon possessing the body of an innocent young girl. The movie, regarded as one of the greatest horror films of all time , has a prologue in which co-protagonist Father Merrin ( Max von Sydow ) visits an archaeological dig in Iraq and ominously discovers an old statue of the monstrous being. [ 19 ] [ 20 ]
"Theriocephaly" (from Greek θηρίον therion 'beast' and κεφαλή kefalí 'head') is the anthropomorphic condition or quality of having the head of an animal with a body either mostly or entirely looking human – the term being commonly used to refer the depiction of deities or otherwise specially able individuals. An entity with such qualities is said to be "theriomorphous". [ 21 ] Many of the gods and goddesses worshipped by the ancient Egyptians , for example, were commonly depicted as being theriocephalic. This phenomenon partly represented an intermediate step in a longer process of anthropomorphization of former animal deities (e.g. the goddess Hathor in her earliest form was depicted as a cow and in her latest manifestation as a woman with cows ears and sometimes a hairstyle resembling cows horns). But the form of depiction sometimes depended also on the aspects of a deity an artist wanted to accentuate (e.g. Ba , the aspect of personality of a human soul, was depicted as a bird with a humans head). This can also be seen in the different hieroglyphs that could be used to write the name of a single deity.
Many prominent pieces of children's literature over the past two centuries have featured humanized animal characters, often as protagonists in the stories. In the opinion of popular educator Lucy Sprague Mitchell , the appeal of such mythical and fantastic beings comes from how children desire "direct" language "told in terms of images— visual, auditory, tactile, muscle images". Another author has remarked that an "animal costume" provides "a way to emphasize or even exaggerate a particular characteristic".
The anthropomorphic characters in the seminal works by English writer Beatrix Potter in particular occupy an ambiguous position, wearing human dress yet displaying many instinctive animal traits. Writing on the popularity of Peter Rabbit , a later author commented that in "balancing humanized domesticity against wild rabbit foraging, Potter subverted parental authority and its built in hypocrisy" in Potter's child-centered books. Writer Lisa Fraustino has cited R.M. Lockley 's tongue-in-cheek observation on the subject: "Rabbits are so human. Or is it the other way around— humans are so rabbit?" [ 22 ]
Writer H. G. Wells created his famous work The Island of Doctor Moreau , featuring a mixture of horror and science fiction elements, to promote the anti- vivisection cause as a part of his long-time advocacy for animal rights . Wells' story describes a man stuck on an island ruled over by the titular Dr. Moreau, a morally depraved scientist who has created several human-animal hybrids referred to as 'Beast Folk' through vivisection, even combining parts of other animals for some of the 'Beast Folk'. The story has been adapted into film several times, with varying success. The most acclaimed version is the 1932 black-and-white treatment called Island of Lost Souls . [ 7 ] Wells himself wrote that "this story was the response of an imaginative mind to the reminder that humanity is but animal rough-hewn to a reasonable shape and in perpetual internal conflict between instinct and injunction," with the scandals surrounding Oscar Wilde being the impetus for the English writer's treatment of themes such as ethics and psychology. Challenging the Victorian era viewpoints of its time, the 1896 work presents a complex situation in which enhancing animals into hybrids involves both terrifying violence and pain and appears essentially futile, given the power of raw instinct. A pessimistic view towards the ability of human civilization to live by law-abiding , moral standards for long thus follows. [ 23 ]
In a more everyday register, featuring human-animal hybrids of mythological origin having common human experiences, A Centaur's Life , known in Japan as Centaur's Worries ( Japanese : セントールの悩み , Hepburn : Sentōru no Nayami ) , is a Japanese slice of life comedy manga series by Kei Murayama. [ 24 ] [ 25 ] The series has been serialized in Tokuma Shoten 's Monthly Comic Ryū magazine since February 2011, and is published in English by Seven Seas Entertainment . [ 26 ] [ 27 ] An anime television series adaptation by Haoliners Animation League aired in Japan from July to September 2017. [ 28 ] [ 29 ]
The 1986 horror film The Fly features a deformed and monstrous human-fly hybrid , played by actor Jeff Goldblum . [ 6 ] His character, scientist Seth Brundle, undergoes a teleportation experiment that goes awry and fuses him at a fundamental genetic level with a common fly caught beside him. Brundle experiences drastic mutations as a result that horrify him. Movie critic Gerardo Valero has written that the famous horror work, "released at the dawn of the AIDS epidemic ", "was seen by many as a metaphor for the disease" while also playing on bodily fears about dismemberment and coming apart that human beings inherently share. [ 30 ]
The H. P. Lovecraft –inspired movie Dagon , released in 2001, additionally features grotesque hybrid beings.
Heroic examples of human-animal anthropomorphic characters include the two protagonists of the 2002 animated film The Cat Returns ( Japanese title : 猫の恩返し), which features a young girl (named "Haru") being transformed against her will into a feline -human hybrid and fighting a villainous king of the cats with the help of a dashing male cat companion (known as the "Baron") at her side.
The science fiction film Splice , released in 2009, shows scientists mixing human and animal DNA in the hopes of advancing medical research at the pharmaceutical company where they work. Calamitous results occur when the hybrid named Dren (portrayed by Delphine Chanéac ) is born. [ 3 ]
In terms of comic books , examples of fictional human-animal hybrids include the characters in Charles Burns ' Black Hole series. In those comics, a set of teenagers in a 1970s era town become afflicted by a bizarre disease; the sexually transmitted affliction mutates them into monstrous forms. [ 6 ]
Marvel Comics has a race of human-animal hybrids called the New Men, who were created by the High Evolutionary by evolving animals into humanoid forms.
Multiple video games have featured human-animal hybrids as enemies for the protagonist(s) to defeat, including powerful boss characters . For instance, the 2014 survival horror release The Evil Within includes grotesque hybrid beings, looking like the undead , that attack main character Detective Sebastian Castellanos. With partners Joseph Oda and Julie Kidman, the protagonist attempts to investigate a multiple homicide at a mental hospital yet discovers a mysterious figure who turns the world around them into a living nightmare , leaving Castellanos to uncover the truth about the criminal psychopath . [ 31 ]
Within general U.S. popular culture and its various subcultures , the furry fandom consists of individuals interested in a variety of artistic materials , often featuring "furry art... [that] depicts a human-animal hybrid in everyday life". Specific people involved in creative media will frequently come up with a " fursona " depicting a version or versions of themselves as a hybrid creature. This practice functions as an outlet based on "personal ideas of self-expression" ( self-realization ). [ 32 ] | https://en.wikipedia.org/wiki/Human–animal_hybrid
Human–city interaction is the intersection between human-computer interaction and urban computing . The area applies data-driven methods, such as analysis tools and prediction methods, to urban design problems. Practitioners, designers, and software engineers in this area employ large sets of user-centric data to design urban environments with high levels of interactivity. [ 1 ] The discipline mainly focuses on the user perspective and devises various interaction designs between the citizen (user) and urban entities. Common examples in the discipline include interactivity between humans and buildings, [ 2 ] interaction between humans and IoT devices, [ 3 ] and participatory and collective urban design. [ 4 ] The discipline attracts growing interest from people of various backgrounds, such as designers, urban planners, computer scientists, and architects. Although the design canvas between human and city is broad, Lee et al. proposed a framework considering the multi-disciplinary interests (urban, computers, and human) together, [ 5 ] in which emerging technologies such as extended reality (XR) can serve as a platform for such co-design purposes. [ 6 ]
This computing article is a stub . You can help Wikipedia by expanding it .
This design -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Human–city_interaction |
Human-Robot Collaboration is the study of collaborative processes in which human and robot agents work together to achieve shared goals. Many new applications for robots require them to work alongside people as capable members of human-robot teams, including robots for homes, hospitals, and offices, as well as for space exploration and manufacturing. Human-Robot Collaboration (HRC) is an interdisciplinary research area comprising classical robotics, human-computer interaction, artificial intelligence, process design, layout planning, ergonomics, cognitive sciences, and psychology. [ 1 ] [ 2 ]
Industrial applications of human-robot collaboration involve Collaborative Robots , or cobots, that physically interact with humans in a shared workspace to complete tasks such as collaborative manipulation or object handovers. [ 3 ]
Collaboration is defined as a special type of coordinated activity, one in which two or more agents work jointly with each other, together performing a task or carrying out the activities needed to satisfy a shared goal. [ 5 ] The process typically involves shared plans, shared norms and mutually beneficial interactions. [ 6 ] Although collaboration and cooperation are often used interchangeably, collaboration differs from cooperation in that it involves a shared goal and joint action where the success of both parties depends on each other. [ 7 ]
For effective human-robot collaboration, it is imperative that the robot is capable of understanding and interpreting several communication mechanisms similar to the mechanisms involved in human-human interaction. [ 8 ] The robot must also communicate its own set of intents and goals to establish and maintain a set of shared beliefs and to coordinate its actions to execute the shared plan. [ 5 ] [ 9 ] In addition, all team members demonstrate commitment to doing their own part, to the others doing theirs, and to the success of the overall task. [ 9 ] [ 10 ]
Human-human collaborative activities are studied in depth in order to identify the characteristics that enable humans to successfully work together. [ 11 ] These activity models usually aim to understand how people work together in teams, how they form intentions and achieve a joint goal. Theories on collaboration inform human-robot collaboration research to develop efficient and fluent collaborative agents. [ 12 ]
The belief-desire-intention (BDI) model is a model of human practical reasoning that was originally developed by Michael Bratman. [ 13 ] The approach is used in intelligent agents research to describe and model intelligent agents. [ 14 ] The BDI model is characterized by the implementation of an agent's beliefs (the knowledge of the world, state of the world), desires (the objective to accomplish, desired end state) and intentions (the course of actions currently under execution to achieve the desire of the agent) in order to deliberate their decision-making processes. [ 15 ] BDI agents are able to deliberate about plans, select plans and execute plans.
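The deliberation cycle described above can be illustrated with a minimal sketch. All names here are invented for illustration; real BDI frameworks (e.g. Jason or JACK) use far richer plan libraries and event handling:

```python
from dataclasses import dataclass, field

@dataclass
class BDIAgent:
    beliefs: dict = field(default_factory=dict)     # knowledge about world state
    desires: list = field(default_factory=list)     # goals the agent may adopt
    intentions: list = field(default_factory=list)  # plans currently being executed

    def perceive(self, observation: dict) -> None:
        """Update beliefs from new sensory information."""
        self.beliefs.update(observation)

    def deliberate(self) -> None:
        """Adopt as intentions those desires currently believed achievable."""
        self.intentions = [d for d in self.desires
                           if self.beliefs.get(f"{d}_achievable", False)]

    def act(self) -> list:
        """Execute (here: simply report) the current intentions."""
        return list(self.intentions)

agent = BDIAgent(desires=["fetch_tool", "clean_table"])
agent.perceive({"fetch_tool_achievable": True, "clean_table_achievable": False})
agent.deliberate()
print(agent.act())  # ['fetch_tool']
```

The sketch shows only the perceive–deliberate–act loop; it omits plan selection and re-deliberation when beliefs change, which are central to practical BDI implementations.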
Shared Cooperative Activity defines certain prerequisites for an activity to be considered shared and cooperative: mutual responsiveness, commitment to the joint activity and commitment to mutual support. [ 9 ] [ 16 ] An example case to illustrate these concepts would be a collaborative activity where agents are moving a table out the door, mutual responsiveness ensures that movements of the agents are synchronized; a commitment to the joint activity reassures each team member that the other will not at some point drop his side; and a commitment to mutual support deals with possible breakdowns due to one team member’s inability to perform part of the plan. [ 9 ]
Joint Intention Theory proposes that for joint action to emerge, team members must communicate to maintain a set of shared beliefs and to coordinate their actions towards the shared plan. [ 17 ] In collaborative work, agents should be able to count on the commitment of other members, therefore each agent should inform the others when they reach the conclusion that a goal is achievable, impossible, or irrelevant. [ 9 ]
The approaches to human-robot collaboration include human emulation (HE) and human complementary (HC) approaches. Although these approaches have differences, there are research efforts to develop a unified approach stemming from potential convergences such as Collaborative Control. [ 18 ] [ 19 ]
The human emulation approach aims to enable computers to act like humans or have human-like abilities in order to collaborate with humans. It focuses on developing formal models of human-human collaboration and applying these models to human-computer collaboration. In this approach, humans are viewed as rational agents who form and execute plans for achieving their goals and infer other people's plans. Agents are required to infer the goals and plans of other agents, and collaborative behavior consists of helping other agents to achieve their goals. [ 18 ]
The human complementary approach seeks to improve human-computer interaction by making the computer a more intelligent partner that complements and collaborates with humans. The premise is that the computer and humans have fundamentally asymmetric abilities. Therefore, researchers invent interaction paradigms that divide responsibility between human users and computer systems by assigning distinct roles that exploit the strengths and overcome the weaknesses of both partners. [ 18 ]
Specialization of Roles: Based on the level of autonomy and intervention, there are several human-robot relationships including master-slave, supervisor–subordinate, partner–partner, teacher–learner and fully autonomous robot. In addition to these roles, homotopy (a weighting function that allows a continuous change between leader and follower behaviors) was introduced as a flexible role distribution. [ 20 ]
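The homotopy-based role distribution above can be sketched as a convex blend of leader and follower commands; the velocity vectors below are made-up example values, not drawn from any particular system:

```python
import numpy as np

def blended_command(u_leader: np.ndarray, u_follower: np.ndarray,
                    alpha: float) -> np.ndarray:
    """alpha = 1 -> pure leader behavior; alpha = 0 -> pure follower."""
    assert 0.0 <= alpha <= 1.0
    # Continuous weighting lets the robot shift smoothly between roles.
    return alpha * u_leader + (1.0 - alpha) * u_follower

u_leader = np.array([0.4, 0.0])    # robot's own planned velocity (m/s)
u_follower = np.array([0.0, 0.2])  # motion inferred from the human partner
print(blended_command(u_leader, u_follower, 0.5))
```

In practice alpha itself would be adapted online (e.g. from measured interaction forces), so the robot can yield leadership when the human pushes against its plan.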
Establishing shared goal(s): Through direct discussion about goals or inference from statements and actions, agents must determine the shared goals they are trying to achieve. [ 18 ]
Allocation of Responsibility and Coordination: Agents must decide how to achieve their goals, determine what actions will be done by each agent, and how to coordinate the actions of individual agents and integrate their results. [ 18 ]
Shared context: Agents must be able to track progress toward their goals. They must keep track of what has been achieved and what remains to be done. They must evaluate the effects of actions and determine whether an acceptable solution has been achieved. [ 18 ]
Communication: Any collaboration requires communication to define goals, negotiate over how to proceed and who will do what, and evaluate progress and results. [ 18 ]
Adaptation and learning: Collaboration over time requires partners to adapt to each other and learn from one another, either directly or indirectly. [ 4 ] [ 18 ]
Time and space: The time-space taxonomy divides human-robot interaction into four categories based on whether the humans and robots are using computing systems at the same time (synchronous) or different times (asynchronous) and while in the same place (collocated) or in different places (non-collocated). [ 21 ] [ 22 ]
Ergonomics: Human factors and ergonomics are one of the key aspects for a sustainable human-robot collaboration. The robot control system can use biomechanical models and sensors to optimize various ergonomic metrics, such as muscle fatigue . [ 4 ] [ 23 ] | https://en.wikipedia.org/wiki/Human–robot_collaboration |
Human–robot interaction ( HRI ) is the study of interactions between humans and robots. Human–robot interaction is a multidisciplinary field with contributions from human–computer interaction , artificial intelligence , robotics , natural language processing , design , psychology and philosophy . A subfield known as physical human–robot interaction (pHRI) has tended to focus on device design to enable people to safely interact with robotic systems. [ 1 ]
Human–robot interaction has been a topic of both science fiction and academic speculation even before any robots existed. Because much of active HRI development depends on natural language processing , many aspects of HRI are continuations of human communications , a field of research which is much older than robotics.
The origin of HRI as a discrete problem was stated by 20th-century author Isaac Asimov in 1942, in his short story "Runaround" (later collected in I, Robot ). Asimov coined the Three Laws of Robotics , namely:
These three laws provide an overview of the goals engineers and researchers hold for safety in the HRI field, although the fields of robot ethics and machine ethics are more complex than these three principles. However, generally human–robot interaction prioritizes the safety of humans that interact with potentially dangerous robotics equipment. Solutions to this problem range from the philosophical approach of treating robots as ethical agents (individuals with moral agency ), to the practical approach of creating safety zones. These safety zones use technologies such as lidar to detect human presence or physical barriers to protect humans by preventing any contact between machine and operator. [ 3 ]
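As a rough illustration of the safety-zone idea, a speed-and-separation monitor might scale the robot's commanded speed by the distance to the nearest person detected by the lidar; the thresholds below are illustrative only and not taken from any safety standard:

```python
def safety_speed_scale(min_human_distance_m: float,
                       stop_dist: float = 0.5,
                       slow_dist: float = 1.5) -> float:
    """Return a factor in [0, 1] used to scale the robot's commanded speed."""
    if min_human_distance_m <= stop_dist:
        return 0.0  # protective stop: a person is inside the exclusion zone
    if min_human_distance_m >= slow_dist:
        return 1.0  # full speed: no person near the workspace
    # linear ramp between the stop and slow thresholds
    return (min_human_distance_m - stop_dist) / (slow_dist - stop_dist)

print(safety_speed_scale(0.3))  # 0.0
print(safety_speed_scale(1.0))  # 0.5
print(safety_speed_scale(2.0))  # 1.0
```

Physical barriers implement the same policy in hardware: contact is simply impossible inside the protected zone rather than being avoided by software.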
Although robots in the human–robot interaction field initially required some human intervention to function, research has expanded this to the extent that fully autonomous systems are now far more common than in the early 2000s. [ 4 ] Autonomous systems range from simultaneous localization and mapping systems, which provide intelligent robot movement, to natural-language processing and natural-language generation systems, which allow for natural, human-like interaction that meets well-defined psychological benchmarks. [ 5 ]
Anthropomorphic robots (machines which imitate human body structure) are better described by the biomimetics field, but overlap with HRI in many research applications. Examples of robots which demonstrate this trend include Willow Garage 's PR2 robot , the NASA Robonaut , and Honda ASIMO . However, robots in the human–robot interaction field are not limited to human-like robots: Paro and Kismet are both robots designed to elicit emotional response from humans, and so fall into the category of human–robot interaction. [ 6 ]
Goals in HRI range from industrial manufacturing through Cobots , medical technology through rehabilitation, autism intervention, and elder care devices, entertainment, human augmentation, and human convenience. [ 7 ] Future research therefore covers a wide range of fields, much of which focuses on assistive robotics, robot-assisted search-and-rescue, and space exploration. [ 8 ]
Robots are artificial agents with capacities of perception and action in the physical world, often referred to by researchers as the workspace. Their use has become commonplace in factories, but nowadays they also tend to be found in the most technologically advanced societies in such critical domains as search and rescue, military battle, mine and bomb detection, scientific exploration, law enforcement, entertainment and hospital care.
These new domains of application imply a closer interaction with the user. The concept of closeness is to be taken in its full meaning: robots and humans share the workspace but also share goals in terms of task achievement. This close interaction needs new theoretical models, on the one hand for the robotics scientists who work to improve the robots' utility and safety, and on the other hand to evaluate the risks and benefits of this new "friend" for modern society. The subfield of physical human–robot interaction (pHRI) has largely focused on device design to enable people to safely interact with robotic systems, but is increasingly developing algorithmic approaches in an attempt to support fluent and expressive interactions between humans and robotic systems. [ 1 ]
With the advance of AI , research focuses not only on the safest possible physical interaction but also on socially correct interaction, dependent on cultural criteria. The goal is to build intuitive and easy communication with the robot through speech, gestures, and facial expressions.
Kerstin Dautenhahn refers to friendly human–robot interaction as "robotiquette", defining it as the "social rules for robot behaviour (a 'robotiquette') that is comfortable and acceptable to humans". [ 9 ] The robot has to adapt itself to our way of expressing desires and orders, not the contrary. But everyday environments such as homes have much more complex social rules than those implied by factories or even military environments. Thus, the robot needs perceiving and understanding capacities to build dynamic models of its surroundings. It needs to categorize objects , recognize and locate humans, and further recognize their emotions . The need for dynamic capacities pushes forward every sub-field of robotics.
Furthermore, by understanding and perceiving social cues, robots can enable collaborative scenarios with humans. For example, with the rapid rise of personal fabrication machines such as desktop 3D printers and laser cutters entering our homes, scenarios may arise where robots and humans collaboratively share control, coordinate, and achieve tasks together. Industrial robots have already been integrated into industrial assembly lines and are collaboratively working with humans. The social impact of such robots has been studied, [ 10 ] and the research indicates that workers treat robots as social entities and rely on social cues to understand and work together with them.
At the other end of HRI research, cognitive modelling of the "relationship" between humans and robots benefits both psychologists and robotics researchers: user studies are often of interest to both sides. This research endeavour concerns robots as part of human society. For effective human – humanoid robot interaction, [ 11 ] numerous communication skills [ 12 ] and related features should be implemented in the design of such artificial agents/systems.
HRI research spans a wide range of fields, some of them general to the nature of HRI.
Methods for perceiving humans in the environment are based on sensor information. Research on sensing components and software, led by Microsoft, provides useful results for extracting human kinematics (see Kinect ). An example of an older technique is to use colour information, exploiting for example the fact that for light-skinned people the hands are lighter than the clothes worn. In any case, a human model specified a priori can then be fitted to the sensor data. The robot builds or has (depending on its level of autonomy) a 3D map of its surroundings, to which the humans' locations are assigned.
Most methods intend to build a 3D model of the environment through vision. Proprioception sensors permit the robot to have information about its own state, relative to a reference. Theories of proxemics may be used to perceive and plan around a person's personal space.
A speech recognition system is used to interpret human desires or commands. By combining the information inferred from proprioception, sensors and speech, the robot can determine the human's position and state (standing, seated). In this matter, natural-language processing is concerned with the interactions between computers and human (natural) languages, in particular how to program computers to process and analyze large amounts of natural-language data. Neural-network architectures and learning algorithms, for instance, can be applied to various natural-language processing tasks including part-of-speech tagging, chunking, named-entity recognition , and semantic role labeling . [ 13 ]
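The mapping from a recognized utterance to a symbolic robot command can be illustrated with a toy keyword-spotting parser. This is purely a sketch — the command vocabulary and names below are invented for illustration, not taken from any cited system:

```python
import re

# Hypothetical keyword-to-command table; real systems would use a trained
# language model or grammar rather than keyword spotting.
COMMANDS = {
    ("go", "move"): "NAVIGATE",
    ("grab", "pick"): "GRASP",
    ("stop", "halt"): "STOP",
}

def parse_command(utterance):
    """Map a recognized utterance to a symbolic robot command (toy example)."""
    tokens = re.findall(r"[a-z]+", utterance.lower())
    for keywords, command in COMMANDS.items():
        if any(token in keywords for token in tokens):
            return command
    return "UNKNOWN"
```

Combined with the perceived human position, such a symbolic command could then be grounded in the robot's 3D map of its surroundings.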
Motion planning in dynamic environments is a challenge that can at the moment only be achieved for robots with 3 to 10 degrees of freedom . Humanoid robots, or even two-armed robots, which can have up to 40 degrees of freedom, are unsuited for dynamic environments with today's technology. However, lower-dimensional robots can use the potential field method to compute trajectories which avoid collisions with humans.
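The potential field idea can be sketched in a few lines: the goal exerts an attractive force, nearby obstacles (such as humans) exert repulsive forces, and the robot descends this combined field. The gains and radii below are arbitrary illustrative values, not from any cited system:

```python
import math

def potential_field_step(pos, goal, obstacles, step=0.05,
                         k_att=1.0, k_rep=0.1, rho0=0.5):
    """One step of gradient descent on an attractive + repulsive potential."""
    # attractive force pulls toward the goal
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    # repulsive force pushes away from obstacles within influence radius rho0
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 1e-9 < d < rho0:
            mag = k_rep * (1.0 / d - 1.0 / rho0) / d ** 2
            fx += mag * dx / d
            fy += mag * dy / d
    norm = math.hypot(fx, fy) or 1.0        # normalize to a fixed step length
    return (pos[0] + step * fx / norm, pos[1] + step * fy / norm)

def plan(start, goal, obstacles, max_iters=5000, tol=0.05):
    """Follow the field until the goal is reached (or iterations run out)."""
    path = [start]
    while (math.hypot(goal[0] - path[-1][0], goal[1] - path[-1][1]) > tol
           and len(path) < max_iters):
        path.append(potential_field_step(path[-1], goal, obstacles))
    return path
```

A well-known limitation, which is one reason the method is restricted to low-dimensional robots, is that the combined field can have local minima in which the robot gets stuck before reaching the goal.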
Humans exhibit negative social and emotional responses as well as decreased trust toward some robots that closely, but imperfectly, resemble humans; this phenomenon has been termed the "Uncanny Valley." [ 14 ] However recent research in telepresence robots has established that mimicking human body postures and expressive gestures has made the robots likeable and engaging in a remote setting. [ 15 ] Further, the presence of a human operator was felt more strongly when tested with an android or humanoid telepresence robot than with normal video communication through a monitor. [ 16 ]
While there is a growing body of research about users' perceptions and emotions towards robots, we are still far from a complete understanding. Only additional experiments will determine a more precise model.
Based on past research, we have some indications about current user sentiment and behavior around robots: [ 17 ] [ 18 ]
A large body of work in the field of human–robot interaction has looked at how humans and robots may better collaborate. The primary social cue for humans while collaborating is the shared perception of an activity; to this end, researchers have investigated anticipatory robot control through various methods, including monitoring the behaviors of human partners using eye tracking , making inferences about human task intent, and proactive action on the part of the robot. [ 23 ] The studies revealed that anticipatory control helped users perform tasks faster than with reactive control alone.
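One simple way to realize such intent inference — purely illustrative, and not the method of the cited studies — is a Bayesian belief update over candidate goals, where each gaze fixation is treated as noisy evidence of what the human intends to do next:

```python
def update_intent(prior, likelihoods, observation):
    """One Bayesian update of a belief over candidate goals, given a single
    observation (e.g. the object the human's gaze currently fixates on)."""
    posterior = {goal: prior[goal] * likelihoods[goal].get(observation, 1e-6)
                 for goal in prior}
    z = sum(posterior.values())          # normalizing constant
    return {goal: p / z for goal, p in posterior.items()}

# Hypothetical setup: two candidate goals, with gaze more likely to land on
# the object the human actually intends to reach for.
prior = {"cup": 0.5, "plate": 0.5}
likelihoods = {"cup":   {"cup": 0.8, "plate": 0.2},
               "plate": {"cup": 0.2, "plate": 0.8}}
belief = update_intent(prior, likelihoods, "cup")
belief = update_intent(belief, likelihoods, "cup")
```

After two fixations on the cup, the belief concentrates on that goal, and a robot could proactively move toward it before the human's reach begins.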
A common approach to programming social cues into robots is to first study human–human behaviors and then transfer the learning. [ 24 ] For example, coordination mechanisms in human–robot collaboration [ 25 ] are based on work in neuroscience [ 26 ] which examined how to enable joint action in a human–human configuration by studying perception and action in a social context rather than in isolation. These studies have revealed that maintaining a shared representation of the task is crucial for accomplishing tasks in groups. For example, the authors examined the task of driving together by separating responsibilities of acceleration and braking, i.e., one person is responsible for accelerating and the other for braking; the study revealed that pairs reached the same level of performance as individuals only when they received feedback about the timing of each other's actions. Similarly, researchers have studied the aspect of human–human handovers with household scenarios like passing dining plates in order to enable adaptive control of the same in human–robot handovers. [ 27 ] Another study, in the domain of human factors and ergonomics, of human–human handovers in warehouses and supermarkets reveals that givers and receivers perceive handover tasks differently, which has significant implications for designing user-centric human–robot collaborative systems. [ 28 ] Most recently, researchers have studied a system that automatically distributes assembly tasks among co-located workers to improve coordination. [ 29 ]
Some research involves designing a new robot, while other work uses available robots to conduct studies. Some commonly used robots are Nao , a humanoid and programmable robot; Pepper , another social humanoid robot; and Misty , a programmable companion robot.
The majority of robots are white in color, a pattern that has been attributed to bias against robots of other colors. [ 30 ] [ 31 ] [ 32 ] [ 33 ] [ 34 ]
The application areas of human–robot interaction include robotic technologies that are used by humans for industry, medicine, and companionship, among other purposes.
Major manufacturers like FANUC produce a wide range of industrial robots that have been implemented to collaborate with humans on industrial manufacturing tasks. While humans have the flexibility and intelligence to consider different approaches to a problem, choose the best option among all choices, and then command robots to perform assigned tasks, robots are more precise and more consistent in performing repetitive and dangerous work. [ 35 ] Together, the collaboration of industrial robots and humans demonstrates that robots have the capabilities to ensure efficiency of manufacturing and assembly. [ 35 ] However, there are persistent concerns about the safety of human–robot collaboration, since industrial robots can move heavy objects and operate dangerous, sharp tools quickly and with force. As a result, they present a potential threat to people who work in the same workspace. [ 35 ] Therefore, planning safe and effective layouts for collaborative workplaces is one of the most challenging topics that research faces. [ 36 ]
A rehabilitation robot is an example of a robot-aided system implemented in health care . This type of robot aids stroke survivors or individuals with neurological impairment in recovering their hand and finger movements. [ 37 ] [ 38 ] In the past few decades, how humans and robots interact with each other has been widely considered in the design of rehabilitation robots. [ 38 ] For instance, human–robot interaction plays an important role in designing exoskeleton rehabilitation robots, since the exoskeleton system makes direct contact with the human body. [ 37 ]
Nursing robots aim to provide assistance to elderly people who may have experienced a decline in physical and cognitive function and, consequently, developed psychosocial issues. [ 39 ] By assisting in daily physical activities, such robots would allow the elderly a sense of autonomy and the feeling that they are still able to take care of themselves and stay in their own homes. [ 39 ]
Long-term research on human–robot interaction has shown that residents of care homes are willing to interact with humanoid robots and benefit from cognitive and physical activation led by the robot Pepper. [ 40 ] Another long-term study in a care home showed that people working in the care sector are willing to use robots in their daily work with the residents. [ 41 ] But it also revealed that even though the robots are ready to be used, they need human assistants: they cannot replace the human workforce, but they can assist it and open up new possibilities. [ 41 ]
Over the past decade, human–robot interaction has shown promising outcomes in autism intervention. [ 43 ] Children with autism spectrum disorders (ASD) are more likely to connect with robots than with humans, and using social robots is considered a beneficial approach to helping these children with ASD. [ 43 ]
However, social robots used to intervene in children's ASD are not viewed as a viable treatment by clinical communities, because studies of social robots in ASD intervention often do not follow standard research protocols. [ 43 ] In addition, research outcomes have not demonstrated a consistent positive effect that could be considered evidence-based practice (EBP) under clinical systematic evaluation. [ 43 ] As a result, researchers have started to establish guidelines on how to conduct studies with robot-mediated intervention and hence produce reliable data that could be treated as EBP, allowing clinicians to choose to use robots in ASD intervention. [ 43 ]
Education robots
Robots can become tutors or peers in the classroom. [ 44 ] When acting as a tutor, the robot can provide instruction, information and also individual attention to students. When acting as a peer learner, the robot can enable "learning by teaching" for students. [ 45 ]
Robots can be configured as collaborative robots and used for rehabilitation of users with motor impairment. Using various interactive technologies such as automatic speech recognition and eye-gaze tracking , users with motor impairment can control robotic agents and use them for rehabilitation activities such as powered wheelchair control and object manipulation.
A specific example of human–robot interaction is human-vehicle interaction in automated driving. The goal of human-vehicle cooperation is to ensure safety, security, and comfort in automated driving systems . [ 46 ] The continued improvement of these systems and the progress towards highly and fully automated vehicles aim to make the driving experience safer and more efficient, so that humans do not need to intervene in the driving process when there is an unexpected driving condition, such as a pedestrian crossing the street where they are not supposed to. [ 46 ]
Unmanned aerial vehicles (UAV) and unmanned underwater vehicles (UUV) have the potential to assist search and rescue work in wilderness areas , such as locating a missing person remotely from the evidence that they left in surrounding areas. [ 47 ] [ 48 ] The system integrates autonomy and information, such as coverage maps , GPS information and quality search video, to support humans performing the search and rescue work efficiently in the given limited time. [ 47 ] [ 48 ]
Humans have been working on achieving the next breakthrough in space exploration, such as a crewed mission to Mars. [ 49 ] This challenge identified the need for developing planetary rovers that are able to assist astronauts and support their operations during their mission. [ 49 ] The collaboration between rovers, UAVs, and humans enables leveraging capabilities from all sides and optimizes task performance. [ 49 ]
Human labor has long been central to agriculture, but agricultural robots such as milking robots have been adopted in large-scale farming. Hygiene is a main issue in the agri-food sector, and the introduction of this technology has widely impacted agriculture. Robots can also be used for tasks that might be hazardous to human health, such as the application of chemicals to plants. [ 50 ]
Bartneck and Okada [ 51 ] suggest that a robotic user interface can be described by the following four properties:
The International Conference on Future Applications of AI, Sensors, and Robotics in Society explores state-of-the-art research, highlighting future challenges as well as the hidden potential behind the technologies. Accepted contributions to this conference are published annually in a special edition of the Journal of Future Robot Life.
The International Conference on Social Robotics is a conference for scientists, researchers, and practitioners to report and discuss the latest progress of their forefront research and findings in social robotics, as well as interactions with human beings and integration into our society.
The International Congress on Love and Sex with Robots is an annual congress that invites and encourages a broad range of topics, such as AI, philosophy, ethics, sociology, engineering, computer science, and bioethics.
The earliest academic papers on the subject were presented at the 2006 E.C. Euron Roboethics Atelier, organized by the School of Robotics in Genoa, followed a year later by the first book – "Love and Sex with Robots" – published by Harper Collins in New York. Since that initial flurry of academic activity in this field, the subject has grown significantly in breadth and worldwide interest. Three conferences on Human–Robot Personal Relationships were held in the Netherlands during the period 2008–2010; in each case, the proceedings were published by respected academic publishers, including Springer-Verlag. After a gap, the conference was renamed in 2014 as the "International Congress on Love and Sex with Robots", which has since taken place at the University of Madeira in 2014, in London in 2016 and 2017, and in Brussels in 2019. Additionally, the Springer-Verlag "International Journal of Social Robotics" had, by 2016, published articles mentioning the subject, and an open access journal called "Lovotics", devoted entirely to the subject, was launched in 2012. The past few years have also witnessed a strong upsurge of interest by way of increased coverage of the subject in the print media, TV documentaries and feature films, as well as within the academic community.
The International Congress on Love and Sex with Robots provides an excellent opportunity for academics and industry professionals to present and discuss their innovative work and ideas in an academic symposium.
This symposium is organized in collaboration with the Annual Convention of the Society for the Study of Artificial Intelligence and Simulation of Behaviour.
The IEEE International Symposium on Robot and Human Interactive Communication ( RO-MAN ) was founded in 1992 by Profs. Toshio Fukuda, Hisato Kobayashi, Hiroshi Harashima and Fumio Hara. Early workshop participants were mostly Japanese, and the first seven workshops were held in Japan. Since 1999, workshops have been held in Europe and the United States as well as Japan, and participation has been of international scope.
This conference is amongst the best in the field of HRI and has a very selective reviewing process. The average acceptance rate is 26% and the average attendance is 187. Around 65% of the contributions to the conference come from the US, and the high quality of the submissions is reflected in the average of 10 citations that HRI papers have attracted so far. [ 52 ]
There are many conferences that are not exclusively HRI, but deal with broad aspects of HRI, and often have HRI papers presented.
There are currently two dedicated HRI journals,
and there are several more general journals in which one will find HRI articles.
There are several books available that specialise on Human–Robot Interaction. While there are several edited books, only a few dedicated texts are available:
Many universities offer courses in Human–Robot Interaction.
There are also online courses available, such as MOOCs :
In mathematics , the Humbert polynomials π^λ_{n,m}(x) are a generalization of the Pincherle polynomials introduced by Humbert ( 1921 ), given by the generating function
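In the convention usually attributed to Humbert (1921) — assumed here, since conventions vary — the generating function is (1 − m·x·t + t^m)^(−λ) = Σₙ π^λ_{n,m}(x) tⁿ, which for m = 2 reduces to the Gegenbauer (ultraspherical) generating function. A minimal sketch computing π^λ_{n,m}(x) by expanding this series with the generalized binomial theorem:

```python
from math import comb

def humbert(n, m, lam, x):
    """π^λ_{n,m}(x): coefficient of t^n in (1 - m*x*t + t^m)^(-λ),
    assuming the standard generating-function convention."""
    total = 0.0
    for k in range(n + 1):
        # generalized binomial coefficient C(-λ, k) = (-λ)(-λ-1)...(-λ-k+1)/k!
        c = 1.0
        for i in range(k):
            c *= (-lam - i) / (i + 1)
        # (t^m - m*x*t)^k = Σ_j C(k, j) * t^(m*j) * (-m*x*t)^(k-j);
        # collect the terms whose total power of t equals n
        for j in range(k + 1):
            if m * j + (k - j) == n:
                total += c * comb(k, j) * (-m * x) ** (k - j)
    return total
```

For m = 2 and λ = 1 this reproduces the Chebyshev polynomials of the second kind, e.g. π^1_{2,2}(x) = 4x² − 1.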
Boas & Buck (1958, p. 58).
Hume's principle or HP says that, given two collections of objects ℱ and 𝒢 with properties F and G respectively, the number of objects with property F is equal to the number of objects with property G if and only if there is a one-to-one correspondence (a bijection) between ℱ and 𝒢 . In other words, bijections are the "correct" way of measuring size.
HP can be stated formally in systems of second-order logic . It is named for the Scottish philosopher David Hume and was coined by George Boolos . The principle plays a central role in Gottlob Frege 's philosophy of mathematics. Frege shows that HP and suitable definitions of arithmetical notions entail all axioms of what we now call second-order arithmetic . This result is known as Frege's theorem , which is the foundation for a philosophy of mathematics known as neo-logicism .
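The second-order statement takes the following standard shape, with "#" as the number-of operator and the right-hand side spelling out that the relation R pairs the Fs and the Gs one-to-one:

```latex
\[
\#F = \#G \;\leftrightarrow\; \exists R \,\Bigl[\,
  \forall x \,\bigl(Fx \rightarrow \exists!\, y\,(Gy \wedge Rxy)\bigr)
  \;\wedge\;
  \forall y \,\bigl(Gy \rightarrow \exists!\, x\,(Fx \wedge Rxy)\bigr)
\,\Bigr]
\]
\]
```

Frege's theorem is the observation that this single biconditional, added to second-order logic with suitable definitions of zero, successor, and natural number, suffices to derive the axioms of second-order arithmetic.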
Hume's Principle appears in Frege's Foundations of Arithmetic (§63), [ 1 ] which quotes from Part III of Book I of David Hume 's A Treatise of Human Nature (1740).
In the treatise, Hume sets out seven fundamental relations between ideas, in particular concerning proportion in quantity or number . He argues that our reasoning about proportion in quantity, as represented by geometry , can never achieve "perfect precision and exactness", since its principles are derived from sense-appearance. He contrasts this with reasoning about number or arithmetic , in which such a precision can be attained:
Algebra and arithmetic [are] the only sciences in which we can carry on a chain of reasoning to any degree of intricacy, and yet preserve a perfect exactness and certainty. We are possessed of a precise standard, by which we can judge of the equality and proportion of numbers; and according as they correspond or not to that standard, we determine their relations, without any possibility of error. When two numbers are so combined, as that the one has always a unit answering to every unit of the other, we pronounce them equal ; and it is for want of such a standard of equality in [spatial] extension, that geometry can scarce be esteemed a perfect and infallible science. (I. III. I.) [ 2 ]
Note Hume's use of the word number in the ancient sense to mean a set or collection of things rather than the common modern notion of "positive integer". The ancient Greek notion of number ( arithmos ) is of a finite plurality composed of units. See Aristotle , Metaphysics , 1020a14 and Euclid , Elements , Book VII, Definition 1 and 2. The contrast between the old and modern conception of number is discussed in detail in Mayberry (2000).
The principle that cardinal number was to be characterized in terms of one-to-one correspondence had previously been used by Georg Cantor , whose writings Frege knew. The suggestion has therefore been made that Hume's principle ought better be called "Cantor's Principle" or "The Hume-Cantor Principle". But Frege criticized Cantor on the ground that Cantor defines cardinal numbers in terms of ordinal numbers , whereas Frege wanted to give a characterization of cardinals that was independent of the ordinals. Cantor's point of view, however, is the one embedded in contemporary theories of transfinite numbers , as developed in axiomatic set theory . | https://en.wikipedia.org/wiki/Hume's_principle |
Hume-Rothery rules , named after William Hume-Rothery , are a set of basic rules that describe the conditions under which an element could dissolve in a metal , forming a solid solution . There are two sets of rules; one refers to substitutional solid solutions, and the other refers to interstitial solid solutions.
For substitutional solid solutions, the Hume-Rothery rules are as follows: the atomic radii of the solute and solvent atoms must differ by no more than about 15%; the crystal structures of the two elements must be the same; the solute and solvent should have similar electronegativity, since a large difference favours the formation of intermetallic compounds instead; and the two elements should have the same valency, a metal dissolving a metal of higher valency more readily than one of lower valency.
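The size-factor criterion commonly associated with the substitutional rules can be expressed as a simple numerical screen. The ~15% mismatch threshold is the textbook value, assumed here, and the radii in the example are standard metallic radii:

```python
def size_factor_ok(r_solvent, r_solute, max_mismatch=0.15):
    """Hume-Rothery size-factor screen: substitutional solubility is
    unfavourable when atomic radii differ by more than ~15%."""
    return abs(r_solute - r_solvent) / r_solvent <= max_mismatch

# Cu (1.28 Å) and Ni (1.25 Å): a classic complete-solid-solubility pair.
# Cu and Pb (1.75 Å): the mismatch is far beyond 15%, and the two metals
# are indeed nearly insoluble in each other in the solid state.
```

A screen like this only rules candidates out; passing the size factor says nothing about the structure, electronegativity, or valency conditions.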
For interstitial solid solutions, the Hume-Rothery rules are: the solute atoms must be small enough to fit into the interstices of the solvent lattice (roughly, a solute atomic radius no more than about 59% of the solvent's), and the solute and solvent should have similar electronegativity.
Fundamentally, the Hume-Rothery rules are restricted to binary systems that form either substitutional or interstitial solid solutions. This approach, however, limits the assessment of advanced alloys, which are commonly multicomponent systems. Free energy diagrams (or phase diagrams ) offer in-depth knowledge of equilibrium constraints in complex systems. In essence, the Hume-Rothery rules (like Pauling's rules ) are based on geometrical constraints, and advances to the rules follow the same path: the rules can be recast as a critical contact criterion describable with Voronoi diagrams . [ 8 ] This could ease the theoretical generation of phase diagrams for multicomponent systems.
For alloys containing transition metal elements there is a difficulty in interpreting the Hume-Rothery electron concentration rule, as the e/a values (number of itinerant electrons per atom) for transition metals have long been controversial, and no satisfactory solutions have yet emerged. [ 9 ] [ 10 ]
Humeanism refers to the philosophy of David Hume and to the tradition of thought inspired by him. Hume was an influential eighteenth century Scottish philosopher well known for his empirical approach, which he applied to various fields in philosophy. [ 1 ] [ 2 ] In the philosophy of science , he is notable for developing the regularity theory of causation , which in its strongest form states that causation is nothing but constant conjunction of certain types of events without any underlying forces responsible for this regularity of conjunction. This is closely connected to his metaphysical thesis that there are no necessary connections between distinct entities . The Humean theory of action defines actions as bodily behavior caused by mental states and processes without the need to refer to an agent responsible for this. The slogan of Hume's theory of practical reason is that "reason is...the slave of the passions". It restricts the sphere of practical reason to instrumental rationality concerning which means to employ to achieve a given end. But it denies reason a direct role regarding which ends to follow. Central to Hume's position in metaethics is the is-ought distinction . It states that is-statements , which concern facts about the natural world, do not imply ought-statements , which are moral or evaluative claims about what should be done or what has value. In philosophy of mind , Hume is well known for his development of the bundle theory of the self. It states that the self is to be understood as a bundle of mental states and not as a substance acting as the bearer of these states, as is the traditional conception. Many of these positions were initially motivated by Hume's empirical outlook . It emphasizes the need to ground one's theories in experience and faults opposing theories for failing to do so. 
But many philosophers within the Humean tradition have gone beyond these methodological restrictions and have drawn various metaphysical conclusions from Hume's ideas.
Causality is usually understood as a relation between two events where the earlier event is responsible for bringing about or necessitating the later event. [ 3 ] Hume's account of causality has been influential. His first question is how to categorize causal relations. On his view, they belong either to relations of ideas or matters of fact . This distinction is referred to as Hume's fork . [ 4 ] Relations of ideas involve necessary connections that are knowable a priori independently of experience. Matters of fact , on the other hand, concern contingent propositions about the world knowable only a posteriori through perception and memory. [ 1 ] [ 5 ] Causal relations fall under the category of matters of fact, according to Hume, since it is conceivable that they do not obtain, which would not be the case if they were necessary. For Hume's empiricist outlook , this means that causal relations should be studied by attending to sensory experience. [ 1 ] [ 5 ] The problem with this is that the causal relation itself is never given directly in perception. Through visual perception, for example, we can know that a stone was first thrown in the direction of a window and that subsequently, the window broke, but we do not directly see that the throwing caused the breaking. This leads to Hume's skeptical conclusion: that, strictly speaking, we do not know that a causal relation was involved. [ 1 ] [ 5 ] Instead, we just assume it based on earlier experiences that had very similar chains of events as their contents. This results in a habit of expecting the later event given the impression of the earlier one. On the metaphysical level, this conclusion has often been interpreted as the thesis that causation is nothing but constant conjunction of certain types of events. This is sometimes termed the "simple regularity theory of causation". [ 1 ] [ 5 ] [ 6 ]
A closely related metaphysical thesis is known as Hume's dictum : "[t]here is no object, which implies the existence of any other if we consider these objects in themselves". [ 7 ] Jessica Wilson provides the following contemporary formulation: "[t]here are no metaphysically necessary connections between wholly distinct, intrinsically typed, entities". [ 8 ] Hume's intuition motivating this thesis is that while experience presents us with certain ideas of various objects, it might as well have presented us with very different ideas. So when I perceive a bird on a tree, I might as well have perceived a bird without a tree or a tree without a bird. This is so because their essences do not depend upon one another. [ 7 ] Followers and interpreters of Hume have sometimes used Hume's dictum as the metaphysical foundation of Hume's theory of causation . On this view, there cannot be any causal relation in a robust sense since this would involve one event necessitating another event, the possibility of which is denied by Hume's dictum. [ 8 ] [ 9 ]
Hume's dictum has been employed in various arguments in contemporary metaphysics . It can be used, for example, as an argument against nomological necessitarianism , the view that the laws of nature are necessary, i.e. are the same in all possible worlds . [ 10 ] [ 11 ] To see how this might work, consider the case of salt being thrown into a cup of water and subsequently dissolving. [ 12 ] This can be described as a series of two events, a throwing-event and a dissolving-event. Necessitarians hold that all possible worlds with the throwing-event also contain a subsequent dissolving-event. But the two events are distinct entities, so according to Hume's dictum, it is possible to have one event without the other. David Lewis follows this line of thought in formulating his principle of recombination : "anything can coexist with anything else, at least provided they occupy distinct spatiotemporal positions. Likewise, anything can fail to coexist with anything else". [ 13 ] Combined with the assumption that reality consists on the most fundamental level of nothing but a spatio-temporal distribution of local natural properties, this thesis is known as " Humean supervenience ". It states that laws of nature and causal relations merely supervene on this distribution of local natural properties. [ 14 ] [ 15 ] An even wider application is to use Hume's dictum as the foundational principle determining which propositions or worlds are possible and which are impossible based on the notion of recombination. [ 16 ] [ 17 ]
Not all interpreters agree that the reductive metaphysical outlook on causation of the Humean tradition presented in the last paragraphs actually reflects Hume's own position. [ 18 ] [ 19 ] [ 6 ] Some argue against the metaphysical aspect , instead claiming that Hume's view concerning causality remained within the field of epistemology as a skeptical position on the possibility of knowing about causal relations. Others, sometimes referred to as the "New Hume tradition", reject the reductive aspect by holding that Hume was, despite his skeptical outlook, a robust realist about causation. [ 18 ] [ 19 ]
Theories of action try to determine what actions are, specifically their essential features. One important feature of actions , which sets them apart from mere behavior , is that they are intentional or guided "under an idea". [ 20 ] [ 21 ] On this issue, Hume's analysis of action emphasizes the role of psychological faculties and states, like reasoning, sensation, memory, and passion. It is characteristic of his outlook that it manages to define action without reference to an agent . Agency arises instead from psychological states and processes like beliefs, desires and deliberation. [ 20 ] [ 22 ] [ 23 ] Some actions are initiated upon concluding an explicit deliberation on which course of action to take. But for many other actions, this is not the case. Hume infers from this that " acts of the will " are not a necessary requirement for actions. [ 20 ]
The most prominent philosopher of action in the Humean tradition is Donald Davidson . Following Hume in defining actions without reference to an agent, he holds that actions are bodily movements that are caused by intentions. [ 24 ] The intentions themselves are explained in terms of beliefs and desires . [ 21 ] For example, the action of flipping a light switch rests, on the one hand, on the agent's belief that this bodily movement would turn on the light and, on the other hand, on the desire to have light. [ 25 ] According to Davidson, it is not just the bodily behavior that counts as the action but also the consequences that follow from it. So the movement of the finger flipping the switch is part of the action as well as the electrons moving through the wire and the light bulb turning on. Some consequences are included in the action even though the agent did not intend them to happen. [ 26 ] [ 27 ] It is sufficient that what the agent does "can be described under an aspect that makes it intentional". [ 28 ] [ 27 ] So, for example, if flipping the light switch alerts the burglar then alerting the burglar is part of the agent's actions. [ 21 ]
One important objection to Davidson's and similar Humean theories focuses on the central role assigned to causation in defining action as bodily behavior caused by intention. The problem has been referred to as wayward or deviant causal chains. [ 29 ] A causal chain is wayward if the intention caused its goal to be realized, but in a very unusual way that was not intended, e.g. because the skills of the agent are not exercised in the way planned. [ 21 ] For example, a rock climber forms the intention to kill the climber below him by letting go of the rope. A wayward causal chain would be that, instead of opening the holding hand intentionally, the intention makes the first climber so nervous that the rope slips through his hand and thus leads to the other climber's death. [ 30 ] Davidson addresses this issue by excluding cases of wayward causation from his account since they are not examples of intentional behavior in the strict sense. So bodily behavior only constitutes an action if it was caused by intentions in the right way . But this response has been criticized because of its vagueness since spelling out what "right way" means has proved rather difficult. [ 31 ] [ 32 ]
The slogan of Hume's theory of practical reason is that "reason is...the slave of the passions". [ 22 ] It expresses the idea that it is the function of practical reason to find the means for realizing pre-given ends . Important for this issue is the distinction between means and ends . [ 33 ] Ends are based on intrinsic desires , which are about things that are wanted for their own sake or are valuable in themselves . Means , on the other hand, are based on instrumental desires which want something for the sake of something else and thereby depend on other desires. [ 34 ] [ 35 ] So on this view, practical reason is about how to achieve something but it does not concern itself with what should be achieved. [ 36 ] What should be achieved is determined by the agent's intrinsic desires. This may vary a lot from person to person since different people want very different things. [ 20 ]
In contemporary philosophy, Hume's theory of practical reason is often understood in terms of norms of rationality . [ 20 ] On the one hand, it is the thesis that we should be motivated to employ the means necessary for the ends we have. Failing to do so would be irrational. [ 36 ] Expressed in terms of practical reasons, it states that if an agent has a reason to realize an end, this reason is transmitted from the end to the means, i.e. the agent also has a derivative reason to employ the means. [ 22 ] [ 37 ] This thesis is seldom contested since it seems quite intuitive. Failing to follow this requirement is a form of error, not only when judged from an external perspective, but even from the agent's own perspective: the agent cannot plead that he does not care since he already has a desire for the corresponding end. [ 22 ] [ 20 ]
On the other hand, contemporary Humeanism about practical reason includes the assertion that only our desires determine which initial reasons we have. [ 22 ] [ 36 ] [ 38 ] So having a desire to swim at the beach provides the agent with a reason to do so, which in turn provides him with a reason to travel to the beach. On this view, whether the agent has this desire is not a matter of being rational or not. Rationality just requires that an agent who wants to swim at the beach should be motivated to travel there. This thesis has proved most controversial. [ 22 ] Some have argued that desires do not provide reasons at all, or only in special cases. This position is often combined with an externalist view of rationality: that reasons are given not from the agent's psychological states but from objective facts about the world, for example, from what would be objectively best. [ 39 ] [ 40 ] This is reflected, for example, in the view that some desires are bad or irrational and can be criticized on these grounds. [ 36 ] On this position, psychological states like desires may be motivational reasons , which move the agent, but not normative reasons , which determine what should be done. [ 41 ] [ 42 ] Others allow that desires provide reasons in the relevant sense but deny that this role is played only by desires. So there may be other psychological states or processes , like evaluative beliefs or deliberation, that also determine what we should do. [ 43 ] This can be combined with the thesis that practical reason has something to say about which ends we should follow, for example, by having an impact either on these other states or on desires directly. [ 20 ]
A common dispute between Humeans and Anti-Humeans in the field of practical reason concerns the status of morality . Anti-Humeans often assert that everyone has a reason to be moral. [ 22 ] But this seems to be incompatible with the Humean position, according to which reasons depend on desires and not everyone has a desire to be moral. This poses the following threat: it may lead to cases where an agent simply justifies his immoral actions by pointing out that he had no desire to be moral. [ 20 ] One way to respond to this problem is to draw a clear distinction between rationality and morality. If rationality is concerned with what should be done according to the agent's own perspective then it may well be rational to act immorally in cases when the agent lacks moral desires. Such actions are then rationally justified but immoral nonetheless. [ 22 ] But it is a contested issue whether there really is such a gap between rationality and morality. [ 44 ]
Central to Hume's position in metaethics is the is-ought distinction . It is guided by the idea that there is an important difference between is-statements , which concern facts about the natural world, and ought-statements , which are moral or evaluative claims about what should be done or what has value. The key aspect of this difference is that is-statements do not imply ought-statements . [ 45 ] [ 46 ] [ 47 ] [ 48 ] This is important, according to Hume, because this type of mistaken inference has been a frequent source of error in the history of philosophy. Based on this distinction, interpreters have often attributed various related philosophical theses to Hume in relation to contemporary debates in metaethics. [ 45 ] [ 46 ] One of these theses concerns the dispute between cognitivism and non-cognitivism . Cognitivists assert that ought-statements are truth-apt , i.e. are either true or false. They resemble is-statements in this sense, which is rejected by non-cognitivists. [ 49 ] [ 50 ] Some non-cognitivists deny that ought-statements have meaning at all, although the more common approach is to account for their meaning in other ways. Prescriptivists treat ought-statements as prescriptions or commands, which are meaningful without having a truth-value. [ 51 ] Emotivists , on the other hand, hold that ought-statements merely express the speaker's emotional attitudes in the form of approval or disapproval. [ 52 ] The debate between cognitivism and non-cognitivism concerns the semantic level about the meaning and truth-value of statements. It is reflected on the metaphysical level as the dispute about whether normative facts about what should be the case are part of reality, as realists claim, or not, as anti-realists contend. [ 53 ] [ 54 ] Based on Hume's denial that ought-statements are about facts, he is usually interpreted as an anti-realist. 
[ 46 ] But interpreters of Hume have raised various doubts about labeling him either an anti-realist or a non-cognitivist. [ 47 ]
In philosophy of mind, Hume is well known for his development of the bundle theory of the self. [ 55 ] [ 56 ] [ 57 ] In his analyses, he uses the terms "self", "mind" and "person" interchangeably. [ 58 ] He denies the traditional conception, usually associated with René Descartes , that the mind is constituted by a substance or an immaterial soul that acts as the bearer of all its mental states. [ 57 ] The key to Hume's critique of this conception comes from his empirical outlook : that such a substance is never given as part of our experience. Instead, introspection only shows a manifold of mental states, referred to by Hume as "perceptions". [ 58 ] [ 59 ] For Hume, this epistemic finding implies a semantic conclusion: that the words "mind" or "self" cannot mean substance of mental states but must mean bundle of perceptions . This is the case because, according to Hume, words are associated with ideas and ideas are based on impressions. So without impressions of a mental substance, we lack the corresponding idea. [ 58 ] Hume's theory is often interpreted as involving an ontological claim about what selves actually are, which goes beyond the semantic claim about what the word "self" means. But others contend that this constitutes a misinterpretation of Hume since he restricts his claims to the epistemic and semantic level. [ 59 ]
One problem for the bundle theory of the self is how to account for the unity of the self. This is usually understood in terms of diachronic unity , i.e. how the mind is unified with itself at different times or how it persists through time. But it can also be understood in terms of synchronic unity, i.e. how at one specific time, there is unity among the different mental states had by the same subject. [ 55 ] [ 57 ] A substance, unlike a simple collection, can explain either type of unity. This is why bundles are not equated with mere collections, the difference being that the bundled elements are linked to each other by a relation often referred to as "compresence", "co-personality" or "co-consciousness". Hume tried to understand this relation in terms of resemblance and causality . [ 55 ] [ 56 ] On this account, two perceptions belong to the same mind if they resemble each other and/or stand in the right causal relations to each other. Hume's particular version of this approach is usually rejected, but there are various other proposals on how to solve this problem compatible with the bundle theory. They include accounting for the unity in terms of psychological continuity or seeing it as a primitive aspect of the compresence-relation . [ 60 ] [ 61 ] [ 57 ] | https://en.wikipedia.org/wiki/Humeanism |
Humic substances ( HS ) are colored, relatively recalcitrant organic compounds naturally formed during the long-term decomposition and transformation of biomass residues. The color of humic substances varies from bright yellow through light or dark brown to black. The term comes from humus , which in turn comes from the Latin word humus , meaning "soil, earth". [ 1 ] Humic substances represent the major part of organic matter in soil , peat , coal , and sediments , and are important components of dissolved natural organic matter (NOM) in lakes (especially dystrophic lakes ), rivers, and sea water . Humic substances account for 50–90% of the cation exchange capacity in soils.
"Humic substances" is an umbrella term covering humic acid, fulvic acid and humin, which differ in solubility. By definition, humic acid (HA) is soluble in water at neutral and alkaline pH, but insoluble at acidic pH < 2. Fulvic acid (FA) is soluble in water at any pH. Humin is not soluble in water at any pH.
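The solubility scheme above is a simple decision rule, which can be sketched as a small classifier. This is only an illustrative sketch of the operational definition; the function name and boolean inputs are hypothetical, not from any standard library.

```python
def classify_humic_fraction(soluble_at_ph2: bool, soluble_at_ph7: bool) -> str:
    """Classify a humic-substance fraction by its water solubility.

    Follows the operational definition in the text:
    - fulvic acid: soluble at any pH
    - humic acid: soluble at neutral/alkaline pH, insoluble at pH < 2
    - humin: insoluble at any pH
    """
    if soluble_at_ph2 and soluble_at_ph7:
        return "fulvic acid"
    if soluble_at_ph7:
        return "humic acid"
    if not soluble_at_ph2:
        return "humin"
    return "unclassified"  # soluble only at acidic pH: outside the scheme

print(classify_humic_fraction(True, True))    # fulvic acid
print(classify_humic_fraction(False, True))   # humic acid
print(classify_humic_fraction(False, False))  # humin
```

The point of the sketch is that the three fractions are distinguished operationally, by behavior under extraction, rather than by distinct chemical structures.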
This definition of humic substances is largely operational. It is rooted in the history of soil science and, more precisely, in the tradition of alkaline extraction, which dates back to 1786, when Franz Karl Achard treated peat with a solution of potassium hydroxide and, after subsequent addition of an acid, obtained an amorphous dark precipitate (i.e., humic acid). Aquatic humic substances were isolated for the first time in 1806, from spring water by Jöns Jakob Berzelius .
In terms of chemistry, FA, HA, and humin share more similarities than differences and represent a continuum of humic molecules. All of them are constructed from similar aromatic , polyaromatic , aliphatic , and carbohydrate units and contain the same functional groups (mainly carboxylic , phenolic , and ester groups), albeit in varying proportions.
Water solubility of humic substances is primarily governed by the interplay of two factors: the amount of ionizable functional groups (mainly carboxylic) and molecular weight (MW). In general, fulvic acid has a higher amount of carboxylic groups and a lower average molecular weight than does humic acid. Measured average molecular weights vary with source; however, the molecular weight distributions of HA and FA overlap significantly.
Age and origin of the source material determine the chemical structure of humic substances. In general, humic substances derived from soil and peat (which take hundreds to thousands of years to form) have higher molecular weight, higher amounts of O and N, more carbohydrate units, and fewer polyaromatic units than humic substances derived from coal and leonardite (which take millions of years to form).
Isolation of HS is the result of an alkaline extraction from solid sources of NOM or of the adsorption of HS on a resin. [ 2 ] [ 3 ] [ 4 ] A newer view of humic substances is that they are not mostly high-molecular-weight macropolymers but rather a heterogeneous mixture of relatively small molecular components of the soil organic matter, auto-assembled in supramolecular associations and composed of a variety of compounds of biological origin, synthesized by abiotic and biotic reactions in soil and surface waters. [ 5 ] It is the large molecular complexity of the soil humeome [ 6 ] that confers on humic matter its bioactivity, its stability in soil ecosystems, and its role as a plant growth promoter (in particular of plant roots). [ 7 ]
The academic definition of humic substances is under debate: some researchers argue against the traditional concepts of humification and seek to forgo the alkali-extraction method and analyze the soil directly. [ 8 ]
The formation of HS in nature is one of the least understood aspects of humus chemistry and one of the most intriguing. Historically, there have been three main theories to explain it: the lignin theory of Waksman (1932), the polyphenol theory, and the sugar-amine condensation theory of Maillard (1911). [ 9 ] [ 10 ] Humic substances are formed by the microbial degradation of dead biotic matter , such as lignin , cellulose , ligno-cellulose, and charcoal . [ 11 ] [ 12 ] Laboratory-extracted humic substances are resistant to further biodegradation. Their structure, elemental composition, and content of functional groups in a given sample depend on the water or soil source and the specific procedures and conditions of extraction. Nevertheless, the average properties of laboratory-extracted HS from different sources are remarkably similar.
Historically, scientists have used variations of similar methods for extracting HS from NOM and separating the extracts into HA and FA. The International Humic Substances Society advocates using standard laboratory methods to prepare humic and fulvic acids. Humic substances are extracted from soil and other solid sources using 0.1 M NaOH under a nitrogen atmosphere, to prevent abiotic oxidation of some of the components of HS. The HA is then precipitated at pH 1. The soluble fraction is treated on a resin column to separate fulvic acid components from other acid-soluble compounds. [ 13 ] The fraction of NOM not extracted by 0.1 M NaOH is humin. Humic and fulvic acid are extracted from natural waters using a resin column after microfiltration and acidification to pH 2. The humic materials are eluted from the column with NaOH, and humic acid is precipitated at pH 1. After adjusting the pH to 2, fulvic acid is separated from other acid-soluble compounds, using a resin column as with solid-phase sources. [ 14 ] An analytical method for quantifying humic acid and fulvic acid in commercial ores and humic products has been developed based on the IHSS humic acid and fulvic acid preparation methods. [ 15 ]
Scientists associated with the IHSS have also isolated the entire NOM from blackwater rivers using reverse osmosis . The retentate from this process contains both humic and fulvic acids, predominantly humic acid. The NOM from hard water streams has been isolated using reverse osmosis and electrodialysis in tandem. [ 16 ]
Extracted humic acid is not a single acid ; instead, it is a complex mixture of many different acids containing carboxyl and phenolate groups, so that the mixture behaves functionally as a dibasic acid or, occasionally, as a tribasic acid . Commercial humic acid used to amend soil is manufactured using these well-established procedures. Humic acids can form complexes with ions that are commonly found in the environment, creating humic colloids . [ 17 ]
A sequential chemical fractionation can isolate more homogeneous humic fractions and determine their molecular structures by advanced spectroscopic and chromatographic methods. [ 18 ] Substances identified in humic extracts and directly in soil include mono-, di-, and tri- hydroxycarboxylic acids , fatty acids , dicarboxylic acids , linear alcohols, phenolic acids , terpenoids , carbohydrates, and amino acids. [ 19 ] This suggests that humic molecules may form supramolecular structures held together by non-covalent forces, such as van der Waals forces , π-π , and CH-π bonds. [ 20 ]
Since the dawn of modern chemistry, humic substances have been among the most studied natural materials. Despite long study, their molecular structure remains debatable. The traditional view has been that humic substances are hetero- poly-condensates, in varying associations with clay. [ 21 ] A more recent view is that relatively small molecules also play a major role. [ 20 ]
A typical humic substance is a mixture of many molecules, some of which are based on a motif of aromatic nuclei with phenolic and carboxylic substituents, linked together. The functional groups that contribute most to the surface charge and reactivity of humic substances are phenolic and carboxylic groups. Humic substances commonly behave as mixtures of dibasic acids, with a pK 1 value around 4 for protonation of carboxyl groups and around 8 for protonation of phenolate groups in HA. Fulvic acids are more acidic than HA. There is considerable overall similarity among individual humic acids. For this reason, measured pK values for a given sample are average values relating to the constituent species. The other important characteristic is charge density . [ 22 ]
More recent determinations of the molecular weights of HS show that they are not as great as once thought. Reported number-average molecular weights of soil HA are < 6000, but the samples are highly polydisperse, with some components of much larger measured molecular weight and some much lower. [ 23 ] Measured number-average molecular weights of aquatic HS are lower still, with HA ≤ 1700 and FA < 900. [ 23 ] The aquatic HA and FA are also highly polydisperse. The number of individually distinct components in HS, as measured by mass spectrometry, is in the thousands. The average composition of HA and FA can be represented by model structures.
The presence of carboxylate and phenolate groups gives the humic acids the ability to form complexes with ions such as Mg 2+ , Ca 2+ , Fe 2+ , and Fe 3+ creating humic colloids . Many humic acids have two or more of these groups arranged so as to enable the formation of chelate complexes. [ 24 ] The formation of (chelate) complexes is an important aspect of the biological role of humic acids in regulating bioavailability of metal ions. [ 25 ]
Criticism
Decomposition products of dead plant materials form intimate associations with minerals, making it difficult to isolate and characterize soil organic constituents. 18th-century soil chemists successfully used alkaline extraction to isolate a portion of the organic constituents in soil. This led to the theory that a 'humification' process created distinct 'humic substances' like 'humic acid', 'fulvic acid', and 'humin'. [ 8 ] However, modern chemical analysis methods applied to unprocessed mineral soil have not directly observed large humic molecules. This suggests that the extraction and fractionation techniques used to isolate humic substances alter the original chemical composition of the organic matter. Since the definition of humic substances like humic and fulvic acids relies on their separation through these methods, it raises the question of whether the distinction between these compounds accurately reflects the natural state of organic matter in soil. [ 26 ] Despite these concerns, the 'humification' theory persists in the field and even in textbooks, and attempts to redefine 'humic substances' in soil have resulted in a proliferation of conflicting definitions. This lack of consensus makes it difficult to communicate scientific understanding of soil processes and properties accurately. [ 8 ]
The presence of humic acid in water intended for potable or industrial use can have a significant impact on the treatability of that water and the success of chemical disinfection processes. For instance, humic and fulvic acids can react with the chemicals used in the chlorination process to form disinfection byproducts such as dihaloacetonitriles, which are toxic to humans. [ 27 ] [ 28 ] Accurate methods of establishing humic acid concentrations are therefore essential in maintaining water supplies, especially from upland peaty catchments in temperate climates.
Because many different bio-organic molecules, in very diverse physical associations, are mixed together in natural environments, it is cumbersome to measure their exact concentrations in the humic superstructure. For this reason, concentrations of humic acid are traditionally estimated from concentrations of organic matter, typically from concentrations of total organic carbon (TOC) or dissolved organic carbon (DOC).
Extraction procedures are bound to alter some of the chemical linkages present in the soil humic substances (mainly ester bonds in biopolyesters such as cutins and suberins). The humic extracts are composed of large numbers of different bio-organic molecules that have not yet been totally separated and identified. However, single classes of residual biomolecules have been identified by selective extractions and chemical fractionation, and are represented by alkanoic and hydroxy alkanoic acids, resins, waxes, lignin residues, sugars, and peptides.
Organic matter soil amendments have been known by farmers to be beneficial to plant growth for longer than recorded history. [ 29 ] However, the chemistry and function of the organic matter have been a subject of controversy since humans began postulating about it in the 18th century. Until the time of Liebig , it was supposed that humus was used directly by plants, but, after Liebig showed that plant growth depends upon inorganic compounds, many soil scientists held the view that organic matter was useful for fertility only as it was broken down with the release of its constituent nutrient elements into inorganic forms.
At the present time, soil scientists hold a more holistic view and at least recognize that humus influences soil fertility through its effect on the water-holding capacity of the soil. Also, since plants have been shown to absorb and translocate the complex organic molecules of systemic insecticides , they can no longer discredit the idea that plants may be able to absorb the soluble forms of humus; [ 30 ] this may in fact be an essential process for the uptake of otherwise insoluble iron oxides.
A study on the effects of humic acid on plant growth was conducted at Ohio State University which said in part "humic acids increased plant growth" and that there were "relatively large responses at low application rates". [ 31 ]
A 1998 study by scientists at the North Carolina State University College of Agriculture and Life Sciences showed that addition of humate to soil significantly increased root mass in creeping bentgrass turf. [ 32 ] [ 33 ]
A 2018 study by scientists at the University of Alberta showed that humic acids can reduce prion infectivity in laboratory experiments, but that this effect may be uncertain in the environment due to minerals in the soil that buffer the effect. [ 34 ]
Humans can affect the production of humic substances in a variety of ways: by making use of natural processes such as composting lignin or adding biochar (see soil rehabilitation ), or by industrial synthesis of artificial humic substances directly from organic feedstocks. These artificial substances may be similarly divided into artificial humic acid (A-HA) and artificial fulvic acid (A-FA). [ 35 ] [ 36 ]
A more recent process known as hydrothermal humification and fulvification [ 37 ] allows the conversion of a wide range of biomass and biogenic residues into artificial humin , A-HA , and A-FA under controlled temperature (180 °C–250 °C) and autogenic pressure, similar to hydrothermal carbonization but in an alkaline solution. This results in the autoneutralization of the reaction medium through the conversion of the biomass components ( cellulose , hemicellulose , and lignin ) within minutes to hours, compared to the years the process takes in nature. [ 38 ] [ 39 ] This method enables rapid and tunable production of artificial humic substances while retaining critical functional groups important for soil health, carbon sequestration , and plant growth stimulation. [ 40 ] Artificial humic acids have also been shown to mitigate the negative effects of drought on soil microbial communities, supporting microbial diversity and functionality under stress conditions. [ 41 ] The synthesized humic substances, produced within a few hours, were successfully applied to save a 160-year-old beech tree in Park Sanssouci , Potsdam , Germany, which was under stress due to water scarcity and the sandy soil conditions typical of Brandenburg . [ 42 ]
Lignosulfonates , a by-product from the sulfite pulping of wood, are valorized in the industrial fabrication of concrete, where they serve as a water reducer , or concrete superplasticizer , to decrease the water-cement ratio (w/c) of fresh concrete while preserving its workability. The w/c ratio of concrete is one of the main parameters controlling the mechanical strength of hardened concrete and its durability. The same wood pulping process can also be applied to obtain humus-like substances by hydrolysis and oxidation . A kind of artificial "lignohumate" can be directly produced from wood in this way. [ 43 ]
Agricultural litter can be turned into an artificial humic substance by a hydrothermal reaction . The resulting mixture can increase the content of dissolved organic matter (DOM) and total organic carbon (TOC) in soil. [ 36 ]
Lignite (brown coal) may also be oxidized to produce humic substances, reversing the natural process of coal formation under anoxic and reducing conditions . This form of "mineral-derived fulvic acid" is widely used in China. [ 44 ] This process also occurs in nature, producing leonardite . [ 45 ]
In economic geology , the term humate refers to geological materials, such as weathered coal beds (leonardite), mudrock , or pore material in sandstones , that are rich in humic acids. Humate has been mined from the Fruitland Formation of New Mexico for use as a soil amendment since the 1970s, with nearly 60,000 metric tons produced by 2016. [ 46 ] Humate deposits may also play an important role in the genesis of uranium ore bodies. [ 47 ]
The heavy-metal binding abilities of humic acids have been exploited to develop remediation technologies for removing lead from wastewater. To this end, Yurishcheva et al. coated magnetic nanoparticles with humic acids. After capturing lead ions, the nanoparticles can then be captured using a magnet. [ 48 ]
Archaeological finds show that ancient Egyptians used mudbricks reinforced with straw and humic acids. [ 49 ] | https://en.wikipedia.org/wiki/Humic_substance |
Humidity is the concentration of water vapor present in the air. Water vapor, the gaseous state of water, is generally invisible to the human eye. [ 2 ] Humidity indicates the likelihood of precipitation , dew , or fog being present.
Humidity depends on the temperature and pressure of the system of interest. The same amount of water vapor results in higher relative humidity in cool air than in warm air. A related parameter is the dew point . The amount of water vapor needed to achieve saturation increases as the temperature increases. As the temperature of a parcel of air decreases, it will eventually reach the saturation point without adding or losing water mass. The amount of water vapor contained within a parcel of air can vary significantly. For example, a parcel of air near saturation may contain 8 g of water per cubic metre of air at 8 °C (46 °F), and 28 g of water per cubic metre of air at 30 °C (86 °F).
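The saturation figures quoted above can be roughly reproduced from the saturation vapor pressure and the ideal gas law. The sketch below uses the Magnus approximation for saturation vapor pressure; its coefficients (611.2 Pa, 17.625, 243.04 °C) are an assumed empirical formula, not from the text.

```python
import math

def saturation_vapor_density(t_celsius: float) -> float:
    """Approximate saturation water-vapor density in g/m^3.

    Uses the Magnus approximation for saturation vapor pressure
    (assumed coefficients) and the ideal gas law for water vapor.
    """
    e_s_pa = 611.2 * math.exp(17.625 * t_celsius / (t_celsius + 243.04))
    r_v = 461.5  # specific gas constant of water vapor, J/(kg K)
    t_kelvin = t_celsius + 273.15
    return 1000.0 * e_s_pa / (r_v * t_kelvin)  # kg/m^3 -> g/m^3

print(round(saturation_vapor_density(8), 1))   # ≈ 8.3 g/m^3
print(round(saturation_vapor_density(30), 1))  # ≈ 30.3 g/m^3
```

These computed saturation values are consistent with the "near saturation" figures quoted above (roughly 8 g/m³ at 8 °C and just under 30 g/m³ at 30 °C).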
Three primary measurements of humidity are widely employed: absolute, relative, and specific. Absolute humidity is expressed as either mass of water vapor per volume of moist air (in grams per cubic meter) [ 3 ] or as mass of water vapor per mass of dry air (usually in grams per kilogram). [ 4 ] Relative humidity , often expressed as a percentage, indicates a present state of absolute humidity relative to a maximum humidity given the same temperature. Specific humidity is the ratio of water vapor mass to total moist air parcel mass.
Humidity plays an important role for surface life. For animal life dependent on perspiration (sweating) to regulate internal body temperature, high humidity impairs heat exchange efficiency by reducing the rate of moisture evaporation from skin surfaces. This effect can be calculated using a heat index table, or alternatively with the similar humidex .
The notion of air "holding" water vapor or being "saturated" by it is often mentioned in connection with the concept of relative humidity. This, however, is misleading—the amount of water vapor that enters (or can enter) a given space at a given temperature is almost independent of the amount of air (nitrogen, oxygen, etc.) that is present. Indeed, a vacuum has approximately the same equilibrium capacity to hold water vapor as the same volume filled with air; both are given by the equilibrium vapor pressure of water at the given temperature. [ 5 ] [ 6 ] There is a very small difference described under "Enhancement factor" below, which can be neglected in many calculations unless great accuracy is required.
Absolute humidity is the total mass of water vapor (gas form of water) present in a given volume or mass of air. It does not take temperature into consideration. Absolute humidity in the atmosphere ranges from near zero to roughly 30 g (1.1 oz) per cubic metre when the air is saturated at 30 °C (86 °F). [ 8 ] [ 9 ]
Air is a gas, and its volume varies with pressure and temperature, per the combined gas law . Absolute humidity is defined as water mass per volume of air. A given mass of air will grow or shrink as the temperature or pressure varies. So the absolute humidity of a mass of air will vary due to changes in temperature or pressure, even when the proportion of water in that mass of air (its specific humidity ) remains constant. This makes the term absolute humidity as defined not ideal for some situations.
Absolute humidity is the mass of the water vapor ( m_H2O ) divided by the volume of the air and water vapor mixture ( V_net ), which can be expressed as:

AH = m_H2O / V_net

In the equation above, if the volume is not set, the absolute humidity varies with changes in air temperature or pressure. Because of this variability, use of the term absolute humidity as defined is inappropriate for computations in chemical engineering, such as drying, where temperature variations might be significant. As a result, absolute humidity in chemical engineering may refer to mass of water vapor per unit mass of dry air, also known as the humidity ratio or mass mixing ratio (see "specific humidity" below), which is better suited for heat and mass balance calculations. [ citation needed ] Mass of water per unit volume as in the equation above is also defined as volumetric humidity . Because of the potential confusion, British Standard BS 1339 [ 10 ] suggests avoiding the term "absolute humidity". Units should always be carefully checked. Many humidity charts are given in g/kg or kg/kg, but any mass units may be used.
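The two quantities just discussed, volumetric absolute humidity and the mass mixing ratio preferred by BS 1339, can be sketched as follows; the function names are illustrative, not from any library.

```python
def absolute_humidity(mass_water_vapor_g: float, volume_m3: float) -> float:
    """Volumetric absolute humidity AH = m_H2O / V_net, in g/m^3."""
    return mass_water_vapor_g / volume_m3

def humidity_ratio(mass_water_vapor_g: float, mass_dry_air_kg: float) -> float:
    """Mass mixing ratio (g water vapor per kg dry air), the quantity
    better suited for heat and mass balance calculations."""
    return mass_water_vapor_g / mass_dry_air_kg

# 5 g of water vapor in a 0.4 m^3 chamber:
print(absolute_humidity(5.0, 0.4))  # 12.5 g/m^3
# the same 5 g mixed into 0.5 kg of dry air:
print(humidity_ratio(5.0, 0.5))     # 10.0 g/kg
```

The mixing ratio is invariant as the parcel expands or contracts, whereas the volumetric value changes with temperature and pressure, which is exactly the caveat raised above.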
Relative humidity is the ratio of how much water vapour is in the air to how much water vapour the air could potentially contain at a given temperature and pressure.
If a sample of humid air at temperature $T_1$ contains water vapour with partial pressure $P_w$, the relative humidity $RH$ is: [ 11 ]

$RH = \dfrac{P_w}{P_s} \times 100\%$

where $P_s$ is the saturation pressure of water at temperature $T_1$.
Relative humidity varies with any change in the temperature or pressure of the air: colder air can contain less vapour, and water will tend to condense out of the air more at lower temperatures. So changing the temperature of air can change the relative humidity, even when the specific humidity remains constant. If two parcels of air have the same specific humidity and temperature but different pressures, the parcel at the higher pressure will have the higher relative humidity.
Cooling air increases the relative humidity. If the relative humidity rises to 100% (the dew point ) and there is an available surface or particle, the water vapour will condense into liquid or deposit as ice. Likewise, warming air decreases the relative humidity, so warming air that contains a fog may cause the fog to evaporate : as the temperature rises, the saturation vapour pressure increases and the droplets become prone to total evaporation.
Relative humidity only considers the invisible water vapour. Mists, clouds, fogs and aerosols of water do not count towards the measure of relative humidity of the air, although their presence is an indication that a body of air may be close to the dew point.
Relative humidity is normally expressed as a percentage; a higher percentage means that the air–water mixture is more humid. At 100% relative humidity, the air is saturated and is at its dew point. In the absence of a foreign body on which droplets or crystals can nucleate , the relative humidity can exceed 100%, in which case the air is said to be supersaturated . Introduction of some particles or a surface to a body of air above 100% relative humidity will allow condensation or ice to form on those nuclei, thereby removing some of the vapour and lowering the humidity.
Formally, the relative humidity ($RH$ or $\phi$) of an air–water mixture is defined as the ratio of the partial pressure of water vapor ($p$) in air to the saturation vapor pressure ($p_s$) of water at the same temperature, usually expressed as a percentage: [ 12 ] [ 13 ] [ 5 ]

$\phi = 100\% \cdot p/p_s$
Relative humidity is an important metric used in weather forecasts and reports, as it is an indicator of the likelihood of precipitation , dew, or fog. In hot summer weather, a rise in relative humidity increases the apparent temperature to humans (and other animals) by hindering the evaporation of perspiration from the skin. For example, according to the heat index , a relative humidity of 75% at air temperature of 80.0 °F (26.7 °C) would feel like 83.6 ± 1.3 °F (28.7 ± 0.7 °C). [ 14 ] [ 15 ]
Because wood changes shape with changes in humidity, relative humidity is used to evaluate moisture content and size changes in wood, such as making allowances for seasonal movement in wood floors.
Specific humidity (or moisture content) is the ratio of the mass of water vapor to the total mass of the air parcel. [ 16 ] Specific humidity is approximately equal to the mixing ratio , which is defined as the ratio of the mass of water vapor in an air parcel to the mass of dry air for the same parcel. It is typically represented with the symbol ω, and is commonly used in HVAC system design. [ citation needed ]
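The mixing ratio used in HVAC work can be sketched from partial pressures with the standard psychrometric relation ω ≈ 0.622·e/(P − e), where 0.622 is the ratio of the molar masses of water and dry air (18.015/28.964). The function name and the example pressures below are assumptions of this illustration, not values from the article:

```python
def humidity_ratio(vapor_pressure: float, total_pressure: float) -> float:
    """Mass of water vapor per mass of dry air (kg/kg).

    0.622 = M_water / M_dry_air = 18.015 / 28.964.
    """
    return 0.622 * vapor_pressure / (total_pressure - vapor_pressure)

# Air at sea level (101325 Pa) with a vapor partial pressure of 1700 Pa
# carries roughly 0.0106 kg of water per kg of dry air.
print(round(humidity_ratio(1700.0, 101325.0), 4))
```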
The term relative humidity is reserved for systems of water vapor in air. The term relative saturation is used to describe the analogous property for systems consisting of a condensable phase other than water in a non-condensable phase other than air. [ 17 ]
A device used to measure humidity of air is called a psychrometer or hygrometer . A humidistat is a humidity-triggered switch, often used to control a humidifier or a dehumidifier .
The humidity of an air and water vapor mixture is determined through the use of psychrometric charts if both the dry bulb temperature ( T ) and the wet bulb temperature ( T w ) of the mixture are known. These quantities are readily estimated by using a sling psychrometer .
There are several empirical formulas that can be used to estimate the equilibrium vapor pressure of water vapor as a function of temperature. The Antoine equation is among the least complex of these, having only three parameters ( A , B , and C ). Other formulas, such as the Goff–Gratch equation and the Magnus–Tetens approximation , are more complicated but yield better accuracy. [ citation needed ]
The Arden Buck equation is commonly encountered in the literature regarding this topic: [ 18 ]

$e_w^* = \left(1.0007 + 3.46\times10^{-6}\,P\right) \times 6.1121\, e^{\,17.502\,T/(240.97+T)},$

where $T$ is the dry-bulb temperature expressed in degrees Celsius (°C), $P$ is the absolute pressure expressed in millibars, and $e_w^*$ is the equilibrium vapor pressure expressed in millibars. Buck has reported that the maximal relative error is less than 0.20% between −20 and +50 °C (−4 and 122 °F) when this particular form of the generalized formula is used to estimate the equilibrium vapor pressure of water.
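The Buck formula above translates directly into code. This sketch uses assumed function names; the relative-humidity helper simply applies the ratio definition given earlier in this section:

```python
import math

def buck_saturation_vp(temp_c: float, pressure_mb: float = 1013.25) -> float:
    """Equilibrium (saturation) vapor pressure over water, in millibars,
    using the Arden Buck form quoted above."""
    enhancement = 1.0007 + 3.46e-6 * pressure_mb
    return enhancement * 6.1121 * math.exp(17.502 * temp_c / (240.97 + temp_c))

def relative_humidity(partial_vp_mb: float, temp_c: float) -> float:
    """Relative humidity (%) from the actual vapor partial pressure."""
    return 100.0 * partial_vp_mb / buck_saturation_vp(temp_c)

# Saturation vapor pressure at 20 degC is about 23.5 mb:
print(round(buck_saturation_vp(20.0), 1))
```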
There are various devices used to measure and regulate humidity. Calibration standards for the most accurate measurement include the gravimetric hygrometer, chilled mirror hygrometer , and electrolytic hygrometer. The gravimetric method, while the most accurate, is very cumbersome. For fast and very accurate measurement the chilled mirror method is effective. [ 19 ] For process on-line measurements, the most commonly used sensors nowadays are based on capacitance measurements to measure relative humidity, [ 20 ] frequently with internal conversions to display absolute humidity as well. These are cheap, simple, generally accurate and relatively robust. All humidity sensors face problems in measuring dust-laden gas, such as exhaust streams from clothes dryers.
Humidity is also measured on a global scale using remotely placed satellites. These satellites are able to detect the concentration of water in the troposphere at altitudes between 4 and 12 km (2.5 and 7.5 mi). Satellites that can measure water vapor have sensors that are sensitive to infrared radiation . Water vapor specifically absorbs and re-radiates radiation in this spectral band. Satellite water vapor imagery plays an important role in monitoring climate conditions (like the formation of thunderstorms) and in the development of weather forecasts .
Humidity depends on water vaporization and condensation, which in turn mainly depend on temperature. Therefore, when more pressure is applied to a gas saturated with water, all components initially decrease in volume approximately according to the ideal gas law . However, some of the water will condense until the mixture returns to almost the same humidity as before, so the resulting total volume deviates from what the ideal gas law predicts.
Conversely, decreasing the temperature would also make some water condense, again making the final volume deviate from that predicted by the ideal gas law. Therefore, gas volume may alternatively be expressed as the dry volume, excluding the humidity content; this fraction more accurately follows the ideal gas law. By contrast, the saturated volume is the volume a gas mixture would have if humidity were added to it until saturation (100% relative humidity).
Humid air is less dense than dry air because a molecule of water ( m ≈ 18 Da ) is less massive than either a molecule of nitrogen ( m ≈ 28 ) or a molecule of oxygen ( m ≈ 32 ). About 78% of the molecules in dry air are nitrogen (N 2 ). Another 21% of the molecules in dry air are oxygen (O 2 ). The final 1% of dry air is a mixture of other gases.
For any gas, at a given temperature and pressure, the number of molecules present in a particular volume is constant. Therefore, when some number N of water molecules (vapor) is introduced into a volume of dry air, the number of air molecules in that volume must decrease by the same number N for the pressure to remain constant without a change in temperature. The numbers are exactly equal if we consider the gases as ideal . The addition of water molecules, or any other molecules, to a gas, without removal of an equal number of other molecules, will necessarily require a change in temperature, pressure, or total volume; that is, a change in at least one of these three parameters.
If temperature and pressure remain constant, the volume increases, and the dry air molecules that were displaced will initially move out into the additional volume, after which the mixture will eventually become uniform through diffusion. Hence the mass per unit volume of the gas—its density—decreases. Isaac Newton discovered this phenomenon and wrote about it in his book Opticks . [ 21 ]
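The density argument above can be checked numerically by treating dry air and water vapor as ideal gases that each contribute their partial pressure. The names and the example vapor pressure (~3169 Pa near saturation at 25 °C) are assumptions of this sketch:

```python
R = 8.314462      # universal gas constant, J/(mol*K)
M_DRY = 0.028964  # molar mass of dry air, kg/mol
M_H2O = 0.018015  # molar mass of water, kg/mol

def air_density(temp_c: float, pressure_pa: float, vapor_pressure_pa: float) -> float:
    """Density (kg/m^3) of moist air: dry air and water vapor are treated as
    ideal gases, each contributing its partial pressure."""
    t = temp_c + 273.15
    p_dry = pressure_pa - vapor_pressure_pa
    return (p_dry * M_DRY + vapor_pressure_pa * M_H2O) / (R * t)

dry = air_density(25.0, 101325.0, 0.0)
humid = air_density(25.0, 101325.0, 3169.0)  # near saturation at 25 degC
print(round(dry, 4), round(humid, 4))
```

As expected, the humid parcel comes out roughly 1% lighter than the dry one at the same temperature and pressure.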
The relative humidity of an air–water system is dependent not only on the temperature but also on the absolute pressure of the system of interest. This dependence is demonstrated by considering the air–water system shown below. The system is closed (i.e., no matter enters or leaves the system).
If the system at State A is isobarically heated (heating with no change in system pressure), then the relative humidity of the system decreases because the equilibrium vapor pressure of water increases with increasing temperature. This is shown in State B.
If the system at State A is isothermally compressed (compressed with no change in system temperature), then the relative humidity of the system increases because the partial pressure of water in the system increases with the volume reduction. This is shown in State C. Above 202.64 kPa, the RH would exceed 100% and water may begin to condense.
If the pressure of State A was changed by simply adding more dry air, without changing the volume, the relative humidity would not change.
Therefore, a change in relative humidity can be explained by a change in system temperature, a change in the volume of the system, or change in both of these system properties.
The enhancement factor ($f_w$) is defined as the ratio of the saturated vapor pressure of water in moist air ($e'_w$) to the saturated vapor pressure of pure water ($e_w^*$):

$f_w = \dfrac{e'_w}{e_w^*}.$
The enhancement factor is equal to unity for ideal gas systems. However, in real systems the interaction effects between gas molecules result in a small increase of the equilibrium vapor pressure of water in air relative to equilibrium vapor pressure of pure water vapor. Therefore, the enhancement factor is normally slightly greater than unity for real systems.
The enhancement factor is commonly used to correct the equilibrium vapor pressure of water vapor when empirical relationships, such as those developed by Wexler, Goff, and Gratch, are used to estimate the properties of psychrometric systems.
Buck has reported that, at sea level, the vapor pressure of water in saturated moist air amounts to an increase of approximately 0.5% over the equilibrium vapor pressure of pure water. [ 18 ]
Climate control refers to the control of temperature and relative humidity in buildings, vehicles and other enclosed spaces for the purpose of providing for human comfort, health and safety, and of meeting environmental requirements of machines, sensitive materials (for example, historic) and technical processes.
While humidity itself is a climate variable, it also affects other climate variables. Environmental humidity is affected by winds and by rainfall.
The most humid cities on Earth are generally located closer to the equator, near coastal regions. Cities in parts of Asia and Oceania are among the most humid. Bangkok , Ho Chi Minh City , Kuala Lumpur , Hong Kong , Manila , Jakarta , Naha , Singapore , Kaohsiung and Taipei have very high humidity most or all year round because of their proximity to water bodies and the equator and often overcast weather.
Some places experience extreme humidity during their rainy seasons combined with warmth giving the feel of a lukewarm sauna, such as Kolkata , Chennai and Kochi in India, and Lahore in Pakistan. Sukkur city located on the Indus River in Pakistan has some of the highest and most uncomfortable dew points in the country, frequently exceeding 30 °C (86 °F) in the monsoon season. [ 22 ]
High temperatures combine with the high dew point to create heat indices in excess of 65 °C (149 °F). Darwin experiences an extremely humid wet season from December to April. Houston , Miami , San Diego , Osaka , Shanghai , Shenzhen and Tokyo also have an extremely humid period in their summer months. During the South-west and North-east Monsoon seasons (respectively, late May to September and November to March), heavy rains and relatively high post-rainfall humidity can be expected.
Outside the monsoon seasons, humidity is high (in comparison with countries further from the Equator), but completely sunny days abound. In cooler places such as northern Tasmania, Australia, high humidity is experienced all year due to the ocean between mainland Australia and Tasmania. In summer, hot, dry air is moderated by this ocean, and the temperature rarely climbs above 35 °C (95 °F).
Humidity affects the energy budget and thereby influences temperatures in two major ways. First, water vapor in the atmosphere contains "latent" energy. During transpiration or evaporation, this latent heat is removed from surface liquid, cooling the Earth's surface. This is the biggest non-radiative cooling effect at the surface. It compensates for roughly 70% of the average net radiative warming at the surface.
Second, water vapor is the most abundant of all greenhouse gases . Water vapor, like a green lens that allows green light to pass through it but absorbs red light, is a "selective absorber". Like the other greenhouse gases, water vapor is transparent to most solar energy. However, it absorbs the infrared energy emitted (radiated) upward by the Earth's surface, which is the reason that humid areas experience very little nocturnal cooling but dry desert regions cool considerably at night. This selective absorption causes the greenhouse effect. It raises the surface temperature substantially above its theoretical radiative equilibrium temperature with the sun, and water vapor is the cause of more of this warming than any other greenhouse gas.
Unlike most other greenhouse gases, however, water is not merely below its boiling point in all regions of the Earth, but below its freezing point at many altitudes. As a condensible greenhouse gas, it precipitates , with a much lower scale height and shorter atmospheric lifetime — weeks instead of decades. Without other greenhouse gases, Earth's blackbody temperature , below the freezing point of water, would cause water vapor to be removed from the atmosphere. [ 23 ] [ 24 ] [ 25 ] Water vapor is thus a "slave" to the non-condensible greenhouse gases. [ 26 ] [ 27 ] [ 28 ]
Humidity is one of the fundamental abiotic factors that defines any habitat (the tundra, wetlands, and the desert are a few examples), and is a determinant of which animals and plants can thrive in a given environment. [ 29 ]
The human body dissipates heat through perspiration and its evaporation. Heat convection , to the surrounding air, and thermal radiation are the primary modes of heat transport from the body. Under conditions of high humidity, the rate of evaporation of sweat from the skin decreases. Also, if the atmosphere is as warm or warmer than the skin during times of high humidity, blood brought to the body surface cannot dissipate heat by conduction to the air. With so much blood going to the external surface of the body, less goes to the active muscles, the brain, and other internal organs. Physical strength declines, and fatigue occurs sooner than it would otherwise. Alertness and mental capacity also may be affected, resulting in heat stroke or hyperthermia .
Domesticated plants and animals (e.g., lizards) kept in homes or in containers require the humidity to be maintained within a suitable range in order to thrive.
Although humidity is an important factor for thermal comfort, humans are more sensitive to variations in temperature than they are to changes in relative humidity. [ 30 ] Humidity has a small effect on thermal comfort outdoors when air temperatures are low, a slightly more pronounced effect at moderate air temperatures, and a much stronger influence at higher air temperatures. [ 31 ]
Humans are sensitive to humid air because the human body uses evaporative cooling as the primary mechanism to regulate temperature. Under humid conditions, the rate at which perspiration evaporates on the skin is lower than it would be under arid conditions. Because humans perceive the rate of heat transfer from the body rather than temperature itself, we feel warmer when the relative humidity is high than when it is low.
Humans can be comfortable within a wide range of humidities depending on the temperature—from 30% to 70% [ 32 ]—but ideally between 40% [ 34 ] and 60%, [ 35 ] and not above a dew point of 60 °F (16 °C). [ 33 ] In general, higher temperatures will require lower humidities to achieve thermal comfort compared to lower temperatures, with all other factors held constant. For example, with clothing level = 1, metabolic rate = 1.1, and air speed 0.1 m/s, a change in air temperature and mean radiant temperature from 20 °C to 24 °C would lower the maximum acceptable relative humidity from 100% to 65% to maintain thermal comfort conditions. The CBE Thermal Comfort Tool can be used to demonstrate the effect of relative humidity for specific thermal comfort conditions and it can be used to demonstrate compliance with ASHRAE Standard 55–2017. [ 36 ]
Some people experience difficulty breathing in humid environments. Some cases may possibly be related to respiratory conditions such as asthma, while others may be the product of anxiety. Affected people will often hyperventilate in response, causing sensations of numbness, faintness, and loss of concentration , among others. [ 37 ]
Very low humidity can create discomfort, respiratory problems, and aggravate allergies in some individuals. Low humidity causes the tissue lining the nasal passages to dry and crack, becoming more susceptible to penetration by rhinovirus cold viruses. [ 38 ] Extremely low (below 20%) relative humidities may also cause eye irritation. [ 39 ] [ 40 ] The use of a humidifier in homes, especially bedrooms, can help with these symptoms. [ 41 ] Indoor relative humidities kept above 30% reduce the likelihood of the occupant's nasal passages drying out, especially in winter. [ 39 ] [ 42 ] [ 43 ]
Air conditioning reduces discomfort by reducing not just temperature but humidity as well. Heating cold outdoor air can decrease relative humidity levels indoors to below 30%. [ 44 ] According to ASHRAE Standard 55-2017: Thermal Environmental Conditions for Human Occupancy , indoor thermal comfort can be achieved through the PMV method with relative humidities ranging from 0% to 100%, depending on the levels of the other factors contributing to thermal comfort. [ 45 ] However, the recommended range of indoor relative humidity in air conditioned buildings is generally 30–60%. [ 46 ] [ 47 ]
Higher humidity reduces the infectivity of aerosolized influenza virus. A study concluded, "Maintaining indoor relative humidity >40% will significantly reduce the infectivity of aerosolized virus." [ 48 ]
Excess moisture in buildings exposes occupants to fungal spores, cell fragments, or mycotoxins . [ 49 ] Infants in homes with mold have a much greater risk of developing asthma and allergic rhinitis . [ 49 ] More than half of adult workers in moldy/humid buildings develop nasal or sinus symptoms due to mold exposure. [ 49 ]
Mucociliary clearance in the respiratory tract is also hindered by low humidity. One study in dogs found that mucus transport was lower at an absolute humidity of 9 g/m 3 than at 30 g/m 3 . [ 50 ]
Increased humidity can also lead to changes in total body water that usually leads to moderate weight gain, especially if one is acclimated to working or exercising in hot and humid weather. [ 51 ]
Common construction methods often produce building enclosures with a poor thermal boundary, requiring an insulation and air barrier system designed to retain indoor environmental conditions while resisting external environmental conditions. [ 52 ] The energy-efficient, heavily sealed architecture introduced in the 20th century also sealed off the movement of moisture, and this has resulted in a secondary problem of condensation forming in and around walls, which encourages the development of mold and mildew. Additionally, buildings with foundations not properly sealed will allow water to flow through the walls due to capillary action of pores found in masonry products. Solutions for energy-efficient buildings that avoid condensation are a current topic of architecture.
For climate control in buildings using HVAC systems, the key is to maintain the relative humidity at a comfortable range—low enough to be comfortable but high enough to avoid problems associated with very dry air.
When the temperature is high and the relative humidity is low, evaporation of water is rapid; soil dries, wet clothes hung on a line or rack dry quickly, and perspiration readily evaporates from the skin. Wooden furniture can shrink, causing the paint that covers these surfaces to fracture.
When the temperature is low and the relative humidity is high, evaporation of water is slow. When relative humidity approaches 100%, condensation can occur on surfaces, leading to problems with mold, corrosion, decay, and other moisture-related deterioration. Condensation can pose a safety risk as it can promote the growth of mold and wood rot as well as possibly freezing emergency exits shut.
Certain production and technical processes and treatments in factories, laboratories, hospitals, and other facilities require specific relative humidity levels to be maintained using humidifiers, dehumidifiers and associated control systems.
The basic principles for buildings, above, also apply to vehicles. In addition, there may be safety considerations. For instance, high humidity inside a vehicle can lead to problems of condensation, such as misting of windshields and shorting of electrical components. In vehicles and pressure vessels such as pressurized airliners, submersibles and spacecraft, these considerations may be critical to safety, and complex environmental control systems including equipment to maintain pressure are needed.
Airliners operate with low internal relative humidity, often under 20%, [ 53 ] especially on long flights. The low humidity is a consequence of drawing in the very cold air with a low absolute humidity, which is found at airliner cruising altitudes. Subsequent warming of this air lowers its relative humidity. This causes discomfort such as sore eyes, dry skin, and drying out of mucosa, but humidifiers are not employed to raise it to comfortable mid-range levels because the volume of water required to be carried on board can be a significant weight penalty. As airliners descend from colder altitudes into warmer air, perhaps even flying through clouds a few thousand feet above the ground, the ambient relative humidity can increase dramatically.
Some of this moist air is usually drawn into the pressurized aircraft cabin and into other non-pressurized areas of the aircraft and condenses on the cold aircraft skin. Liquid water can usually be seen running along the aircraft skin, both on the inside and outside of the cabin. Because of the drastic changes in relative humidity inside the vehicle, components must be qualified to operate in those environments. The recommended environmental qualifications for most commercial aircraft components are listed in RTCA DO-160 .
Cold, humid air can promote the formation of ice, which is a danger to aircraft as it affects the wing profile and increases weight. Naturally aspirated internal combustion engines have a further danger of ice forming inside the carburetor . Aviation weather reports ( METARs ) therefore include an indication of relative humidity, usually in the form of the dew point .
Pilots must take humidity into account when calculating takeoff distances, because high humidity requires longer runways and will decrease climb performance.
Density altitude is the altitude relative to the standard atmosphere conditions (International Standard Atmosphere) at which the air density would be equal to the indicated air density at the place of observation, or, in other words, the height when measured in terms of the density of the air rather than the distance from the ground. "Density Altitude" is the pressure altitude adjusted for non-standard temperature.
An increase in temperature, and, to a much lesser degree, humidity, will cause an increase in density altitude. Thus, in hot and humid conditions, the density altitude at a particular location may be significantly higher than the true altitude.
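The effect of temperature on density altitude can be sketched with a common aviation rule of thumb (not taken from this article): ISA temperature falls about 2 °C per 1000 ft from 15 °C at sea level, and each degree above ISA adds roughly 118.8 ft of density altitude. Humidity's smaller contribution is ignored here, and the names and factors are assumptions of this sketch:

```python
def density_altitude_ft(pressure_altitude_ft: float, oat_c: float) -> float:
    """Rule-of-thumb density altitude (ft), ignoring humidity.

    ISA temperature drops ~2 degC per 1000 ft from 15 degC at sea level;
    each degree above ISA adds ~118.8 ft of density altitude.
    """
    isa_temp_c = 15.0 - 2.0 * pressure_altitude_ft / 1000.0
    return pressure_altitude_ft + 118.8 * (oat_c - isa_temp_c)

# A field at 5000 ft pressure altitude on a 30 degC day "feels" like ~8000 ft.
print(round(density_altitude_ft(5000.0, 30.0)))
```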
Electronic devices are often rated to operate only under certain humidity conditions (e.g., 10% to 90%). The optimal humidity for electronic devices is 30% to 65%. At the top end of the range, moisture may increase the conductivity of permeable insulators, leading to malfunction. Too low humidity may make materials brittle. A particular danger to electronic items, regardless of the stated operating humidity range, is condensation . When an electronic item is moved from a cold place (e.g., garage, car, shed, air conditioned space in the tropics) to a warm humid place (house, outside tropics), condensation may coat circuit boards and other insulators, leading to short circuits inside the equipment. Such short circuits may cause substantial permanent damage if the equipment is powered on before the condensation has evaporated . A similar condensation effect can often be observed when a person wearing glasses comes in from the cold (i.e. the glasses become foggy). [ 54 ]
It is advisable to allow electronic equipment to acclimatise for several hours, after being brought in from the cold, before powering on. Some electronic devices can detect such a change and indicate, when plugged in and usually with a small droplet symbol, that they cannot be used until the risk from condensation has passed. In situations where time is critical, increasing air flow through the device's internals, such as removing the side panel from a PC case and directing a fan to blow into the case, will significantly reduce the time needed to acclimatise to the new environment.
In contrast, a very low humidity level favors the build-up of static electricity , which may result in spontaneous shutdown of computers when discharges occur. Apart from spurious erratic function, electrostatic discharges can cause dielectric breakdown in solid-state devices , resulting in irreversible damage. Data centers often monitor relative humidity levels for these reasons.
High humidity can often have a negative effect on the capacity of chemical plants and refineries that use furnaces as part of certain processes (e.g., steam reforming , wet sulfuric acid processes). For example, because humidity reduces ambient oxygen concentrations (dry air is typically 20.9% oxygen, but at 100% relative humidity the air is 20.4% oxygen), flue gas fans must intake air at a higher rate than would otherwise be required to maintain the same firing rate. [ 55 ]
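The oxygen-dilution figures quoted above can be reproduced by noting that the mole fraction of water vapor is its partial pressure divided by the total pressure. The function name and the ~2339 Pa saturation pressure at 20 °C are assumptions of this sketch:

```python
def oxygen_fraction(rel_humidity_pct: float, sat_vp_pa: float,
                    total_pressure_pa: float = 101325.0) -> float:
    """Percent O2 by mole after water vapor displaces part of the dry air.

    Dry air is taken as 20.9% O2; the water vapor mole fraction is e / P.
    """
    water_fraction = (rel_humidity_pct / 100.0) * sat_vp_pa / total_pressure_pa
    return 20.9 * (1.0 - water_fraction)

# At 20 degC (saturation vapor pressure ~2339 Pa), saturated air holds
# roughly 20.4% oxygen instead of 20.9%.
print(round(oxygen_fraction(100.0, 2339.0), 1))
```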
High humidity in the oven, represented by an elevated wet-bulb temperature , increases the thermal conductivity of the air around the baked item, leading to a quicker baking process or even burning. Conversely, low humidity slows the baking process down. [ 56 ]
At 100% relative humidity, air is saturated and at its dew point : the water vapor pressure would permit neither evaporation of nearby liquid water nor condensation to grow the nearby water; neither sublimation of nearby ice nor deposition to grow the nearby ice.
Relative humidity can exceed 100%, in which case the air is supersaturated . Cloud formation requires supersaturated air. Cloud condensation nuclei lower the level of supersaturation required to form fogs and clouds – in the absence of nuclei around which droplets or ice can form, a higher level of supersaturation is required for these droplets or ice crystals to form spontaneously. In the Wilson cloud chamber , which is used in nuclear physics experiments, a state of supersaturation is created within the chamber, and moving subatomic particles act as condensation nuclei so trails of fog show the paths of those particles.
For a given dew point and its corresponding absolute humidity , the relative humidity will change inversely, albeit nonlinearly, with the temperature. This is because the vapor pressure of water increases with temperature—the operative principle behind everything from hair dryers to dehumidifiers .
Due to the increasing potential for a higher water vapor partial pressure at higher air temperatures, the water content of air at sea level can get as high as 3% by mass at 30 °C (86 °F) compared to no more than about 0.5% by mass at 0 °C (32 °F). This explains the low levels (in the absence of measures to add moisture) of humidity in heated structures during winter, resulting in dry skin, itchy eyes, and persistence of static electric charges. Even with saturation (100% relative humidity) outdoors, heating of infiltrated outside air that comes indoors raises its moisture capacity, which lowers relative humidity and increases evaporation rates from moist surfaces indoors, including human bodies and household plants.
Similarly, during summer in humid climates a great deal of liquid water condenses from air cooled in air conditioners. Warmer air is cooled below its dew point, and the excess water vapor condenses. This phenomenon is the same as that which causes water droplets to form on the outside of a cup containing an ice-cold drink.
A useful rule of thumb is that the maximum absolute humidity doubles for every 20 °F (11 °C) increase in temperature. Thus, the relative humidity will drop by a factor of 2 for each 20 °F (11 °C) increase in temperature, assuming conservation of absolute moisture. For example, in the range of normal temperatures, air at 68 °F (20 °C) and 50% relative humidity will become saturated if cooled to 50 °F (10 °C), its dew point, and 41 °F (5 °C) air at 80% relative humidity warmed to 68 °F (20 °C) will have a relative humidity of only 29% and feel dry. By comparison, thermal comfort standard ASHRAE 55 requires systems designed to control humidity to maintain a dew point of 16.8 °C (62.2 °F) though no lower humidity limit is established. [ 45 ]
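The doubling rule of thumb can be checked numerically against a Buck-type saturation vapor pressure formula (quoted in the measurement section above). This is an illustrative sketch with assumed names; the constant 1.0042 folds in the sea-level enhancement factor:

```python
import math

def sat_vp(temp_c: float) -> float:
    """Saturation vapor pressure in millibars (Arden Buck form, sea level)."""
    return 1.0042 * 6.1121 * math.exp(17.502 * temp_c / (240.97 + temp_c))

# Doubling rule: saturation vapor pressure roughly doubles per +11 degC.
ratio = sat_vp(31.0) / sat_vp(20.0)
print(round(ratio, 2))  # close to 2

# Air at 20 degC and 50% RH: cooling it to ~10 degC brings it near saturation,
# consistent with the dew-point example above.
partial = 0.5 * sat_vp(20.0)
rh_at_10 = 100.0 * partial / sat_vp(10.0)
print(round(rh_at_10))  # near 100
```

The rule is only approximate, so the computed ratio lands slightly below 2 and the cooled parcel slightly below full saturation.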
Water vapor is a lighter gas than other gaseous components of air at the same temperature, so humid air will tend to rise by natural convection . This is a mechanism behind thunderstorms and other weather phenomena. Relative humidity is often mentioned in weather forecasts and reports, as it is an indicator of the likelihood of dew, or fog. In hot summer weather, it also increases the apparent temperature to humans (and other animals) by hindering the evaporation of perspiration from the skin as the relative humidity rises. This effect is calculated as the heat index or humidex .
A device used to measure humidity is called a hygrometer ; one used to regulate it is called a humidistat , or sometimes hygrostat . These are analogous to a thermometer and thermostat for temperature, respectively.
The field concerned with the study of physical and thermodynamic properties of gas–vapor mixtures is named psychrometrics . | https://en.wikipedia.org/wiki/Humidity |
Hummers' method is a chemical process used to generate graphite oxide by adding potassium permanganate to a solution of graphite , sodium nitrate , and sulfuric acid . It is commonly used by engineers and lab technicians as a reliable method of producing quantities of graphite oxide. It can also be used to create a one-atom-thick version of the substance, known as graphene oxide .
Graphite oxide is a compound of carbon , oxygen , and hydrogen with a carbon-to-oxygen ratio between 2.1 and 2.9. Graphite oxide is typically a yellowish solid. It is known as graphene oxide when used to form unimolecular sheets.
Hummers' method [ 1 ] was developed in 1958 as a safer, faster, and more efficient way of producing graphite oxide. Before the method was developed, production of graphite oxide was slow and hazardous because of the use of concentrated sulfuric and nitric acid. The Staudenmaier–Hoffman–Hamdi method [ 2 ] introduced the addition of potassium chlorate; however, this method posed more hazards and yielded only one gram of graphite oxide for every ten grams of potassium chlorate used. [ 3 ]
William S. Hummers and Richard E. Offeman created their method as an alternative to the above methods after noting the hazards they posed to workers at the National Lead Company . Their approach was similar in that it involved adding graphite to a solution of concentrated acid, but they simplified the reagents to just graphite, concentrated sulfuric acid, sodium nitrate, and potassium permanganate. They also did not need to use temperatures above 98 °C, and they avoided most of the explosion risk of the Staudenmaier–Hoffman–Hamdi method.
The procedure starts with 100 g of graphite and 50 g of sodium nitrate in 2.3 liters of sulfuric acid at 66 °C, which is then cooled to 0 °C. 300 g of potassium permanganate is then added to the solution and stirred. Water is then added in increments until the solution volume is approximately 32 liters.
The final solution contains about 0.5% solids, which are then cleaned of impurities and dehydrated with phosphorus pentoxide .
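Since the quantities above are given for a single 100 g batch, they can be expressed as a simple proportional calculator. Linear scaling is an illustrative simplification (the function name and structure here are not from the original paper, and real scale-up of an exothermic oxidation requires heat-management and safety review):

```python
def hummers_recipe(graphite_g):
    """Scale the reagent quantities of the 100 g Hummers procedure linearly.

    Illustrative only: linear scaling ignores the heat-transfer and safety
    constraints that dominate real batch scale-up.
    """
    f = graphite_g / 100.0
    return {
        "graphite_g": graphite_g,
        "sodium_nitrate_g": 50.0 * f,
        "sulfuric_acid_L": 2.3 * f,
        "potassium_permanganate_g": 300.0 * f,
        "approx_final_volume_L": 32.0 * f,
    }

print(hummers_recipe(10))  # reagent list for a hypothetical 10 g graphite batch
```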
The basic chemical reaction involved in Hummers' method is the oxidation of graphite, introducing oxygen into the pure-carbon graphene layers. The reaction occurs between the graphite and the concentrated sulfuric acid, with the potassium permanganate serving as the oxidizing agent and the sodium nitrate supplying nitric acid in the mixture. The process is capable of yielding approximately 188 g of graphite oxide per 100 g of graphite used. The carbon-to-oxygen ratio of the product falls within the 2.1–2.9 range characteristic of graphite oxide. The contaminants are determined to be mostly ash and water. Toxic gases such as dinitrogen tetroxide and nitrogen dioxide are evolved in the process. The final product is typically 47.06% carbon, 27.97% oxygen, 22.99% water, and 1.98% ash, with a carbon-to-oxygen ratio of 2.25. All of these results are comparable to those of the methods that preceded it.
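The reported elemental composition and the stated carbon-to-oxygen ratio can be cross-checked with a short calculation (the molar masses are standard values; the percentages are those quoted above):

```python
C_MOLAR, O_MOLAR = 12.011, 15.999       # standard atomic masses, g/mol
carbon_pct, oxygen_pct = 47.06, 27.97   # reported mass percentages of the product

# Convert mass percentages to mole quantities per 100 g of product
c_moles = carbon_pct / C_MOLAR
o_moles = oxygen_pct / O_MOLAR
ratio = c_moles / o_moles
print(round(ratio, 2))  # -> 2.24, consistent with the reported C/O ratio of 2.25
```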
The method has been taken up by many researchers and chemists interested in using graphite oxide for other purposes, because it is the fastest [ 4 ] conventional method of producing graphite oxide while maintaining a relatively high C/O ratio. When researchers and chemists need to produce large quantities of graphite oxide under time limitations, Hummers' method is usually the one referenced in some form.
Graphite oxide captured the attention of the scientific community after the discovery of graphene in 2004. Many teams are looking into ways of using graphite oxide as a shortcut to mass production of graphene. So far, the materials produced by these methods have been shown to have more defects than graphene produced directly from graphite. Hummers' method remains a key point of interest because it is an easy method of producing large quantities of graphite oxide.
Other groups have focused on making improvements to the Hummers' method to make it more efficient and environmentally friendly. One such improvement eliminates the use of NaNO 3 from the process. [ 5 ] [ 6 ] The addition of persulfate (S 2 O 8 2− ) ensures the complete oxidation and exfoliation of graphite, yielding suspensions of individual graphite oxide sheets. Eliminating nitrate is also advantageous because it stops the production of gases such as nitrogen dioxide and dinitrogen tetroxide.
Besides graphene, Hummers' method has become a point of interest in photocatalysis . [ 7 ] After discovering that graphite oxide is reactive to many of the wavelengths of light found within sunlight, teams have been investigating ways to use it to accelerate the decomposition of water and organic matter. The most common method for producing the graphite oxide in these experiments has been Hummers' method.
Humoral immunity is the aspect of immunity that is mediated by macromolecules – including secreted antibodies , complement proteins , and certain antimicrobial peptides – located in extracellular fluids . Humoral immunity is named so because it involves substances found in the humors , or body fluids . It contrasts with cell-mediated immunity . Humoral immunity is also referred to as antibody-mediated immunity .
The study of the molecular and cellular components that form the immune system , including their function and interaction, is the central science of immunology . The immune system is divided into a more primitive innate immune system and an acquired or adaptive immune system of vertebrates , each of which contain both humoral and cellular immune elements.
Humoral immunity refers to antibody production and the coinciding processes that accompany it, including: Th2 activation and cytokine production, germinal center formation and isotype switching, and affinity maturation and memory cell generation. It also refers to the effector functions of antibodies, which include pathogen and toxin neutralization, classical complement activation, and opsonin promotion of phagocytosis and pathogen elimination. [ 1 ]
The concept of humoral immunity developed based on the analysis of antibacterial activity of the serum components. Hans Buchner is credited with the development of the humoral theory. [ 2 ] In 1890, Buchner described alexins as "protective substances" that exist in the blood serum and other bodily fluids and are capable of killing microorganisms . Alexins, later redefined as "complements" by Paul Ehrlich , were shown to be the soluble components of the innate response that leads to a combination of cellular and humoral immunity. This discovery helped to bridge the features of innate and acquired immunity . [ 2 ]
Following the 1888 discovery of the bacteria that cause diphtheria and tetanus , Emil von Behring and Kitasato Shibasaburō showed that disease need not be caused by microorganisms themselves. They discovered that cell-free filtrates were sufficient to cause disease. In 1890, filtrates of diphtheria, later named diphtheria toxins , were used to vaccinate animals in an attempt to demonstrate that immunized serum contained an antitoxin that could neutralize the activity of the toxin and could transfer immunity to non-immune animals. [ 3 ] In 1897, Paul Ehrlich showed that antibodies form against the plant toxins ricin and abrin , and proposed that these antibodies are responsible for immunity. [ 2 ] Ehrlich, with his colleague von Behring, went on to develop the diphtheria antitoxin , which became the first major success of modern immunotherapy . [ 3 ] The discovery of specified compatible antibodies became a major tool in the standardization of immunity and the identification of lingering infections . [ 3 ]
Antibodies, or immunoglobulins, are glycoproteins found within blood and lymph . Structurally, antibodies are large Y-shaped globular proteins . In mammals, there are five classes of antibody: immunoglobulin A , immunoglobulin D , immunoglobulin E , immunoglobulin G , and immunoglobulin M . Each immunoglobulin class differs in its biological properties and has evolved to deal with different antigens. [ 5 ] Antibodies are synthesized and secreted by plasma cells that are derived from the B cells of the immune system.
An antibody is used by the acquired immune system to identify and neutralize foreign objects like bacteria and viruses. Each antibody recognizes a specific antigen unique to its target. By binding their specific antigens, antibodies can cause agglutination and precipitation of antibody-antigen products, prime for phagocytosis by macrophages and other cells, block viral receptors, and stimulate other immune responses, such as the complement pathway .
An incompatible blood transfusion causes a transfusion reaction , which is mediated by the humoral immune response. This type of reaction, called an acute hemolytic reaction, results in the rapid destruction (hemolysis) of the donor red blood cells by host antibodies. The cause is usually a clerical error, such as the wrong unit of blood being given to the wrong patient. The symptoms are fever and chills, sometimes with back pain and pink or red urine ( hemoglobinuria ). The major complication is that hemoglobin released by the destruction of red blood cells can cause acute kidney failure .
In humoral immune response, the naive B cells begin the maturation process in the bone marrow, gaining B-cell receptors (BCRs) along the cell surface. [ 6 ] These BCRs are membrane-bound protein complexes that have a high binding affinity for specific antigens ; this specificity is derived from the amino acid sequence of the heavy and light polypeptide chains that constitute the variable region of the BCR. [ 7 ] Once a BCR interacts with an antigen, it creates a binding signal which directs the B cell to produce a unique antibody that only binds with that antigen . The mature B cells then migrate from the bone marrow to the lymph nodes or other lymphatic organs , where they begin to encounter pathogens.
When a B cell encounters an antigen, the antigen binds to the receptor, triggering an activating signal, and is taken inside the B cell by endocytosis . The antigen is processed and presented on the B cell's surface again by MHC-II proteins . The MHC-II proteins are recognized by helper T cells , which stimulate the production of proteins that allow the B cells to multiply and their descendants to differentiate into antibody-secreting cells circulating in the blood. [ 8 ] B cells can also be activated by certain microbial agents without the help of T cells, working directly with antigens to mount responses to the pathogens present. [ 8 ]
The B cell waits for a helper T cell (T H ) to bind to the complex. This binding will activate the T H cell, which then releases cytokines that induce B cells to divide rapidly, making thousands of identical clones of the B cell. These daughter cells either become plasma cells or memory cells . The memory B cells remain inactive here; later, when these memory B cells encounter the same antigen due to reinfection, they divide and form plasma cells. On the other hand, the plasma cells produce a large number of antibodies which are released freely into the circulatory system .
These antibodies will encounter antigens and bind with them. This will either interfere with the chemical interaction between host and foreign cells, or they may form bridges between their antigenic sites hindering their proper functioning. Their presence might also attract macrophages or killer cells to attack and phagocytose them.
The complement system is a biochemical cascade of the innate immune system that helps clear pathogens from an organism. It is derived from many small blood plasma proteins that work together to disrupt the target cell's plasma membrane leading to cytolysis of the cell. The complement system consists of more than 35 soluble and cell-bound proteins, 12 of which are directly involved in the complement pathways. [ 1 ] The complement system is involved in the activities of both innate immunity and acquired immunity.
Activation of this system leads to cytolysis, chemotaxis , opsonization , immune clearance, and inflammation , as well as the marking of pathogens for phagocytosis. The proteins account for 5% of the serum globulin fraction. Most of these proteins circulate as zymogens , which are inactive until proteolytic cleavage . [ 1 ]
Three biochemical pathways activate the complement system: the classical complement pathway , the alternative complement pathway , and the mannose-binding lectin pathway . [ 9 ] These pathways differ only in how they activate C3 convertase , [ 10 ] the initial step of complement activation; the subsequent processes are the same.
The classical pathway is initiated through exposure to free-floating antigen-bound antibodies. This leads to enzymatic cleavage of smaller complement subunits which synthesize to form the C3 convertase.
This differs from the mannose-binding lectin pathway, which is initiated by bacterial carbohydrate motifs, such as mannose, found on the surface of bacteria. After this binding, the same subunit cleavage and synthesis occurs as in the classical pathway. The alternative complement pathway diverges completely from the previous two, as it initiates spontaneously in the presence of hydrolyzed C3, which then recruits other subunits that can be cleaved to form C3 convertase. In all three pathways, once C3 convertase is synthesized, complement proteins are cleaved into subunits that either form a structure called the membrane attack complex (MAC) on the bacterial cell wall to destroy the bacteria [ 11 ] or act as cytokines and chemokines, amplifying the immune response.
Humorism , the humoral theory , or humoralism , was a system of medicine detailing a supposed makeup and workings of the human body, adopted by Ancient Greek and Roman physicians and philosophers .
Humorism began to fall out of favor in the 17th century, and it was definitively disproved in the 19th century when the discovery of microbes' role in disease established germ theory.
The concept of "humors" may have origins in Ancient Egyptian medicine , [ 1 ] or Mesopotamia , [ 2 ] though it was not systemized until ancient Greek thinkers. The word humor is a translation of Greek χυμός , [ 3 ] chymos (literally 'juice' or ' sap ', metaphorically 'flavor'). Early texts on Indian Ayurveda medicine presented a theory of three or four humors (doṣas), [ 4 ] [ 5 ] which they sometimes linked with the five elements ( pañca-bhūta ): earth, water, fire, air, and space. [ 6 ]
The concept of "humors" (chemical systems regulating human behaviour) became more prominent from the writing of medical theorist Alcmaeon of Croton (c. 540–500 BC). His list of humors was longer and included fundamental elements described by Empedocles , such as water, earth, fire, air, etc. Hippocrates is usually credited with applying this idea to medicine. In contrast to Alcmaeon, Hippocrates suggested that humors are the vital bodily fluids: blood , phlegm , yellow bile, and black bile. Alcmaeon and Hippocrates posited that an extreme excess or deficiency of any of the humors ( bodily fluid ) in a person can be a sign of illness. Hippocrates, and then Galen , suggested that a moderate imbalance in the mixture of these fluids produces behavioral patterns. [ 7 ] One of the treatises attributed to Hippocrates, On the Nature of Man , describes the theory as follows:
The human body contains blood, phlegm, yellow bile, and black bile. These are the things that make up its constitution and cause its pains and health. Health is primarily that state in which these constituent substances are in the correct proportion to each other, both in strength and quantity, and are well mixed. Pain occurs when one of the substances presents either a deficiency or an excess, or is separated in the body and not mixed with others. [ 8 ] The body depends heavily on the four humors because their balanced combination helps to keep people in good health; having the right amount of each humor is essential. The pathophysiology of disease was consequently attributed to excesses and/or deficiencies of the humors. [ 9 ]
The existence of fundamental biochemical substances and structural components in the body remains a point of kinship with Hippocratic beliefs, even though modern science has moved away from the four Hippocratic humors. [ 9 ]
Although the theory of the four humors does appear in some Hippocratic texts, other Hippocratic writers accepted the existence of only two humors, while some refrained from discussing the humoral theory at all. [ 10 ] Humoralism, or the doctrine of the four temperaments, as a medical theory retained its popularity for centuries, largely through the influence of the writings of Galen (129–201 AD). The four essential elements, the humors, that make up the human body are, according to Hippocrates, in harmony with one another and act as a catalyst for preserving health. [ 9 ] Hippocrates' theory of four humors was linked with the popular theory of the four elements (earth, fire, water, and air) proposed by Empedocles , but this link was not proposed by Hippocrates or Galen, who referred primarily to bodily fluids. While Galen thought that humors were formed in the body, rather than ingested, he believed that different foods had varying potential to act upon the body to produce different humors. Warm foods, for example, tended to produce yellow bile, while cold foods tended to produce phlegm. Seasons of the year, periods of life, geographic regions, and occupations also influenced the nature of the humors formed. As such, certain seasons and geographic areas were understood to cause imbalances in the humors, leading to varying types of disease across time and place. For example, cities exposed to hot winds were seen as having higher rates of digestive problems as a result of excess phlegm running down from the head, while cities exposed to cold winds were associated with diseases of the lungs, acute diseases, and "hardness of the bowels", as well as ophthalmies (issues of the eyes) and nosebleeds. Cities to the west, meanwhile, were believed to produce weak, unhealthy, pale people subject to all manner of disease. [ 11 ]
In the treatise On Airs, Waters, and Places , a Hippocratic physician is described arriving at an unnamed city, where they test various factors of nature, including the wind, water, and soil, to predict their direct influence on the diseases specific to the city based on the season and the individual. [ 12 ]
A fundamental idea of Hippocratic medicine was the endeavor to pinpoint the origins of illness both in the physiology of the human body and in the influence of potentially hazardous environmental variables such as air, water, and nutrition; every humor was held to have a distinct composition and to be secreted by a different organ. [ 13 ] Aristotle's concept of eucrasia, a state resembling equilibrium, and its relationship to the right balance of the four humors allowed for the maintenance of human health, offering a more mathematical approach to medicine. [ 13 ]
The imbalance of humors, or dyscrasia , was thought to be the direct cause of all diseases. Health was associated with a balance of humors, or eucrasia . The qualities of the humors, in turn, influenced the nature of the diseases they caused. Yellow bile caused warm diseases and phlegm caused cold diseases. In On the Temperaments , Galen further emphasized the importance of the qualities. An ideal temperament involved a proportionally balanced mixture of the four qualities. Galen identified four temperaments in which one of the qualities (warm, cold, moist, or dry) predominated, and four more in which a combination of two (warm and moist, warm and dry, cold and dry, or cold and moist) dominated. These last four, named for the humors with which they were associated—sanguine, choleric, melancholic and phlegmatic—eventually became better known than the others. While the term temperament came to refer just to psychological dispositions, Galen used it to refer to bodily dispositions, which determined a person's susceptibility to particular diseases, as well as behavioral and emotional inclinations.
Disease could also be the result of the "corruption" of one or more of the humors, which could be caused by environmental circumstances, dietary changes, or many other factors. [ 14 ] These deficits were thought to be caused by vapors inhaled or absorbed by the body. Greeks and Romans, and the later Muslim and Western European medical establishments that adopted and adapted classical medical philosophy, believed that each of these humors would wax and wane in the body, depending on diet and activity. When a patient was suffering from a surplus or imbalance of one of the four humors, then said patient's personality and/or physical health could be negatively affected.
Therefore, the goal of treatment was to rid the body of some of the excess humor through techniques like purging, bloodletting, catharsis, diuresis, and others. Bloodletting was already a prominent medical procedure by the first century, but venesection took on even more significance once Galen of Pergamum declared blood to be the most prevalent humor. [ 15 ] The volume of blood extracted ranged from a few drops to several litres over the course of several days, depending on the patient's condition and the doctor's practice. [ 16 ]
Even though humoral theory had several variant models using two, three, or five components, the most famous model consists of the four humors described by Hippocrates and developed further by Galen . The four humors of Hippocratic medicine are black bile (Greek: μέλαινα χολή , melaina chole ), yellow bile (Greek: ξανθὴ χολή , xanthe chole ), phlegm (Greek: φλέγμα , phlegma ), and blood (Greek: αἷμα , haima ). Each corresponds to one of the traditional four temperaments . Based on Hippocratic medicine, it was believed that for a body to be healthy, the four humors should be balanced in amount and strength. [ 17 ] The proper blending and balance of the four humors was known as eukrasia . [ 18 ]
Humorism theory was improved by Galen, who incorporated his understanding of the humors into his interpretation of the human body. He believed the interactions of the humors within the body were the key to investigating the physical nature and function of the organ systems. Galen combined his interpretation of the humors with his collection of ideas concerning nature from past philosophers in order to find conclusions about how the body works. For example, Galen maintained the idea of the presence of the Platonic tripartite soul, which consisted of " thumos (spiritedness), epithumos (directed spiritedness, i.e. desire), and Sophia (wisdom)". [ 19 ] Through this, Galen found a connection between these three parts of the soul and the three major organs that were recognized at the time: the brain, the heart, and the liver. [ 19 ] This idea of connecting vital parts of the soul to vital parts of the body was derived from Aristotle's sense of explaining physical observations, and Galen utilized it to build his view of the human body. The organs (named organa ) had specific functions (called chreiai ) that contributed to the maintenance of the human body, and the expression of these functions is shown in characteristic activities (called energeiai ) of a person. [ 20 ] While the correspondence of parts of the body to the soul was an influential concept, Galen decided that the interaction of the four humors with natural bodily mechanisms were responsible for human development and this connection inspired his understanding of the nature of the components of the body.
Galen recalls the correspondence between humors and seasons in his On the Doctrines of Hippocrates and Plato , and says that, "As for ages and the seasons, the child ( παῖς ) corresponds to spring, the young man ( νεανίσκος ) to summer, the mature man ( παρακµάζων ) to autumn, and the old man ( γέρων ) to winter". [ 21 ] He also related a correspondence between humors and seasons based on the properties of both. Blood, as a humor, was considered hot and wet. This gave it a correspondence to spring. Yellow bile was considered hot and dry, which related it to summer. Black bile was considered cold and dry, and thus related to autumn. Phlegm, cold and wet, was related to winter. [ 22 ]
Galen also believed that the characteristics of the soul follow the mixtures of the body, but he did not apply this idea to the Hippocratic humors. He believed that phlegm did not influence character. In his On Hippocrates ' The Nature of Man , Galen stated: "Sharpness and intelligence ( ὀξὺ καὶ συνετόν ) are caused by yellow bile in the soul, perseverance and consistency ( ἑδραῖον καὶ βέβαιον ) by the melancholic humor, and simplicity and naivety ( ἁπλοῦν καὶ ἠλιθιώτερον ) by blood. But the nature of phlegm has no effect on the character of the soul ( τοῦ δὲ φλέγµατος ἡ φύσις εἰς µὲν ἠθοποιῗαν ἄχρηστος )." [ 23 ] He further said that blood is a mixture of the four elements: water, air, fire, and earth.
These terms only partly correspond to modern medical terminology, in which there is no distinction between black and yellow bile, and phlegm has a very different meaning. It was believed that the humors were the basic substances from which all liquids in the body were made. Robin Fåhræus (1921), a Swedish physician who devised the erythrocyte sedimentation rate , suggested that the four humors were based upon the observation of blood clotting in a transparent container. When blood is drawn in a glass container and left undisturbed for about an hour, four different layers can be seen: a dark clot forms at the bottom (the "black bile"); above the clot is a layer of red blood cells (the "blood"); above this is a whitish layer of white blood cells (the "phlegm"); the top layer is clear yellow serum (the "yellow bile"). [ 24 ]
Many Greek texts were written during the golden age of the theory of the four humors in Greek medicine after Galen. One of those texts was an anonymous treatise called On the Constitution of the Universe and of Man , published in the mid-19th century by J. L. Ideler. In this text, the author establishes the relationship between elements of the universe (air, water, earth, fire) and elements of the man (blood, yellow bile, black bile, phlegm). [ 25 ] He said that:
The seventeenth-century English playwright Ben Jonson wrote humor plays , in which character types were based on their humoral complexion.
It was thought that the nutritional value of the blood was the source of energy for the body and the soul. Blood was believed to consist of small proportional amounts of the other three humors. This meant that taking a blood sample would allow for determination of the balance of the four humors in the body. [ 26 ] It was associated with a sanguine nature (enthusiastic, active, and social). [ 27 ] [ 28 ] : 103–05 Blood is considered to be hot and wet, sharing these characteristics with the season of spring. [ 29 ]
Yellow bile was associated with a choleric nature (ambitious, decisive, aggressive, and short-tempered). [ 30 ] It was thought to be fluid found within the gallbladder , or in excretions such as vomit and feces. [ 26 ] The associated qualities for yellow bile are hot and dry with the natural association of summer and fire. It was believed that an excess of this humor in an individual would result in emotional irregularities such as increased anger or irrational behaviour. [ 31 ]
Black bile was associated with a melancholy nature, the word melancholy itself deriving from the Greek for 'black bile', μέλαινα χολή ( melaina kholé ). Depression was attributed to excess or unnatural black bile secreted by the spleen . [ 32 ] Cancer was also attributed to an excess of black bile concentrated in a specific area. [ 33 ] The seasonal association of black bile was to autumn as the cold and dry characteristics of the season reflect the nature of man. [ 29 ]
Phlegm was associated with a phlegmatic nature, thought to correspond to reserved behavior. [ 34 ] The phlegm of humorism is far from phlegm as it is defined today. Phlegm was used as a general term to describe white or colorless secretions such as pus, mucus, saliva, or sweat. [ 26 ] Phlegm was also associated with the brain, possibly due to the color and consistency of brain tissue. [ 26 ] The French physiologist and Nobel laureate Charles Richet , when describing humorism's "phlegm or pituitary secretion" in 1910, asked rhetorically, "this strange liquid, which is the cause of tumours , of chlorosis , of rheumatism , and cacochymia – where is it? Who will ever see it? Who has ever seen it? What can we say of this fanciful classification of humors into four groups, of which two are absolutely imaginary?" [ 35 ] The seasonal association of phlegm is winter, due to its natural properties of being cold and wet. [ 36 ]
Humors were believed to be produced via digestion as the final products of hepatic digestion. Digestion is a continuous process taking place in every animal, and it can be divided into four sequential stages: [ 37 ] the gastric digestion stage, the hepatic digestion stage, the vascular digestion stage, and the tissue digestion stage. Each stage digests food further until it becomes suitable for use by the body. In gastric digestion, food is made into chylous, which is suitable for the liver to absorb and continue digesting. Chylous is changed into chymous in the hepatic digestion stage. Chymous is composed of the four humors: blood, phlegm, yellow bile, and black bile. These four humors then circulate in the blood vessels . In the last stage of digestion, tissue digestion, food becomes similar to the organ tissue for which it is destined.
If anything goes wrong leading up to the production of humors, there will be an imbalance leading to disease. Proper organ functioning is necessary for the production of good humors. The stomach and liver also have to function normally for proper digestion. If there are any abnormalities in gastric digestion, the liver, blood vessels, and tissues cannot be provided with properly formed chylous, which causes abnormal humor and blood composition. Even a healthy, normally functioning liver is not capable of converting abnormal chylous into normal chylous and normal humors.
Humors are the end product of hepatic digestion, but they are not the end product of the digestion cycle, so an abnormal humor produced by hepatic digestion will affect the other digestive organs.
Jaundice is treated in the Hippocratic Corpus according to humoral theory, and some of the first descriptions of jaundice (icterus) come from the Hippocratic physicians. [ 38 ] The ailment appears multiple times in the Corpus, where its genesis, description, prognosis, and therapy are given. The five kinds of jaundice mentioned in the Hippocratic Corpus all share a yellow or greenish skin color. [ 38 ]
After reading the clinical symptoms of each variety of jaundice listed in the Hippocratic Corpus, a modern doctor will readily recognize the symptoms described in contemporary atlases of medicine. Although the Hippocratic physicians' therapeutic approaches have little to do with contemporary medical practice, their capacity for observation in describing the various forms of jaundice is remarkable. [ 38 ] The Hippocratic physicians make multiple references to jaundice in the Corpus. At that time, jaundice was viewed as an illness in its own right rather than as a symptom brought on by a disease. [ 38 ]
Empedocles 's theory suggested that there are four elements : earth, fire, water, and air, with the earth producing the natural systems. Since this theory was influential for centuries, later scholars paired qualities associated with each humor as described by Hippocrates/Galen with seasons and "basic elements" as described by Empedocles . [ 39 ]
The following table shows the four humors with their corresponding elements, seasons, sites of formation, and resulting temperaments: [ 40 ]
Medieval medical tradition in the Golden Age of Islam adopted the theory of humorism from Greco-Roman medicine, notably via the Persian polymath Avicenna 's The Canon of Medicine (1025). Avicenna summarized the four humors and temperaments as follows: [ 41 ]
The Unani school of medicine, practiced in Perso-Arabic countries, India, and Pakistan, is based on Galenic and Avicennian medicine in its emphasis on the four humors as a fundamental part of the methodologic paradigm.
The humoralist system of medicine was highly individualistic, for all patients were said to have their own unique humoral composition. [ 43 ] From Hippocrates onward, the humoral theory was adopted by Greek, Roman and Islamic physicians , and dominated the view of the human body among European physicians until at least 1543 when it was first seriously challenged by Andreas Vesalius , who mostly criticized Galen's theories of human anatomy and not the chemical hypothesis of behavioural regulation (temperament).
Typical 18th-century practices such as bleeding a sick person or applying hot cups to a person were based on the humoral theory of imbalances of fluids (blood and bile in those cases). Methods of treatment like bloodletting, emetics and purges were aimed at expelling a surplus of a humor. [ 44 ] Apocroustics were medications intended to stop the flux of malignant humors to a diseased body part. [ 45 ]
16th-century Swiss physician Paracelsus further developed the idea that beneficial medical substances could be found in herbs, minerals and various alchemical combinations thereof. These beliefs were the foundation of mainstream Western medicine well into the 17th century. Specific minerals or herbs were used to treat ailments simple to complex, from an uncomplicated upper respiratory infection to the plague. For example, chamomile was used to decrease heat, and lower excessive bile humor. Arsenic was used in a poultice bag to 'draw out' the excess humor(s) that led to symptoms of the plague. Apophlegmatisms , in pre-modern medicine, were medications chewed in order to draw away phlegm and humors.
Although advances in cellular pathology and chemistry had discredited humoralism by the 17th century, the theory had dominated Western medical thinking for more than 2,000 years. [ 46 ] [ 47 ] Only in some instances did the theory of humoralism wane into obscurity. One such instance occurred in the sixth and seventh centuries in the Byzantine Empire when traditional secular Greek culture gave way to Christian influences. Though the use of humoralist medicine continued during this time, its influence was diminished in favor of religion. [ 48 ] The revival of Greek humoralism, owing in part to changing social and economic factors, did not begin until the early ninth century. [ 49 ] Use of the practice in modern times is pseudoscience . [ 50 ]
For more than 2,000 years, until the advent of modern medicine, humoral theory served as the grand unified theory of medicine. The theory was one of the fundamental tenets of the teachings of the Greek physician-philosopher Hippocrates (460–370 BC), who is regarded as the first practitioner of medicine and is often referred to as the "Father of Medicine". [ 51 ]
With the advent of the doctrine of specific etiology , which holds that every diagnosed sickness or disorder has one precise cause, the humoral theory's demise hastened even further. [ 51 ] Additionally, the identification of messenger molecules like hormones, growth factors, and neurotransmitters suggests that the humoral theory is not yet fully moribund. Humoral theory is still present in modern medical terminology, which refers to humoral immunity when discussing elements of immunity that circulate in the bloodstream, such as hormones and antibodies. [ 51 ]
Modern medicine refers to humoral immunity or humoral regulation when describing substances such as hormones and antibodies , but this is not a remnant of the humor theory. It is merely a literal use of humoral , i.e. pertaining to bodily fluids (such as blood and lymph).
The concept of humorism was not definitively disproven until 1858. [ 46 ] [ 47 ] There were no studies performed to prove or disprove the impact of dysfunction in known bodily organs producing named fluids (humors) on temperament traits, simply because the list of temperament traits was not defined until the end of the 20th century.
Theophrastus and others developed a set of characters based on the humors. Those with too much blood were sanguine. Those with too much phlegm were phlegmatic. Those with too much yellow bile were choleric, and those with too much black bile were melancholic. The idea of human personality based on humors contributed to the character comedies of Menander and, later, Plautus .
Through the neo-classical revival in Europe, the humor theory dominated medical practice, and the theory of humoral types made periodic appearances in drama. The humors were an important and popular iconographic theme in European art, found in paintings, tapestries, [ 52 ] and sets of prints.
The humors can be found in Elizabethan works , such as in The Taming of the Shrew , in which the character Petruchio, a choleric man, uses humoral therapy techniques on Katherina, a choleric woman, in order to tame her into the socially acceptable phlegmatic woman. [ 53 ] Some examples include: he yells at the servants for serving mutton, a choleric food, to two people who are already choleric; he deprives Katherina of sleep; and he, Katherina and their servant Grumio endure a cold walk home, for cold temperatures were said to tame choleric temperaments.
The theory of the four humors features prominently in Rupert Thomson 's 2005 novel Divided Kingdom . | https://en.wikipedia.org/wiki/Humorism |
Humphry John Moule Bowen (22 June 1929 – 9 August 2001) was a British botanist and chemist . [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ]
Bowen was born in Oxford , son of the chemist Edmund Bowen and Edith Bowen (née Moule). [ 8 ] He attended the Dragon School , gaining a scholarship to Rugby School and then a demyship to Magdalen College, Oxford . He won the Gibbs Prize [ 9 ] in 1949 and completed a DPhil in chemistry at Oxford University in 1953 before starting his professional career as a chemist. Bowen was also a proficient amateur actor in his early years, appearing with a young Ronnie Barker at Oxford. [ 3 ]
His first post was with the Atomic Energy Research Establishment (AERE) near the village of Harwell where he lived, working at the Wantage Research Laboratory, then in Berkshire . [ 4 ] His early work started an interest in radioisotopes and trace elements that he maintained throughout his working life. While at AERE, he spent several months in 1956 attending the British nuclear tests at Maralinga in Australia to study the environmental effects of radiation . [ 4 ]
Bowen realized that the calibration of different instruments intended to measure trace elements was an important issue that needed addressing. His solution was to produce a good supply of a material which later became known as Bowen's Kale . [ 11 ] This was a dried, crushed homogenate of the plant kale that was stable and consistent enough to be distributed as a research calibration standard, probably the first successful example of such a standard. [ 6 ]
In 1964, he was appointed as a lecturer in the chemistry department at the University of Reading , and he was promoted to Reader in analytical chemistry in 1974. At Reading, Bowen undertook consultancy for Dunlop , investigating potential uses for their products. [ 3 ] When the Torrey Canyon oil disaster occurred in 1967, he realized that it might be possible to use foam booms to block the oil from spreading in the English Channel . His original experiments were conducted in a small bucket in his laboratory. [ 4 ] Although not entirely successful at the time due to the rough seas, this lateral thinking combined his interest in chemistry with his love of nature and has since been effectively deployed to protect ports and harbours against encroaching oil slicks. Bowen wrote a number of professional books in the field of chemistry, including two editions of Trace Elements in Biochemistry (1966 and 1976). [ 12 ]
In 1968, Bowen noted that the paint used for yellow line road markings can contain chromate pigment, which may cause urban pollution as it deteriorates. [ 13 ] He pointed out that hexavalent chromium in dust can cause dermatitis ulceration on the skin, inflammation of the nasal mucosa and larynx , and lung cancer . [ 13 ]
From 1951 onwards, Bowen was a long-serving member of the Botanical Society of the British Isles (BSBI). He was meetings secretary for a period and the official recorder of plants for the counties of Berkshire and Dorset , producing Floras for both counties. [ 1 ] [ 2 ] He retired to Winterborne Kingston in Dorset at the end of his life. He was also one of the leading contributors of botanical data for the Flora of Oxfordshire . [ 14 ] He acted as an expert botanical guide on tours around Europe, especially Greece and Turkey . [ 4 ]
Humphry Bowen donated a large collection of lichens from Berkshire and Oxfordshire to the Museum of Reading in the 1970s. [ 15 ] He established the Bowen Cup at the University of Reading in 1988, an annual prize for the student in the Department of Chemistry at the University who achieves the top marks in Part II Analytical Chemistry. [ 16 ] | https://en.wikipedia.org/wiki/Humphry_Bowen |
A humster is a hybrid cell line made from a zona-free hamster oocyte fertilized with human sperm. [ 1 ] It always consists of a single cell, and cannot form a multi-cellular being. Humsters are usually destroyed before they divide into two cells ; if isolated and left alone to divide, they would still be unviable . [ 2 ]
Humsters are routinely created mainly for two reasons:
Somatic cell hybrids between humans and hamsters or mice have been used for the mapping of various traits since at least the 1970s. [ 3 ]
This genetics article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Humster |
In rotational-vibrational and electronic spectroscopy of diatomic molecules , Hund 's coupling cases are idealized descriptions of rotational states in which specific terms in the molecular Hamiltonian and involving couplings between angular momenta are assumed to dominate over all other terms. There are five cases, proposed by Friedrich Hund in 1926-27 [ 1 ] and traditionally denoted by the letters (a) through (e). Most diatomic molecules are somewhere between the idealized cases (a) and (b). [ 2 ]
To describe the Hund's coupling cases, we use the following angular momenta (where boldface letters indicate vector quantities):
These vector quantities depend on corresponding quantum numbers whose values are shown in molecular term symbols used to identify the states. For example, the term symbol 2 Π 3/2 denotes a state with S = 1/2, Λ = 1 and Ω = 3/2.
Hund's coupling cases are idealizations. The appropriate case for a given situation can be found by comparing three strengths: the electrostatic coupling of L {\displaystyle \mathbf {L} } to the internuclear axis, the spin-orbit coupling , and the rotational coupling of L {\displaystyle \mathbf {L} } and S {\displaystyle \mathbf {S} } to the total angular momentum J {\displaystyle \mathbf {J} } .
For 1 Σ states the orbital and spin angular momenta are zero and the total angular momentum is just the nuclear rotational angular momentum. [ 3 ] For other states, Hund proposed five possible idealized modes of coupling. [ 4 ]
The last two rows are degenerate because they have the same good quantum numbers . [ 5 ]
In practice there are also many molecular states which are intermediate between the above limiting cases. [ 3 ]
The most common [ 6 ] case is case (a) in which L {\displaystyle \mathbf {L} } is electrostatically coupled to the internuclear axis, and S {\displaystyle \mathbf {S} } is coupled to L {\displaystyle \mathbf {L} } by spin-orbit coupling . Then both L {\displaystyle \mathbf {L} } and S {\displaystyle \mathbf {S} } have well-defined axial components, Λ {\displaystyle \Lambda } and Σ {\displaystyle \Sigma } respectively. As they are written with the same Greek symbol, the spin component Σ {\displaystyle \Sigma } should not be confused with Σ {\displaystyle \Sigma } states, which are states with orbital angular component Λ {\displaystyle \Lambda } equal to zero. Ω {\displaystyle {\boldsymbol {\Omega }}} defines a vector of magnitude Ω = Λ + Σ {\displaystyle \Omega =\Lambda +\Sigma } pointing along the internuclear axis. Combined with the rotational angular momentum of the nuclei R {\displaystyle \mathbf {R} } , we have J = Ω + R {\displaystyle \mathbf {J} ={\boldsymbol {\Omega }}+\mathbf {R} } . In this case, the precession of L {\displaystyle \mathbf {L} } and S {\displaystyle \mathbf {S} } around the nuclear axis is assumed to be much faster than the nutation of Ω {\displaystyle {\boldsymbol {\Omega }}} and R {\displaystyle \mathbf {R} } around J {\displaystyle \mathbf {J} } .
The good quantum numbers in case (a) are Λ {\displaystyle \Lambda } , S {\displaystyle S} , Σ {\displaystyle \Sigma } , J {\displaystyle J} and Ω {\displaystyle \Omega } . However L {\displaystyle L} is not a good quantum number because the vector L {\displaystyle \mathbf {L} } is strongly coupled to the electrostatic field and therefore precesses rapidly around the internuclear axis with an undefined magnitude. [ 6 ] We express the rotational energy operator as H r o t = B R 2 = B ( J − L − S ) 2 {\displaystyle H_{rot}=B\mathbf {R} ^{2}=B(\mathbf {J} -\mathbf {L} -\mathbf {S} )^{2}} , where B {\displaystyle B} is a rotational constant. There are, ideally, 2 S + 1 {\displaystyle 2S+1} fine-structure states, each with rotational levels having relative energies B J ( J + 1 ) {\displaystyle BJ(J+1)} starting with J = Ω {\displaystyle J=\Omega } . [ 2 ] For example, a 2 Π state has a 2 Π 1/2 term (or fine structure state) with rotational levels J {\displaystyle J} = 1/2, 3/2, 5/2, 7/2, ... and a 2 Π 3/2 term with levels J {\displaystyle J} = 3/2, 5/2, 7/2, 9/2.... [ 4 ] Case (a) requires Λ {\displaystyle \Lambda } > 0 and so does not apply to any Σ states, and also S {\displaystyle S} > 0 so that it does not apply to any singlet states. [ 7 ]
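The case (a) rotational ladder E = BJ(J+1), with J starting at Ω and increasing in integer steps, is easy to tabulate. The sketch below is our own illustration (the function name is not from the source), using exact half-integer values of J:

```python
from fractions import Fraction

def case_a_levels(omega, count, B=1.0):
    """Hund's case (a): rotational energies E_J = B*J*(J+1) for
    J = Omega, Omega + 1, Omega + 2, ... within one fine-structure state."""
    levels = []
    for k in range(count):
        J = Fraction(omega) + k
        levels.append((J, B * float(J * (J + 1))))
    return levels

# 2Pi_1/2 state: J = 1/2, 3/2, 5/2, 7/2, ... with energies in units of B
for J, E in case_a_levels(Fraction(1, 2), 4):
    print(J, E)  # 1/2 0.75, 3/2 3.75, 5/2 8.75, 7/2 15.75
```

The 2 Π 3/2 ladder is obtained the same way with `omega = Fraction(3, 2)`, giving J = 3/2, 5/2, 7/2, ....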
The selection rules for allowed spectroscopic transitions depend on which quantum numbers are good. For Hund's case (a), the allowed transitions must have Δ Λ = 0 , ± 1 {\displaystyle \Delta \Lambda =0,\pm 1} and Δ S = 0 {\displaystyle \Delta S=0} and Δ Σ = 0 {\displaystyle \Delta \Sigma =0} and Δ Ω = 0 , ± 1 {\displaystyle \Delta \Omega =0,\pm 1} and Δ J = 0 , ± 1 {\displaystyle \Delta J=0,\pm 1} . [ 8 ] In addition, symmetrical diatomic molecules have even (g) or odd (u) parity and obey the Laporte rule that only transitions between states of opposite parity are allowed.
In case (b), the spin-orbit coupling is weak or non-existent (in the case Λ = 0 {\displaystyle \Lambda =0} ). In this case, we take N = Λ + R {\displaystyle \mathbf {N} ={\boldsymbol {\Lambda }}+\mathbf {R} } and J = N + S {\displaystyle \mathbf {J} =\mathbf {N} +\mathbf {S} } and assume L {\displaystyle \mathbf {L} } precesses quickly around the internuclear axis.
The good quantum numbers in case (b) are Λ {\displaystyle \Lambda } , N {\displaystyle N} , S {\displaystyle S} , and J {\displaystyle J} . We express the rotational energy operator as H r o t = B R 2 = B ( N − L ) 2 {\displaystyle H_{rot}=B\mathbf {R} ^{2}=B(\mathbf {N} -\mathbf {L} )^{2}} , where B {\displaystyle B} is a rotational constant. The rotational levels therefore have relative energies B N ( N + 1 ) {\displaystyle BN(N+1)} starting with N = Λ {\displaystyle N=\Lambda } . [ 2 ] For example, a 2 Σ state has rotational levels N {\displaystyle N} = 0, 1, 2, 3, 4, ..., and each level is divided by spin-rotation coupling into two levels J {\displaystyle J} = N {\displaystyle N} ± 1/2 (except for N {\displaystyle N} = 0 which corresponds only to J {\displaystyle J} = 1/2 because J {\displaystyle J} cannot be negative). [ 9 ]
Another example is the 3 Σ ground state of dioxygen , which has two unpaired electrons with parallel spins. The coupling type is Hund's case (b), and each rotational level N is divided into three levels J {\displaystyle J} = N − 1 {\displaystyle N-1} , N {\displaystyle N} , N + 1 {\displaystyle N+1} . [ 10 ]
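The level structure just described follows from coupling N with S, which allows J to run from |N − S| to N + S in integer steps. A small illustrative sketch (the function name is ours, not the source's):

```python
from fractions import Fraction

def case_b_J(N, S):
    """Hund's case (b): coupling N and S gives allowed total angular momenta
    J = |N - S|, |N - S| + 1, ..., N + S."""
    S = Fraction(S)
    J = abs(N - S)
    out = []
    while J <= N + S:
        out.append(J)
        J += 1
    return out

# doublet Sigma (S = 1/2): N = 0 gives only J = 1/2; each N >= 1 splits into N -/+ 1/2
print(case_b_J(0, Fraction(1, 2)))
# triplet Sigma, e.g. the O2 ground state (S = 1): each N >= 1 splits into N-1, N, N+1
print(case_b_J(2, 1))
```

Note that the |N − S| lower bound automatically reproduces the exceptions mentioned in the text, such as N = 0 corresponding only to J = 1/2 in a doublet state.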
For case (b) the selection rules for quantum numbers Λ {\displaystyle \Lambda } , S {\displaystyle S} , Σ {\displaystyle \Sigma } and Ω {\displaystyle \Omega } and for parity are the same as for case (a). However for the rotational levels, the rule for quantum number J {\displaystyle J} does not apply and is replaced by the rule Δ N = 0 , ± 1 {\displaystyle \Delta N=0,\pm 1} . [ 11 ]
In case (c), the spin-orbit coupling is stronger than the coupling to the internuclear axis, and Λ {\displaystyle \Lambda } and Σ {\displaystyle \Sigma } from case (a) cannot be defined. Instead L {\displaystyle \mathbf {L} } and S {\displaystyle \mathbf {S} } combine to form J a {\displaystyle \mathbf {J} _{a}} , which has a projection along the internuclear axis of magnitude Ω {\displaystyle \Omega } . Then J = Ω + R {\displaystyle \mathbf {J} ={\boldsymbol {\Omega }}+\mathbf {R} } , as in case (a).
The good quantum numbers in case (c) are J a {\displaystyle J_{a}} , J {\displaystyle J} , and Ω {\displaystyle \Omega } . [ 2 ] Since Λ {\displaystyle \Lambda } is undefined for this case, the states cannot be described as Σ {\displaystyle \Sigma } , Π {\displaystyle \Pi } or Δ {\displaystyle \Delta } . [ 12 ] An example of Hund's case (c) is the lowest 3 Π u state of diiodine (I 2 ), which approximates more closely to case (c) than to case (a). [ 6 ]
The selection rules for S {\displaystyle S} , Ω {\displaystyle \Omega } and parity are valid as for cases (a) and (b), but there are no rules for Λ {\displaystyle \Lambda } and Σ {\displaystyle \Sigma } since these are not good quantum numbers for case (c). [ 6 ]
In case (d), the rotational coupling between L {\displaystyle \mathbf {L} } and R {\displaystyle \mathbf {R} } is much stronger than the electrostatic coupling of L {\displaystyle \mathbf {L} } to the internuclear axis. Thus we form N {\displaystyle \mathbf {N} } by coupling L {\displaystyle \mathbf {L} } and R {\displaystyle \mathbf {R} } and the form J {\displaystyle \mathbf {J} } by coupling N {\displaystyle \mathbf {N} } and S {\displaystyle \mathbf {S} } .
The good quantum numbers in case (d) are L {\displaystyle L} , R {\displaystyle R} , N {\displaystyle N} , S {\displaystyle S} , and J {\displaystyle J} . Because R {\displaystyle R} is a good quantum number, the rotational energy is simply H r o t = B R 2 = B R ( R + 1 ) {\displaystyle H_{rot}=B\mathbf {R} ^{2}=BR(R+1)} . [ 2 ]
In case (e), we first form J a {\displaystyle \mathbf {J} _{a}} and then form J {\displaystyle \mathbf {J} } by coupling J a {\displaystyle \mathbf {J} _{a}} and R {\displaystyle \mathbf {R} } . This case is rare but has been observed. [ 13 ] Rydberg states which converge to ionic states with spin–orbit coupling (such as 2 Π) are best described as case (e). [ 14 ]
The good quantum numbers in case (e) are J a {\displaystyle J_{a}} , R {\displaystyle R} , and J {\displaystyle J} . Because R {\displaystyle R} is once again a good quantum number, the rotational energy is H r o t = B R 2 = B R ( R + 1 ) {\displaystyle H_{rot}=B\mathbf {R} ^{2}=BR(R+1)} . [ 2 ] | https://en.wikipedia.org/wiki/Hund's_cases |
Hund's rule of maximum multiplicity is a rule based on observation of atomic spectra , which is used to predict the ground state of an atom or molecule with one or more open electronic shells . The rule states that for a given electron configuration , the lowest energy term is the one with the greatest value of spin multiplicity . [ 1 ] This implies that if two or more orbitals of equal energy are available, electrons will occupy them singly before filling them in pairs . The rule, discovered by Friedrich Hund in 1925, is of important use in atomic chemistry , spectroscopy , and quantum chemistry , and is often abbreviated to Hund's rule , ignoring Hund's other two rules .
The multiplicity of a state is defined as 2S + 1, where S is the total electronic spin. [ 2 ] A high multiplicity state is therefore the same as a high-spin state. The lowest-energy state with maximum multiplicity usually has unpaired electrons all with parallel spin. Since the spin of each electron is 1/2, the total spin is one-half the number of unpaired electrons, and the multiplicity is the number of unpaired electrons + 1. For example, the nitrogen atom ground state has three unpaired electrons of parallel spin, so that the total spin is 3/2 and the multiplicity is 4.
The lower energy and increased stability of the atom arise because the high-spin state has unpaired electrons of parallel spin, which must reside in different spatial orbitals according to the Pauli exclusion principle . An early but incorrect explanation of the lower energy of high multiplicity states was that the different occupied spatial orbitals create a larger average distance between electrons, reducing electron-electron repulsion energy. [ 3 ] However, quantum-mechanical calculations with accurate wave functions since the 1970s have shown that the actual physical reason for the increased stability is a decrease in the screening of electron-nuclear attractions, so that the unpaired electrons can approach the nucleus more closely and the electron-nuclear attraction is increased. [ 3 ]
As a result of Hund's rule, constraints are placed on the way atomic orbitals are filled in the ground state using the Aufbau principle . Before any two electrons occupy an orbital in a subshell, other orbitals in the same subshell must first each contain one electron. Also, the electrons filling a subshell will have parallel spin before the shell starts filling up with the opposite spin electrons (after the first orbital gains a second electron). As a result, when filling up atomic orbitals, the maximum number of unpaired electrons (and hence maximum total spin state) is assured.
For example, in the oxygen atom, the 2p 4 subshell arranges its electrons as [↑↓] [↑] [↑] rather than [↑↓] [↑] [↓] or [↑↓] [↑↓][ ]. The manganese (Mn) atom has a 3d 5 electron configuration with five unpaired electrons all of parallel spin, corresponding to a 6 S ground state. [ 4 ] The superscript 6 is the value of the multiplicity , corresponding to five unpaired electrons with parallel spin in accordance with Hund's rule.
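The fillings above can be sketched programmatically. The following minimal illustration (the function name is ours) applies Hund's-rule filling to a single subshell, singly occupying all 2l + 1 orbitals with parallel spins before pairing, and reports the number of unpaired electrons and the resulting multiplicity:

```python
def hund_fill(l, n_electrons):
    """Fill a subshell of orbital quantum number l (2l+1 orbitals) per Hund's
    rule; return (unpaired electrons, spin multiplicity 2S+1)."""
    orbitals = 2 * l + 1
    assert 0 <= n_electrons <= 2 * orbitals
    spin_up = min(n_electrons, orbitals)   # singly occupy every orbital first
    spin_down = n_electrons - spin_up      # only then start pairing
    unpaired = spin_up - spin_down
    # each unpaired electron contributes spin 1/2, so 2S + 1 = unpaired + 1
    return unpaired, unpaired + 1

print(hund_fill(1, 3))  # nitrogen 2p^3  -> (3, 4): quartet
print(hund_fill(1, 4))  # oxygen 2p^4    -> (2, 3): triplet
print(hund_fill(2, 5))  # manganese 3d^5 -> (5, 6): sextet, the 6S ground state
```

The three calls reproduce the nitrogen, oxygen, and manganese examples discussed in the text.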
An atom can have a ground state with two incompletely filled subshells which are close in energy. The lightest example is the chromium (Cr) atom with a 3d 5 4s electron configuration. Here there are six unpaired electrons all of parallel spin for a 7 S ground state. [ 5 ]
Although most stable molecules have closed electron shells, a few have unpaired electrons for which Hund's rule is applicable. The most important example is the dioxygen molecule, O 2 , which has two degenerate pi antibonding molecular orbitals (π*) occupied by only two electrons. In accordance with Hund's rule, the ground state is triplet oxygen with two unpaired electrons in singly occupied orbitals. The singlet oxygen state with one doubly occupied and one empty π* is an excited state with different chemical properties and greater reactivity than the ground state. | https://en.wikipedia.org/wiki/Hund's_rule_of_maximum_multiplicity |
In atomic physics and quantum chemistry , Hund's rules refers to a set of rules that German physicist Friedrich Hund formulated around 1925, which are used to determine the term symbol that corresponds to the ground state of a multi-electron atom . The first rule is especially important in chemistry , where it is often referred to simply as Hund's Rule .
The three rules are: [ 1 ] [ 2 ] [ 3 ]
These rules specify in a simple way how usual energy interactions determine which term includes the ground state. The rules assume that the repulsion between the outer electrons is much greater than the spin–orbit interaction, which is in turn stronger than any other remaining interactions. This is referred to as the LS coupling regime.
Closed shells and subshells do not contribute to the quantum numbers for total S , the total spin angular momentum and for L , the total orbital angular momentum. It can be shown that for full orbitals and suborbitals both the residual electrostatic energy (repulsion between electrons) and the spin–orbit interaction can only shift all the energy levels together. Thus when determining the ordering of energy levels in general only the outer valence electrons must be considered.
Due to the Pauli exclusion principle , two electrons cannot share the same set of quantum numbers within the same system; therefore, there is room for only two electrons in each spatial orbital. One of these electrons must have, (for some chosen direction z ) m s = 1 ⁄ 2 , and the other must have m s = − 1 ⁄ 2 . Hund's first rule states that the lowest energy atomic state is the one that maximizes the total spin quantum number for the electrons in the open subshell . The orbitals of the subshell are each occupied singly with electrons of parallel spin before double occupation occurs. (This is occasionally called the "bus seat rule" since it is analogous to the behaviour of bus passengers who tend to occupy all double seats singly before double occupation occurs.)
Two different physical explanations have been given [ 5 ] for the increased stability of high multiplicity states. In the early days of quantum mechanics , it was proposed that electrons in different orbitals are further apart, so that electron–electron repulsion energy is reduced. However, accurate quantum-mechanical calculations (starting in the 1970s) have shown that the reason is that the electrons in singly occupied orbitals are less effectively screened or shielded from the nucleus, so that such orbitals contract and electron–nucleus attraction energy becomes greater in magnitude (or decreases algebraically).
As an example, consider the ground state of silicon . The electron configuration of Si is 1s 2 2s 2 2p 6 3s 2 3p 2 (see spectroscopic notation ). We need to consider only the outer 3p 2 electrons, for which it can be shown (see term symbols ) that the possible terms allowed by the Pauli exclusion principle are 1 D , 3 P , and 1 S . Hund's first rule now states that the ground state term is 3 P (triplet P) , which has S = 1. The superscript 3 is the value of the multiplicity = 2 S + 1 = 3. The diagram shows the state of this term with M L = 1 and M S = 1.
This rule deals with reducing the repulsion between electrons. It can be understood from the classical picture that if all electrons are orbiting in the same direction (higher orbital angular momentum) they meet less often than if some of them orbit in opposite directions. In the latter case the repulsive force increases, which separates electrons. This adds potential energy to them, so their energy level is higher.
For silicon there is only one triplet term, so the second rule is not required. The lightest atom that requires the second rule to determine the ground state term is titanium (Ti, Z = 22) with electron configuration 1s 2 2s 2 2p 6 3s 2 3p 6 3d 2 4s 2 . In this case the open shell is 3d 2 and the allowed terms include three singlets ( 1 S, 1 D, and 1 G) and two triplets ( 3 P and 3 F). (Here the symbols S, P, D, F, and G indicate that the total orbital angular momentum quantum number has values 0, 1, 2, 3 and 4, respectively, analogous to the nomenclature for naming atomic orbitals.)
We deduce from Hund's first rule that the ground state term is one of the two triplets, and from Hund's second rule that this term is 3 F (with L = 3 {\displaystyle L=3} ) rather than 3 P (with L = 1 {\displaystyle L=1} ). There is no 3 G term since its ( M L = 4 , M S = 1 ) {\displaystyle (M_{L}=4,M_{S}=1)} state would require two electrons each with ( M L = 2 , M S = + 1 / 2 ) {\displaystyle (M_{L}=2,M_{S}=+1/2)} , in violation of the Pauli principle. (Here M L {\displaystyle M_{L}} and M S {\displaystyle M_{S}} are the components of the total orbital angular momentum L and total spin S along the z-axis chosen as the direction of an external magnetic field.)
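The allowed terms quoted above for p² (¹S, ³P, ¹D) and d² (¹S, ³P, ¹D, ³F, ¹G) can be checked with the standard microstate-counting procedure: tally every Pauli-allowed microstate by its (M_L, M_S) values, then repeatedly peel off the term with the highest remaining M_L (and, for that M_L, the highest M_S). The sketch below is our own illustration of that procedure:

```python
from itertools import combinations
from collections import Counter

def equivalent_electron_terms(l, n):
    """Terms (L, 2S+1) of n equivalent electrons in a subshell of orbital
    angular momentum l, found by microstate counting."""
    # spin projections stored as 2*ms so all bookkeeping stays integer
    spin_orbitals = [(ml, s2) for ml in range(-l, l + 1) for s2 in (1, -1)]
    tally = Counter()
    for micro in combinations(spin_orbitals, n):  # Pauli: distinct spin-orbitals
        ML = sum(ml for ml, _ in micro)
        MS2 = sum(s2 for _, s2 in micro)
        tally[(ML, MS2)] += 1
    terms = []
    while any(c > 0 for c in tally.values()):
        # highest remaining M_L, then highest M_S, identifies one term (L, S)
        L, S2 = max(k for k, c in tally.items() if c > 0)
        for mL in range(-L, L + 1):               # remove that term's block
            for mS2 in range(-S2, S2 + 1, 2):
                tally[(mL, mS2)] -= 1
        terms.append((L, S2 + 1))                 # multiplicity 2S+1 = S2+1
    return sorted(terms)

# p^2 (e.g. Si 3p^2) as (L, multiplicity): 1S, 3P, 1D
print(equivalent_electron_terms(1, 2))  # [(0, 1), (1, 3), (2, 1)]
```

Calling `equivalent_electron_terms(2, 2)` likewise yields the five d² terms, and the absence of (4, 3) confirms that no ³G term exists, as argued above from the Pauli principle.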
This rule considers the energy shifts due to spin–orbit coupling . In the case where the spin–orbit coupling is weak compared to the residual electrostatic interaction, L {\displaystyle L} and S {\displaystyle S} are still good quantum numbers and the splitting is given by: Δ E = ζ ( L , S ) { L ⋅ S } = ( 1 / 2 ) ζ ( L , S ) { J ( J + 1 ) − L ( L + 1 ) − S ( S + 1 ) } {\displaystyle {\begin{aligned}\Delta E&=\zeta (L,S)\{\mathbf {L} \cdot \mathbf {S} \}\\&=\ (1/2)\zeta (L,S)\{J(J+1)-L(L+1)-S(S+1)\}\end{aligned}}}
The value of ζ ( L , S ) {\displaystyle \zeta (L,S)} changes from plus to minus for shells greater than half full. This term gives the dependence of the ground state energy on the magnitude of J {\displaystyle J\,} .
The 3 P {\displaystyle {}^{3}\!P\,} lowest energy term of Si consists of three levels, J = 2 , 1 , 0 {\displaystyle J=2,1,0\,} . With only two of six possible electrons in the shell, it is less than half-full and thus 3 P 0 {\displaystyle {}^{3}\!P_{0}\,} is the ground state.
For sulfur (S) the lowest energy term is again 3 P {\displaystyle {}^{3}\!P\,} with spin–orbit levels J = 2 , 1 , 0 {\displaystyle J=2,1,0\,} , but now there are four of six possible electrons in the shell so the ground state is 3 P 2 {\displaystyle {}^{3}\!P_{2}\,} .
If the shell is half-filled then L = 0 {\displaystyle L=0\,} , and hence there is only one value of J {\displaystyle J\,} (equal to S {\displaystyle S\,} ), which is the lowest energy state. For example, in phosphorus the lowest energy state has S = 3 / 2 , L = 0 {\displaystyle S=3/2,\ L=0} for three unpaired electrons in three 3p orbitals. Therefore, J = S = 3 / 2 {\displaystyle J=S=3/2} and the ground state is 4 S 3 / 2 {\displaystyle {}^{4}\!S_{3/2}\,} .
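The splitting formula for the spin–orbit shift is simple to evaluate. The following sketch (our own illustration, with ζ set to ±1 in arbitrary units) reproduces the ³P level orderings quoted above for silicon and sulfur:

```python
def spin_orbit_shift(J, L, S, zeta):
    """Delta E = (1/2) * zeta * [J(J+1) - L(L+1) - S(S+1)]."""
    return 0.5 * zeta * (J * (J + 1) - L * (L + 1) - S * (S + 1))

# Si 3p^2: shell less than half full, zeta > 0, so the lowest J lies lowest
si = {J: spin_orbit_shift(J, 1, 1, +1.0) for J in (0, 1, 2)}
# si == {0: -2.0, 1: -1.0, 2: 1.0} -> ground level 3P_0

# S 3p^4: shell more than half full, zeta < 0, so the highest J lies lowest
s = {J: spin_orbit_shift(J, 1, 1, -1.0) for J in (0, 1, 2)}
# s == {0: 2.0, 1: 1.0, 2: -1.0} -> ground level 3P_2
```

The sign flip of ζ across the half-filled shell is exactly what interchanges the ³P₀ and ³P₂ ground levels of the two atoms.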
Hund's rules work best for the determination of the ground state of an atom or molecule.
They are also fairly reliable (with occasional failures) for the determination of the lowest state of a given excited electronic configuration . Thus, in the helium atom , Hund's first rule correctly predicts that the 1s2s triplet state ( 3 S) is lower than the 1s2s singlet ( 1 S). Similarly for organic molecules, the same rule predicts that the first triplet state (denoted by T 1 in photochemistry ) is lower than the first excited singlet state (S 1 ), which is generally correct.
However Hund's rules should not be used to order states other than the lowest for a given configuration. [ 5 ] For example, the titanium atom ground state configuration is ...3d 2 for which a naïve application of Hund's rules would suggest the ordering 3 F < 3 P < 1 G < 1 D < 1 S. In reality, however, 1 D lies below 1 G. | https://en.wikipedia.org/wiki/Hund's_rules |
The Hundred-dollar, Hundred-digit Challenge problems are 10 problems in numerical mathematics published in 2002 by Nick Trefethen ( 2002 ). A $100 prize was offered to whoever produced the most accurate solutions, each measured to 10 significant digits . The deadline for the contest was May 20, 2002. In the end, 20 teams solved all of the problems perfectly to the required precision, and an anonymous donor aided in producing the required prize money. The challenge and its solutions were described in detail in the book by Folkmar Bornemann, Dirk Laurie, Stan Wagon et al. ( 2004 ).
From ( Trefethen 2002 ):
These answers have been assigned the identifiers OEIS : A117231 , OEIS : A117232 , OEIS : A117233 , OEIS : A117234 , OEIS : A117235 , OEIS : A117236 , OEIS : A117237 , OEIS : A117238 , OEIS : A117239 , and OEIS : A117240 in the On-Line Encyclopedia of Integer Sequences . | https://en.wikipedia.org/wiki/Hundred-dollar,_Hundred-digit_Challenge_problems |
The Hundred Fowls Problem is a problem first discussed in the fifth century CE Chinese mathematics text Zhang Qiujian suanjing (The Mathematical Classic of Zhang Qiujian), a book of mathematical problems written by Zhang Qiujian. It is one of the best known examples of indeterminate problems in the early history of mathematics . [ 1 ] The problem appears as the final problem in Zhang Qiujian suanjing (Problem 38 in Chapter 3). However, the problem and its variants have appeared in the medieval mathematical literature of India, Europe and the Arab world. [ 2 ]
The name "Hundred Fowls Problem" is due to the Belgian historian Louis van Hee. [ 3 ]
The Hundred Fowls Problem as presented in Zhang Qiujian suanjing can be translated as follows: a cock is worth 5 coins, a hen 3 coins, and 3 chicks together 1 coin; if 100 fowls are bought with 100 coins, how many cocks, hens and chicks are there? [ 4 ]
Let x be the number of cocks, y be the number of hens, and z be the number of chicks; then the problem is to find x , y and z satisfying the following equations: x + y + z = 100 and 5x + 3y + z/3 = 100.
Obviously, only non-negative integer values are acceptable. Expressing y and z in terms of x we get y = 25 − (7/4)x and z = 75 + (3/4)x.
Since x , y and z all must be integers, the expression for y suggests that x must be a multiple of 4. Hence the general solution of the system of equations can be expressed using an integer parameter t as follows: [ 5 ] x = 4t, y = 25 − 7t, z = 75 + 3t.
Since y should be a non-negative integer, the only possible values of t are 0, 1, 2 and 3. So the complete set of solutions is given by (x, y, z) = (0, 25, 75), (4, 18, 78), (8, 11, 81) and (12, 4, 84),
of which the last three have been given in Zhang Qiujian suanjing . [ 3 ] However, no general method for solving such problems has been indicated, leading to a suspicion of whether the solutions have been obtained by trial and error. [ 1 ]
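The four solutions can also be confirmed by brute force. A small Python sketch, assuming the standard reading of the problem (cocks at 5 coins, hens at 3, chicks at 3 for 1 coin, 100 fowls for 100 coins):

```python
# Enumerate all non-negative integer (x, y, z) with x + y + z = 100 fowls
# costing exactly 100 coins: 5 per cock, 3 per hen, 1 per 3 chicks.
solutions = [
    (x, y, z)
    for x in range(101)
    for y in range(101 - x)
    for z in [100 - x - y]
    if z % 3 == 0 and 5 * x + 3 * y + z // 3 == 100
]
# -> [(0, 25, 75), (4, 18, 78), (8, 11, 81), (12, 4, 84)]
```

The divisibility check `z % 3 == 0` is needed because chicks are only sold in threes, so a whole number of coins must buy them.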
The Hundred Fowls Problem found in Zhang Qiujian suanjing is a special case of the general problem of finding integer solutions of the following system of equations:
Any problem of this type is sometimes referred to as a "Hundred Fowls problem". [ 3 ]
Some variants of the Hundred Fowls Problem have appeared in the mathematical literature of several cultures. [ 1 ] [ 2 ] In the following we present a few sample problems discussed in these cultures.
Mahavira 's Ganita-sara-sangraha contains the following problem:
The Bakhshali manuscript gives the problem of solving the following equations:
The English mathematician Alcuin of York (c. 735 – 19 May 804) stated seven problems similar to the Hundred Fowls Problem in his Propositiones ad acuendos iuvenes . Here is a typical problem:
Abu Kamil (850 - 930 CE) considered non-negative integer solutions of the following equations: | https://en.wikipedia.org/wiki/Hundred_Fowls_Problem |
In arithmetic , a hundredth is a single part of something that has been divided equally into a hundred parts. [ 1 ] For example, a hundredth of 675 is 6.75. The same sense appears in the prefix " centi- ", as in centimeter . A hundredth is also one percent .
A hundredth is the reciprocal of 100.
A hundredth is written as a decimal fraction as 0.01, and as a vulgar fraction as 1/100. [ 2 ]
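These relations can be checked exactly with rational arithmetic; a small Python illustration:

```python
from fractions import Fraction

hundredth = Fraction(1, 100)                # a hundredth as a vulgar fraction, 1/100
assert hundredth == 1 / Fraction(100)       # the reciprocal of 100
assert float(hundredth) == 0.01             # its decimal form
assert hundredth * 675 == Fraction(27, 4)   # a hundredth of 675 ...
assert float(hundredth * 675) == 6.75       # ... is 6.75
```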
“Hundredth” is also the ordinal number that follows “ninety-ninth” and precedes “hundred and first.” It is written as 100th.
This article about a number is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Hundredth |
Hungarian Astronautical Society , [ 1 ] abbreviated as MANT (Magyar Asztronautikai Társaság), is a non-profit organization founded in 1986 that focuses on educational and informative activities in space science . The association considers itself a successor of the Astronautical Committee of the Scientific Lyceum association [ 2 ] (Hun. abbr.: TIT), founded in 1956, and of the Central Astronautical Section of the Federation of Technological and Sciences Associations (Hun. abbr.: MTESZ). Members of the society are space researchers, other professionals working in space-related fields, and others interested in the interdisciplinary and state-of-the-art uses of outer space.
The main aims of the society are:
Main regular events of the society:
The camp was founded by Aunt Magdi, the "Space Granny", in 1994. Her intent was to introduce space research and astronautics to young people. At that time the program of the camp consisted mostly of lectures presented by the best-known Hungarian scientists in the field. Later the camp gained a younger leadership, leading to a shift toward more active and creative programs. Since 2010 the camp has lasted one week, from Sunday to Saturday.
The participants are students between the ages of 13 and 18, all interested in sciences and space topics. The ratio of girls among the participants is approaching 50%. About a quarter of the students return the next year, and every fifth student becomes a regular camper. Some acknowledged space researchers started their "space career" in the Space Camp, including two of the secretaries general and several Members of Board.
1994 – Kecskemét, 1995 – Eger, 1996 – Veszprém, 1997 – Veszprém, 1998 – Győr, 1999 – Kecskemét, 2000 – Sopron, 2001 – Debrecen, 2002 – Székesfehérvár, 2003 – Budapest, 2004 – Kiskunhalas, 2005 – Gyulaháza, 2006 – Szentlélek, 2007 – Hollóstető, 2008 – Szentlélek, 2010 – Gyomaendrőd, 2011 – Sátoraljaújhely, 2012 – Kecskemét, 2013 – Alsómocsolád, 2014 – Felsőtárkány, 2015 – Sopron, 2016 – Debrecen, 2017 – Bakonybél, 2018 – Zalaegerszeg, 2019 – Sátoraljaújhely, 2020 – virtual (3-day online event), 2021 – virtual (3-day online event), 2022 – Székesfehérvár (planned)
MANT organizes a competition for primary and secondary school students every year. The topics, announced around October, are always different but focus on a current aspect of space research and exploration.
In the camp, space researchers and astronautical experts give lectures. Topics cover a wide range, from the basics of astronomy through the problem of space debris to the question of whether Pluto is a planet. Several practical exercises make the camp exciting, e.g. water rocketry, underwater assembly, creating stereo pictures, astronomical observation, excursions, bathing, etc. Each year there is a main topic, which matches the topic of the Student Competition.
Since 2015, private individuals and companies have been invited to co-finance the Hungarian Space Camp as Mentors. MANT welcomes financial support for the participation of students in need from Hungary and from the Hungarian diaspora in the surrounding countries. In recent years about one fifth of the participants were able to join the Camp thanks to a Mentor.
MANT has announced a Student Competition for primary and secondary school students every year since 1992. The topics, announced around October, are always different but focus on a current aspect of space research and exploration. The topic of the Student Competition becomes the main topic of the Space Camp of that year: for example, the title of the Competition was "Civilians in Space" in 2014, "Beyond Mars" in 2015 and "Cleaning in Space" in 2017. Categories: writing an essay; drawing; preparing a project plan; creating a video, website, Facebook page or blog. Small teams of two or three students are also welcome. Applications are evaluated in two age classes: ages 11–14 and 15–18. The competition is open to visually impaired students as well; their works are evaluated separately. Prizes include participation in the Space Camp for free or for a reduced fee; science books and magazines; a visit to a selected space research institute or company; and free one-year membership in the society.
Hungarian Astronautical Society in collaboration with the international Space Generation Advisory Council initiated a yearly event called Space Academy [ 7 ] in 2015 for university students and young professionals between ages 18 and 35. [ 8 ] It is a four-day workshop in August where the participants outline a solution for a given task or problem together.
Space Academy Club is a lecture series organized by the Hungarian Astronautical Society and the Hungarian organizers of the Space Generation Advisory Council. It usually takes place in February, April, September and November, during university semesters. It targets primarily university students and young professionals between the ages of 18 and 35. The series is connected to Space Academy in its name and its target group.
MANT has been publishing its Astronautical Brochure since 1961, presenting the recent activities of MANT and summarizing the most significant developments in space research worldwide. In it, Hungarian researchers working in space research and its interdisciplinary fields present their latest results.
Space Research Day is organized yearly, joining the international World Space Week in October. [ 9 ] It usually takes place at one of the universities of Budapest, or at the Hungarian Academy of Sciences. The program consists of lectures on current space activities and space-related results, plus an exchange of views. There are blocks for professionals, interested amateurs and students.
Hungarian Space Research Forum is a traditional biennial conference of researchers in the field, held for the 30th time in 2017. Its former title and current subtitle, Ionosphere and Magnetosphere Physical Seminar, reflects its original specialty, which has gained a broader scope in the past decades. [ 10 ] [ 11 ] Hungarian physicists, geophysicists, astronomers, meteorologists etc. take part in this seminar by presenting their latest research through lectures and posters.
Under the title School Day, MANT offers lectures for schools in various fields related to space activities.
Dr. Charles Simonyi , later the first repeat space tourist, launched on April 7, 2007 (GMT), on board Soyuz TMA-10 to the International Space Station and returned on April 21, 2007, from his first space flight. The Hungarian-born Simonyi is a licensed amateur radio operator (KE7KDP), and he contacted students of Puskás Tivadar Távközlési Technikum (Puskás Tivadar Telecommunications Polytechnic, Budapest) on April 13. [ 12 ] This radio contact was organized in collaboration with MANT.
The following structure has been in use since 2009 (as of April 2020):
The Hungarian Chemical Society ( Hungarian : Magyar Kémikusok Egyesülete , pronounced [ˈmɒɟɒr ˈkeːmikuʃok ˈɛɟːɛʃylɛtɛ] ) was founded in 1907. [ 1 ] [ 2 ] It is a voluntary society of more than 2,000 members [ 3 ] which aims to provide a forum for those interested in chemistry and promote chemistry in Hungary . [ 4 ]
The Hungarian Chemical Journal ( Hungarian : Magyar Kémikusok Lapja ) is the official journal of the society and is released monthly. [ 5 ] [ 6 ]
This Hungary -related article is a stub . You can help Wikipedia by expanding it .
This article about a chemistry organization is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Hungarian_Chemical_Society |
In politics , humanitarian aid , and the social sciences , hunger is defined as a condition in which a person does not have the physical or financial capability to eat sufficient food to meet basic nutritional needs for a sustained period. In the field of hunger relief, the term hunger is used in a sense that goes beyond the common desire for food that all humans experience, also known as an appetite . The most extreme form of hunger, when malnutrition is widespread, and when people have started dying of starvation through lack of access to sufficient, nutritious food, leads to a declaration of famine . [ 2 ]
Throughout history, portions of the world's population have often suffered sustained periods of hunger. In many cases, hunger resulted from food supply disruptions caused by war , plagues , or adverse weather . In the decades following World War II , technological progress and enhanced political cooperation suggested it might be possible to substantially reduce the number of people suffering from hunger. While progress was uneven, by 2015, the threat of extreme hunger had receded for a large portion of the world's population. According to the FAO's 2023 The State of Food Security and Nutrition in the World report, this positive trend had reversed from about 2017, when a gradual rise in number of people suffering from chronic hunger became discernible. In 2020 and 2021, due to the COVID-19 pandemic , there was an increase in the number of people suffering from undernourishment. A recovery occurred in 2022 along with the economic rebound, though the impact on global food markets caused by the invasion of Ukraine meant the reduction in world hunger was limited. [ 3 ]
While most of the world's people continue to live in Asia , much of the increase in hunger since 2017 occurred in Africa and South America . The FAO's 2017 report discussed three principal reasons for the recent increase in hunger: climate , conflict , and economic slowdowns . The 2018 edition focused on extreme weather as a primary driver of the increase in hunger, finding rising rates to be especially severe in countries where agricultural systems were most sensitive to extreme weather variations. The 2019 SOFI report found a strong correlation between increases in hunger and countries that had suffered an economic slowdown . The 2020 edition instead looked at the prospects of achieving the hunger-related Sustainable Development Goal (SDG). It warned that if nothing was done to counter the adverse trends of the past six years, the number of people suffering from chronic hunger could rise by over 150 million by 2030. The 2023 report noted a sharp jump in hunger caused by the COVID-19 pandemic, which leveled off in 2022. According to a 2025 United Nations report, global hunger has risen for six years in a row. [ 4 ]
Many thousands of organizations are engaged in the field of hunger relief, operating at local, national, regional, or international levels. Some of these organizations are dedicated to hunger relief, while others may work in several different fields. The organizations range from multilateral institutions to national governments, to small local initiatives such as independent soup kitchens . Many participate in umbrella networks that connect thousands of different hunger relief organizations. At the global level, much of the world's hunger relief efforts are coordinated by the UN and geared towards achieving SDG 2 of Zero Hunger by 2030.
There is one globally recognized approach for defining and measuring hunger generally used by those studying or working to relieve hunger as a social problem. This is the United Nation's FAO measurement, which is typically referred to as chronic undernourishment (or in older publications, as 'food deprivation,' 'chronic hunger,' or just plain 'hunger.') For the FAO:
Not all of the organizations in the hunger relief field use the FAO definition of hunger. Some use a broader definition that overlaps more fully with malnutrition. The alternative definitions do however tend to go beyond the commonly understood meaning of hunger as a painful or uncomfortable motivational condition; the desire for food is something that all humans frequently experience, even the most affluent, and is not in itself a social problem . [ 10 ] [ 8 ] [ 7 ] [ 6 ]
Very low food security can be described as "food insecure with hunger." A change in description was made in 2006 at the recommendation of the Committee on National Statistics ( National Research Council , 2006) in order to distinguish the physiological state of hunger from indicators of food availability. [ 11 ] Food insecurity is when the food intake of one or more household members was reduced and their eating patterns were disrupted at times during the year because the household lacked money and other resources for food. [ 11 ] Food security statistics are measured using survey data, based on household responses to items about whether the household was able to obtain enough food to meet its needs. [ 12 ]
The United Nations publishes an annual report on the state of food security and nutrition across the world. Led by the FAO , the report is jointly authored by four other UN agencies: the WFP , IFAD , WHO and UNICEF . The theme of the 2024 report is how efforts to meet SDG 2.1 and 2.2 can be financed. The FAO's yearly report provides a statistical overview of the prevalence of hunger around the world, and is widely considered the main global reference for tracking hunger. However, no simple set of statistics can ever fully capture the multidimensional nature of hunger. One reason is that the FAO's key metric for hunger, "undernourishment", is defined solely in terms of dietary energy availability, disregarding micro-nutrients such as vitamins or minerals. Second, the FAO uses the energy requirements for minimum activity levels as a benchmark; many people would not count as hungry by the FAO's measure yet still eat too little to undertake hard manual labour, which might be the only sort of work available to them. Third, the FAO statistics do not always reflect short-term undernourishment. [ 7 ] [ 13 ] [ 14 ] [ 15 ] [ 3 ] [ 16 ]
According to a 2025 United Nations report, global hunger has risen for six years in a row. As funding becomes scarce, a possible solution is "investment in sustainable agriculture, which is four times more cost-effective than direct food assistance but only accounts for three percent of humanitarian funds." [ 4 ]
An alternative measure of hunger across the world is the Global Hunger Index (GHI). Unlike the FAO's measure, the GHI defines hunger in a way that goes beyond raw calorie intake, to include for example ingestion of micronutrients. The GHI is a multidimensional statistical tool used to describe the state of countries' hunger situation, measuring progress and failures in the global fight against hunger. [ 17 ] The GHI is updated once a year. The data from the 2015 report showed that hunger levels had dropped 27% since 2000, while fifty-two countries remained at serious or alarming levels. [ 18 ] The 2019 GHI report expresses concern about the increase in hunger since 2015. In addition to the latest statistics on hunger and food security, the GHI also features a different special topic each year. The 2019 report includes an essay on hunger and climate change, with evidence suggesting that the areas most vulnerable to climate change have suffered much of the recent increase in hunger. [ 19 ] [ 20 ]
Throughout history, the need to aid those suffering from hunger has been commonly, though not universally, [ 21 ] recognized. The philosopher Simone Weil wrote that feeding the hungry when you have the resources to do so is the most obvious of all human obligations . She says that as far back as Ancient Egypt , many believed that people had to show they had helped the hungry in order to justify themselves in the afterlife. Weil writes that social progress is commonly held to be, first of all, "...a transition to a state of human society in which people will not suffer from hunger." [ 22 ] Social historian Karl Polanyi wrote that before markets became the world's dominant form of economic organization in the 19th century, most human societies would either starve all together or not at all, because communities would invariably share their food. [ 23 ]
While some of the principles for avoiding famines had been laid out in the first book of the Bible , [ 24 ] they were not always understood. Historical hunger relief efforts were often largely left to religious organizations and individual kindness. Even up to early modern times, political leaders often reacted to famine with bewilderment and confusion. From the first age of globalization, which began in the 19th century, it became more common for the elite to consider problems like hunger in global terms. However, as early globalization largely coincided with the high peak of influence for classical liberalism , there was relatively little call for politicians to address world hunger. [ 25 ] [ 26 ]
In the late nineteenth and early twentieth century, the view that politicians ought not to intervene against hunger was increasingly challenged by campaigning journalists. There were also more frequent calls for large scale intervention against world hunger from academics and politicians, such as U.S. President Woodrow Wilson . Funded both by the government and private donations, the U.S. was able to dispatch millions of tons of food aid to European countries during and in the years immediately after WWI, organized by agencies such as the American Relief Administration . Hunger as an academic and social topic came to further prominence in the U.S. thanks to mass media coverage of the issue as a domestic problem during the Great Depression . [ 27 ] [ 28 ] [ 29 ] [ 30 ] [ 1 ] [ 31 ]
While there had been increasing attention to hunger relief from the late 19th century, Dr David Grigg has summarised that prior to the end of World War II , world hunger still received relatively little academic or political attention; whereas after 1945 there was an explosion of interest in the topic. [ 29 ]
After World War II , a new international politico-economic order came into being, which was later described as Embedded liberalism . For at least the first decade after the war, the United States, then by far the period's most dominant national actor, was strongly supportive of efforts to tackle world hunger and to promote international development. It heavily funded the United Nations' development programmes, and later the efforts of other multilateral organizations like the International Monetary Fund (IMF) and the World Bank (WB). [ 29 ] [ 1 ] [ 32 ]
The newly established United Nations became a leading player in co-ordinating the global fight against hunger. The UN has three agencies that work to promote food security and agricultural development: the Food and Agriculture Organization (FAO), the World Food Programme (WFP) and the International Fund for Agricultural Development (IFAD). FAO is the world's agricultural knowledge agency, providing policy and technical assistance to developing countries to promote food security, nutrition and sustainable agricultural production, particularly in rural areas. WFP 's key mission is to deliver food into the hands of the hungry poor. The agency steps in during emergencies and uses food to aid recovery after emergencies. Its longer term approaches to hunger helps the transition from recovery to development. IFAD , with its knowledge of rural poverty and exclusive focus on poor rural people, designs and implements programmes to help those people access the assets, services and opportunities they need to overcome poverty. [ 29 ] [ 1 ] [ 32 ]
Following successful post WWII reconstruction of Germany and Japan, the IMF and WB began to turn their attention to the developing world. A great many civil society actors were also active in trying to combat hunger, especially after the late 1970s when global media began to bring the plight of starving people in places like Ethiopia to wider attention. Most significant of all, especially in the late 1960s and 70s, the Green revolution helped improved agricultural technology propagate throughout the world. [ 29 ] [ 1 ] [ 32 ]
The United States began to change its approach to the problem of world hunger from about the mid 1950s. Influential members of the administration became less enthusiastic about methods they saw as promoting an over-reliance on the state, as they feared these might assist the spread of communism . By the 1980s, the previous consensus in favour of moderate government intervention had been displaced across the western world. The IMF and World Bank in particular began to promote market-based solutions. In cases where countries became dependent on the IMF , the Fund sometimes forced national governments to prioritize debt repayments and sharply cut public services. This sometimes had a negative effect on efforts to combat hunger. [ 33 ] [ 34 ] [ 35 ]
Organizations such as Food First raised the issue of food sovereignty and claimed that every country on earth (with the possible minor exceptions of some city-states) has sufficient agricultural capacity to feed its own people, but that the " free trade " economic order, which from the late 1970s to about 2008 had been associated with such institutions as the IMF and World Bank , had prevented this from happening. The World Bank itself claimed it was part of the solution to hunger, asserting that the best way for countries to break the cycle of poverty and hunger was to build export-led economies that provide the financial means to buy foodstuffs on the world market. However, in the early 21st century the World Bank and IMF became less dogmatic about promoting free market reforms. They increasingly returned to the view that government intervention does have a role to play, and that it can be advisable for governments to support food security with policies favourable to domestic agriculture, even for countries that do not have a comparative advantage in that area. As of 2012, the World Bank remains active in helping governments to intervene against hunger. [ 36 ] [ 29 ] [ 1 ] [ 32 ] [ 37 ]
Until at least the 1980s—and, to an extent, the 1990s—the dominant academic view concerning world hunger was that it was a problem of demand exceeding supply. Proposed solutions often focused on boosting food production, and sometimes on birth control. There were exceptions to this, even as early as the 1940s, Lord Boyd-Orr , the first head of the UN 's FAO , had perceived hunger as largely a problem of distribution, and drew up comprehensive plans to correct this. Few agreed with him at the time, however, and he resigned after failing to secure support for his plans from the US and Great Britain . In 1998, Amartya Sen won a Nobel Prize in part for demonstrating that hunger in modern times is not typically the product of a lack of food. Rather, hunger usually arises from food distribution problems, or from governmental policies in the developed and developing world. It has since been broadly accepted that world hunger results from issues with the distribution as well as the production of food. [ 33 ] [ 34 ] [ 35 ] Sen's 1981 essay Poverty and Famines: An Essay on Entitlement and Deprivation played a prominent part in forging the new consensus. [ 1 ] [ 38 ]
In 2007 and 2008, rapidly increasing food prices caused a global food crisis . Food riots erupted in several dozen countries; in at least two cases, Haiti and Madagascar , this led to the toppling of governments. A second global food crisis unfolded due to the spike in food prices of late 2010 and early 2011. Fewer food riots occurred, due in part to greater availability of food stock piles for relief. However, several analysts argue the food crisis was one of the causes of the Arab Spring . [ 32 ] [ 39 ] [ 40 ]
In the early 21st century, the attention paid to the problem of hunger by the leaders of advanced nations such as those that form the G8 had somewhat subsided. [ 39 ] Prior to 2009, large scale efforts to fight hunger were mainly undertaken by governments of the worst affected countries, by civil society actors, and by multilateral and regional organizations. In 2009, Pope Benedict published his third encyclical, Caritas in Veritate , which emphasised the importance of fighting against hunger. The encyclical was intentionally published immediately before the July 2009 G8 Summit to maximise its influence on that event. At the Summit, which took place at L'Aquila in central Italy, the L'Aquila Food Security Initiative was launched, with a total of US$22 billion committed to combat hunger. [ 41 ] [ 42 ]
Food prices fell sharply in 2009 and early 2010, though analysts credit this much more to farmers increasing production in response to the 2008 spike in prices than to the fruits of enhanced government action. However, since the 2009 G8 summit, the fight against hunger became a high-profile issue among the leaders of the world's major nations and was a prominent part of the agenda for the 2012 G-20 summit . [ 39 ] [ 43 ] [ 44 ]
In April 2012, the Food Assistance Convention was signed, the world's first legally binding international agreement on food aid. The May 2012 Copenhagen Consensus recommended that efforts to combat hunger and malnutrition should be the first priority for politicians and private sector philanthropists looking to maximize the effectiveness of aid spending. They put this ahead of other priorities, like the fight against malaria and AIDS . [ 45 ] Also in May 2012, U.S. President Barack Obama launched a "new alliance for food security and nutrition"—a broad partnership between private sector, governmental and civil society actors—that aimed to "...achieve sustained and inclusive agricultural growth and raise 50 million people out of poverty over the next 10 years." [ 33 ] [ 43 ] [ 46 ] [ 47 ] The UK's prime minister David Cameron held a hunger summit on 12 August, the last day of the 2012 Summer Olympics . [ 43 ]
The fight against hunger has also been joined by an increased number of ordinary people. While people throughout the world had long contributed to efforts to alleviate hunger in the developing world, there has recently been a rapid increase in the numbers involved in tackling domestic hunger even within the economically advanced nations of the Global North . This happened much earlier in North America than it did in Europe. In the US, the Reagan administration scaled back welfare in the early 1980s, leading to a vast increase in charity sector efforts to help Americans unable to buy enough to eat. According to a 1992 survey of 1000 randomly selected US voters, 77% of Americans had contributed to efforts to feed the hungry, either by volunteering for various hunger relief agencies such as food banks and soup kitchens , or by donating cash or food. [ 48 ] Europe, with its more generous welfare systems, had little awareness of domestic hunger until the food price inflation that began in late 2006, and especially as austerity-imposed welfare cuts began to take effect in 2010. Various surveys reported that upwards of 10% of Europe's population had begun to suffer from food insecurity . Especially since 2011, there has been a substantial increase in grass-roots efforts to help the hungry by means of food banks , both in the UK and in continental Europe. [ 49 ] [ 50 ] [ 51 ] [ 52 ] [ 53 ]
By July 2012, the 2012 US drought had already caused a rapid increase in the price of grain and soy, with a knock-on effect on the price of meat. As well as affecting hungry people in the US, this caused prices to rise on the global markets; the US is the world's biggest exporter of food. This led to much talk of a possible third 21st-century global food crisis. The Financial Times reported that the BRICS might not be as badly affected as they were in the earlier crises of 2008 and 2011. However, smaller developing countries that must import a substantial portion of their food could be hard hit. The UN and G20 have begun contingency planning so as to be ready to intervene if a third global crisis breaks out. [ 36 ] [ 40 ] [ 54 ] [ 55 ] By August 2013, however, concerns had been allayed, with above-average grain harvests expected from major exporters, including Japan, Brazil, Ukraine and the US. [ 56 ] 2014 also saw a good worldwide harvest, leading to speculation that grain prices could soon begin to fall. [ 57 ]
In an April 2013 summit held in Dublin concerning Hunger, Nutrition, Climate Justice , and the post 2015 MDG framework for global justice, Ireland's President Higgins said that only 10% of deaths from hunger are due to armed conflict and natural disasters, with ongoing hunger being both the "greatest ethical failure of the current global system" and the "greatest ethical challenge facing the global community." [ 58 ] $4.15 billion of new commitments were made to tackle hunger at a June 2013 Hunger Summit held in London, hosted by the governments of Britain and Brazil, together with The Children's Investment Fund Foundation . [ 59 ] [ 60 ]
Despite the hardship caused by the 2008 financial crisis and the global increases in food prices that occurred around the same time, the UN's global statistics show it was followed by close to year-on-year reductions in the numbers suffering from hunger around the world. By 2019, however, evidence had mounted that this progress had gone into reverse over the previous four years. The numbers suffering from hunger had risen both in absolute terms and, very slightly, as a percentage of the world's population. [ 61 ] [ 62 ] [ 13 ]
In 2019, the FAO published its annual edition of The State of Food and Agriculture , which asserted that food loss and waste have potential effects on food security and nutrition through changes in the four dimensions of food security: food availability, access, utilization and stability. However, the links between food loss and waste reduction and food security are complex, and positive outcomes are not always certain. Reaching acceptable levels of food security and nutrition inevitably implies certain levels of food loss and waste. Maintaining buffers to ensure food stability requires a certain amount of food to be lost or wasted. At the same time, ensuring food safety involves discarding unsafe food, which is then counted as lost or wasted, while higher-quality diets tend to include more highly perishable foods. How the impacts on the different dimensions of food security play out and affect the food security of different population groups depends on where in the food supply chain the reduction in losses or waste takes place, as well as on where nutritionally vulnerable and food-insecure people are located geographically. [ 63 ]
In April and May 2020, concerns were expressed that the COVID-19 pandemic could result in a doubling of global hunger unless world leaders acted to prevent this. Agencies such as the WFP warned that this could include the number of people facing acute hunger rising from 135 million to about 265 million by the end of 2020. Indications of extreme hunger were seen in various cities, such as fatal stampedes when word spread that emergency food aid was being handed out. Letters calling for co-ordinated action to offset the effects of the COVID-19 pandemic were written to the G20 and G7 by various actors including NGOs, UN staff, corporations, academics and former national leaders. [ 64 ] [ 65 ] [ 66 ] [ 9 ] The FAO found that 122 million more people experienced hunger in 2022 compared to 2019. [ 67 ] Following the 2022 invasion of Ukraine , concerns have been raised over hunger resulting from rising food prices. This is forecast to risk civil unrest even in many middle-income countries, where government capacity to protect their populations was largely exhausted by the COVID-19 pandemic and has not yet recovered. [ 68 ]
Between 713 and 757 million people may have faced hunger in 2023 – one out of 11 people in the world, and one out of every five in Africa . The prevalence of moderate or severe food insecurity remained unchanged at the global level from 2020 to 2023: hunger is still on the rise in Africa, has remained relatively unchanged in Asia , while progress has been made in the Latin American and Caribbean region. Africa is the region with the largest percentage of the population facing hunger – 20.4%, compared with 8.1% in Asia, 6.2% in Latin America and the Caribbean, and 7.3% in Oceania. However, Asia is still home to the largest number: 384.5 million, or more than half of all those facing hunger in the world. In Africa, 298.4 million people may have faced hunger in 2023, compared with 41.0 million in Latin America and the Caribbean, and 3.3 million in Oceania. [ 69 ]
Many thousands of hunger relief organisations exist across the world. Some but not all are entirely dedicated to fighting hunger. They range from independent soup kitchens that serve only one locality, to global organisations. Organisations working at the global and regional level will often focus much of their efforts on helping hungry communities to better feed themselves, for example by sharing agricultural technology. With some exceptions, organisations that work only at the local level tend to focus more on providing food directly to hungry people. Many of these entities are connected by a web of national, regional and global alliances that help them share resources and knowledge, and coordinate efforts. [ 70 ]
The United Nations is central to global efforts to relieve hunger, especially through the FAO , and also via other agencies such as the WFP , IFAD , WHO and UNICEF . After the Millennium Development Goals expired in 2015, the Sustainable Development Goals (SDGs) became the key objectives shaping the world's response to development challenges such as hunger. In particular, Goal 2 : Zero Hunger sets globally agreed targets to end hunger, achieve food security and improved nutrition, and promote sustainable agriculture. [ 71 ] [ 8 ] [ 9 ]
Aside from the UN agencies themselves, hundreds of other actors address the problem of hunger on the global level, often through participation in large umbrella organisations. These include national governments, religious groups, international charities and in some cases international corporations. Except perhaps in the case of dedicated charities, the priority these organisations assign to hunger relief may vary from year to year. In many cases the organisations partner with the UN agencies, though often they pursue independent goals. For example, as consensus began to form for the SDG zero hunger goal to aim to end hunger by 2030, a number of organizations formed initiatives with the more ambitious target of achieving this outcome early, by 2025.
The objective of SDG 2 is to "end hunger, achieve food security and improved nutrition and promote sustainable agriculture " by 2030. SDG 2 recognizes that dealing with hunger is based not only on increasing food production but also on proper markets, access to land and technology, and increased and efficient incomes for farmers. [ 77 ]
A 2013 report by the International Food Policy Research Institute (IFPRI) argued that the emphasis of the SDGs should be on eliminating hunger and under-nutrition, rather than on poverty, and that attempts should be made to do so by 2025 rather than 2030. [ 75 ] The argument is based on an analysis of experiences in Russia, China, Vietnam, Brazil, and Thailand, and on the fact that people suffering from severe hunger face extra impediments to improving their lives, whether through education or work. Three pathways to achieve this were identified: 1) agriculture-led; 2) social protection- and nutrition-intervention-led; or 3) a combination of both approaches. [ 75 ]
Many of the world's regional alliances are located in Africa, for example the Alliance for Food Sovereignty in Africa and the Alliance for a Green Revolution in Africa . [ 78 ] [ 70 ]
The Food and Agriculture Organization of the UN has created a partnership that will act through the African Union 's CAADP framework, aiming to end hunger in Africa by 2025. It includes interventions such as support for improved food production, a strengthening of social protection, and integration of the right to food into national legislation. [ 79 ]
Examples of hunger relief organisations that operate on the national level include The Trussell Trust in the United Kingdom, the Nalabothu Foundation in India, and Feeding America in the United States. [ 80 ]
A food bank (or foodbank) is a non-profit, charitable organization that aids in the distribution of food to those who have difficulty purchasing enough to avoid hunger. Food banks tend to run on different operating models depending on where they are located. In the U.S., Australia, and to some extent in Canada, food banks tend to perform a warehouse-type function, storing and delivering food to front-line food organisations, but not giving it directly to hungry people themselves. In much of Europe and elsewhere, food banks operate on the front-line model, handing out parcels of uncooked food directly to the hungry, typically giving them enough for several meals which they can eat in their homes. In the U.S. and Australia, establishments that hand out uncooked food to individuals are instead called food pantries , food shelves or food closets. [ 81 ]
In Less Developed Countries , there are charity-run food banks that operate on a semi-commercial system that differs from both the more common "warehouse" and "frontline" models. In some rural LDCs such as Malawi, food is often relatively cheap and plentiful for the first few months after the harvest, but then becomes more and more expensive. Food banks in those areas can buy large amounts of food shortly after the harvest, and then, as food prices start to rise, sell it back to local people throughout the year at well below market prices. Such food banks will sometimes also act as centers to provide smallholders and subsistence farmers with various forms of support. [ 82 ]
A soup kitchen , meal center, or food kitchen is a place where food is offered to the hungry for free or at a below market price . Frequently located in lower-income neighborhoods, they are often staffed by volunteer organizations, such as church or community groups. Soup kitchens sometimes obtain food from a food bank for free or at a low price, because they are considered a charity , which makes it easier for them to feed the many people who require their services.
Local establishments calling themselves "food banks" or "soup kitchens" are often run by Christian churches or, less frequently, by secular civil society groups. Other religions carry out similar hunger relief efforts, though sometimes with slightly different methods. For example, in the Sikh tradition of Langar , food is served to the hungry directly from Sikh temples. There are exceptions: in the UK, for example, Sikhs run some food banks as well as giving out food directly from their Gurdwaras . [ 83 ] [ 84 ]
World Bank studies consistently find that about 60% of those who are hungry are female. Globally, women typically face greater economic barriers compared to men and have access to fewer resources, creating greater obstacles to food security. In both developing and advanced countries, parents sometimes go without food so they can feed their children. Women, however, seem more likely to make this sacrifice than men. Older sources sometimes claim this phenomenon is unique to developing countries, due to greater sexual inequality. More recent findings suggested that mothers often miss meals in advanced economies too. For example, a 2012 study undertaken by Netmums in the UK found that one in five mothers sometimes misses out on food to save their children from hunger. [ 36 ] [ 85 ] [ 86 ]
Single-parent households are especially vulnerable to food insecurity and highlight a gender disparity in food security. In the U.S., households with children headed by single mothers are more likely to be food insecure than households headed by single fathers. [ 87 ] Differences in time allocation between paid and unpaid work may also explain the increased food disparity in women-led households, as women tend to dedicate comparatively more time to unpaid work. [ 88 ]
In several periods and regions, gender has also been an important factor determining whether or not victims of hunger would make suitable examples for generating enthusiasm for hunger relief efforts. James Vernon, in his Hunger: A Modern History , wrote that in Britain before the twentieth century, it was generally only women and children suffering from hunger who could arouse compassion. Men who failed to provide for themselves and their families were often regarded with contempt. [ 28 ]
This changed after World War I , when thousands of men who had proved their manliness in combat found themselves unable to secure employment. Similarly, female gender could be advantageous for those wishing to advocate for hunger relief, with Vernon writing that being a woman helped Emily Hobhouse draw the plight of hungry people to wider attention during the Second Boer War . [ 28 ]
The elderly have an increased risk of going hungry as well as increased negative effects of hunger. In the US the number of seniors experiencing hunger rose 88% between 2001 and 2011. [ 89 ]
This age group suffers the most from chronic conditions, including heart disease, diabetes, and respiratory diseases. Eighty percent of this group has a minimum of one chronic condition, and almost 70% have two or more. [ 90 ] These illnesses are exacerbated by hunger and are more likely to develop in its presence. A report from 2017 shows that seniors facing this issue are 60% more likely to experience depression than seniors who are not hungry, and 40% more likely to develop congestive heart failure. The added stress of inconsistent and inadequate feeding makes these conditions much more dangerous. [ 91 ]
Fixed incomes often limit the elderly's ability to freely purchase food necessities. [ citation needed ] Medical costs and housing may take priority over quality foods. Limited mobility makes it difficult for these individuals to physically leave their homes, especially in areas lacking public transportation or transportation catering to disabled people. [ citation needed ] The COVID-19 pandemic made things more difficult: older people statistically suffer worse outcomes, and so could be reluctant to venture out for food. [ citation needed ]
The Supplemental Nutrition Assistance Program (SNAP) provides aid to low-income seniors in relation to food security. This is an opportunity for seniors who receive benefits to allocate money in their budgets for other needs, such as medical or housing bills. However, participation is extremely low: fewer than half of eligible seniors are enrolled and receive benefits, with three out of five seniors qualified but not enrolled. [ 92 ]
This article incorporates text from a free content work. Licensed under CC BY-SA 3.0 ( license statement/permission ). Text taken from The State of Food and Agriculture 2019. Moving forward on food loss and waste reduction, In brief , 24, FAO, FAO.
This article incorporates text from a free content work. Licensed under CC BY 4.0. Text taken from The State of Food Security and Nutrition in the World 2024 , FAO, IFAD, UNICEF, WFP and WHO, FAO. | https://en.wikipedia.org/wiki/Hunger |
The Hunsdiecker reaction (also called the Borodin reaction or the Hunsdiecker–Borodin reaction ) is a name reaction in organic chemistry whereby silver salts of carboxylic acids react with a halogen to produce an organic halide . [ 1 ] It is an example of both a decarboxylation and a halogenation reaction, as the product has one fewer carbon atom than the starting material (lost as carbon dioxide ) and a halogen atom is introduced in its place. [ 2 ] [ 3 ] A catalytic approach has been developed. [ 4 ]
The reaction is named after Cläre Hunsdiecker and her husband Heinz Hunsdiecker , whose work in the 1930s [ 5 ] [ 6 ] developed it into a general method. [ 1 ]
The reaction was first demonstrated by Alexander Borodin in 1861 in his reports of the preparation of methyl bromide ( CH 3 Br ) from silver acetate ( CH 3 CO 2 Ag ). [ 7 ] [ 8 ]
Three decades later, Angelo Simonini, working as a student of Adolf Lieben at the University of Vienna , investigated the reactions of silver carboxylates with iodine . [ 2 ] He found that the products formed are determined by the stoichiometry within the reaction mixture. Using a carboxylate-to-iodine ratio of 1:1 leads to an alkyl iodide product, in line with Borodin's findings and the modern understanding of the Hunsdiecker reaction. However, a 2:1 ratio favours the formation of an ester product that arises from decarboxylation of one carboxylate and coupling the resulting alkyl chain with the other. [ 9 ] [ 10 ]
Using a 3:2 ratio of reactants leads to the formation of a 1:1 mixture of both products. [ 9 ] [ 10 ] These processes are sometimes known as the Simonini reaction rather than as modifications of the Hunsdiecker reaction. [ 2 ] [ 3 ]
In terms of reaction mechanism , the Hunsdiecker reaction is believed to involve organic radical intermediates. The silver salt 1 reacts with bromine to form the acyl hypohalite intermediate 2 . Formation of the diradical pair 3 allows for radical decarboxylation to form the diradical pair 4 , which recombines to form the organic halide 5 . The trend in the yield of the resulting halide is primary > secondary > tertiary. [ 2 ] [ 3 ]
The reaction cannot be performed in protic solvents , as these induce decomposition of the intermediate acetyl hypohalite . [ citation needed ]
Counterions other than silver typically give slow reaction rates. The relativistic metals mercury , thallium , and lead are preferred; inert counterions, such as the alkali metals , have only rarely led to reported success. [ 11 ] : 464 The Kochi reaction is a variation on the Hunsdiecker reaction developed by Jay Kochi that uses lead(IV) acetate and lithium chloride ( lithium bromide can also be used) to effect the halogenation and decarboxylation. [ 12 ]
In the presence of multiple bonds , the intermediate acetyl hypohalite prefers to add to the bond, producing an α-haloester. Steric considerations suppress this tendency in α,β-unsaturated carboxylic acids, which instead polymerize (see below). [ 11 ] : 468
Mercuric oxide and bromine convert 3-chlorocyclobutanecarboxylic acid to 1-bromo-3-chlorocyclobutane. This is known as the Cristol–Firth modification. [ 13 ] [ 14 ] [ 15 ] The 1,3-dihalocyclobutanes were key precursors to propellanes . [ 16 ] The reaction has been applied to the preparation of ω-bromo esters with chain lengths between five and seventeen carbon atoms, with the preparation of methyl 5-bromovalerate published in Organic Syntheses as an exemplar. [ 17 ]
For unsaturated compounds, the radical conditions associated with the Hunsdiecker reaction can also induce polymerization instead of decarboxylation. [ 11 ] : 468 Consequently, reactions with α,β-unsaturated carboxylic acids typically give low yields. [ 18 ] Kuang et al. found that an alternative radical halogenating agent, N-halosuccinimide, combined with a lithium acetate catalyst, gives a higher yield of β-halostyrenes. The reaction also improves under microwave irradiation , which preferentially synthesizes ( E )-β-arylvinyl halides. [ 19 ]
For a green, metal-free reaction, tetrabutylammonium trifluoroacetate serves as an alternative catalyst. [ 20 ] However, it only exhibits yields comparable to the original lithium acetate when performed with micellar surfactants . [ 19 ] [ 21 ] [ 22 ]
In paleoanthropology , the hunting hypothesis is the hypothesis that human evolution was primarily influenced by the activity of hunting for relatively large and fast animals, and that the activity of hunting distinguished human ancestors from other hominins .
While it is undisputed that early humans were hunters, the importance of this fact for the final steps in the emergence of the genus Homo out of earlier australopithecines , with its bipedalism and production of stone tools (from about 2.5 million years ago), and eventually also control of fire (from about 1.5 million years ago), is emphasized in the "hunting hypothesis", and de-emphasized in scenarios that stress the omnivore status of humans as their recipe for success, and social interaction , including mating behaviour as essential in the emergence of language and culture.
Advocates of the hunting hypothesis tend to believe that tool use and toolmaking essential to effective hunting were an extremely important part of human evolution, and trace the origin of language and religion to a hunting context.
As societal evidence, David Buss cites that modern tribal populations deploy hunting as their primary way of acquiring food. [ 1 ] The Aka pygmies in the Central African Republic spend 56% of their quest for nourishment hunting, 27% gathering, and 17% processing food. Additionally, the !Kung in Botswana obtain 40% of their calories from hunting, a percentage that varies from 20% to 90% depending on the season. [ 2 ] For physical evidence, Buss first looks to the guts of humans and apes. The human gut consists mainly of the small intestines , which are responsible for the rapid breakdown of proteins and absorption of nutrients. The ape's gut is primarily colon , which indicates a vegetarian diet. This structural difference supports the hunting hypothesis as an evolutionary branching point between modern humans and modern primates. Buss also cites fossilized human teeth, which have a thin enamel coating with very little of the heavy wear and tear that would result from a plant diet; the absence of thick enamel also indicates that humans have historically maintained a meat-heavy diet. [ 2 ] Buss notes that the bones of animals killed by human ancestors, found at Olduvai Gorge , have cut marks at strategic points that indicate tool usage and provide evidence for ancestral butchers. [ 2 ]
Women are theorized to have participated in hunting, either on their own or as part of a collective group effort. [ 3 ] It is suggested that in the past, women targeted low-risk but guaranteed food, whereas men targeted higher-risk, higher-reward food. [ 4 ] The Gathering Hypothesis holds that men drove the evolution of the modern human through hunting while women contributed via gathering. [ 5 ] Though criticized by many, it provides clues that both hunting and gathering were patterns of acquiring food and resources.
According to the hunting hypothesis, women are preoccupied with pregnancy and dependent children and so do not hunt because it is dangerous and less profitable. In addition, subsistence labor is divided along gender lines, and some observations suggest these patterns originate from genetic traits. [ 3 ] Another possible explanation for women gathering is their prioritization of rearing offspring, which is difficult to uphold while hunting. [ 6 ] Hunting is seen as more cost effective for men than for women. [ 5 ] The division of labor allows both types of resources (animals and plants) to be utilized. [ 5 ] Individual or small group hunting requires patience and skill more than strength, so women are just as capable as men. Plant collecting can be a physically demanding task, so strength, endurance, or patience does not explain why women do not regularly hunt large game. [ 4 ] Women do hunt while menstruating, and if a child is still being breastfed, the mother may take the child along in a shoulder sling while hunting or gathering. [ 4 ] Women hunt when it is compatible with childcare, which usually means communal net hunts and/or hunting small game; if childcare prevents a woman from hunting when young, the expertise to be an effective hunter later on may not be acquired. [ 4 ]
Though the hunting hypothesis is still being debated today, many experts have theorized that women's involvement in hunting, long assumed to be a primarily male activity, was much larger than previously thought. [ 4 ] [ 7 ] [ 8 ] [ 3 ] Women in foraging societies do hunt small game regularly and, occasionally, large game. [ 4 ] The majority of humans' evolutionary history consisted of hunting and gathering, and as such women evolved traits useful for hunting such as endurance, movement coordination, and athleticism. [ 7 ] Hunting big game requires a collaborative effort, so participation from all able-bodied members, including females, was encouraged. [ 3 ] In addition, the atlatl or spear-thrower required more energy to be utilized, so contributions from everyone, including females, would have helped mitigate the energy exerted in using it. [ 3 ] For example, the Martu women in western Australia frequently hunt goannas and skink . [ 4 ] Women also participate in communal game drives and can have extensive land knowledge, which they use to assist their husbands in hunting. [ 4 ] Robert Kelly cites the example of 6 Agta women who hunt and returned home with a kill 31 percent of the time, whereas men averaged 17 percent. [ 4 ] The women's expertise with hunting was further shown by mixed groups of male and female hunters being the most successful, coming home with kills 41 percent of the time. [ 4 ] Agta females who have reached the end of their childbearing years, those with children old enough to look after themselves in camp, or those who are sterile are the ones who intentionally hunt. [ 4 ] It is noted that women target reliable but low-return-rate foods, whereas men target less reliable but high-return-rate foods. [ 4 ] This could explain why women weren't commonly documented as hunters.
Buss purports that the hunting hypothesis explains the high level of human male parental investment in offspring as compared to primates. Meat is an economical and condensed food resource in that it can be brought home to feed the young, as it is not efficient to carry low-calorie food across great distances. Thus, the act of hunting and the required transportation of the kill in order to feed offspring is a reasonable explanation for human male provisioning. [ 2 ]
Buss suggests that the hunting hypothesis also explains the advent of strong male coalitions. Although chimpanzees form male-male coalitions, they tend to be temporary and opportunistic. By contrast, large game hunting requires consistent and coordinated cooperation to succeed. Thus male coalitions were the result of working together to provide meat for the hunters themselves and their families. [ 2 ] Kristen Hawkes suggests further that obtaining resources intended for community consumption increases a male's fitness by appealing to the male's society and thus earning the good favor of both males and females. The male relationships would improve hunting success and create alliances for future conflict, and the female relationships would improve direct reproductive success. [ 2 ] Buss proposes alternative explanations for the emergence of strong male coalitions. He suggests that male coalitions may have been the result of group-on-group aggression, defense, and in-group political alliances. This explanation does not support the relationship between male coalitions and hunting. [ 2 ]
Hawkes proposes that hunters pursue large game and divide the kill across the group. Hunters compete to divvy up the kill to signal courage, power, generosity, prosocial intent, and dedication. By engaging in these activities, hunters receive reproductive benefits and respect. [ 9 ] These reproductive benefits lead to greater reproductive success in more skilled hunters. [ 9 ] Evidence of these hunting goals that do not only benefit the families of the hunters are in the Ache and Hadza men. Hawkes notes that their hunting techniques are less efficient than alternative methods and are energetically costly, but the men place more importance on displaying their bravery, power, and prosocial intent than on hunting efficiency. This method is different as compared to other societies where hunters retain the control of their kills and signal their intent of sharing. This alternate method aligns with the coalition support hypothesis, in efforts to create and preserve political associations. [ 9 ]
The meat from a successful large game hunt is more than a single hunter can consume. Further, hunting success varies by week: one week a hunter may succeed in hunting large game, and the next he may return with no meat. In this situation Buss suggests that there are low costs to giving away meat that cannot be eaten by the individual hunter on his own, and large benefits from the expectation of the favor being returned in a week when his hunting is not successful. [ 2 ] Hawkes calls this sharing “tolerated theft” and purports that the benefits of reciprocal altruism stem from the result that families will experience “lower daily variation and higher daily average” in their resources. [ 10 ]
Provisioning may actually be a form of sexual competition between males for females. [ 11 ] Hawkes suggests that male provisioning is a particularly human behavior, which forges the nuclear family. [ 10 ] The structure of familial provisioning determines a form of resource distribution. However, Hawkes does acknowledge inconsistencies across societies and contexts such as the fluctuating time courses dedicated to hunting and gathering, which are not directly correlated with return rates, the fact that nutrition value is often chosen over caloric count, and the fact that meat is a more widely spread resource than other resources. [ 10 ]
The show-off hypothesis is the concept that more successful hunters have better mate options. The idea relates back to the fact that meat, the result of hunting expeditions, is a distinct resource in that it comes in quantities larger than the hunter's own family can usually consume before the meat spoils. [ 2 ] Also, hunting success is unpredictable, whereas berries and fruits, unless there is a drought or a bad bush, are fairly consistent in seasonality. Kristen Hawkes argues that women favor as neighbors men who provide the advantageous yet infrequent meat feasts. [ 10 ] These women may profit from alliance and the resulting feasts, especially in times of shortage. Hawkes suggests that it would be beneficial for women to reward men who employ the “show-off strategy” by supporting them in a dispute, caring for their offspring, or providing sexual favors. [ 10 ] The benefit women may gain from this alignment lies in neighbors' favored treatment of offspring fathered by the show-off. [ 10 ] Buss echoes and cites Hawkes's thoughts on the show-off's benefits: sexual access, increased likelihood of having children, and the favorable treatment his children would receive from other members of the society. [ 2 ] Hawkes also suggests that show-offs are more likely to live in large groups and thus be less susceptible to predators. [ 10 ] Show-offs gain more than they would from sharing only with their family (classical fitness), in the form of potential favorable treatment from the community and reciprocal altruism from its other members. [ 10 ]
Hawkes uses the Ache people of Paraguay as evidence for the Show-off hypothesis. Food acquired by men was more widely distributed across the community and inconsistent resources that came in large quantities when acquired were also more widely shared. [ 10 ]
While this is represented in the Ache according to Hawkes, Buss notes that this trend is contradicted in the Hadza, who evenly distribute the meat across all members of their population and whose hunters have very little control over the distribution. In the Hadza, the show-off hypothesis has to do not with the resources that result from hunting, but with the prestige and risk involved in big game hunting. There are possible circuitous benefits such as protection and defense. [ 2 ]
The Gathering Hypothesis is the view that men provided critical evolutionary propulsion of the modern human through hunting, whereas women contributed via gathering. [ 5 ] In addition, it helps account for the fact that our ancestors' diets consisted mostly of plant food. [ 5 ] David Buss suggests that stone tools were invented not strictly for hunting but for gathering plants, and were used for digging them up. [ 5 ] This could explain the migration from forests to woodlands, as tools allowed easy access to previously used methods. On this view, the hunting part of modern human evolution came much later. [ 5 ] Though women weren't strictly hunters, a woman's time investment in foraging depended on how much food her husband brought back. [ 5 ] Gathering plant foods allows a person to return to camp when necessary, but hunting may require an overnight stay so as to continue tracking the animal in the morning. [ 4 ]
The Gathering Hypothesis has been criticized by those who believe it cannot explain human origins within the primate lineage. [ 5 ] A common argument against it is that if gathering were the best or most efficient method of acquiring food, men would simply gather rather than waste their time hunting. [ 5 ] The division of labor between men and women across cultures is also left unaccounted for. [ 5 ] Because hunting often takes the hunter far from the home base, selection would favor hunters who could find their way home without getting lost along the way, [ 5 ] whereas locating and gathering edible nuts, berries, fruit, and tubers would require a different set of spatial skills. [ 5 ] The high prevalence of male hunters and female gatherers among traditional societies, although not conclusive evidence, provides one more clue that both activities are part of the human pattern of procuring food. [ 5 ]
In ecology , hunting success is the proportion of hunts initiated by a predatory organism that end in success. Hunting success is determined by a number of factors such as the features of the predator, timing, different age classes, conditions for hunting, experience, and physical capabilities. Predators selectively target certain categories of prey, in particular prey of a certain size. Prey animals that are in poor health are targeted and this contributes to the predator's hunting success. Different predation strategies can also contribute to hunting success, for example, hunting in groups gives predators an advantage over a solitary predator, and pack hunters like lions can kill animals that are too powerful for a solitary predator to overcome.
Similar to hunting success, a kill rate is the number of animals an individual predator kills per unit of time, while hunting success focuses on the percentage of hunts that succeed. [ 1 ] Hunting success is also measured in humans, but because of their unnaturally high success rates, human hunters can have a large effect on prey populations and behaviour; especially in areas lacking natural predators, recreational hunting can have consequences for wildlife populations.
Predators may actively seek out prey; if a predator spots its preferred target, it must decide whether to attack or continue searching, and success ultimately depends on a number of factors. Predators may deploy a variety of hunting methods such as ambush, ballistic interception, pack hunting or pursuit predation. Hunting success can be measured against a single prey species or against all prey species in a predator's diet; for example, in the Mweya area of Queen Elizabeth National Park , lions had a hunting success of 54% against African buffaloes and 35.7% against common warthogs , though their overall hunting success was only 27.9%. [ 2 ] [ 3 ]
Hunting success across the animal kingdom varies from 5–97%, and it can differ greatly between populations of the same species. It can be measured for predators at different trophic levels. The hunting success rate is the percentage of captures in a number of initiated hunts; for example, 1 in 2 to 20 tiger hunts is estimated to end in success, which gives tigers an estimated hunting success rate of 5–50%. Percentages, rather than raw numbers, are the preferred way to report hunting success. Usually a single study is used to represent the hunting success of an entire species, or in some cases estimates are used. [ 4 ] [ 5 ] [ 1 ]
Hunting success can also be used to define the number of kills a human hunter makes over a specific number of hunts. However, hunting success is not used to describe the number of animals a poacher or a canned trophy hunter kills. [ 6 ]
Detailed field studies show that prey are usually successful at escaping predators, with hunting success rates as low as 1–5% in many systems. The outcome of a predatory attack largely depends on the interaction between the predator's physical performance and any evasive maneuvers by the prey animal. [ 7 ]
Most mammals have a hunting success below 50%, [ 20 ] but some, such as African wild dogs and harbour porpoises , can have hunting success rates of over 90%. The African wild dog is one of the most effective hunters on earth, with hunting success reaching a maximum of 90%. Its high hunting success is due to highly co-operative hunting behaviour accompanied by high stamina. Wild dogs typically use their stamina to exhaust their prey, which is usually caught after a chase averaging 2 km (1.2 mi). The wild dog's stamina and the prey animal's exhaustion are the driving factors behind most successful hunts. [ 21 ] Harbour porpoises are not usually social, but on multiple occasions they have been recorded hunting cooperatively, in groups averaging about two individuals. Using echolocation , they locate and capture prey, foraging continuously throughout the day and night to meet their body requirements. It is hypothesized that harbour porpoises eat large amounts of food, about 10% of their own body mass; another theory suggests that they require relatively large, energy-rich prey and high hunting success rates to meet their estimated metabolic requirements. [ 13 ]
Dragonflies have the highest observed hunting success of any animal, with success rates as high as 97%. They are also opportunistic and pursue a variety of prey. Predatory performance may have consequences in terms of energetics, mortality and the potential loss of feeding or mating territories. Their hunting success is due to many unique evolutionary adaptations, including aspects of eyesight and flight. In terms of flight, dragonflies can independently control their fore and hind wings, and they can hover and fly in any direction, including backwards. They can fixate on their prey and predict its next move, catching it midair with extreme accuracy. Each of a dragonfly's eyes is made up of thousands of units known as ommatidia that run across its head. This gives them almost 360-degree vision, which helps them spot prey more efficiently. [ 22 ] [ 23 ]
The black-footed cat has the highest hunting success of any member of the family Felidae . In 1993, a female and a male were observed for 622 hours; a kill was made every 50 minutes, and they had a hunting success of 60%. A total of 550 animals were consumed, with about 14 small animals caught each night. Their hunting success is due to their hunting behaviour and the frequency of initiated hunts. They use three hunting styles, "fast hunting", "slow hunting" and "sit and wait", to ambush or pursue their prey, which mostly consists of small mammals, insects and small birds. [ 19 ]
A predator's speed relative to that of its prey, and to that of other predators, appears to be another factor influencing hunting success.
When hunting Thomson's gazelles , cheetahs have a hunting success rate of 70%, compared with 57% for African wild dogs, 33% for spotted hyenas, 33% for jackals and 26% for lions . [ 24 ]
When hunting impalas , cheetahs have a hunting success of 26%, compared with 16% for leopards and 15.5% for African wild dogs. [ 25 ] [ 26 ]
A kill rate is the number of prey, or the biomass, killed by an individual predator per unit of time. A predator's functional response describes how kill rates vary with prey density; functional responses are of central importance when predicting the stability threshold of prey populations under predation and when estimating the potential carrying capacity of predator populations. Kill rates and functional responses are both influenced by diverse ecological variables, and kill rates differ between males and females, solitary and social individuals, mothers with cubs, different age classes, levels of individual fitness, prey availability, experience, and so on.
Kill rates are required to further understand functional responses and predator-prey dynamics, as well as develop conservation strategies for predator species around the world. Kill rate studies have been conducted for large carnivores such as gray wolves , jaguars , tigers and leopards . A kill rate study of cougars showed that females with cubs had the highest kill rate, with one adult female with cubs in northern California having a kill rate of 2.35 ungulates per week. Adult males averaged 0.84 ungulates per week, females with cubs had an average of 1.24 ungulates per week and solitary females had a mean kill rate of 0.99 ungulates per week. [ 27 ]
Hunting success depends on the distance or time the predator has to catch its prey, compared with the distance (or time) that the prey has to escape. [ 28 ] In the wild, a discrepancy is observed between carnivores' low hunting success and their highly selective predation on ill animals. This behaviour may be explained by the co-adaptive evolution of predator and prey. A predator like a wolf cannot always hunt any given deer, because an error in prey choice can lead to energy loss, injury and even death. [ 29 ] Predators tend to seek vulnerable prey, and this is the basis of the selective impact of predators on prey populations. [ 30 ] The low hunting success rate of wild carnivores may reflect the fact that identifying potentially vulnerable prey from a distance is imperfect, all the more so because prey behaviour can compensate for poor health. In the wild, the capacity for distinguishing odors or slight differences in prey behaviour is influenced by a number of factors, such as wind strength and direction, the body condition and features of the predator, its experience, the conditions for pursuing prey, and much more. [ 31 ] The microbiota (metabolites at the surface of the body) of animals exposed to long-term stress are responsible for their specific stress odor, which allows predators to evaluate the vulnerability of potential prey. The causes of reduced health differ and depend on the individual animal's sensitivity to various biotic and abiotic factors, such as endogenous, infectious, and parasitic diseases and intra- and interspecific interactions. The macro-organism host, together with its microflora, thus helps predators to judge the state of their prey. [ 32 ]
Increased hunting success is a frequently cited benefit of group living in social predators, and it is one hypothesis for the evolution of sociality. [ 33 ] However, previous research shows that this benefit is present only in small groups. In several group-hunting taxa, ranging from insects to primates, hunting success does not increase with group size despite cooperation among the hunters. [ 34 ] Research shows that hunting success is highest in predator groups of 2–5 animals, then levels off, or even declines, in larger groups. [ 35 ] It has been theorised that when predators hunt formidable prey, hunting success does increase with group size. This pattern reflects increased cooperation in large groups, since a solitary predator has a much lower chance against such prey: the low success of solitary hunters promotes cooperation because an extra hunter can improve group hunting success enough to offset the risk of injury and energy loss.
Field studies show that different predator hunting methods (ambush, pursuit predation, etc.) can lead to different numbers of prey being captured. [ 36 ] Because of this, predators with different hunting strategies can cause competing trophic cascades and function at different trophic levels. [ 37 ] Predators are often classified as active or sit-and-wait predators according to their average hunting behaviour. [ 38 ] The locomotor crossover hypothesis states that ambush predators should have more success when hunting fast-moving prey, whereas cursorial predators should be more successful when hunting sedentary prey. Studies reveal that starvation can cause an ambush predator to adopt pursuit predation, and ambush predators regularly switch to pursuit predation when prey densities are lower. [ 39 ] [ 40 ] Experiments show that differences in prey's anti-predator responses to the environment can influence predator behaviour or success. Field observations show that predators can alter their hunting behaviour at larger scales according to prey behaviour, while at smaller scales they seek specific locations that facilitate hunting.
Environmental conditions influence a predator's ability to detect prey, and vice versa. A primary mechanism is the limitation of mobile predators' foraging time by the risk of unfavourable conditions. Predators are more important to community functioning in benign environments, an effect that is reduced under stressful conditions. Hydrodynamic stress associated with waves decreases a predator's success, as such conditions restrict predator mobility and foraging activity. Environmental conditions may thus impair a predator's ability to find or consume prey. For instance, green crab predation decreased drastically in parts of the Damariscotta River with high flow velocities, even though the crabs were found at greater densities in those high flows. Similarly, fish, insects and copepods exhibit much lower foraging success in more rapid flows. Behavioural research shows that environmental conditions like hydrodynamics can have a large effect in systems where predators rely on chemical cues to find their prey. [ 41 ]
A predator's hunting behaviour is suited to specific types of vegetative cover and is thus largely characteristic of its taxonomic family . Felids, for instance, typically use dense cover to stalk or ambush prey, whereas canids do not use vegetative cover when hunting. The sympatric Canada lynx and coyote were tracked in the snow over three winters to study hunting behaviour in relation to vegetative cover. The main prey of both species was the snowshoe hare ; lynx pursued hares more frequently in sparse white spruce canopies than coyotes did, while coyotes pursued hares more in dense spruce than lynx did. It is thought that the hunting behaviour of lynx varies with cover, while that of coyotes is fixed. However, coyotes appeared to use cover to their advantage when stalking hares, possibly an influence of snow on the hunting methods of each predator species. [ 42 ]
Hunting success in humans differs with the methods used, the prey selected, the performance of the hunter, weather conditions, and other factors. A study of the hunting methods of the bushmen of southern Africa showed that hunters who used dogs had a hunting success of 60%, while those who employed persistence hunting succeeded in 37–100% of 15 attempted hunts. Hunters using bows and arrows had a success rate of only 5%, those using a springhare probe 14%, and those using clubs and spears 45%. [ 6 ]
In Kentucky , US, a study examined the factors influencing flush and hunting success for three game species: ruffed grouse , northern bobwhite and the cottontail rabbit . Encounter rates may affect population dynamics, hunter satisfaction, and hunter retention. Over the 12 years from 2003 to 2015, about 3,948 grouse hunts, 19,301 rabbit hunts, and 4,798 bobwhite hunts took place. In this case, hunting success was defined as the number of animals a hunting party flushed. Hunting success was expected to increase over the hunting season as cover was reduced and the weather became more hospitable for upland hunting, and it was usually enhanced when more hunters and dogs were added to hunting parties. [ 43 ]
Human hunters employ many types of hunting, including recreational hunting (e.g. trophy hunting ), medium/small game hunting (e.g. deer hunting ), fowling , pest control / nuisance management , commercial hunting (e.g. whaling ) and poaching . Some 24 hunting methods are used, including baiting (the use of baits to lure animals), battue (scaring animals into a killing zone), beagling (using beagles in hunts), the use of camouflage , shooting, the use of dogs , persistence hunting (using stamina to exhaust prey), stalking and more. Modern regulations differentiate lawful hunting from illegal poaching, in which animals are killed without control.
Historical, subsistence, and sport hunting can differ greatly, with modern hunting regulations addressing how to hunt most sustainably. Techniques vary with government regulations, a hunter's personal ethics, local practices, hunting equipment, and the target animal species. Hunters may use a combination of two or more hunting techniques, though the law may forbid hunters from using techniques common in activities like poaching and wildlife management. [ 44 ]
The exploitation of animal species currently threatens many with extinction, particularly in tropical rainforests, where hunting for food poses the most severe threat. In one study, Piro shotgun hunters took a limited number of shotgun cartridges on hunting trips and usually ignored less profitable prey early in a trip, when the chance of encountering more profitable prey remained high. [ 45 ] Human disturbance can influence the behaviour of wild animals, with consequences for wildlife populations. [ 46 ] For example, in northeastern Gabon , studies show that hunting and human disturbance reduced the populations of large mammals near roads and in more populated areas. In particular, primates such as chimpanzees and mandrills were found far from roads, possibly because these species are hunted more intensely for bushmeat or in retaliation for crop raiding. [ 47 ] Most large predators have been extirpated from the range of the white-tailed deer , so human hunters have taken over this predatory role. Hunters can indirectly affect prey species through behavioural responses such as altered resource selection, space use or movement. Deer perceive humans as a threat and adapt by minimizing movement and showing high residency times in established ranges, factors that influence their susceptibility to harvest. [ 48 ]
Huntingdon Life Sciences ( HLS ) was a contract research organisation (CRO) organized in Maryland and headquartered in East Millstone, New Jersey . It was founded in 1951 in Cambridgeshire, England . It had two laboratories in the United Kingdom and one in the United States. With over 1,600 employees, it was the largest non-clinical CRO in Europe and the third-largest non-clinical CRO in the world. [ 2 ] In September 2015, Huntingdon Life Sciences, Harlan Laboratories , GFA, NDA Analytics and LSR associates merged into Envigo (now Inotiv ).
HLS provided contract research organization services in pre-clinical and non-clinical biological safety evaluation research. As with other major CROs operating in this area, its main business was serving the pharmaceutical industry. However, more than a third of its business came from non-pharmaceutical sources, such as the crop protection industry, which accounted for around 60% of its non-pharmaceutical business.
HLS had two facilities in the UK ( Huntingdon , Cambridgeshire and Eye, Suffolk ), one in the USA ( East Millstone, New Jersey ) and an office in Japan (Tokyo).
The company was one of the largest participants in the international primate trade and has been criticized for its animal testing practices, most specifically animal testing on non-human primates as well as on beagles . The Stop Huntingdon Animal Cruelty campaign was formed with the goal of shutting down the company due to animal rights violations.
Huntingdon Life Sciences was founded in the UK in 1951 as Nutrition Research Co. Ltd., a commercial organisation that initially focused on nutrition , veterinary , and biochemical research. The original facilities were split over two locations; the main offices were within Cromwell House in the town of Huntingdon ; and the main laboratories were at the Hartford Field Station, just over a mile away. It then became involved with pharmaceuticals , food additives , and industrial and consumer chemicals. In 1959 it changed its name to Nutritional Research Unit Ltd. The company benefited in the early 1960s from increased government regulatory testing requirements, especially in the pharmaceutical industry. In 1964, it was acquired by Becton Dickinson . [ 3 ]
In April 1983, Becton Dickinson created Huntingdon Research Centre PLC. It then offered four million American depositary receipts (ADRs) for sale at $15 each, representing the company's entire interest in Huntingdon. In 1985, as it began to expand its operations, the company changed its name to Huntingdon International Holdings plc. That year, it established Huntingdon Analytical Services Inc. to conduct business in the United States.
To augment its CRO business, Huntingdon acquired Minnesota's Twin City Testing Laboratory and affiliated companies in 1985, followed by the acquisition of Nebraska Testing Corporation in 1986; Travis Laboratories and Kansas City Test Laboratory Inc. in 1989; and Southwestern Laboratories, Inc. in 1990. Huntingdon also diversified its operations, primarily in the United States, becoming involved in engineering and environmental services.
In 1987, HLS acquired Northern Engineering and Testing. In 1988, it acquired Empire Soils Investigations, Chen Associates, and Asteco Inc. In 1988, HLS was floated on the London Stock Exchange and in 1989 obtained a listing on the New York Stock Exchange . In 1990, Huntingdon acquired the St. Louis branch of Envirodyne Engineers and Whiteley Holdings. In 1991, it acquired Austin Research Engineers, followed by Travers Morgan.
By the early 1990s, Huntingdon was organised into three business groups: the Life Sciences Group, the Engineering/Environmental Group, and the Travers Morgan Group, which offered engineering and environmental consulting services outside of the United States. However, only the Life Sciences Group showed long-term promise. Travers Morgan was allowed to lapse into insolvency, control passed into other hands, and Huntingdon wrote off the investment. In 1995, the engineering and environmental businesses were sold to Maxim Engineers of Dallas, Texas.
To bolster its CRO business and reinforce its U.S. presence, in 1995, Huntingdon acquired the toxicology business of Applied Biosciences International for $32.5 million in cash, plus the Leicester Clinical Research Centre. The deal included a U.S. laboratory located near Princeton, New Jersey , as well as two British facilities. In 1997, Huntingdon International Holdings changed its name to Huntingdon Life Sciences Group. The U.K. subsidiary, Huntingdon Research Centre, changed its name to Huntingdon Life Sciences, while the U.S. business operated as Huntingdon Life Sciences Inc.
In 2002, HLS moved its financial centre to the United States and incorporated in Maryland as Life Sciences Research.
In 2009, HLS was acquired. [ 4 ]
In September 2015, Huntingdon Life Sciences, Harlan Laboratories , GFA, NDA Analytics and LSR associates merged into Envigo (now Inotiv ).
The latest available public figures, from 2008, show that HLS employed more than 1,600 staff across all of its facilities. They break down as: [ 5 ]
HLS uses animals in the biomedical research it conducts for its customers. The most recent numbers released state that in the UK around 60,000 animals are used annually. [ 12 ] This number is broken down by species:
Huntingdon is criticised by animal rights and animal welfare groups for using animals in research, for instances of animal abuse, and for the wide range of substances it tests on animals, particularly non-medical products. SHAC claims that 500 animals died every day at HLS (182,500 a year), [ 13 ] a figure at odds with HLS's published numbers.
Huntingdon's labs were infiltrated by undercover animal rights activists in 1997 in the UK and in 1998 in the US.
In 1997, film secretly recorded inside HLS in the UK by BUAV and subsequently broadcast on Channel 4 television as "It's a Dog's Life", showed serious breaches of animal-protection laws, including a beagle puppy being held up by the scruff of the neck and repeatedly punched in the face, and animals being taunted. [ 14 ]
The laboratory technicians responsible were suspended from HLS the day after the broadcast. All three were later dismissed. [ 15 ] Two of the men seen hitting and shaking dogs were found guilty under the Protection of Animals Act 1911 of "cruelly terrifying dogs." It was the first time laboratory technicians had been prosecuted for animal cruelty in the UK. HLS admitted that the technicians' behaviour was deplorable and a new management team was introduced the following year which, according to The Daily Telegraph , "introduced greater openness and new training methods." [ 15 ]
In 1998, an undercover investigator for People for the Ethical Treatment of Animals (PETA) used a camera hidden in her glasses to make 50 hours of videotape of the HLS laboratories in Princeton, New Jersey. She also made four 90-minute audiotapes, photocopied 8,000 company documents, and copied the company's client list. According to PETA some of the film she shot showed a monkey being dissected while still alive and conscious. The president of HLS in New Jersey, Alan Staple, said the monkey was alive but sedated during the dissection. [ 16 ]
A 2001 article from The Resurgence Trust stated that HLS obtained a "gagging order" in the US that prevents PETA from publicising or talking about any of the information that they discovered. The order also prevented PETA from communicating with the American Department of Agriculture, which had been going to investigate the evidence. [ 17 ]
The Stop Huntingdon Animal Cruelty (SHAC) campaign is based in the UK and US, and has aimed to close the company down since 1999. According to its website, the campaign's methods are restricted to non-violent direct action , as well as lobbying and demonstrations. It targets not only HLS itself, but any company, institution, or person allegedly doing business with the laboratory, whether as clients, suppliers, or even disposal and cleaning services, and the employees of those companies.
Despite its stated non-violent position, SHAC members have been convicted of crimes of violence against HLS employees. On 25 October 2010 five SHAC members received prison sentences for threatening HLS staff. SHAC has also been accused of encouraging arson and violent assault. An HLS director was assaulted in front of his child. [ 18 ] HLS managing director Brian Cass was sent a mousetrap primed with razor blades, [ 18 ] and in February 2001 was attacked by three men armed with pickaxe handles and CS gas. [ 19 ] Another businessman with links to HLS was attacked and knocked unconscious adjacent to a barn his assailants had set alight. [ 15 ]
Both SHAC and Animal Liberation Front activists have been alleged to have been engaged in harassment and intimidation , including issuing hoax bomb threats and death threats. [ 20 ] In 2003, Daniel Andreas San Diego was accused by the American FBI of "ecoterrorism" in support of SHAC in the San Francisco Area; however, there is some question whether his "terrorist plot" was an entrapment operation by the American FBI. [ 21 ] In 2008 seven of SHAC's senior members were described by prosecutors as "some of the key figures in the Animal Liberation Front" and found guilty of conspiracy to blackmail HLS. [ 22 ]
The campaign against HLS led to its share price crashing, the Royal Bank of Scotland closing its bank account, and the British government arranging for the Bank of England to give it an account. [ 23 ] In 2000, HLS was dropped from the New York Stock Exchange because its market capitalization had fallen below NYSE limits. [ 23 ]
From 2006, The Daily Telegraph reports, the British Government took the decision to tackle "the problem of animal rights extremism." [ 15 ] On 1 May 2007, a police campaign called Operation Achilles was enacted against SHAC, a series of raids involving 700 police officers in England, Amsterdam, and Belgium. [ 24 ] In total, 32 people linked to the group were arrested, [ 25 ] and seven leading members of SHAC, including Greg Avery , were found guilty of blackmail. [ 26 ] Police estimated in 2007 that, as a consequence of the operation, "up to three quarters of the most violent activists" were jailed. Der Spiegel writes that the number of attacks on HLS and their business declined drastically but "the movement is by no means dead." [ 24 ] | https://en.wikipedia.org/wiki/Huntingdon_Life_Sciences |
In computer science , the Hunt–Szymanski algorithm , [ 1 ] [ 2 ] also known as Hunt–McIlroy algorithm , is a solution to the longest common subsequence problem . It was one of the first non-heuristic algorithms used in diff which compares a pair of files each represented as a sequence of lines. To this day, variations of this algorithm are found in incremental version control systems , wiki engines , and molecular phylogenetics research software.
The worst-case complexity of this algorithm is O ( n 2 log n ) , but in practice a running time of O ( n log n ) is expected. [ 3 ] [ 4 ]
The algorithm was proposed by Harold S. Stone as a generalization of a special case solved by Thomas G. Szymanski. [ 5 ] [ 6 ] [ 7 ] James W. Hunt refined the idea, implemented the first version of the candidate-listing algorithm used by diff and embedded it into an older framework of Douglas McIlroy . [ 5 ]
The description of the algorithm appeared as a technical report by Hunt and McIlroy in 1976. [ 5 ] The following year, a variant of the algorithm was finally published in a joint paper by Hunt and Szymanski. [ 5 ] [ 8 ]
The Hunt–Szymanski algorithm is a modification to a basic solution for the longest common subsequence problem which has complexity O ( n 2 ) . The solution is modified so that there are lower time and space requirements for the algorithm when it is working with typical inputs.
Let A i be the i th element of the first sequence.
Let B j be the j th element of the second sequence.
Let P ij be the length of the longest common subsequence for the first i elements of A and the first j elements of B .
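With these definitions, the basic quadratic-time solution fills a table of P values using the standard longest-common-subsequence recurrence. A minimal Python sketch of that baseline algorithm (the function name and the test sequences in the usage example are illustrative, not from the original text):

```python
def lcs_length(A, B):
    """Basic O(m*n) dynamic program.

    P[i][j] holds the length of the longest common subsequence of
    the first i elements of A and the first j elements of B.
    """
    m, n = len(A), len(B)
    P = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if A[i - 1] == B[j - 1]:
                # A match extends the best subsequence of both prefixes.
                P[i][j] = P[i - 1][j - 1] + 1
            else:
                # Otherwise drop the last element of one sequence.
                P[i][j] = max(P[i - 1][j], P[i][j - 1])
    return P[m][n]
```

For example, `lcs_length("abcbdab", "bdcaba")` returns 4 (one such subsequence is "bcba"). Both the time and space cost of this table are what the Hunt–Szymanski modification improves on for typical inputs.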
Consider the sequences A and B .
A contains three elements:
B contains three elements:
The steps that the above algorithm would perform to determine the length of the longest common subsequence for both sequences are shown in the diagram. The algorithm correctly reports that the longest common subsequence of the two sequences is two elements long.
The above algorithm has worst-case time and space complexities of O ( mn ) ( see big O notation ), where m is the number of elements in sequence A and n is the number of elements in sequence B . The Hunt–Szymanski algorithm modifies this algorithm to have a worst-case time complexity of O ( mn log m ) and space complexity of O ( mn ) , though it regularly beats the worst case with typical inputs.
The Hunt–Szymanski algorithm only considers what the authors call essential matches, or k -candidates. A k -candidate is a pair of indices ( i , j ) such that the elements match ( A i = B j ), and the match increases the length of the longest common subsequence over both shorter prefixes (that is, P ij = k is greater than both P i −1, j and P i , j −1 ).
The second point implies two properties of k -candidates:
To create the longest common subsequence from a collection of k -candidates, a grid with each sequence's contents on each axis is created. The k -candidates are marked on the grid. A common subsequence can be created by joining marked coordinates of the grid such that any increase in i is accompanied by an increase in j .
This is illustrated in the adjacent diagram.
Black dots represent candidates that would have to be considered by the simple algorithm and the black lines are connections that create common subsequences of length 3.
Red dots represent k -candidates that are considered by the Hunt–Szymanski algorithm and the red line is the connection that creates a common subsequence of length 3. | https://en.wikipedia.org/wiki/Hunt–Szymanski_algorithm |
The Hunza cuisine , also called the Burusho cuisine ( Burushaski : بروشو دݘیرس ), is the set of foods and drinks traditionally consumed by the Burusho people (also called the Hunza people) of northern Pakistan . Alternative medicine and natural health advocates have argued, without providing any scientific evidence, that the Hunza diet can increase longevity to 120 years. [ 1 ] The diet consists mostly of raw food, including nuts, fresh vegetables, dried vegetables, mint, fruits, and seeds with yogurt. A cooked meal of daal with chapati is eaten for dinner.
In the 1930s, Swiss-German physician Ralph Bircher conducted research on the Hunza diet. [ 2 ] In his book about the Hunza, Jay Hoffman argued that, by analogy with the lifespan ratios of cats, dogs, and horses, humans should live 120 to 150 years, and presented the Hunza diet as the key to this longevity. [ 3 ] Such ideas, also promoted by natural health advocates, have been discredited: there is no reliable documentation validating the age of alleged Hunza supercentenarians. [ 1 ] [ 4 ]
False claims about the Hunza people living to be hundreds of years old in perfect health from their diet of " natural foods " were promoted by J. I. Rodale and G. T. Wrench . [ 5 ] The claims had no basis in fact and were refuted by a team of Japanese researchers from Kyoto University in 1960 who had examined Hunza inhabitants. The medical team found rampant signs of poor health amongst the Hunza, including goitre , malnutrition, rheumatism , tuberculosis and high levels of infant mortality. [ 5 ]
In 2005, the Encyclopedia of World Geography stated that "to date there is no credible evidence that determines that the Hunzakut diet of old, not to mention the current diet of the past four decades, contributes to longevity." [ 2 ]
Another myth associated with the Hunza people is that, because their diet is alleged to be high in apricot seeds, they are free from disease. This has proven to be untrue, as medical scientists have found that the Hunza suffer from a variety of diseases, including cancer. [ 1 ] [ 6 ] | https://en.wikipedia.org/wiki/Hunza_diet |
The Hurd–Mori 1,2,3-thiadiazole synthesis is a name reaction in organic chemistry that generates 1,2,3- thiadiazoles by treating hydrazone derivatives bearing an N -acyl or N - tosyl group with thionyl chloride . [ 1 ] [ 2 ] [ 3 ] [ 4 ] An analogous reaction gives 1,2,3-selenadiazoles by using selenium dioxide instead of thionyl chloride. [ 4 ]
This chemical reaction article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Hurd–Mori_1,2,3-thiadiazole_synthesis |
In mathematics, a Hurewicz space is a topological space that satisfies a certain basic selection principle that generalizes σ-compactness . A Hurewicz space is a space in which for every sequence of open covers U₁, U₂, … of the space there are finite sets F₁ ⊂ U₁, F₂ ⊂ U₂, … such that every point of the space belongs to all but finitely many of the sets ⋃F₁, ⋃F₂, ….
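In the notation of selection principles this property is commonly written U_fin(O, Γ); symbolically, the definition above reads (our transcription):

```latex
% Hurewicz property: finite selections whose unions cover each point
% for all but finitely many indices
\forall\, \mathcal{U}_1, \mathcal{U}_2, \ldots \ \text{(open covers of } X\text{)}
\ \exists\, \text{finite } \mathcal{F}_n \subseteq \mathcal{U}_n \ (n \in \mathbb{N}):
\quad \forall x \in X,\ x \in \bigcup \mathcal{F}_n \ \text{for all but finitely many } n .
```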
In 1926, Witold Hurewicz [ 1 ] introduced the above property of topological spaces, which is formally stronger than the Menger property . He did not know whether Menger's conjecture was true, or whether his property was strictly stronger than the Menger property, but he conjectured that in the class of metric spaces his property is equivalent to σ-compactness.
Hurewicz conjectured that in ZFC every Hurewicz metric space is σ-compact. Just, Miller, Scheepers , and Szeptycki [ 2 ] proved that Hurewicz's conjecture is false, by showing that there is, in ZFC, a set of real numbers that is Menger but not σ-compact. Their proof was dichotomic, and the set witnessing the failure of the conjecture heavily depends on whether a certain (undecidable) axiom holds or not.
Bartoszyński and Shelah [ 3 ] (see also Tsaban 's solution based on their work [ 4 ] ) gave a uniform ZFC example of a Hurewicz subset of the real line that is not σ-compact.
Hurewicz asked whether in ZFC his property is strictly stronger than the Menger property. In 2002, Chaber and Pol, in an unpublished note using a dichotomic proof, showed that there is a Menger subset of the real line that is not Hurewicz. In 2008, Tsaban and Zdomskyy [ 5 ] gave a uniform example of a subset of the real line that is Menger but not Hurewicz.
For subsets of the real line, the Hurewicz property can be characterized using continuous functions into the Baire space N^N. For functions f, g ∈ N^N, write f ≤* g if f(n) ≤ g(n) for all but finitely many natural numbers n. A subset A of N^N is bounded if there is a function g ∈ N^N such that f ≤* g for all functions f ∈ A; it is unbounded if it is not bounded. Hurewicz proved that a subset of the real line is Hurewicz iff every continuous image of that space into the Baire space is bounded. In particular, every subset of the real line of cardinality less than the bounding number 𝔟 is Hurewicz.
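The order and boundedness notions used here can be restated symbolically (a transcription of the definitions in this paragraph):

```latex
f \le^{*} g \iff (\exists m)(\forall n \ge m)\ f(n) \le g(n),
\qquad
A \subseteq \mathbb{N}^{\mathbb{N}} \ \text{is bounded} \iff
(\exists g \in \mathbb{N}^{\mathbb{N}})(\forall f \in A)\ f \le^{*} g .
```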
Let X be a topological space. The Hurewicz game played on X is a game with two players, Alice and Bob.
1st round : Alice chooses an open cover U₁ of X . Bob chooses a finite set F₁ ⊂ U₁.
2nd round : Alice chooses an open cover U₂ of X . Bob chooses a finite set F₂ ⊂ U₂.
etc.
If every point of the space X belongs to all but finitely many of the sets ⋃F₁, ⋃F₂, …, then Bob wins the Hurewicz game. Otherwise, Alice wins.
A player has a winning strategy if he knows how to play in order to win the game (formally, a winning strategy is a function).
A topological space is Hurewicz iff Alice has no winning strategy in the Hurewicz game played on this space. [ 6 ]
A Tychonoff space X is Hurewicz iff for every compact space C containing X and every G_δ subset G of C containing X, there is a σ-compact set Y with X ⊂ Y ⊂ G. [ 2 ] | https://en.wikipedia.org/wiki/Hurewicz_space |
Tornadoes , cyclones , and other storms with strong winds damage or destroy many buildings. However, with proper design and construction, the damage to buildings by these forces can be greatly reduced. A variety of methods can help a building survive strong winds and storm surge .
Waves along coastal areas can destroy many buildings. Buildings should preferably be built on high ground to avoid waves. If waves can reach the building site, the building should be elevated on steel, concrete, or wooden pilings or anchored to solid rock.
Wind on the roof surfaces can cause negative pressures that create a lifting force sufficient to lift the roof off the building. Once this occurs, the building is weakened considerably, and the rest will likely fail as well. To minimize this vulnerability, the upper structure ought to be anchored through the walls to the foundation.
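The scale of these lifting forces follows from basic aerodynamics: dynamic pressure grows with the square of wind speed, q = ½ρv². The sketch below is a rough order-of-magnitude illustration only; the pressure coefficient is an assumed placeholder, not a value from any building code:

```python
def dynamic_pressure(v_mps, rho=1.225):
    """Dynamic pressure q = 0.5 * rho * v**2 in pascals, for air
    density rho (kg/m^3, sea-level default) and wind speed v (m/s)."""
    return 0.5 * rho * v_mps ** 2

def roof_uplift(v_mps, roof_area_m2, cp=0.8):
    """Rough net uplift in newtons: an assumed illustrative pressure
    coefficient cp times dynamic pressure times roof area."""
    return cp * dynamic_pressure(v_mps) * roof_area_m2
```

At 70 m/s (about 250 km/h), q is roughly 3 kPa; over a 100 m² roof with the assumed coefficient this is on the order of 240 kN of uplift, which is why a continuous load path from roof to walls to foundation matters.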
Several methods can be used to anchor the roof. Typically, roof trusses are "toenailed" into the top of the walls, which provides insufficient resistance to high winds. Hurricane ties nail into the wall and wrap over the trusses to provide greater resistance.
Interlocking metal pan roof systems installed on mobile homes can fail under the pressure differential (lift) created by high-velocity winds passing over the surface plane of the roof. This is compounded when wind enters and pressurizes the building interior, lifting the underside of the roof panels and destroying the building. One example of pan roof systems can be found in this document from Structall Building Systems Archived 2016-03-04 at the Wayback Machine .
To mitigate this pressure differential, pre-installed aluminum tubular channels can be permanently fastened perpendicularly across the top of the interlocking ribs of the metal roof system, without disturbing the flow of rainwater, at the eave, mid-span, and ridge locations of the building.
Earth-sheltered construction is generally more resistant to strong winds and tornadoes than standard construction. Cellars and other earth-sheltered components of other buildings can provide safe refuge during tornadoes.
The physical geometry of a building affects its aerodynamic properties and how well it can withstand a storm. Geodesic dome roofs or buildings have low drag coefficients and can withstand higher wind forces than a square building of the same area. [ 1 ] [ 2 ] Even stronger buildings result from monolithic dome construction. [ 3 ]
A Category 5 hurricane-proof log house is resistant to winds up to 245 miles per hour (394 km/h). Wall logs in such construction must be made of glued laminated timber and all other components of the house, including hurricane straps, must be hurricane-resistant.
A round or multiple-sided home is more resistant to hurricane-strength winds. [ 4 ] [ 5 ] The round design allows the wind to blow around the home, reducing the build-up of pressure on one side. [ 6 ] Additionally, when the roof and floors are built using a radial truss array, loads from sustained winds disperse across the entire structure instead of building up in one area. [ 7 ]
Building openings such as garage doors and windows are often weak points susceptible to failure by wind pressure and blowing debris. Once failure occurs, wind pressure builds up inside the building resulting in the roof lifting off the building. Hurricane shutters can provide protection.
Doors can be blown into the house by wind, causing potential structural failure (see http://www.floridadisaster.org/hrg/content/openings/openings_index.asp#Hinged_Exterior_Doors ).
Windows can be constructed with plastic panes, shatterproof glass, or glass with protective membranes. The panes are often more firmly attached than normal window panes, including using screws or bolts through the edges of larger panes. Concrete anchor screws are used to secure windows to the concrete structure surrounding them.
Wood has a relatively high degree of flexibility, which can be beneficial under certain building stresses.
Reinforced concrete is a strong, dense material that can withstand the destructive power of very high winds and high-speed debris if used in a building that is designed properly.
After Hurricane Andrew in 1992 caused $16 billion in insured damage, the state of Florida established new building standards and enforcement. The state increased performance criteria for wind-load provisions and adopted new wind provisions from the American Society of Civil Engineers . One important addition to the new code was the requirement of missile-impact resisting glass , which can withstand high-velocity impact from wind-borne debris during a hurricane. Many houses built in South Florida since Hurricane Andrew are cinder block masonry construction reinforced with concrete pillars, hurricane-strapped roof trusses , and code requirements for adhesives and types of roofing. [ 8 ] [ 9 ] Florida also designated High Velocity Hurricane Zones, with special requirements defined for Miami-Dade and Broward Counties. [ 10 ]
Hong Kong requires many structures to withstand winds from typhoons. [ 11 ]
Residential construction in Darwin, Northern Australia
Notes | https://en.wikipedia.org/wiki/Hurricane-proof_building |