Software engineering is a branch of both computer science and engineering focused on designing, developing, testing, and maintaining software applications. It involves applying engineering principles and computer programming expertise to develop software systems that meet user needs. [1] [2] [3] [4] The terms programmer and coder overlap with software engineer, but they imply only the construction aspect of a typical software engineer's workload. [5] A software engineer applies a software development process, [1] [6] which involves defining, implementing, testing, managing, and maintaining software systems, as well as developing the software development process itself.

Beginning in the 1960s, software engineering was recognized as a separate field of engineering. The development of software at the time was seen as a struggle: software was over budget, exceeded deadlines, required extensive debugging and maintenance, failed to meet the needs of consumers, or was never completed at all. In 1968, NATO held the first software engineering conference, where issues related to software were addressed and guidelines and best practices for the development of software were established. [7]

The origins of the term software engineering have been attributed to various sources. The term appeared in a list of services offered by companies in the June 1965 issue of "Computers and Automation" [8] and was used more formally in the August 1966 issue of Communications of the ACM (Volume 9, number 8) in "President's Letter to the ACM Membership" by Anthony A. Oettinger. [9] [10] [11] It is also associated with the title of a NATO conference in 1968 organized by Professor Friedrich L. Bauer. [12] Margaret Hamilton described the discipline of "software engineering" during the Apollo missions to give what they were doing legitimacy. [13] At the time, there was perceived to be a "software crisis". [14] [15] [16] The 40th International Conference on Software Engineering (ICSE 2018) celebrated 50 years of "software engineering" with plenary session keynotes by Frederick Brooks [17] and Margaret Hamilton. [18]

In 1984, the Software Engineering Institute (SEI) was established as a federally funded research and development center headquartered on the campus of Carnegie Mellon University in Pittsburgh, Pennsylvania, United States. [19] Watts Humphrey founded the SEI Software Process Program, aimed at understanding and managing the software engineering process. [19] The Process Maturity Levels it introduced became the Capability Maturity Model Integration for Development (CMMI-DEV), which defines how the US Government evaluates the abilities of a software development team.

Modern, generally accepted best practices for software engineering have been collected by the ISO/IEC JTC 1/SC 7 subcommittee and published as the Software Engineering Body of Knowledge (SWEBOK). [6] Software engineering is considered one of the major computing disciplines. [20]

Various notable definitions of software engineering have been offered, and the term has also been used less formally. Individual commentators have disagreed sharply on how to define software engineering or its legitimacy as an engineering discipline. David Parnas has said that software engineering is, in fact, a form of engineering. [30] [31] Steve McConnell has said that it is not, but that it should be. [32] Donald Knuth has said that programming is an art and a science. [33]
Edsger W. Dijkstra claimed that the terms software engineering and software engineer have been misused in the United States. [34]

Requirements engineering is about the elicitation, analysis, specification, and validation of requirements for software. Software requirements can be functional, non-functional, or domain requirements. Functional requirements describe expected behaviors (i.e. outputs). Non-functional requirements specify issues like portability, security, maintainability, reliability, scalability, performance, reusability, and flexibility. They are classified into the following types: interface constraints, performance constraints (such as response time, security, storage space, etc.), operating constraints, life cycle constraints (maintainability, portability, etc.), and economic constraints. Knowledge of how the system or software works is needed when specifying non-functional requirements. Domain requirements have to do with the characteristics of a certain category or domain of projects. [35]

Software design is the process of making high-level plans for the software. Design is sometimes divided into levels.

Software construction typically involves programming (a.k.a. coding), unit testing, integration testing, and debugging so as to implement the design. [1] [6] "Software testing is related to, but different from, ... debugging". [6] Testing during this phase is generally performed by the programmer, with the purpose of verifying that the code behaves as designed and of knowing when the code is ready for the next level of testing. [citation needed] Software testing is an empirical, technical investigation conducted to provide stakeholders with information about the quality of the software under test. [1] [6] When described separately from construction, testing typically is performed by test engineers or quality assurance staff rather than by the programmers who wrote the code. It is performed at the system level and is considered an aspect of software quality. Program analysis is the process of analyzing computer programs with respect to aspects such as performance, robustness, and security.

Software maintenance refers to supporting the software after release. It may include but is not limited to: error correction, optimization, deletion of unused and discarded features, and enhancement of existing features. [1] [6] Usually, maintenance takes up 40% to 80% of project cost. [37]
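To make the construction and testing activities described above concrete, here is a minimal unit-testing sketch in Python; the loan-payment function and its expected behaviour are invented for illustration and are not taken from any cited source.

import unittest

def monthly_payment(principal, annual_rate, months):
    # Hypothetical function under test: fixed monthly payment on a loan.
    if months <= 0:
        raise ValueError("months must be positive")
    if annual_rate == 0:
        return principal / months
    r = annual_rate / 12  # monthly interest rate
    return principal * r / (1 - (1 + r) ** -months)

class MonthlyPaymentTest(unittest.TestCase):
    # Unit tests of this kind verify that the code behaves as designed
    # before it moves on to integration and system-level testing.
    def test_zero_interest_splits_principal_evenly(self):
        self.assertAlmostEqual(monthly_payment(1200, 0.0, 12), 100.0)

    def test_rejects_non_positive_term(self):
        with self.assertRaises(ValueError):
            monthly_payment(1200, 0.05, 0)

if __name__ == "__main__":
    unittest.main()

A programmer would typically run such tests during construction, before handing the code over for system-level testing.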
Knowledge of computer programming is a prerequisite for becoming a software engineer. In 2004, the IEEE Computer Society produced the SWEBOK, which has been published as ISO/IEC Technical Report 19759:2005, describing the body of knowledge that they recommend to be mastered by a graduate software engineer with four years of experience. [38] Many software engineers enter the profession by obtaining a university degree or training at a vocational school. One standard international curriculum for undergraduate software engineering degrees was defined by the Joint Task Force on Computing Curricula of the IEEE Computer Society and the Association for Computing Machinery, and updated in 2014. [20] A number of universities have software engineering degree programs; as of 2010, there were 244 campus bachelor of software engineering programs, 70 online programs, 230 master's-level programs, 41 doctorate-level programs, and 69 certificate-level programs in the United States.

In addition to university education, many companies sponsor internships for students wishing to pursue careers in information technology. These internships can introduce the student to real-world tasks that typical software engineers encounter every day. Similar experience can be gained through military service in software engineering. Half of all practitioners today have degrees in computer science, information systems, or information technology. [citation needed] A small but growing number of practitioners have software engineering degrees.

In 1987, the Department of Computing at Imperial College London introduced the first three-year software engineering bachelor's degree in the world; in the following year, the University of Sheffield established a similar program. [39] In 1996, the Rochester Institute of Technology established the first software engineering bachelor's degree program in the United States; however, it did not obtain ABET accreditation until 2003, the same year as Rice University, Clarkson University, Milwaukee School of Engineering, and Mississippi State University. [40] In 1997, PSG College of Technology in Coimbatore, India was the first to start a five-year integrated Master of Science degree in Software Engineering. [citation needed]

Since then, software engineering undergraduate degrees have been established at many universities. A standard international curriculum for undergraduate software engineering degrees, SE2004, was defined by a steering committee between 2001 and 2004 with funding from the Association for Computing Machinery and the IEEE Computer Society. As of 2004, about 50 universities in the U.S. offered software engineering degrees, which teach both computer science and engineering principles and practices. The first software engineering master's degree was established at Seattle University in 1979. Since then, graduate software engineering degrees have been made available from many more universities. Likewise in Canada, the Canadian Engineering Accreditation Board (CEAB) of the Canadian Council of Professional Engineers has recognized several software engineering programs. In 1998, the US Naval Postgraduate School (NPS) established the first doctorate program in software engineering in the world. [citation needed] Additionally, many online advanced degrees in software engineering have appeared, such as the Master of Science in Software Engineering (MSE) degree offered through the Computer Science and Engineering Department at California State University, Fullerton. Steve McConnell opines that because most universities teach computer science rather than software engineering, there is a shortage of true software engineers. [41] ETS (École de technologie supérieure) and UQAM (Université du Québec à Montréal) were mandated by the IEEE to develop the Software Engineering Body of Knowledge (SWEBOK), which has become an ISO standard describing the body of knowledge covered by a software engineer. [6]

Legal requirements for the licensing or certification of professional software engineers vary around the world. In the UK, there is no licensing or legal requirement to assume or use the job title Software Engineer. In some areas of Canada, such as Alberta, British Columbia, Ontario, [42] and Quebec, software engineers can hold the Professional Engineer (P.Eng) designation and/or the Information Systems Professional (I.S.P.) designation.
In Europe, software engineers can obtain the European Engineer (EUR ING) professional title. Software engineers can also become professionally qualified as a Chartered Engineer through the British Computer Society.

In the United States, the NCEES began offering a Professional Engineer exam for software engineering in 2013, thereby allowing software engineers to be licensed and recognized. [43] NCEES ended the exam after April 2019 due to lack of participation. [44] Mandatory licensing is still largely debated and is perceived as controversial. [45] [46]

The IEEE Computer Society and the ACM, the two main US-based professional organizations of software engineering, publish guides to the profession of software engineering. The IEEE's Guide to the Software Engineering Body of Knowledge – 2004 Version, or SWEBOK, defines the field and describes the knowledge the IEEE expects a practicing software engineer to have. The most current version is SWEBOK v4. [6] The IEEE also promulgates a "Software Engineering Code of Ethics". [47]

There are an estimated 26.9 million professional software engineers in the world as of 2022, up from 21 million in 2016. [48] [49] Many software engineers work as employees or contractors. Software engineers work with businesses, government agencies (civilian or military), and non-profit organizations. Some software engineers work for themselves as freelancers. Some organizations have specialists to perform each of the tasks in the software development process; other organizations require software engineers to do many or all of them. In large projects, people may specialize in only one role. In small projects, people may fill several or all roles at the same time. Many companies hire interns, often university or college students during a summer break, and some offer externships. Specializations include analysts, architects, developers, testers, technical support, middleware analysts, project managers, software product managers, educators, and researchers.

Most software engineers and programmers work 40 hours a week, but about 15 percent of software engineers and 11 percent of programmers worked more than 50 hours a week in 2008. [50] Like other workers who spend long periods sitting in front of a computer terminal typing at a keyboard, engineers and programmers are susceptible to eyestrain, back discomfort, thrombosis, obesity, and hand and wrist problems such as carpal tunnel syndrome. [51]

The U.S. Bureau of Labor Statistics (BLS) counted 1,365,500 software developers holding jobs in the U.S. in 2018. [52] Due to its relative newness as a field of study, formal education in software engineering is often taught as part of a computer science curriculum, and many software engineers hold computer science degrees. [53] The BLS estimates that employment in computer software engineering will grow by 17% from 2023 to 2033. [54] This is down from the 2022 to 2032 BLS estimate of 25% for software engineering, [54] [55] and further down from the 30% BLS estimate for 2010 to 2020. [56] Due to this trend, job growth may not be as fast as during the last decade, as jobs that would have gone to computer software engineers in the United States may instead be outsourced to computer software engineers in countries such as India. [57] [50]
In addition, the BLS Occupational Outlook for computer programmers predicts a decline of 7 percent from 2016 to 2026, a further decline of 9 percent from 2019 to 2029, a decline of 10 percent from 2021 to 2031, [57] and then a decline of 11 percent from 2022 to 2032. [57] Since computer programming can be done from anywhere in the world, companies sometimes hire programmers in countries where wages are lower. [57] [58] [59] Furthermore, the proportion of women in many software fields has been declining over the years as compared to other engineering fields. [60] There is the additional concern that recent advances in artificial intelligence might impact the demand for future generations of software engineers. [61] [62] [63] [64] [65] [66] [67] However, this trend may change or slow in the future as many current software engineers in the U.S. market leave the profession or age out of the market in the next few decades. [57]

The Software Engineering Institute offers certifications on specific topics like security, process improvement, and software architecture. [68] IBM, Microsoft, and other companies also sponsor their own certification examinations. Many IT certification programs are oriented toward specific technologies and are managed by the vendors of these technologies. [69] These certification programs are tailored to the institutions that would employ people who use these technologies.

Broader certification of general software engineering skills is available through various professional societies. As of 2006, the IEEE had certified over 575 software professionals as Certified Software Development Professionals (CSDP). [70] In 2008 it added an entry-level certification known as the Certified Software Development Associate (CSDA). [71] The ACM had a professional certification program in the early 1980s, [citation needed] which was discontinued due to lack of interest. The ACM and the IEEE Computer Society together examined the possibility of licensing software engineers as Professional Engineers in the 1990s, but eventually decided that such licensing was inappropriate for the professional industrial practice of software engineering. [45] John C. Knight and Nancy G. Leveson presented a more balanced analysis of the licensing issue in 2002. [46]

In the U.K., the British Computer Society has developed a legally recognized professional certification called Chartered IT Professional (CITP), available to fully qualified members (MBCS). Software engineers may be eligible for membership of the British Computer Society or the Institution of Engineering and Technology and so qualify to be considered for Chartered Engineer status through either of those institutions. In Canada, the Canadian Information Processing Society has developed a legally recognized professional certification called Information Systems Professional (ISP). [72] In Ontario, Canada, software engineers who graduate from a Canadian Engineering Accreditation Board (CEAB) accredited program, successfully complete PEO's (Professional Engineers Ontario) Professional Practice Examination (PPE), and have at least 48 months of acceptable engineering experience are eligible to be licensed through Professional Engineers Ontario and can become Professional Engineers (P.Eng). [73]
The PEO does not recognize any online or distance education, however, and does not consider computer science programs to be equivalent to software engineering programs, despite the tremendous overlap between the two. This has sparked controversy and a certification war, and it has kept the number of P.Eng holders in the profession exceptionally low. The vast majority of working professionals in the field hold a degree in computer science, not software engineering. Given the difficult certification path for holders of non-SE degrees, most never bother to pursue the license.

The initial impact of outsourcing, and the relatively lower cost of international human resources in developing countries, led to a massive migration of software development activities from corporations in North America and Europe to India and later to China, Russia, and other developing countries. This approach had some flaws, mainly the distance and time zone differences that prevented human interaction between clients and developers, as well as the massive job transfer. This had a negative impact on many aspects of the software engineering profession. For example, some students in the developed world avoid education related to software engineering because of the fear of offshore outsourcing (importing software products or services from other countries) and of being displaced by foreign visa workers. [74] Although statistics do not currently show a threat to software engineering itself, a related career, computer programming, does appear to have been affected. [75] Nevertheless, the ability to smartly leverage offshore and near-shore resources via the follow-the-sun workflow has improved the overall operational capability of many organizations. [76] When North Americans leave work, Asians are just arriving to work; when Asians are leaving work, Europeans arrive to work. This provides a continuous ability to have human oversight on business-critical processes 24 hours per day, without paying overtime compensation or disrupting sleep patterns, a key human resource.

While global outsourcing has several advantages, global – and generally distributed – development can run into serious difficulties resulting from the distance between developers. The key elements of this type of distance have been identified as geographical, temporal, cultural, and communication distance (the last of which includes the use of different languages and dialects of English in different locations). [77] Research has been carried out in the area of global software development over the last 15 years, and an extensive body of relevant work has been published highlighting the benefits and problems associated with this complex activity. As with other aspects of software engineering, research is ongoing in this and related areas.

There are various prizes in the field of software engineering.

Some call for licensing, certification, and codified bodies of knowledge as mechanisms for spreading engineering knowledge and maturing the field. [81] Some claim that the concept of software engineering is so new that it is rarely understood and widely misinterpreted, including in software engineering textbooks and papers and among the communities of programmers and crafters. [82] Some claim that a core issue with software engineering is that its approaches are not empirical enough, because real-world validation of approaches is usually absent or very limited, and hence software engineering is often misinterpreted as feasible only in a "theoretical environment". [82]
Edsger Dijkstra, a founder of many of the concepts in software development today, rejected the idea of "software engineering" up until his death in 2002, arguing that those terms were poor analogies for what he called the "radical novelty" of computer science:

A number of these phenomena have been bundled under the name "Software Engineering". As economics is known as "The Miserable Science", software engineering should be known as "The Doomed Discipline", doomed because it cannot even approach its goal since its goal is self-contradictory. Software engineering, of course, presents itself as another worthy cause, but that is eyewash: if you carefully read its literature and analyse what its devotees actually do, you will discover that software engineering has accepted as its charter "How to program if you cannot." [83]
https://en.wikipedia.org/wiki/Software_engineer
Software engineers make up a significant portion of the global workforce. As of 2022, there are an estimated 26.9 million professional software engineers worldwide, up from 21 million in 2016. [1] [2] In 2023, there were an estimated 1.6 million [3] professional software developers in North America. There are 166 million people [4] employed in the US workforce, making software developers 0.96% of the total workforce. [5] [6] [7] [8]

One comparison contrasts the number of software engineers (611,900 in 2002) with the number of traditional engineers (1,157,020 in 2002). There are another 1,500,000 people in system analysis, system administration, and computer support, many of whom might be called software engineers. Many systems analysts manage software development teams, and as analysis is an important software engineering role, many of them may be considered software engineers in the near future. This means that the number of software engineers may actually be much higher. Note that the number of software engineers declined by 5 to 10 percent from 2000 to 2002.

Computer and information system managers (264,790) manage software projects as well as computer operations. Similarly, construction and engineering managers (413,750) oversee engineering projects, manufacturing plants, and construction sites. Computer management is 64% the size of construction and engineering management. [citation needed]

Most people working in the field of computer science, whether making software systems (software engineering) or studying the theoretical and mathematical facets of software systems (computer science), acquire degrees in computer science. The data shows that the combined number of chemistry and physics educators (29,610) nearly equals the number of engineering educators (29,310). It is estimated that roughly half of computer science educators emphasize the practical (software engineering), and the other half emphasize the theoretical (computer science). [citation needed] This means that software engineering education is 56% the size of traditional engineering education. There are more computer science educators than chemistry and physics educators combined, or engineering educators. [citation needed]

Software engineers are part of the much larger software, hardware, application, and operations community. In 2000 in the U.S., there were about 680,000 software engineers and about 10,000,000 IT workers. There are no numbers on testers in the BLS data. [citation needed]

There has been healthy growth in the number of India's IT professionals over the past few years. From a base of 6,800 knowledge workers in 1985–86, the number increased to 522,000 software and services professionals by the end of 2001–02. It is estimated that out of these knowledge workers, almost 170,000 are working in the IT software and services export industry; nearly 106,000 are working in IT-enabled services; and over 230,000 are working in user organizations. [26]

In May 2024, the Australian government reported that 169,300 Australians are employed as software and applications programmers, 17% of whom are women. The role grew annually by 8,300 workers. [27]
https://en.wikipedia.org/wiki/Software_engineering_demographics
Software engineering professionalism is a movement to make software engineering a profession, with aspects such as degree and certification programs, professional associations, professional ethics, and government licensing. The field is a licensed discipline in Texas in the United States [1] (Texas Board of Professional Engineers, since 2013), in Australia through Engineers Australia [2] (course accreditation since 2001, not licensing), and in many provinces in Canada.

In 1993 the IEEE and ACM began a joint effort called JCESEP, which evolved into SWECC in 1998, to explore making software engineering into a profession. The ACM pulled out of SWECC in May 1999, objecting to its support for the Texas professionalization efforts to license software engineers at the state level. The ACM determined that the state of knowledge and practice in software engineering was too immature to warrant licensing, and that licensing would give false assurances of competence even if the body of knowledge were mature. [3] The IEEE continued to support making software engineering a branch of traditional engineering.

In Canada the Canadian Information Processing Society established the Information Systems Professional certification process. Also, by the late 1990s (1999 in British Columbia), the discipline of software engineering as a professional engineering discipline was officially created. This has caused some disputes between the provincial engineering associations and companies who call their developers software engineers, even though these developers have not been licensed by any engineering association. [4] In 1999, the Panel on Software Engineering was formed as part of the settlement between Engineers Canada and the Memorial University of Newfoundland over the school's use of the term "software engineering" in the name of a computer science program. Concerns were raised that the inappropriate use of the name "software engineering" to describe non-engineering programs could lead to student and public confusion, and ultimately threaten public safety. [5] The Panel issued recommendations to create a Software Engineering Accreditation Board, but the task force created to carry out the recommendations was unable to get the various stakeholders to agree to concrete proposals, resulting in separate accreditation boards. [6] [7]

Software engineering ethics is a large field. In some ways it began as an unrealistic attempt to define bugs as unethical. [citation needed] More recently it has been defined as the application of both computer science and engineering philosophy, principles, and practices to the design and development of software systems. Due to this engineering focus and the increased use of software in mission-critical and human-critical systems, where failure can result in large losses of capital and, more importantly, lives (as with the Therac-25 system), many ethical codes have been developed by societies, associations, and organizations. These entities, such as the ACM, IEEE, EGBC, [8] and the Institute for Certification of Computing Professionals (ICCP), [9] have formal codes of ethics. Adherence to the code of ethics is required as a condition of membership or certification. According to the ICCP, violation of the code can result in revocation of the certificate. Also, all engineering societies require conformance to their ethical codes; violation of the code results in the revocation of the license to practice engineering in the society's jurisdiction.
These codes of ethics usually have much in common. They typically relate the need to act consistently with the client's interest, the employer's interest, and, most importantly, the public's interest. They also outline the need to act with professionalism and to promote an ethical approach to the profession. A Software Engineering Code of Ethics [10] [11] has been approved by the ACM and the IEEE-CS as the standard for teaching and practicing software engineering. Examples of codes of conduct for Professional Engineers can be found in jurisdictions that have a designation for Professional Software Engineers.

Bill Joy argued that "better software" can only enable its privileged end users, make reality more "power-pointy" as opposed to more humane, and ultimately run away with itself so that "the future doesn't need us." He openly questioned the goals of software engineering in this respect, asking why it is not trying to be more ethical rather than more efficient. [citation needed] In his book Code and Other Laws of Cyberspace, Lawrence Lessig argues that computer code can regulate conduct in much the same way as the legal code. Lessig and Joy urge people to think about the consequences of the software being developed, not only in a functional way, but also in how it affects the public and society as a whole.

Overall, due to the youth of software engineering, many of its ethical codes and values have been borrowed from other fields, such as mechanical and civil engineering. However, there are many ethical questions that even these much older disciplines have not encountered. Questions about the ethical impact of internet applications, which have a global reach, were not encountered until recently, and other ethical questions are still to be encountered. This means the ethical codes for software engineering are a work in progress that will change and update as more questions arise. [citation needed]

From 2002, the IEEE Computer Society offered the Certified Software Development Professional (CSDP) certification exam (in 2015 this was replaced by several similar certifications). A group of experts from industry and academia developed the exam and maintained it. Donald Bagert, and later Stephen Tockey, headed the certification committee. The contents of the exam centered on the SWEBOK (Software Engineering Body of Knowledge) guide, with an additional emphasis on the Professional Practices and Software Engineering Economics knowledge areas (KAs). The motivation was to produce a structure at an international level for software engineering's knowledge areas. [14] [15]

Professional licensing has been criticized for many reasons. [3] The Bureau of Labor Statistics (BLS) classifies computer software engineers as a subcategory of "computer specialists", along with occupations such as computer scientist, programmer, database administrator, and network administrator. [16] The BLS classifies all other engineering disciplines, including computer hardware engineers, as engineers. [17] Many states prohibit unlicensed persons from calling themselves an engineer or from indicating branches or specialties not covered by licensing acts. [18] [19] [20] [21] [22] [23] [24] [25] [26] [27]
In many states, the title Engineer is reserved for individuals with a Professional Engineering license, indicating that they have shown a minimum level of competency through accredited engineering education, qualified engineering experience, and engineering board examinations. [28] [29] [20] [21] [22] [23] [24] [25] [26] [27]

In April 2013 the National Council of Examiners for Engineering and Surveying (NCEES) began offering a Professional Engineer (PE) exam for software engineering. The exam was developed in association with the IEEE Computer Society. [30] NCEES ended the exam in April 2019 due to lack of participation. [31] The American National Society of Professional Engineers provides a model law and lobbies legislatures to adopt occupational licensing regulations; some states also require continuing education.

In Texas, Donald Bagert became the first professional software engineer in the U.S. on September 4 or October 9, 1998. As of May 2002, Texas had issued 44 professional engineering licenses for software engineers. Rochester Institute of Technology granted the first software engineering bachelor's degrees in 2001; other universities have followed.

In Canada, the use of the job title Engineer is controlled in each province by self-regulating professional engineering organizations, which are also tasked with enforcement of the governing legislation. The intent is that any individual holding themselves out as an engineer has been verified to have been educated to a certain accredited level, and that their professional practice is subject to a code of ethics and peer scrutiny. It is also illegal to use the title Engineer in Canada unless an individual is licensed. IT professionals with degrees in other fields (such as computer science or information systems) are restricted from using the title Software Engineer, or the wording Software Engineer in a title, depending on their province or territory of residence. [citation needed] In some instances, cases have been taken to court regarding the illegal use of the protected title Engineer. [32] Most Canadians who earn professional software engineering licenses study software engineering, computer engineering, or electrical engineering. Often these people are already qualified to become professional engineers in their own fields but choose to be licensed as software engineers to differentiate themselves from computer scientists.

In British Columbia, the Limited Licence is granted by Engineers and Geoscientists British Columbia (EGBC), which also collects the fees for the Limited Licensee. In Ontario, the Professional Engineers Act [33] stipulates a minimum education level of a three-year diploma in technology from a College of Applied Arts and Technology or a degree in a relevant science area. [34] However, engineering undergraduates and all other applicants are not allowed to use the title of engineer until they complete the minimum work experience of four years, in addition to completing the Professional Practice Examination (PPE). If the applicant does not hold an undergraduate engineering degree, then they may have to take the Confirmatory Practice Exam or the Specific Examination Program, unless the exam requirements are waived by a committee. [35] [36]
A person must be granted the "professional engineer" licence to have the right to practise professional software engineering as a Professional Engineer in Ontario, by becoming licensed by Professional Engineers Ontario (PEO) and satisfying its requirements. Many graduates of software engineering programs are unable to obtain the PEO licence, since the entry-level work they qualify for after graduation is not considered related to engineering: working in a software company writing or testing code would not qualify, as such work experience does not fulfill the work experience guidelines the PEO sets. Software engineering programs in Ontario and other provinces also involve a series of courses in electrical, electronic, and computer engineering, qualifying the graduates to work in those fields as well.

A person must be granted the "engineer" licence to have the right to practise professional software engineering in Quebec, by becoming licensed by the Quebec order of engineers (in French: Ordre des ingénieurs du Québec – OIQ) and satisfying its requirements.

The term "engineer" in Canada is restricted to those who have graduated from a qualifying engineering programme. Some universities' "software engineering" programmes are under the engineering faculty and therefore qualify, for example at the University of Waterloo. Others, such as the University of Toronto, have "software engineering" in the computer science faculty, which does not qualify. This distinction has to do with the way the profession is regulated. Degrees in "engineering" must be accredited by a national panel and have certain specific requirements to allow the graduate to pursue a career as a professional engineer. "Computer science" degrees, even those with specialties in software engineering, do not have to meet these requirements, so computer science departments can generally teach a wider variety of topics, and students can graduate without the specific courses required to pursue a career as a professional engineer. [37]

Throughout the whole of Europe, suitably qualified engineers may obtain the professional European Engineer qualification. In France, the term ingénieur (engineer) is not a protected title and can be used by anyone, even by those who do not possess an academic degree. However, the title Ingénieur Diplômé (Graduate Engineer) is an official academic title that is protected by the government and is associated with the Diplôme d'Ingénieur, one of the most prestigious academic degrees in France.

The use of the title tölvunarfræðingur (computer scientist) is protected by law in Iceland. [38] Software engineering is taught in computer science departments in Icelandic universities. Icelandic law states that permission must be obtained from the Minister of Industry when the degree was awarded abroad, prior to use of the title. The title is awarded to those who have obtained a BSc degree in computer science from a recognized higher educational institution. [39]

In New Zealand, the Institution of Professional Engineers New Zealand (IPENZ), which licenses and regulates the country's chartered engineers (CPEng), recognizes software engineering as a legitimate branch of professional engineering and accepts applications from software engineers to obtain chartered status provided they have a tertiary degree in approved subjects. Software engineering is included, whereas computer science normally is not. [40]
https://en.wikipedia.org/wiki/Software_engineering_professionalism
A feature is "a prominent or distinctive user-visible aspect, quality, or characteristic of a software system or systems", as defined by Kang et al. [1] At the implementation level, "it is a structure that extends and modifies the structure of a given software in order to satisfy a stakeholder's requirement, to implement and encapsulate a design decision, and to offer a configuration option", as defined by Apel et al. [2]

The term feature means the same for software as it does for any kind of system. For example, the British Royal Navy's HMS Dreadnought (1906) was considered an important milestone in naval technology because of its advanced features that did not exist in pre-dreadnought battleships. [3] Feature also applies to computer hardware. In the early history of computers, devices such as Digital Equipment Corporation's PDP-7 minicomputer (created in 1964) were noted for having a wealth of features, such as being the first version of the PDP minicomputer series to use wire wrap and the first to use the proprietary DEC Flip-Chip module, which was invented in the same year. [4] [5]

Feature also applies to concepts such as a programming language. The Python programming language is well known for its feature of using whitespace characters (spaces and tabs) instead of curly braces to indicate different blocks of code, as illustrated in the sketch below. [6] Another similar high-level, object-oriented programming language, Ruby, is noteworthy for using the symbols "@" and "$" to mark different variable scopes, which the developers claim improves code readability. Its developers also claim that one of its important features is a high degree of flexibility. [7]

The Institute of Electrical and Electronics Engineers (IEEE) defines feature in the (obsolete) standard for software test documentation IEEE 829 as a "distinguishing characteristic of a software item (e.g., performance, portability, or functionality)". [8] Although feature is typically used for a positive aspect of a software system, a software bug can also be regarded as a feature, but one with negative value. The terminal emulator xterm has many notable features, including compatibility with the X Window System, the ability to emulate VT220 and VT320 [9] terminals with ANSI color, the ability to input escape sequences using a computer mouse or other similar device, and the ability to run on multiple different Unix-like operating systems (e.g. Linux, AIX, BSD, and HP-UX). [10]

Feature-rich describes a software system as having many options and capabilities. One mechanism for introducing feature-rich software to the user is the concept of progressive disclosure, a technique whereby features are introduced gradually as they become required, to reduce the potential confusion caused by displaying a wealth of features at once. [11] Sometimes, feature-rich is considered a negative attribute. The terms feature creep, software bloat, and featuritis refer to software that is overly feature-rich. [12] This type of excessive inclusion of features is in some cases a result of design by committee. [13] To counteract the tendency of software developers to add additional, unnecessary features, the Unix philosophy was developed in the 1970s by Bell Labs employees working on the Unix operating system, such as Ken Thompson and Dennis Ritchie. The philosophy can be summarized as: software programs should generally complete only one primary task, and "small is beautiful". [14] [15]
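As a brief illustration of the whitespace feature mentioned above, the following Python fragment uses indentation alone to delimit its blocks; the function and values are invented for the example.

def classify(numbers):
    # The function body, the loop, and the conditional branches are each
    # delimited purely by their indentation level, with no braces.
    labels = []
    for value in numbers:
        if value % 2 == 0:
            labels.append("even")
        else:
            labels.append("odd")
    return labels

print(classify([1, 2, 3]))  # prints ['odd', 'even', 'odd']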
https://en.wikipedia.org/wiki/Software_feature
Software incompatibility is a characteristic of software components or systems which cannot operate satisfactorily together on the same computer, or on different computers linked by a computer network. They may be components or systems which are intended to operate cooperatively or independently. Software compatibility is a characteristic of software components or systems which can operate satisfactorily together on the same computer, or on different computers linked by a computer network. It is possible that some software components or systems may be compatible in one environment and incompatible in another.

Consider sequential programs of the form: request resource A; request resource B; perform some action using A and B; release resource B; release resource A. A particular program might use a printer (resource A) and a file (resource B) in order to print the file. If several such programs P1, P2, P3, ... operate at the same time, then the first one to execute will block the others until the resources are released, and the programs will execute in turn. There will be no problem. It makes no difference whether a uni-processor or a multi-processor system is used, as it is the allocation of the resources which determines the order of execution. Note, however, that programmers are in general not constrained to write programs in a particular way, and even if there are guidelines, some may differ from the guidelines. A variant of the previous program may request the resources in the opposite order: request resource B; request resource A; perform some action using A and B; release resource A; release resource B. The resources A and B are the same as in the previous example – not simply dummy variables, as otherwise the programs are identical. As before, if there are several such programs, Q1, Q2, Q3, which run at the same time using resources as before, there will be no problem. However, if several of the Ps are set to run at the same time as several of the Qs, then a deadlock condition can arise: a P may hold resource A while waiting for B, while a Q holds B while waiting for A. Note that the deadlock need not arise, but may. Now neither P nor Q can proceed. This is one kind of example where programs may demonstrate incompatibility.

Another example of a different kind would be where one software component provides service to another. The incompatibility could be as simple as a change in the order of parameters between the software component requesting service and the component providing the service. This would be a kind of interface incompatibility. This might be considered a bug, but could be very hard to detect in some systems. Some interface incompatibilities can easily be detected during the build stage, particularly for strongly typed systems; others may be hard to find and may only be detected at run time, while others may be almost impossible to detect without a detailed program analysis.

Consider a component Q, called by a program P with arguments x and y, and a variant of Q, Q', which has similar behaviour except that it fails to terminate when y is 100. If P never calls Q with y set to 100, then using Q' instead is a compatible computation. However, if P calls Q with y set to 100, then using Q' instead will lead to a non-terminating computation. If we assume further that f(x) has a numeric value, then a component Q'' whose results diverge from Q's near such boundary values may cause problem behaviour. If P now calls Q'' with y = 101, then the results of the computation will be incorrect, but may not cause a program failure. If P calls Q'' with y = 102, then the results are unpredictable, and failure may arise, possibly due to divide by zero or other errors such as arithmetic overflow. If P calls Q'' with y = 103, then in the event that P uses the result in a division operation, a divide by zero failure may occur.
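The following minimal Python sketch is a hypothetical illustration of this kind of interface incompatibility; the functions q and q_variant, their formulas, and the boundary values are invented for the example and are not the listings from the original text.

def q(x, y):
    # Original service component: well-defined across the caller's input range.
    return x * (103 - y)

def q_variant(x, y):
    # Drop-in replacement: agrees with q for y below 101, then silently diverges.
    if y < 101:
        return x * (103 - y)
    return x / (103 - y)  # wrong formula: incorrect at y = 101, fails at y = 103

print(q(5, 99) == q_variant(5, 99))   # True: compatible for this input
print(q(5, 101), q_variant(5, 101))   # 10 vs 2.5: incorrect result, no failure
try:
    q_variant(5, 103)                 # division by zero: outright failure
except ZeroDivisionError as exc:
    print("failure:", exc)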
This example shows how one program P1 may be always compatible with another Q1, but how other programs Q1' and Q1'' can be constructed such that P1 and Q1' are sometimes incompatible, and P1 and Q1'' are always incompatible.

Sometimes programs P and Q can be running on the same computer, and the presence of one will inhibit the performance of the other. This can particularly happen where the computer uses virtual memory. The result may be that disk thrashing occurs, and one or both programs will have significantly reduced performance. This form of incompatibility can occur if P and Q are intended to cooperate, but can also occur if P and Q are completely unrelated but just happen to run at the same time. An example might be if P is a program which produces large output files, which happen to be stored in main memory, and Q is an anti-virus program which scans many files on the hard disk. If a memory cache is used for virtual memory, then it is possible for the two programs to interact adversely, and the performance of each will be drastically reduced. For some programs P and Q, their performance compatibility may depend on the environment in which they are run. They may be substantially incompatible if they are run on a computer with limited main memory, yet it may be possible to run them satisfactorily on a machine with more memory. Some programs may be performance incompatible in almost any environment.
https://en.wikipedia.org/wiki/Software_incompatibility
Software intelligence is insight into the inner workings and structural condition of software assets, produced by software designed to analyze database structure, software frameworks, and source code in order to better understand and control complex software systems in information technology environments. [1] [2] Similarly to business intelligence (BI), software intelligence is produced by a set of software tools and techniques for the mining of data and of the software's inner structure. The results are produced automatically and feed a knowledge base containing technical documentation and blueprints of the inner workings of applications, [3] which is made available to business and software stakeholders to make informed decisions, [4] measure the efficiency of software development organizations, communicate about software health, and prevent software catastrophes. [5]

The term software intelligence was used by Kirk Paul Lafler, an American engineer, entrepreneur, and consultant, who founded Software Intelligence Corporation in 1979. At that time, it was mainly related to SAS activities, in which he has been an expert since 1979. [6] In the early 1980s, Victor R. Basili participated in different papers detailing a methodology for collecting valid software engineering data, the evaluation of software development, and variations. [7] [8] In 2004, different software vendors in software analysis started using the term as part of their product naming and marketing strategy. Then in 2010, Ahmed E. Hassan and Tao Xie defined software intelligence as a "practice offering software practitioners up-to-date and pertinent information to support their daily decision-making processes", adding that "Software Intelligence should support decision-making processes throughout the lifetime of a software system". They went on to predict that software intelligence would have a "strong impact on modern software practice" in the upcoming decades. [9]

Because of the complexity and wide range of components and subjects involved in software, software intelligence is derived from different aspects of software, and the capabilities of software intelligence platforms span an increasing number of components. Some considerations must be made in order to successfully integrate the usage of software intelligence systems in a company. Ultimately the software intelligence system must be accepted and utilized by the users in order for it to add value to the organization; if the system does not add value to the users' mission, they simply do not use it, as stated by M. Storey in 2003. [20] At the code level and in system representation, software intelligence systems must provide different levels of abstraction: an abstract view for designing, explaining, and documenting, and a detailed view for understanding and analyzing the software system. [21] At the governance level, user acceptance of software intelligence covers different areas related to the inner functioning of the system as well as the output of the system.

Software intelligence has many applications in all businesses relating to the software environment, whether the software is for professionals, for individuals, or embedded. Depending on the association and the usage of the components, applications relate to different goals. Software intelligence is a high-level discipline that has been gradually growing to cover these applications, and there are several markets driving the need for it.
https://en.wikipedia.org/wiki/Software_intelligence
Software law refers to the legal remedies available to protect software-based assets. Software may, under various circumstances and in various countries, be restricted by patent or copyright or both. Most commercial software is sold under some kind of software license agreement. [1]
https://en.wikipedia.org/wiki/Software_law
Software metering is the monitoring and controlling of software usage for analytics and for the enforcement of agreements. It can involve either passive data collection or active restriction, [1] [2] and it can take different forms.
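As a minimal, hypothetical sketch of the passive form, the following Python fragment counts invocations of an application feature; the decorator name and in-memory counter are invented for illustration, and a real metering tool would persist and report such counts.

import functools
from collections import Counter

usage_counts = Counter()  # a real tool would persist and report these counts

def metered(func):
    # Passive metering: record each call without restricting it.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        usage_counts[func.__name__] += 1
        return func(*args, **kwargs)
    return wrapper

@metered
def export_report():
    return "report.pdf"

export_report()
export_report()
print(usage_counts)  # Counter({'export_report': 2})

An active variant would additionally refuse the call once a licensed quota had been exceeded.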
https://en.wikipedia.org/wiki/Software_metering
Software of unknown pedigree (SOUP) is software that was developed with an unknown process or methodology, or which has unknown or no safety-related properties. [1] In the medical device development standard IEC 62304, SOUP expands to software of unknown provenance, and in some contexts uncertain is used instead of unknown, but any combination of unknown/uncertain and provenance/pedigree refers to the same concept, all with the same abbreviation. The term SOUP is often used in the context of safety-critical and high-integrity systems such as medical software, especially in a medical device. A risk that SOUP poses is that it cannot be relied upon to perform safety-related functions, and it may prevent other software, hardware, or firmware from performing their safety-related functions. Addressing the risk involves insulating the safety-involved parts of a system from potentially undesirable effects caused by the SOUP. [2] Rather than prohibiting SOUP, additional controls are often imposed to mitigate risk. Practices may include static program analysis and review of the vendor's development process, design artifacts, and safety guidance. [3]
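As a tiny, hypothetical illustration of the static-analysis practice mentioned above, the following Python fragment uses the standard ast module to flag bare except clauses in a vendor snippet; real analyses used to qualify SOUP are far more extensive, and the inspected source here is invented.

import ast

# Hypothetical vendor snippet to inspect (normally read from a file).
source = """
try:
    start_pump()
except:
    pass
"""

tree = ast.parse(source)
for node in ast.walk(tree):
    # A bare 'except:' (no exception type) can hide safety-relevant failures.
    if isinstance(node, ast.ExceptHandler) and node.type is None:
        print(f"line {node.lineno}: bare except clause swallows all errors")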
https://en.wikipedia.org/wiki/Software_of_unknown_pedigree
Software vendor liability is the issue of product liability for software bugs that cause harm, such as security bugs [1] or bugs causing medical errors. [2] For the most part, this liability does not exist in the United States. [3] [4] [5] The possibility of liability is excluded for most software under the European Union's Product Liability Directive of 1985, but is explicitly provided for in the updated directive issued in 2024. [6]
https://en.wikipedia.org/wiki/Software_product_liability
A software quality assurance (QA) analyst, also referred to as a software quality analyst or simply a quality assurance (QA) analyst, is an individual who is responsible for applying the principles and practices of software quality assurance throughout the software development life cycle. [1] Software testing is one of many parts of the larger process of QA. [2] Testing is used to detect errors in a product, while QA also fixes the processes that resulted in those errors. [3] Software QA analysts may have professional certification from a software testing certification board, like the International Software Testing Qualifications Board (ISTQB).
https://en.wikipedia.org/wiki/Software_quality_assurance_analyst
Software requirements [1] for a system are the description of what the system should do: the service or services that it provides and the constraints on its operation. The IEEE Standard Glossary of Software Engineering Terminology defines a requirement as a condition or capability needed by a user to solve a problem or achieve an objective, or one that must be met or possessed by a system to satisfy a contract, standard, or specification. [2] The activities related to working with software requirements can broadly be broken down into elicitation, analysis, specification, and management. [3] Note that the wording software requirements is additionally used in software release notes to explain which software packages are required for a certain piece of software to be built, installed, or used. [1]

Elicitation is the gathering and discovery of requirements from stakeholders and other sources. A variety of techniques can be used, such as joint application design (JAD) sessions, interviews, document analysis, and focus groups. Elicitation is the first step of requirements development.

Analysis is the logical breakdown that proceeds from elicitation. Analysis involves reaching a richer and more precise understanding of each requirement and representing sets of requirements in multiple, complementary ways. Requirements triage, or prioritization of requirements, is another activity which often follows analysis. [4] In Agile software development this occurs in the planning phase, e.g. through planning poker, though it might differ depending on the context and nature of the project and of the requirements or product/service being built.

Specification involves representing and storing the collected requirements knowledge in a persistent and well-organized fashion that facilitates effective communication and change management. Use cases, user stories, functional requirements, and visual analysis models are popular choices for requirements specification.

Validation involves techniques to confirm that the correct set of requirements has been specified to build a solution that satisfies the project's business objectives.

Requirements change during projects, and there are often many of them. Management of this change becomes paramount to ensuring that the correct software is built for the stakeholders.

Taking into account that these activities may involve artifacts such as observation reports (user observation), questionnaires (interviews, surveys, and polls), use cases, and user stories; activities such as requirements workshops (charrettes), brainstorming, mind mapping, and role-playing; and even prototyping, [5] software products providing some or all of these capabilities can be used to help achieve these tasks. There is at least one author who explicitly advocates for mind mapping tools such as FreeMind and, alternatively, for the use of specification-by-example tools such as Concordion. [6] Additionally, the ideas and statements resulting from these activities may be gathered and organized with wikis and other collaboration tools such as Trello. The features actually implemented and the standards compliance vary from product to product.

A software requirements specification (SRS) document might be created using general-purpose software like a word processor or one of several specialized tools. Some of these tools can import, edit, export, and publish SRS documents. It may help to create SRS documents following a standardised structure and methodology, such as ISO/IEC/IEEE 29148:2018. Likewise, a tool may or may not use some standard to import or export requirements (such as ReqIF), or may not allow these exchanges at all.
Tools of this kind verify whether there are any errors in a requirements document according to some expected structure or standard. Tools of this kind compare two requirement sets according to some expected document structure and standard. Tools of this kind allow the merging and updating of requirement documents. Tools of this kind allow tracing requirements to other artifacts such as models and source code (forward traceability) or to previous ones such as business rules and constraints (backward traceability). Model-based systems engineering (MBSE) is the formalised application of modelling to support system requirements, design, analysis, verification and validation activities, beginning in the conceptual design phase and continuing throughout development and later lifecycle phases. It is also possible to take a model-based approach for some stages of the requirements engineering and a more traditional one for others; many combinations are possible. The level of formality and complexity depends on the underlying methodology involved (for instance, i* is much more formal than SysML, and even more formal than UML ). Tools in this category may provide some mix of the capabilities mentioned previously and others, such as requirement configuration management and collaboration. The features actually implemented and standards compliance vary from product to product. There are even more capable or general tools that support other stages and activities. They are classified as ALM tools.
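As a sketch of the forward and backward traceability checks such tools perform, the snippet below flags requirements with no downstream artifacts and artifacts that no requirement reaches. The identifiers are hypothetical; a real tool would read the links from an SRS database or a ReqIF export rather than from in-line dictionaries.

```python
# Hypothetical trace links from requirement IDs to downstream artifacts.
trace_links = {
    "REQ-001": ["model/login.uml", "src/auth.py", "test/test_auth.py"],
    "REQ-002": [],                       # forward-traceability gap
}
all_artifacts = {"model/login.uml", "src/auth.py", "test/test_auth.py",
                 "src/report.py"}        # "src/report.py" has no requirement

# Forward traceability: every requirement should reach at least one artifact.
untraced_reqs = [r for r, arts in trace_links.items() if not arts]

# Backward traceability: every artifact should be reachable from a requirement.
traced_artifacts = {a for arts in trace_links.values() for a in arts}
orphan_artifacts = all_artifacts - traced_artifacts

print("Requirements without artifacts:", untraced_reqs)              # ['REQ-002']
print("Artifacts without requirements:", sorted(orphan_artifacts))   # ['src/report.py']
```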
https://en.wikipedia.org/wiki/Software_requirements
Software safety (sometimes called software system safety) is an engineering discipline that aims to ensure that software, which is used in safety-related systems (i.e. safety-related software), does not contribute to any hazards such a system might pose. There are numerous standards that govern how safety-related software should be developed and assured in various domains. Most of them classify software according to its criticality and propose techniques and measures that should be employed during development and assurance. System safety is the overarching discipline that aims to achieve safety by reducing risks in technical systems to an acceptable level. According to the widely adopted system safety standard IEC 61508 , [ 1 ] safety is “freedom from unacceptable risk of harm”. As software alone – which can be considered as pure information – cannot cause any harm by itself, the term software safety is sometimes dismissed and replaced by “software system safety” (e.g. the Joint Software Systems Safety Engineering Handbook [ 8 ] and MIL-STD-882E [ 9 ] use this terminology). This stresses that software can only cause harm in the context of a technical system (see NASA Software Safety Guidebook, [ 10 ] chapter 2.1.2) that has some effect on its environment. The goal of software safety is to make sure that software does not cause or contribute to any hazards in the system where it is used and that it can be assured and demonstrated that this is the case. This is typically achieved by the assignment of a "safety level" to the software and the selection of appropriate processes for the development and assurance of the software. One of the first steps when creating safety-related software is to classify it according to its safety criticality. Various standards suggest different levels, e.g. Software Levels A-E in DO-178C , [ 4 ] SIL (Safety Integrity Level) 1-4 in IEC 61508, [ 1 ] and ASIL (Automotive Safety Integrity Level) A-D in ISO 26262 . [ 2 ] The assignment is typically done in the context of an overarching system, where the worst-case consequences of software failures are investigated. For example, the automotive standard ISO 26262 requires the performance of a Hazard and Risk Assessment ("HARA") on vehicle level to derive the ASIL of the software executed on a component. It is essential to use an adequate development and assurance process, with appropriate methods and techniques, commensurate with the safety criticality of the software. Software safety standards recommend, and sometimes forbid, the use of such methods and techniques, depending on the safety level. Most standards suggest a lifecycle model (e.g. EN 50716 [ 3 ] and IEC 61508 [ 1 ] suggest – among others – a V-model) and prescribe required activities to be executed during the various phases of the software lifecycle. For example, IEC 61508 requires that software is specified adequately (e.g. by using formal or semi-formal methods), that the software design should be modular and testable, that adequate programming languages are used, that documented code reviews are performed, and that testing should be performed on several layers to achieve an adequately high test coverage. The focus on the software development and assurance process stems from the fact that software quality (and hence safety) is heavily influenced by the software process, as suggested by IEC 25010. [ 11 ] It is claimed that the process influences the internal software quality attributes (e.g. code quality) and these in turn influence external software quality attributes (e.g. functionality and reliability).
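To make the classification step concrete, the following minimal Python sketch mimics the ISO 26262 risk-graph style of ASIL determination from severity (S), exposure (E) and controllability (C). The additive shorthand used here reproduces the pattern of the standard's determination table but is not the normative text, and the function is an illustration only.

```python
def asil(severity: int, exposure: int, controllability: int) -> str:
    """Illustrative ASIL determination in the style of the ISO 26262 risk graph.

    severity S1-S3, exposure E1-E4, controllability C1-C3. The standard
    defines the mapping as a table; that table happens to follow the
    additive pattern used here (a common shorthand, not the normative text).
    """
    if not (1 <= severity <= 3 and 1 <= exposure <= 4 and 1 <= controllability <= 3):
        raise ValueError("expected S1-S3, E1-E4, C1-C3")
    score = severity + exposure + controllability
    return {7: "ASIL A", 8: "ASIL B", 9: "ASIL C", 10: "ASIL D"}.get(score, "QM")


print(asil(3, 4, 3))  # ASIL D: worst-case severity, exposure and controllability
print(asil(1, 2, 1))  # QM: no ASIL required, normal quality management suffices
```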
The following activities and topics addressed in the development process contribute to safe software. Comprehensive documentation of the complete development and assurance process is required by virtually all software safety standards. Typically, this documentation is reviewed and endorsed by third parties and is therefore a prerequisite for the approval of safety-related software. The documentation ranges from various planning documents, requirements specifications, software architecture and design documentation, and test cases on various abstraction levels, to tool qualification reports, review evidence, verification and validation results, etc. Fig. C.2 in EN 50716 [ 3 ] lists 32 documents that need to be created along the development lifecycle. Traceability is the practice of establishing relationships between different types of requirements and between requirements and design, implementation and testing artefacts. According to EN 50716, [ 3 ] the objective “is to ensure that all requirements can be shown to have been properly met and that no untraceable material has been introduced”. By documenting and maintaining traceability, it becomes possible to follow e.g. a safety requirement into the design of a system (to verify whether it is considered adequately), further on into the software source code (to verify whether the code fulfils the requirement), and on to an appropriate test case and test execution (to verify whether the safety requirement has been tested adequately). Safety standards can have requirements directly affecting the implementation of the software in source code, such as the selection of an appropriate programming language, the size and complexity of functions, the use of certain programming constructs and the need for coding standards. Part 3 of IEC 61508 contains such requirements and recommendations. Appropriate test coverage needs to be demonstrated, i.e. depending on the safety level, more rigorous testing schemes have to be applied. A well-known requirement regarding test coverage depending on the software level is given in DO-178C. [ 4 ] Software safety standards typically require some activities to be executed with independence, i.e. by a different person, by a person with different reporting lines, or even by an independent organization. This ensures that conflicts of interest are avoided and increases the chances that faults (e.g. in the software design) are identified. For example, EN 50716 [ 3 ] Figure 2 requires the roles “implementer”, “tester” and “verifier” to be held by different people, the role “validator” to be held by a person with a different reporting line, and the role “assessor” to be held by a person from a different organizational unit. DO-178C [ 4 ] and DO-278A [ 5 ] require several activities (e.g. test coverage verification, assurance activities) to be executed “with independence”, with independence being defined as “separation of responsibilities which ensures the accomplishment of objective evaluation”. In system safety engineering, it is common to allocate upper bounds for failure rates of subsystems or components. It must then be shown that these subsystems or components do not exceed their allocated failure rates, or otherwise redundancy or other fault tolerance mechanisms must be employed. This approach is not practicable for software, as software failure rates cannot be predicted with any confidence.
Although significant research in the field of software reliability has been conducted (see for example Lyu (1996) [ 12 ] ), current software safety standards do not require any of these methods to be used, and some even discourage their usage; e.g. DO-178C [ 4 ] (p. 73) states: “Many methods for predicting software reliability based on developmental metrics have been published, for example, software structure, defect detection rate, etc. This document does not provide guidance for those types of methods, because at the time of writing, currently available methods did not provide results in which confidence can be placed.” ARP 4761 [ 13 ] clause 4.1.2 states that software design errors “are not the same as hardware failures. Unlike hardware failures, probabilities of such errors cannot be quantified.” Software safety and security may have differing interests in some cases. On the one hand, safety-related software that is not secure can pose a safety risk; on the other hand, some security practices (e.g. frequent and timely patching) contradict established safety practices (rigorous testing and verification before anything is changed in an operational system). Software that employs artificial intelligence techniques such as machine learning follows a radically different lifecycle. In addition, its behavior is harder to predict than that of a traditionally developed system. Hence, the question of whether and how these technologies can be used is under current investigation. Currently, standards generally do not endorse their use. For example, EN 50716 (Table A.3) states that artificial intelligence and machine learning are not recommended for any safety integrity level. Agile software development , which typically features many iterations, is sometimes still stigmatized as being too chaotic for safety-related software development. This might be partially caused by statements such as "working software over comprehensive documentation", which is found in the manifesto for agile development. [ 14 ] Although most software safety standards present the software lifecycle in the traditional waterfall -like sequence, some do contain statements that allow for more flexible lifecycles. DO-178C states that "The processes of a software life cycle may be iterative, that is, entered and reentered." EN 50716 contains Annex C, which shows how iterative development lifecycles can be used in line with the requirements of the standard. This article incorporates public domain material from Software handbook . United States Army .
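The DO-178C test coverage objectives referenced above are commonly summarized by software level: Level A adds modified condition/decision coverage (MC/DC) on top of decision and statement coverage, Level B requires decision and statement coverage, Level C requires statement coverage only, and Levels D and E carry no structural coverage objective. The sketch below encodes this summary; it paraphrases secondary descriptions of the standard, not its normative tables.

```python
# Structural code coverage objectives by DO-178C software level
# (levels D and E have no structural coverage objectives).
COVERAGE_OBJECTIVES = {
    "A": ["statement", "decision", "MC/DC"],
    "B": ["statement", "decision"],
    "C": ["statement"],
    "D": [],
    "E": [],
}


def required_coverage(level: str) -> list[str]:
    """Return the structural coverage criteria demanded for a software level."""
    return COVERAGE_OBJECTIVES[level.upper()]


print(required_coverage("a"))  # ['statement', 'decision', 'MC/DC']
```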
https://en.wikipedia.org/wiki/Software_safety
Software installed in medical devices is assessed for health and safety issues according to international standards. Software classification is based on the potential for hazard(s) that could cause injury to the user or patient. [ 1 ] Per IEC 62304 :2006+A1:2015, the software can be divided into three separate classes: Class A (no injury or damage to health is possible), Class B (non-serious injury is possible), and Class C (death or serious injury is possible). For the purpose of this classification, serious injury is defined as injury or illness that directly or indirectly is life-threatening; results in permanent impairment of a body function or permanent damage to a body structure; or necessitates medical or surgical intervention to prevent permanent impairment of a body function or permanent damage to a body structure.
https://en.wikipedia.org/wiki/Software_safety_classification
Software studies is an emerging interdisciplinary research field which studies software systems and their social and cultural effects. The implementation and use of software has been studied in recent fields such as cyberculture , Internet studies , new media studies , and digital culture , yet prior to software studies, software was rarely addressed as a distinct object of study. To study software as an artifact, software studies draws upon methods and theory from the digital humanities and from computational perspectives on software. Methodologically, software studies usually differs from the approaches of computer science and software engineering , which concern themselves primarily with software in information theory and in practical application; however, these fields all share an emphasis on computer literacy , particularly in the areas of programming and source code . This emphasis on analysing software sources and processes (rather than interfaces) often distinguishes software studies from new media studies, which is usually restricted to discussions of interfaces and observable effects. The conceptual origins of software studies include Marshall McLuhan 's focus on the role of media in themselves, rather than the content of media platforms, in shaping culture. Early references to the study of software as a cultural practice appear in Friedrich Kittler 's essay "Es gibt keine Software", [ 1 ] Lev Manovich 's Language of New Media , [ 2 ] and Matthew Fuller 's Behind the Blip: Essays on the Culture of Software . [ 3 ] Much of the impetus for the development of software studies has come from video game studies , particularly platform studies, the study of video games and other software artifacts in their hardware and software contexts. New media art, software art , motion graphics , and computer-aided design are also significant software-based cultural practices, as is the creation of new protocols and platforms. The first conference events in the emerging field were Software Studies Workshop 2006 and SoftWhere 2008. [ 4 ] [ 5 ] In 2008, [ citation needed ] MIT Press launched a Software Studies book series [ 6 ] with an edited volume of essays (Fuller's Software Studies: A Lexicon ), [ 7 ] and the first academic program was launched ( Lev Manovich , Benjamin H. Bratton , and Noah Wardrip-Fruin 's "Software Studies Initiative" at U. California San Diego). [ 8 ] [ verification needed ] In 2011, a number of mainly British researchers established Computational Culture , an open-access peer-reviewed journal. The journal provides a platform for "inter-disciplinary enquiry into the nature of the culture of computational objects, practices, processes and structures." [ 9 ] Software studies is closely related to a number of other emerging fields in the digital humanities that explore functional components of technology from a social and cultural perspective. Software studies' focus is at the level of the entire program, specifically the relationship between interface and code. Notably related are critical code studies , which is more closely attuned to the code rather than the program, [ 10 ] and platform studies, which investigates the relationships between hardware and software. [ 11 ] [ 12 ]
https://en.wikipedia.org/wiki/Software_studies
A software system is a system of intercommunicating components based on software forming part of a computer system (a combination of hardware and software). It "consists of a number of separate programs , configuration files, which are used to set up these programs, system documentation , which describes the structure of the system, and user documentation , which explains how to use the system". [ 1 ] A software system differs from a computer program or software. While a computer program is generally a set of instructions ( source , or object code ) that performs a specific task, a software system is more of an encompassing concept with many more components, such as specification, test results , end-user documentation, maintenance records, etc. [ 2 ] The use of the term software system is at times related to the application of systems theory approaches in the context of software engineering . A software system consists of several separate computer programs and associated configuration files , documentation , etc., that operate together. [ 1 ] The concept is used in the study of large and complex software, because it focuses on the major components of software and their interactions . It is also related to the field of software architecture . Software systems are an active area of research for groups interested in software engineering in particular and systems engineering in general. [ 3 ] Academic journals like the Journal of Systems and Software (published by Elsevier ) are dedicated to the subject. [ 4 ] The ACM Software System Award is an annual award that honors people or an organization "for developing a system that has had a lasting influence, reflected in contributions to concepts, in commercial acceptance, or both". [ 5 ] It has been awarded by the Association for Computing Machinery (ACM) since 1983, with a cash prize sponsored by IBM . Major categories of software systems include those based on application software development , programming software , and system software , although the distinction can sometimes be difficult. Examples of software systems include operating systems , computer reservations systems , air traffic control systems, military command and control systems, telecommunication networks , content management systems , database management systems , expert systems , embedded systems , etc.
https://en.wikipedia.org/wiki/Software_system
Software visualization [ 1 ] [ 2 ] or software visualisation refers to the visualization of information of and related to software systems—either the architecture of their source code or metrics of their runtime behavior —and their development process by means of static, interactive or animated 2-D or 3-D [ 3 ] visual representations of their structure, [ 4 ] execution, [ 5 ] behavior, [ 6 ] and evolution. Software visualization uses a variety of information available about software systems; key categories include information about a system's structure, runtime behavior, and evolution. The objectives of software visualization are to support the understanding of software systems (i.e., their structure) and algorithms (e.g., by animating the behavior of sorting algorithms), as well as the analysis and exploration of software systems and their anomalies (e.g., by showing classes with high coupling ) and their development and evolution. One of the strengths of software visualization is to combine and relate information about software systems that is not inherently linked, for example by projecting code changes onto software execution traces. [ 7 ] Software visualization can be used as a tool and technique to explore and analyze software system information, e.g., to discover anomalies, similar to the process of visual data mining . [ 8 ] For example, software visualization is used for monitoring activities such as code quality or team activity. [ 9 ] Visualization is not inherently a method for software quality assurance . [ citation needed ] Software visualization contributes to software intelligence by allowing practitioners to discover and master the inner components of software systems. Tools for software visualization might be used to visualize source code and quality defects during software development and maintenance activities. There are different approaches to map source code to a visual representation, such as software maps. [ 10 ] Their objective includes, for example, the automatic discovery and visualization of quality defects in object-oriented software systems and services. Commonly, they visualize the direct relationship of a class and its methods with other classes in the software system and mark potential quality defects. A further benefit is the support for visual navigation through the software system. More or less specialized graph drawing software is used for software visualization. A small-scale 2003 survey of researchers active in the reverse engineering and software maintenance fields found that a wide variety of visualization tools were used, including general purpose graph drawing packages like GraphViz and GraphEd, UML tools like Rational Rose and Borland Together , and more specialized tools like Visualization of Compiler Graphs (VCG) and Rigi . [ 11 ] : 99–100 The range of UML tools that can act as a visualizer by reverse engineering source code is by no means short; a 2007 book noted that besides the two aforementioned tools, ESS-Model, BlueJ , and Fujaba also have this capability, and that Fujaba can also identify design patterns . [ 12 ]
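As a minimal illustration of how general-purpose graph drawing packages such as GraphViz are applied here, the sketch below emits a Graphviz DOT description of class-level dependencies. The class names and dependency data are hypothetical; a real tool would extract them by parsing source code or reverse engineering compiled artifacts.

```python
# Hypothetical class-dependency data extracted from a codebase.
dependencies = {
    "OrderService": ["OrderRepository", "PaymentGateway"],
    "OrderRepository": ["Database"],
    "PaymentGateway": ["HttpClient"],
}


def to_dot(deps: dict[str, list[str]]) -> str:
    """Emit a Graphviz DOT digraph describing class-level dependencies."""
    lines = ["digraph classes {", "  rankdir=LR;"]
    for src, targets in deps.items():
        for dst in targets:
            lines.append(f'  "{src}" -> "{dst}";')
    lines.append("}")
    return "\n".join(lines)


print(to_dot(dependencies))  # pipe into `dot -Tsvg` to render the diagram
```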
https://en.wikipedia.org/wiki/Software_visualization
Sofya Aleksandrovna Yanovskaya (also Janovskaja ; Russian : Софи́я Алекса́ндровна Яно́вская ; 31 January 1896 – 24 October 1966) was a Soviet mathematician , philosopher and historian , specializing in the history of mathematics , mathematical logic , and philosophy of mathematics . She is best known for her efforts in restoring research in mathematical logic in the Soviet Union and for publishing and editing the mathematical works of Karl Marx . Yanovskaya was born in Pruzhany , a town near Brest , into the Jewish family of the accountant Alexander Neimark. From 1915 to 1918, she studied at a women's college in Odessa , [ 1 ] where she became a communist . She worked as a party official until 1924, when she started teaching at the Institute of Red Professors . With the exception of the war years (1941–1945), she worked at Moscow State University until retirement. Engels had noted in his writings that Karl Marx had written some mathematics. Yanovskaya found Marx's Mathematical Manuscripts and arranged for their first publication, in Russian, in 1933. [ 2 ] She received her doctoral degree in 1935. Her work on Karl Marx's mathematical manuscripts began in the 1930s and may have had some influence on the study of non-standard analysis in China . [ 3 ] In academia she is now most remembered for her work on the history and philosophy of mathematics, as well as for her influence on a young generation of researchers. When Ludwig Wittgenstein visited the Soviet Union in 1935, she persuaded him to give up his idea of relocating there. [ 4 ] [ 5 ] A fuller publication of Marx's mathematical work, which Yanovskaya had prepared, appeared in 1968. [ 2 ] She died of diabetes in Moscow . For her work, Yanovskaya received the Order of Lenin and other medals.
https://en.wikipedia.org/wiki/Sofya_Yanovskaya
Empirically derived NDVI products have been shown to be unstable, varying with soil colour, soil moisture , and saturation effects from high-density vegetation. In an attempt to improve NDVI, Huete [ 1 ] developed a vegetation index that accounted for the differential red and near-infrared extinction through the vegetation canopy . The index is a transformation technique that minimizes soil brightness influences from spectral vegetation indices involving red and near-infrared (NIR) wavelengths. The index is given as SAVI = ((NIR − Red) / (NIR + Red + L)) × (1 + L), where L is a canopy background adjustment factor. An L value of 0.5 in reflectance space was found to minimize soil brightness variations and eliminate the need for additional calibration for different soils. The transformation was found to nearly eliminate soil-induced variations in vegetation indices. [ 1 ]
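A minimal numerical sketch of the index, with illustrative reflectance values, shows how the L term damps the index relative to NDVI over bright soil:

```python
def ndvi(nir: float, red: float) -> float:
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)


def savi(nir: float, red: float, L: float = 0.5) -> float:
    """Soil-Adjusted Vegetation Index (Huete 1988); L = 0.5 suits most soils."""
    return (nir - red) / (nir + red + L) * (1 + L)


# Reflectances for sparse vegetation over bright soil (illustrative values):
print(round(ndvi(0.30, 0.15), 3))  # 0.333
print(round(savi(0.30, 0.15), 3))  # 0.237 - soil brightness influence damped by L
```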
https://en.wikipedia.org/wiki/Soil-adjusted_vegetation_index
The soil-plant-atmosphere continuum ( SPAC ) is the pathway for water moving from soil through plants to the atmosphere . Continuum in the description highlights the continuous nature of water connection through the pathway. The low water potential of the atmosphere, and the relatively higher (i.e. less negative) water potential inside leaves, leads to a diffusion gradient across the stomatal pores of leaves, drawing water out of the leaves as vapour. [ 1 ] As water vapour transpires out of the leaf, further water molecules evaporate off the surface of mesophyll cells to replace the lost molecules, since water in the air inside leaves is maintained at saturation vapour pressure . Water lost at the surface of cells is replaced by water from the xylem , which, due to the cohesion-tension properties of water in the xylem of plants, pulls additional water molecules through the xylem from the roots toward the leaf. The transport of water along this pathway occurs in components that are variously defined among scientific disciplines. SPAC integrates these components and is defined as a: ...concept recognising that the field with all its components (soil, plant, animals and the ambient atmosphere taken together) constitutes a physically integrated, dynamic system in which the various flow processes involving energy and matter occur simultaneously and independently like links in the chain. [ 2 ] This characterises the state of water in different components of the SPAC as expressions of the energy level or water potential of each. Modelling of water transport between components relies on SPAC, as do studies of water potential gradients between segments.
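A common way to model flow along the continuum, in the tradition of van den Honert's electrical-circuit analogy, treats each segment as a hydraulic resistance: the flux through a segment equals the water potential drop across it divided by its resistance, and at steady state the same flux passes through every segment. The sketch below uses hypothetical potentials and resistances chosen only to make the arithmetic visible.

```python
# Water potentials in MPa (illustrative: moist soil, leaf under mild
# transpirational demand); resistances in arbitrary consistent units.
psi_soil, psi_root, psi_leaf = -0.05, -0.30, -1.20

r_soil_to_root = 2.0   # resistance of the soil-to-root segment
r_root_to_leaf = 7.2   # resistance of the root-to-leaf xylem pathway

# Ohm's-law analogy: flux = potential drop / resistance. At steady state
# each segment carries the same flux, so drops scale with resistances.
flux_1 = (psi_soil - psi_root) / r_soil_to_root
flux_2 = (psi_root - psi_leaf) / r_root_to_leaf
print(round(flux_1, 3), round(flux_2, 3))  # 0.125 0.125 - equal at steady state
```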
https://en.wikipedia.org/wiki/Soil-plant-atmosphere_continuum
Ground–structure interaction ( SSI ) consists of the interaction between soil (ground) and a structure built upon it. It is primarily an exchange of mutual stress , whereby the movement of the ground-structure system is influenced by both the type of ground and the type of structure. This is especially applicable to areas of seismic activity. Various combinations of soil and structure can either amplify or diminish movement and subsequent damage. A building on stiff ground rather than deformable ground will tend to suffer greater damage. A second interaction effect, tied to the mechanical properties of the soil, is the sinking of foundations, worsened by a seismic event. This phenomenon is called soil liquefaction . Most civil engineering structures involve some type of structural element in direct contact with ground. When external forces, such as earthquakes , act on these systems, neither the structural displacements nor the ground displacements are independent of each other. The process in which the response of the soil influences the motion of the structure and the motion of the structure influences the response of the soil is termed soil-structure interaction (SSI) . [ 1 ] Conventional structural design methods neglect the SSI effects. Neglecting SSI is reasonable for light structures in relatively stiff soil, such as low-rise buildings and simple rigid retaining walls. The effect of SSI, however, becomes prominent for heavy structures resting on relatively soft soils, for example nuclear power plants, high-rise buildings and elevated highways on soft soil. [ 2 ] Damage sustained in recent earthquakes , such as the 1995 Kobe earthquake , has also highlighted that the seismic behavior of a structure is highly influenced not only by the response of the superstructure, but also by the response of the foundation and the ground as well. [ 3 ] Hence, modern seismic design codes, such as Standard Specifications for Concrete Structures: Seismic Performance Verification JSCE 2005, [ 4 ] stipulate that response analysis should be conducted by taking into consideration the whole structural system, including superstructure, foundation and ground. It is conventionally believed that SSI is a purely beneficial effect, and that it can conveniently be neglected for conservative design. SSI provisions of seismic design codes are optional and allow designers to reduce the design base shear of buildings by considering soil-structure interaction (SSI) as a beneficial effect. The main idea behind the provisions is that the soil-structure system can be replaced with an equivalent fixed-base model with a longer period and usually a larger damping ratio. [ 5 ] [ 6 ] Most design codes use oversimplified design spectra, which attain constant acceleration up to a certain period and thereafter decrease monotonically with period. Considering soil-structure interaction makes a structure more flexible, thus increasing the natural period of the structure compared to the corresponding rigidly supported structure. Moreover, considering the SSI effect increases the effective damping ratio of the system. The smooth idealization of the design spectrum suggests a smaller seismic response with the increased natural period and effective damping ratio due to SSI, which is the main justification for seismic design codes reducing the design base shear when the SSI effect is considered. The same idea also forms the basis of current common seismic design codes such as ASCE 7-10 and ASCE 7-16.
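The period lengthening behind the equivalent fixed-base idea is often written, following Veletsos and Meek, as T̃/T = sqrt(1 + k/Kx + k·h²/Kθ), where k is the structure's lateral stiffness, h its effective height, and Kx and Kθ the foundation's sway and rocking stiffnesses. The sketch below evaluates this classical expression with hypothetical values; it illustrates the concept, not any code's full procedure.

```python
import math


def flexible_base_period(T_fixed: float, k_struct: float, h_eff: float,
                         K_x: float, K_theta: float) -> float:
    """Classical flexible-base period lengthening (after Veletsos & Meek).

    T_fixed : fixed-base fundamental period [s]
    k_struct: structure lateral stiffness [N/m]
    h_eff   : effective height of the structure [m]
    K_x     : foundation horizontal (sway) stiffness [N/m]
    K_theta : foundation rocking stiffness [N*m/rad]
    """
    ratio = math.sqrt(1 + k_struct / K_x + k_struct * h_eff ** 2 / K_theta)
    return T_fixed * ratio


# Hypothetical mid-rise building on soft soil: period lengthens from 0.8 s.
print(round(flexible_base_period(0.8, 2.0e8, 20.0, 1.2e9, 9.0e11), 2))  # 0.9
```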
Although the mentioned idea, i.e. the reduction in base shear, works well for linear soil-structure systems, it has been shown that it cannot appropriately capture the effect of SSI on yielding systems. [ 7 ] More recently, Khosravikia et al. [ 8 ] evaluated the consequences of practicing the SSI provisions of ASCE 7-10 and those of the 2015 National Earthquake Hazards Reduction Program (NEHRP), which form the basis of the 2016 edition of the seismic design standard provided by the ASCE. They showed that the SSI provisions of both NEHRP and ASCE 7-10 result in unsafe designs for structures with surface foundations on moderately soft soils, but NEHRP slightly improves upon the current provisions for squat structures. For structures on very soft soils, both provisions yield conservative designs, with NEHRP being even more conservative. Finally, both provisions yield near-optimal designs for other systems. Using rigorous numerical analyses, Mylonakis and Gazetas [ 9 ] have shown that the increase in the natural period of a structure due to SSI is not always beneficial, as the simplified design spectra suggest. Soft soil sediments can significantly elongate the period of seismic waves, and the increase in the natural period of a structure may lead to resonance with long-period ground vibration. Additionally, the study showed that ductility demand can significantly increase with the increase in the natural period of the structure due to the SSI effect. The permanent deformation and failure of soil may further aggravate the seismic response of the structure. When a structure is subjected to an earthquake excitation, it interacts with the foundation and the soil, and thus changes the motion of the ground. Soil-structure interaction can broadly be divided into two phenomena: a) kinematic interaction and b) inertial interaction. Earthquake ground motion causes soil displacement known as free-field motion. However, a foundation embedded in the soil will not follow the free-field motion. This inability of the foundation to match the free-field motion causes the kinematic interaction. On the other hand, the mass of the superstructure transmits inertial force to the soil, causing further deformation in the soil, which is termed inertial interaction. [ 2 ] At low levels of ground shaking, the kinematic effect is more dominant, causing lengthening of the period and an increase in radiation damping. However, with the onset of stronger shaking, near-field soil modulus degradation and soil-pile gapping limit radiation damping, and inertial interaction becomes predominant, causing excessive displacements and bending strains concentrated near the ground surface, resulting in pile damage near ground level. [ 2 ] Observations from recent earthquakes have shown that the response of the foundation and soil can greatly influence the overall structural response. There are several cases of severe damage in structures due to SSI in past earthquakes . Yashinsky [ 10 ] cites damage in a number of pile-supported bridge structures due to the SSI effect in the Loma Prieta earthquake in San Francisco in 1989. Extensive numerical analysis carried out by Mylonakis and Gazetas [ 9 ] has attributed SSI as one of the reasons behind the dramatic collapse of the Hanshin Expressway in the 1995 Kobe earthquake .
The main types of foundations are selected based upon several building characteristics. Foundation soils are classified according to their mechanical properties: in Italy , for instance, the earthquake -proofing norm Ordinanza 3274/2003 identifies several soil categories. The type of foundation is selected according to the type of ground; for instance, in the case of homogeneous rock formations connected plinths are selected, while in the case of very low-quality grounds plates are chosen. For further information about the various ways of building foundations, see foundation (architecture) . Both grounds and structures can be more or less deformable; their combination may or may not cause the amplification of the seismic effects on the structure. The ground, in fact, acts as a filter with respect to the main seismic waves , as stiffer soil fosters high-frequency seismic waves while less compact soil accommodates lower-frequency waves. Therefore, a stiff building, characterized by a high fundamental frequency , suffers amplified damage when built on stiff ground and then subjected to higher frequencies. For instance, suppose there are two buildings that share the same high stiffness . They stand on two different soil types: the first stiff and rocky, the second sandy and deformable. If subjected to the same seismic event, the building on the stiff ground suffers greater damage. The second interaction effect, tied to the mechanical properties of the soil, concerns the lowering (sinking) of foundations, worsened by the seismic event itself, especially on less compact grounds. This phenomenon is called soil liquefaction . The methods most used to mitigate the problem of ground-structure interaction consist of the employment of the previously mentioned isolation systems and of ground-reinforcement techniques, which are adopted above all on low-quality grounds (categories D and E). The most widespread techniques are jet grouting and piling. The jet-grouting technique consists of injecting liquid concrete into the subsoil by means of a drill . When this concrete hardens, it forms a sort of column that consolidates the surrounding soil. This process is repeated over all areas of the structure. The piling technique consists of using piles which, once inserted in the ground, support the foundation and the building above by transferring the loads to soil layers that are deeper and therefore more compact and movement-resistant.
https://en.wikipedia.org/wiki/Soil-structure_interaction
Soil Biology and Biochemistry is a monthly peer-reviewed scientific journal established in 1969 and published by Elsevier . It focuses on research papers that explain biological processes in soil. The founding editor-in-chief was John Saville Waid , and the current editors-in-chief are Karl Ritz from the University of Nottingham and Josh Schimel from the University of California Santa Barbara , who have been in position since 2020. The journal covers a broad range of topics within soil biology, including microbial and faunal activities, biogeochemical cycles, and ecosystem processes. It is recognized for its contributions to understanding soil health, fertility, and the role of soil organisms in maintaining ecological balance. Soil Biology and Biochemistry is indexed in several major databases, including Scopus , Web of Science , and PubMed . It has a significant impact factor, which has consistently increased over the years, indicating its relevance and influence in the field of soil science. According to the Journal Citation Reports , the journal had an impact factor of 9.8 in 2023, reflecting its high citation rate. [ 1 ] [ 2 ] The journal has published pioneering studies on soil microbial ecology, nutrient cycling, and the impact of environmental changes on soil processes. It serves as a critical resource for researchers, agronomists, and environmental scientists.
https://en.wikipedia.org/wiki/Soil_Biology_and_Biochemistry
Soil Moisture and Ocean Salinity ( SMOS ) is a satellite which forms part of ESA 's Living Planet Programme . It is intended to provide new insights into Earth's water cycle and climate . In addition, it is intended to provide improved weather forecasting and monitoring of snow and ice accumulation. [ 3 ] [ 4 ] [ 5 ] [ 6 ] The project was proposed in November 1998; in 2004 the project passed ESA phase "C/D" [ 7 ] and, after several delays, it was launched on 2 November 2009 from Plesetsk Cosmodrome on a Rockot launch vehicle. [ 8 ] The first data from the MIRAS ( Microwave Imaging Radiometer using Aperture Synthesis) instrument were received on 20 November 2009. [ 9 ] The SMOS programme cost is about €315 million ($465 million; £280 million). It is led by ESA but with significant input from French and Spanish interests. [ 8 ] The satellite is part of ESA's Earth Explorer programme – satellite missions that are performing innovative science in obtaining data on issues of pressing environmental concern. The first is already complete – a mission called GOCE , which mapped variations in the pull of gravity across the Earth's surface. SMOS was the second Explorer to launch, and was followed by CryoSat-2 (the first CryoSat failed on launch), Swarm , and ADM-Aeolus . The satellite was launched on 2 November 2009 (04:50 local time; 01:50 GMT) into a nearly circular orbit of 763 km aboard a Rockot launcher, a converted Russian SS-19 intercontinental ballistic missile (ICBM), from northern Russia's Plesetsk Cosmodrome . [ 4 ] [ 10 ] The SMOS satellite was launched together with Proba-2 , a technology demonstration satellite. [ 11 ] [ 12 ] The goal of the SMOS mission is to monitor surface soil moisture with an accuracy of 4% (at 35–50 km spatial resolution ). [ 7 ] This aspect is managed by the HYDROS project. Project Aquarius will attempt to monitor sea surface salinity with an accuracy of 0.1 psu (as a 10- to 30-day average, at a spatial resolution of 200 km x 200 km). [ 7 ] [ 13 ] Soil moisture is an important aspect of climate, and therefore of forecasting . Plants transpire water from depths below 1 meter in many places, and satellites like SMOS can only provide moisture content down to a few centimeters; however, using repeated measurements within a day, the satellite can extrapolate soil moisture. [ 4 ] [ 5 ] The SMOS team at ESA hopes to work with farmers around the world, including the United States Department of Agriculture, to use ground-based measurements for calibration of models determining soil moisture, as this may help to better understand crop yields over wide regions. [ 14 ] Ocean salinity is crucial to understanding the role of the ocean in climate through the global water cycle . [ 15 ] Salinity, in combination with temperature, determines ocean circulation by defining the water's density and hence thermohaline circulation . [ 16 ] Additionally, ocean salinity is one of the variables that regulate CO 2 uptake and release and therefore has an effect on the oceanic carbon cycle . [ 17 ] Information from SMOS is expected to help improve short- and medium-term weather forecasts, and also to have practical applications in areas such as agriculture and water resource management. In addition, climate models should benefit from having a more precise picture of the scale and speed of movement of water in the different components of the hydrological cycle. [ 8 ]
SMOS has been used to improve hurricane forecasting by collecting hurricane surface-level wind speed data using its novel microwave imaging radiometer, which can penetrate the thick clouds surrounding a cyclone. Hurricanes that have been studied by SMOS include Hurricane Florence , Typhoon Mangkhut , and Typhoon Jebi . [ 18 ] The SMOS satellite carries a new type of instrument called the Microwave Imaging Radiometer using Aperture Synthesis (MIRAS). Some eight metres across, it has the look of helicopter rotor blades; the instrument creates images of radiation emitted in the microwave L-band (1.4 GHz). MIRAS measures changes in the wetness of the land and in the salinity of seawater by observing variations in the natural microwave emission coming up off the surface of the planet. [ 6 ] [ 8 ] [ 13 ] The CNES Satellite Operations Ground Segment operates the spacecraft, with telecommunications through ESA's S-band facility located in Kiruna , Sweden . The Data Processing Ground Segment is located at ESAC , Villafranca del Castillo , Spain. Higher-level processing of the data is done by scientists globally. [ 4 ]
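The physics behind such passive microwave measurements can be sketched in simplified form: a surface's brightness temperature is its emissivity times its physical temperature, and emissivity at L-band drops as the surface's dielectric constant rises with water content. The snippet below uses the Fresnel reflection coefficient at normal incidence for a smooth surface; the permittivity values are illustrative assumptions, and real retrievals account for incidence angle, roughness, vegetation and more.

```python
import math


def nadir_emissivity(eps_r: float) -> float:
    """Smooth-surface nadir emissivity from the (real) relative permittivity,
    via the Fresnel reflection coefficient at normal incidence."""
    n = math.sqrt(eps_r)
    reflectivity = ((n - 1) / (n + 1)) ** 2
    return 1.0 - reflectivity


def brightness_temperature(eps_r: float, t_phys_k: float) -> float:
    """Brightness temperature of the emitting surface in kelvin."""
    return nadir_emissivity(eps_r) * t_phys_k


# Illustrative L-band permittivities: dry soil around 4, wet soil around 20.
print(round(brightness_temperature(4.0, 290.0)))   # ~258 K (dry, radiometrically bright)
print(round(brightness_temperature(20.0, 290.0)))  # ~173 K (wet, radiometrically dark)
```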
https://en.wikipedia.org/wiki/Soil_Moisture_and_Ocean_Salinity
Soil acidification is the buildup of hydrogen cations , which reduces the soil pH . Chemically, this happens when a proton donor gets added to the soil. The donor can be an acid , such as nitric acid , sulfuric acid , or carbonic acid . It can also be a compound such as aluminium sulfate , which reacts in the soil to release protons. Acidification also occurs when base cations such as calcium , magnesium , potassium and sodium are leached from the soil. Soil acidification naturally occurs as lichens and algae begin to break down rock surfaces. Acids continue with this dissolution as soil develops. With time and weathering, soils become more acidic in natural ecosystems. Soil acidification rates can vary, and increase with certain factors such as acid rain , agriculture, and pollution. [ 1 ] Rainfall is naturally acidic due to carbonic acid forming from carbon dioxide in the atmosphere. [ 2 ] This compound causes rainfall pH to be around 5.0–5.5. When rainfall has a lower pH than natural levels, it can cause rapid acidification of soil. Sulfur dioxide and nitrogen oxides are precursors of stronger acids that can lead to acid rain production when they react with water in the atmosphere. These gases may be present in the atmosphere due to natural sources such as lightning and volcanic eruptions, or from anthropogenic emissions. [ 3 ] Basic cations like calcium are leached from the soil as acidic rainfall flows, which allows aluminum and proton levels to increase. [ 4 ] [ 5 ] Nitric and sulfuric acids in acid rain and snow can have different effects on the acidification of forest soils, particularly seasonally in regions where a snow pack may accumulate during the winter. [ 6 ] Snow tends to contain more nitric acid than sulfuric acid, and as a result, a pulse of nitric acid-rich snow meltwater may leach through high elevation forest soils during a short time in the spring. [ 7 ] This volume of water may comprise as much as 50% of the annual precipitation. The nitric acid flush of meltwater may cause a sharp, short term, decrease in the drainage water pH entering groundwater and surface waters. [ 8 ] The decrease in pH can solubilize Al 3+ that is toxic to fish, [ 9 ] especially newly-hatched fry with immature gill systems through which they pass large volumes of water to obtain O 2 for respiration. As the snow meltwater flush passes, water temperatures rise, and lakes and streams produce more dissolved organic matter; the Al concentration in drainage water decreases and is bound to organic acids, making it less toxic to fish. In rain, the ratio of nitric-to-sulfuric acids decreases to approximately 1:2. The higher sulfuric acid content of rain also may not release as much Al 3+ from soils as does nitric acid, in part due to the retention (adsorption) of SO 4 2- by soils. This process releases OH − into soil solution and buffers the pH decrease caused by the added H + from both acids. The forest floor organic soil horizons (layers) that are high in organic matter also buffer pH, and decrease the load of H+ that subsequently leaches through underlying mineral horizons. [ 10 ] [ 11 ] Plant roots acidify soil by releasing protons and organic acids so as to chemically weather soil minerals. [ 12 ] Decaying remains of dead plants on soil may also form organic acids which contribute to soil acidification. 
[ 13 ] Acidification from leaf litter on the O-horizon is more pronounced under coniferous trees such as pine , spruce and fir , which return fewer base cations to the soil, than under deciduous trees ; however, soil pH differences attributed to vegetation often preexisted that vegetation, and help select for species which tolerate them. Calcium accumulation in existing biomass also strongly affects soil pH, a factor which can vary from species to species. [ 14 ] Certain parent materials also contribute to soil acidification. Granites and their allied igneous rocks are called "acidic" because they have a lot of free quartz , which produces silicic acid on weathering. [ 15 ] Also, they have relatively low amounts of calcium and magnesium. Some sedimentary rocks such as shale and coal are rich in sulfides , which, when hydrated and oxidized, produce sulfuric acid, which is much stronger than silicic acid. Many coal soils are too acidic to support vigorous plant growth, and coal gives off strong precursors to acid rain when it is burned. Marine clays are also sulfide-rich in many cases, and such clays become very acidic if they are drained to an oxidizing state. Soil amendments such as chemical fertilizers can cause soil acidification. Sulfur-based fertilizers can be highly acidifying; examples include elemental sulfur and iron sulfate , while others like potassium sulfate have no significant effect on soil pH . While most nitrogen fertilizers have an acidifying effect, ammonium-based nitrogen fertilizers are more acidifying than other nitrogen sources. [ 16 ] Ammonium-based nitrogen fertilizers include ammonium sulfate , diammonium phosphate , monoammonium phosphate , and ammonium nitrate . Organic nitrogen sources, such as urea and compost , are less acidifying. Nitrate sources which have little or no ammonium, such as calcium nitrate , magnesium nitrate , potassium nitrate , and sodium nitrate , are not acidifying. [ 17 ] [ 18 ] [ 19 ] Acidification may also occur from nitrogen emissions into the air, as the nitrogen may end up deposited into the soil. [ 20 ] Animal livestock is responsible for nearly 65 percent of man-made ammonia emissions . [ 21 ] Anthropogenic sources of sulfur dioxides and nitrogen oxides play a major role in the increase of acid rain production. [ clarification needed ] The use of fossil fuels and motor exhaust are the largest anthropogenic contributors of sulfuric gases and nitrogen oxides, respectively. [ 22 ] Aluminum is one of the few elements capable of making soil more acidic. [ 23 ] This is achieved by aluminum taking hydroxide ions out of water, leaving hydrogen ions behind. [ 24 ] As a result, the soil is more acidic, which makes it unlivable for many plants. Another consequence of aluminum in soils is aluminum toxicity, which inhibits root growth. [ 25 ] Agricultural management approaches such as monoculture and chemical fertilization often lead to soil problems such as soil acidification, degradation, and soil-borne diseases, which ultimately have a negative impact on agricultural productivity and sustainability. [ 26 ] [ 27 ] Soil acidification can cause damage to plants and organisms in the soil. In plants, soil acidification results in smaller, less durable roots. [ 28 ] Acidic soils sometimes damage root tips, reducing further growth. [ 29 ] Plant height is impaired and seed germination also decreases. Soil acidification impacts plant health, resulting in reduced cover and lower plant density. Overall, stunted growth is seen in plants.
[ 30 ] Soil acidification is directly linked to a decline in endangered species of plants. [ 31 ] In the soil, acidification reduces microbial and macrofaunal diversity. [ 32 ] This can cause soil structure to decline, which makes it more sensitive to erosion. There are fewer nutrients available in the soil, a larger impact of toxic elements on plants, and consequences for soil biological functions (such as nitrogen fixation ). [ 33 ] A recent study showed that sugarcane monoculture induces soil acidity, reduces soil fertility, shifts microbial structure, and reduces its activity. Furthermore, most beneficial bacterial genera decreased significantly due to sugarcane monoculture, while beneficial fungal genera showed a reverse trend. [ 34 ] Therefore, mitigating soil acidity and improving soil fertility and soil enzymatic activities, including an improved microbial structure with beneficial services to plants and soil, can be an effective measure to develop a sustainable sugarcane cropping system. [ 26 ] At a larger scale, soil acidification is linked to losses in agricultural productivity due to these effects. [ 32 ] The impacts of acidic water and soil acidification on plants can be minor or, in many cases, major. Minor cases involve plants that are less sensitive to acidic conditions, or acid rain that is less potent. Even in minor cases, however, the plant may eventually die as the acidic water lowers the plant's natural pH. Acidic water enters the plant and causes important plant minerals to dissolve and be carried away, which ultimately causes the plant to die from a lack of minerals for nutrition. [ 35 ] In major cases, which are more extreme, the same process of damage occurs as in minor cases, namely the removal of essential minerals, but at a much quicker rate. Likewise, acid rain that falls on soil and on plant leaves causes drying of the waxy leaf cuticle, which ultimately causes rapid water loss from the plant to the outside atmosphere and results in the death of the plant. To see if a plant is being affected by soil acidification, one can closely observe the plant's leaves. If the leaves are green and look healthy, the soil pH is acceptable for plant life. But if the leaves show yellowing between the veins, the plant is suffering from acidification and is unhealthy. Moreover, a plant suffering from soil acidification cannot photosynthesize. [ 36 ] Drying out of the plant due to acidic water destroys chloroplast organelles. Without being able to photosynthesize, a plant cannot create nutrients for its own survival or oxygen for the survival of aerobic organisms, which affects most species on Earth and ultimately ends the plant's existence. [ 37 ] Soil acidification is a common issue in long-term crop production and can be reduced by lime, organic amendments (e.g., straw and manure) and biochar application. [ 38 ] [ 26 ] [ 39 ] [ 40 ] [ 41 ] In sugarcane, soybean and corn crops grown in acidic soils, lime application resulted in nutrient restoration, an increase in soil pH, an increase in root biomass, and better plant health. [ 27 ] [ 42 ] Different management strategies may also be applied to prevent further acidification: using less acidifying fertilizers, considering fertilizer amount and application timing to reduce nitrate-nitrogen leaching, good irrigation management with acid-neutralizing water, and considering the ratio of basic nutrients to nitrogen in harvested crops.
Sulfur fertilizers should only be used in responsive crops with a high rate of crop recovery. [ 43 ] Reducing anthropogenic emissions of sulfur dioxide and nitrogen oxides through air-pollution control measures can reduce acid rain and soil acidification worldwide. [ 44 ] Such improvement has been observed in Ontario, Canada, where several lakes demonstrated improvements in water pH and alkalinity. [ 45 ]
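Because the pH scale used throughout this article is logarithmic, small pH changes correspond to large changes in hydrogen-ion concentration. The minimal sketch below shows the arithmetic; it deliberately ignores the buffering by soil organic matter and sulfate adsorption described above, and the example concentrations are illustrative only.

```python
import math


def ph(h_molar: float) -> float:
    """pH from hydrogen-ion activity (approximated here by molar concentration)."""
    return -math.log10(h_molar)


# A tenfold increase in H+ lowers pH by exactly one unit:
print(ph(1e-5))   # 5.0 - roughly the pH of naturally acidic rainfall
print(ph(1e-4))   # 4.0 - ten times more acidic
```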
https://en.wikipedia.org/wiki/Soil_acidification
Soil biodiversity refers to the relationship of soil to biodiversity and to aspects of the soil that can be managed in relation to biodiversity. Soil biodiversity relates to some catchment management considerations. According to the Australian Department of the Environment and Water Resources , biodiversity is "the variety of life: the different plants, animals and micro-organisms, their genes and the ecosystems of which they are a part." [ 1 ] Biodiversity and soil are strongly linked, because soil is the medium for a large variety of organisms and interacts closely with the wider biosphere . Conversely, biological activity is a primary factor in soil's physical and chemical formation. [ 2 ] Soil provides a vital habitat , primarily for microbes (including bacteria and fungi ), but also for microfauna (such as protozoa and nematodes ), mesofauna (such as microarthropods and enchytraeids), and macrofauna (such as earthworms , termites , and millipedes ). [ 2 ] The primary role of soil biota is to recycle organic matter that is derived from the "above-ground plant-based food web". Soil works in close cooperation with the broader biosphere. The maintenance of fertile soil is "one of the most vital ecological services the living world performs", and the "mineral and organic contents of soil must be replenished constantly as plants consume soil elements and pass them up the food chain ". [ 3 ] The correlation of soil and biodiversity can be observed spatially. For example, both natural and agricultural vegetation boundaries correspond closely to soil boundaries, even at continental and global scales. [ 4 ] A "subtle synchrony" is how Baskin (1997) describes the relationship between the soil and the diversity of life above and below the ground. It is not surprising that soil management directly affects biodiversity. This includes practices that influence soil volume, structure, and biological and chemical characteristics, and whether soil exhibits adverse effects such as reduced fertility , soil acidification , or salinisation . [ 3 ] Soil acidity (or alkalinity) is the concentration of hydrogen ions (H + ) in the soil. Measured on the pH scale, soil acidity is an invisible condition that directly affects soil fertility and toxicity by determining which elements in the soil are available for absorption by plants. Increases in soil acidity are caused by the removal of agricultural product from the paddock, leaching of nitrogen as nitrate below the root zone, inappropriate use of nitrogenous fertilizers , and buildup of organic matter . [ 5 ] Many of the soils in the Australian state of Victoria are naturally acidic; however, about 30,000 square kilometres, or 23% of Victoria's agricultural soils, suffer reduced productivity due to increased acidity. [ 5 ] Soil acidity has been seen to damage the roots of plants. [ 6 ] Plants grown in soils of higher acidity have smaller, less durable roots. [ 6 ] Some evidence has shown that the acidity damages the tips of the roots, restricting further growth. [ 6 ] Plant height is also markedly restricted in acidic soils, as seen in American and Russian wheat populations. [ 7 ] The number of seeds able to germinate in acidic soil is much lower than the number that can sprout in a more neutral-pH soil. [ 7 ] These limitations to plant growth can have a very negative effect on plant health , leading to a decrease in the overall plant population. These effects occur regardless of the biome .
A study in the Netherlands examined the correlation between soil pH and soil biodiversity in soils with pH below 5. [ 8 ] A strong correlation was discovered, wherein the lower the pH, the lower the biodiversity. [ 8 ] The results were the same in grasslands as well as heathlands. [ 8 ] Particularly concerning is the evidence showing that this acidification is directly linked to the decline in endangered species of plants, a trend recognized since 1950. [ 8 ] Soil acidification reduces soil biodiversity. It reduces the numbers of most macrofauna, including, for example, earthworm numbers (important in maintaining the structural quality of the topsoil for plant growth). Also affected are rhizobium survival and persistence. Decomposition and nitrogen fixation may be reduced, which affects the survival of native vegetation . Biodiversity may further decline as certain weeds proliferate under declining native vegetation. [ 5 ] [ 9 ] In strongly acidic soils, the associated toxicity may lead to decreased plant cover , leaving the soil susceptible to erosion by water and wind. [ 10 ] Extremely low-pH soils may suffer from structural decline as a result of reduced microorganisms and organic matter; this brings a susceptibility to erosion under high rainfall events, drought , and agricultural disturbance. [ 5 ] Some plants within the same species have shown resistance to the soil acidity their population grows in. [ 6 ] Selectively breeding the stronger plants is a way for humans to guard against increasing soil acidity. [ 6 ] Further success in combating soil acidity has been seen in soybean and maize populations suffering from aluminum toxicity. [ 11 ] Soil nutrients were restored and acidity decreased when lime was added to the soil. [ 11 ] Plant health and root biomass increased in response to the treatment. [ 11 ] This is a possible solution for other acidic-soil plant populations. [ 11 ] Soil structure is the arrangement of particles and associated pores in soils across the size range from nanometres to centimetres. Biological influences can be demonstrated in the formation and stabilization of soil aggregates. Still, it is necessary to distinguish clearly between those forces or agencies that create aggregations of particles and those that stabilize or degrade such aggregations. [ 12 ] Good soil has the following attributes: optimal soil strength and aggregate stability, which offer resistance to structural degradation (capping/crusting, slaking and erosion, for example); optimal bulk density, which aids root development and contributes to other soil physical parameters such as water and air movement within the soil; and optimal water holding capacity and rate of water infiltration. [ 13 ] Well-developed, healthy soils are complex systems in which physical soil structure is as important as chemical content. Soil pores—maximized in a well-structured soil—allow oxygen and moisture to infiltrate to depth and plant roots to penetrate to obtain moisture and nutrients. [ 14 ] Biological activity helps in the maintenance of a relatively open soil structure, as well as facilitating decomposition and the transportation and transformation of soil nutrients. Changing soil structure has been shown to reduce plants' access to necessary substances. It is now uncontested that microbial exudates dominate the aggregation of soil particles and the protection of carbon from further degradation.
It has been suggested that microorganisms within the soil "engineer" a superior habitat and provide a sounder soil structure, leading to more productive soil systems. [ 16 ]

Traditional agricultural practices have generally caused declining soil structure. [ 17 ] For example, cultivation causes mechanical mixing of the soil, compacting and shearing of aggregates, and filling of pore spaces; organic matter is also exposed to a greater rate of decay and oxidation. [ 4 ] Soil structure is essential to soil health and fertility; soil structure decline has a direct effect on the soil and surface food chain, and on biodiversity as a consequence. Continued crop cultivation eventually results in significant changes within the soil, such as in its nutrient status, pH balance, organic matter content, and physical characteristics. [ 18 ] While some of these changes can be beneficial for food and crop production, they can also be harmful to other necessary systems. For example, studies have shown that tilling has negative consequences for soil organic matter (SOM), the organic component of soil composed of plant and animal decomposition products and substances synthesized by soil organisms. SOM plays an integral role in preserving soil structure, yet constant tilling of crops causes the SOM to shift and redistribute, causing soil structure to deteriorate and altering soil organism populations (such as earthworms). [ 19 ] Yet in many parts of the world, rampant poverty and lack of food security drive the maximization of food production at all costs, and the long-term ecological consequences tend to be overlooked, despite research and acknowledgment by the academic community. [ 18 ] Crop rotation, crop diversification, legume intercrops, and organic inputs were found to correlate with higher soil diversity by McDaniel et al. 2014 and Lori et al. 2017. [ 20 ]

Soil sodicity refers to the soil's content of sodium compared with its content of other cations, such as calcium. At high levels, sodium ions break apart clay platelets and cause swelling and dispersion in soil. [ 21 ] This results in reduced soil stability. If high sodium concentrations occur repeatedly, the soil becomes cement-like, with little or no structure. Extended exposure to high sodium levels decreases the amount of water retained and able to flow through the soil, and decreases decomposition rates, leaving the soil infertile and prohibiting future growth. This issue is prominent in Australia, where one third of the land is affected by high salt levels. [ 22 ] Sodicity occurs naturally, but farming practices such as overgrazing and cultivation have contributed to its rise. The options for managing sodic soils are minimal: one must either select sodicity-tolerant plants or change the soil, the latter being the more difficult process. If changing the soil, one must add calcium to displace the excess exchangeable sodium that causes the disaggregation blocking water flow. [ 23 ]

Soil salinity is the salt concentration within the soil profile or on the soil surface. Excessive salt directly affects the composition of plants and animals due to varying salt tolerance, along with various physical and chemical changes to the soil, including structural decline and, in the extreme, denudation, exposure to soil erosion, and export of salts to waterways. [ 24 ]
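Returning to sodicity for a moment: it is commonly quantified by the sodium adsorption ratio (SAR), a standard soil-chemistry index that compares sodium with calcium and magnesium in the soil solution. The formula is well established, though it is not given in the text above, so the sketch below is illustrative rather than drawn from the cited sources:

```python
import math

def sodium_adsorption_ratio(na, ca, mg):
    """Sodium adsorption ratio (SAR).

    na, ca, mg: concentrations of Na+, Ca2+ and Mg2+ in the soil
    solution, in milliequivalents per litre (meq/L). A higher SAR
    indicates a more sodic soil, more prone to clay dispersion
    and structural decline.
    """
    return na / math.sqrt((ca + mg) / 2.0)

# Hypothetical soil-solution analyses:
print(sodium_adsorption_ratio(na=40.0, ca=6.0, mg=4.0))  # ~17.9 -> sodic
print(sodium_adsorption_ratio(na=5.0, ca=8.0, mg=4.0))   # ~2.0  -> non-sodic
```

Values above roughly 13 are conventionally taken to indicate sodic soil, which is consistent with the management option described above: adding calcium (displacing exchangeable sodium) lowers the ratio.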
At low soil salinity there is substantial microbial activity, which increases soil respiration and raises carbon dioxide levels in the soil, producing a healthier environment for plants. [ 25 ] As soil salinity rises, microbes come under greater stress because less water is available to them, leading to less respiration. [ 25 ] Soil salinity has localised and regional effects on biodiversity, ranging, for example, from changes in plant composition and survival at a local discharge site to regional changes in water quality and aquatic life.

While very saline soil is not preferred for growing crops, some crops can grow in more saline soil than others. [ 26 ] This matters in countries where resources such as fresh water are scarce and needed for drinking, since saline water can then be used for agriculture. [ 26 ] Soil salinity can vary between extremes within a relatively small area, [ 27 ] which allows plants to seek out areas of lower salinity. It is hard to determine which plants can grow in soil with high salinity because soil salinity is not uniform, even across small areas; [ 27 ] plants, however, absorb nutrients from the areas of lower salinity. [ 27 ]

Soil erosion is the removal of the soil's upper layers by water, wind, or ice. Soil erosion occurs naturally, but human activities can greatly increase its severity. [ 28 ] Healthy soil is fertile and productive, [ 29 ] but soil erosion leads to a loss of topsoil, organic matter, and nutrients; it breaks down soil structure and decreases water storage capacity, reducing fertility and the water available to plant roots. Soil erosion is therefore a major threat to soil biodiversity. [ 30 ] The effects of soil erosion can be lessened by various soil conservation techniques. These include changes in agricultural practice (such as moving to less erosion-prone crops) and the planting of leguminous nitrogen-fixing trees or trees known to replenish organic matter. [ 29 ] [ 31 ] Jute mats and jute geotextile nets can also be used to divert and store runoff and to control soil movement. [ 32 ] [ 33 ] Misguided soil conservation efforts can result in an imbalance of soil chemical compounds. [ 31 ] [ 34 ] For example, attempts at afforestation on the northern Loess Plateau, China, have led to deprivation of organic-matter nutrients such as carbon, nitrogen, and phosphorus. [ 34 ]

Potassium (K) is an essential macronutrient for plant development, [ 35 ] and potassium chloride (KCl) is the most widely used source of K in agriculture. [ 36 ] The use of KCl leads to high concentrations of chloride (Cl−) in soil, which increase soil salinity and affect the development of plants and soil organisms. [ 37 ] [ 38 ] [ 39 ] [ 40 ] Chloride has a biocidal effect on the soil ecosystem, with negative effects on the growth, mortality, and reproduction of organisms, [ 38 ] [ 40 ] which in turn jeopardizes soil biodiversity. Excessive availability of chloride in soil can trigger physiological disorders in plants and microorganisms by decreasing cells' osmotic potential and stimulating the production of reactive oxygen species. [ 39 ] In addition, this ion negatively affects nitrifying microorganisms, thereby affecting nutrient availability in the soil. [ 38 ]

Biological systems—both natural and artificial—depend heavily on healthy soils; it is the maintenance of soil health and fertility in all of its dimensions that sustains life.
The interconnection spans vast spatial and temporal scales: the major degradation issues of salinity and soil erosion, for instance, can have anywhere from local to regional effects, and it may take decades for the consequences of management actions affecting soil to be realised in terms of biodiversity impact. [ citation needed ] Maintaining soil health is a regional or catchment-scale issue. Because soils are a dispersed asset, the only effective way to ensure soil health generally is to encourage a broad, consistent, and economically appealing approach. Examples of such approaches in an agricultural setting include the application of lime (calcium carbonate) to reduce acidity and thereby increase soil health and production, and the transition from conventional farming practices employing cultivation to limited- or no-till systems, which has had a positive impact on soil structure. [ 41 ]

Soils encompass a huge diversity of organisms, which makes soil biodiversity difficult to measure. It is estimated that the soil beneath a football pitch contains organisms whose combined mass is roughly equal to that of 500 sheep. A first step toward identifying the areas where soil biodiversity is most under pressure has been to find the main proxies associated with decreases in soil biodiversity. [ 42 ] Soil biodiversity should become easier to measure in the future, especially thanks to the development of molecular approaches relying on direct DNA extraction from the soil matrix. [ 43 ]
https://en.wikipedia.org/wiki/Soil_biodiversity
Soil and Water Bioengineering is a discipline of civil engineering. It pursues technological, ecological, economic, and design goals, and seeks to achieve these primarily by making use of living materials, i.e. seeds, plants, parts of plants and plant communities, employing them in near-natural constructions while exploiting the manifold abilities inherent in plants. Soil bioengineering may sometimes be a substitute for classical engineering works; in most cases, however, it is a meaningful and necessary complement to them. Its application suggests itself in all fields of soil and hydraulic engineering, especially for slope and embankment stabilization and erosion control. [ 1 ]

Soil bioengineering is the use of living plant materials to provide an engineering function, and it is an effective tool for the treatment of a variety of unstable and/or eroding sites. Soil bioengineering techniques have been used for many centuries; more recently, Schiechtl (1980) encouraged their use with a variety of European examples. Soil bioengineering is now widely practiced throughout the world for the treatment of erosion and unstable slopes. [ 2 ] [ 3 ]

Soil bioengineering methods can be applied wherever the plants used as living building materials are able to grow well and develop. This is the case in tropical, subtropical and temperate zones, whereas there are obvious limits in dry and cold regions, i.e. where arid, semi-arid and frost zones prevail. In exceptional cases, lack of water may be compensated for by watering or irrigation. In Europe, dry conditions limiting application exist in the Mediterranean as well as in some inner alpine and eastern European snowy regions. However, limits are most frequently imposed in alpine and arctic regions; these can usually be clearly recognised by the limited growth of woody plants (forest, tree and shrub lines) and the upper limits of closed turf cover. The more impoverished a region is in species, the less suited it is to the application of bioengineering methods. [ citation needed ]

Beyond the technological functions, ecological functions are gaining in importance, particularly as these can be fulfilled only to a very limited extent by classical engineering constructions. Bioengineering control works are not always cheaper to construct than classical engineering structures; however, when their lifetime, including service and maintenance, is taken into account, they normally turn out to be more economical. [ citation needed ] Their special advantages are: The results of soil bioengineering protection works are living systems that develop further and maintain their balance by natural succession (i.e. by dynamic self-control, without artificial input of energy). If the right living and non-living building materials and the appropriate types of construction are chosen, exceptionally high sustainability requiring little maintenance effort can be achieved. [ 4 ] [ 5 ]
https://en.wikipedia.org/wiki/Soil_bioengineering
Soil biology is the study of microbial and faunal activity and ecology in soil. Soil life, soil biota, soil fauna, or edaphon is a collective term that encompasses all organisms that spend a significant portion of their life cycle within a soil profile, or at the soil–litter interface. These organisms include earthworms, nematodes, protozoa, fungi, bacteria, different arthropods, as well as some reptiles (such as snakes) and species of burrowing mammals such as gophers, moles and prairie dogs. Soil biology plays a vital role in determining many soil characteristics: the decomposition of organic matter by soil organisms has an immense influence on soil fertility, plant growth, soil structure, and carbon storage. As a relatively new science, much remains unknown about soil biology and its effect on soil ecosystems.

The soil is home to a large proportion of the world's biodiversity. The links between soil organisms and soil functions are complex, and the interconnectedness of this soil 'food web' means any appraisal of soil function must take into account interactions with the living communities that exist within the soil. We know that soil organisms break down organic matter, making nutrients available for uptake by plants and other organisms. The nutrients stored in the bodies of soil organisms prevent nutrient loss by leaching. Microbial exudates act to maintain soil structure, and earthworms are important in bioturbation. However, we do not yet understand critical aspects of how these populations function and interact. The discovery of glomalin in 1995 indicates that we lack the knowledge to correctly answer some of the most basic questions about the biogeochemical cycle in soils, and much work lies ahead to gain a better understanding of the ecological role of soil biological components in the biosphere.

In balanced soil, plants grow in an active and steady environment. The mineral content of the soil and its healthy structure are important for their well-being, but it is the life in the earth that powers its cycles and provides its fertility. Without the activities of soil organisms, organic materials would accumulate and litter the soil surface, and there would be no food for plants.

The soil biota includes: Of these, bacteria and fungi play key roles in maintaining a healthy soil. They act as decomposers that break down organic materials to produce detritus and other breakdown products. Soil detritivores, like earthworms, ingest detritus and decompose it. Saprotrophs, well represented by fungi and bacteria, extract soluble nutrients from detritus. Ants (macrofauna) help by breaking down materials in the same way, and they also mix and move the soil as they travel in large numbers. Rodents and wood-eating organisms also help the soil become more absorbent.

Soil biology involves work in the following areas: Complementary disciplinary approaches are necessarily utilized, involving molecular biology, genetics, ecophysiology, biogeography, ecology, soil processes, organic matter, nutrient dynamics [ 1 ] and landscape ecology.

Bacteria are single-cell organisms and the most numerous denizens of agricultural soil, with populations ranging from 100 million to 3 billion in a gram. They are capable of very rapid reproduction by binary fission (dividing into two) in favourable conditions. One bacterium is capable of producing 16 million more in just 24 hours.
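The "16 million in 24 hours" figure quoted above is consistent with binary fission roughly once per hour; a quick arithmetic check (illustrative only, assuming a constant hourly doubling and no cell death):

```python
# One cell doubling every hour for 24 hours:
generations = 24
total_cells = 2 ** generations      # 16,777,216 cells after 24 doublings
new_cells = total_cells - 1         # ~16.8 million new bacteria
print(f"{new_cells:,}")             # 16,777,215
```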
Most soil bacteria live close to plant roots and are often referred to as rhizobacteria. Bacteria live in soil water, including the film of moisture surrounding soil particles, and some are able to swim by means of flagella. The majority of beneficial soil-dwelling bacteria need oxygen (and are thus termed aerobic bacteria), whilst those that do not require air are referred to as anaerobic, and tend to cause putrefaction of dead organic matter. Aerobic bacteria are most active in a soil that is moist (but not saturated, as this deprives aerobic bacteria of the air they require) and neutral in pH, and where there is plenty of food (carbohydrates and micronutrients from organic matter) available. Hostile conditions will not completely kill bacteria; rather, the bacteria will stop growing and enter a dormant stage, and those individuals with pro-adaptive mutations may compete better in the new conditions. Some Gram-positive bacteria produce spores in order to wait for more favourable circumstances, and Gram-negative bacteria enter a "nonculturable" stage. Bacteria are colonized by persistent viral agents (bacteriophages) that influence gene order in the bacterial host.

From the organic gardener's point of view, the important roles that bacteria play are: Nitrification is a vital part of the nitrogen cycle, wherein certain bacteria (which manufacture their own carbohydrate supply without using photosynthesis) are able to transform nitrogen in the form of ammonium, produced by the decomposition of proteins, into nitrates, which are available to growing plants and are once again converted to proteins. In another part of the cycle, the process of nitrogen fixation constantly puts additional nitrogen into biological circulation. This is carried out by free-living nitrogen-fixing bacteria in the soil or water, such as Azotobacter, or by those that live in close symbiosis with leguminous plants, such as rhizobia. These bacteria form colonies in nodules that they create on the roots of peas, beans, and related species, and are able to convert nitrogen from the atmosphere into nitrogen-containing organic substances. [ 2 ] While nitrogen fixation converts nitrogen from the atmosphere into organic compounds, a series of processes called denitrification returns an approximately equal amount of nitrogen to the atmosphere. Denitrifying bacteria tend to be anaerobes, or facultative anaerobes (able to switch between oxygen-dependent and oxygen-independent metabolism), including Achromobacter and Pseudomonas. Denitrification, which proceeds under oxygen-free conditions, converts nitrates and nitrites in soil into nitrogen gas or into gaseous compounds such as nitrous oxide or nitric oxide. In excess, denitrification can lead to overall losses of available soil nitrogen and subsequent loss of soil fertility. However, fixed nitrogen may circulate many times between organisms and the soil before denitrification returns it to the atmosphere.

Actinomycetota are critical in the decomposition of organic matter and in humus formation. They specialize in breaking down cellulose and lignin, along with the tough chitin found on the exoskeletons of insects. Their presence is responsible for the sweet "earthy" aroma associated with a good healthy soil. They require plenty of air and a pH between 6.0 and 7.5, but are more tolerant of dry conditions than most other bacteria and fungi. [ 3 ]
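The nitrogen transformations described above can be summarized compactly. These are the standard nitrogen-cycle steps, consistent with the text but written out here for clarity:

$$\text{nitrification:}\quad \mathrm{NH_4^+} \;\longrightarrow\; \mathrm{NO_2^-} \;\longrightarrow\; \mathrm{NO_3^-}$$

$$\text{nitrogen fixation:}\quad \mathrm{N_2} \;\longrightarrow\; \mathrm{NH_3}/\mathrm{NH_4^+} \;\longrightarrow\; \text{organic N}$$

$$\text{denitrification:}\quad \mathrm{NO_3^-} \;\longrightarrow\; \mathrm{NO_2^-} \;\longrightarrow\; \mathrm{NO} \;\longrightarrow\; \mathrm{N_2O} \;\longrightarrow\; \mathrm{N_2}$$

Nitrification and fixation move atmospheric and ammonium nitrogen into plant-available and organic forms; denitrification, under oxygen-free conditions, returns an approximately equal amount to the atmosphere.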
A gram of garden soil can contain around one million fungi, such as yeasts and moulds. Fungi have no chlorophyll and are not able to photosynthesise; they cannot use atmospheric carbon dioxide as a source of carbon, and are therefore chemo-heterotrophic, meaning that, like animals, they require a chemical source of energy rather than being able to use light, as well as organic substrates from which to obtain carbon for growth and development.

Many fungi are parasitic, often causing disease to their living host plant, although some have beneficial relationships with living plants, as illustrated below. In terms of soil and humus creation, the most important fungi tend to be saprotrophic; that is, they live on dead or decaying organic matter, breaking it down and converting it to forms that are available to the higher plants. A succession of fungal species will colonise the dead matter, beginning with those that use sugars and starches, which are succeeded by those that are able to break down cellulose and lignins.

Fungi spread underground by sending long thin threads known as mycelium throughout the soil; these threads can be observed throughout many soils and compost heaps. From the mycelium the fungus is able to send up its fruiting bodies, the visible part above the soil (e.g., mushrooms, toadstools, and puffballs), which may contain millions of spores. When the fruiting body bursts, these spores are dispersed through the air to settle in fresh environments, and are able to lie dormant for years until the right conditions for their activation arise or the right food is made available.

Those fungi that are able to live symbiotically with living plants, creating a relationship that is beneficial to both, are known as mycorrhizae (from myco meaning fungal and rhiza meaning root). Plant root hairs are invaded by the mycelia of the mycorrhiza, which lives partly in the soil and partly in the root, and may either cover the length of the root hair as a sheath or be concentrated around its tip. The mycorrhiza obtains the carbohydrates that it requires from the root, in return providing the plant with nutrients, including nitrogen, and moisture. Later the plant roots will also absorb the mycelium into their own tissues.

Beneficial mycorrhizal associations are to be found in many of our edible and flowering crops. Shewell Cooper suggests that these include at least 80% of the Brassica and Solanum families (including tomatoes and potatoes), as well as the majority of tree species, especially in forests and woodlands. Here the mycorrhizae create a fine underground mesh that extends far beyond the limits of the tree's roots, greatly increasing their feeding range and actually causing neighbouring trees to become physically interconnected. The benefits of mycorrhizal relations to their plant partners are not limited to nutrients; they can be essential for plant reproduction. In situations where little light reaches the forest floor, such as the North American pine forests, a young seedling cannot obtain sufficient light to photosynthesise for itself and will not grow properly in a sterile soil. But if the ground is underlain by a mycorrhizal mat, the developing seedling will throw down roots that can link with the fungal threads and through them obtain the nutrients it needs, often indirectly obtained from its parents or neighbouring trees.
David Attenborough points out the plant, fungus, and animal relationship that creates a "three way harmonious trio" to be found in forest ecosystems, wherein the plant/fungi symbiosis is enhanced by animals such as the wild boar, deer, mice, or flying squirrel, which feed upon the fungi's fruiting bodies, including truffles, and cause their further spread (Private Life Of Plants, 1995). A greater understanding of the complex relationships that pervade natural systems is one of the major justifications cited by organic gardeners for refraining from the use of artificial chemicals and the damage these might cause. [ citation needed ]

Recent research has shown that arbuscular mycorrhizal fungi produce glomalin, a protein that binds soil particles and stores both carbon and nitrogen. These glomalin-related soil proteins are an important part of soil organic matter. [ 4 ]

Soil fauna affect soil formation and soil organic matter dynamics on many spatiotemporal scales. [ 5 ] Earthworms, ants and termites mix the soil as they burrow, significantly affecting soil formation. Earthworms ingest soil particles and organic residues, enhancing the availability of plant nutrients in the material that passes through and out of their bodies. By aerating and stirring the soil, and by increasing the stability of soil aggregates, these organisms help to assure the ready infiltration of water. These organisms in the soil also help improve pH levels.

Ants and termites are often referred to as "soil engineers" because, when they create their nests, they make several chemical and physical changes to the soil. Among these changes is an increased presence of the most essential elements, such as carbon, nitrogen, and phosphorus—elements needed for plant growth. [ 6 ] They can also gather soil particles from differing depths and deposit them elsewhere, leading to a mixing of the soil that leaves it richer in nutrients and other elements.

The soil is also important to many mammals. Gophers, moles, prairie dogs, and other burrowing animals rely on this soil for protection and food. These animals also give back to the soil, as their burrowing allows rain, snow and meltwater to enter the soil rather than running off and causing erosion. [ 7 ]

This table includes some familiar types of soil life, [ 8 ] coherent with prevalent taxonomy as used in the linked Wikipedia articles.
https://en.wikipedia.org/wiki/Soil_biology
Soil chemistry is the study of the chemical characteristics of soil. Soil chemistry is affected by mineral composition, organic matter and environmental factors.

In the early 1870s a consulting chemist to the Royal Agricultural Society in England, named J. Thomas Way, performed many experiments on how soils exchange ions, and is considered the father of soil chemistry. [ 1 ] Other scientists who contributed to this branch of science include Edmund Ruffin and Linus Pauling. [ 1 ]

Until the late 1960s, soil chemistry focused primarily on chemical reactions in the soil that contribute to pedogenesis or that affect plant growth. Since then, concerns have grown about environmental pollution, organic and inorganic soil contamination, and potential ecological health and environmental health risks. Consequently, the emphasis in soil chemistry has shifted from pedology and agricultural soil science to an emphasis on environmental soil science.

A knowledge of environmental soil chemistry is paramount to predicting the fate of contaminants, as well as the processes by which they are initially released into the soil. Once a chemical is exposed to the soil environment, myriad chemical reactions can occur that may increase or decrease contaminant toxicity. These reactions include adsorption/desorption, precipitation, polymerization, dissolution, hydrolysis, hydration, complexation and oxidation/reduction. These reactions are often disregarded by scientists and engineers involved with environmental remediation. Understanding these processes enables us to better predict the fate and toxicity of contaminants and provides the knowledge to develop scientifically sound, cost-effective remediation strategies.

Soil structure refers to the manner in which individual soil particles are grouped together to form clusters of particles called aggregates. This is determined by the type of soil formation, the parent material, and the texture. Soil structure can be influenced by a wide variety of biota, as well as by human management methods. The classification of soil structural forms is based largely on shape. The interactions of the soil's micropores and macropores are important to soil chemistry, as they allow for the provision of water and gaseous elements to the soil and the surrounding atmosphere. Macropores [ 3 ] help transport molecules and substances in and out of the micropores; micropores are contained within the aggregates themselves.

The soil atmosphere contains three main gases, namely oxygen, carbon dioxide (CO2) and nitrogen, though not in the same proportions as the free atmosphere: in soil air, oxygen is about 20% and nitrogen about 79% by volume, while CO2 ranges from about 0.15% to 0.65%. CO2 increases with soil depth because of the decomposition of accumulated organic matter and the abundance of plant roots. The presence of oxygen in the soil is important because it helps in breaking down insoluble rocky material into soluble minerals and in organic humification. These gases facilitate the chemical reactions of microorganisms. Accumulation of soluble nutrients in the soil makes it more productive. If the soil is deficient in oxygen, microbial activity is slowed or eliminated. Important factors controlling the soil atmosphere are temperature, atmospheric pressure, wind/aeration and rainfall.
Soil texture influences soil chemistry through the soil's ability to maintain its structure, the restriction of water flow, and the contents of the particles in the soil. Soil texture considers all particle types, and a soil texture triangle is a chart from which the percentages of each particle type, adding up to 100%, can be read for a soil profile (a simplified classification sketch appears a few paragraphs below). The soil separates differ not only in their sizes but also in their bearing on some of the important factors affecting plant growth, such as soil aeration, workability, and the movement and availability of water and nutrients.

Sand particles range in size from about 0.05 to 2 mm. [ 4 ] Sand is the coarsest of the particle groups, with the largest particles and pore spaces, and it drains the most easily. These particles become more involved in chemical reactions when coated with clay.

Silt particles range in size from about 0.002 to 0.05 mm, and silt pores are considered medium in size compared with the other particle groups. Silt has the texture and consistency of flour. Silt particles allow water and air to pass readily, yet retain moisture for crop growth. Silty soil contains sufficient quantities of nutrients, both organic and inorganic.

Clay has the smallest particles (below about 0.002 mm) of the particle groups. Clay also has the smallest pores, which give it a greater total porosity, and it does not drain well. Clay has a sticky texture when wet, and some kinds shrink and swell with moisture.

Loam is a combination of sand, silt and clay. Loams are named for the primary particles in the soil composition, e.g. sandy loam, clay loam, silt loam.

Biota are the organisms that, along with organic matter, comprise the biological system of the soil. The vast majority of biological activity takes place near the soil surface, usually in the A horizon of a soil profile. Biota rely on inputs of organic matter in order to sustain themselves and increase population sizes; in return, they contribute nutrients to the soil, typically after the matter has been cycled through the soil trophic food web. With the many different interactions that take place, biota can greatly affect their environment physically, chemically, and biologically (Pavao-Zuckerman, 2008). A prominent factor that lends some stability to these interactions is biodiversity, a key component of all ecological communities. Biodiversity allows for a consistent flow of energy through trophic levels and strongly influences the structure of ecological communities in the soil. Types of living soil biota can be divided into the categories of plants (flora), animals (fauna), and microorganisms.

Plants play a role in soil chemistry by exchanging nutrients with microorganisms and by absorbing nutrients, creating concentration gradients of cations and anions. In addition, the differences in water potential created by plants influence water movement in soil, which affects the form and transport of various particles. Vegetative cover on the soil surface greatly reduces erosion, which in turn prevents compaction and helps to maintain aeration in the soil pore space, providing oxygen and carbon to the biota and the cation exchange sites that depend on them.

Animals are essential to soil chemistry, as they regulate the cycling of nutrients and energy into different forms, primarily through food webs. Some types of soil animals can be found below.
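As promised above in the discussion of soil texture, the sand, silt, and clay percentages of a sample (summing to 100%) determine its textural class. The sketch below illustrates the idea; the thresholds are deliberately coarse, illustrative approximations and do not reproduce the actual USDA texture-triangle boundaries:

```python
def texture_class(sand, silt, clay):
    """Very simplified soil-texture classification.

    sand, silt, clay: percentages that must sum to 100.
    The cut-offs below are coarse illustrative approximations
    of the USDA texture triangle, not its exact boundaries.
    """
    if abs(sand + silt + clay - 100.0) > 0.01:
        raise ValueError("sand + silt + clay must equal 100%")
    if clay >= 40:
        return "clay"
    if sand >= 85:
        return "sand"
    if silt >= 80:
        return "silt"
    return "loam (sandy, silty or clayey loam, by the dominant fraction)"

print(texture_class(40, 40, 20))  # -> loam (...)
print(texture_class(90, 5, 5))    # -> sand
```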
Soil microbes play a major role in the multitude of biological and chemical activities that take place in soil, and are said to make up around 1,000–10,000 kg of biomass per hectare in some soils (García-Sánchez, 2016). They are mostly recognized for their associations with plants; the best-known example is mycorrhizae, which exchange carbon for nitrogen with plant roots in a symbiotic relationship. Additionally, microbes are responsible for the majority of respiration that takes place in the soil, which has implications for the release of gases such as methane and nitrous oxide from soil (giving it significance in discussions of climate change) (Frouz et al., 2020). Given the significance of the effects of microbes on their environment, the conservation and promotion of microbial life is often desired by plant growers, conservationists, and ecologists.

Soil organic matter is the largest source of nutrients and energy in a soil. Its inputs strongly influence key soil factors such as the types of biota, pH, and even soil order. Soil organic matter is often strategically applied by plant growers because of its ability to improve soil structure, supply nutrients, manage pH, increase water retention, and regulate soil temperature (which directly affects water dynamics and biota). The chief elements found in humus, the product of organic matter decomposition in soil, are carbon, hydrogen, oxygen, sulphur and nitrogen. The important compounds found in humus are carbohydrates, phosphoric acid, some organic acids, resins, urea, etc. Humus is a dynamic product, constantly changing through oxidation, reduction and hydrolysis; hence it is high in carbon and lower in nitrogen. Organic matter can come from a variety of sources, but often derives from livestock manure and plant residues. Though there are many other variables, such as texture, soils that lack sufficient organic matter are susceptible to degradation and drying, as there is nothing supporting the soil structure; this often leads to a decline in soil fertility and an increase in erodibility. Other associated concepts:

Many plant nutrients in soil undergo biogeochemical cycles through their environment. These cycles are influenced by water, gas exchange, biological activity, immobilization, and mineralization dynamics, but each element has its own course of flow (Deemy et al., 2022). For example, nitrogen moves from an isolated gaseous form to the compounds nitrate and nitrite as it moves through soil and becomes available to plants; an element like phosphorus, by comparison, transfers in mineral form, as it is contained in rock material. The elements also vary greatly in mobility, solubility, and the rate at which they move through their natural cycles. Together, these cycles drive all of the processes of soil chemistry.

New knowledge about the chemistry of soils often comes from studies in the laboratory, in which soil samples taken from undisturbed soil horizons in the field are used in experiments that include replicated treatments and controls. In many cases, the soil samples are air dried at ambient temperatures (e.g., 25 °C (77 °F)) and sieved to a 2 mm size prior to storage for further study. Such drying and sieving markedly disrupts soil structure, microbial population diversity, and chemical properties related to pH, oxidation-reduction status, manganese oxidation state, and dissolved organic matter, among other properties. [ 7 ]
Renewed interest in recent decades has led many soil chemists to maintain soil samples in a field-moist condition, stored at 4 °C (39 °F) under aerobic conditions before and during investigations. [ 8 ]

Two approaches are frequently used in laboratory investigations in soil chemistry. The first is known as batch equilibration. The chemist adds a given volume of water or salt solution of known concentration of dissolved ions to a mass of soil (e.g., 25 mL of solution to 5 g of soil in a centrifuge tube or flask). The soil slurry then is shaken or swirled for a given amount of time (e.g., 15 minutes to many hours) to establish a steady-state or equilibrium condition, prior to filtering or centrifuging at high speed to separate sand grains, silt particles, and clay colloids from the equilibrated solution. [ 9 ] The filtrate or centrifugate then is analyzed using one of several methods, including ion-specific electrodes, atomic absorption spectrophotometry, inductively coupled plasma spectrometry, ion chromatography, and colorimetric methods. In each case, the analysis quantifies the concentration or activity of an ion or molecule in the solution phase, and by multiplying the measured concentration or activity (e.g., in mg ion/mL) by the solution-to-soil ratio (mL of extraction solution/g soil), the chemist obtains the result in mg ion/g soil. This result, based on the mass of soil, allows comparisons between different soils and treatments. A related approach uses a known volume of solution to leach (infiltrate) the extracting solution through a quantity of soil in small columns at a controlled rate, to simulate how rain, snow meltwater, and irrigation water pass through soils in the field. The filtrate then is analyzed using the same methods as in batch equilibrations. [ 10 ]

Another approach to quantifying soil processes and phenomena uses in situ methods that do not disrupt the soil, as occurs when the soil is shaken or leached with an extracting solution. These methods usually use surface spectroscopic techniques, such as Fourier transform infrared spectroscopy, nuclear magnetic resonance, Mössbauer spectroscopy, and X-ray spectroscopy. These approaches aim to obtain information on the chemical nature of the mineralogy and chemistry of particle and colloid surfaces, and on how ions and molecules are associated with such surfaces by adsorption, complexation, and precipitation. [ 11 ]

These laboratory experiments and analyses have an advantage over field studies in that chemical mechanisms of how ions and molecules react in soils can be inferred from the data. One can draw conclusions or frame new hypotheses about similar reactions in different soils with diverse textures, organic matter contents, types of clay minerals and oxides, pH, and drainage conditions. Laboratory studies have the disadvantage that they lose some of the realism and heterogeneity of undisturbed soil in the field, while gaining control and the power of extrapolation to unstudied soils. Mechanistic laboratory studies combined with more realistic, less controlled, observational field studies often yield accurate approximations of the behavior and chemistry of soils that may be spatially heterogeneous and temporally variable. Another challenge faced by soil chemists is how microbial populations and enzyme activity in field soils may be changed when the soil is disturbed, both in the field and the laboratory, particularly when soil samples are dried prior to laboratory studies and analysis. [ 12 ]
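As a concrete illustration of the batch-equilibration arithmetic described above, converting a measured solution concentration to a soil-mass basis is a single multiplication. The numbers below are hypothetical; the 25 mL : 5 g ratio follows the example in the text:

```python
def soil_mass_basis(conc_mg_per_ml, solution_ml, soil_g):
    """Convert a measured solution concentration (mg ion/mL)
    to a soil-mass basis (mg ion/g soil) by multiplying by the
    solution-to-soil ratio (mL solution / g soil)."""
    return conc_mg_per_ml * (solution_ml / soil_g)

# Hypothetical example: 0.004 mg/mL measured in the filtrate,
# after shaking 25 mL of extracting solution with 5 g of soil:
result = soil_mass_basis(0.004, solution_ml=25.0, soil_g=5.0)
print(f"{result:.3f} mg ion/g soil")  # 0.020 mg/g
```

Expressing results per gram of soil in this way is what allows different soils and treatments to be compared directly, as the text notes.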
https://en.wikipedia.org/wiki/Soil_chemistry
Soil conservation is the prevention of loss of the topmost layer of the soil from erosion, and the prevention of reduced fertility caused by overuse, acidification, salinization or other chemical soil contamination.

Slash-and-burn and other unsustainable methods of subsistence farming are practiced in some less developed areas. A consequence of deforestation is typically large-scale erosion, loss of soil nutrients and sometimes total desertification. Techniques for improved soil conservation include crop rotation, cover crops, conservation tillage and planted windbreaks, which affect both erosion and fertility. When plants die, they decay and become part of the soil. Code 330 defines standard methods recommended by the U.S. Natural Resources Conservation Service. Farmers have practiced soil conservation for millennia. In Europe, policies such as the Common Agricultural Policy are targeting the application of best management practices, such as reduced tillage, winter cover crops, [ 1 ] plant residues and grass margins, in order to better address soil conservation. Political and economic action is further required to solve the erosion problem. A simple governance hurdle concerns how we value the land, and this can be changed by cultural adaptation. [ 2 ] Soil carbon is a carbon sink, playing a role in climate change mitigation. [ 3 ]

Contour ploughing orients furrows along the contour lines of the farmed area. Furrows move left and right to maintain a constant altitude, which reduces runoff. Contour plowing was practiced by the ancient Phoenicians for slopes between two and ten percent. [ 4 ] Contour plowing can increase crop yields by 10 to 50 percent, partially as a result of greater soil retention. [ 5 ]

Terracing is the practice of creating a series of nearly level, flat terraced areas on a sloping hillside field. The terraces form a series of steps, each at a higher level than the previous, and are protected from erosion by other soil barriers. Terraced farming is more common on small farms.

Keyline design is an enhancement of contour farming in which the total watershed properties are taken into account in forming the contour lines.

Trees, shrubs and ground cover are effective perimeter treatments for soil erosion prevention, impeding surface flows. A special form of this perimeter or inter-row treatment is the use of a "grass way", which both channels and dissipates runoff through surface friction, impeding surface runoff and encouraging infiltration of the slowed surface water. [ 6 ]

Windbreaks are sufficiently dense rows of trees at the windward exposure of an agricultural field subject to wind erosion. [ 7 ] Evergreen species provide year-round protection; however, as long as foliage is present in the seasons of bare soil surfaces, the effect of deciduous trees may be adequate.

Cover crops such as nitrogen-fixing legumes, white turnips, radishes and other species are rotated with cash crops to blanket the soil year-round and act as green manure that replenishes nitrogen and other critical nutrients. Cover crops also help to suppress weeds. [ 8 ]

Soil-conservation farming involves no-till farming, "green manures" and other soil-enhancing practices that make it hard for the soil to be eroded. Such farming methods attempt to mimic the biology of virgin lands.
They can revive damaged soil, minimize erosion, encourage plant growth, eliminate the use of nitrogen fertilizer or fungicide, produce above-average yields and protect crops during droughts or flooding. The result is less labor and lower costs that increase farmers' profits. No-till farming and cover crops act as sinks for nitrogen and other nutrients, increasing the amount of soil organic matter. [ 8 ]

Repeated plowing/tilling degrades soil, killing its beneficial fungi and earthworms. Once damaged, soil may take multiple seasons to fully recover, even in optimal circumstances. [ 8 ] Critics argue that no-till and related methods are impractical and too expensive for many growers, partly because they require new equipment. They cite advantages for conventional tilling depending on the geography, crops and soil conditions. Some farmers have contended that no-till complicates pest control, delays planting, and leaves post-harvest residues, especially for corn, that are hard to manage. [ 8 ]

The use of pesticides can contaminate the soil, nearby vegetation and water sources for a long time. Pesticides affect soil structure and its biotic and abiotic composition. [ 9 ] [ 10 ] Differentiated taxation schemes are among the options investigated in the academic literature for reducing their use. [ 11 ] Alternatives to pesticides are available and include methods of cultivation, use of biological pest controls (such as pheromones and microbial pesticides), genetic engineering (mostly of crops), and methods of interfering with insect breeding. [ 12 ] Application of composted yard waste has also been used as a way of controlling pests. [ 13 ]

Salinity in soil is caused by irrigating with salty water. The water then evaporates from the soil, leaving the salt behind. Salt breaks down the soil structure, causing infertility and reduced growth. [ citation needed ] [ 14 ] The ions responsible for salination are sodium (Na+), potassium (K+), calcium (Ca2+), magnesium (Mg2+) and chloride (Cl−). Salinity is estimated to affect about one third of the earth's arable land. [ 15 ] Soil salinity adversely affects crop metabolism, and erosion usually follows. Salinity occurs on drylands from over-irrigation and in areas with shallow saline water tables. Over-irrigation deposits salts in upper soil layers as a byproduct of soil infiltration; irrigation merely increases the rate of salt deposition. The best-known case of shallow saline water table capillary action occurred in Egypt after the 1970 construction of the Aswan Dam: the change in the groundwater level led to high salt concentrations in the water table, and the continuously high water table led to soil salination. Use of humic acids may prevent excess salination, especially where irrigation is excessive. [ 16 ] Humic acids can fix both anions and cations and eliminate them from root zones. [ citation needed ] Planting species that can tolerate saline conditions can be used to lower water tables and thus reduce the rate of capillary and evaporative enrichment of surface salts. Salt-tolerant plants include saltbush, a plant found in much of North America and in the Mediterranean regions of Europe.

When worms excrete feces in the form of casts, a balanced selection of minerals and plant nutrients is made available in a form accessible for root uptake. Earthworm casts are five times richer in available nitrogen, seven times richer in available phosphates and eleven times richer in available potash than the surrounding upper 150 millimetres (5.9 in) of soil.
The weight of casts produced may be greater than 4.5 kg per worm per year. By burrowing, the earthworm improves soil porosity, creating channels that enhance aeration and drainage. [ 17 ] Other important soil organisms include nematodes, mycorrhizae and bacteria. A quarter of all animal species live underground. According to the Food and Agriculture Organization's 2020 report "State of knowledge of soil biodiversity – Status, challenges and potentialities", there are major gaps in knowledge about biodiversity in soils. [ 18 ] [ 19 ]

Degraded soil requires synthetic fertilizer to produce high yields. Its lack of structure increases erosion and carries nitrogen and other pollutants into rivers and streams. [ 8 ] Each one percent increase in soil organic matter helps soil hold 20,000 gallons more water per acre. [ 8 ]

To allow plants to fully realize their phytonutrient potential, active mineralization of the soil is sometimes undertaken. This can involve adding crushed rock or chemical soil supplements. In either case, the purpose is to combat mineral depletion. A broad range of minerals can be used, including common substances such as phosphorus and more exotic substances such as zinc and selenium. Extensive research examines the phase transitions of minerals in soil with aqueous contact. [ 20 ]

Flooding can bring significant sediments to an alluvial plain. While this effect may not be desirable if floods endanger life or if the sediment originates from productive land, this process of addition to a floodplain is a natural process that can rejuvenate soil chemistry through mineralization. [ citation needed ]
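The organic-matter figure quoted above implies a simple proportionality; a minimal illustration taking the cited 20,000 gallons per acre per 1% soil organic matter at face value (the acreage and SOM increase below are hypothetical):

```python
GALLONS_PER_ACRE_PER_PCT_SOM = 20_000  # figure cited in the text

def extra_water_capacity(som_increase_pct, acres):
    """Additional water held (gallons), assuming the cited linear
    relationship between SOM and water-holding capacity."""
    return GALLONS_PER_ACRE_PER_PCT_SOM * som_increase_pct * acres

# Raising SOM by 1.5 percentage points on a 100-acre farm:
print(f"{extra_water_capacity(1.5, 100):,.0f} gallons")  # 3,000,000
```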
https://en.wikipedia.org/wiki/Soil_conservation
Soil contamination, soil pollution, or land pollution as a part of land degradation is caused by the presence of xenobiotic (human-made) chemicals or other alterations in the natural soil environment. It is typically caused by industrial activity, agricultural chemicals or improper disposal of waste. The most common chemicals involved are petroleum hydrocarbons, polynuclear aromatic hydrocarbons (such as naphthalene and benzo(a)pyrene), solvents, pesticides, lead, and other heavy metals. [ 1 ] Contamination is correlated with the degree of industrialization and the intensity of chemical usage.

The concern over soil contamination stems primarily from health risks, whether from direct contact with the contaminated soil, from vapour from the contaminants, or from secondary contamination of water supplies within and underlying the soil. [ 2 ] Mapping of contaminated soil sites and the resulting clean-ups are time-consuming and expensive tasks, requiring expertise in geology, hydrology, chemistry, computer modelling, and GIS in Environmental Contamination, as well as an appreciation of the history of industrial chemistry. [ 3 ]

In North America and South-Western Europe the extent of contaminated land is best known, as many of the countries in these areas have a legal framework to identify and deal with this environmental problem. Developing countries tend to be less tightly regulated, despite some of them having undergone significant industrialization.

Soil pollution can be caused by the following (non-exhaustive list): The most common chemicals involved are petroleum hydrocarbons, solvents, pesticides, lead, and other heavy metals. Any activity that leads to other forms of soil degradation (erosion, compaction, etc.) may indirectly worsen the contamination effects, in that soil remediation becomes more tedious.

Historical deposition of coal ash used for residential, commercial, and industrial heating, as well as for industrial processes such as ore smelting, was a common source of contamination in areas that were industrialized before about 1960. Coal naturally concentrates lead and zinc during its formation, as well as other heavy metals to a lesser degree. When the coal is burned, most of these metals become concentrated in the ash (the principal exception being mercury). Coal ash and slag may contain sufficient lead to qualify as a "characteristic hazardous waste", defined in the US as containing more than 5 mg/L of extractable lead using the TCLP procedure. In addition to lead, coal ash typically contains variable but significant concentrations of polynuclear aromatic hydrocarbons (PAHs; e.g., benzo(a)anthracene, benzo(b)fluoranthene, benzo(k)fluoranthene, benzo(a)pyrene, indeno(cd)pyrene, phenanthrene, anthracene, and others). These PAHs are known human carcinogens, and the acceptable concentrations of them in soil are typically around 1 mg/kg. Coal ash and slag can be recognised by the presence of off-white grains in soil, gray heterogeneous soil, or (for coal slag) bubbly, vesicular pebble-sized grains.

Treated sewage sludge, known in the industry as biosolids, has become controversial as a "fertilizer". As the byproduct of sewage treatment, it generally contains more contaminants, such as organisms, pesticides, and heavy metals, than other soil. [ 4 ] In the European Union, the Urban Waste Water Treatment Directive allows sewage sludge to be sprayed onto land. The volume is expected to double to 185,000 tons of dry solids in 2005.
Sewage sludge has good agricultural properties due to its high nitrogen and phosphate content. In 1990/1991, 13% wet weight was sprayed onto 0.13% of the land; however, this was expected to rise 15-fold by 2005. [ needs update ] Advocates [ who? ] say there is a need to control this so that pathogenic microorganisms do not get into water courses, and to ensure that there is no accumulation of heavy metals in the topsoil. [ 5 ]

A pesticide is a substance used to kill a pest. A pesticide may be a chemical substance, a biological agent (such as a virus or bacterium), an antimicrobial, a disinfectant or a device used against any pest. Pests include insects, plant pathogens, weeds, mollusks, birds, mammals, fish, nematodes (roundworms) and microbes that compete with humans for food, destroy property, spread or are a vector for disease, or cause a nuisance. Although there are benefits to the use of pesticides, there are also drawbacks, such as potential toxicity to humans and other organisms. [ 6 ] [ 7 ]

Herbicides are used to kill weeds, especially on pavements and railways. They are similar to auxins, and most are biodegradable by soil bacteria. However, one group, the chlorophenoxy herbicides (2,4-D and 2,4,5-T), can carry the impurity dioxin, which is very toxic and causes fatalities even in low concentrations. Another herbicide is paraquat. It is highly toxic, but it rapidly degrades in soil due to the action of bacteria and does not kill soil fauna. [ 8 ]

Insecticides are used to rid farms of pests which damage crops. Insects damage not only standing crops but also stored ones, and in the tropics it is reckoned that one third of total production is lost during food storage. As with fungicides, the first insecticides used in the nineteenth century were inorganic, e.g. Paris Green and other compounds of arsenic. Nicotine has also been used since 1690. [ 9 ] There are now two main groups of synthetic insecticides:

1. Organochlorines include DDT, Aldrin, Dieldrin and BHC. They are cheap to produce, potent and persistent. DDT was used on a massive scale from the 1930s, with a peak of 72,000 tonnes used in 1970. Usage then fell as the harmful environmental effects were realized. It was found worldwide in fish and birds, and was even discovered in the snow in the Antarctic. It is only slightly soluble in water but is very soluble in the bloodstream. It affects the nervous and endocrine systems and causes the eggshells of birds to lack calcium, making them easily breakable. It is thought to be responsible for the decline in the numbers of birds of prey like ospreys and peregrine falcons in the 1950s; they are now recovering. [ 10 ] As well as being concentrated via the food chain, it is known to enter via permeable membranes, so fish absorb it through their gills. As it has low water solubility, it tends to stay at the water surface, so organisms that live there are most affected. DDT found in fish that formed part of the human food chain caused concern, but the levels found in the liver, kidney and brain tissues were less than 1 ppm, and in fat 10 ppm, below the level likely to cause harm. However, DDT was banned in the UK and the United States to stop its further buildup in the food chain. U.S. manufacturers continued to sell DDT to developing countries, which could not afford the expensive replacement chemicals and did not have such stringent regulations governing the use of pesticides. [ 11 ]

2. Organophosphates, e.g.
parathion, methyl parathion and about 40 other insecticides are available nationally. Parathion is highly toxic, methyl parathion less so, and Malathion is generally considered safe as it has low toxicity and is rapidly broken down in the mammalian liver. This group works by preventing normal nerve transmission: cholinesterase is prevented from breaking down the transmitter substance acetylcholine, resulting in uncontrolled muscle movements. [ 12 ]

The disposal of munitions, and a lack of care in the manufacture of munitions caused by the urgency of production, can contaminate soil for extended periods. There is little published evidence on this type of contamination, largely because of restrictions placed by the governments of many countries on the publication of material related to the war effort. However, mustard gas stored during World War II has contaminated some sites for up to 50 years, [ 13 ] and the testing of anthrax as a potential biological weapon contaminated the whole island of Gruinard. [ 14 ]

Contaminated or polluted soil directly affects human health through direct contact with soil or via inhalation of soil contaminants that have vaporized; potentially greater threats are posed by the infiltration of soil contamination into groundwater aquifers used for human consumption, sometimes in areas apparently far removed from any source of above-ground contamination. Toxic metals can also make their way up the food chain through plants that grow in soils containing high concentrations of heavy metals. [ 15 ] This tends to result in the development of pollution-related diseases.

Most exposure is accidental, and exposure can happen through: [ 16 ] However, some studies estimate that 90% of exposure is through eating contaminated food. [ 16 ] Health consequences from exposure to soil contamination vary greatly depending on pollutant type, the pathway of attack, and the vulnerability of the exposed population. Researchers suggest that pesticides and heavy metals in soil may harm cardiovascular health, including through inflammation and changes to the body's internal clock. [ 17 ] Chronic exposure to chromium, lead, and other metals, petroleum, solvents, and many pesticide and herbicide formulations can be carcinogenic, can cause congenital disorders, or can cause other chronic health conditions. Industrial or human-made concentrations of naturally occurring substances, such as nitrate and ammonia associated with livestock manure from agricultural operations, have also been identified as health hazards in soil and groundwater. [ citation needed ]

Chronic exposure to benzene at sufficient concentrations is known to be associated with a higher incidence of leukemia. Mercury and cyclodienes are known to induce higher incidences of kidney damage and some irreversible diseases. PCBs and cyclodienes are linked to liver toxicity. Organophosphates and carbamates can cause a chain of responses leading to neuromuscular blockage. Many chlorinated solvents induce liver changes, kidney changes, and depression of the central nervous system. There is an entire spectrum of further health effects, such as headache, nausea, fatigue, eye irritation and skin rash, for the above-cited and other chemicals. At sufficient dosages a large number of soil contaminants can cause death by exposure via direct contact, inhalation or ingestion of contaminants in groundwater contaminated through soil. [ citation needed ]
The Scottish Government has commissioned the Institute of Occupational Medicine to undertake a review of methods to assess the risk to human health from contaminated land. The overall aim of the project is to develop guidance that should be useful to Scottish local authorities in assessing whether sites represent a significant possibility of significant harm (SPOSH) to human health. It is envisaged that the output of the project will be a short document providing high-level guidance on health risk assessment, with reference to existing published guidance and methodologies identified as particularly relevant and helpful. The project will examine how policy guidelines have been developed for determining the acceptability of risks to human health, and will propose an approach for assessing what constitutes unacceptable risk, in line with the criteria for SPOSH as defined in the legislation and the Scottish Statutory Guidance. Unsurprisingly, soil contaminants can have significant deleterious consequences for ecosystems. [ 18 ] Radical changes in soil chemistry can arise from the presence of many hazardous chemicals, even at low concentrations of the contaminant species. These changes can manifest as altered metabolism of the endemic microorganisms and arthropods resident in a given soil environment, and the result can be the virtual eradication of part of the primary food chain, which in turn can have major consequences for predator or consumer species. Even if the chemical effect on lower life forms is small, the lower pyramid levels of the food chain may ingest alien chemicals, which typically become more concentrated at each consuming rung of the food chain. Many of these effects are now well known, such as the concentration of persistent DDT materials in avian consumers, leading to weakened egg shells, increased chick mortality, and potential extinction of species. [ 19 ] Agricultural land with certain types of soil contamination is also affected. Contaminants typically alter plant metabolism, often causing a reduction in crop yields. This has a secondary effect upon soil conservation, since the languishing crops cannot shield the soil from erosion. Some of these chemical contaminants have long half-lives, and in other cases derivative chemicals are formed from the decay of primary soil contaminants. [ 20 ] Heavy metals and other soil contaminants can adversely affect the activity, species composition and abundance of soil microorganisms, thereby threatening soil functions such as the biochemical cycling of carbon and nitrogen. [ 21 ] However, soil contaminants can also become less bioavailable over time, and microorganisms and ecosystems can adapt to altered conditions. Soil properties such as pH, organic matter content and texture strongly modify the mobility, bioavailability and toxicity of pollutants in contaminated soils. [ 22 ] The same amount of contaminant can be toxic in one soil but essentially harmless in another, which underlines the need for soil-specific risk assessment and measures. Cleanup or environmental remediation is analyzed by environmental scientists, who use field measurements of soil chemicals and also apply computer models ( GIS in Environmental Contamination ) to analyze the transport [ 23 ] and fate of soil chemicals.
Various technologies have been developed for the remediation of oil-contaminated soil and sediments, [ 24 ] and several principal strategies for remediation exist. Various national standards for concentrations of particular contaminants include the United States EPA Region 9 Preliminary Remediation Goals (U.S. PRGs), the U.S. EPA Region 3 Risk Based Concentrations (U.S. EPA RBCs) and the National Environment Protection Council of Australia Guideline on Investigation Levels in Soil and Groundwater. The immense and sustained growth of the People's Republic of China since the 1970s has exacted a price from the land in increased soil pollution. The Ministry of Ecology and Environment considers it a threat to the environment, to food safety and to sustainable agriculture. According to a scientific sampling, 150 million mu (100,000 square kilometres) of China's cultivated land has been polluted, with contaminated water being used to irrigate a further 32.5 million mu (21,670 square kilometres) and another 2 million mu (1,300 square kilometres) covered or destroyed by solid waste. In total, this area accounts for one-tenth of China's cultivable land, and is mostly in economically developed regions. An estimated 12 million tonnes of grain are contaminated by heavy metals every year, causing direct losses of 20 billion yuan (US$2.57 billion). [ 27 ] A recent survey shows that 19% of China's agricultural soils are contaminated with heavy metals and metalloids, and that concentrations of these heavy metals in the soil have increased dramatically. [ 28 ] According to data received from member states, the European Union has more than 2.5 million estimated potentially contaminated sites [ 29 ] and around 342,000 identified contaminated sites. Municipal and industrial wastes contribute most to soil contamination (38%), followed by the industrial/commercial sector (34%). Mineral oil and heavy metals are the main contaminants, together accounting for around 60% of soil contamination. The management of contaminated sites is estimated to cost around €6 billion annually. [ 29 ] Generic guidance commonly used in the United Kingdom is the Soil Guideline Values published by the Department for Environment, Food and Rural Affairs (DEFRA) and the Environment Agency. These are screening values indicating the minimal acceptable level of a substance; above this level, significant risk of harm to human health cannot be ruled out. They have been derived using the Contaminated Land Exposure Assessment model (CLEA UK). Input parameters such as Health Criteria Values, age and land use are fed into CLEA UK to obtain a probabilistic output. [ 30 ] Guidance by the Inter Departmental Committee for the Redevelopment of Contaminated Land (ICRCL) [ 31 ] has been formally withdrawn by DEFRA for use as a prescriptive document to determine the potential need for remediation or further assessment. The CLEA model published by DEFRA and the Environment Agency (EA) in March 2002 sets a framework for the appropriate assessment of risks to human health from contaminated land, as required by Part IIA of the Environmental Protection Act 1990. As part of this framework, generic Soil Guideline Values (SGVs) have so far been derived for ten contaminants to be used as "intervention values". [ 32 ] These values should not be considered remedial targets but rather values above which further detailed assessment should be considered; see Dutch standards.
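Screening of this kind reduces to comparing a measured soil concentration against a guideline value and acting on the exceedance ratio. A minimal sketch in Python; the function name and the numbers used are illustrative placeholders, not official SGVs or any other published limits:

    # A coarse screening comparison in the spirit of the guideline values
    # discussed above. The threshold used here is an illustrative
    # placeholder, not an official SGV or regulatory limit.
    def screen(measured: float, guideline: float) -> str:
        """Return a screening verdict for one contaminant concentration."""
        ratio = measured / guideline
        if ratio <= 1.0:
            return f"below guideline (ratio {ratio:.2f}): no further assessment triggered"
        return f"exceeds guideline (ratio {ratio:.2f}): detailed assessment advised"

    # Hypothetical soil nickel result (mg/kg) against a placeholder guideline:
    print(screen(measured=130.0, guideline=75.0))

In practice the guideline itself can vary with site parameters (land use, soil organic matter), which is why models such as CLEA generate the values rather than tabulating a single universal number.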
Three sets of CLEA SGVs have been produced for three different land uses. It is intended that the SGVs replace the former ICRCL values. The CLEA SGVs relate to assessing chronic (long-term) risks to human health and do not apply to the protection of ground workers during construction, or to other potential receptors such as groundwater, buildings, plants or other ecosystems. The CLEA SGVs are not directly applicable to a site completely covered in hardstanding, as there is no direct exposure route to contaminated soils. [ 33 ] To date, the first ten of fifty-five contaminant SGVs have been published, for the following: arsenic, cadmium, chromium, lead, inorganic mercury, nickel, selenium, ethylbenzene, phenol and toluene. Draft SGVs for benzene, naphthalene and xylene have been produced, but their publication is on hold. Toxicological data (Tox) have been published for each of these contaminants, as well as for benzo[a]pyrene, benzene, dioxins, furans and dioxin-like PCBs, naphthalene, vinyl chloride, 1,1,2,2-tetrachloroethane, 1,1,1,2-tetrachloroethane, 1,1,1-trichloroethane, tetrachloroethene, carbon tetrachloride, 1,2-dichloroethane, trichloroethene and xylene. The SGVs for ethylbenzene, phenol and toluene depend on the soil organic matter (SOM) content (which can be calculated from the total organic carbon (TOC) content); as an initial screen, the SGVs for 1% SOM are considered appropriate. [ 34 ] As of February 2021, there are more than 2,500 contaminated sites in Canada. [ 35 ] One infamous contaminated site is located near a nickel-copper smelter in Sudbury, Ontario. A study investigating heavy metal pollution in the vicinity of the smelter found elevated levels of nickel and copper in the soil, with values as high as 5,104 ppm Ni and 2,892 ppm Cu within 1.1 km of the smelter. Other metals, including iron, cobalt and silver, were also found in the soil. Furthermore, examination of the vegetation surrounding the smelter showed that it too had been affected: the plants contained nickel, copper and aluminium as a result of the soil contamination. [ 36 ] In March 2009, the issue of uranium poisoning in Punjab attracted press coverage. It was alleged to be caused by fly ash ponds of thermal power stations, which reportedly led to severe birth defects in children in the Faridkot and Bhatinda districts of Punjab. The news reports claimed the uranium levels were more than 60 times the maximum safe limit. [ 37 ] [ 38 ] In 2012, the Government of India confirmed [ 39 ] that groundwater in the Malwa belt of Punjab contains uranium at levels 50% above the trace limits set by the United Nations' World Health Organization (WHO). Scientific studies, based on over 1,000 samples from various sampling points, could not trace the source to fly ash or to any sources from thermal power plants or industry, as originally alleged. The studies also revealed that the uranium concentration in the groundwater of Malwa district is not 60 times the WHO limit, but only 50% above it at three locations. The highest concentration found in the samples was lower than concentrations found naturally in groundwaters currently used for human purposes elsewhere, such as in Finland. [ 40 ] Research is underway to identify natural or other sources of the uranium.
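As an aside, the Chinese land-area figures quoted earlier in this section can be sanity-checked with simple unit arithmetic, assuming the customary conversion 1 mu = 1/15 hectare (about 666.7 m²); a minimal sketch in Python:

    # Sanity check of the land-area figures quoted earlier, assuming the
    # customary conversion 1 mu = 1/15 hectare (about 666.7 m^2).
    MU_PER_HECTARE = 15
    KM2_PER_HECTARE = 0.01  # 1 hectare = 0.01 km^2

    def mu_to_km2(mu: float) -> float:
        """Convert an area in mu to square kilometres."""
        return mu / MU_PER_HECTARE * KM2_PER_HECTARE

    areas_mu = {
        "polluted cultivated land": 150e6,
        "irrigated with contaminated water": 32.5e6,
        "covered or destroyed by solid waste": 2e6,
    }
    for label, mu in areas_mu.items():
        print(f"{label}: {mu_to_km2(mu):,.0f} km^2")
    # -> 100,000 km^2, ~21,667 km^2 and ~1,333 km^2, matching the rounded
    #    figures quoted in the text.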
https://en.wikipedia.org/wiki/Soil_contamination
https://en.wikipedia.org/wiki/Soil_contamination_in_China
Soil crusts are soil surface layers that are distinct from the rest of the bulk soil, often hardened and with a platy surface. Depending on the manner of formation, soil crusts can be biological or physical. Biological soil crusts are formed by communities of microorganisms that live on the soil surface, whereas physical crusts are formed by physical impact, such as that of raindrops. Biological soil crusts are communities of living organisms on the soil surface in arid and semi-arid ecosystems. They are found throughout the world, with varying species composition and cover depending on topography, soil characteristics, climate, plant community, microhabitats, and disturbance regimes. Biological soil crusts perform important ecological roles, including carbon fixation, nitrogen fixation and soil stabilization; they alter soil albedo and water relations, and affect germination and nutrient levels in vascular plants. They can be damaged by fire, recreational activity, grazing and other disturbance, and can require long time periods to recover their composition and function. Biological soil crusts are also known as cryptogamic, microbiotic, microphytic, or cryptobiotic soils. Physical (as opposed to biological) soil crusts result from raindrop or trampling impacts. They are often hardened relative to uncrusted soil due to the accumulation of salts and silica. Physical crusts can coexist with biological soil crusts, but have a different ecological impact due to their difference in formation and composition. Physical soil crusts often reduce water infiltration, can inhibit plant establishment, and, when disrupted, can erode rapidly. [ 1 ]
https://en.wikipedia.org/wiki/Soil_crust
Soil ecology studies the interactions among soil organisms and between organisms and their environment. [ 1 ] It is particularly concerned with the cycling of nutrients, soil aggregate formation and soil biodiversity. [ 2 ] Soil is made up of a multitude of physical, chemical, and biological entities, with many interactions occurring among them. It is a heterogeneous mixture of minerals and organic matter, with variations in moisture, temperature and nutrients. Soil supports a wide range of living organisms and is an essential component of terrestrial ecology. Soil fauna is crucial to soil formation, litter decomposition, nutrient cycling, biotic regulation, and the promotion of plant growth. Yet soil organisms remain underrepresented in studies of soil processes and in existing modeling exercises. This is a consequence of assuming that much below-ground diversity is ecologically redundant and that soil food webs exhibit a high degree of omnivory. However, evidence is accumulating on the strong influence of abiotic filters, such as temperature, moisture and soil pH, as well as soil habitat characteristics, in controlling their spatial and temporal patterns. [ 18 ] Soils are complex systems, and their complexity resides in their heterogeneous nature: a mixture of air, water, minerals, organic compounds, and living organisms. The spatial variation, both horizontal and vertical, of all these constituents is related to soil-forming agents varying from micro to macro scales. [ 19 ] Consequently, the horizontally patchy distribution of soil properties (soil temperature, moisture, pH, litter/nutrient availability, etc.) also drives the patchiness of soil organisms across the landscape, [ 20 ] and has been one of the main arguments for explaining the great diversity observed in soil communities. [ 21 ] Because soils also show vertical stratification of their elemental constituents along the soil profile, as a result of microclimate, soil texture, and resource quantity and quality differing between soil horizons, soil communities also change in abundance and structure with soil depth. [ 22 ] [ 18 ] The majority of these organisms are aerobic, so the amount of pore space, the pore-size distribution, surface area, and oxygen levels are crucial to their life cycles and activities. The smallest creatures (microbes) use air-filled micropores to grow, whereas larger animals require larger spaces, macropores, or the water film surrounding soil particles, to move in search of food. Therefore, soil textural properties, together with the depth of the water table, are also important factors regulating their diversity, population sizes, and vertical stratification. Ultimately, the structure of soil communities depends strongly not only on the natural soil-forming factors but also on human activities (agriculture, forestry, urbanization), and it determines whether landscape soils are healthy or contaminated, pristine or degraded. [ 18 ] Since all these drivers of biodiversity change also operate above ground, it is thought that there must be some concordance between the mechanisms regulating the spatial patterns and structure of above- and below-ground communities. In support of this, a small-scale field study revealed that the relationship between environmental heterogeneity and species richness might be a general property of ecological communities.
[ 21 ] In contrast, molecular examination of 17,516 environmental 18S rRNA gene sequences, representing 20 phyla of soil animals and covering a range of biomes and latitudes around the world, indicated otherwise; the main conclusion from this study was that below-ground animal diversity may be inversely related to above-ground biodiversity. [ 23 ] [ 18 ] The lack of distinct latitudinal gradients in soil biodiversity contrasts with the clear global patterns observed for plants above ground, and has led to the assumption that the two are controlled by different factors. [ 24 ] For example, in 2007 Lozupone and Knight found that salinity was the major environmental determinant of bacterial diversity composition across the globe, rather than extremes of temperature, pH, or other physical and chemical factors. [ 25 ] In another global-scale study, in 2014, Tedersoo et al. concluded that fungal richness is causally unrelated to plant diversity and is better explained by climatic factors, followed by edaphic and spatial patterns. [ 26 ] Global patterns of the distribution of macroscopic organisms are far more poorly documented. However, the little evidence available appears to indicate that, at large scales, soil metazoans respond to altitudinal, latitudinal or area gradients in the same way as above-ground organisms. [ 27 ] In contrast, at local scales, the great diversity of microhabitats commonly found in soils provides the niche partitioning required to create hot spots of diversity in just a gram of soil. [ 24 ] [ 18 ] Spatial patterns of soil biodiversity are difficult to explain, and their potential linkages to many soil processes and to overall ecosystem functioning are debated. For example, while some studies have found that reductions in the abundance and presence of soil organisms result in the decline of multiple ecosystem functions, [ 28 ] others have concluded that above-ground plant diversity alone is a better predictor of ecosystem multi-functionality than soil biodiversity. [ 29 ] Soil organisms exhibit a wide array of feeding preferences, life cycles and survival strategies, and they interact within complex food webs. [ 30 ] Consequently, species richness per se has very little influence on soil processes, and functional dissimilarity can have stronger impacts on ecosystem functioning. [ 31 ] Therefore, besides the difficulties in linking above- and below-ground diversity at different spatial scales, gaining a better understanding of biotic effects on ecosystem processes may require incorporating a great number of components together with several multi-trophic levels, [ 32 ] as well as the much less considered non-trophic interactions, such as phoresy and passive consumption. [ 33 ] In addition, if soil systems are indeed self-organized, and soil organisms concentrate their activities within a selected set of discrete scales with some form of overall coordination, [ 34 ] there is no need to look for external factors controlling the assemblages of soil constituents. Instead, we might simply need to accept that the linkages between above- and below-ground diversity and soil processes are difficult to predict. [ 18 ] Recent advances are emerging from studying sub-organism-level responses using environmental DNA, [ 35 ] and various omics approaches, such as metagenomics, metatranscriptomics, proteomics and proteogenomics, are rapidly advancing, at least for the microbial world.
[ 36 ] Metaphenomics has recently been proposed as a better way to encompass both the omics data and the environmental constraints. [ 37 ] [ 18 ] Soil harbors many microbes: bacteria, archaea, protists, fungi and viruses. [ 38 ] A majority of these microbes have not been cultured and remain undescribed. [ 39 ] The development of next-generation sequencing technologies has opened up avenues to investigate microbial diversity in soil. [ 40 ] One feature of soil microbes is spatial separation, which influences microbe-to-microbe interactions and ecosystem functioning in the soil habitat. [ 41 ] Microorganisms in soil are concentrated in specific sites called 'hot spots', which are characterized by an abundance of resources such as moisture or nutrients. [ 42 ] Examples include the rhizosphere and areas of accumulated organic matter such as the detritusphere. [ 43 ] These areas are characterized by the presence of decaying root litter and exudates released from plant roots, which regulate the availability of carbon and nitrogen and in consequence modulate microbial processes. [ 43 ] Apart from labile organic carbon, the spatial separation of microbes in soil may be influenced by other environmental factors such as temperature and moisture. [ 44 ] Other abiotic factors, like pH and mineral nutrient composition, may also influence the distribution of microorganisms in soil. [ 45 ] The variability of these factors makes soil a dynamic system. [ 46 ] Interactions between members of the soil microhabitat take place via chemical signaling, which is mediated by soluble metabolites and volatile organic compounds, in addition to extracellular polysaccharides. [ 47 ] Chemical signals enable microbes to interact; for example, bacterial peptidoglycans stimulate the growth of Candida albicans. [ 48 ] Reciprocally, C. albicans production of farnesol modulates the expression of virulence genes and influences bacterial quorum sensing. [ 49 ] Trophic interactions among microbes in the same environment are driven by molecular communication. [ 50 ] Microbes may also exchange metabolites to support each other's growth: for example, the release of extracellular enzymes by ectomycorrhizal fungi decomposes organic matter and releases nutrients that benefit other members of the population, while in exchange organic acids from bacteria stimulate fungal growth. [ 51 ] Such trophic interactions, especially metabolite dependencies, drive species interactions and are important in the assembly of soil microbial communities. [ 52 ] Diverse organisms make up the soil food web. They range in size from one-celled bacteria, algae, fungi, and protozoa, to more complex nematodes and micro-arthropods, to the visible earthworms, insects, small vertebrates, and plants. As these organisms eat, grow, and move through the soil, they make it possible to have clean water, clean air, healthy plants, and moderated water flow. The soil food web is an integral part of landscape processes in many ways. Soil organisms decompose organic compounds, including manure, plant residues, and pesticides, preventing them from entering water and becoming pollutants. They sequester nitrogen and other nutrients that might otherwise enter groundwater, and they fix nitrogen from the atmosphere, making it available to plants. Many organisms enhance soil aggregation and porosity, thus increasing infiltration and reducing surface runoff. Soil organisms prey on crop pests and are food for above-ground animals.
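Sequencing surveys of the kind described above are commonly summarized with richness and diversity indices. A minimal sketch in Python, using hypothetical OTU (operational taxonomic unit) counts rather than data from any study cited here:

    import math

    # Hypothetical OTU counts for one soil sample; real surveys derive
    # such counts from clustered rRNA gene sequences.
    otu_counts = {"OTU_1": 120, "OTU_2": 45, "OTU_3": 30, "OTU_4": 5}

    total = sum(otu_counts.values())
    richness = len(otu_counts)  # number of distinct taxa observed

    # Shannon index H' = -sum(p_i * ln p_i) over the observed taxa.
    shannon = -sum((n / total) * math.log(n / total)
                   for n in otu_counts.values() if n > 0)

    print(f"richness = {richness}, Shannon H' = {shannon:.3f}")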
Research interests span many aspects of soil ecology and microbiology. Fundamentally, researchers are interested in understanding the interplay among microorganisms, fauna, and plants, the biogeochemical processes they carry out, and the physical environment in which their activities take place, and in applying this knowledge to address environmental problems. Example research projects include examining the biogeochemistry and microbial ecology of septic drain field soils used to treat domestic wastewater, the role of anecic earthworms in controlling the movement of water and the nitrogen cycle in agricultural soils, and the assessment of soil quality in turf production. [ 53 ] Of particular interest as of 2006 is understanding the roles and functions of arbuscular mycorrhizal fungi in natural ecosystems. The effect of anthropic soil conditions on arbuscular mycorrhizal fungi and the production of glomalin by these fungi are both of interest because of their roles in sequestering atmospheric carbon dioxide.
https://en.wikipedia.org/wiki/Soil_ecology
The soil food web is the community of organisms living all or part of their lives in the soil. It describes a complex living system in the soil and how it interacts with the environment, plants, and animals. Food webs describe the transfer of energy between species in an ecosystem. While a food chain examines one linear energy pathway through an ecosystem, a food web is more complex and illustrates all of the potential pathways. Much of this transferred energy comes from the sun. Plants use the sun’s energy to convert inorganic compounds into energy-rich organic compounds, turning carbon dioxide and minerals into plant material by photosynthesis. Plant flowers exude energy-rich nectar above ground, and plant roots exude acids, sugars, and ectoenzymes into the rhizosphere, adjusting the pH and feeding the food web underground. [ 2 ] [ 3 ] [ 4 ] Plants are called autotrophs because they make their own energy; they are also called producers because they produce energy available for other organisms to eat. Heterotrophs are consumers that cannot make their own food. In order to obtain energy, they eat plants or other heterotrophs. In above-ground food webs, energy moves from producers (plants) to primary consumers ( herbivores ) and then to secondary consumers (predators). The term trophic level refers to the different levels or steps in the energy pathway; the producers, consumers, and decomposers are the main trophic levels. This chain of energy transfer from one species to another can continue several more times, but eventually ends. At the end of the food chain, decomposers such as bacteria and fungi break down dead plant and animal material into simple nutrients. The nature of soil makes direct observation of food webs difficult. Since soil organisms range in size from less than 0.1 mm (nematodes) to greater than 2 mm (earthworms), there are many different ways to extract them. Soil samples are often taken using a metal core. Larger macrofauna such as earthworms and insect larvae can be removed by hand, but this is impossible for smaller nematodes and soil arthropods. Most methods to extract small organisms are dynamic; they depend on the ability of the organisms to move out of the soil. For example, a Berlese funnel, used to collect small arthropods, creates a light/heat gradient in the soil sample. As the microarthropods move down, away from the light and heat, they fall through a funnel and into a collection vial. A similar method, the Baermann funnel, is used for nematodes. The Baermann funnel is wet, however (while the Berlese funnel is dry), and does not depend on a light/heat gradient. Nematodes move out of the soil into the water and, being denser than water and unable to swim, sink to the bottom of the funnel. Soil microbial communities are characterized in many different ways. The activity of microbes can be measured by their respiration and carbon dioxide release. The cellular components of microbes can be extracted from soil and genetically profiled, or microbial biomass can be calculated by weighing the soil before and after fumigation. There are three different types of food web representations: topological (or traditional) food webs, flow webs and interaction webs. These webs can describe systems both above and below ground. Early food webs were topological; they were descriptive and provided a nonquantitative picture of consumers, resources and the links between them. Pimm et al.
(1991) described these webs as maps of which organisms in a community eat which other kinds. The earliest topological food web, made in 1912, examined the predators and parasites of the cotton boll weevil (reviewed by Pimm et al. 1991). Researchers analyzed and compared topological webs between ecosystems by measuring the webs’ interaction chain lengths and connectivity. [ 5 ] One problem faced in standardizing such measurements is that there are often too many species for each to have a separate box. Depending on the author, the number of species aggregated into or separated out of functional groups may differ. [ 6 ] Authors may even eliminate some organisms. By convention, the dead material flowing back to detritus is not shown, as it would complicate the figure, but it is taken into account in any calculations. [ 6 ] Flow webs build on interconnected food chains, adding quantitative information on the movement of carbon or other nutrients from producers to consumers. Hunt et al. (1987) published the first flow web for soil, describing the shortgrass prairie in Colorado, USA. The authors estimated nitrogen transfer rates through the soil food web and calculated nitrogen mineralization rates for a range of soil organisms. In another landmark study, researchers from the Lovinkhoeve Experimental Farm in the Netherlands examined the flow of carbon and illustrated transfer rates with arrows of different thicknesses. [ 7 ] To create a flow web, a topological web is first constructed. After the members of the web are decided, the biomass of each functional group is calculated, usually in kg carbon per hectare. To calculate feeding rates, researchers assume that the population of each functional group is in equilibrium; at equilibrium, the reproduction of the group balances the rate at which members are lost through natural death and predation. [ 8 ] When the feeding rate is known, the efficiency with which nutrients are converted into organism biomass can be calculated. The energy stored in the organisms represents the amount available to be passed on to the next trophic level. After constructing the first soil flow webs, researchers discovered that nutrients and energy flowed from lower resources to higher trophic levels through three main channels. [ 7 ] [ 8 ] The bacterial and fungal channels had the largest energy flow, while the herbivory channel, in which organisms directly consume plant roots, was smaller. It is now widely recognized that bacteria and fungi are critical to the decomposition of carbon and nitrogen and play important roles in both the carbon cycle and the nitrogen cycle.
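The equilibrium assumption just described is often written as a per-group balance. A hedged sketch of one common formulation from the soil flow-web literature (the symbols are named here for illustration, not quoted from the studies above):

\[
F \;=\; \frac{d\,B + P}{e_a \, e_p},
\]

where \(F\) is the feeding rate of a functional group, \(d\) its specific natural death rate, \(B\) its biomass, \(P\) its loss to predation, and \(e_a\) and \(e_p\) its assimilation and production efficiencies. Because losses must be matched by assimilated and converted intake, feeding rates can typically be solved group by group, starting from the top predators (for which \(P\) is zero) and working down the web.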
An interaction web [ 9 ] is similar to a topological web, but instead of showing the movement of energy or materials, its arrows show how one group influences another. In interaction food web models, every link has two direct effects: one of the resource on the consumer and one of the consumer on the resource. [ 10 ] The effect of the resource on the consumer is positive (the consumer gets to eat), and the effect of the consumer on the resource is negative (it is eaten). These direct, trophic effects can lead to indirect effects. Indirect effects, often drawn as dashed lines, show the effect of one element on another to which it is not directly linked. [ 10 ] For example, in a simple interaction web, when a predator eats a root herbivore, the plant eaten by the herbivore may increase in biomass; we would then say that the predator has a beneficial indirect effect on the plant roots. Bottom-up effects occur when the density of a resource affects the density of its consumer. [ 11 ] For example, an increase in root density causes an increase in herbivore density, which causes a corresponding increase in predator density. Correlations in abundance or biomass between consumers and their resources give evidence for bottom-up control. [ 11 ] An often-cited example of a bottom-up effect is the relationship between herbivores and the primary productivity of plants. In terrestrial ecosystems, the biomass of herbivores and detritivores increases with primary productivity. An increase in primary productivity results in a larger influx of leaf litter into the soil ecosystem, which provides more resources for bacterial and fungal populations to grow. More microbes allow an increase in bacterial- and fungal-feeding nematodes, which are eaten by mites and other predatory nematodes. Thus, the entire food web swells as more resources are added to the base. [ 11 ] When ecologists use the term bottom-up control, they are indicating that the biomass, abundance, or diversity of higher trophic levels depends on resources from lower trophic levels. [ 10 ] Ideas about top-down control are much more difficult to evaluate. Top-down effects occur when the population density of a consumer affects that of its resource; [ 10 ] for example, a predator affects the density of its prey. Top-down control therefore refers to situations where the abundance, diversity or biomass of lower trophic levels depends on effects from consumers at higher trophic levels. [ 10 ] A trophic cascade is a type of top-down interaction that describes the indirect effects of predators. In a trophic cascade, predators induce effects that cascade down the food chain and affect the biomass of organisms at least two links away. [ 10 ] The importance of trophic cascades and top-down control in terrestrial ecosystems is actively debated in ecology (reviewed in Shurin et al. 2006), and the question of whether trophic cascades occur in soils is no less complex. [ 12 ] Trophic cascades do occur in both the bacterial and the fungal energy channels, [ 13 ] [ 14 ] [ 15 ] but cascades may be infrequent, because many other studies show no top-down effects of predators. [ 16 ] [ 17 ] In Mikola and Setälä’s study, microbes eaten by nematodes grew faster when they were grazed upon frequently. This compensatory growth slowed when the microbe-feeding nematodes were removed; therefore, although top predators reduced the number of microbe-feeding nematodes, there was no overall change in microbial biomass. Besides this grazing effect, another barrier to top-down control in soil ecosystems is widespread omnivory, which, by increasing the number of trophic interactions, dampens effects from the top. The soil environment is also a matrix of different temperatures, moistures and nutrient levels, and many organisms are able to become dormant to withstand difficult times. Depending on conditions, predators may be separated from their potential prey by an insurmountable amount of space and time. Any top-down effects that do occur will be limited in strength because soil food webs are donor controlled. Donor control means that consumers have little or no effect on the renewal or input of their resources.
[ 10 ] For example, aboveground herbivores can overgraze an area and decrease the grass population, but decomposers cannot directly influence the rate at which plant litter falls. They can only indirectly influence the rate of input into their system through nutrient recycling, which, by helping plants to grow, eventually creates more litter and detritus to fall. [ 18 ] If the entire soil food web were completely donor controlled, however, bacterivores and fungivores would never greatly affect the bacteria and fungi they consume. While bottom-up effects are no doubt important, many soil ecologists suspect that top-down effects are also sometimes significant. Certain predators or parasites, when added to the soil, can have a large effect on root herbivores and thereby indirectly affect plant fitness. For example, in a coastal shrubland food chain, the native entomopathogenic nematode Heterorhabditis marelatus parasitized ghost moth caterpillars, and ghost moth caterpillars consumed the roots of bush lupine. The presence of H. marelatus correlated with lower caterpillar numbers and healthier plants. In addition, the researchers observed high mortality of bush lupine in the absence of entomopathogenic nematodes. These results implied that the nematode, as a natural enemy of the ghost moth caterpillar, protected the plant from damage. The authors even suggested that the interaction was strong enough to affect the population dynamics of bush lupine; [ 19 ] this was supported in later experimental work with naturally growing populations of bush lupine. [ 20 ] Top-down control has applications in agriculture and is the principle behind biological control, the idea that plants can benefit from the application of their herbivores’ enemies. While wasps and ladybugs are commonly associated with biological control, parasitic nematodes and predatory mites are also added to the soil to suppress pest populations and preserve crop plants. To use such biological control agents effectively, knowledge of the local soil food web is important. A community matrix model is a type of interaction web that uses differential equations to describe every link in the topological web. Using Lotka–Volterra equations, which describe predator-prey interactions, together with food web energetics data such as biomass and feeding rate, the strength of interactions between groups is calculated. [ 21 ] Community matrix models can also show how small changes affect the overall stability of the web. Mathematical modeling of food webs has raised the question of whether complex or simple food webs are more stable. Until the last decade, it was believed that soil food webs were relatively simple, with low degrees of connectance and omnivory. [ 12 ] These ideas stemmed from the mathematical models of May, which predicted that complexity destabilizes food webs. May used community matrices in which species were randomly linked with random interaction strengths to show that local stability decreases with complexity (measured as connectance), diversity, and average interaction strength among species. [ 22 ] The use of such random community matrices attracted much criticism. In other areas of ecology, it was realized that the food webs used to make these models were grossly oversimplified [ 23 ] and did not represent the complexity of real ecosystems. It also became clear that soil food webs did not conform to these predictions.
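May's random-matrix argument is straightforward to reproduce numerically. A minimal sketch in Python/NumPy, illustrating the idea above rather than modeling any particular soil food web (the self-regulation term of -1 on the diagonal is a conventional assumption):

    import numpy as np

    # Numerical illustration of May's random community-matrix argument.
    rng = np.random.default_rng(0)

    def max_real_eigenvalue(S: int, C: float, sigma: float) -> float:
        """Draw a random community matrix with S species, connectance C and
        interaction-strength standard deviation sigma, and return the largest
        real part among its eigenvalues (negative => locally stable)."""
        A = rng.normal(0.0, sigma, size=(S, S))  # random interaction strengths
        A *= rng.random((S, S)) < C              # keep each link with probability C
        np.fill_diagonal(A, -1.0)                # conventional self-regulation term
        return np.linalg.eigvals(A).real.max()

    # May's criterion predicts instability roughly when sigma * sqrt(S*C) > 1.
    for S in (10, 50, 200):
        lam = max_real_eigenvalue(S, C=0.3, sigma=0.4)
        print(f"S={S:3d}: sigma*sqrt(SC) = {0.4 * np.sqrt(S * 0.3):.2f}, "
              f"max Re(eigenvalue) = {lam:+.2f}")

Run with these parameters, the small web is typically stable while the larger, better-connected webs are not, which is exactly the complexity-destabilizes result described above.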
Soil ecologists discovered that omnivory in food webs was common, [ 24 ] and that food chains could be long and complex [ 8 ] and still remain resistant to disturbance by drying, freezing, and fumigation. [ 12 ] But why are complex food webs more stable? Many of the barriers to top-down trophic cascades also promote stability. Complex food webs may be more stable if the interaction strengths are weak, [ 22 ] and soil food webs appear to consist of many weak interactions and a few strong ones. [ 21 ] Donor-controlled food webs may be inherently more stable, because it is difficult for primary consumers to overtax their resources. [ 25 ] The structure of the soil also acts as a buffer, separating organisms and preventing strong interactions. [ 12 ] Many soil organisms, for example bacteria, can remain dormant through difficult times and reproduce quickly once conditions improve, making them resilient to disturbance. Stability of the system is reduced by the use of nitrogen-containing inorganic and organic fertilizers, which cause soil acidification. Despite their complexity, some interactions between species in the soil are not easily classified by food webs. Litter transformers, mutualists, and ecosystem engineers all have strong impacts on their communities that cannot be characterized as either top-down or bottom-up. Litter transformers, such as isopods, consume dead plants and excrete fecal pellets. While on the surface this may not seem impressive, the fecal pellets are moister and higher in nutrients than the surrounding soil, which favors colonization by bacteria and fungi. Decomposition of the fecal pellet by the microbes increases its nutrient value, and the isopod is able to re-ingest the pellets. When isopods consume nutrient-poor litter, the microbes enrich it for them, and isopods prevented from eating their own feces can die. [ 26 ] This mutualistic relationship has been called an "external rumen", similar to the mutualistic relationship between bacteria and cows. While the bacterial symbionts of cows live inside the rumen of the stomach, isopods depend on microbes outside their body. Ecosystem engineers, such as earthworms, modify their environment and create habitat for other, smaller organisms. Earthworms also stimulate microbial activity by increasing soil aeration and moisture, and by transporting litter into the ground where it becomes available to other soil fauna. [ 12 ] Fungi create a nutritional niche for other organisms by enriching dead wood, an extremely nutrient-poor food source. [ 27 ] This allows xylophages to develop and in turn affect the dead wood, contributing to wood decomposition and nutrient cycling in the forest floor. [ 28 ] For aboveground and aquatic food webs, the literature assumes that the most important interactions are competition and predation. While soil food webs fit these sorts of interactions well, future research needs to include more complex interactions such as mutualisms and habitat modification. While they cannot characterize all interactions, soil food webs remain a useful tool for describing ecosystems. The interactions between species in the soil and their effect on decomposition continue to be well studied, but much remains unknown about soil food web stability and how food webs change over time. [ 12 ] This knowledge is critical to understanding how food webs affect important qualities such as soil fertility.
https://en.wikipedia.org/wiki/Soil_food_web
Soil gases (the soil atmosphere [ 1 ]) are the gases found in the air space between soil components. The spaces between the solid soil particles, if they do not contain water, are filled with air. The primary soil gases are nitrogen, carbon dioxide, and oxygen. [ 2 ] Oxygen is critical because it allows for respiration of both plant roots and soil organisms. Other natural soil gases include nitric oxide, nitrous oxide, methane, and ammonia. [ 3 ] Some environmental contaminants below ground produce gas which diffuses through the soil, such as landfill wastes, mining activities, and contamination by petroleum hydrocarbons, which produce volatile organic compounds. [ 4 ] Gases fill soil pores in the soil structure as water drains or is removed from a soil pore by evaporation or root absorption. The network of pores within the soil aerates, or ventilates, the soil. This aeration network becomes blocked when water enters soil pores. Soil air and soil water are both highly dynamic components of soil, and the two are often inversely related. The composition of gases present in the soil's pores, referred to commonly as the soil atmosphere or atmosphere of the soil, is similar to that of the Earth's atmosphere. [ 5 ] Unlike the atmosphere, however, the composition of soil gas is far from static, because of the various chemical and biological processes taking place in the soil. [ 5 ] The resulting changes in composition from these processes can be characterized by their variation time (e.g., daily vs. seasonal). Despite this spatially and temporally dependent fluctuation, soil gases typically contain higher concentrations of carbon dioxide and water vapor than the atmosphere. [ 5 ] Furthermore, the concentrations of other gases, such as methane and nitrous oxide, are relatively small yet significant in determining greenhouse gas flux and anthropogenic impact on soils. [ 3 ] Gas molecules in soil are in continuous thermal motion according to the kinetic theory of gases, and molecules also collide with one another, a random walk process. In soil, a concentration gradient causes net movement of molecules from high concentration to low concentration, which produces movement of gas by diffusion. Numerically, this is described by Fick's law of diffusion (a short numerical sketch follows below). Soil gas migration, specifically that of hydrocarbon species with one to five carbons, can also be caused by microseepage. [ 6 ] The soil atmosphere's variable composition and constant motion can be attributed to processes such as diffusion, decomposition, and, in some regions of the world, thawing, among others. Diffusive exchange of soil air with the atmosphere causes the gradual replacement of soil gases with atmospheric air. [ 5 ] More significantly, variation in soil gas composition due to seasonal, or even daily, temperature and/or moisture change can influence the rate of soil respiration. [ 7 ] According to the USDA, soil respiration refers to the quantity of carbon dioxide released from soil. This excess carbon dioxide is created by the decomposition of organic material by microbial organisms in the presence of oxygen. [ 7 ] Given the importance of both of these gases (carbon dioxide and oxygen) to soil life, significant fluctuation of carbon dioxide and oxygen can result in changes in the rate of decay, [ 7 ] while changes in microbial abundance can in turn influence soil gas composition.
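The diffusive transport described above can be made concrete with Fick's first law. The following is a minimal sketch; all numerical values are assumed for illustration, not measured data.

```python
# Minimal sketch of Fick's first law for a soil gas; all values below are
# assumed illustrative numbers. Depth z is taken positive downward, so a
# negative flux points upward, toward the soil surface.

D_eff = 4.0e-6      # effective CO2 diffusion coefficient in soil air (m^2/s), assumed
c_surface = 0.018   # CO2 concentration just below the surface (mol/m^3), assumed
c_depth = 0.045     # CO2 concentration at 0.3 m depth (mol/m^3), assumed
dz = 0.3            # depth interval (m)

dC_dz = (c_depth - c_surface) / dz   # concentration gradient (mol/m^3 per m)
J = -D_eff * dC_dz                   # Fick's first law: J = -D * dC/dz
print(f"diffusive CO2 flux: {J:.2e} mol m^-2 s^-1 (negative = upward)")
```

Because respiration keeps carbon dioxide concentrations higher at depth than at the surface, the computed flux is negative in this sign convention, representing the steady diffusive escape of CO2 to the atmosphere.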
In regions of the world where freezing of soils or drought is common, soil thawing and rewetting due to seasonal or meteorological changes influence soil gas flux. [ 3 ] Both processes hydrate the soil and increase nutrient availability, leading to an increase in microbial activity. [ 3 ] This results in greater soil respiration and influences the composition of soil gases. [ 7 ] [ 3 ] Soil gases have been used in numerous scientific studies to explore topics such as microseepage, [ 6 ] earthquakes, [ 8 ] and gaseous exchange between the soil and the atmosphere. [ 9 ] [ 3 ] Microseepage refers to the limited release of hydrocarbons at the soil surface and can be used to prospect for petroleum deposits, based on the assumption that hydrocarbons migrate vertically to the soil surface in small quantities. [ 6 ] Migration of soil gases, specifically radon, can also be examined as an earthquake precursor. [ 8 ] Furthermore, during processes such as soil thawing and rewetting, large sudden changes in soil respiration can cause increased flux of soil gases such as carbon dioxide and methane, which are greenhouse gases. [ 3 ] These fluxes and interactions between soil gases and atmospheric air can further be analyzed by distance from the soil surface. [ 9 ]
https://en.wikipedia.org/wiki/Soil_gas
Soil liquefaction occurs when a cohesionless, saturated or partially saturated soil substantially loses strength and stiffness in response to an applied stress, such as shaking during an earthquake or another sudden change in stress condition, so that material that is ordinarily a solid behaves like a liquid. In soil mechanics, the term "liquefied" was first used by Allen Hazen [ 1 ] in reference to the 1918 failure of the Calaveras Dam in California. He described the mechanism of flow liquefaction of the embankment dam as follows: If the pressure of the water in the pores is great enough to carry all the load, it will have the effect of holding the particles apart and of producing a condition that is practically equivalent to that of quicksand ... the initial movement of some part of the material might result in accumulating pressure, first on one point, and then on another, successively, as the early points of concentration were liquefied. The phenomenon is most often observed in saturated, loose (low-density or uncompacted), sandy soils. This is because loose sand has a tendency to compress when a load is applied. Dense sands, by contrast, tend to expand in volume or 'dilate'. If the soil is saturated by water, a condition that often exists when the soil is below the water table or sea level, then water fills the gaps between soil grains ('pore spaces'). In response to the soil compressing, the pore water pressure increases and the water attempts to flow out from the soil to zones of low pressure (usually upward towards the ground surface). However, if the loading is rapidly applied and large enough, or is repeated many times (e.g., earthquake shaking, storm wave loading), such that the water does not flow out before the next cycle of load is applied, the water pressures may build to the extent that they exceed the contact stresses between the grains of soil that keep them in contact. These contacts between grains are the means by which the weight from buildings and overlying soil layers is transferred from the ground surface to layers of soil or rock at greater depths. This loss of soil structure causes the soil to lose its strength (the ability to transfer shear stress), and it may be observed to flow like a liquid (hence 'liquefaction'). Although the effects of soil liquefaction have long been understood, engineers took more notice after the 1964 Alaska earthquake and the 1964 Niigata earthquake. It was a major cause of the destruction produced in San Francisco's Marina District during the 1989 Loma Prieta earthquake, and in the Port of Kobe during the 1995 Great Hanshin earthquake. More recently, soil liquefaction was largely responsible for extensive damage to residential properties in the eastern suburbs and satellite townships of Christchurch during the 2010 Canterbury earthquake [ 2 ] and more extensively again following the Christchurch earthquakes of early and mid-2011. [ 3 ] On 28 September 2018, a magnitude 7.5 earthquake struck the Central Sulawesi province of Indonesia. The resulting soil liquefaction buried the suburb of Balaroa and the village of Petobo 3 metres (9.8 ft) deep in mud. The government of Indonesia is considering designating the two buried neighborhoods of Balaroa and Petobo as mass graves. [ 4 ] The building codes in many countries require engineers to consider the effects of soil liquefaction in the design of new buildings and infrastructure such as bridges, embankment dams, and retaining structures.
[ 5 ] [ 6 ] [ 7 ] Soil liquefaction occurs when the effective stress (shear strength) of soil is reduced to essentially zero. This may be initiated by either monotonic loading (a single, sudden change in stress; examples include an increase in load on an embankment or a sudden loss of toe support) or cyclic loading (repeated changes in stress condition; examples include wave loading and earthquake shaking). In both cases, a soil in a saturated, loose state, and one which may generate significant pore water pressure upon a change in load, is the most likely to liquefy. This is because loose soil has the tendency to compress when sheared, generating large excess pore water pressure as load is transferred from the soil skeleton to the adjacent pore water during undrained loading. As pore water pressure rises, a progressive loss of strength of the soil occurs as effective stress is reduced. Liquefaction is more likely to occur in sandy or non-plastic silty soils but may in rare cases occur in gravels and clays (see quick clay). A 'flow failure' may initiate if the strength of the soil is reduced below the stresses required to maintain the equilibrium of a slope or the footing of a structure. This can occur due to monotonic or cyclic loading and can be sudden and catastrophic. A historical example is the Aberfan disaster. Casagrande [ 8 ] referred to this type of phenomenon as 'flow liquefaction', although a state of zero effective stress is not required for it to occur. 'Cyclic liquefaction' is the state of soil when large shear strains have accumulated in response to cyclic loading. A typical reference strain for the approximate occurrence of zero effective stress is 5% double-amplitude shear strain. This is a soil-test-based definition, usually performed via cyclic triaxial, cyclic direct simple shear, or cyclic torsional shear type apparatus. These tests are performed to determine a soil's resistance to liquefaction by observing the number of cycles of loading at a particular shear stress amplitude required to induce 'failure', where failure is defined by the aforementioned shear strain criterion. The term 'cyclic mobility' refers to the mechanism of progressive reduction of effective stress due to cyclic loading. This may occur in all soil types, including dense soils. However, on reaching a state of zero effective stress, such soils immediately dilate and regain strength; thus, shear strains are significantly smaller than in a true state of soil liquefaction. Liquefaction is more likely to occur in loose to moderately saturated granular soils with poor drainage, such as silty sands or sands and gravels containing impermeable sediments. [ 9 ] [ 10 ] During wave loading, usually cyclic undrained loading, e.g. seismic loading, loose sands tend to decrease in volume, which produces an increase in their pore water pressures and consequently a decrease in shear strength, i.e. a reduction in effective stress. Deposits most susceptible to liquefaction are young (Holocene-age, deposited within the last 10,000 years) sands and silts of similar grain size (well-sorted), in beds metres thick, and saturated with water. Such deposits are often found along stream beds, beaches, dunes, and areas where windblown silt (loess) and sand have accumulated. Examples of soil liquefaction include quicksand, quick clay, turbidity currents, and earthquake-induced liquefaction.
Depending on the initial void ratio, the soil material can respond to loading in either a strain-softening or a strain-hardening manner. Strain-softening soils, e.g., loose sands, can be triggered to collapse, either monotonically or cyclically, if the static shear stress is greater than the ultimate or steady-state shear strength of the soil. In this case flow liquefaction occurs, in which the soil deforms at a low constant residual shear stress. If the soil strain-hardens, as in moderately dense to dense sand, flow liquefaction will generally not occur. However, cyclic softening can occur due to cyclic undrained loading, e.g., earthquake loading. Deformation during cyclic loading depends on the density of the soil, the magnitude and duration of the cyclic loading, and the amount of shear stress reversal. If stress reversal occurs, the effective shear stress could reach zero, allowing cyclic liquefaction to take place. If stress reversal does not occur, zero effective stress cannot occur, and cyclic mobility takes place instead. [ 11 ] The resistance of a cohesionless soil to liquefaction depends on the density of the soil, the confining stresses, the soil structure (fabric, age, and cementation), the magnitude and duration of the cyclic loading, and the extent to which shear stress reversal occurs. [ 12 ] Three parameters are needed to assess liquefaction potential using the simplified empirical method; a sketch of the cyclic stress ratio calculation central to this method appears below. The interaction between the solid skeleton and pore fluid flow has been considered by many researchers in modeling the material softening associated with the liquefaction phenomenon. The dynamic performance of saturated porous media depends on the soil-pore fluid interaction. When saturated porous media are subjected to strong ground shaking, pore fluid movement relative to the solid skeleton is induced. The transient movement of pore fluid can significantly affect the redistribution of pore water pressure, which is generally governed by the loading rate, soil permeability, pressure gradient, and boundary conditions. It is well known that for a sufficiently high seepage velocity, the governing flow law in porous media is nonlinear and does not follow Darcy's law. This fact has recently been considered in studies of soil-pore fluid interaction for liquefaction modeling, and a fully explicit dynamic finite element method has been developed for a turbulent flow law. The governing equations have been expressed for saturated porous media based on an extension of the Biot formulation. The elastoplastic behavior of soil under earthquake loading has been simulated using a generalized plasticity theory composed of a yield surface along with a non-associated flow rule. [ 18 ] Pressures generated during large earthquakes can force underground water and liquefied sand to the surface. This can be observed at the surface as effects known alternatively as "sand boils", "sand blows", or "sand volcanoes". Such earthquake ground deformations can be categorized as primary deformation if located on or close to the ruptured fault, or distributed deformation if located at a considerable distance from the ruptured fault. [ 19 ] [ 20 ] The other common observation is land instability: cracking and movement of the ground downslope or towards unsupported margins of rivers, streams, or the coast. The failure of ground in this manner is called 'lateral spreading' and may occur on very shallow slopes with angles of only 1 or 2 degrees from the horizontal.
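The "simplified empirical method" referred to above is widely associated with the Seed and Idriss (1971) simplified procedure. The following is a minimal sketch of its cyclic stress ratio (CSR) calculation; all soil and shaking values are assumed for illustration, and this is not a design calculation.

```python
# Minimal sketch of the cyclic stress ratio (CSR) from the Seed-Idriss (1971)
# simplified procedure. All soil and shaking values below are assumed for
# illustration; the computed CSR would be compared against the soil's cyclic
# resistance ratio (CRR) in an actual assessment.

a_max_over_g = 0.25   # peak ground acceleration / g, assumed
depth = 6.0           # depth of interest (m)
gamma = 18.0          # total unit weight of soil (kN/m^3), assumed
gamma_w = 9.81        # unit weight of water (kN/m^3)
water_table = 2.0     # depth to the water table (m), assumed

sigma_v = gamma * depth                                  # total vertical stress (kPa)
sigma_v_eff = sigma_v - gamma_w * (depth - water_table)  # effective vertical stress (kPa)
r_d = 1.0 - 0.00765 * depth                              # stress reduction factor (z <= 9.15 m)

csr = 0.65 * a_max_over_g * (sigma_v / sigma_v_eff) * r_d
print(f"CSR at {depth} m depth: {csr:.3f}")
```

The 0.65 factor converts the peak cyclic stress to a representative uniform stress amplitude, and the stress reduction factor r_d accounts for the soil column's flexibility, two choices that are standard in this procedure.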
One positive aspect of soil liquefaction is the tendency for the effects of earthquake shaking to be significantly damped (reduced) for the remainder of the earthquake. This is because liquids do not support shear stress, so once the soil liquefies due to shaking, subsequent earthquake shaking (transferred through the ground by shear waves) is not transferred to buildings at the ground surface. Studies of liquefaction features left by prehistoric earthquakes, called paleoliquefaction or paleoseismology, can reveal information about earthquakes that occurred before records were kept or accurate measurements could be taken. [ 21 ] Soil liquefaction induced by earthquake shaking is a major contributor to urban seismic risk. The effects of soil liquefaction on the built environment can be extremely damaging. Buildings whose foundations bear directly on sand which liquefies will experience a sudden loss of support, resulting in drastic and irregular settlement of the building. This causes structural damage, including cracking of foundations and damage to the building structure, or may leave the structure unserviceable even without structural damage. Where a thin crust of non-liquefied soil exists between building foundation and liquefied soil, a 'punching shear' type foundation failure may occur. Irregular settlement may break underground utility lines. The upward pressure applied by the movement of liquefied soil through the crust layer can crack weak foundation slabs and enter buildings through service ducts, and may allow water to damage building contents and electrical services. Bridges and large buildings constructed on pile foundations may lose support from the adjacent soil and buckle or come to rest at a tilt. Sloping ground and ground next to rivers and lakes may slide on a liquefied soil layer (termed 'lateral spreading'), [ 22 ] opening large ground fissures, and can cause significant damage to buildings, bridges, roads, and services such as water, natural gas, sewerage, power, and telecommunications installed in the affected ground. Buried tanks and manholes may float in the liquefied soil due to buoyancy. [ 22 ] Earth embankments such as flood levees and earth dams may lose stability or collapse if the material comprising the embankment or its foundation liquefies. Over geological time, liquefaction of soil material due to earthquakes could provide a dense parent material in which a fragipan may develop through pedogenesis. [ 23 ] Mitigation methods have been devised by earthquake engineers and include various soil compaction techniques such as vibro compaction (compaction of the soil by depth vibrators), dynamic compaction, and vibro stone columns. [ 24 ] These methods densify soil and enable buildings to avoid soil liquefaction. [ 25 ] The risk to existing buildings can be mitigated by injecting grout into the soil to stabilize the layer of soil that is subject to liquefaction. Another method, induced partial saturation (IPS), in which the degree of saturation of the soil is deliberately decreased, is now practicable to apply on a larger scale. Quicksand forms when water saturates an area of loose sand and the sand is agitated. When the water trapped in the batch of sand cannot escape, it creates liquefied soil that can no longer resist force. Quicksand can be formed by standing or upward-flowing underground water (as from an underground spring), or by earthquakes.
In the case of flowing underground water, the force of the water flow opposes the force of gravity, causing the granules of sand to be more buoyant. In the case of earthquakes, the shaking force can increase the pressure of shallow groundwater, liquefying sand and silt deposits. In both cases, the liquefied surface loses strength, causing buildings or other objects on that surface to sink or fall over. The saturated sediment may appear quite solid until a change in pressure or a shock initiates the liquefaction, causing the sand to form a suspension with each grain surrounded by a thin film of water. This cushioning gives quicksand, and other liquefied sediments, a spongy, fluid-like texture. Objects in the liquefied sand sink to the level at which their weight equals the weight of the displaced sand/water mix, at which point they float due to buoyancy (a worked example follows below). Quick clay, known as Leda clay in Canada, is a water-saturated gel, which in its solid form resembles highly sensitive clay. This clay has a tendency to change from a relatively stiff condition to a liquid mass when it is disturbed. This change in appearance from solid to liquid is a process known as spontaneous liquefaction. The clay retains a solid structure despite its high water content (up to 80% by volume), because surface tension holds water-coated flakes of clay together. When the structure is broken by a shock or sufficient shear, it enters a fluid state. Quick clay is found only in northern countries such as Russia, Canada, the U.S. (Alaska), Norway, Sweden, and Finland, which were glaciated during the Pleistocene epoch. Quick clay has been the underlying cause of many deadly landslides. In Canada alone, it has been associated with more than 250 mapped landslides. Some of these are ancient, and may have been triggered by earthquakes. [ 26 ] Submarine landslides are turbidity currents and consist of water-saturated sediments flowing downslope. An example occurred during the 1929 Grand Banks earthquake that struck the continental slope off the coast of Newfoundland. Minutes later, transatlantic telephone cables began breaking sequentially, further and further downslope, away from the epicenter. Twelve cables were snapped in a total of 28 places. The exact times and locations were recorded for each break. Investigators suggested that a 60-mile-per-hour (100 km/h) submarine landslide or turbidity current of water-saturated sediments swept 400 miles (600 km) down the continental slope from the earthquake's epicenter, snapping the cables as it passed. [ 27 ]
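The flotation condition described above (an object comes to rest where its weight equals the weight of displaced liquefied material) can be illustrated with a small calculation; the suspension density below is an assumed example value.

```python
# Minimal sketch of the flotation condition for objects in a liquefied
# sand/water suspension. The suspension density is an assumed example value.
# At equilibrium, submerged_fraction = rho_object / rho_suspension.

rho_suspension = 1900.0   # density of liquefied sand/water mix (kg/m^3), assumed
rho_person = 1000.0       # approximate human body density (kg/m^3)
rho_concrete = 2400.0     # typical concrete density (kg/m^3)

for name, rho in (("person", rho_person), ("concrete block", rho_concrete)):
    fraction = rho / rho_suspension
    if fraction < 1.0:
        print(f"{name}: floats with {fraction:.0%} of its volume submerged")
    else:
        print(f"{name}: denser than the suspension, so it sinks completely")
```

Because a liquefied sand/water mixture is considerably denser than a human body, a person sinks only partway into quicksand under this condition, whereas dense objects such as concrete sink entirely.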
https://en.wikipedia.org/wiki/Soil_liquefaction
Soil mesofauna are invertebrates between 0.1 mm and 2 mm in size [ 1 ] which live in the soil or in a leaf litter layer on the soil surface. Members of this group include nematodes, mites, springtails (collembola), proturans, pauropods, rotifers, earthworms, tardigrades, small spiders, pseudoscorpions, opiliones (harvestmen), enchytraeids such as potworms, insect larvae, small isopods, and myriapods. [ 2 ] They play an important part in the carbon cycle and are likely to be adversely affected by climate change. [ 3 ] Soil mesofauna feed on a wide range of materials, including other soil animals, microorganisms, animal material, live or decaying plant material, fungi, algae, lichen, spores, and pollen. [ 4 ] Species that feed on decaying plant material open drainage and aeration channels in the soil by removing roots. The fecal material of soil mesofauna remains in channels where it can be broken down by smaller animals. Soil mesofauna do not have the ability to reshape the soil and are therefore forced to use the existing pore space, cavities, or channels in the soil for locomotion. Soil macrofauna (earthworms, termites, ants, and some insect larvae) can create pore spaces and hence change soil porosity, [ 5 ] one aspect of soil morphology. Mesofauna contribute to habitable pore spaces but account for only a small portion of total pore space. Clay soils have much smaller particles, which reduces pore space, and organic material can fill small pores. Grazing of bacteria by bacterivorous nematodes and flagellates, soil mesofauna living in the pores, may considerably increase nitrogen mineralization, because the bacteria are broken down and their nitrogen is released. [ 6 ] In agricultural soils, most of the biological activity occurs in the top 20 centimetres (7.9 in), the soil biomantle or plow layer, while in non-cultivated soils most biological activity occurs in the top 5 centimetres (2.0 in) of soil. The top layer is the organic horizon or O horizon, the zone of accumulation of animal residues and recognizable plant material. Animal residues are higher in nitrogen than plant residues relative to the total carbon in the residue. [ 7 ] Some nitrogen fixation is carried out by bacteria which consume the amino acids and sugars exuded by plant roots. [ 8 ] However, approximately 30% of nitrogen re-mineralization is contributed by soil fauna in agricultural and natural ecosystems. [ 9 ] Macro- and mesofauna break down plant residues [ 10 ] [ 11 ] to release nitrogen as part of nutrient cycling. [ 12 ] Soil mesofauna reproduce in a variety of ways. Non-arthropod species such as nematodes and potworms can reproduce both sexually and asexually: nematodes through parthenogenesis, which produces only females, and potworms through whole-body regeneration. Soil rotifers, another non-arthropod group, are exclusively female and reproduce from unfertilized eggs. Arthropod species of soil mesofauna such as thrips, springtails, and pauropods reproduce solely by parthenogenesis. Diplurans and mites reproduce sexually, but some species of mites can also reproduce by parthenogenesis. Some species of soil mesofauna are susceptible to soil and vegetation changes because they rely on soil fertility and plant biomass for food and suitable living conditions. The changes can affect some species' ability to reproduce, but because soil mesofauna are so varied, the changes will not affect all species.
For mesofauna such as springtails, temperature and soil moisture influence the reproduction and growth rates of individuals.
https://en.wikipedia.org/wiki/Soil_mesofauna
Soil microbiology is the study of microorganisms in soil, their functions, and how they affect soil properties. [ 1 ] It is believed that between two and four billion years ago, the first ancient bacteria and microorganisms arose in Earth's oceans. These bacteria could fix nitrogen; in time they multiplied, and as a result they released oxygen into the atmosphere. [ 2 ] [ 3 ] This led to more advanced microorganisms, [ 4 ] [ 5 ] which are important because they affect soil structure and fertility. Soil microorganisms can be classified as bacteria, actinomycetes, fungi, algae, and protozoa. Each of these groups has characteristics that define them and their functions in soil. [ 6 ] [ 7 ] Up to 10 billion bacterial cells inhabit each gram of soil in and around plant roots, a region known as the rhizosphere. In 2011, a team detected more than 33,000 bacterial and archaeal species on sugar beet roots. [ 8 ] The composition of the rhizobiome can change rapidly in response to changes in the surrounding environment. Bacteria and archaea, the smallest organisms in soil apart from viruses, are prokaryotic. They are the most abundant microorganisms in the soil and serve many important purposes, including nitrogen fixation. [ 9 ] Some bacteria can colonize minerals in the soil and help influence the weathering and breakdown of these minerals. The overall composition of the soil can determine the abundance of bacteria growing in it; the more minerals found in an area, the higher the abundance of bacteria. These bacteria also form aggregates, which increases the overall health of the soil. [ 10 ] One of the most distinctive features of bacteria is their biochemical versatility. [ 11 ] The bacterial genus Pseudomonas can metabolize a wide range of chemicals and fertilizers. In contrast, the genus Nitrobacter can derive its energy only by oxidizing nitrite into nitrate. The genus Clostridium is another example of bacterial versatility because, unlike most species, it can grow in the absence of oxygen, respiring anaerobically. Several species of Pseudomonas, such as Pseudomonas aeruginosa, are able to respire both aerobically and anaerobically, using nitrate as the terminal electron acceptor. [ 9 ] Nitrogen is often the most limiting nutrient in soil and water. Bacteria are responsible for the process of nitrogen fixation, the conversion of atmospheric nitrogen into nitrogen-containing compounds (such as ammonia) that can be used by plants. Autotrophic bacteria derive their energy by making their own food through oxidation, like the Nitrobacter species, rather than by feeding on plants or other organisms. These bacteria are responsible for nitrogen fixation. Autotrophic bacteria are few in number compared with heterotrophic bacteria (which, in contrast to autotrophs, acquire energy by consuming plants or other microorganisms), but they are very important, because almost every plant and organism requires nitrogen in some way. [ 6 ] Actinomycetes are soil microorganisms. They are a type of bacteria, but they share some characteristics with fungi, most likely a result of convergent evolution due to a common habitat and lifestyle. [ 12 ] Although they are members of the Bacteria kingdom, many actinomycetes share characteristics with fungi, including shape and branching properties, spore formation, and secondary metabolite production.
One of the most notable characteristics of the actinomycetes is their ability to produce antibiotics. Streptomycin, neomycin, erythromycin, and tetracycline are only a few examples of these antibiotics. Streptomycin is used to treat tuberculosis and infections caused by certain bacteria, and neomycin is used to reduce the risk of bacterial infection during surgery. Erythromycin is used to treat certain infections caused by bacteria, such as bronchitis, pertussis (whooping cough), pneumonia, and ear, intestine, lung, urinary tract, and skin infections. Fungi are abundant in soil, though bacteria are more abundant. Fungi are important in the soil as food sources for other, larger organisms, as pathogens, as beneficial symbiotic partners of plants or other organisms, and for soil health. Fungi can be divided into species based primarily on the size, shape, and color of their reproductive spores. Most of the environmental factors that influence the growth and distribution of bacteria and actinomycetes also influence fungi. Both the quality and the quantity of organic matter in the soil correlate directly with the growth of fungi, because most fungi consume organic matter for nutrition. Compared with bacteria, fungi are relatively favored by acidic soils. Fungi also grow well in dry, arid soils, because fungi are aerobic, or dependent on oxygen, and the higher the moisture content in the soil, the less oxygen is present for them. Algae can make their own nutrients through photosynthesis. Photosynthesis converts light energy to chemical energy that can be stored as nutrients. For algae to grow, they must be exposed to light, because photosynthesis requires light, so algae are typically distributed evenly wherever sunlight and moderate moisture are available. Algae do not have to be directly exposed to the Sun but can live below the soil surface given uniform temperature and moisture conditions. Algae are also capable of performing nitrogen fixation. [ 6 ] Algae can be divided into three main groups: the Cyanophyceae, the Chlorophyceae, and the Bacillariophyceae. The Cyanophyceae contain chlorophyll, the molecule that absorbs sunlight and uses that energy to make carbohydrates from carbon dioxide and water, as well as pigments that give them a blue-green to violet color. The Chlorophyceae usually have only chlorophyll, which makes them green, while the Bacillariophyceae contain chlorophyll as well as pigments that make the algae brown in color. [ 6 ] Blue-green algae, or Cyanophyceae, are responsible for nitrogen fixation. The amount of nitrogen they fix depends more on physiological and environmental factors than on the organism's abilities. These factors include the intensity of sunlight, the concentration of inorganic and organic nitrogen sources, and ambient temperature and stability. [ 12 ] Protozoa are eukaryotic organisms that were among the first microorganisms to reproduce sexually, a significant evolutionary step from the duplication of spores on which many other soil microorganisms depend. Protozoa can be divided into three categories: flagellates, amoebae, and ciliates. [ 12 ] Flagellates are the smallest members of the protozoa group and can be divided further based on whether they can participate in photosynthesis. Flagellates lacking chlorophyll are not capable of photosynthesis, because chlorophyll is the green pigment that absorbs sunlight. These flagellates are found mostly in soil.
Flagellates that contain chlorophyll typically occur in aquatic conditions. Flagellates can be distinguished by their flagella, which are their means of movement. Some have several flagella, while other species have only one, resembling a long branch or appendage. [ 12 ] Amoebae are larger than flagellates and move in a different way. Amoebae can be distinguished from other protozoa by their slug-like properties and their pseudopodia. A pseudopodium or "false foot" is a temporary protrusion from the body of the amoeba that helps pull it along surfaces for movement or helps to pull in food. The amoeba does not have permanent appendages, and the pseudopodium has a more slime-like consistency than a flagellum. [ 12 ] Ciliates are the largest of the protozoa group and move by means of short, numerous cilia that produce beating movements. Cilia resemble small, short hairs and can move in different directions to propel the organism, giving it more mobility than flagellates or amoebae. [ 12 ] The plant hormones salicylic acid, jasmonic acid, and ethylene are key regulators of innate immunity in plant leaves. Mutants impaired in salicylic acid synthesis and signaling are hypersusceptible to microbes that colonize the host plant to obtain nutrients, whereas mutants impaired in jasmonic acid and ethylene synthesis and signaling are hypersusceptible to herbivorous insects and to microbes that kill host cells to extract nutrients. The challenge of modulating a community of diverse microbes in plant roots is more involved than that of clearing a few pathogens from inside a plant leaf. Consequently, regulating root microbiome composition may require immune mechanisms other than those that control foliar microbes. [ 14 ] A 2015 study of a panel of Arabidopsis hormone mutants, impaired in the synthesis or signaling of individual plant hormones or combinations of them, analyzed the microbial community both in the soil adjacent to the root and in bacteria living within root tissue. Changes in salicylic acid signaling stimulated a reproducible shift in the relative abundance of bacterial phyla in the endophytic compartment. These changes were consistent across many families within the affected phyla, indicating that salicylic acid may be a key regulator of microbiome community structure. [ 14 ] Classical plant defense hormones also function in plant growth, metabolism, and abiotic stress responses, obscuring the precise mechanism by which salicylic acid regulates this microbiome. [ 14 ] During plant domestication, humans selected for traits related to plant improvement, but not for plant associations with a beneficial microbiome. Even minor changes in the abundance of certain bacteria can have a major effect on plant defenses and physiology, with only minimal effects on overall microbiome structure. [ 14 ] Most soil enzymes are produced by bacteria, fungi, and plant roots. Their biochemical activity is a factor in both the stabilization and the degradation of soil structure. Enzyme activity is higher in plots fertilized with manure than in plots receiving inorganic fertilizers. The microflora of the rhizosphere may increase enzyme activity there. [ 15 ] Microbes can make nutrients and minerals in the soil available to plants, produce hormones that spur growth, stimulate the plant immune system, and trigger or dampen stress responses. In general, a more diverse soil microbiome results in fewer plant diseases and higher yield.
Farming can destroy a soil's rhizobiome (microbial ecosystem) through the use of soil amendments such as fertilizer and pesticide without compensating for their effects. By contrast, healthy soil can increase fertility in multiple ways, including supplying nutrients such as nitrogen and protecting against pests and disease, while reducing the need for water and other inputs. Some approaches may even allow agriculture in soils that were never considered viable. [ 8 ] The group of bacteria called rhizobia live inside the roots of legumes and fix nitrogen from the air into a biologically useful form. [ 8 ] Mycorrhizae, or root fungi, form a dense network of thin filaments that reach far into the soil, acting as extensions of the plant roots they live on or in. These fungi facilitate the uptake of water and a wide range of nutrients. [ 8 ] Up to 30% of the carbon fixed by plants is excreted from the roots as so-called exudates, including sugars, amino acids, flavonoids, aliphatic acids, and fatty acids, which attract and feed beneficial microbial species while repelling and killing harmful ones. [ 8 ] Almost all registered microbes are biopesticides, generating some $1 billion annually, less than 1% of the chemical amendment market, estimated at $110 billion. Some microbes have been marketed for decades, such as Trichoderma fungi, which suppress other, pathogenic fungi, and the caterpillar killer Bacillus thuringiensis. Serenade is a biopesticide containing a Bacillus subtilis strain that has antifungal and antibacterial properties and promotes plant growth. It can be applied in liquid form to plants and to soil to fight a range of pathogens, and it has found acceptance in both conventional and organic agriculture. Agrochemical companies such as Bayer have begun investing in the technology. In 2012, Bayer bought AgraQuest for $425 million. Its €10 million annual research budget funds field tests of dozens of new fungi and bacteria intended to replace chemical pesticides or serve as biostimulants to promote crop health and growth. Novozymes, a company developing microbial fertilizers and pesticides, forged an alliance with Monsanto. Novozymes invested in a biofertilizer containing the soil fungus Penicillium bilaiae and a bioinsecticide that contains the fungus Metarhizium anisopliae. In 2014, Syngenta and BASF acquired companies developing microbial products, as did DuPont in 2015. [ 8 ] A 2007 study showed that a complex symbiosis with fungi and viruses makes it possible for a grass called Dichanthelium lanuginosum to thrive in geothermal soils in Yellowstone National Park, where temperatures reach 60 °C (140 °F). Products based on such symbionts, introduced in the US market in 2014 for corn and rice, trigger an adaptive stress response. [ 8 ] In both the US and Europe, companies have to provide regulatory authorities with evidence that both the individual strains and the product as a whole are safe, leading many existing products to label themselves "biostimulants" instead of "biopesticides". [ 8 ] When selecting a bacterium for disease control, its other effects must also be considered. Some suppressive bacteria perform the opposite of nitrogen fixation (see the discussion of nitrogen fixation above), making nitrogen unavailable. Stevens et al. (1998) found that bacterial denitrification and dissimilatory nitrate reduction to ammonium occur especially at high pH. [ 16 ] A fungus-like unicellular organism named Phytophthora infestans, responsible for potato blight and other crop diseases, has caused famines throughout history.
Other fungi and bacteria cause the decay of roots and leaves. [ 8 ] Many strains that seemed promising in the lab failed to prove effective in the field because of soil, climate, and ecosystem effects, leading companies to skip the lab phase and emphasize field tests. [ 8 ] Populations of beneficial microbes can also diminish over time. Serenade stimulates a high initial B. subtilis density, but levels decrease because the bacteria lack a defensible niche. One way to compensate is to use multiple collaborating strains. [ 8 ] Fertilizers deplete soil of organic matter and trace elements, cause salination, and suppress mycorrhizae; they can also turn symbiotic bacteria into competitors. [ 8 ] A pilot project in Europe used a plow to slightly loosen and ridge the soil. The researchers planted oats and vetch, which attract nitrogen-fixing bacteria, and planted small olive trees to boost microbial diversity. They split an unirrigated 100-hectare field into three zones: one (zone A) treated with chemical fertilizer and pesticides, and the other two with different amounts of an organic biofertilizer consisting of fermented grape leftovers and a variety of bacteria and fungi, along with four types of mycorrhiza spores. [ 8 ] The crops that had received the most organic fertilizer reached nearly twice the height of those in zone A and were inches taller than those in zone C, the zone receiving less of the biofertilizer. The yield of that section equaled that of irrigated crops, whereas the yield of the conventionally treated zone was negligible. The mycorrhiza had penetrated the rock by excreting acids, allowing plant roots to reach almost 2 meters into the rocky soil and reach groundwater. [ 8 ]
https://en.wikipedia.org/wiki/Soil_microbiology
Soil moisture sensors measure the volumetric water content in soil. [ 1 ] Since the direct gravimetric measurement of free soil moisture requires removing, drying, and weighing a sample, soil moisture sensors measure the volumetric water content indirectly by using some other property of the soil, such as electrical resistance, dielectric constant, or interaction with neutrons, as a proxy for the moisture content. The relation between the measured property and soil moisture must be calibrated and may vary depending on environmental factors such as soil type, temperature, or electrical conductivity. Reflected microwave radiation is affected by soil moisture and is used for remote sensing in hydrology and agriculture. Portable probe instruments can be used by farmers or gardeners. Soil moisture sensors typically refer to sensors that estimate volumetric water content. Another class of sensors measures a different property of moisture in soils, called water potential; these sensors are usually referred to as soil water potential sensors and include tensiometers and gypsum blocks. Technologies commonly used to measure volumetric water content indirectly include those based on electrical resistance, dielectric permittivity, and neutron interaction, as noted above; a dielectric-based calibration is sketched below. Measuring soil moisture is important in agricultural applications to help farmers manage their irrigation systems more efficiently. Knowing the exact soil moisture conditions in their fields, farmers can not only generally use less water to grow a crop, but also increase yields and crop quality through improved management of soil moisture during critical plant growth stages. In urban and suburban areas, soil moisture sensors are used in landscapes and residential lawns to interface with an irrigation controller. Connecting a soil moisture sensor to a simple irrigation clock converts it into a "smart" irrigation controller that prevents irrigation cycles when the soil is already wet, e.g. following a recent rainfall event. [ 4 ] Golf courses use soil moisture sensors to increase the efficiency of their irrigation systems, preventing over-watering and leaching of fertilizers and other chemicals into the ground. Soil moisture sensors are used in numerous research applications, e.g., in agricultural science and horticulture (including irrigation planning), climate research, and environmental science (including solute transport studies), and as auxiliary sensors for soil respiration measurements. [ 5 ] Relatively cheap and simple devices that do not require a power source are available for checking whether plants have sufficient moisture to thrive. After a probe is inserted into the soil for approximately 60 seconds, a meter indicates whether the soil is too dry, moist, or wet for plants.
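As an example of the calibration step discussed above, the sketch below converts an apparent dielectric permittivity reading to volumetric water content using the widely cited Topp et al. (1980) empirical equation; the sensor reading is an assumed example value, and real probes require soil-specific calibration.

```python
# Minimal sketch: converting a dielectric-permittivity reading to volumetric
# water content with the Topp et al. (1980) empirical calibration. The sensor
# reading below is an assumed example; real sensors need soil-specific
# calibration, as noted in the text above.

def topp_vwc(epsilon: float) -> float:
    """Volumetric water content (m^3/m^3) from apparent dielectric permittivity."""
    return (-5.3e-2
            + 2.92e-2 * epsilon
            - 5.5e-4 * epsilon**2
            + 4.3e-6 * epsilon**3)

epsilon_reading = 20.0             # apparent permittivity from a TDR/FDR probe, assumed
theta = topp_vwc(epsilon_reading)
print(f"volumetric water content: {theta:.3f} m^3/m^3")  # about 0.35 for this reading
```

The cubic form works well across many mineral soils because the permittivity of water (about 80) dominates that of dry soil solids (about 4) and air (1), which is the physical basis of dielectric soil moisture sensing.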
https://en.wikipedia.org/wiki/Soil_moisture_sensor
The soil moisture velocity equation [ 1 ] describes the speed at which water moves vertically through unsaturated soil under the combined actions of gravity and capillarity, a process known as infiltration. The equation is an alternative form of the Richardson/Richards' equation. [ 2 ] [ 3 ] The key difference is that the dependent variable is the position of the wetting front $z$, which is a function of time, the water content, and media properties. The soil moisture velocity equation consists of two terms. The first "advection-like" term was developed to simulate surface infiltration [ 4 ] and was extended to the water table, [ 5 ] which was verified using data collected in a column experiment patterned after the famous experiment by Childs & Poulovassilis (1962) [ 6 ] and against exact solutions. [ 7 ] [ 1 ] The soil moisture velocity equation [ 1 ] or SMVE is a Lagrangian reinterpretation of the Eulerian Richards' equation wherein the dependent variable is the position $z$ of a wetting front of a particular moisture content $\theta$ with time:

$$\left.\frac{dz}{dt}\right\vert_{\theta} = \frac{\partial K(\theta)}{\partial \theta}\left[1 - \frac{\partial \psi(\theta)}{\partial z}\right] - D(\theta)\,\frac{\partial^{2}\psi/\partial z^{2}}{\partial \psi/\partial z},$$

where $z$ is depth (positive downward), $K(\theta)$ is the unsaturated hydraulic conductivity, $\psi(\theta)$ is the capillary head, and $D(\theta)$ is the soil water diffusivity. The first term on the right-hand side of the SMVE is called the "advection-like" term, while the second term is called the "diffusion-like" term. The advection-like term of the soil moisture velocity equation is particularly useful for calculating the advance of wetting fronts for a liquid invading an unsaturated porous medium under the combined action of gravity and capillarity, because it is convertible to an ordinary differential equation by neglecting the diffusion-like term, [ 5 ] and it avoids the problem of the representative elementary volume through the use of a fine water-content discretization and solution method. This equation was converted into a set of three ordinary differential equations (ODEs) [ 5 ] using the method of lines [ 8 ] to convert the partial derivatives on the right-hand side of the equation into appropriate finite difference forms. These three ODEs represent the dynamics of infiltrating water, falling slugs, and capillary groundwater, respectively. The derivation of the 1-D soil moisture velocity equation [ 1 ] for calculating the vertical flux $q$ of water in the vadose zone starts with conservation of mass for an unsaturated porous medium without sources or sinks:

$$\frac{\partial \theta}{\partial t} + \frac{\partial q}{\partial z} = 0.$$

We next insert the unsaturated Buckingham–Darcy flux: [ 9 ]

$$q = -K(\theta)\frac{\partial \psi(\theta)}{\partial z} + K(\theta),$$

yielding Richards' equation [ 2 ] in mixed form, because it includes both the water content $\theta$ and the capillary head $\psi(\theta)$:

$$\frac{\partial \theta}{\partial t} = \frac{\partial}{\partial z}\left[K(\theta)\left(\frac{\partial \psi(\theta)}{\partial z} - 1\right)\right].$$

Applying the chain rule of differentiation to the right-hand side of Richards' equation, and assuming that the constitutive relations for unsaturated hydraulic conductivity and soil capillarity are solely functions of the water content, $K = K(\theta)$ and $\psi = \psi(\theta)$, respectively, gives:

$$\frac{\partial \theta}{\partial t} = K'(\theta)\,\psi'(\theta)\left(\frac{\partial \theta}{\partial z}\right)^{2} + K(\theta)\,\psi''(\theta)\left(\frac{\partial \theta}{\partial z}\right)^{2} + K(\theta)\,\psi'(\theta)\frac{\partial^{2}\theta}{\partial z^{2}} - K'(\theta)\frac{\partial \theta}{\partial z}.$$

This equation implicitly defines a function $Z_R(\theta, t)$ that describes the position of a particular moisture content within the soil using a finite moisture-content discretization.
Employing the implicit function theorem, by which the cyclic rule requires dividing both sides of this equation by $-\partial\theta/\partial z$ to perform the change of variable, results in:

$$\frac{\partial Z_R}{\partial t} = -K'(\theta)\psi'(\theta)\frac{\partial \theta}{\partial z} - K(\theta)\psi''(\theta)\frac{\partial \theta}{\partial z} - K(\theta)\psi'(\theta)\frac{\partial^{2}\theta/\partial z^{2}}{\partial \theta/\partial z} + K'(\theta),$$

which can be written as:

$$\frac{\partial Z_R}{\partial t} = -K'(\theta)\left[\frac{\partial \psi(\theta)}{\partial z} - 1\right] - K(\theta)\left[\psi''(\theta)\frac{\partial \theta}{\partial z} + \psi'(\theta)\frac{\partial^{2}\theta/\partial z^{2}}{\partial \theta/\partial z}\right].$$

Inserting the definition of the soil water diffusivity:

$$D(\theta) \equiv K(\theta)\frac{d\psi}{d\theta}$$

into the previous equation produces:

$$\frac{\partial Z_R}{\partial t} = -K'(\theta)\left[\frac{\partial \psi(\theta)}{\partial z} - 1\right] - D(\theta)\frac{\partial^{2}\psi/\partial z^{2}}{\partial \psi/\partial z}.$$

If we consider the velocity of a particular water content $\theta$, then we can write the equation in the form of the soil moisture velocity equation:

$$\left.\frac{dz}{dt}\right\vert_{\theta} = \frac{\partial K(\theta)}{\partial \theta}\left[1 - \frac{\partial \psi(\theta)}{\partial z}\right] - D(\theta)\frac{\partial^{2}\psi/\partial z^{2}}{\partial \psi/\partial z}.$$

Written in moisture content form, the 1-D Richards' equation is: [ 10 ]

$$\frac{\partial \theta}{\partial t} = \frac{\partial}{\partial z}\left[D(\theta)\frac{\partial \theta}{\partial z}\right] - \frac{\partial K(\theta)}{\partial z},$$

where $D(\theta)$ [L²/T] is 'the soil water diffusivity' as previously defined. Note that with $\theta$ as the dependent variable, physical interpretation is difficult, because all the factors that affect the divergence of the flux are wrapped up in the soil moisture diffusivity term $D(\theta)$. However, in the SMVE, the three factors that drive flow appear in separate terms that have physical significance. The primary assumptions used in the derivation of the soil moisture velocity equation, namely that $K = K(\theta)$ and $\psi = \psi(\theta)$, are not overly restrictive: analytical and experimental results show that these assumptions are acceptable under most conditions in natural soils. In this case, the soil moisture velocity equation is equivalent to the 1-D Richards' equation, albeit with a change of dependent variable. This change of dependent variable is convenient because it reduces the complexity of the problem: whereas Richards' equation requires the calculation of the divergence of the flux, the SMVE represents a flux calculation, not a divergence calculation. The first term on the right-hand side of the SMVE represents the two scalar drivers of flow, gravity and the integrated capillarity of the wetting front.
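A quick consistency check of the step that introduced $D(\theta)$, not part of the cited derivation, can be written out with the chain rule; the shorthand $\theta_z$, $\theta_{zz}$, $\psi_z$, $\psi_{zz}$ for spatial derivatives is introduced here purely for brevity.

```latex
% Check that K(\theta)[\psi''\theta_z + \psi'\theta_{zz}/\theta_z] equals
% D(\theta)\psi_{zz}/\psi_z when D = K\psi' and \psi_z = \psi'\theta_z.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
With $\psi_z = \psi'(\theta)\,\theta_z$ and
$\psi_{zz} = \psi''(\theta)\,\theta_z^{2} + \psi'(\theta)\,\theta_{zz}$,
\begin{align*}
D(\theta)\,\frac{\psi_{zz}}{\psi_z}
  &= K(\theta)\,\psi'(\theta)\,
     \frac{\psi''(\theta)\,\theta_z^{2} + \psi'(\theta)\,\theta_{zz}}
          {\psi'(\theta)\,\theta_z} \\
  &= K(\theta)\left[\psi''(\theta)\,\theta_z
     + \psi'(\theta)\,\frac{\theta_{zz}}{\theta_z}\right],
\end{align*}
which is exactly the bracketed capillarity term in the expression for
$\partial Z_R/\partial t$ above.
\end{document}
```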
Considering just that term, the SMVE becomes:

$$\left.\frac{dz}{dt}\right\vert_{\theta} = \frac{\partial K(\theta)}{\partial \theta}\left[1 - \frac{\partial \psi(\theta)}{\partial z}\right],$$

where $\partial \psi(\theta)/\partial z$ is the capillary head gradient that is driving the flux, and the remaining conductivity term $K'(\theta)$ represents the ability of gravity to conduct flux through the soil. This term is responsible for the true advection of water through the soil under the combined influences of gravity and capillarity. As such, it is called the "advection-like" term. Neglecting gravity and the scalar wetting front capillarity, we can consider only the second term on the right-hand side of the SMVE. In this case the soil moisture velocity equation becomes:

$$\left.\frac{dz}{dt}\right\vert_{\theta} = -D(\theta)\frac{\partial^{2}\psi/\partial z^{2}}{\partial \psi/\partial z}.$$

This term is strikingly similar to Fick's second law of diffusion. For this reason, it is called the "diffusion-like" term of the SMVE. It represents the flux due to the shape of the wetting front, $-D(\theta)\,\partial^{2}\psi/\partial z^{2}$, divided by the spatial gradient of the capillary head, $\partial \psi/\partial z$. Looking at this diffusion-like term, it is reasonable to ask when it might be negligible. The first answer is that this term will be zero when the first derivative is constant, $\partial \psi/\partial z = C$, because the second derivative will then equal zero. One example where this occurs is the case of an equilibrium hydrostatic moisture profile, when $\partial \psi/\partial z = -1$ with $z$ defined as positive upward. This is a physically realistic result, because an equilibrium hydrostatic moisture profile is known to produce no fluxes. Another instance when the diffusion-like term will be nearly zero is in the case of sharp wetting fronts, where the denominator of the diffusion-like term $\partial \psi/\partial z \to \infty$, causing the term to vanish. Notably, sharp wetting fronts are notoriously difficult to resolve and accurately solve with traditional numerical Richards' equation solvers. [ 11 ] Finally, in the case of dry soils, $K(\theta)$ tends towards $0$, making the soil water diffusivity $D(\theta)$ tend towards zero as well. In this case, the diffusion-like term would produce no flux. Comparison against exact solutions of Richards' equation for infiltration into idealized soils developed by Ross & Parlange (1994) [ 12 ] revealed [ 1 ] that neglecting the diffusion-like term resulted in greater than 99% accuracy in calculated cumulative infiltration. This result indicates that the advection-like term of the SMVE, converted into an ordinary differential equation using the method of lines, is an accurate ODE solution of the infiltration problem. This is consistent with the result published by Ogden et al., [ 5 ] who found errors in simulated cumulative infiltration of 0.3% using 263 cm of tropical rainfall over an 8-month simulation to drive infiltration simulations that compared the advection-like SMVE solution against the numerical solution of Richards' equation. The advection-like term of the SMVE can be solved using the method of lines and a finite moisture-content discretization. This solution of the SMVE advection-like term replaces the 1-D Richards' equation PDE with a set of three ordinary differential equations (ODEs).
These three ODEs describe the movement of infiltration fronts, falling slugs, and capillary groundwater fronts, developed below; a numerical sketch of the infiltration case follows at the end of this section.

Water infiltrating the land surface can flow through the pore space between $\theta_d$ and $\theta_i$. Using the method of lines to convert the SMVE advection-like term into an ODE, and given that any ponded depth of water on the land surface is $h_p$, the Green and Ampt (1911) [13] assumption is employed: the capillary head gradient driving the flow in the $j$-th discretization or "bin" is approximated as $(|\psi(\theta_d)|+h_p)/z_j$. Therefore, the finite water-content equation in the case of infiltration fronts is:

$$\left.\frac{dz}{dt}\right\vert_{j}=\frac{K(\theta_d)-K(\theta_i)}{\theta_d-\theta_i}\left[\frac{|\psi(\theta_d)|+h_p}{z_j}+1\right].$$

After rainfall stops and all surface water infiltrates, water in bins that contain infiltration fronts detaches from the land surface. Assuming that the capillarity at the leading and trailing edges of this 'falling slug' of water is balanced, the water falls through the media at the incremental conductivity associated with the $j^{\text{th}}$ $\Delta\theta$ bin:

$$\left.\frac{dz}{dt}\right\vert_{j}=\frac{K(\theta_j)-K(\theta_{j-1})}{\Delta\theta}.$$

This capillary-free approach is very similar to the kinematic wave approximation. In the case of capillary groundwater fronts, the flux of water to the $j^{\text{th}}$ bin occurs between bins j and i. Therefore, in the context of the method of lines, the finite water-content equation is:

$$\left.\frac{dz}{dt}\right\vert_{j}=\frac{K(\theta_j)-K(\theta_i)}{\theta_j-\theta_i}\left[\frac{|\psi(\theta_j)|}{z_j}-1\right].$$

Note the "−1" in parentheses, representing the fact that gravity and capillarity are acting in opposite directions. The performance of this equation was verified [7] using a column experiment fashioned after that of Childs and Poulovassilis (1962). [6] Results of that validation showed that the finite water-content vadose zone flux calculation method performed comparably to the numerical solution of Richards' equation. Data from this column experiment are available via DOI and are useful for evaluating models of near-surface water table dynamics. It is noteworthy that the SMVE advection-like term solved using the finite moisture-content method completely avoids the need to estimate the specific yield: calculating the specific yield as the water table nears the land surface is made cumbersome by non-linearities, but the SMVE solved with a finite moisture-content discretization handles this automatically in the case of a dynamic near-surface water table.

The paper on the Soil Moisture Velocity Equation was highlighted by the editor of J. Adv. Modeling of Earth Systems in the issue in which it was first published, is in the public domain, and may be freely downloaded. The paper describing the finite moisture-content solution of the advection-like term of the Soil Moisture Velocity Equation was selected to receive the 2015 Coolest Paper Award by the early career members of the International Association of Hydrogeologists.
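As the numerical sketch promised above, the code below integrates the infiltration-front ODE with forward Euler under constant ponding. The Brooks–Corey-style soil relations, parameter values, bin count, and time step are hypothetical assumptions; under constant ponding all bin fronts advance together, so the scheme effectively reproduces a single Green–Ampt-type front.

```python
import numpy as np

# Minimal method-of-lines sketch of the finite water-content infiltration
# ODE under constant ponding. Soil closures and parameters are assumed
# Brooks-Corey-style values for demonstration only.
theta_r, theta_s = 0.05, 0.45
theta_i, theta_d = 0.15, 0.45        # initial and wetted water contents
psi_b, lam, K_s = 20.0, 0.3, 1.0     # [cm], [-], [cm/h]
h_p = 0.5                            # ponded depth [cm]

def Se(theta):
    return (theta - theta_r) / (theta_s - theta_r)

def K(theta):
    return K_s * Se(theta) ** (3.0 + 2.0 / lam)

G = psi_b                                    # |psi(theta_d)| at saturation
v0 = (K(theta_d) - K(theta_i)) / (theta_d - theta_i)

N = 20                                       # number of water-content bins
dtheta = (theta_d - theta_i) / N
z = np.full(N, 0.01)                         # front depth per bin [cm]

dt, t_end = 0.001, 2.0                       # Euler step and duration [h]
for _ in range(int(t_end / dt)):
    dzdt = v0 * ((G + h_p) / z + 1.0)        # infiltration-front ODE per bin
    z += dzdt * dt

F = np.sum(dtheta * z)                       # cumulative infiltration [cm]
print(f"cumulative infiltration after {t_end} h: {F:.2f} cm")
```

Under rainfall of varying intensity, bins are activated at different times, giving each front its own depth and velocity; that is where the finite water-content discretization departs from the classic Green–Ampt picture.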
https://en.wikipedia.org/wiki/Soil_moisture_velocity_equation
Soil pH is a measure of the acidity or basicity (alkalinity) of a soil. Soil pH is a key characteristic that can be used to make informative qualitative and quantitative analyses of soil characteristics. [1] pH is defined as the negative logarithm (base 10) of the activity of hydronium ions (H+, or more precisely H3O+(aq)) in a solution. In soils, it is measured in a slurry of soil mixed with water (or a salt solution, such as 0.01 M CaCl2), and normally falls between 3 and 10, with 7 being neutral. Acid soils have a pH below 7 and alkaline soils have a pH above 7. Ultra-acidic soils (pH < 3.5) and very strongly alkaline soils (pH > 9) are rare. [2][3]

Soil pH is considered a master variable in soils, as it affects many chemical processes. It specifically affects plant nutrient availability by controlling the chemical forms of the different nutrients and influencing the chemical reactions they undergo. The optimum pH range for most plants is between 5.5 and 7.5; [3] however, many plants have adapted to thrive at pH values outside this range. The United States Department of Agriculture Natural Resources Conservation Service classifies soil pH ranges as follows: [4] 0 to 6, acidic; 7, neutral; 8 and above, alkaline.

Methods of determining pH range from simple field test kits to laboratory analysis. Precise, repeatable measures of soil pH are required for scientific research and monitoring; this generally entails laboratory analysis using a standard protocol, an example being that in the USDA Soil Survey Field and Laboratory Methods Manual. [8] In this document, the three-page protocol for soil pH measurement includes the following sections: Application; Summary of Method; Interferences; Safety; Equipment; Reagents; and Procedure. The pH is measured in soil-water (1:1) and soil-salt (1:2 CaCl2) solutions. For convenience, the pH is initially measured in water and then measured in CaCl2. With the addition of an equal volume of 0.02 M CaCl2 to the soil suspension that was prepared for the water pH, the final soil-solution ratio is 1:2 0.01 M CaCl2. A 20-g soil sample is mixed with 20 mL of reverse osmosis (RO) water (1:1 w:v) with occasional stirring. The sample is allowed to stand 1 h with occasional stirring. The sample is stirred for 30 s, and the 1:1 water pH is measured. The 0.02 M CaCl2 (20 mL) is added to the soil suspension, the sample is stirred, and the 1:2 0.01 M CaCl2 pH is measured (4C1a2a2).

The pH of a natural soil depends on the mineral composition of the parent material of the soil and the weathering reactions undergone by that parent material. In warm, humid environments, soil acidification occurs over time as the products of weathering are leached by water moving laterally or downwards through the soil. In dry climates, however, soil weathering and leaching are less intense, and soil pH is often neutral or alkaline. [9][10] Many processes contribute to soil acidification. [11] Total soil alkalinity increases under several conditions. [13][14]
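Because pH is simply the negative base-10 logarithm of the hydronium activity, conversion between activity and pH is a one-line computation each way; the sketch below uses made-up slurry activities for illustration.

```python
import math

def ph_from_activity(a_h: float) -> float:
    """pH = -log10(hydronium activity in mol/L)."""
    return -math.log10(a_h)

def activity_from_ph(ph: float) -> float:
    """Inverse transform: hydronium activity from pH."""
    return 10.0 ** (-ph)

# Hypothetical soil-slurry readings:
print(round(ph_from_activity(3.2e-6), 2))  # 5.49 -> an acid soil
print(activity_from_ph(7.0))               # 1e-07 mol/L -> neutral
```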
The accumulation of alkalinity in a soil (as carbonates and bicarbonates of Na, K, Ca and Mg) occurs when there is insufficient water flowing through the soil to leach soluble salts. This may be due to arid conditions or to poor internal soil drainage; in these situations most of the water that enters the soil is transpired (taken up by plants) or evaporates, rather than flowing through the soil. [13] The soil pH usually increases when the total alkalinity increases, but the balance of the added cations also has a marked effect on the soil pH. For example, increasing the amount of sodium in an alkaline soil tends to induce dissolution of calcium carbonate, which increases the pH. Calcareous soils may vary in pH from 7.0 to 9.5, depending on the degree to which Ca2+ or Na+ dominate the soluble cations. [13]

High levels of aluminium occur near mining sites; small amounts of aluminium are released to the environment at coal-fired power plants or incinerators. [15] Aluminium in the air is washed out by rain or settles out, but small particles of aluminium remain in the air for a long time. [15] Acidic precipitation is the main natural factor mobilizing aluminium from natural sources [16] and the main reason for the environmental effects of aluminium; [17] however, the main source of aluminium in salt water and fresh water is the industrial processes that also release aluminium into the air. [16]

Plants grown in acid soils can experience a variety of stresses, including aluminium (Al), hydrogen (H), and/or manganese (Mn) toxicity, as well as nutrient deficiencies of calcium (Ca) and magnesium (Mg). [18] Aluminium toxicity is the most widespread problem in acid soils. Aluminium is present in all soils to varying degrees, but dissolved Al3+ is toxic to plants; Al3+ is most soluble at low pH, and above pH 5.0 there is little Al in soluble form in most soils. [19][20] Aluminium is not a plant nutrient and, as such, is not actively taken up by plants, but enters plant roots passively through osmosis. Aluminium can exist in many different forms and is responsible for limiting plant growth in various parts of the world. Aluminium tolerance studies have been conducted in different plant species to determine viable exposure thresholds and concentrations, along with plant function upon exposure. [21] Aluminium inhibits root growth: lateral roots and root tips become thickened, roots lack fine branching, and root tips may turn brown. In the root, the initial effect of Al3+ is the inhibition of the expansion of the cells of the rhizodermis, leading to their rupture; thereafter it is known to interfere with many physiological processes, including the uptake and transport of calcium and other essential nutrients, cell division, cell wall formation, and enzyme activity. [19][22]

Proton (H+ ion) stress can also limit plant growth. The proton pump, H+-ATPase, of the plasmalemma of root cells works to maintain the near-neutral pH of their cytoplasm. A high proton activity (pH within the range 3.0–4.0 for most plant species) in the external growth medium overcomes the capacity of the cell to maintain the cytoplasmic pH, and growth shuts down. [23]

In soils with a high content of manganese-containing minerals, Mn toxicity can become a problem at pH 5.6 and lower: like aluminium, manganese becomes increasingly soluble as pH drops. Manganese is an essential plant nutrient, so plants transport Mn into leaves. Classic symptoms of Mn toxicity are crinkling or cupping of leaves. [24]
Soil pH affects the availability of some plant nutrients. As discussed above, aluminium toxicity has direct effects on plant growth; however, by limiting root growth, it also reduces the availability of plant nutrients. Because roots are damaged, nutrient uptake is reduced, and deficiencies of the macronutrients (nitrogen, phosphorus, potassium, calcium and magnesium) are frequently encountered in very strongly acidic to ultra-acidic soils (pH < 5.0). [26] When aluminium levels increase in the soil, pH decreases; this prevents trees from taking up water, meaning they cannot photosynthesize, leading them to die. Affected trees can also develop a yellowish colour on their leaves and veins. [27]

Molybdenum availability is increased at higher pH; this is because the molybdate ion is more strongly sorbed by clay particles at lower pH. [28] Zinc, iron, copper and manganese show decreased availability at higher pH (increased sorption at higher pH). [28] The effect of pH on phosphorus availability varies considerably, depending on soil conditions and the crop in question. The prevailing view in the 1940s and 1950s was that P availability was maximized near neutrality (soil pH 6.5–7.5) and decreased at higher and lower pH. [29][30] Interactions of phosphorus with pH in the moderately to slightly acidic range (pH 5.5–6.5) are, however, far more complex than this view suggests. Laboratory tests, glasshouse trials and field trials have indicated that increases in pH within this range may increase, decrease, or have no effect on P availability to plants. [30][31]

Strongly alkaline soils are sodic and dispersive, with slow infiltration, low hydraulic conductivity and poor available water capacity. [32] Plant growth is severely restricted because aeration is poor when the soil is wet, while in dry conditions plant-available water is rapidly depleted and the soils become hard and cloddy (high soil strength). [33] The higher the pH of the soil, the less water is available to the plants and organisms that depend on it; likewise, at decreased pH, plants cannot take up water as they normally would, which prevents photosynthesis. [34] Many strongly acidic soils, on the other hand, have strong aggregation, good internal drainage, and good water-holding characteristics. However, for many plant species, aluminium toxicity severely limits root growth, and moisture stress can occur even when the soil is relatively moist. [19]

In general terms, different plant species are adapted to soils of different pH ranges. For many species, the suitable soil pH range is fairly well known. [35] Online databases of plant characteristics, such as USDA PLANTS [36] and Plants for a Future, [37] can be used to look up the suitable soil pH range of a wide range of plants, and documents like Ellenberg's indicator values for British plants [38] can also be consulted. However, a plant may be intolerant of a particular pH in some soils as a result of a particular mechanism, and that mechanism may not apply in other soils. For example, a soil low in molybdenum may not be suitable for soybean plants at pH 5.5, but soils with sufficient molybdenum allow optimal growth at that pH. [26] Similarly, some calcifuges (plants intolerant of high-pH soils) can tolerate calcareous soils if sufficient phosphorus is supplied. [39]
Another confounding factor is that different varieties of the same species often have different suitable soil pH ranges. Plant breeders can use this to breed varieties that can tolerate conditions otherwise considered unsuitable for that species; examples are projects to breed aluminium-tolerant and manganese-tolerant varieties of cereal crops for food production in strongly acidic soils. [40] Suitable soil pH ranges for some widely cultivated plants can be found in the USDA PLANTS Database. [36] Some species (like Pinus radiata and Opuntia ficus-indica) tolerate only a narrow range of soil pH, whereas others (such as Vetiveria zizanioides) tolerate a very wide pH range.

In natural or near-natural plant communities, the various pH preferences of plant species (or ecotypes) at least partly determine the composition and biodiversity of vegetation. While both very low and very high pH values are detrimental to plant growth, there is an increasing trend of plant biodiversity along the range from extremely acidic (pH 3.5) to strongly alkaline (pH 9) soils, i.e. there are more calcicole than calcifuge species, at least in terrestrial environments. [41][42] Although widely reported and supported by experimental results, [43][44] the observed increase of plant species richness with pH is still in need of a clear-cut explanation. Competitive exclusion between plant species with overlapping pH ranges most probably contributes to the observed shifts of vegetation composition along pH gradients. [45]

Soil biota (soil microflora, soil animals) are sensitive to soil pH, either directly upon contact or after soil ingestion, or indirectly through the various soil properties to which pH contributes (e.g. nutrient status, metal toxicity, humus form). According to the various physiological and behavioural adaptations of soil biota, the species composition of soil microbial and animal communities varies with soil pH. [46][47] Along altitudinal gradients, changes in the species distribution of soil animal and microbial communities can be at least partly ascribed to variation in soil pH. [47][48] The shift from toxic to non-toxic forms of aluminium around pH 5 marks the passage from acid-tolerance to acid-intolerance, with few changes in the species composition of soil communities above this threshold, even in calcareous soils. [49][50] Soil animals exhibit distinct pH preferences when allowed to exert a choice along a range of pH values, [51] suggesting that the various field distributions of soil organisms, motile microbes included, could at least partly result from active movement along pH gradients. [52][53] As with plants, competition between acido-tolerant and acido-intolerant soil-dwelling organisms is suspected to play a role in the shifts in species composition observed along pH ranges. [54] The opposition between acido-tolerance and acido-intolerance is commonly observed at the species level within a genus or at the genus level within a family, but it also occurs at much higher taxonomic ranks, as between soil fungi and bacteria, here too with a strong involvement of competition. [55] It has been suggested that soil organisms more tolerant of soil acidity, and thus living mainly in soils at pH less than 5, are more primitive than those intolerant of soil acidity. [56]
A cladistic analysis of the collembolan genus Willemia showed that tolerance to soil acidity was correlated with tolerance of other stress factors and that stress tolerance was an ancestral character in this genus. [57] However, the generality of these findings remains to be established. At low pH, the oxidative stress induced by aluminium (Al3+) affects soil animals whose bodies are not protected by a thick chitinous exoskeleton (as in arthropods) and which are thus in more direct contact with the soil solution, e.g. protists, nematodes, rotifers (microfauna), enchytraeids (mesofauna) and earthworms (macrofauna). [58] Effects of pH on soil biota can be mediated by the various functional interactions of soil foodwebs. It has been shown experimentally that the collembolan Heteromurus nitidus, commonly living in soils at pH higher than 5, could be cultured in more acid soils provided that predators were absent. [59] Its attraction to earthworm excreta (mucus, urine, faeces), mediated by ammonia emission, [60] provides food and shelter within earthworm burrows in mull humus forms associated with less acid soils. [61]

Soil biota affect soil pH directly through excretion, and indirectly by acting on the physical environment. Many soil fungi, although not all of them, acidify the soil by excreting oxalic acid, a product of their respiratory metabolism. Oxalic acid precipitates calcium, forming insoluble crystals of calcium oxalate and thus depriving the soil solution of this necessary element. [62] Conversely, earthworms exert a buffering effect on soil pH through their excretion of mucus, which has amphoteric properties. [63] By mixing organic matter with mineral matter, in particular clay particles, and by adding mucus as a glue, burrowing soil animals, e.g. fossorial rodents, moles, earthworms, termites, some millipedes and fly larvae, help decrease the natural acidity of raw organic matter, as observed in mull humus forms. [64][65]

Finely ground agricultural lime is often applied to acid soils to increase soil pH (liming). The amount of limestone or chalk needed to change pH is determined by the mesh size of the lime (how finely it is ground) and the buffering capacity of the soil. A high mesh number (60 mesh = 0.25 mm; 100 mesh = 0.149 mm) indicates a finely ground lime that will react quickly with soil acidity. The buffering capacity of a soil depends on the clay content of the soil, the type of clay, and the amount of organic matter present, and may be related to the soil cation exchange capacity. Soils with high clay content have a higher buffering capacity than soils with little clay, and soils with high organic matter have a higher buffering capacity than those with low organic matter. [66] Soils with higher buffering capacity require a greater amount of lime to achieve an equivalent change in pH. [67] The buffering of soil pH is often directly related to the quantity of aluminium in soil solution taking up exchange sites as part of the cation exchange capacity. This aluminium can be measured in a soil test in which it is extracted from the soil with a salt solution and then quantified by laboratory analysis. Then, using the initial soil pH and the aluminium content, the amount of lime needed to raise the pH to a desired level can be calculated. [68]
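The lime calculation just described can be illustrated with a common rule of thumb: roughly one tonne of pure CaCO3 per hectare furrow slice (about 2 million kg of soil) neutralizes each cmolc/kg of exchangeable acidity. The sketch below is illustrative only; the furrow-slice mass, the neutralizing-value correction, and the example soil-test figure are assumptions, not a substitute for a laboratory lime recommendation.

```python
# Illustrative lime-requirement estimate from exchangeable acidity.
# Assumptions (not from the cited sources): a hectare furrow slice of
# ~2e6 kg soil, and ~1 t pure CaCO3 per cmolc/kg of exchangeable acidity.

CACO3_EQ_WEIGHT_G = 50.0        # g CaCO3 per equivalent (100 g/mol, 2 eq/mol)
FURROW_SLICE_KG_PER_HA = 2.0e6

def lime_t_per_ha(exch_acidity_cmolc_per_kg: float,
                  neutralizing_value: float = 0.9) -> float:
    """Tonnes of liming material per hectare to neutralize the measured
    exchangeable acidity; neutralizing_value corrects for lime purity
    and fineness (1.0 = pure, finely ground CaCO3)."""
    # cmolc/kg -> g CaCO3 per kg soil: 1 cmolc = 0.01 eq -> 0.5 g CaCO3
    g_per_kg = exch_acidity_cmolc_per_kg * 0.01 * CACO3_EQ_WEIGHT_G
    t_per_ha = g_per_kg * FURROW_SLICE_KG_PER_HA / 1.0e6  # grams -> tonnes
    return t_per_ha / neutralizing_value

# e.g. a hypothetical soil test reporting 1.5 cmolc/kg exchangeable Al + H:
print(round(lime_t_per_ha(1.5), 1))   # ~1.7 t/ha of 90%-effective lime
```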
Amendments other than agricultural lime that can be used to increase the pH of soil include wood ash, industrial calcium oxide (burnt lime), magnesium oxide, basic slag (calcium silicate), and oyster shells. These products increase the pH of soils through various acid–base reactions. Calcium silicate neutralizes active acidity in the soil by reacting with H+ ions to form monosilicic acid (H4SiO4), a neutral solute. [69]

The pH of an alkaline soil can be reduced by adding acidifying agents or acidic organic materials. Elemental sulfur (90–99% S) has been used at application rates of 300–500 kg/ha (270–450 lb/acre); it slowly oxidises in the soil to form sulfuric acid. Acidifying fertilizers, such as ammonium sulfate, ammonium nitrate and urea, can help to reduce the pH of a soil because ammonium oxidises to form nitric acid. Acidifying organic materials include peat and sphagnum peat moss. [70] However, in high-pH soils with a high calcium carbonate content (more than 2%), attempting to reduce the pH with acids can be very costly and ineffective. In such cases, it is often more efficient to add phosphorus, iron, manganese, copper, or zinc instead, because deficiencies of these nutrients are the most common reasons for poor plant growth in calcareous soils. [71][70]
https://en.wikipedia.org/wiki/Soil_pH
Pollution is an environmental issue in Canada. It has posed health risks to the Canadian population and is an area of concern for Canadian lawmakers. Air, water and soil pollution, as well as the associated health effects, are prominent points of contention in modern Canadian society.

Air pollution in Canada is caused by industrial and vehicle emissions, agriculture, construction, wood burning, and energy production. [1] Ongoing monitoring of Canada's Air Pollutant Emissions Inventory shows that 14 of the 17 air pollutants monitored are decreasing compared to historical levels. Data from 2019 show that Canada is expected to meet or exceed its emission reduction commitments for 2020, as per the amended Gothenburg Protocol. [2] While overall pollution levels have dropped, oil sands pollution has increased by 20% since 2009. [3][4] Tar sands facilities were found to be among the top four highest polluters of volatile organic compounds (VOCs), a major air contaminant. [4] VOCs and other air contaminants are set to increase in the future as a result of continued output from the oil sands. [4] Oil sands pollution is set to increase not only VOCs but also acid rain. [5] Acid rain is rain that has been contaminated by airborne chemicals, making it acidic. [6] Two major causes of acid rain are sulphur dioxide and nitrogen oxide. [7] Acid rain can cause damage to soil, water, wildlife, plants and buildings. Additionally, the airborne particles that cause acid rain can also contribute to smog. [7] In recent years progress has been made in reducing acid rain; however, Alberta's oil sands may soon set back this progress. In southeastern Saskatchewan, air pollution from oil production has breached provincial air quality standards hundreds of times since 2014. [8]

In recent years, the Canada-United States Air Quality Agreement, signed on 13 March 1992, has improved air quality by reducing sulfur dioxide and nitrogen oxide emissions in both countries. [9] The agreement was meant to address the issue of transnational air pollution between the two countries. It was expanded in 2000 to include goals of reducing emissions of volatile organic compounds and levels of ground-level ozone. [9] Ground-level ozone is caused by reactions between nitrogen oxides and VOCs in the presence of sunlight; ozone is a contributor to smog and is known to cause numerous respiratory diseases. [10] The 2012 Canada-United States Air Quality Agreement Progress Report found that "Canada's total emissions of sulfur dioxide have decreased by 57% from 1990 levels while the U.S. has reduced total sulfur dioxide emissions from covered sources by 67% from their 1990 emission levels. Between 2000 and 2010, Canada reduced total emissions of nitrogen oxides by 40% in the transboundary ozone region while U.S. total nitrogen oxide emissions decreased by 42% in the region". [11]

While transnational pollution between the United States and Canada has decreased, many Canadians still say they contend with polluted air as a result of drifting pollution from the U.S.; approximately 70% of the air pollution in Canada comes from the United States. [12] In 2006 the government of Ontario announced that "5,000 premature deaths caused by smog in the province every year can be attributed to air pollution that crosses the Canada-U.S. border." [13]
Additionally, Peter Kelley, then (2006) mayor of Halifax, proclaimed that "over 50 per cent of air pollutants over New Brunswick and Nova Scotia are from the U.S. For us, we're trying to deal with what's coming our way, but also what we generate here as well." [13] In an attempt to combat the pollution, thirteen Canadian municipalities filed a petition in 2006 with the U.S. Environmental Protection Agency calling for a reduction in coal-fired plants. [13]

The Climate Change Accountability Act called for greenhouse gas emissions to be 25% below 1990 levels by 2021, and 80% below 1990 levels by 2050. Although the bill was passed by the House of Commons, it was defeated by the Senate. Environment Minister Jim Prentice stated in early 2010 that the new goal for greenhouse gas emissions would be 17% below 2005 levels by 2020, the equivalent of a 3% increase from 1990. [14]

While most of Canada's surface and ground water is generally clean, there is some local and regional water pollution, which can be caused by "industrial and municipal discharge, runoff, spills, and deposition of airborne pollutants". [15] Contaminated water can result in a myriad of serious consequences for human health. As previously stated, Alberta's oil sands are set to cause growing levels of acid rain, consequently leading to an increase in water contamination in the area. Acid rain will cause Canada's lakes and rivers to become further acidified, decreasing levels of surface-water calcium. This lower concentration of calcium is already having particularly adverse effects on aquatic life, as can be seen with the Daphnia species, an important food source for aquatic species and marine life. [16]

A recent study at the University of Alberta found levels of metals like arsenic, lead and mercury to be considerably higher than national guidelines in water downstream from Albertan oil sites. [17] This pollution could potentially result in harmful health implications for fish and other wildlife. [17] The study further discerned that its findings were "contrary to claims made by industry and government", which had purported that "pollutants are from natural sources and not from the expanding production of oil from tar sands." [17]

Other than contributing to acid rain and high levels of metals in water, sites of oil production can also cause significant damage through human error and runoff. A prominent example is the 2007 case involving the Athabasca River, when, due to human error, energy company Suncor spilled 9.8 million litres of oil sands waste water into the river, causing adverse effects for people and wildlife in the area. [4] The Athabasca River can also be used as an example of oil sands runoff: it was found that the Athabasca's waters, which are downstream from the oil sands, had higher concentrations of pollutants as a result of runoff. [18] High concentrations of pollutants can have serious consequences for wildlife and humans. Recently, it was reported that there were significant increases in fish deformities as well as an increase in cancer rates in a Native community downstream from the Athabasca. [19]

Pollution of the Great Lakes, the world's biggest bodies of fresh water, [20] continues to be a significant problem for both Canada and the United States.
According to Derek Stack, executive director of Great Lakes United, "High pollution levels in the Great Lakes basin continue to take an apparent toll on the air and water quality of the ecosystem." [21] In 2002, it was reported that the Great Lakes basin was home to 45% of all toxic air pollution in Canada, in turn affecting the Great Lakes' water. [22] An even more recent report suggests that the Alberta oil sands' impact could reach as far as the Great Lakes. [23] The report warns that "[oil] refineries will be using the Great Lakes 'as a cheap supply' source for their copious water needs and the area's air 'as a pollution dump'." [23] Sulphur dioxide emissions have also contributed to the acidity of Canada's lakes. The thousands of lakes in Canada (including the Great Lakes) have an average pH of 5, which is harmfully acidic for the aquatic life in these lakes. [12]

In September 2012, the United States and Canada signed an amended version of the Great Lakes Water Quality Agreement. [24] The overarching purpose of the Agreement is to "restore and maintain the chemical, physical and biological integrity of the waters". [24] Significant amendments made to the Agreement include "address[ing] aquatic invasive species, habitat degradation and the effects of climate change, and support continued work on existing threats to people's health and the environment in the Great Lakes Basin such as harmful algae, toxic chemicals, and discharges from other vessels". [24] However, some contend that the changes made to the Agreement, while good in principle, lack "hard number goals, and actions to reach them." [25]

Under the 1970 Arctic Waters Pollution Prevention Act, the Canadian government established a framework to prevent pollution of Canadian Arctic waters. However, in recent years Arctic waters have become increasingly polluted: it was recently found that, due to pollution, some waters have levels of lead that are higher than Canadian guidelines. [26] Coastal communities that emit waste also contribute to Arctic pollution. Arctic coastal communities do not presently have the infrastructure necessary to properly deal with their waste; this could lead to greater pollution in the future as these communities continue to grow in size. [26] Beyond coastal communities, waste and litter from the rest of the world continue to be a significant issue in the Arctic, with waste levels doubling in the past ten years. [27] The most significant types of litter found are plastic items and plastic bags. [27]

The Guidelines for Canadian Drinking Water Quality are guidelines for drinking water quality standards in Canada developed by Health Canada. These guidelines set forth recommendations for the maximum concentrations of various substances in drinking water. Provinces and territories are responsible for enforcing these guidelines, as there is no national regulatory body for drinking water. [28]

Water pollution by sewage is one of the main culprits in polluting drinking water. [29] The advocacy group Ecojustice estimates overall raw sewage dumping in Canada to be around 200 billion litres a year. [30] The Canadian government recently announced waste water regulations that would allow sewage to be dumped into Canadian waters until 2040. [31] Proper measures for waste water disposal will not immediately be put in place; rather, they will be implemented gradually from 2020 to 2040.
In the meantime, however, Canadian municipalities may continue to pollute their waters by dumping sewage. A prominent example is Halifax, Nova Scotia, where human waste is dumped directly into the Halifax Harbour. [30] This dumping can mainly be attributed to a failure in the city's sewage treatment infrastructure. [30] Victoria, British Columbia follows a similar practice, discharging its untreated waste into the ocean, [30] though the government has plans to open operational treatment facilities by 2016. [30] Water pollution resulting from sewage can also be attributed to errors at sewage facilities. A recent example is Ottawa, which in 2004 experienced a 190-million-litre raw sewage spill into the Ottawa River. [32] Similarly, in 2011 Winnipeg released "partially treated sewage water into the Red River for seven weeks"; [33] in this case, however, the city was actually charged for its pollution. [33] Numerous other places, like Richmond, British Columbia, and Calgary, Alberta, have experienced significant sewage spills into their local waters. [32]

While soil pollution is present in Canada, it is not yet an area of great national concern. Some of the main causes of soil pollution include chemical and oil spills into the ground, road salt, excessive pesticide use by farmers, acid rain, and polluted water. [34] Acid deposition is a leading cause of soil degradation: acidic particles from pollutants become part of the soil, lowering its pH and thereby harming the organisms that live within it. [35] As Environment Canada notes, "soil degradation degrades the land and places significant stress on ecologically sensitive biota and flora". [36] Soil degradation in Canada's biologically sensitive forests as a result of pollution is one of the most significant cases of degradation in the country. One study found that 12% of Alberta's forest soils are over their acid-carrying capacity. [37] This rise in acidity is attributed to the continual extraction of fossil fuel from the Alberta oil sands. [37] Oil refinery sites, like those found in Alberta, have become some of the most dominant contributors to Canadian soil pollution. A further example can be witnessed in Calgary, where a neighbourhood built on an old Imperial Oil refinery needed its soil replaced due to contamination. [38]

As a result of Canada's icy winters, salt is needed to de-ice slippery roads. The primary ingredient of road salt is sodium chloride. [39] Road salt, while helping cars and people gain traction in the winter, can have serious consequences for soil. As National Geographic found, "Road salt can pollute soil at every stage in the deicing process." [39] This pollution is a result of numerous factors, such as runoff, application, and spray from vehicles. [39] In Canada, research has shown that "salt run-off from roads can increase local chloride levels to between 100 and 4,000 times normal levels." [40] Salt can have adverse effects on soil and soil composition. Significant levels of chloride (one of the main components of salt) can "alter the soil's pH chemistry and elevate levels of heavy metal pollutants, while at the same time causing a loss of soil structure and killing off micro-organisms". [40] These effects can have dire consequences for plants, rendering them unable to grow or stunting their growth. [40] Salt and oil refineries are not the only contaminants of soil.
Polychlorinated biphenyls (PCBs) also pollute the soil. PCBs are released into the environment through "spills, leaks from electrical and other equipment, and improper disposal and storage". [41] However, it was recently found that common weeds are able to remove PCBs from contaminated soil: one study found that "the weeds stored PCBs in their shoots and could be harvested for disposal cutting the need to expensively remove and incinerate contaminated soil". [42]

The Canadian federal government has established institutions that protect marine areas, including through the mitigation of plastic pollution. In 1997, Canada adopted legislation for oceans management and passed the Oceans Act. [43] Federal governance bodies, regional governance bodies, and Aboriginal Peoples are the actors involved in the process of decision-making and implementation. The regional governance bodies are federal, provincial, and territorial government agencies that hold responsibilities for the marine environment. Aboriginal Peoples in Canada have treaty and non-treaty rights related to ocean activities; according to the Canadian government, it respects these rights and works with Aboriginal groups in oceans management activities. [43]

With the Oceans Act in force, Canada made a commitment to conserve and protect the oceans. The Oceans Act's underlying principles are sustainable development and a precautionary, integrated management approach, intended to ensure a comprehensive understanding of protecting marine areas. In the integrated management approach, the Oceans Act designates federal responsibility to the Minister of Fisheries and Oceans Canada for any new and emerging ocean-related activities. [43] The Act encourages collaboration and coordination within the government, unifying interested parties, and engages any Canadians who are interested in being informed of decision-making regarding the ocean environment.

In 2005, federal organizations developed the Federal Marine Protected Areas Strategy, [43] a collaborative approach implemented by Fisheries and Oceans Canada, Parks Canada, and Environment Canada to plan and manage federal marine protected areas. The federal marine protected areas programs work with Aboriginal groups, industries, academia, environmental groups, and NGOs to strengthen marine protected areas. The federal marine protected areas network consists of three core programs: Marine Protected Areas (MPAs), Marine Wildlife Areas, and National Marine Conservation Areas. [43] The MPA program is notable because it is significant in protecting ecosystems from the effects of industrial activities. The MPA guiding principles are integrated management, an ecosystem-based management approach, an adaptive management approach, the precautionary principle, and a flexible management approach. [43] All five guiding principles are used collectively and simultaneously: to collaborate with and respect the legislative mandates of individual departments, to use scientific knowledge and traditional ecological knowledge (TEK) to manage human activities, to monitor and report on programs to meet the conservation objectives of MPAs, to use the best available information in the absence of scientific certainty, and to maintain a balance between conservation needs and sustainable development objectives. [43]

In 2021, the government of Canada officially added plastic manufactured items to a list of toxic substances under the Canadian Environmental Protection Act, 1999. [44]
Later in 2021, the government moved forward on regulations to ban single-use plastics, namely checkout bags, cutlery, foodservice ware made from or containing problematic plastics, ring carriers, stir sticks and most straws. [45]

Pollution is associated with numerous negative health effects in humans. Air pollution has been shown to negatively affect humans' cardiovascular and respiratory systems. [46] Lung tissue can be damaged by direct exposure to air pollutants such as ozone, potentially causing lung inflammation and impairment of lung function. [46] As Environment Canada notes, impacts from exposure can range from "minor breathing problems to premature death". [47] Some of the main respiratory diseases caused by air pollution include asthma, chronic obstructive pulmonary disease, and lung cancer. [48] Specific cardiovascular diseases and problems caused by air pollution include heart attack, hypertension, inflammation around the heart, stroke and arrhythmias. [49] Health Canada estimates that 5,900 Canadians die every year from air pollution. [50] A 2008 study by the Canadian Medical Association estimated that almost 3,000 Canadians die annually from short-term exposure to air pollution, while another 18,000 die annually due to long-term effects of polluted air. The study estimated the economic impact of air pollution at $8 billion, including lost productivity, health care costs, deaths and a decrease in quality of life. [51] Soil pollution also causes numerous diseases, some of the most prominent being cancer, kidney disease, liver disease, dysentery, skin infections, and stomach infections. [52]
https://en.wikipedia.org/wiki/Soil_pollution_in_Canada
Soil contamination, soil pollution, or land pollution as a part of land degradation is caused by the presence of xenobiotic (human-made) chemicals or other alterations in the natural soil environment. It is typically caused by industrial activity, agricultural chemicals or improper disposal of waste. The most common chemicals involved are petroleum hydrocarbons, polynuclear aromatic hydrocarbons (such as naphthalene and benzo(a)pyrene), solvents, pesticides, lead, and other heavy metals. [1] Contamination is correlated with the degree of industrialization and the intensity of chemical usage.

The concern over soil contamination stems primarily from health risks, whether from direct contact with the contaminated soil, vapour from the contaminants, or secondary contamination of water supplies within and underlying the soil. [2] Mapping of contaminated soil sites and the resulting clean-ups are time-consuming and expensive tasks, requiring expertise in geology, hydrology, chemistry, computer modelling, and GIS in Environmental Contamination, as well as an appreciation of the history of industrial chemistry. [3]

In North America and South-Western Europe, the extent of contaminated land is best known, as many of the countries in these areas have a legal framework to identify and deal with this environmental problem. Developing countries tend to be less tightly regulated despite some of them having undergone significant industrialization. Soil pollution has many causes; among the most common chemicals involved are petroleum hydrocarbons, solvents, pesticides, lead, and other heavy metals. Any activity that leads to other forms of soil degradation (erosion, compaction, etc.) may indirectly worsen the contamination effects, in that soil remediation becomes more tedious.

Historical deposition of coal ash used for residential, commercial, and industrial heating, as well as for industrial processes such as ore smelting, was a common source of contamination in areas that were industrialized before about 1960. Coal naturally concentrates lead and zinc during its formation, as well as other heavy metals to a lesser degree. When the coal is burned, most of these metals become concentrated in the ash (the principal exception being mercury). Coal ash and slag may contain sufficient lead to qualify as a "characteristic hazardous waste", defined in the US as containing more than 5 mg/L of extractable lead using the TCLP procedure. In addition to lead, coal ash typically contains variable but significant concentrations of polynuclear aromatic hydrocarbons (PAHs; e.g., benzo(a)anthracene, benzo(b)fluoranthene, benzo(k)fluoranthene, benzo(a)pyrene, indeno(cd)pyrene, phenanthrene, anthracene, and others). These PAHs are known human carcinogens, and the acceptable concentrations of them in soil are typically around 1 mg/kg. Coal ash and slag can be recognised by the presence of off-white grains in soil, grey heterogeneous soil, or (for coal slag) bubbly, vesicular pebble-sized grains.
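The two thresholds just quoted (5 mg/L of TCLP-extractable lead for a characteristic hazardous waste, and roughly 1 mg/kg for individual PAHs in soil) lend themselves to a simple screening check. The function below is a hypothetical illustration with made-up analytical results, not a regulatory tool.

```python
# Hypothetical screening of soil/ash analyses against the thresholds
# quoted in the text: TCLP lead > 5 mg/L -> characteristic hazardous
# waste (US); individual PAHs ~1 mg/kg as a typical soil guideline.

TCLP_LEAD_LIMIT_MG_L = 5.0
PAH_SOIL_GUIDELINE_MG_KG = 1.0

def screen_sample(tclp_lead_mg_l: float, pahs_mg_kg: dict) -> list:
    """Return a list of human-readable exceedance notes."""
    notes = []
    if tclp_lead_mg_l > TCLP_LEAD_LIMIT_MG_L:
        notes.append(f"TCLP lead {tclp_lead_mg_l} mg/L exceeds "
                     f"{TCLP_LEAD_LIMIT_MG_L} mg/L")
    for name, conc in pahs_mg_kg.items():
        if conc > PAH_SOIL_GUIDELINE_MG_KG:
            notes.append(f"{name} {conc} mg/kg exceeds "
                         f"{PAH_SOIL_GUIDELINE_MG_KG} mg/kg")
    return notes

# Example with made-up analytical results for a coal-ash-affected soil:
print(screen_sample(6.2, {"benzo(a)pyrene": 2.4, "phenanthrene": 0.6}))
```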
Treated sewage sludge, known in the industry as biosolids, has become controversial as a "fertilizer". As it is the byproduct of sewage treatment, it generally contains more contaminants, such as organisms, pesticides, and heavy metals, than other soil. [4] In the European Union, the Urban Waste Water Treatment Directive allows sewage sludge to be sprayed onto land. The volume was expected to double to 185,000 tons of dry solids in 2005. Sludge has good agricultural properties due to its high nitrogen and phosphate content. In 1990/1991, 13% wet weight was sprayed onto 0.13% of the land; however, this was expected to rise 15-fold by 2005. Advocates say there is a need to control this, so that pathogenic microorganisms do not get into water courses and to ensure that there is no accumulation of heavy metals in the top soil. [5]

A pesticide is a substance used to kill a pest. A pesticide may be a chemical substance, biological agent (such as a virus or bacterium), antimicrobial, disinfectant or device used against any pest. Pests include insects, plant pathogens, weeds, mollusks, birds, mammals, fish, nematodes (roundworms) and microbes that compete with humans for food, destroy property, spread or are a vector for disease, or cause a nuisance. Although there are benefits to the use of pesticides, there are also drawbacks, such as potential toxicity to humans and other organisms. [6][7]

Herbicides are used to kill weeds, especially on pavements and railways. They are similar to auxins, and most are biodegradable by soil bacteria. However, one group, the chlorophenoxy herbicides 2,4-D and 2,4,5-T, can carry the impurity dioxin, which is very toxic and causes fatalities even at low concentrations. Another herbicide is paraquat: it is highly toxic, but it rapidly degrades in soil due to the action of bacteria and does not kill soil fauna. [8]

Insecticides are used to rid farms of pests which damage crops. The insects damage not only standing crops but also stored ones, and in the tropics it is reckoned that one third of total production is lost during food storage. As with fungicides, the first insecticides used in the nineteenth century were inorganic, e.g. Paris Green and other compounds of arsenic; nicotine has also been used since 1690. [9] There are now two main groups of synthetic insecticides:

1. Organochlorines, including DDT, Aldrin, Dieldrin and BHC. They are cheap to produce, potent and persistent. DDT was used on a massive scale from the 1930s, with a peak of 72,000 tonnes used in 1970. Then usage fell as the harmful environmental effects were realized. It was found worldwide in fish and birds, and was even discovered in the snow in the Antarctic. It is only slightly soluble in water but is very soluble in the bloodstream. It affects the nervous and endocrine systems and causes the eggshells of birds to lack calcium, making them easily breakable. It is thought to be responsible for the decline in the numbers of birds of prey like ospreys and peregrine falcons in the 1950s; they are now recovering. [10] As well as increased concentration via the food chain, it is known to enter via permeable membranes, so fish get it through their gills. As it has low water solubility, it tends to stay at the water surface, so organisms that live there are most affected. DDT found in fish that formed part of the human food chain caused concern, but the levels found in liver, kidney and brain tissues were less than 1 ppm, and in fat 10 ppm, which was below the level likely to cause harm. However, DDT was banned in the UK and the United States to stop its further buildup in the food chain. U.S. manufacturers continued to sell DDT to developing countries, which could not afford the expensive replacement chemicals and did not have such stringent regulations governing the use of pesticides. [11]

2. Organophosphates, e.g.
parathion, methyl parathion and about 40 other insecticides are available nationally. Parathion is highly toxic, methyl parathion less so, and Malathion is generally considered safe, as it has low toxicity and is rapidly broken down in the mammalian liver. This group works by preventing normal nerve transmission: cholinesterase is prevented from breaking down the transmitter substance acetylcholine, resulting in uncontrolled muscle movements. [12]

The disposal of munitions, and a lack of care in the manufacture of munitions caused by the urgency of production, can contaminate soil for extended periods. There is little published evidence on this type of contamination, largely because of restrictions placed by the governments of many countries on the publication of material related to the war effort. However, mustard gas stored during World War II has contaminated some sites for up to 50 years, [13] and the testing of anthrax as a potential biological weapon contaminated the whole island of Gruinard. [14]

Contaminated or polluted soil directly affects human health through direct contact with soil or via inhalation of soil contaminants that have vaporized; potentially greater threats are posed by the infiltration of soil contamination into groundwater aquifers used for human consumption, sometimes in areas far removed from any apparent source of above-ground contamination. Toxic metals can also make their way up the food chain through plants that grow in soils containing high concentrations of heavy metals. [15] This tends to result in the development of pollution-related diseases. Most exposure is accidental and can happen through several pathways, such as ingestion, inhalation, and dermal contact; [16] indeed, some studies estimate that 90% of exposure is through eating contaminated food. [16]

Health consequences from exposure to soil contamination vary greatly depending on pollutant type, the pathway of attack, and the vulnerability of the exposed population. Researchers suggest that pesticides and heavy metals in soil may harm cardiovascular health, including causing inflammation and changes in the body's internal clock. [17] Chronic exposure to chromium, lead, and other metals, petroleum, solvents, and many pesticide and herbicide formulations can be carcinogenic, can cause congenital disorders, or can cause other chronic health conditions. Industrial or human-made concentrations of naturally occurring substances, such as nitrate and ammonia associated with livestock manure from agricultural operations, have also been identified as health hazards in soil and groundwater.

Chronic exposure to benzene at sufficient concentrations is known to be associated with a higher incidence of leukemia. Mercury and cyclodienes are known to induce higher incidences of kidney damage and some irreversible diseases. PCBs and cyclodienes are linked to liver toxicity. Organophosphates and carbamates can cause a chain of responses leading to neuromuscular blockage. Many chlorinated solvents induce liver changes, kidney changes, and depression of the central nervous system. There is an entire spectrum of further health effects, such as headache, nausea, fatigue, eye irritation and skin rash, for the above-cited and other chemicals. At sufficient dosages a large number of soil contaminants can cause death by exposure via direct contact, inhalation, or ingestion of contaminants in groundwater contaminated through soil.
The Scottish Government has commissioned the Institute of Occupational Medicine to undertake a review of methods to assess the risk to human health from contaminated land. The overall aim of the project is to develop guidance that should be useful to Scottish local authorities in assessing whether sites represent a significant possibility of significant harm (SPOSH) to human health. It is envisaged that the output of the project will be a short document providing high-level guidance on health risk assessment, with reference to existing published guidance and methodologies that have been identified as particularly relevant and helpful. The project will examine how policy guidelines have been developed for determining the acceptability of risks to human health, and will propose an approach for assessing what constitutes unacceptable risk, in line with the criteria for SPOSH as defined in the legislation and the Scottish Statutory Guidance.

Not unexpectedly, soil contaminants can have significant deleterious consequences for ecosystems. [18] Radical soil chemistry changes can arise from the presence of many hazardous chemicals, even at low concentrations of the contaminant species. These changes can manifest as alterations in the metabolism of endemic microorganisms and arthropods resident in a given soil environment. The result can be the virtual eradication of some of the primary food chain, which in turn can have major consequences for predator or consumer species. Even if the chemical effect on lower life forms is small, the lower pyramid levels of the food chain may ingest alien chemicals, which normally become more concentrated at each consuming rung of the food chain. Many of these effects are now well known, such as the concentration of persistent DDT materials in avian consumers, leading to weakening of egg shells, increased chick mortality and potential extinction of species. [19]

Effects also occur in agricultural lands that have certain types of soil contamination. Contaminants typically alter plant metabolism, often causing a reduction in crop yields. This has a secondary effect upon soil conservation, since the languishing crops cannot shield the Earth's soil from erosion. Some of these chemical contaminants have long half-lives, and in other cases derivative chemicals are formed from the decay of primary soil contaminants. [20] Heavy metals and other soil contaminants can adversely affect the activity, species composition and abundance of soil microorganisms, thereby threatening soil functions such as the biochemical cycling of carbon and nitrogen. [21] However, soil contaminants can also become less bioavailable over time, and microorganisms and ecosystems can adapt to altered conditions. Soil properties such as pH, organic matter content and texture are very important, as they modify the mobility, bioavailability and toxicity of pollutants in contaminated soils. [22] The same amount of contaminant can be toxic in one soil but totally harmless in another; this stresses the need for soil-specific risk assessments and measures.

Cleanup or environmental remediation is analyzed by environmental scientists, who use field measurements of soil chemicals and also apply computer models (GIS in Environmental Contamination) to analyze the transport [23] and fate of soil chemicals.
Various technologies have been developed for the remediation of oil-contaminated soil and sediments. [24] There are several principal strategies for remediation, including excavation and off-site disposal, containment, and in-situ treatments such as bioremediation. Various national standards for concentrations of particular contaminants include the United States EPA Region 9 Preliminary Remediation Goals (U.S. PRGs), the U.S. EPA Region 3 Risk Based Concentrations (U.S. EPA RBCs) and the National Environment Protection Council of Australia Guideline on Investigation Levels in Soil and Groundwater.

The immense and sustained growth of the People's Republic of China since the 1970s has exacted a price from the land in increased soil pollution. The Ministry of Ecology and Environment believes it to be a threat to the environment, to food safety and to sustainable agriculture. According to a scientific sampling, 150 million mu (100,000 square kilometres) of China's cultivated land has been polluted, with contaminated water being used to irrigate a further 32.5 million mu (21,670 square kilometres) and another 2 million mu (1,300 square kilometres) covered or destroyed by solid waste. In total, the area accounts for one-tenth of China's cultivatable land and is mostly in economically developed areas. An estimated 12 million tonnes of grain are contaminated by heavy metals every year, causing direct losses of 20 billion yuan (US$2.57 billion). [27] A recent survey shows that 19% of agricultural soils are contaminated with heavy metals and metalloids, and that concentrations of these heavy metals in the soil have increased dramatically. [28]
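The Chinese area figures above mix mu and square kilometres; since 1 mu is 666 2/3 m² (the standard modern definition of the unit), the quoted conversions are easy to verify:

```python
MU_IN_M2 = 2000.0 / 3.0            # one Chinese mu in square metres

def mu_to_km2(mu: float) -> float:
    """Convert an area in mu to square kilometres."""
    return mu * MU_IN_M2 / 1.0e6

print(mu_to_km2(150e6))   # 100000.0 km^2 of polluted cultivated land
print(mu_to_km2(32.5e6))  # ~21667 km^2 irrigated with contaminated water
print(mu_to_km2(2e6))     # ~1333 km^2 covered or destroyed by solid waste
```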
Three sets of CLEA SGVs have been produced, one for each of three different land uses. It is intended that the SGVs replace the former ICRCL values. The CLEA SGVs relate to assessing chronic (long-term) risks to human health and do not apply to the protection of ground workers during construction, or to other potential receptors such as groundwater, buildings, plants or other ecosystems. The CLEA SGVs are not directly applicable to a site completely covered in hardstanding, as there is no direct exposure route to contaminated soils. [ 33 ] To date, the first ten of fifty-five contaminant SGVs have been published, for the following: arsenic, cadmium , chromium, lead, inorganic mercury, nickel, selenium, ethyl benzene, phenol and toluene. Draft SGVs for benzene, naphthalene and xylene have been produced, but their publication is on hold. Toxicological data (Tox) has been published for each of these contaminants as well as for benzo[a]pyrene, benzene, dioxins, furans and dioxin-like PCBs, naphthalene, vinyl chloride, 1,1,2,2-tetrachloroethane and 1,1,1,2-tetrachloroethane, 1,1,1-trichloroethane, tetrachloroethene, carbon tetrachloride, 1,2-dichloroethane, trichloroethene and xylene. The SGVs for ethyl benzene, phenol and toluene depend on the soil organic matter (SOM) content (which can be calculated from the total organic carbon (TOC) content). As an initial screen, the SGVs for 1% SOM are considered appropriate. [ 34 ]

As of February 2021, there are a total of more than 2,500 contaminated sites in Canada . [ 35 ] One infamous contaminated site is located near a nickel-copper smelting site in Sudbury, Ontario . A study investigating heavy metal pollution in the vicinity of the smelter revealed elevated levels of nickel and copper in the soil, with values as high as 5,104 ppm Ni and 2,892 ppm Cu within 1.1 km of the smelter. Other metals were also found in the soil, including iron, cobalt, and silver. Furthermore, examination of the vegetation surrounding the smelter made it evident that the plants too had been affected; the results show that they contained nickel, copper and aluminium as a result of soil contamination. [ 36 ]

In March 2009, the issue of uranium poisoning in Punjab attracted press coverage. It was alleged to be caused by fly ash ponds of thermal power stations, which reportedly led to severe birth defects in children in the Faridkot and Bhatinda districts of Punjab . The news reports claimed the uranium levels were more than 60 times the maximum safe limit. [ 37 ] [ 38 ] In 2012, the Government of India confirmed [ 39 ] that the ground water in the Malwa belt of Punjab contains uranium at levels 50% above the trace limits set by the United Nations' World Health Organization (WHO). Scientific studies, based on over 1000 samples from various sampling points, could not trace the source to fly ash or to any sources from thermal power plants or industry, as originally alleged. The studies also revealed that the uranium concentration in the ground water of Malwa district is not 60 times the WHO limits, but only 50% above the WHO limit in 3 locations. The highest concentration found in samples was less than concentrations found naturally in ground waters currently used for human purposes elsewhere, such as in Finland . [ 40 ] Research is underway to identify natural or other sources of the uranium.
https://en.wikipedia.org/wiki/Soil_pollution_in_India
There is a range of environmental issues in Southern Africa , such as climate change , land , water , deforestation , land degradation , and pollution . The Southern Africa region itself, except for South Africa , [ 1 ] produces comparatively little carbon emissions but is a recipient of climate change impacts characterized by changes in precipitation, extreme weather events and hot temperatures. In attempting to keep up with the developing world and to meet the high demands of its growing population , Southern Africa has exhausted many of its resources, resulting in severe environmental damage. Southern Africa's timber and produce are the core of its economy, and the region has become dependent on these resources. The continuous depletion and improper treatment of its natural resources have led Southern Africa to its current state.

Southern Africa consists of countries such as Angola , Botswana , Eswatini , Lesotho , Malawi , Mozambique , Namibia , South Africa , Zambia , and Zimbabwe . Lesotho is entirely surrounded by South Africa (it sits in the middle of South Africa). Some environmental issues that affect Southern Africa are water pollution , air pollution , land degradation , solid waste pollution, and deforestation . The environmental damage affects not only the population's health , but also the species that live in the area, while also contributing to the worldwide issue of climate change .

One of Southern Africa's biggest issues is the lack of clean water . According to The United Nations Convention on Climate Change on South Africa in 2000, the water around Africa is unevenly distributed, such that 60% of the water is situated in only 20% of the land. [ 2 ] Less than 10% of Southern Africa's surface water is accessible, [ 3 ] and because a majority of its groundwater lies under large rock formations, groundwater is difficult to access as well. [ 2 ] Climate change and its attendant effects on temperature and precipitation may have an additional impact. Many Africans are moving to rural areas , adding to the already high demand for clean water, [ 4 ] [ failed verification ] and while demand is growing drastically, freshwater supplies remain limited. Adding to the high demand, the level of Durban ’s dam has decreased by 20% since 2010, [ 5 ] and up to 30% of the water has either been stolen or given away illegally through international trading. [ 4 ] “A review of water availability in 1996 estimated that the total average annual surface runoff was 150 million cubic metres, the maximum potential annual system yield was 33 290 million cubic meters, and total water annual requirements were 20 045 million cubic metres. Water requirements could increase by about 50% by 2030 (Department of Water and Forestry, 2000).” [ 2 ]

Although South Africa has some of the best, cleanest water of all the countries in Southern Africa, many people don't have access to basic sanitation. [ 6 ] A majority of Southern Africa's accessible water is unclean, allowing water-transmitted diseases to take hold. Water-borne diseases such as Hepatitis A and Hepatitis E increase, while some of the water becomes so unclean that diseases such as Typhoid fever , Leptospirosis , Schistosomiasis , and Bilharzia are transmitted through water contact. [ 7 ] As the population moving to urbanized areas increases, the demand for food supply also grows. As a means of keeping up with these high demands, the use of fertilizers and the extent of sewage contamination also climb.
Chemicals found in fertilizers and sewage wastes can cause diseases and are harmful to other species in the environment. These may cause illnesses such as diarrhea , hayfever , skin rashes , vomiting , fevers , gastroenteritis , muscle and joint pains, and eye irritations. [ 8 ]

South Africa is situated at the very tip of Southern Africa. This location makes South Africa very vulnerable to oil spills . Large volumes of oil are transported along the coast from the Middle East to Europe and America, putting Southern Africa's water and ecosystems at risk of severe damage. [ 9 ]

Coal mining is one of Southern Africa's main energy sources, but it has a huge negative impact on the land's water, air and soil quality . Acid mine drainage is the result of the excess coal mining that occurs. Sulphuric acid is released from coal mining, and although the acid-generating process is slow, the time it takes for the acid to neutralize is equally slow. When clean excess water is released from the rock masses broken through mining, it mixes with the sulphuric acid, causing the water to become toxic. This toxic, contaminated water kills plants and animals, while also dissolving aluminium and heavy minerals found in clean water (increasing the toxicity level). Although rocks containing calcium carbonate are able to neutralize the acidic water, Southern Africa does not have rocks containing these minerals. [ 10 ]

Southern Africa experiences poor ambient and indoor air quality . [ 11 ] In this developing region , low-grade fuels are used to meet high demands for food and energy . [ 3 ] During the winter, pollutants are trapped in the air by high pressure and are unable to move or dissipate. In the summer, under low pressure, pollutants are dissipated through unstable circulation. Many women also cook indoors with fossil fuels, which is a main cause of health problems in women and children. 75.2% of Southern Africa's energy comes from Highveld areas, [ 3 ] where 5 of its 10 Eskom power stations are among the largest in the world. [ 12 ] The Highveld lies well above sea level, making the oxygen level 20% lower than the oxygen level at the coast. This results in incomplete combustion of fossil fuels [ 12 ] and a severe nocturnal temperature inversion, which results in smoke being trapped in the air. [ 10 ] 860 tons of SO 2 are produced by 3 of the main power stations (Matla, Duvha and Arnot), [ 12 ] “which exceeds the World Health Organisation’s (WHO) [exposure to particulate matter] standards of 180 mg.m-3 by 6 to 7 times during winter months (Annegarn et al. 1996 a,b)”. [ 3 ] This high concentration of air pollution surrounds the area, making it very dangerous to one's health.

With the increase in population and in the number of people moving to urbanized areas , the amount of solid waste produced is increasing. South Africa's Department of Environmental Affairs and Tourism estimates that over half of the population of South Africa lacks "adequate" solid waste treatment; [ 3 ] instead, waste is often dumped, buried or burned. [ 13 ]

With the decrease in water and the high demands of agriculture , Southern Africa's land is becoming less fertile . Climate change is also causing an increase in water evaporation from the soil, making it very difficult to grow produce in Southern Africa .
Africa itself is located in an area where the climate is unpredictable, making it vulnerable to climate change, and since Southern Africa is semi-arid , it is at risk of desertification . Desertification causes an increase in soil erosion , making it difficult for plants to grow. This will lead to unsustainable food production and endanger Southern Africa's wildlife. [ 14 ] Over time, soil erosion will result in the spread of alien plants. Alien plants threaten indigenous plants and reduce grazing areas, which contributes to soil erosion . [ 15 ] Southern Africa's land is already over-cropped and over-grazed as a result of Africa's unevenly distributed lands. With the combination of alien plants and the exhaustion of its lands, Southern Africa's degraded land is beyond repair. [ 16 ] Many countries use irrigation as a way to prevent desertification and droughts. Unfortunately, only 4% of Sub-Saharan Africa is equipped for irrigation. [ 17 ] With the decrease in rainfall and the lack of irrigation, Southern Africa's land and soil will soon become arid.
https://en.wikipedia.org/wiki/Soil_pollution_in_Southern_Africa
Soil quality refers to the condition of soil based on its capacity to perform ecosystem services that meet the needs of human and non-human life. [ 1 ] [ 2 ] [ 3 ] [ 4 ] Soil quality reflects how well a soil performs the functions of maintaining biodiversity and productivity, partitioning water and solute flow, filtering and buffering, nutrient cycling , and providing support for plants and other structures. Soil management has a major impact on soil quality. Soil quality relates to soil functions . Unlike water or air, for which established standards have been set, soil quality is difficult to define or quantify. Soil quality can be evaluated using the Soil Management Assessment Framework. [ 5 ] In agricultural terms, soil quality is measured in Germany on a scale of soil value ( Bodenwertzahl ). [ 6 ]

Soil quality is primarily measured by chemical, physical, and biological indicators, because soil function cannot easily be measured directly. [ 7 ] Each of these categories comprises several indicators that provide insight into overall soil quality. There are very few soil quality monitoring systems that can provide near real-time information on these indicators, and almost all such systems are still at the research stage. [ 8 ] The physical category of soil quality indicators consists of tests that measure soil texture, bulk density, porosity, water content at saturation, aggregate stability, penetration resistance, and more. [ 9 ] These measures provide hydrological information, such as the rate of water infiltration and water availability to plants. Chemical indicators include pH and nutrient levels. [ 10 ] A typical soil test only evaluates chemical soil properties. [ 7 ] Biological measures include the diversity of soil organisms and fungi. The movement and biological functions of soil organisms (including earthworms, millipedes, centipedes, ants, and spiders) affect soil processes such as the regulation of soil structure, degradation of contaminants, and nutrient cycling. [ 11 ]
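Frameworks such as the Soil Management Assessment Framework mentioned above work by scoring individual indicators on curves and combining them into an index. As a toy illustration only, the sketch below normalizes three hypothetical indicator readings with simple "more is better" and "optimum range" scoring functions and averages them; the curves, thresholds and equal weights are invented for the example and are not SMAF's calibrated values.

```python
# Toy illustration of indicator scoring in the spirit of frameworks like SMAF.
# Scoring curves, thresholds and weights here are invented for the example;
# real assessments use curves calibrated per indicator, soil type and climate.

def more_is_better(value, worst, best):
    """Linear 0-1 score rising from `worst` to `best` (e.g. microbial biomass)."""
    return max(0.0, min(1.0, (value - worst) / (best - worst)))

def optimum_range(value, low, high, tol):
    """Score of 1 inside [low, high], falling off linearly within `tol` outside."""
    if low <= value <= high:
        return 1.0
    distance = (low - value) if value < low else (value - high)
    return max(0.0, 1.0 - distance / tol)

# Hypothetical field readings: one physical, one chemical, one biological indicator.
scores = {
    "bulk density (g/cm^3)": optimum_range(1.3, 1.1, 1.4, 0.4),   # physical
    "pH": optimum_range(6.2, 6.0, 7.0, 2.0),                      # chemical
    "microbial biomass (mg C/kg)": more_is_better(350, 0, 600),   # biological
}

index = sum(scores.values()) / len(scores)   # unweighted mean for simplicity
for name, score in scores.items():
    print(f"{name}: {score:.2f}")
print(f"Soil quality index (0-1): {index:.2f}")
```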
https://en.wikipedia.org/wiki/Soil_quality
Soil regeneration , as a particular form of ecological regeneration within the field of restoration ecology , is creating new soil and rejuvenating soil health by minimizing the loss of topsoil , retaining more carbon than is depleted, boosting biodiversity , and maintaining proper water and nutrient cycling. [ 1 ] This has many benefits, such as soil sequestration of carbon in response to the growing threat of climate change , [ 2 ] [ 3 ] a reduced risk of soil erosion , [ 3 ] and increased overall soil resilience . [ 1 ]

Soil quality means the ability of the soil to "perform its functions". [ 4 ] Soil is integral to a variety of ecosystem services, including food, animal feed, and fiber production, climate moderation, waste disposal, water filtration, elemental cycling, [ 1 ] and much more. Soil is composed of organic matter (decomposing plants, animals, and microbes), biomass (living plants, animals, and microbes), water, air, minerals (sand, silt, and clay), and nutrients (nitrogen, carbon, phosphorus). [ 4 ] For optimal plant growth , a proper carbon-to-nitrogen ratio of 20–30:1 must be maintained. [ 3 ]

Promoting biodiversity is key to maintaining healthy soil. [ 5 ] This can be done by growing a variety of plants, always keeping soil covered, maintaining a living root system, and minimizing soil disturbance. [ 5 ] Macro- and microorganisms assist with processes such as decomposition , nutrient cycling, disease suppression, and moderating CO 2 in the atmosphere . [ 1 ] Plants have a particularly symbiotic relationship with microbes in the rhizosphere of the soil. [ 5 ] The rhizosphere is an "area of concentrated microbial activity close to the root" where water and nutrients are readily available. [ 5 ] Plants exchange carbohydrates for nutrients excreted by the microbes; different carbohydrates support different microbes. [ 5 ] Dead plants and other organic matter also feed the variety of organisms in the soil. [ 5 ] Organisms like earthworms and termites are examples of macroorganisms in the soil. [ 1 ] A good indication of quality soil is a lack of pests and diseases; [ 1 ] low biodiversity increases the risk of pests and diseases. [ 5 ]

Having too much or too little of any of the components of soil can cause soil degradation . For example, a high clay content reduces aeration and water permeability , [ 3 ] and though phosphorus and nitrogen are essential for plant growth, they are toxic in high amounts. [ 3 ] Soil degradation means that soil quality has diminished, causing ecosystem functions to decline. [ 1 ] One third of the globe's land has degraded soil, [ 1 ] especially in the tropics and subtropics, with around 500 million hectares affected. [ 1 ] Soil degradation occurs due to physical, chemical, and biological forces, [ 5 ] which can be natural or anthropogenic . [ 5 ] [ 1 ] Tilling is a physical example, causing erosion, compaction , and decreased microbial activity. [ 5 ] Erosion is "one of the most serious problems facing urban soil quality", [ 4 ] and the problem is exacerbated by uncovered soil. [ 4 ] Compaction occurs when soil is pushed together and becomes harder, so its ability to retain air and water is diminished. [ 4 ] This increases erosion and flooding, diminishes the ability of plants to grow good root systems, and reduces biological diversity. [ 4 ] Overgrazing is another example, in which the root system beneath the soil is damaged, reducing water permeability.
[ 5 ] Acidification , salinization , nutrient leaching , and toxin contamination are a few types of chemical degradation. [ 1 ] Toxins can accumulate in the soil from industrial processes like mining and waste management . [ 3 ] Biological examples include biodiversity loss , the emission of greenhouse gasses , reduced carbon content, and a reduced capacity to sequester carbon. [ 1 ] One of the most predictable ways to determine whether soil degradation has occurred is to measure the soil's organic carbon content. [ 1 ] The soil organic carbon pool is extremely important for soil fertility . [ 1 ]

There is a significant connection between the carbon cycle and climate change . [ 6 ] Most greenhouse gases are primarily composed of carbon, and they form a barrier in the troposphere that keeps warm air heated by the sun from leaving the atmosphere. According to the Intergovernmental Panel on Climate Change , greenhouse gasses produced by human activity are the most significant cause of global climate change since the 1950s. [ 7 ] Without human interaction, carbon is removed from and reintroduced to soil through a variety of ecosystem processes known as the carbon cycle . Humans have been significantly influencing the global carbon cycle since the Industrial Revolution through various means, such as transportation and agriculture . Through these actions, most of this carbon has moved in one direction, from the lithosphere and biosphere to the atmosphere. By means of fossil fuels and intensive farming , much of the natural carbon in the Earth's pedosphere has been released into the atmosphere, contributing to greenhouse gasses.

There are many ways to regenerate soil and improve soil quality, such as land management by conservation agriculture . Agriculture is one of the main factors in the depletion of soil richness. [ 8 ] As one historical review put it, "Accelerated soil erosion has plagued the earth since the dawn of settled agriculture, and has been a major issue in the rise and fall of early civilization." [ 9 ] Certain agricultural practices can deplete the soil of carbon, such as monoculture , [ 10 ] where only one type of crop is harvested in a field season after season. This depletes nutrients from the soil because each type of plant has a specific set of nutrients that it requires to grow or that it can fix back into the soil. With a lack of plant diversity, only certain nutrients will be absorbed, and over time these nutrients will be depleted from the soil.

Agroecology is an overarching category of approaches to creating a more sustainable agricultural system and increasing soil health. These conservation agricultural practices utilize many techniques and resources to maintain healthy soil. Some examples are cover cropping , crop rotation , reducing soil disturbance, retaining mulch , and integrated nutrient management . [ 1 ] These practices have many benefits, including increased carbon sequestration and reduced use of fossil fuels. [ 1 ] Permaculture (from "permanent" and "agriculture") is a type of conservation agriculture: a systems-thinking approach that seeks to increase the carbon content of soil by utilizing natural patterns and processes. There is a strong emphasis on knowledge of plants, animals, and natural cycles to promote high-efficiency food production, decrease reliance on human involvement, and create a sustainable and resilient ecosystem.
This can be accomplished through intentional landscaping that increases the efficiency of capturing rainfall into the system, or by placing nitrogen-fixing plants, such as legumes , near nitrogen-demanding plants. [ 3 ] Utilizing the interconnections of various plants, animals, and processes is a key practice in permaculture. Native plants should be used whenever possible, [ 3 ] as their roots help water infiltrate deep into the soil. [ 4 ] Holistic management stems from the work of Allan Savory , who observes that planned grazing can improve soil health and reverse the effects of desertification by increasing biomass. Researchers dispute the desertification claim. [ 11 ] [ 12 ]

There are also many kinds of soil amendments , both organic and inorganic. [ 3 ] They promote soil quality in a variety of ways, such as sequestering toxins, balancing the pH of the soil, adding nutrients, and promoting the activity of organisms. [ 3 ] The current condition of the soil will determine which type of amendment to use and how much. [ 3 ] Inorganic amendments are generally used for purposes like improving the texture and structure of the soil, balancing the pH, and limiting the bioavailability of heavy metal toxins. [ 3 ] There are two types of inorganic amendments: alkaline and mineral. Examples of alkaline amendments include wood ash, ground limestone, and red mud; [ 13 ] mineral amendments include gypsum and dredged materials. [ 3 ]

Organic amendments improve biological activity, water permeability, and soil structure. [ 4 ] Mulch , for example, reduces erosion and helps to maintain the temperature of the soil. [ 3 ] Compost is rich in organic matter; [ 4 ] it is composed of decomposed matter such as food, vegetation, and animal wastes. [ 3 ] Adding compost increases the moisture and nutrient content of the soil and promotes biological activity. Creating compost requires careful management of temperature, the carbon-to-nitrogen ratio, water, and air (a small worked example of the ratio arithmetic follows below). [ 3 ] Biochar is an amendment that is full of carbon and is created by pyrolysis , a high-temperature decomposition process. [ 1 ] Wastes from animals, usually their manure , are common soil amendments; the moisture and nutrient content will vary depending on the animal from which the manure came. [ 3 ] Human wastes can also be used, such as the byproduct biosolids from wastewater facilities. Biosolids can be high in nutrient content, so they should be used sparingly. [ 3 ]
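To make the carbon-to-nitrogen management concrete, here is a minimal sketch of the standard blending arithmetic for compost feedstocks. The 20–30:1 target band comes from the text above; the feedstock masses, carbon fractions and C:N ratios are typical rough figures chosen for illustration, not measured data.

```python
# Minimal compost C:N blending arithmetic. Feedstock values below are typical
# rough figures (dry-mass basis) used for illustration, not measured data.

feedstocks = {
    # name: (dry mass in kg, carbon mass fraction, C:N ratio)
    "dry leaves":      (10.0, 0.45, 60.0),   # carbon-rich "browns"
    "grass clippings": (8.0,  0.40, 20.0),   # nitrogen-rich "greens"
}

# Total carbon is mass * carbon fraction; total nitrogen is carbon / C:N ratio.
total_c = sum(mass * c_frac for mass, c_frac, _ in feedstocks.values())
total_n = sum(mass * c_frac / ratio for mass, c_frac, ratio in feedstocks.values())

blend_ratio = total_c / total_n
print(f"Blended C:N ratio = {blend_ratio:.1f}:1")
# A result near the 20-30:1 band cited in the text composts well; add more
# "greens" if the ratio is too high, more "browns" if it is too low.
```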
https://en.wikipedia.org/wiki/Soil_regeneration
Soil respiration refers to the production of carbon dioxide when soil organisms respire. This includes respiration by plant roots , the rhizosphere , microbes and fauna . Soil respiration is a key ecosystem process that releases carbon from the soil in the form of CO 2 . CO 2 is acquired by plants from the atmosphere and converted into organic compounds in the process of photosynthesis . Plants use these organic compounds to build structural components or respire them to release energy. When plant respiration occurs below ground in the roots, it adds to soil respiration. Over time, plant structural components are consumed by heterotrophs . This heterotrophic consumption releases CO 2 , and when this CO 2 is released by below-ground organisms, it is considered soil respiration.

The amount of soil respiration that occurs in an ecosystem is controlled by several factors. The temperature, moisture, nutrient content and level of oxygen in the soil can produce extremely disparate rates of respiration. These rates can be measured by a variety of methods, and other methods can be used to separate the source components, in this case the type of photosynthetic pathway ( C3 / C4 ) of the respired plant structures. Soil respiration rates can be largely affected by human activity, because humans have been changing the various controlling factors of soil respiration for numerous years. Global climate change is composed of numerous changing factors, including rising atmospheric CO 2 , increasing temperature and shifting precipitation patterns. All of these factors can affect the rate of global soil respiration. Increased nitrogen fertilization by humans also has the potential to affect rates over the entire planet .

Soil respiration and its rate across ecosystems are extremely important to understand, because soil respiration plays a large role in global carbon cycling as well as in other nutrient cycles . The respiration of plant structures releases not only CO 2 but also other nutrients in those structures, such as nitrogen. Soil respiration is also associated with positive feedback on global climate change. Positive feedback is when a change in a system produces a response in the same direction as the change. Therefore, soil respiration rates can be affected by climate change and then respond by enhancing climate change.

All cellular respiration releases energy, water and CO 2 from organic compounds. Any respiration that occurs below ground is considered soil respiration. Respiration by plant roots, bacteria, fungi and soil animals all releases CO 2 in soils, as described below. The tricarboxylic acid (TCA) cycle – or citric acid cycle – is an important step in cellular respiration, in which a six-carbon sugar is oxidized . [ 1 ] This oxidation produces CO 2 and H 2 O from the sugar. Plants, fungi, animals and bacteria all use this cycle to convert organic compounds to energy, and this is how the majority of soil respiration occurs at its most basic level. Since the process relies on oxygen, it is referred to as aerobic respiration.

Fermentation is another process by which cells gain energy from organic compounds. In this metabolic pathway , energy is derived from the carbon compound without the use of oxygen. The products of this reaction are carbon dioxide and usually either ethyl alcohol or lactic acid . [ 2 ] Due to the lack of oxygen, this pathway is described as anaerobic respiration .
This is an important source of CO 2 in soil respiration in waterlogged ecosystems where oxygen is scarce, as in peat bogs and wetlands . However, most CO 2 released from the soil occurs via aerobic respiration, and one of the most important contributions to below-ground respiration comes from plant roots. Plants respire some of the carbon compounds generated by photosynthesis; when this respiration occurs in roots, it adds to soil respiration. Root respiration accounts for approximately half of all soil respiration, though values can range from 10 to 90% depending on the dominant plant types in an ecosystem and the conditions to which the plants are subjected. Thus, the amount of CO 2 produced through root respiration is determined by root biomass and specific root respiration rates. [ 3 ]

Directly next to the root is the area known as the rhizosphere, which also plays an important role in soil respiration. The rhizosphere is the zone immediately next to the root surface and its neighboring soil. In this zone there is close interaction between the plant and microorganisms. Roots continuously release substances, or exudates , into the soil. These exudates include sugars, amino acids , vitamins , long-chain carbohydrates , enzymes and lysates, which are released when root cells break. The amount of carbon lost as exudates varies considerably between plant species; it has been demonstrated that up to 20% of carbon acquired by photosynthesis is released into the soil as root exudates. [ 4 ] These exudates are decomposed primarily by bacteria, which respire the carbon compounds through the TCA cycle; however, fermentation is also present, owing to the lack of oxygen caused by greater oxygen consumption near the root than in the bulk soil (soil at a greater distance from the root). [ 5 ] Other important organisms in the rhizosphere are root-infecting fungi, or mycorrhizae . These fungi increase the surface area of the plant root, allowing the root to encounter and acquire a greater amount of the soil nutrients necessary for plant growth. In return for this benefit, the plant transfers sugars to the fungi. The fungi respire these sugars for energy, thereby increasing soil respiration. [ 6 ]

Fungi, along with bacteria and soil animals, also play a large role in the decomposition of litter and soil organic matter . Soil animals graze on populations of bacteria and fungi, and ingest and break up litter, increasing soil respiration. Microfauna are the smallest soil animals, including nematodes and mites ; this group specializes in feeding on soil bacteria and fungi. By ingesting these organisms, carbon that was initially in plant organic compounds and was incorporated into bacterial and fungal structures is respired by the soil animal. Mesofauna are soil animals from 0.1 to 2 millimeters (0.0039 to 0.0787 in) in length that ingest soil litter. Their fecal material holds a greater amount of moisture and has a greater surface area, allowing renewed attack by microorganisms and a greater amount of soil respiration. Macrofauna are organisms from 2 to 20 millimeters (0.079 to 0.787 in), such as earthworms and termites . Most macrofauna fragment litter, thereby exposing a greater surface area to microbial attack. Other macrofauna burrow or ingest litter, reducing soil bulk density, breaking up soil aggregates and increasing soil aeration and the infiltration of water.
[ 7 ] Regulation of CO 2 production in soil is due to various abiotic , or non-living, factors. Temperature, soil moisture and nitrogen all contribute to the rate of respiration in soil.

Temperature affects almost all aspects of respiration processes. Temperature increases respiration exponentially to a maximum, at which point respiration declines to zero when enzymatic activity is interrupted. Root respiration increases exponentially with temperature in its low range, where the respiration rate is limited mostly by the TCA cycle. At higher temperatures the transport of sugars and the products of metabolism become the limiting factor, and at temperatures over 35 °C (95 °F) root respiration begins to shut down completely. [ 8 ] Microorganisms are divided into three temperature groups: cryophiles , mesophiles and thermophiles . Cryophiles function optimally at temperatures below 20 °C (68 °F), mesophiles function best at temperatures between 20 and 40 °C (68 and 104 °F), and thermophiles function optimally at over 40 °C (104 °F). In natural soils, many different cohorts, or groups of microorganisms, exist. These cohorts function best under different conditions, so respiration may occur over a very broad temperature range. [ 9 ] Temperature increases lead to greater rates of soil respiration until high values retard microbial function; this is the same pattern seen with soil moisture levels.

Soil moisture is another important factor influencing soil respiration. Soil respiration is low in dry conditions, increases to a maximum at intermediate moisture levels, and decreases when the moisture content excludes oxygen, allowing anaerobic conditions to prevail and depress aerobic microbial activity. Studies have shown that soil moisture limits respiration only at the lowest and highest conditions, with a large plateau at intermediate soil moisture levels in most ecosystems. [ 10 ] Many microorganisms possess strategies for growth and survival under low soil moisture conditions. Under high soil moisture conditions, many bacteria take in too much water, causing their cell membranes to lyse, or break. This can decrease the rate of soil respiration temporarily, but the lysis of bacteria causes a spike in resources for many other bacteria. This rapid increase in available labile substrates causes short-term enhanced soil respiration. Root respiration increases with increasing soil moisture, especially in dry ecosystems; however, individual species' root respiration responses to soil moisture vary widely from species to species, depending on life history traits. Upper levels of soil moisture depress root respiration by restricting access to atmospheric oxygen. With the exception of wetland plants, which have developed specific mechanisms for root aeration, most plants are not adapted to wetland soil environments with low oxygen . [ 11 ] The respiration-dampening effect of elevated soil moisture is amplified when soil respiration also lowers soil redox through bioelectrogenesis . [ 12 ] Soil-based microbial fuel cells are becoming popular educational tools for science classrooms.

Nitrogen directly affects soil respiration in several ways. Nitrogen must be taken in by roots to promote plant growth and life. Most available nitrogen is in the form of NO 3 − , which costs 0.4 units of CO 2 to enter the root, because energy must be used to move it up a concentration gradient . Once inside the root, the NO 3 − must be reduced to NH 3 .
This step requires more energy, equal to 2 units of CO 2 per molecule reduced. In plants with bacterial symbionts that fix atmospheric nitrogen, the energetic cost to the plant of acquiring one molecule of NH 3 from atmospheric N 2 is 2.36 units of CO 2 . [ 13 ] It is essential that plants take up nitrogen from the soil or rely on symbionts to fix it from the atmosphere to assure growth, reproduction and long-term survival. Another way nitrogen affects soil respiration is through litter decomposition . High-nitrogen litter is considered high quality and is more readily decomposed by microorganisms than low-quality litter. Degradation of cellulose , a tough plant structural compound, is also a nitrogen-limited process and will increase with the addition of nitrogen to litter. [ 14 ]

Different methods exist for measuring the soil respiration rate and determining its sources. Methods can be divided into field-based and laboratory-based approaches. The most common field methods include long-term stand-alone soil flux systems, for measurement at one location at different times, and survey soil respiration systems, for measurement at different locations and times. Stable isotope ratios can be used in both laboratory and field measurements. Soil respiration can be measured alone or with added nutrients and (carbon) substrates that supply food sources to the microorganisms. Soil respiration without any addition of nutrients and substrates is called basal soil respiration (BR). With the addition of nutrients (often nitrogen and phosphorus) and substrates (e.g. sugars), it is called substrate-induced soil respiration (SIR). In both BR and SIR measurements, the moisture content can be adjusted with water.

Long-term systems measure at one location over long periods of time. Since they only measure at one location, it is common to use multiple stations to reduce the measuring error caused by soil variability over small distances. Soil variability may be tested with survey soil respiration instruments. The long-term instruments are designed to expose the measuring site to ambient conditions as much as possible between measurements.

Closed systems take short-term measurements (typically over only a few minutes) in a chamber sealed over the soil. [ 15 ] The rate of soil CO 2 efflux is calculated on the basis of the CO 2 increase inside the chamber. Because it is in the nature of closed chambers that CO 2 continues to accumulate, measurement periods are reduced to a minimum to achieve a detectable, linear concentration increase while avoiding an excessive build-up of CO 2 inside the chamber over time. Both individual assay information and diurnal CO 2 respiration information are accessible. It is common for such systems to also measure soil temperature, soil moisture and PAR ( photosynthetically active radiation ); these variables are normally recorded in the measuring file along with the CO 2 values. For determining soil respiration and the slope of the CO 2 increase, researchers have used linear regression analysis, the Pedersen (2001) algorithm, and exponential regression . There are more published references for linear regression analysis; however, the Pedersen algorithm and exponential regression methods also have their following. Some systems offer a choice of mathematical methods. [ 16 ] When using linear regression , multiple data points are graphed and fitted with a linear regression equation, which provides a slope. This slope gives the rate of soil respiration through the equation F = bV/A, where F is the rate of soil respiration, b is the slope, V is the volume of the measuring chamber and A is the surface area of the soil covered by the chamber. [ 17 ]
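To make the closed-chamber calculation concrete, here is a minimal sketch: a least-squares fit to chamber CO 2 readings gives the slope b, and F = bV/A converts it to a flux. The readings and chamber dimensions are invented for illustration, and real systems additionally correct for temperature, pressure and water vapour, which this sketch omits.

```python
# Minimal closed-chamber soil respiration calculation: fit a slope b to the
# CO2 concentration increase inside the sealed chamber, then apply F = b*V/A.
# Readings and chamber dimensions are hypothetical.

times = [0, 30, 60, 90, 120]                   # seconds since chamber closed
co2 = [410.0, 415.2, 420.1, 425.3, 430.0]      # ppm CO2 inside the chamber

# Ordinary least-squares slope (ppm per second), no external libraries needed.
n = len(times)
t_mean = sum(times) / n
c_mean = sum(co2) / n
num = sum((t - t_mean) * (c - c_mean) for t, c in zip(times, co2))
den = sum((t - t_mean) ** 2 for t in times)
b = num / den

V = 0.003   # chamber volume in m^3 (3 litres, assumed)
A = 0.02    # soil surface area covered by the chamber in m^2 (assumed)

flux = b * V / A   # ppm * m / s: a volumetric efflux per unit soil area
print(f"slope b = {b:.4f} ppm/s, flux F = {flux:.4f} ppm*m/s")
# Converting to umol CO2 m^-2 s^-1 additionally requires air temperature and
# pressure via the ideal gas law, which is left out of this sketch.
```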
It is important that the measurement not be allowed to run over a long period, as the increase in CO 2 concentration in the chamber will also increase the CO 2 concentration in the porous top layer of the soil profile. This increase in concentration will cause an underestimation of the soil respiration rate, due to the additional CO 2 being stored within the soil. [ 18 ]

Open-mode systems are designed to find soil flux rates once equilibrium has been reached in the measuring chamber. Air flows through the chamber before the chamber is closed and sealed, purging any non-ambient CO 2 from the chamber before measurement. After the chamber is closed, fresh air is pumped into the chamber at a controlled and programmable flow rate. This mixes with the CO 2 from the soil and, after a time, equilibrium is reached. The researcher specifies the equilibrium point as the difference in CO 2 measurements between successive readings over an elapsed time. During the assay, the rate of change slowly decreases until it meets the specified rate-of-change criteria or the maximum selected time for the assay. Soil flux, or rate of change, is then determined once equilibrium conditions are reached within the chamber. Chamber flow rates and times are programmable, accurately measured, and used in the calculations. These systems have vents designed to prevent the possible unacceptable build-up of partial CO 2 pressure discussed under closed-mode systems. Since air movement inside the chamber might increase chamber pressure, and external winds may reduce chamber pressure, the vent is designed to be as windproof as possible. Open systems are also less sensitive to soil structure variation and to boundary layer resistance issues at the soil surface; air flow in the chamber at the soil surface is designed to minimize boundary layer resistance phenomena. A hybrid system also exists: it has a vent designed to be as windproof as possible and to prevent unacceptable partial CO 2 pressure build-up, but is designed to operate like a closed-mode system in other regards.

Survey instruments are either open- or closed-mode instruments that are portable or semi-portable. They measure soil CO 2 respiration variability at different locations and at different times. With this type of instrument, soil collars that can be connected to the survey measuring instrument are inserted into the ground, and the soil is allowed to stabilize for a period of time. The insertion of the soil collar temporarily disturbs the soil, creating measuring artifacts; for this reason, it is common to have several soil collars inserted at different locations. Soil collars are inserted far enough to limit the lateral diffusion of CO 2 . After soil stabilization, the researcher moves from one collar to another according to the experimental design to measure soil respiration. Survey soil respiration systems can also be used to determine the number of long-term stand-alone instruments required to achieve an acceptable level of error; different locations may require different numbers of long-term stand-alone units due to greater or lesser soil respiration variability.
Plants acquire CO 2 and produce organic compounds using one of three photosynthetic pathways . The two most prevalent pathways are the C 3 and C 4 processes. C 3 plants are best adapted to cool and wet conditions, while C 4 plants do well in hot and dry ecosystems. Due to the different photosynthetic enzymes of the two pathways, different carbon isotopes are acquired preferentially. Isotopes are atoms of the same element that differ in their number of neutrons, making one isotope heavier than the other. The two stable carbon isotopes are 12 C and 13 C. The C 3 pathway discriminates against the heavier isotope more than the C 4 pathway does. This makes the plant structures produced by C 4 plants more enriched in the heavier isotope, so root exudates and litter from these plants are also more enriched. When the carbon in these structures is respired, the CO 2 shows a similar ratio of the two isotopes. Researchers will grow a C 4 plant on soil that was previously occupied by a C 3 plant, or vice versa; by taking soil respiration measurements and analyzing the isotopic ratios of the CO 2 , it can be determined whether the soil respiration is mostly old or recently formed carbon. For example, maize, a C 4 plant, was grown on soil where spring wheat , a C 3 plant, was previously grown. The results showed respiration of C 3 SOM in the first 40 days, with a gradual linear increase in heavy isotope enrichment until day 70; after day 70, enrichment slowed to a peak at day 100. [ 19 ] By analyzing stable carbon isotope data, it is possible to determine the source components of respired SOM that was produced by different photosynthetic pathways.

One problem in the measurement of soil respiration in the field is that the respiration of microorganisms cannot be distinguished from the respiration of plant roots and soil animals. This can be overcome using stable isotope techniques. Cane sugar is a C 4 sugar which can act as an isotopic tracer. [ 20 ] [ 21 ] Cane sugar has a slightly higher abundance of 13 C (δ 13 C ≈ −10‰) than the endogenous (natural) carbon in a C 3 ecosystem (δ 13 C = −25 to −28‰). Cane sugar can be sprayed on the soil in a solution and will infiltrate the upper soil. Only microorganisms will respire the added sugar, because roots exclusively respire carbon products that are assimilated by the plant via photosynthesis. By analyzing the δ 13 C of the CO 2 evolving from the soil with and without added cane sugar, the fractions of C 3 (root and microbial) and C 4 (microbial) respiration can be calculated. [ 22 ] [ 23 ] Field respiration measurement using stable isotopes can thus quantify microbial respiration in situ without disturbing the microbial communities by mixing in soil nutrients, oxygen, or any soil contaminants that may be present. [ 23 ]
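A minimal two-endmember mixing sketch of the cane-sugar partitioning described above follows. The endmember δ 13 C values come from the text (≈ −10‰ for cane sugar, −25 to −28‰ for C 3 carbon); the measured value is hypothetical, and published studies may apply fractionation corrections that this simple mass balance ignores.

```python
# Two-endmember stable isotope mixing: partition soil CO2 efflux between
# endogenous C3 carbon (roots plus microbes on native SOM) and added C4 cane
# sugar (microbial only). Endmember deltas follow the text; the measured
# value is invented for the example.

delta_c3 = -26.0        # per mil; C3 ecosystem carbon (text: -25 to -28)
delta_c4_sugar = -10.0  # per mil; cane sugar (text: approx. -10)
delta_measured = -22.0  # per mil; hypothetical delta 13C of soil CO2 efflux
                        # after spraying the sugar solution on the plot

# Mass balance: delta_measured = f*delta_c4 + (1 - f)*delta_c3, solved for f.
f_c4 = (delta_measured - delta_c3) / (delta_c4_sugar - delta_c3)
f_c3 = 1.0 - f_c4

print(f"fraction from added C4 sugar (microbial): {f_c4:.2f}")
print(f"fraction from C3 sources (roots + microbes): {f_c3:.2f}")
```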
Throughout the past 160 years, humans have changed land use and industrial practices, which have altered the climate and global biogeochemical cycles . These changes have affected the rate of soil respiration around the planet. In addition, increasingly frequent extreme climatic events, [ 24 ] such as heat waves (involving high-temperature disturbances and associated intense droughts) followed by intense rainfall, affect microbial communities and soil physico-chemistry and may induce changes in soil respiration. [ 25 ]

Since the Industrial Revolution , humans have emitted vast amounts of CO 2 into the atmosphere. These emissions have increased greatly over time and have raised global atmospheric CO 2 levels to their highest in over 750,000 years. Soil respiration increases when ecosystems are exposed to elevated levels of CO 2 . Numerous free air CO 2 enrichment (FACE) studies have been conducted to test soil respiration under the elevated CO 2 conditions predicted for the future. Recent FACE studies have shown large increases in soil respiration due to increased root biomass and microbial activity. [ 26 ] Soil respiration has been found to increase by up to 40.6% in a sweetgum forest in Tennessee and in poplar forests in Wisconsin under elevated CO 2 conditions. [ 27 ] It is extremely likely that CO 2 levels will exceed those used in these FACE experiments by the middle of this century, due to increased human use of fossil fuels and land use practices.

As soil temperature increases, CO 2 levels in the atmosphere increase, and with them the mean temperature of the Earth. This is due to human activities such as forest clearing , soil denuding , and developments that destroy autotrophic processes. With the loss of photosynthetic plants covering and cooling the surface of the soil, infrared energy penetrates the soil, heating it up and causing a rise in heterotrophic bacteria. Heterotrophs in the soil quickly degrade the organic matter, the soil structure crumbles, and it washes into streams and rivers and out to the sea. Much of the organic matter swept away in floods caused by forest clearing goes into estuaries , wetlands and eventually the open ocean. Increased turbidity of surface waters raises biological oxygen demand, and more autotrophic organisms die. Carbon dioxide levels rise with the increased respiration of soil bacteria after temperatures rise due to the loss of soil cover.

As mentioned earlier, temperature greatly affects the rate of soil respiration, and this may have the most drastic influence in the Arctic . Large stores of carbon are locked in the frozen permafrost . With an increase in temperature, this permafrost is melting and aerobic conditions are beginning to prevail, thereby greatly increasing the rate of respiration in that ecosystem. [ 28 ]

Due to shifting patterns of temperature and changing oceanic conditions, precipitation patterns are expected to change in location, frequency and intensity. Larger and more frequent storms are expected when oceans can transfer more energy to forming storm systems. This may have the greatest impact on xeric , or arid, ecosystems. It has been shown that soil respiration in arid ecosystems shows dynamic changes within a rainfall cycle . The rate of respiration in dry soil usually bursts to a very high level after rainfall and then gradually decreases as the soil dries. [ 10 ] With an increase in rainfall frequency and intensity over areas without previously extensive rainfall, a dramatic increase in soil respiration can be inferred.

Since the onset of the Green Revolution in the middle of the last century, vast amounts of nitrogen fertilizer have been produced and introduced to almost all agricultural systems. This has led to increases in plant-available nitrogen in ecosystems around the world, due to agricultural runoff and wind-driven fertilization . As discussed earlier, nitrogen can have a significant positive effect on the level and rate of soil respiration. Increases in soil nitrogen have been found to increase plant dark respiration, stimulate specific rates of root respiration and increase total root biomass.
[ 29 ] This is because high nitrogen rates are associated with high plant growth rates, and high plant growth rates lead to the increased respiration and biomass found in the study. With this increase in productivity, an increase in soil activity, and therefore respiration, can be assured.

Soil respiration plays a significant role in the global carbon and nutrient cycles, as well as being a driver of changes in climate. These roles are important to our understanding of the natural world and to human preservation.

Soil respiration plays a critical role in the regulation of carbon cycling at the ecosystem level and at global scales. Each year approximately 120 petagrams (Pg) of carbon are taken up by land plants, and a similar amount is released to the atmosphere through ecosystem respiration. Global soils contain up to 3150 Pg of carbon, of which 450 Pg exist in wetlands and 400 Pg in permanently frozen soils. Soils thus contain more than four times as much carbon as the atmosphere. [ 30 ] Researchers have estimated that soil respiration accounts for 77 Pg of carbon released to the atmosphere each year. [ 31 ] This level of release is greater than the carbon release from anthropogenic sources (56 Pg per year) such as fossil fuel burning. Thus, a small change in soil respiration can seriously alter the balance between atmospheric CO 2 concentration and soil carbon stores.

Just as soil respiration plays a significant role in the global carbon cycle, it can also regulate global nutrient cycling . A major component of soil respiration is the decomposition of litter, which releases CO 2 to the environment while simultaneously immobilizing or mineralizing nutrients. During decomposition, nutrients such as nitrogen are immobilized by microbes for their own growth. As these microbes are ingested or die, nitrogen is added to the soil. Nitrogen is also mineralized from the degradation of proteins and nucleic acids in litter; this mineralized nitrogen is likewise added to the soil. Due to these processes, the rate at which nitrogen is added to the soil is coupled with the rate of microbial respiration. Studies have shown that rates of soil respiration are associated with rates of microbial turnover and nitrogen mineralization. [ 5 ]

Alterations of the global cycles can further act to change the climate of the planet. As stated earlier, the CO 2 released by soil respiration is a greenhouse gas that will continue to trap energy and increase the global mean temperature if concentrations continue to rise. As global temperature rises, so will the rate of soil respiration across the globe, leading to a higher concentration of CO 2 in the atmosphere and, in turn, to higher global temperatures. This is an example of a positive feedback loop. It is estimated that a rise in temperature of 2 °C will lead to the additional release of 10 Pg of carbon per year to the atmosphere from soil respiration. [ 32 ] This is a larger amount than current anthropogenic carbon emissions. There is also the possibility that this increase in temperature will release carbon stored in permanently frozen soils, which are now melting. Climate models have suggested that this positive feedback between soil respiration and temperature will lead to a decrease in soil-stored carbon by the middle of the 21st century. [ 33 ]

Soil respiration is a key ecosystem process that releases carbon from the soil in the form of carbon dioxide. Carbon is stored in the soil as organic matter and is respired by plants, bacteria, fungi and animals.
When this respiration occurs below ground, it is considered soil respiration. Temperature, soil moisture and nitrogen all regulate the rate of this conversion from carbon in soil organic compounds to CO 2 . Many methods are used to measure soil respiration; however, the closed dynamic chamber and the use of stable isotope ratios are two of the most prevalent techniques. Humans have altered atmospheric CO 2 levels, precipitation patterns and fertilization rates, all of which have had a significant effect on soil respiration rates. The changes in these rates can alter the global carbon and nutrient cycles as well as play a significant role in climate change.
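The exponential low-range temperature response described earlier in this article is commonly summarized with an empirical form that the text itself does not name: the Q10 function, in which the rate multiplies by a fixed factor for every 10 °C of warming. The sketch below assumes an illustrative Q10 of 2 and an arbitrary reference rate, and it deliberately ignores the high-temperature shutdown also described above.

```python
# Q10 sketch of the exponential part of the temperature response of soil
# respiration: R(T) = R_ref * Q10 ** ((T - T_ref) / 10).
# Q10 = 2 and the reference rate are illustrative assumptions, and the model
# omits the enzymatic shutdown at high temperature described in the article.

def respiration_rate(temp_c, r_ref=2.0, t_ref=10.0, q10=2.0):
    """Soil respiration rate (arbitrary units) at temp_c degrees Celsius."""
    return r_ref * q10 ** ((temp_c - t_ref) / 10.0)

for t in (0, 10, 20, 30):
    print(f"{t:2d} C -> {respiration_rate(t):.2f} units")
# With Q10 = 2 the rate doubles for every 10 C of warming, which is one way
# to picture the soil respiration / climate positive feedback discussed above.
```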
https://en.wikipedia.org/wiki/Soil_respiration
Soil salinity is the salt content in the soil ; the process of increasing the salt content is known as salinization (also called salination in American English ). [ 1 ] Salts occur naturally within soils and water. Salinization can be caused by natural processes such as mineral weathering or by the gradual withdrawal of an ocean. It can also come about through artificial processes such as irrigation and road salt .

Salts are a natural component of soils and water. The ions responsible for salinization are Na + , K + , Ca 2+ , Mg 2+ and Cl − . Over long periods of time, as soil minerals weather and release salts, these salts are flushed or leached out of the soil by drainage water in areas with sufficient precipitation. In addition to mineral weathering, salts are also deposited via dust and precipitation. Salts may accumulate in dry regions, leading to naturally saline soils; this is the case, for example, in large parts of Australia .

Human practices can increase the salinity of soils by adding salts in irrigation water. Proper irrigation management can prevent salt accumulation by providing adequate drainage water to leach the added salts from the soil. Disrupting drainage patterns that provide leaching can also result in salt accumulation. An example of this occurred in Egypt in 1970 when the Aswan High Dam was built. The change in the level of ground water before the construction had enabled soil erosion , which led to high concentrations of salts in the water table. After the construction, the continuously high level of the water table led to the salinization of arable land . [ citation needed ]

When Na + (sodium) predominates, soils can become sodic . The pH of sodic soils may be acidic , neutral or alkaline . Sodic soils present particular challenges because they tend to have very poor structure, which limits or prevents water infiltration and drainage. They tend to accumulate certain elements, like boron and molybdenum , in the root zone at levels that may be toxic to plants. [ 2 ] The most common compound used for the reclamation of sodic soil is gypsum , and some plants that are tolerant of salt and ion toxicity may present strategies for improvement. [ 3 ] [ failed verification ] The term "sodic soil" is sometimes used imprecisely in scholarship. It has been used interchangeably with the term alkali soil , which is used in two senses: 1) a soil with a pH greater than 8.2, and 2) a soil with an exchangeable sodium content above 15% of exchange capacity. The term "alkali soil" is often, but not always, used for soils that meet both of these characteristics. [ 4 ]

Salinity in drylands can occur when the water table is between two and three metres from the surface of the soil. Salts from the groundwater are raised by capillary action to the surface of the soil. This occurs when groundwater is saline (which is true in many areas) and is favored by land use practices that allow more rainwater to enter the aquifer than it can accommodate. For example, the clearing of trees for agriculture is a major cause of dryland salinity in some areas, since the deep rooting of trees has been replaced by the shallow rooting of annual crops.

Salinity from irrigation can occur over time wherever irrigation is practised, since almost all water (even natural rainfall) contains some dissolved salts. [ 5 ] When plants use the water, the salts are left behind in the soil and eventually begin to accumulate. The water applied in excess of plant needs is called the leaching fraction .
Salinization from irrigation water is also greatly increased by poor drainage and by the use of saline water for irrigating agricultural crops. Salinity in urban areas often results from the combination of irrigation and groundwater processes. Irrigation is also now common in cities (gardens and recreation areas).

Salinity is an important land degradation problem with a range of consequences. Soil salinity can be reduced by leaching soluble salts out of the soil with excess irrigation water. Soil salinity control involves watertable control and flushing in combination with tile drainage or another form of subsurface drainage . [ 7 ] [ 8 ] A comprehensive treatment of soil salinity is available from the United Nations Food and Agriculture Organization . [ 9 ]

High levels of soil salinity can be tolerated if salt-tolerant plants are grown. Sensitive crops lose their vigor even in slightly saline soils, most crops are negatively affected by (moderately) saline soils, and only salinity-resistant crops thrive in severely saline soils. The University of Wyoming [ 10 ] and the Government of Alberta [ 11 ] report data on the salt tolerance of plants. Field data from irrigated lands under farmers' conditions are scarce, especially in developing countries, but some on-farm surveys have been made in Egypt, [ 12 ] India, [ 13 ] and Pakistan. [ 14 ] Some examples are shown in the following gallery, with crops arranged from sensitive to very tolerant. [ 15 ] [ 16 ]

Calcium has been found to have a positive effect in combating salinity in soils. It has been shown to ameliorate the negative effects of salinity, such as reduced water usage by plants. [ 17 ] Soil salinity activates genes associated with stress conditions in plants. [ 18 ] These genes initiate the production of plant stress enzymes such as superoxide dismutase , L-ascorbate oxidase , and Delta 1 DNA polymerase . Limiting this process can be achieved by administering exogenous glutamine to plants: the decrease in the level of expression of genes responsible for the synthesis of superoxide dismutase increases with increasing glutamine concentration. [ 18 ]

From the FAO/UNESCO Soil Map of the World the following salinised areas can be derived. [ 19 ]
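Returning to the leaching approach described above: one widely used rule of thumb from FAO irrigation guidance (Ayers and Westcot), which the text cites but does not reproduce, estimates the leaching requirement as LR = ECw / (5·ECe − ECw). The sketch below applies that formula; the water salinity and crop tolerance values are illustrative assumptions, not figures from this article.

```python
# Leaching requirement estimate in the style of FAO irrigation guidance
# (Ayers & Westcot): LR = ECw / (5 * ECe - ECw), where ECw is the electrical
# conductivity of the irrigation water and ECe the soil salinity (saturated
# paste extract) the crop can tolerate. Input values are illustrative.

def leaching_requirement(ec_water, ec_crop_tolerance):
    """Fraction of applied water that must drain below the root zone."""
    return ec_water / (5.0 * ec_crop_tolerance - ec_water)

ec_w = 1.5   # dS/m, moderately saline irrigation water (assumed)
ec_e = 4.0   # dS/m, tolerance of a moderately salt-tolerant crop (assumed)

lr = leaching_requirement(ec_w, ec_e)
print(f"Leaching requirement: {lr:.0%} of applied irrigation water")
# Total water to apply = crop water demand / (1 - LR), so an LR of ~8%
# means applying roughly 1.09 times the crop's consumptive use.
```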
https://en.wikipedia.org/wiki/Soil_salinity
Soil salinity control refers to controlling the process and progress of soil salinity to prevent soil degradation by salination, and to the reclamation of already salty (saline) soils. Soil reclamation is also known as soil improvement, rehabilitation, remediation, recuperation, or amelioration.

The primary man-made cause of salinization is irrigation. River water or groundwater used in irrigation contains salts, which remain in the soil after the water has evaporated. The primary method of controlling soil salinity is to permit 10–20% of the irrigation water to leach the soil and to be drained and discharged through an appropriate drainage system. The salt concentration of the drainage water is normally 5 to 10 times higher than that of the irrigation water, which means that salt export can match salt import and salt will not accumulate.

Salty (saline) soils have a high salt content. The predominant salt is normally sodium chloride (NaCl, "table salt"). Saline soils are therefore also sodic soils, but there may be sodic soils that are not saline but alkaline.

This damage is an average of 2,000 hectares of irrigated land in arid and semi-arid areas daily for more than 20 years across 75 countries (each week the world loses an area larger than Manhattan)...To feed the world's anticipated nine billion people by 2050, and with little new productive land available, it's a case of all lands needed on deck. — principal author Manzoor Qadir, Assistant Director, Water and Human Development, at UN University's Canadian-based Institute for Water, Environment and Health [ 1 ]

According to a study by UN University, about 62 million hectares (240 thousand square miles; 150 million acres), representing 20% of the world's irrigated lands, are affected, up from 45 million ha (170 thousand sq mi; 110 million acres) in the early 1990s. [ 1 ] In the Indo-Gangetic Plain, home to over 10% of the world's population, crop yield losses for wheat, rice, sugarcane and cotton grown on salt-affected lands could be 40%, 45%, 48%, and 63%, respectively. [ 1 ]

Salty soils are a common feature and an environmental problem in irrigated lands in arid and semi-arid regions, resulting in poor or little crop production. [ 2 ] The causes of salty soils are often associated with high water tables, which are caused by a lack of natural subsurface drainage to the underground. Poor subsurface drainage may be caused by insufficient transport capacity of the aquifer or because water cannot exit the aquifer, for instance, when the aquifer is situated in a topographical depression. Worldwide, the major factor in the development of saline soils is a lack of precipitation; most naturally saline soils are found in the (semi-)arid regions and climates of the earth.

Man-made salinization is primarily caused by salt in the irrigation water. All irrigation water derived from rivers or groundwater, regardless of its purity, contains salts that remain behind in the soil after the water has evaporated. For example, irrigation water with a low salt concentration of 0.3 g/L (equal to 0.3 kg/m3, corresponding to an electrical conductivity of about 0.5 dS/m) applied at a modest annual rate of 10,000 m3/ha (almost 3 mm/day) brings 3,000 kg of salt per hectare each year. In the absence of sufficient natural drainage (as in waterlogged soils) and of a proper leaching and drainage program to remove salts, this leads to high soil salinity and reduced crop yields in the long run.
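The arithmetic of that example is easy to check in code. A minimal sketch, using the figures from the text (0.3 g/L, i.e. 0.3 kg/m3, and 10,000 m3/ha per year):

```python
# Annual salt import from irrigation water (illustrative figures from the text).
def salt_import_kg_per_ha(concentration_g_per_l: float, water_m3_per_ha: float) -> float:
    """Salt added per hectare per year; 1 g/L == 1 kg/m^3, so mass = concentration * volume."""
    return concentration_g_per_l * water_m3_per_ha

print(salt_import_kg_per_ha(0.3, 10_000))   # 3000.0 kg/ha/yr, the example above
print(salt_import_kg_per_ha(0.5, 20_000))   # 10000.0 kg/ha/yr, e.g. a thirstier crop with saltier water
```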
Much of the water used in irrigation has a salt content higher than 0.3 g/L, compounded by irrigation projects using a far greater annual supply of water. Sugar cane, for example, needs about 20,000 m3/ha of water per year. As a result, irrigated areas often receive more than 3,000 kg/ha of salt per year, with some receiving as much as 10,000 kg/ha/year.

The secondary cause of salinization is waterlogging in irrigated land. Irrigation changes the natural water balance of irrigated lands: large quantities of water in irrigation projects are not consumed by plants and must go somewhere. In irrigation projects it is impossible to achieve 100% irrigation efficiency, in which all the irrigation water would be consumed by the plants. The maximum attainable irrigation efficiency is about 70%, but usually it is less than 60%. This means that a minimum of 30%, and usually more than 40%, of the irrigation water is not evaporated and must go somewhere. Most of the water lost this way is stored underground, which can change the original hydrology of local aquifers considerably. Many aquifers cannot absorb and transport these quantities of water, so the water table rises, leading to waterlogging, which causes problems of its own. Aquifer conditions in irrigated land and the groundwater flow also play an important role in soil salinization. [ 3 ]

Normally, the salinization of agricultural land affects a considerable area, some 20% to 30% of the land in irrigation projects. When agriculture on such a fraction of the land is abandoned, a new salt and water balance is attained and the situation becomes stable. In India alone, thousands of square kilometers have been severely salinized. China and Pakistan do not lag far behind (perhaps China has even more salt-affected land than India). A regional distribution of the 3,230,000 km2 of saline land worldwide can be derived from the FAO/UNESCO Soil Map of the World. [ 4 ]

Although the principles of the processes of salinization are fairly easy to understand, it is more difficult to explain why certain parts of the land suffer from the problems while others do not, or to predict accurately which part of the land will fall victim. The main reasons for this are the variation of natural conditions in time and space, the usually uneven distribution of the irrigation water, and the seasonal or yearly changes in agricultural practices. Only in lands with undulating topography is the prediction simple: the depressional areas will degrade the most. The preparation of salt and water balances [ 3 ] for distinguishable sub-areas in the irrigation project, or the use of agro-hydro-salinity models, [ 5 ] can be helpful in explaining or predicting the extent and severity of the problems.

Soil salinity is measured as the salt concentration of the soil solution in terms of g/L, or as electrical conductivity (EC) in dS/m. The relation between these two units is about 5/3: a concentration of y g/L corresponds to about 5y/3 dS/m. Seawater may have a salt concentration of 30 g/L (3%) and an EC of 50 dS/m. The standard for the determination of soil salinity is an extract of a saturated paste of the soil, and the EC is then written as ECe. The extract is obtained by centrifugation. Salinity can be measured more easily, without centrifugation, in a 2:1 or 5:1 water:soil mixture (in grams of water per gram of dry soil) than from a saturated paste. The relation between ECe and EC2:1 is about 4, hence: ECe = 4·EC2:1. [ 8 ]
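The unit conversions quoted here can be captured as small helpers. A sketch, assuming only the rule-of-thumb factors given in the text:

```python
# Rules of thumb from the text for expressing soil salinity.
def g_per_l_to_ds_per_m(concentration: float) -> float:
    """Convert salt concentration (g/L) to electrical conductivity (dS/m); EC ≈ 5/3 · c."""
    return 5.0 * concentration / 3.0

def ece_from_ec21(ec_2_1: float) -> float:
    """Estimate saturated-paste ECe from a 2:1 water:soil suspension reading (ECe ≈ 4 · EC2:1)."""
    return 4.0 * ec_2_1

print(g_per_l_to_ds_per_m(30.0))  # ≈ 50 dS/m, the seawater example above
```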
Soils are considered saline when ECe > 4 dS/m. [ 9 ] When 4 < ECe < 8 the soil is called slightly saline, when 8 < ECe < 16 it is called (moderately) saline, and when ECe > 16 severely saline. Sensitive crops lose their vigor even in slightly saline soils, most crops are negatively affected by (moderately) saline soils, and only salinity-resistant crops thrive in severely saline soils. The University of Wyoming [ 10 ] and the Government of Alberta [ 11 ] report data on the salt tolerance of plants.

Drainage is the primary method of controlling soil salinity. The system should permit a small fraction of the irrigation water (about 10 to 20 percent, the drainage or leaching fraction) to be drained and discharged out of the irrigation project. [ 12 ] In irrigated areas where salinity is stable, the salt concentration of the drainage water is normally 5 to 10 times higher than that of the irrigation water. Salt export then matches salt import, and salt does not accumulate. When reclaiming soils that are already salinized, the salt concentration of the drainage water is initially much higher than that of the irrigation water (for example, 50 times higher). Salt export greatly exceeds salt import, so with the same drainage fraction a rapid desalinization occurs. After one or two years, the soil salinity has decreased so much that the salinity of the drainage water has come down to a normal value, and a new, favorable equilibrium is reached.

In regions with pronounced dry and wet seasons, the drainage system may be operated in the wet season only and closed during the dry season. This practice of checked or controlled drainage saves irrigation water. The discharge of salty drainage water may pose environmental problems to downstream areas; the environmental hazards must be considered very carefully and, if necessary, mitigating measures must be taken. If possible, the drainage must be limited to the wet seasons, when the salty effluent inflicts the least harm.

Land drainage for soil salinity control is usually by a horizontal drainage system, but vertical systems are also employed. The drainage system designed to evacuate salty water also lowers the water table. To reduce the cost of the system, this lowering must be kept to a minimum. The highest permissible level of the water table (or the shallowest permissible depth) depends on the irrigation and agricultural practices and the kind of crops. In many cases a seasonal average water table depth of 0.6 to 0.8 m is deep enough. This means that the water table may occasionally be shallower than 0.6 m (say 0.2 m just after an irrigation or a rainstorm); it also implies that on other occasions the water table will be deeper than 0.8 m (say 1.2 m). The fluctuation of the water table aids the breathing function of the soil, promoting the expulsion of carbon dioxide (CO2) produced by the plant roots and the intake of fresh oxygen (O2).

Establishing a not-too-deep water table offers the additional advantage that excessive field irrigation is discouraged, as the crop yield would be negatively affected by the resulting elevated water table, and irrigation water may be saved. The statements made above on the optimum depth of the water table are very general, because in some instances the required water table may be still shallower than indicated (for example, in rice paddies), while in other instances it must be considerably deeper (for example, in some orchards).
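As an aside before moving on, the salinity classes quoted at the start of this passage map directly onto a small classifier. A sketch, using the ECe thresholds (in dS/m) exactly as given:

```python
# Soil salinity classes from saturated-paste electrical conductivity (ECe, dS/m),
# following the thresholds quoted in the text.
def salinity_class(ece_ds_per_m: float) -> str:
    if ece_ds_per_m <= 4:
        return "non-saline"
    if ece_ds_per_m <= 8:
        return "slightly saline"
    if ece_ds_per_m <= 16:
        return "(moderately) saline"
    return "severely saline"

for ece in (2, 6, 12, 20):
    print(ece, salinity_class(ece))
```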
The establishment of the optimum depth of the water table is in the realm of agricultural drainage criteria. [ 13 ]

The vadose zone of the soil, between the soil surface and the water table, is subject to four main hydrological inflow and outflow factors: [ 3 ] infiltration of irrigation water (Irr) and rain (Rain) into the soil, evaporation including plant transpiration (Evap), percolation (Perc) of water from the unsaturated zone into the groundwater, and capillary rise (Cap) of groundwater into the unsaturated zone. In steady state (i.e., when the amount of water stored in the unsaturated zone does not change in the long run), the water balance of the unsaturated zone reads Inflow = Outflow, thus:

Irr + Rain + Cap = Evap + Perc

and the salt balance is

Irr.Ci + Cap.Cc = Evap.Fc.Ce + Perc.Cp + Ss

where Ci is the salt concentration of the irrigation water, Cc is the salt concentration of the capillary rise (equal to the salt concentration of the upper part of the groundwater body), Fc is the fraction of the total evaporation transpired by plants, Ce is the salt concentration of the water taken up by the plant roots, Cp is the salt concentration of the percolation water, and Ss is the increase of salt storage in the unsaturated soil. This assumes that the rainfall contains no salts; only along the coast may this not be true. It is further assumed that no runoff or surface drainage occurs. The amount of salt removed by plants (Evap.Fc.Ce) is usually negligibly small: Evap.Fc.Ce = 0.

The salt concentration Cp can be taken as a part of the salt concentration of the soil moisture in the unsaturated zone (Cu), giving Cp = Le.Cu, where Le is the leaching efficiency. The leaching efficiency is often in the order of 0.7 to 0.8, [ 14 ] but in poorly structured, heavy clay soils it may be less. In the Leziria Grande polder in the delta of the Tagus river in Portugal, the leaching efficiency was found to be only 0.15. [ 15 ]

Assuming that one wishes to prevent the soil salinity from increasing and to maintain the soil salinity Cu at a desired level Cd, we have Ss = 0, Cu = Cd and Cp = Le.Cd. Hence the salt balance can be simplified to:

Irr.Ci + Cap.Cc = Perc.Le.Cd

Setting the amount of percolation water required to fulfill this salt balance equal to Lr (the leaching requirement), it is found that:

Lr = (Irr.Ci + Cap.Cc) / (Le.Cd)

Substituting herein Irr = Evap + Perc − Rain − Cap and rearranging gives:

Lr = { (Evap − Rain).Ci + Cap.(Cc − Ci) } / (Le.Cd − Ci)

With this, the irrigation and drainage requirements for salinity control can be computed too. In irrigation projects in (semi-)arid zones and climates it is important to check the leaching requirement, whereby the field irrigation efficiency (indicating the fraction of irrigation water percolating to the underground) is to be taken into account. The desired soil salinity level Cd depends on the crop tolerance to salt. The University of Wyoming [ 10 ] in the US and the Government of Alberta [ 11 ] in Canada report crop tolerance data.

In irrigated lands with scarce water resources that suffer from drainage (high water table) and soil salinity problems, strip cropping is sometimes practiced: every other strip of land is irrigated, while the strips in between are left permanently fallow. [ 16 ] Owing to the water application, the irrigated strips have a higher water table, which induces a flow of groundwater to the unirrigated strips. This flow functions as subsurface drainage for the irrigated strips, whereby the water table is maintained at a not-too-shallow depth, leaching of the soil is possible, and the soil salinity can be controlled at an acceptably low level. In the unirrigated (sacrificial) strips the soil is dry and the groundwater comes up by capillary rise and evaporates, leaving the salts behind, so that the soil there salinizes. Nevertheless, these strips can have some use for livestock when sown with salinity-resistant grasses or weeds.
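Before turning to the vegetation options for such sacrificial strips, note that the leaching requirement derived above reduces to a one-line function. A sketch with hypothetical seasonal values (all water terms in one unit, e.g. mm per season; concentrations in another, e.g. g/L):

```python
# Leaching requirement Lr: depth of percolation water needed to hold root-zone
# salinity at the target Cd, per the steady-state salt balance above.
def leaching_requirement(evap, rain, cap, ci, cc, le, cd):
    """evap: evaporation + transpiration   rain: rainfall      cap: capillary rise
    ci/cc: salt conc. of irrigation / capillary water
    le:    leaching efficiency (0..1)      cd:  desired soil-water salt concentration
    """
    return ((evap - rain) * ci + cap * (cc - ci)) / (le * cd - ci)

# Hypothetical numbers, for illustration only:
lr = leaching_requirement(evap=800, rain=200, cap=50, ci=0.3, cc=3.0, le=0.75, cd=1.5)
print(round(lr, 1))   # ≈ 381.8 mm of percolation per season
```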
Moreover, useful salt-resistant trees such as Casuarina, Eucalyptus, or Atriplex can be planted, keeping in mind that the trees have deep rooting systems and that the salinity of the wet subsoil is less than that of the topsoil. In these ways wind erosion can also be controlled. The unirrigated strips can also be used for salt harvesting. [ citation needed ]

The majority of the computer models available for water and solute transport in the soil (e.g., SWAP, [ 17 ] DrainMod-S, [ 18 ] UnSatChem, [ 19 ] and Hydrus [ 20 ]) are based on the Richards equation for the movement of water in unsaturated soil, in combination with the convection–diffusion equation for the advection and dispersion of salts. The models require the input of soil characteristics such as the relations between variable unsaturated soil moisture content, water tension, the water retention curve, unsaturated hydraulic conductivity, dispersivity, and diffusivity. These relations vary greatly from place to place and from time to time, and are not easy to measure. Further, the models are complicated to calibrate under farmers' field conditions because soil salinity there is spatially very variable. The models use short time steps and need at least a daily, if not hourly, database of hydrological phenomena. Altogether, this makes applying a model to a fairly large project the job of a team of specialists with ample facilities.

Simpler models, like SaltMod, [ 5 ] based on monthly or seasonal water and soil balances and an empirical capillary rise function, are also available. They are useful for long-term salinity predictions in relation to irrigation and drainage practices. LeachMod, [ 21 ] [ 22 ] which uses the SaltMod principles, helps in analyzing leaching experiments in which the soil salinity is monitored in various root zone layers; the model optimizes the value of the leaching efficiency of each layer so that a fit of simulated to observed soil salinity values is obtained. Spatial variations owing to variations in topography can be simulated and predicted with combined salinity and groundwater models like SahysMod.
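For a feel of what the convection–diffusion models mentioned above compute, the toy sketch below advances a one-dimensional salt profile under steady downward flow with an explicit upwind scheme. The velocity, dispersion coefficient, and concentrations are assumed values for illustration only; real codes such as SWAP or Hydrus couple the solute equation to the Richards equation and to measured soil properties:

```python
import numpy as np

# 1-D advection-dispersion sketch of salt being leached from a soil column.
nz, dz, dt = 50, 0.02, 50.0          # 50 cells of 2 cm; time step of 50 s
v = 1e-6                             # downward pore-water velocity (m/s), assumed constant
D = 1e-8                             # dispersion coefficient (m^2/s), assumed
c = np.full(nz, 5.0)                 # initial salt concentration profile (g/L)
c_in = 0.3                           # concentration of infiltrating irrigation water (g/L)

for _ in range(20_000):              # explicit time marching (stable for these parameters)
    above = np.concatenate(([c_in], c[:-1]))                 # cell above (inflow boundary on top)
    below = np.concatenate((c[1:], [c[-1]]))                 # cell below (zero-gradient bottom)
    c = c - dt * v * (c - above) / dz + dt * D * (above - 2 * c + below) / dz**2

print(c[:5].round(2))                # near-surface layers approach the input concentration
```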
https://en.wikipedia.org/wiki/Soil_salinity_control
The soil seed bank is the natural storage of seeds, often dormant, within the soil of most ecosystems. [ 1 ] The study of soil seed banks started in 1859 when Charles Darwin observed the emergence of seedlings from soil samples taken from the bottom of a lake. The first scientific paper on the subject, published in 1882, reported on the occurrence of seeds at different soil depths. [ 2 ] Weed seed banks have been studied intensely in agricultural science because of their important economic impacts; other fields interested in soil seed banks include forest regeneration and restoration ecology.

Henry David Thoreau wrote that the contemporary popular belief explaining the succession of a logged forest, specifically to trees of a species dissimilar to those cut down, was that the seeds either spontaneously generated in the soil or had sprouted after lying dormant for centuries. However, he dismissed this idea, noting that heavy nuts unsuited for distribution by wind were distributed instead by animals. [ 3 ]

The seed bank is one of the key factors in the persistence and density fluctuations of plant populations, especially for annual plants. [ 4 ] Perennial plants have vegetative propagules that facilitate forming new plants, migration into new ground, or reestablishment after being top-killed; these are analogous to the seed bank in their ability to persist through disturbance. The propagules are collectively called the 'soil bud bank' and include dormant and adventitious buds on stolons, rhizomes, and bulbs. Moreover, the term soil diaspore bank can be used to include non-flowering plants such as ferns and bryophytes. [ citation needed ]

The soil seed bank is a significant source of propagules for vegetation restoration, [ 5 ] including species-rich vegetation restoration, [ 6 ] as it preserves a memory of past vegetation and foreshadows the structure of future populations. [ 6 ] Moreover, the composition of the seed bank is often more stable under environmental changes than that of the standing vegetation, [ 7 ] although chronic nitrogen deposition can deplete it. [ 8 ] [ 9 ] In many systems, the density of the soil seed bank is lower than that of the vegetation, [ 4 ] and there are large differences between the species composition of the seed bank and that of the aboveground vegetation. [ 10 ] [ 11 ] [ 12 ] Additionally, the relationship between the soil seed bank and the standing vegetation is a key point in assessing revegetation potential. [ 13 ] [ 14 ] In endangered habitats, such as mudflats, rare and critically endangered species may be present in high densities in the seed bank. [ 15 ]

Soil seed banks are a crucial part of the rapid re-vegetation of sites disturbed by wildfire, catastrophic weather, agricultural operations, and timber harvesting, a natural process known as secondary succession. Soil seed banks are often dominated by pioneer species, those species that are specially adapted to return to an environment first after a disturbance. [ 16 ] Forest ecosystems and wetlands contain a number of specialized plant species forming persistent soil seed banks. [ citation needed ] The absence of a soil seed bank impedes the establishment of vegetation during primary succession, while the presence of a well-stocked soil seed bank permits the rapid development of species-rich ecosystems during secondary succession. [ citation needed ] Many taxa have been classified according to the longevity of their seeds in the soil seed bank.
Seeds of transient species remain viable in the soil seed bank only until the next opportunity to germinate, while seeds of persistent species can survive longer than the next opportunity, often much longer than one year. Species with seeds that remain viable in the soil longer than five years form the long-term persistent seed bank, while species whose seeds generally germinate or die within one to five years are called short-term persistent. A typical long-term persistent species is Chenopodium album (lambsquarters); its seeds commonly remain viable in the soil for up to 40 years and in rare situations perhaps as long as 1,600 years. [ 17 ] A species forming no soil seed bank at all (except during the dry season between ripening and the first autumnal rains) is Agrostemma githago (corncockle), which was formerly a widespread cereal weed. [ citation needed ]

The longevity of seeds is very variable and depends on many factors; seeds buried more deeply tend to be capable of lasting longer. [ 18 ] However, few species' seeds remain viable for more than 100 years. [ 19 ] In typical soils the longevity of seeds can range from nearly zero (germinating immediately upon reaching the soil, or even before) to several hundred years. Some of the oldest still-viable seeds were those of lotus (Nelumbo nucifera) found buried in the soil of a pond; these seeds were estimated by carbon dating to be around 1,200 years old. [ 20 ] One cultivar of date palm, the Judean date palm, successfully sprouted in 2008 after accidental storage for 2,000 years. [ 21 ]

One of the longest-running soil seed viability trials was started in Michigan in 1879 by William James Beal. The experiment involved burying 20 bottles, each holding 50 seeds of each of 21 species. Every five years a bottle was retrieved, and its seeds were germinated on a tray of sterilized soil kept in a growth chamber. Later, after responsibility for managing the experiment was delegated to caretakers, the period between retrievals became longer. In 1980, more than 100 years after the trial was started, seeds of only three species were observed to germinate: moth mullein (Verbascum blattaria), common mullein (Verbascum thapsus) and common mallow (Malva neglecta). [ 22 ] Several other experiments have been conducted to determine the long-term longevity of seeds in soil seed banks.

Species of Striga (witchweed) are known to leave some of the highest seed densities in the soil compared with other plant genera; this is a major factor in their invasive potential. [ 28 ] Each plant can produce between 90,000 and 450,000 seeds, although a majority of these seeds are not viable. [ 29 ] It has been estimated that only two witchweed plants would produce enough seeds to refill a seed bank after seasonal losses. [ 30 ] Before the advent of herbicides, a good example of a persistent seed bank species was Papaver rhoeas, sometimes so abundant in agricultural fields in Europe that it could be mistaken for a crop. [ citation needed ] Studies on the genetic structure of Androsace septentrionalis populations in the seed bank, compared with those of established plants, showed that diversity within populations is higher below ground than above ground. [ citation needed ]
https://en.wikipedia.org/wiki/Soil_seed_bank
A soil stockpile is formed with excavated topsoil during the construction of buildings or infrastructure. It is considered to be an important resource in construction and ecology. [ 1 ] [ 2 ] Soil is stockpiled for later use in landscaping or restoration of the region following the removal of construction infrastructure. [ 3 ] Before re-use, stockpiled soil may be tested for contamination. [ 4 ] [ 5 ]
https://en.wikipedia.org/wiki/Soil_stockpile
The thermal properties of soil are a component of soil physics that has found important uses in engineering, climatology and agriculture. These properties influence how energy is partitioned in the soil profile. While related to soil temperature, they are more accurately associated with the transfer of energy (mostly in the form of heat) throughout the soil, by radiation, conduction and convection. The main soil thermal properties are the volumetric heat capacity, the thermal conductivity, and the thermal diffusivity.

It is hard to generalize about the soil thermal properties at a given location, because they are in a constant state of flux from diurnal and seasonal variations. Apart from the basic soil composition, which is constant at one location, soil thermal properties are strongly influenced by the volumetric water content, the volume fraction of solids, and the volume fraction of air. Air is a poor thermal conductor and reduces the effectiveness of the solid and liquid phases in conducting heat. While the solid phase has the highest conductivity, it is the variability of soil moisture that largely determines thermal conductivity. As such, soil moisture properties and soil thermal properties are very closely linked and are often measured and reported together. Temperature variations are most extreme at the surface of the soil, and these variations are transferred to subsurface layers at reduced rates as depth increases. Additionally, there is a time delay before maximum and minimum temperatures are reached at increasing soil depth (sometimes referred to as thermal lag).

One possible way of assessing soil thermal properties is the analysis of soil temperature variations versus depth, using Fourier's law: Q = −λ·dT/dz, where Q is the heat flux or rate of heat transfer per unit area (J·m−2·s−1 or W·m−2), λ is the thermal conductivity (W·m−1·K−1), and dT/dz is the gradient of temperature (change in temperature with depth, K·m−1).

The most commonly applied method for measuring soil thermal properties is to perform in-situ measurements using non-steady-state probe systems, or heat probes. The single probe method employs a heat source inserted into the soil, with heat energy applied continuously at a given rate. The thermal properties of the soil can be determined by analyzing the temperature response adjacent to the heat source via a thermal sensor. This method reflects the rate at which heat is conducted away from the probe. The limitation of this device is that it measures thermal conductivity only. Applicable standards are the IEEE Guide for Soil Thermal Resistivity Measurements (IEEE Standard 442-1981) and ASTM D 5334-08, Standard Test Method for Determination of Thermal Conductivity of Soil and Soft Rock by Thermal Needle Probe Procedure.

After further research, the dual-probe heat-pulse technique was developed. It consists of two parallel needle probes separated by a distance (r). One probe contains a heater and the other a temperature sensor. The dual-probe device is inserted into the soil, a heat pulse is applied, and the temperature sensor records the response as a function of time; that is, a heat pulse is sent from the probe across the soil (r) to the sensor. The great benefit of this device is that it measures both thermal diffusivity and volumetric heat capacity. From these, thermal conductivity can be calculated, meaning the dual probe can determine all the main soil thermal properties. Potential drawbacks of the heat-pulse technique have been noted.
These include the small measuring volume of soil, as well as measurements being sensitive to probe-to-soil contact and sensor-to-heater spacing.

Remote sensing from satellites and aircraft has greatly enhanced how the variation in soil thermal properties can be identified and utilized to benefit many aspects of human endeavor. While remote sensing of reflected light from surfaces does indicate the thermal response of the topmost layers of soil (a few molecular layers thick), it is the thermal infrared (TIR) wavelengths, which carry information on energy variations extending to shallow depths below the ground surface, that are of most interest. A thermal sensor can detect variations in heat transfer into and out of near-surface layers caused by external heating through the thermal processes of conduction, convection, and radiation. Microwave remote sensing from satellites has also proven useful, as it has the advantage over TIR of not being affected by cloud cover.

The various methods of measuring soil thermal properties have been utilized to assist in diverse fields such as: the expansion and contraction of construction materials, especially in freezing soils; the longevity and efficiency of gas pipes or electrical cables buried in the ground; energy conservation schemes; the timing of planting in agriculture to ensure optimum seedling emergence and crop growth; and the measurement of greenhouse gas emissions, as heat affects the liberation of carbon dioxide from soil. Soil thermal properties are also becoming important in areas of environmental science such as determining water movement in radioactive waste and in locating buried land mines.

The thermal effusivity of soil enables the ground to be used for underground thermal energy storage. [ 1 ] Solar energy can be recycled from summer to winter by using the ground as a long-term store of heat energy before being retrieved by ground source heat pumps in winter. Changes in the amount of dissolved organic carbon and soil organic carbon within the soil can affect its ability to respire, either increasing or decreasing the soil's carbon uptake. [ 2 ] Furthermore, MCS design criteria for shallow-loop ground source heat pumps require an accurate in situ thermal conductivity reading. [ 3 ] This can be done by using the above-mentioned thermal heat probe to determine soil thermal conductivity across the site accurately.
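The Fourier's law relation given earlier in this article translates directly into code. A minimal sketch (the numerical values are assumptions for illustration):

```python
# Fourier's law for one-dimensional conductive heat flux in soil: Q = -λ · dT/dz.
def heat_flux(thermal_conductivity_w_mk: float, dT: float, dz: float) -> float:
    """Heat flux in W/m²; with z increasing downward, positive Q means downward flow."""
    return -thermal_conductivity_w_mk * dT / dz

# Hypothetical numbers: temperature drops 2 K over the top 0.1 m of a soil with
# λ = 1.5 W/(m·K), giving a downward flux of 30 W/m².
print(heat_flux(1.5, -2.0, 0.1))
```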
https://en.wikipedia.org/wiki/Soil_thermal_properties
Soil vapor extraction (SVE) is a physical treatment process for in situ remediation of volatile contaminants in vadose zone (unsaturated) soils (EPA, 2012). SVE (also referred to as in situ soil venting or vacuum extraction) is based on mass transfer of contaminant from the solid (sorbed) and liquid (aqueous or non-aqueous) phases into the gas phase, with subsequent collection of the gas-phase contamination at extraction wells. Extracted contaminant mass in the gas phase (and any condensed liquid phase) is treated in aboveground systems. In essence, SVE is the vadose zone equivalent of the pump-and-treat technology for groundwater remediation. SVE is particularly amenable to contaminants with higher Henry's Law constants, including various chlorinated solvents and hydrocarbons. SVE is a well-demonstrated, mature remediation technology [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] and has been identified by the U.S. Environmental Protection Agency (EPA) as a presumptive remedy. [ 8 ] [ 9 ] [ 10 ]

The soil vapor extraction remediation technology uses vacuum blowers and extraction wells to induce gas flow through the subsurface, collecting contaminated soil vapor, which is subsequently treated aboveground. SVE systems can rely on gas inflow through natural routes, or specific wells may be installed for (forced or natural) gas inflow. The vacuum extraction of soil gas induces gas flow across a site, increasing the mass transfer driving force from the aqueous (soil moisture), non-aqueous (pure phase), and solid (soil) phases into the gas phase. Air flow across a site is thus a key aspect, but soil moisture and subsurface heterogeneity (i.e., a mixture of low and high permeability materials) can result in less gas flow across some zones. In some situations, such as enhancement of monitored natural attenuation, a passive SVE system that relies on barometric pumping may be employed. [ 11 ] [ 12 ]

SVE has several advantages as a vadose zone remediation technology. The system can be implemented with standard wells and off-the-shelf equipment (blowers, instrumentation, vapor treatment, etc.). SVE can also be implemented with a minimum of site disturbance, primarily involving well installation and minimal aboveground equipment. Depending on the nature of the contamination and the subsurface geology, SVE has the potential to treat large soil volumes at reasonable cost.

The soil gas (vapor) that is extracted by the SVE system generally requires treatment prior to discharge back into the environment. The aboveground treatment is primarily for a gas stream, although condensation of liquid must be managed (and in some cases may specifically be desired). A variety of treatment techniques are available for aboveground treatment [ 13 ] and include thermal destruction (e.g., direct flame thermal oxidation, catalytic oxidizers), adsorption (e.g., granular activated carbon, zeolites, polymers), biofiltration, non-thermal plasma destruction, photolytic/photocatalytic destruction, membrane separation, gas absorption, and vapor condensation. The most commonly applied aboveground treatment technologies are thermal oxidation and granular activated carbon adsorption. The selection of a particular aboveground treatment technology depends on the contaminant, concentrations in the offgas, throughput, and economic considerations. The effectiveness of SVE, that is, the rate and degree of mass removal, depends on a number of factors that influence the transfer of contaminant mass into the gas phase.
The effectiveness of SVE is a function of the contaminant properties (e.g., Henry's Law constant, vapor pressure, boiling point, adsorption coefficient), the temperature in the subsurface, the vadose zone soil properties (e.g., soil grain size, soil moisture content, soil permeability, soil carbon content), subsurface heterogeneity, and the air flow driving force (applied pressure gradient). As an example, a residual quantity of a highly volatile contaminant (such as trichloroethene) in a homogeneous sand with high permeability and low carbon content (i.e., low/negligible adsorption) will be readily treated with SVE. In contrast, a heterogeneous vadose zone with one or more clay layers containing residual naphthalene would require a longer treatment time and/or SVE enhancements. SVE effectiveness issues include tailing and rebound, which result from contaminated zones with lower air flow (i.e., low permeability zones or zones of high moisture content) and/or lower volatility (or higher adsorption). Recent work at U.S. Department of Energy sites has investigated layering and low permeability zones in the subsurface and how they affect SVE operations. [ 14 ] [ 15 ]

Enhancements for improving the effectiveness of SVE can include directional drilling, pneumatic and hydraulic fracturing, and thermal enhancement (e.g., hot air or steam injection). [ 16 ] [ 17 ] [ 18 ] Directional drilling and fracturing enhancements are generally intended to improve the gas flow through the subsurface, especially in lower permeability zones. Thermal enhancements such as hot air or steam injection increase the subsurface soil temperature, thereby improving the volatility of the contamination. In addition, injection of hot (dry) air can remove soil moisture and thus improve the gas permeability of the soil. Additional thermal technologies (such as electrical resistance heating, six-phase soil heating, radio-frequency heating, or thermal conduction heating) can be applied to the subsurface to heat the soil and volatilize/desorb contaminants, but these are generally viewed as separate technologies (versus an SVE enhancement) that may use vacuum extraction (or other methods) for collecting soil gas.

On selection as a remedy, implementation of SVE involves the following elements: system design, operation, optimization, performance assessment, and closure. Several guidance documents provide information on these implementation aspects. EPA and U.S. Army Corps of Engineers (USACE) guidance documents [ 19 ] [ 20 ] [ 21 ] establish an overall framework for design, operation, optimization, and closure of an SVE system. The Air Force Center for Engineering and the Environment (AFCEE) guidance [ 22 ] presents actions and considerations for SVE system optimization, but has limited information related to approaches for SVE closure and meeting remediation goals. Guidance from the Pacific Northwest National Laboratory (PNNL) [ 23 ] supplements these documents by discussing specific actions and decisions related to SVE optimization, transition, and/or closure. Design and operation of an SVE system is relatively straightforward, with the major uncertainties having to do with subsurface geology/formation characteristics and the location of contamination. As time goes on, it is typical for an SVE system to exhibit a diminishing rate of contaminant extraction due to mass transfer limitations or removal of contaminant mass.
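As a rough numerical illustration of the Henry's Law dependence discussed above, equilibrium partitioning between soil water and soil gas can be sketched as follows. The concentrations are hypothetical, and the dimensionless constant used for trichloroethene is only an order-of-magnitude figure (about 0.4 near room temperature):

```python
# Equilibrium soil-gas concentration from the dimensionless Henry's Law constant,
# a rough indicator of how amenable a dissolved contaminant is to vapor extraction.
def gas_phase_concentration(c_aqueous_mg_per_l: float, henry_dimensionless: float) -> float:
    """C_gas = H · C_aq, where H = C_gas / C_aq at equilibrium (dimensionless)."""
    return henry_dimensionless * c_aqueous_mg_per_l

print(gas_phase_concentration(10.0, 0.4))   # 4.0 mg/L in the soil gas at equilibrium
```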
Performance assessment is a key aspect in providing input for decisions about whether the system should be optimized, terminated, or transitioned to another technology to replace or augment SVE. Assessments of rebound and mass flux [ 24 ] [ 25 ] [ 23 ] provide approaches to evaluate system performance and obtain information on which to base decisions. Several technologies are related to soil vapor extraction. As noted above, various soil-heating remediation technologies (e.g., electrical resistive heating, in situ vitrification) require a soil gas collection component, which may take the form of SVE and/or a surface barrier (i.e., a hood). Bioventing is a related technology, the goal of which is to introduce additional oxygen (or possibly other reactive gases) into the subsurface to stimulate biological degradation of the contamination. In situ air sparging is a remediation technology for treating contamination in groundwater: air is injected and "sparged" through the groundwater and then collected via soil vapor extraction wells.
https://en.wikipedia.org/wiki/Soil_vapor_extraction
Soil zoology or pedozoology is the study of animals living fully or partially in the soil (soil fauna). The field of study was developed in the 1940s by Mercury Ghilarov in Russia. Ghilarov noted inverse relationships between the size and the numbers of soil organisms. He also suggested that soil includes water, air and solid phases, and that soil may have provided the transitional environment between aquatic and terrestrial life. The phrase was apparently first used in the English-speaking world at a conference of soil zoologists presenting their research at the University of Nottingham, UK, in 1955. [ 1 ]
https://en.wikipedia.org/wiki/Soil_zoology
The Sokal affair, also known as the Sokal hoax, [ 1 ] was a demonstrative scholarly hoax performed by Alan Sokal, a physics professor at New York University and University College London. In 1996, Sokal submitted an article to Social Text, an academic journal of cultural studies. The submission was an experiment to test the journal's intellectual rigor, specifically to investigate whether "a leading North American journal of cultural studies—whose editorial collective includes such luminaries as Fredric Jameson and Andrew Ross—[would] publish an article liberally salted with nonsense if (a) it sounded good and (b) it flattered the editors' ideological preconceptions." [ 2 ]

The article, "Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity", [ 3 ] was published in the journal's Spring/Summer 1996 "Science Wars" issue. It proposed that quantum gravity is a social and linguistic construct. The journal did not practice academic peer review at the time, [ 4 ] so it did not submit the article for outside expert review by a physicist. [ 3 ] [ 5 ] Three weeks after its publication in May 1996, Sokal revealed in the magazine Lingua Franca that the article was a hoax. [ 2 ]

The hoax caused controversy about the scholarly merit of commentary on the physical sciences by those in the humanities; the influence of postmodern philosophy on social disciplines in general; and academic ethics, including whether Sokal was wrong to deceive the editors or readers of Social Text and whether Social Text had abided by proper scientific ethics. In 2008, Sokal published Beyond the Hoax, which revisited the history of the hoax and discussed its lasting implications.

In an interview on the U.S. radio program All Things Considered, Sokal said he was inspired to submit the bogus article after reading Higher Superstition (1994), in which authors Paul R. Gross and Norman Levitt claim that some humanities journals would publish anything as long as it had "the proper leftist thought" and quoted (or was written by) well-known leftist thinkers. [ 6 ] [ a ] Gross and Levitt had been defenders of the philosophy of scientific realism, opposing postmodernist academics who questioned scientific objectivity. They asserted that anti-intellectual sentiment in liberal arts departments (especially English departments) caused the increase of deconstructionist thought, which eventually resulted in a deconstructionist critique of science. They saw the critique as a "repertoire of rationalizations" for avoiding the study of science. [ 7 ]

Sokal reasoned that if the presumption of editorial laziness was correct, the nonsensical content of his article would be irrelevant to whether the editors would publish it. What would matter would be ideological obsequiousness, fawning references to deconstructionist writers, and sufficient quantities of the appropriate jargon. After the article was published and the hoax revealed, he wrote:

The results of my little experiment demonstrate, at the very least, that some fashionable sectors of the American academic Left have been getting intellectually lazy. The editors of Social Text liked my article because they liked its conclusion: that "the content and methodology of postmodern science provide powerful intellectual support for the progressive political project" [sec. 6]. They apparently felt no need to analyze the quality of the evidence, the cogency of the arguments, or even the relevance of the arguments to the purported conclusion. [ 8 ]
[ 8 ] "Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity" [ 3 ] proposed that quantum gravity has progressive political implications, and that the " morphogenetic field " could be a valid theory of quantum gravity. (A morphogenetic field is a concept adapted by Rupert Sheldrake in a way that Sokal characterized in the affair's aftermath as "a bizarre New Age idea". [ 2 ] ) Sokal wrote that the concept of "an external world whose properties are independent of any individual human being" was "dogma imposed by the long post-Enlightenment hegemony over the Western intellectual outlook". After referring skeptically to the "so-called scientific method", the article declared that "it is becoming increasingly apparent that physical 'reality ' " is fundamentally "a social and linguistic construct." It went on to state that because scientific research is "inherently theory-laden and self-referential", it "cannot assert a privileged epistemological status with respect to counterhegemonic narratives emanating from dissident or marginalized communities", and that therefore a "liberatory science" and an "emancipatory mathematics", spurning "the elite caste canon of 'high science ' ", needed to be established for a "postmodern science [that] provide[s] powerful intellectual support for the progressive political project." Moreover, the article's footnotes conflate academic terms with sociopolitical rhetoric, e.g.: Just as liberal feminists are frequently content with a minimal agenda of legal and social equality for women and " pro-choice ", so liberal (and even some socialist ) mathematicians are often content to work within the hegemonic Zermelo–Fraenkel framework (which, reflecting its nineteenth-century liberal origins, already incorporates the axiom of equality) supplemented only by the axiom of choice . Sokal submitted the article to Social Text , whose editors were collecting articles for the "Science Wars" issue. "Transgressing the Boundaries" was notable as an article by a natural scientist; biologist Ruth Hubbard also had an article in the issue. [ 9 ] Later, after Sokal revealed the hoax in Lingua Franca , Social Text 's editors wrote that they had requested editorial changes that Sokal refused to make, [ 5 ] and had had concerns about the quality of the writing: "We requested him (a) to excise a good deal of the philosophical speculation and (b) to excise most of his footnotes." [ 10 ] Still, despite calling Sokal a "difficult, uncooperative author", and noting that such writers were "well known to journal editors", based on Sokal's credentials Social Text published the article in the May 1996 Spring/Summer "Science Wars" issue. [ 5 ] The editors did not seek peer review of the article by physicists or otherwise; they later defended this decision on the basis that Social Text was a journal of open intellectual inquiry and the article was not offered as a contribution to physics. [ 5 ] In the article "A Physicist Experiments With Cultural Studies" in the May 1996 issue of Lingua Franca , Sokal revealed that "Transgressing the Boundaries" was a hoax and concluded that Social Text "felt comfortable publishing an article on quantum physics without bothering to consult anyone knowledgeable in the subject" because of its ideological proclivities and editorial bias. 
In their defense, Social Text's editors said they believed that Sokal's essay "was the earnest attempt of a professional scientist to seek some kind of affirmation from postmodern philosophy for developments in his field" and that "its status as parody does not alter, substantially, our interest in the piece, itself, as a symptomatic document." [ 11 ] Besides criticizing his writing style, Social Text's editors accused Sokal of behaving unethically in deceiving them. [ 4 ]

Sokal said the editors' response demonstrated the problem that he sought to identify. Social Text, as an academic journal, published the article not because it was faithful, true, and accurate to its subject, but because an "academic authority" had written it and because of its obscure writing. The editors said they considered it poorly written but published it because they felt Sokal was an academic seeking their intellectual affirmation. Sokal remarked:

My goal isn't to defend science from the barbarian hordes of lit crit (we'll survive just fine, thank you), but to defend the Left from a trendy segment of itself. ... There are hundreds of important political and economic issues surrounding science and technology. Sociology of science, at its best, has done much to clarify these issues. But sloppy sociology, like sloppy science, is useless, or even counterproductive. [ 5 ]

Social Text's response revealed that none of the editors had suspected Sokal's piece was a parody. Instead, they speculated that Sokal's admission "represented a change of heart, or a folding of his intellectual resolve". Sokal found further humor in the idea that the article's absurdity was hard to spot:

In the second paragraph I declare without the slightest evidence or argument, that "physical 'reality' (note the scare quotes) ... is at bottom a social and linguistic construct." Not our theories of physical reality, mind you, but the reality itself. Fair enough. Anyone who believes that the laws of physics are mere social conventions is invited to try transgressing those conventions from the windows of my apartment. I live on the twenty-first floor. [ 12 ]

In 1997, Sokal and Jean Bricmont co-wrote Impostures intellectuelles (published in the US as Fashionable Nonsense: Postmodern Intellectuals' Abuse of Science and in the UK as Intellectual Impostures, 1998). [ 13 ] The book featured analysis of extracts from established intellectuals' writings that Sokal and Bricmont claimed misused scientific terminology. [ 14 ] It closed with a critical summary of postmodernism and criticism of the strong programme of social constructionism in the sociology of scientific knowledge. [ 15 ] In 2008, Sokal published a follow-up book, Beyond the Hoax, which revisited the history of the hoax and discussed its lasting implications. [ 16 ]

The French philosopher Jacques Derrida, whose 1966 statement about Einstein's theory of relativity was quoted in Sokal's paper, was singled out for criticism, particularly in U.S. newspaper coverage of the hoax. [ 17 ] [ 18 ] One weekly magazine used two images of him, a photo and a caricature, to illustrate a "dossier" on Sokal's paper. [ 18 ]
Arkady Plotnitsky commented: [ 17 ]

Even given Derrida's status as an icon of intellectual controversy on the Anglo-American cultural scene, it is remarkable that out of thousands of pages of Derrida's published works, a single extemporaneous remark on relativity made in 1966 (before Derrida was "the Derrida" and, in a certain sense, even before "deconstruction") ... is made to stand for nearly all of deconstructive or even postmodernist (not a term easily, if at all, applicable to Derrida) treatments of science.

Derrida later responded to the hoax in "Sokal et Bricmont ne sont pas sérieux" ("Sokal and Bricmont Aren't Serious"), first published on November 20, 1997, in Le Monde. He called Sokal's action "sad" for having trivialized Sokal's mathematical work and "ruining the chance to carefully examine controversies" about scientific objectivity. [ 18 ] Derrida then faulted Sokal and Bricmont for what he considered "an act of intellectual bad faith" in their follow-up book, Impostures intellectuelles: they had published two articles almost simultaneously, one in English in The Times Literary Supplement on October 17, 1997, [ 19 ] and one in French in Libération on October 18–19, 1997; [ 20 ] while the two articles were almost identical, they differed in how they treated Derrida. The English-language article had a list of French intellectuals who were not included in Sokal and Bricmont's book: "Such well-known thinkers as Althusser, Barthes, and Foucault—who, as readers of the TLS will be well aware, have always had their supporters and detractors on both sides of the Channel—appear in our book only in a minor role, as cheerleaders for the texts we criticize." The French-language list, however, included Derrida: "Des penseurs célèbres tels qu'Althusser, Barthes, Derrida et Foucault sont essentiellement absents de notre livre" ("Famous thinkers such as Althusser, Barthes, Derrida and Foucault are essentially absent from our book").

According to Brian Reilly, Derrida may also have been sensitive to another difference between the French and English versions of Impostures intellectuelles. In the French, his citation from the original hoax article is said to be an "isolated" instance of abuse, [ 21 ] whereas the English text adds a parenthetical remark that Derrida's work contained "no systematic misuse (or indeed attention to) science". [ 22 ] [ 23 ] Sokal and Bricmont insisted that the difference between the articles was "banal". [ 24 ] Nevertheless, Derrida concluded that Sokal was not serious in his method, but had used the spectacle of a "quick practical joke" to displace the scholarship Derrida believed the public deserved. [ 25 ]

Sociologist Stephen Hilgartner, chairman of Cornell University's science and technology studies department, wrote "The Sokal Affair in Context" (1997), [ 26 ] comparing Sokal's hoax to "Confirmational Response: Bias Among Social Work Journals" (1990), an article by William M. Epstein published in Science, Technology, & Human Values. [ 27 ] Epstein used a method similar to Sokal's, submitting fictitious articles to real academic journals to measure their response. Though much more systematic than Sokal's work, it received scant media attention.
Hilgartner argued that the "asymmetric" effect of the successful Sokal hoax compared with Epstein's experiment cannot be attributed to its quality, but that "[t]hrough a mechanism that resembles confirmatory bias, audiences may apply less stringent standards of evidence and ethics to attacks on targets that they are predisposed to regard unfavorably." [ 26 ] As a result, according to Hilgartner, though competent in terms of method, Epstein's experiment was largely muted by the more socially accepted social work discipline he critiqued, while Sokal's attack on cultural studies, despite lacking experimental rigor, was accepted. Hilgartner also argued that Sokal's hoax reinforced the views of well-known pundits such as George Will and Rush Limbaugh, so that his opinions were amplified by media outlets predisposed to agree with his argument. [ 28 ]

The Sokal affair extended from academia to the public press. Anthropologist Bruno Latour, who was criticized in Fashionable Nonsense, described the scandal as a "tempest in a teacup". Gabriel Stolzenberg, a retired Northeastern University mathematician turned social scientist, wrote essays criticizing the statements of Sokal and his allies, [ 29 ] arguing that they insufficiently grasped the philosophy they criticized, rendering their criticism meaningless. In Social Studies of Science, Bricmont and Sokal responded to Stolzenberg, [ 30 ] denouncing his representations of their work and criticizing his commentary about the "strong programme" of the sociology of science. Stolzenberg replied in the same issue that their critique and allegations of misrepresentation were based on misreadings. He advised readers to slowly and skeptically examine the arguments of each party, bearing in mind that "the obvious is sometimes the enemy of the true". [ 31 ] In her 1998 article "The Sokal Hoax: At Whom Are We Laughing?", philosopher of science Mara Beller compared the "awe" physicists feel for Bohr's obscurity to their "contempt" for Derrida's density. [ 32 ]

In 2009, Cornell sociologist Robb Willer performed an experiment in which undergraduate students read Sokal's paper and were told either that it was written by another student or that it was by a famous academic. He found that students who believed the paper's author was a high-status intellectual rated it higher in quality and intelligibility. [ 33 ] In October 2021, the scholarly journal Higher Education Quarterly published a bogus article "authored" by "Sage Owens" and "Kal Avers-Lynde III". The initials stand for "Sokal III". [ 34 ] The Quarterly retracted the article. [ 35 ] The author Ali Hazelwood's 2023 novel Love, Theoretically features a character who performs a similar hoax, which Hazelwood said was inspired by this one; in the novel, the character is a physicist, and his false contribution is to a theoretical physics journal. [ 36 ]
https://en.wikipedia.org/wiki/Sokal_affair
The Sokolov–Ternov effect is the self-polarization of relativistic electrons or positrons moving at high energy in a magnetic field. The self-polarization occurs through the emission of spin-flip synchrotron radiation. The effect was predicted by Igor Ternov, and the prediction was rigorously justified by Arseny Sokolov using exact solutions to the Dirac equation. [ 1 ] [ 2 ]

An electron in a magnetic field can have its spin oriented in the same ("spin up") or in the opposite ("spin down") direction with respect to the direction of the magnetic field (which is assumed to be oriented "up"). The "spin down" state has a higher energy than the "spin up" state. The polarization arises because the rate of transition through emission of synchrotron radiation to the "spin down" state is slightly greater than the rate of transition to the "spin up" state. As a result, an initially unpolarized beam of high-energy electrons circulating in a storage ring will, after a sufficiently long time, have its spins oriented in the direction opposite to the magnetic field. Saturation is not complete and is explicitly described by the formula [ 3 ]

P(t) = A·(1 − e^(−t/τ))

where A = 8√3/15 ≈ 0.924 is the limiting degree of polarization (92.4%) and τ is the relaxation time:

τ = A · (4πε0)ħ²/(e²mc) · (H0/H)³ · (mc²/E)²

Here A is as before, m and e are the mass and charge of the electron, ε0 is the vacuum permittivity, ħ is the reduced Planck constant, c is the speed of light, H0 ≈ 4.414 × 10^13 gauss is the Schwinger field, H is the magnetic field, and E is the electron energy. The limiting degree of polarization A is less than one due to the existence of spin–orbital energy exchange, which allows transitions to the "spin up" state (with a probability 25.25 times smaller than that of transition to the "spin down" state). Typical relaxation times are on the order of minutes and hours. Thus, producing a highly polarized beam requires a sufficiently long time and the use of storage rings.

The self-polarization effect for positrons is similar, with the only difference that positrons tend to have their spins oriented parallel to the direction of the magnetic field. [ 4 ] The Sokolov–Ternov effect was experimentally observed in the USSR, France, Germany, the United States, Japan, and Switzerland in storage rings with electrons of energy 1–50 GeV. [ 3 ] [ 5 ] The effect of radiative polarization provides a unique capability for creating polarized beams of high-energy electrons and positrons that can be used for various experiments. The effect has also been related to the Unruh effect, which, under experimentally achievable conditions, is up to now too small to be observed. The equilibrium polarization given by Sokolov and Ternov acquires corrections when the orbit is not perfectly planar; the formula has been generalized by Derbenev and Kondratenko and others. [ 6 ]
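The saturation formula above is easy to evaluate numerically. A sketch with an assumed relaxation time of one hour (typical values, as noted, range from minutes to hours):

```python
import math

# Build-up of radiative polarization toward the Sokolov-Ternov limit:
# P(t) = A * (1 - exp(-t / tau)), with A the limiting value from the text.
A = 8 * math.sqrt(3) / 15          # ≈ 0.924, limiting degree of polarization

def polarization(t_seconds: float, tau_seconds: float) -> float:
    return A * (1.0 - math.exp(-t_seconds / tau_seconds))

tau = 3600.0                       # assumed relaxation time: one hour
for t in (tau, 3 * tau, 5 * tau):
    print(f"t = {t/3600:.0f} h -> P = {polarization(t, tau):.3f}")
# After one relaxation time the beam is ~58% polarized; it approaches
# the 92.4% limit only after several relaxation times.
```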
https://en.wikipedia.org/wiki/Sokolov–Ternov_effect
Sokratis Famellos (Greek: Σωκράτης Φάμελλος; born 27 March 1966) is a Greek politician and former chemical engineer who has been the president of Syriza since November 2024. He previously served as de jure Leader of the Official Opposition in the Hellenic Parliament from 2023 to 2024. Famellos was born on 27 March 1966 in Athens but was raised in Thessaloniki. [1] He earned a diploma from the Aristotle University of Thessaloniki and a Master of Science in Environmental Planning and Management from the Hellenic Open University, and later worked as a chemical engineer. [2] Famellos was elected to the Hellenic Parliament representing Thessaloniki B in 2015. [1] He was reelected for the constituency in 2019 and served as the Alternate Minister for the Environment and Energy from November 2016 to July 2019. [2] Following the defeat of Syriza in the June 2023 Greek legislative election and the resignation of Alexis Tsipras as party chairman, Famellos was elected chairman of the Syriza parliamentary group on 3 July 2023. [3] Prior to the 2023 Syriza leadership election, there was speculation that Famellos would be a candidate, which he denied. [4] Under the new leader, Stefanos Kasselakis, Famellos retained his position as leader of the parliamentary group. He became the Leader of the Opposition in the Hellenic Parliament, as Kasselakis was not an MP. [5] On 27 August 2024, Famellos was dismissed from the position after refusing to resign. [6] Famellos stood in the 2024 Syriza leadership election and was elected president on 24 November after leading the first round with 49.41% of the votes. [7] He was congratulated by Prime Minister Kyriakos Mitsotakis on his election as president. [8] Famellos was married to Popi Karagiannidou until her death in 2018. [9] They had one son together. [9]
https://en.wikipedia.org/wiki/Sokratis_Famellos
Sol-air temperature (T_sol-air) is a variable used to calculate the cooling load of a building and determine the total heat gain through exterior surfaces. It is an improvement over the simple convective heat transfer relation

q/A = h_o (T_o − T_s)

where q/A is the heat flux into the surface (W/m²), h_o is the outside heat transfer coefficient (W/(m²·K)), T_o is the outdoor dry-bulb air temperature, and T_s is the surface temperature. This relation only takes into account the temperature difference and ignores two important parameters: 1) the solar radiative flux; and 2) infrared exchange with the sky. The concept of T_sol-air was thus introduced to enable these parameters to be included within an improved calculation. The following formula results:

T_sol-air = T_o + (a·I − ΔQ_ir) / h_o

where a is the solar absorptance of the surface (dimensionless), I is the global solar irradiance on the surface (W/m²), and ΔQ_ir is the extra infrared radiation lost because the sky temperature differs from the air temperature (W/m²). The value of T_sol-air just found can now be used to calculate the amount of heat transfer per unit area, as below:

q/A = h_o (T_sol-air − T_s)

An equivalent, and more useful, equation for the net heat loss across the whole construction is:

q/A = U_c (T_i − T_sol-air)

where U_c is the thermal transmittance (U-value) of the construction and T_i is the indoor temperature. By expanding the above equation through substituting T_sol-air, the following heat loss equation is derived:

q/A = U_c (T_i − T_o) − (U_c/h_o) [a·I − F_r·h_r·ΔT_(o−sky)]

where the sky term ΔQ_ir has been written as F_r·h_r·ΔT_(o−sky), with F_r the view factor of the surface to the sky, h_r the external radiative heat transfer coefficient, and ΔT_(o−sky) the difference between the outdoor air and sky temperatures. The above equation is used for opaque facades in [1] and renders the intermediate calculation of T_sol-air unnecessary. The main advantage of this latter approach is that it avoids the need for a different outdoor temperature node for each facade. Thus, the solution scheme is kept simple, and the solar and sky radiation terms from all facades can be aggregated and distributed to internal temperature nodes as gains/losses.
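As a quick illustration of these formulas, here is a minimal Python sketch that evaluates T_sol-air and the resulting heat flux; the surface properties and weather values are illustrative assumptions, not data from the cited source.

    # Sol-air temperature and heat flux, following the formulas above.
    def sol_air_temperature(T_o, a, I, dQ_ir, h_o):
        """T_sol-air = T_o + (a*I - dQ_ir)/h_o  (temperatures in degC, fluxes in W/m^2)."""
        return T_o + (a * I - dQ_ir) / h_o

    def heat_loss_per_area(U_c, T_i, T_sol_air):
        """q/A = U_c*(T_i - T_sol-air); negative values mean heat flows inward (a gain)."""
        return U_c * (T_i - T_sol_air)

    # Illustrative values: a dark wall (a = 0.7) in strong sun, light breeze.
    T_o = 32.0      # outdoor air temperature, degC
    I = 700.0       # solar irradiance on the facade, W/m^2
    a = 0.7         # solar absorptance (assumed)
    h_o = 17.0      # outside heat transfer coefficient, W/(m^2*K) (assumed)
    dQ_ir = 0.0     # sky term, often taken as ~0 for vertical walls (assumption)
    U_c = 0.5       # construction U-value, W/(m^2*K)
    T_i = 24.0      # indoor temperature, degC

    T_sa = sol_air_temperature(T_o, a, I, dQ_ir, h_o)
    print(f"T_sol-air = {T_sa:.1f} degC")                        # ~60.8 degC
    print(f"q/A = {heat_loss_per_area(U_c, T_i, T_sa):.1f} W/m^2")   # ~-18.4, i.e. a gain

The example shows why sol-air temperature matters: the sunlit wall behaves as if the outdoor air were roughly 29 K hotter than it actually is.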
https://en.wikipedia.org/wiki/Sol-air_temperature
A sol is a colloidal suspension of tiny solid particles [1] in a continuous liquid medium. Sols are stable, so they do not settle when left undisturbed, and they exhibit the Tyndall effect, the scattering of light by the particles in the colloid. The size of the particles can vary from 1 nm to 100 nm. Examples include blood, pigmented ink, cell fluids, paint, antacids, and mud. Artificial sols can be prepared by two main methods: dispersion and condensation. In the dispersion method, solid particles are reduced to colloidal dimensions through techniques such as ball milling and Bredig's arc method. In the condensation method, colloidal particles are built up from smaller molecules or ions through a chemical reaction. The stability of sols can be maintained through the use of dispersing agents, which prevent the particles from clumping together or settling out of the suspension. Sols are often used in the sol-gel process, in which a sol is converted into a gel through the addition of a crosslinking agent. In a sol, solid particles are dispersed in a liquid continuous phase, while in an emulsion, liquid droplets are dispersed in a liquid or semi-solid continuous phase.
https://en.wikipedia.org/wiki/Sol_(colloid)
A solar-assisted heat pump (SAHP) is a system that combines a heat pump and thermal solar panels and/or PV solar panels in a single integrated system. [1] Heat pumps require a low temperature heat source, which can be provided by solar energy. Typically, these two technologies are used separately (or only placed in parallel) to produce warm air or hot water. [2] In this system the solar thermal panel performs the function of the low temperature heat source, and the heat produced is used to feed the heat pump's evaporator. [3] The goal of this system is to achieve a high coefficient of performance (COP) and thus produce heat in a more efficient and less expensive way. Air source heat pumps that are preheated by solar air collectors have the additional benefit of lower maintenance, because the outside fan unit can be protected from the harsh winter environment. Solar PV energy can power the heat pump electrically to enable electrification of heating buildings [4] and greenhouses. [5] These systems enable electrification [6] of heating/cooling and are normally driven by economics [7] and decarbonization goals. [8] Such systems have been shown to be economic in the Middle East, [9] North America, [10] Asia [11] and Europe. [12] It is possible to use any type of solar thermal system with air or liquid collectors (sheet and tube, roll-bond, heat pipe, thermal plates) or hybrid collectors (mono/polycrystalline, thin film) in combination with the heat pump. The use of a hybrid panel is preferable because it allows covering a part of the electricity demand of the heat pump, reducing the power consumption and consequently the variable costs of the system. Solar air collectors operate at maximum efficiency when heating ambient air and thus are ideal for supplying warm air to air source heat pumps. For solar liquid-based systems, optimization of the operating conditions is the main challenge, because the performance of the two sub-systems follows two opposing trends: for example, decreasing the evaporation temperature of the working fluid increases the thermal efficiency of the solar panel but decreases the performance of the heat pump, and consequently the COP. [13] The target of the optimization is normally the minimization of the electrical consumption of the heat pump, or of the primary energy required by an auxiliary boiler which supplies the load not covered by a renewable source. For PV-powered heat pump systems the goal is still to reduce grid power, but there is an additional optimization to maximize self-sufficiency and self-consumption of PV and to manage the energy imported from and exported to the grid. [14] Best practices have been developed for modeling PV-powered heat pumps, which can be done with a range of open source software tools like TRNSYS, EnergyPlus and the System Advisor Model (SAM). [15] Solar-heated air source heat pumps are relatively simple to implement by connecting the outlet of the solar air collectors to the fan inlet of the heat pump. For liquid solar collectors, there are two possible configurations with heat pumps, which are distinguished by the presence or absence of an intermediate fluid that transports the heat from the panel to the heat pump. Machines called indirect-expansion mainly use water as a heat transfer fluid, mixed with an antifreeze (usually glycol) to avoid ice formation during the winter period.
The machines called direct-expansion place the refrigerant fluid directly inside the hydraulic circuit of the thermal panel, where the phase transition takes place. [13] This second configuration, even though it is more complex from a technical point of view, has several advantages: [16] [17] Generally speaking, the use of this integrated system is an efficient way to employ the heat produced by the thermal panels in the winter period, heat that normally would not be exploited because its temperature is too low. [3] In comparison with a heat-pump-only installation, it is possible to reduce the amount of electrical energy consumed by the machine as the weather evolves from the winter season to spring, and then finally to use only the thermal solar panels to produce all the required heat (only in the case of an indirect-expansion machine), thus saving on variable costs. [2] In comparison with a system with only thermal panels, it is possible to provide a greater part of the required winter heating using a non-fossil energy source. [18] Compared to geothermal heat pumps, the main advantage is that the installation of a piping field in the soil is not required, which results in a lower cost of investment (drilling accounts for about 50% of the cost of a geothermal heat pump system) and in more flexibility of machine installation, even in areas in which there is limited available space. Furthermore, there are no risks related to possible thermal impoverishment of the soil. [19] Similarly to air source heat pumps, solar-assisted heat pump performance is affected by atmospheric conditions, although this effect is less significant; performance is generally affected by varying solar radiation intensity rather than by air temperature oscillation. This produces a greater SCOP (seasonal COP). Additionally, the evaporation temperature of the working fluid is higher than in air source heat pumps, so in general the coefficient of performance is significantly higher. [16] In general, a heat pump can evaporate at temperatures below the ambient temperature. In a solar-assisted heat pump this brings the temperature of the thermal panels below ambient, and in this condition the thermal losses of the panels towards the environment become additional energy available to the heat pump. [20] [21] In this case the thermal efficiency of the solar panels can exceed 100%. Another free contribution in these low-temperature conditions is the possible condensation of water vapor on the surface of the panels, which provides additional heat to the heat transfer fluid, equal to the latent heat of condensation (normally a small part of the total heat collected by the solar panels). The simple configuration of a solar-assisted heat pump uses only solar panels as the heat source for the evaporator. Configurations with an additional heat source also exist. [2] The goal is to obtain further energy savings, but on the other hand the management and optimization of the system become more complex. The geothermal-solar configuration allows reducing the size of the piping field (and thus the investment) and regenerating the ground during summer with the heat collected from the thermal panels. This configuration is also known as a solar-assisted ground source heat pump (SAGSHP).
A well-specified and well-designed SAGSHP system [22] can achieve energy-neutral buildings (on an annual basis) as far north as Karlskrona in Sweden. The air-solar structure allows an acceptable heat input even during cloudy days, maintaining the compactness of the system and its ease of installation. As in regular air conditioners, one of the issues is to keep the evaporation temperature high, especially when the sunlight has low power and the ambient airflow is low.
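The opposing trends described above (collector efficiency improves, but heat pump COP worsens, as the evaporation temperature drops) can be illustrated with a small sweep. The sketch below uses a textbook linear flat-plate collector model and a fixed-fraction-of-Carnot COP; all coefficients are illustrative assumptions, not values from the cited studies.

    # Opposing trends in a solar-assisted heat pump: as the evaporation
    # temperature rises, the heat pump improves but the collector worsens.
    def collector_efficiency(T_fluid, T_amb=5.0, G=500.0, eta0=0.75, a1=6.0):
        """Linear flat-plate model: eta = eta0 - a1*(T_fluid - T_amb)/G.
        Values above eta0 are possible below ambient (losses become gains)."""
        return eta0 - a1 * (T_fluid - T_amb) / G

    def heat_pump_cop(T_evap, T_cond=45.0, carnot_fraction=0.45):
        """COP taken as a fixed fraction of the Carnot limit."""
        Te, Tc = T_evap + 273.15, T_cond + 273.15
        return carnot_fraction * Tc / (Tc - Te)

    for T_evap in range(-15, 35, 10):   # degC; ambient assumed 5 degC
        eta = collector_efficiency(T_evap)
        cop = heat_pump_cop(T_evap)
        print(f"T_evap={T_evap:>4} degC  collector eta={eta:.2f}  COP={cop:.2f}")
    # The optimum evaporation temperature balances the two curves and depends
    # on collector area, load and climate, as discussed above.

Running the sweep shows the collector efficiency climbing toward (and past) its nominal value at sub-ambient evaporation temperatures while the COP falls, which is precisely the trade-off the optimization must resolve.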
https://en.wikipedia.org/wiki/Solar-assisted_heat_pump
Solar-blind technology is a set of technologies for producing images without interference from the Sun. It works by using wavelengths of ultraviolet light that are totally absorbed by the ozone layer, so that sunlight at these wavelengths never reaches the lower atmosphere, yet still propagate through air over the shorter ranges at which such systems operate. Wavelengths from 240 to 280 nm are completely absorbed by the ozone layer. Elements of this technology are ultraviolet light sources, ultraviolet image detectors, and filters that only transmit the range of wavelengths that are blocked by ozone. [1] A system will also have a signal processing system and a way to display the resulting image. [2] Ultraviolet illumination can be produced from longer wavelengths using non-linear optical materials, for example a second-harmonic generator. Such materials must have a suitable birefringence in order to phase-match the frequency-doubled UV output. One compound in commercial use is L-arginine phosphate monohydrate, known as LAP. [3] Research is underway into substances that are strongly non-linear, have a suitable birefringence, are transparent in this part of the spectrum, and have a high degree of resistance to laser damage. [4] Normal glass does not transmit below 350 nm, so it is not used for optics in solar-blind systems. Instead, calcium fluoride, fused silica, and magnesium fluoride are used, as they are transparent at shorter wavelengths. [2] An optical filter can be used to block out visible light and near-ultraviolet light. It is important to have a high transmittance within the solar-blind spectrum but to strongly block the other wavelengths. [2] Interference filters can pass 25% of the wanted rays and reduce others by a factor of 1,000 to 10,000; however, they are unstable and have a narrow field of view. Absorption filters may only pass 10% of the wanted UV but can reject other wavelengths by a ratio of 10¹²; they can have a wide field of view and are stable. Semiconductor ultraviolet detectors are solid state and convert an ultraviolet photon into an electric pulse. If they are transparent to visible light, then they will not be sensitive to it. [2] Solar-blind imaging can be used to detect corona discharge in electrical infrastructure. Missile exhaust can be detected from the troposphere or the ground. Also, when looking down on the Earth from space, the Earth appears dark in this range, so rockets can be easily detected from above once they pass the ozone layer. [2] Israel, the People's Republic of China (PRC), Russia, South Africa, the United Kingdom (UK), and the United States (US) are developing this technology. [8]
https://en.wikipedia.org/wiki/Solar-blind_technology
A solar-powered desalination unit produces potable water from saline water through direct or indirect methods of desalination powered by sunlight. Solar energy is the most promising renewable energy source for desalination, due to its ability to drive the more popular thermal desalination systems directly through solar collectors and to drive physical and chemical desalination systems indirectly through photovoltaic cells. [1] Direct solar desalination produces distillate directly in the solar collector. An example is a solar still, which traps the Sun's energy to obtain freshwater through the processes of evaporation and condensation. Indirect solar desalination incorporates solar energy collection systems with conventional desalination systems such as multi-stage flash distillation, multiple-effect evaporation, freeze separation or reverse osmosis to produce freshwater. [2] One type of solar desalination unit is the solar still, which is similar to a condensation trap. A solar still is a simple way of distilling water, using the heat of the Sun to drive evaporation from humid soil and ambient air to cool a condenser film. Two basic types of solar stills are box and pit stills. In a pit still, impure water is contained outside the collector, where it is evaporated by sunlight shining through clear plastic. The pure water vapor condenses on the cool inside plastic surface and drips down from the weighted low point, where it is collected and removed. The box type is more sophisticated. The basic principles of solar water distillation are simple yet effective, as distillation replicates the way nature makes rain. The Sun's energy heats water to the point of evaporation. As the water evaporates, water vapor rises, condensing on the glass surface for collection. This process removes impurities such as salts and heavy metals and eliminates microbiological organisms. The end result is water cleaner than the purest rainwater. [citation needed] Indirect solar desalination systems comprise two sub-systems: a solar collection system and a desalination system. The solar collection system is used either to collect heat using solar collectors and supply it via a heat exchanger to a thermal desalination process, or to convert electromagnetic solar radiation to electricity using photovoltaic cells to power an electricity-driven desalination process. Osmosis is a natural phenomenon in which water passes through a membrane from a lower to a higher concentration solution. The flow of water can be reversed if a pressure larger than the osmotic pressure is applied on the higher-concentration side. In reverse osmosis (RO) desalination systems, seawater pressure is raised above the natural osmotic pressure, forcing pure water through membrane pores to the freshwater side. RO is the most common desalination process in terms of installed capacity, due to its superior energy efficiency compared to thermal desalination systems, despite requiring extensive water pre-treatment. Furthermore, part of the consumed mechanical energy can be reclaimed from the concentrated brine effluent with an energy recovery device. [1] Solar-powered RO desalination is common in demonstration plants due to the modularity and scalability of both photovoltaic (PV) and RO systems. A detailed economic analysis [3] and a thorough optimisation strategy [4] of PV-powered RO desalination have been carried out, with favorable results reported.
Economic and reliability considerations are the main challenges to improving PV-powered RO desalination systems. However, quickly dropping PV panel costs are making solar-powered desalination ever more feasible. A solar-powered desalination unit designed for remote communities has been tested in the Northern Territory of Australia. The "reverse-osmosis solar installation" (ROSI) uses membrane filtration to provide a reliable and clean drinking water stream from sources such as brackish groundwater. Solar energy overcomes the usually high energy operating costs as well as the greenhouse emissions of conventional reverse osmosis systems. ROSI can also remove trace contaminants, such as arsenic and uranium, that may cause health problems, and minerals such as calcium carbonate, which causes water hardness. [5] Project leader Dr Andrea Schaefer from the University of Wollongong's Faculty of Engineering said ROSI has the potential to bring clean water to remote communities throughout Australia that do not have access to a town water supply and/or the electricity grid. [5] Groundwater (which may contain dissolved salts or other contaminants) or surface water (which may have high turbidity or contain microorganisms) is pumped into a tank with an ultrafiltration membrane, which removes viruses and bacteria. This water is fit for cleaning and bathing. Ten percent of that water undergoes nanofiltration and reverse osmosis in the second stage of purification, which removes salts and trace contaminants, producing drinking water. A photovoltaic solar array tracks the Sun and powers the pumps needed to process the water, using the plentiful sunlight available in remote regions of Australia not served by the power grid. [6] Solar photovoltaic power is considered a viable option for powering a reverse osmosis desalination plant. The techno-economics, both in standalone mode and in PV-biodiesel hybrid mode, for capacities from 0.05 MLD to 300 MLD were examined by researchers at IIT Madras. As a technology demonstrator, a plant of 500 litres/day capacity has been designed, installed and made functional there. [7] While the intermittent nature of sunlight and its variable intensity throughout the day make desalination during nighttime challenging, several energy storage options can be used to permit 24-hour operation. Batteries can store solar energy for use at night. Thermal energy storage systems ensure constant performance at night or on cloudy days, improving overall efficiency. [8] Alternatively, stored gravitational energy can be harnessed to provide energy to a solar-powered reverse osmosis unit during non-sunlight hours. [citation needed]
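To put the osmotic pressure hurdle in numbers, here is a minimal sketch using the van 't Hoff approximation π = iMRT; treating seawater as roughly 0.6 mol/L NaCl is a common textbook simplification, not a figure from the cited sources.

    # Van 't Hoff estimate of seawater osmotic pressure: pi = i * M * R * T
    R = 8.314          # J/(mol*K), gas constant
    T = 298.15         # K, ~25 degC
    i = 2              # van 't Hoff factor for NaCl (Na+ and Cl-)
    M = 0.6 * 1000     # mol/m^3 (~0.6 mol/L NaCl, a rough stand-in for seawater)

    pi_pascal = i * M * R * T
    print(f"osmotic pressure ~ {pi_pascal/1e5:.0f} bar")   # ~30 bar

    # An RO pump must exceed this pressure before any permeate flows;
    # practical seawater RO plants typically operate well above it.
    applied = 60e5     # Pa, an illustrative operating pressure (60 bar)
    net_driving = (applied - pi_pascal) / 1e5
    print(f"net driving pressure at 60 bar applied: ~{net_driving:.0f} bar")

The roughly 30 bar result explains both the high pumping energy of RO and the value of energy recovery devices: most of the applied pressure leaves the membrane module still stored in the pressurized brine.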
https://en.wikipedia.org/wiki/Solar-powered_desalination_unit
The Solar Anomalous and Magnetospheric Particle Explorer (SAMPEX or Explorer 68) was a NASA solar and magnetospheric observatory and the first spacecraft in the Small Explorer program. It was launched into low Earth orbit on 3 July 1992 from Vandenberg Air Force Base (Western Test Range) aboard a Scout G-1 launch vehicle. SAMPEX was an international collaboration between NASA and the Max Planck Institute for Extraterrestrial Physics of Germany. [3] SAMPEX was the first of a series of spacecraft launched under the Small Explorer (SMEX) program for low-cost spacecraft. [4] The main objectives of the SAMPEX experiments were to obtain data over several continuous years on the anomalous components of cosmic rays, on solar energetic particle emissions from the Sun, and on precipitating magnetospheric relativistic electrons. The orbit of SAMPEX has an altitude of 512 × 687 km (318 × 427 mi) and an 81.70° inclination. The spacecraft uses an onboard 3-axis stabilized, solar-pointed/momentum-bias system with the pitch axis pointed towards the Sun. Solar panels provide power for operations, including 16.7 watts for the science instruments. An onboard data processing unit (DPU) preprocesses the science and other data and stores it in a Recorder/Processor/Packetizer (RPP) unit of about 65 Mb before transmitting in the S-band at a rate of 1.5 Mbit/s to the Wallops Flight Facility (WFF) station (or a back-up). The command memory can store at least a thousand commands. The science instruments generally point toward the local zenith, especially over the terrestrial poles, for optimal sampling of galactic and solar cosmic ray flux. Energetic magnetospheric particle precipitation is monitored at lower geomagnetic latitudes. [4] It carries four science instruments: (1) a low-energy ion composition analyzer (LICA); (2) a heavy ion large telescope (HILT); (3) a mass spectrometer telescope (MAST); and (4) a proton-electron telescope (PET). The estimated useful lifetime of the spacecraft was about three years; however, the data stream continued until 30 June 2004. In 1997, NASA Goddard transferred operation of SAMPEX to the Flight Dynamics and Control Laboratory (FDCL) housed within the Aerospace Engineering Department of the University of Maryland, College Park. [4] The spacecraft carried four instruments designed to measure the anomalous components of cosmic rays, emissions from solar energetic particles, and electron counts in Earth's magnetosphere. Built for a three-year mission, its science mission was ended on 30 June 2004. [5] Mission control for SAMPEX was handled by the Goddard Space Flight Center until October 1997, after which it was turned over to the Bowie State University Satellite Operations Control Center (BSOCC). [1] BSOCC, with funding assistance from The Aerospace Corporation, continued to operate the spacecraft after its science mission ended, using the spacecraft as an educational tool for its students while continuing to release science data to the public. [6] [7] The HILT experiment was designed to measure the charge, energy, and mass of cosmic rays in the energy range of about 8.0–310 MeV/nucleon. Specifically, the energy ranges were: helium (He): 3.9–90 MeV/nucleon; carbon (C): 7.2–160 MeV/nucleon; oxygen (O): 8.3–310 MeV/nucleon; neon (Ne): 9.1–250 MeV/nucleon; and iron (Fe): 11–90 MeV/nucleon.
The instrument consisted of (a) an array of position-sensitive proportional counters at the entrance, followed by (b) an ionization chamber and (c) another array of position-sensitive proportional counters just before (d) a coplanar, 10-element, solid-state detector array. The detectors were backed by (e) a large caesium iodide (CsI) scintillation counter viewed by four light-sensitive diodes. The geometric factor was as large as 35 cm²·sr. The two position-sensitive counters enabled computation of the exact length of the trajectory through the ionization chamber. Items (a), (b), and (c) were filled with flowing isobutane gas at a pressure of 75 Torr. The 8.5 kg (19 lb) of liquid isobutane was sufficient for three years of operation. The instrument was basically a dE/dx versus E system: dE/dx was provided by (a), (b), and (c), and E was provided by (d) and (e). The telemetered signals from all the sensors enabled accurate determination of isotopic mass, charge and energy. However, isotopic resolution was poor at the high-energy end of each band, especially for the heavier elements. Species-dependent fluxes were, however, readily computed even at the high-energy ends. [8] The LICA experiment was designed to measure 0.5–5 MeV/nucleon solar and magnetospheric ions (He through Ni) arriving from the zenith in twelve energy bands. The mass of an ion was determined from simultaneous measurements of its time of flight (ToF) across a path length of approximately 50 cm (20 in) and its residual kinetic energy in one of four 4 × 9 cm (1.6 × 3.5 in) silicon (Si) solid-state detectors. Ions passing through the 0.75 micrometre nickel entrance foils emitted secondary electrons, which a chevron microchannel plate assembly amplified to form the start-timing signal. A double entrance foil prevented single pinholes from allowing sunlight to enter the telescope and provided immunity to solar and geocoronal ultraviolet. Another foil and microchannel plate assembly in front of the solid-state detectors gave the stop-timing signal. Wedge-and-strip anodes on the front sides of the timing anodes determined where the ion passed through the foils and, therefore, its flight path length. The velocity determined from the path length, the ToF, and the residual energy measured by the solid-state detectors were combined to yield the mass of the ion with a resolution of about 1%, adequate to provide complete isotope separation. Corrections for the energy loss in the entrance foils gave the ion's incident energy. The geometric factor of the sensor was 0.8 cm²·sr and the field of view was 17° × 21°. On-board processing determined whether ions triggering LICA were protons, He nuclei, or more massive ions. Protons were counted in a rate channel and not further analyzed. Heavier nuclei were treated as low (He) or high (more massive than He) priority for transmission to the ground. The instrument data processing unit ensured that a sample of both priority events was telemetered, but that low-priority events did not crowd out the rarer heavy species. Processed flux rates versus energy of H (hydrogen), He, O, the Si group, and the Fe group were extracted every 15 seconds for transmission. Appropriate magnetic field models enabled specification of the atomic charge state by means of rigidity cut-off calculations. In addition, the proton cut-off versus energy during an orbit helped charge identification of the other species. On-board calibrations of the sensor were performed by command about once per week.
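The ToF × residual-energy mass determination described for LICA reduces, in the non-relativistic limit, to m = 2E(t/L)². Below is a minimal sketch of that kinematics; the example numbers are illustrative, and a real instrument pipeline also applies the foil energy-loss corrections mentioned above, which this ignores.

    import math

    AMU_MEV = 931.494     # atomic mass unit in MeV/c^2
    C_CM_NS = 29.9792458  # speed of light in cm/ns

    def ion_mass_amu(tof_ns, path_cm, residual_energy_mev):
        """Non-relativistic ToF mass: m = 2*E*(t/L)^2, returned in amu.
        In energy units: m*c^2 = 2*E*(c*t/L)^2."""
        c_over_v = (C_CM_NS * tof_ns) / path_cm
        return 2.0 * residual_energy_mev * c_over_v**2 / AMU_MEV

    # Illustrative example: an oxygen ion at ~1 MeV/nucleon over a 50 cm path.
    E = 16.0                                  # MeV of residual kinetic energy
    v = math.sqrt(2 * E / (16 * AMU_MEV))     # v/c for a 16 amu ion
    t = 50.0 / (v * C_CM_NS)                  # ToF in ns (~36 ns)
    print(f"ToF = {t:.1f} ns -> mass = {ion_mass_amu(t, 50.0, E):.2f} amu")

Since the mass scales with t², the quoted ~1% mass resolution demands sub-percent timing over tens of nanoseconds, which is why the microchannel-plate start/stop signals and the path-length correction from the wedge-and-strip anodes both matter.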
Data was stored in an on-board memory of 26.5 MB, which was then dumped twice daily over ground stations. [9] MAST was an 11-layer array of detectors, each of area >20 cm² (3.1 sq in), stacked one below the other. The first four of these, M1, M2, M3, and M4, were surface-barrier, one-dimensional, position-sensitive detectors, each having 92 coplanar, parallel electrode strips with 0.5 mm (0.020 in) pitch. The combination of these four layers enabled determination of the X-Y coordinates at two positions, and hence the exact trajectories of penetrating nuclei. Following these were two more surface-barrier detectors, D1 and D2. Further downstream were lithium-drifted solid-state detectors, D3 through D7. The areas and thicknesses of the detectors were as follows: M1–M4: 20 cm² (3.1 sq in), 115 micrometres; D1: 20 cm² (3.1 sq in), 175 micrometres; D2: 20 cm² (3.1 sq in), 500 micrometres; D3 through D7 had an area of 30 cm² (4.7 sq in), with thicknesses, respectively, of 1.8 mm (0.071 in), 3.0 mm (0.12 in), 6.0 mm (0.24 in) (a compound stack of two 3.0 mm detectors), 9.0 mm (0.35 in) (a compound stack of three 3.0 mm detectors), and 3.0 mm (0.12 in). The signal from the last-penetrated detector measured the residual energy E', and the upstream detectors provided dE/dx with abundant redundancy. The trajectory system, together with preflight calibrations at the Bevalac particle accelerator, enabled considerably more precision in isotopic mass determination (0.2 amu) than would otherwise have been possible, for the energy range of 10 MeV/nucleon to several hundred MeV/nucleon and the charge range 3 ≤ Z ≤ 28. The on-board DPU enabled down-linking of data from Z > 3 events on a priority basis. [10] PET consisted of an array of eight lithium-drifted solid-state detectors, together covering the energy range of 1–30 MeV for electrons, 18–85 MeV/nucleon for H and He, and 54–195 MeV/nucleon for the heavier elements. The geometric factor was about 1.0 cm²·sr. H and He could be tracked into the several hundred MeV/nucleon range, but with a reduced geometric factor of 0.3. The top-most detectors, P1 (convex) and P2 (concave), were each 2 mm (0.079 in) thick and had an area of 8.1 cm² (1.26 sq in). Downstream were the remaining flat detectors, P3 through P8, with the following dimensions: P3: 9.2 cm² (1.43 sq in), 15 mm (0.59 in) (a compound stack of five 3.0 mm detectors); P4–P8: 4.5 cm² (0.70 sq in), 3.0 mm (0.12 in). The instrument could be operated in a low-gain (high-Z) mode or, ordinarily, in a low-Z mode for observation of protons, electrons, and helium. The pulse height from the last-penetrated detector enabled determination of the total E, and the upstream detectors provided dE/dx with enough redundancy to enable accurate determination of particle type. The counting rate of P1 was recorded with a resolution of 0.1 seconds, enabling observation of rapid time variations in the flux of precipitating electrons above energies of 0.4 MeV. [11] SAMPEX collaborators included: [5] SAMPEX studied the energy, composition, and charge states of particles from supernova explosions in the distant reaches of the galaxy, from the heart of solar flares, and from the depths of nearby interstellar space. It also monitored closely the magnetospheric particle populations which plunge occasionally into the middle atmosphere of the Earth, thereby ionizing neutral gases and altering the atmospheric chemistry.
A key part of SAMPEX was to use the magnetic field of the Earth as an essential component of the measurement strategy: the Earth's field serves as a giant magnetic spectrometer to separate different energies and charge states of particles as SAMPEX executes its near-polar orbit. [12] In the nearly five years after its launch, which took place near a minimum of the solar cycle, SAMPEX carried out a wide range of observations and discoveries concerning solar, heliospheric, and magnetospheric energetic particles seen from its unique vantage point in a nearly polar, low Earth orbit. Since almost all of the processes studied are driven or heavily influenced by the solar activity cycle, the mission had the opportunity to fully characterize the solar-cycle dependence of a wide range of processes central to the goals of the NASA Office of Space Science's Sun-Earth Connections (SEC) theme, with further investigations planned as solar activity ramped up to its 11-year maximum. [12] Built for a three-year primary mission, the spacecraft continued to return science data until its reentry on 13 November 2012. [2]
https://en.wikipedia.org/wiki/Solar_Anomalous_and_Magnetospheric_Particle_Explorer
The Solar Decathlon Africa is an international competition that challenges collegiate teams to design and build houses powered exclusively by the sun. The winner of the competition is the team that scores the most points in ten contests. On November 15, 2016, the Moroccan Ministry of Energy, Mines, Water, and Sustainable Development; [2] the Moroccan Research Institute in Solar Energy and New Energies (IRESEN); [3] and the United States Department of Energy signed a memorandum of understanding to collaborate on the development of Solar Decathlon Africa, a competition integrating unique local and regional characteristics while following the philosophy, principles, and model of the U.S. Department of Energy Solar Decathlon. By agreement between the United States and Moroccan governments, the competition takes place in even years, alternating with the U.S.-based Solar Decathlon; the 2019 edition was planned for September 2019 in Ben Guerir, Morocco. Participating teams came from the United States, Malaysia, France, Senegal, Nigeria, Germany, Italy, India, Mali, Burkina Faso, Cameroon, Tanzania, the Democratic Republic of the Congo, the Pan-African University of Water and Energy Sciences, and Egypt, with several countries fielding more than one team.
https://en.wikipedia.org/wiki/Solar_Decathlon_Africa
The Solar Decathlon China (SDC) is a cooperative student competition in China focused on the design and construction of sustainable housing. It was instituted in 2011 during the Strategic Economic Dialogue between China and the United States. Competitions took place in 2013, 2018 and 2022. [ 1 ] The 2018 edition took place in Dezhou , in the province of Shandong . [ 1 ] The top finishers were: [ 1 ] The other participating teams were: [ 1 ] The first Solar Decathlon China was held in Datong, China, August 2–13, 2013. [ 1 ] The top finishers were: [ 1 ] The other participating teams were: [ 1 ]
https://en.wikipedia.org/wiki/Solar_Decathlon_China
The Solar Decathlon Europe (SDE) is an international student competition that challenges collegiate teams to design, build and operate highly efficient and innovative buildings powered by renewable energy. [3] The winner of the competition is the team able to score the most points in 10 contests. On Oct. 18, 2007, the Spanish and U.S. governments signed a memorandum of understanding in which the Spanish Ministry of Housing committed to organise and host a Solar Decathlon in Europe. [4] The agreement was signed in Washington, D.C., next to the Universidad Politécnica de Madrid's Casa Solar during the U.S. Department of Energy Solar Decathlon 2007 competition. The American signatory was Alexander A. Karsner, assistant secretary of the Office of Energy Efficiency and Renewable Energy of the U.S. Department of Energy, with Fernando Magro Fernández, undersecretary of housing of the Ministry of Housing, representing the Spanish government. [5] Modeled after the U.S. Department of Energy Solar Decathlon, the first Solar Decathlon Europe took place in Madrid, Spain, in June 2010. [6] Decathletes from 17 teams spent 10 days competing in the Villa Solar near the Royal Palace of Madrid (Palacio Real). [7] A combination of task completion, measurement, and jury scoring determined Solar Decathlon Europe's first champion: Virginia Polytechnic Institute and State University, with the Lumenhaus project. [8] The second edition of the Solar Decathlon Europe was held from Sept. 14–30, 2012, in Madrid, Spain, in the Casa de Campo, with 18 competitors. [19] Solar Decathlon Europe 2014 took place in Versailles, France, June 28–July 14, 2014. [8] Multi-institution entries included Universidad Técnica Federico Santa María, Valparaíso (Chile) with the University of La Rochelle – Espace Bois de l'IUT (France); the University of Angers (France) with Appalachian State University (United States); and the Rhode Island School of Design and Brown University (U.S.A.) with the University of Applied Sciences – Erfurt (Germany), along with several substitute teams. After the Solar Decathlon Europe in 2014, previous organisers, participants, supporters and decathletes worked to create a vehicle for the longevity of the Solar Decathlon in Europe. The culmination of this work was the creation of the Energy Endeavour Foundation (EEF) in 2016/2017, with the endorsement of the United States Department of Energy, to steward the Solar Decathlon in Europe. The EEF subsequently issued a Call for Cities for the 2019 edition, which was awarded to Szentendre, Hungary, in March 2017. From this point onward the Energy Endeavour Foundation has fulfilled its stewarding role to the organisers of the SDE editions. Drawing upon the input of the SDE Council of Experts, the EEF provides continuity from one SDE edition to the next. Solar Decathlon Europe 2019 took place in Szentendre, Hungary, July 12–28, 2019. [32] In July 2018 the Energy Endeavour Foundation issued the Call for Cities for the 2021 edition of the Solar Decathlon Europe (SDE21). In early 2019, the EEF designated the city of Wuppertal, Germany, as the host city for the SDE21, led by a team from the University of Wuppertal and the Wuppertal Institute for Climate, Environment and Energy. Due to the COVID-19 health crisis the Solar Decathlon Europe 2021 was postponed, and took place between June 10 and June 26, 2022. The SDE21>22 took place on the Utopiastadt Campus.
Utopiastadt participated in the "Solar Decathlon goes urban" concept of the 2021 competition. [34] The SDE21 Call for Teams was open until October 25, 2019, leading to the selection of 18 teams from 11 countries. This edition of the SDE focused on the requalification of urban environments, challenging the participating teams to resolve one of three possible urban scenarios: renovation and extension, closing gaps, and addition of stories. Two of the selected teams, SAB [51] from Bangkok University and Ur-Baan [52] from King Mongkut's University of Technology Thonburi, both from Bangkok, Thailand, were unable to participate onsite due to high transportation costs. [53] The final ranking was: [54] 1st place: Team RoofKIT, Karlsruhe Institute of Technology; 2nd place: Team VIRTUe, Eindhoven University of Technology; 3rd place (tie): Team SUM, Delft University of Technology, and Team AuRA, Grenoble National School of Architecture. The Call for Cities for the Solar Decathlon Europe 2023 was launched on July 14, 2020, by the Energy Endeavour Foundation. On April 7, 2021, the capital city of Romania, Bucharest, was designated as the host city for the SDE23. In January 2022, through a joint decision between the Energy Endeavour Foundation (governing body of the Solar Decathlon Europe) and the Solar Decathlon Bucharesti Association, EFdeN (SDE23 host city executives), the SDE23 edition was closed. The closure was a result of continued repercussions of the COVID pandemic, which created high uncertainty and volatility, with ensuing economic, social, and public health challenges.
https://en.wikipedia.org/wiki/Solar_Decathlon_Europe
The Solar Decathlon is an initiative of the United States Department of Energy (DOE) in which universities around the world compete in the design and construction of sustainable housing that runs entirely on solar energy. It is called a "decathlon" because the universities and their prototypes are evaluated on 10 criteria: architecture; engineering and construction; energy efficiency; energy consumption; comfort; sustainability; positioning; communications; urban design and feasibility; and innovation. The 2019 edition of the Solar Decathlon Latin America and Caribbean [1] will take place in Cali, Colombia. The first Solar Decathlon Latin America and Caribbean was held on the campus of Universidad del Valle in Santiago de Cali, Colombia, in December 2015. [2] [1]
https://en.wikipedia.org/wiki/Solar_Decathlon_Latin_America_and_Caribbean
The 2018 edition of the Solar Decathlon Middle East will take place in Dubai , United Arab Emirates . The teams selected to compete in Solar Decathlon Middle East 2018 are:
https://en.wikipedia.org/wiki/Solar_Decathlon_Middle_East
Solar Energy Materials and Solar Cells is a scientific journal published by Elsevier covering research related to solar energy materials and solar cells. According to the Journal Citation Reports, Solar Energy Materials and Solar Cells has a 2020 impact factor of 7.267. [1] A paper titled "Ageing effects of perovskite solar cells under different environmental factors and electrical load conditions", published in 2018 in the journal, [2] corresponded to a paper previously published in the journal Nature Energy as "Systematic investigation of the impact of operation conditions on the degradation behaviour of perovskite solar cells". [3] It led to an investigation of plagiarism. [4]
https://en.wikipedia.org/wiki/Solar_Energy_Materials_and_Solar_Cells
Solar System belts are asteroid and comet belts that orbit the Sun in interplanetary space. [1] [2] The size and placement of the Solar System's belts are mostly a result of the Solar System having four giant planets, Jupiter, Saturn, Uranus and Neptune, far from the Sun. For a system to have such belts, the giant planets must be in the correct place, neither too close to nor too far from the star. [3] [4] [5] The belts were formed during the formation and evolution of the Solar System. [6] [7] The grand tack hypothesis is a model of the unique placement of the giant planets and the Solar System belts. [3] [4] [8] Most giant planets found outside our Solar System (exoplanets) orbit inside the snow line and are called hot Jupiters. [5] [9] In typical planetary systems, then, giant planets form beyond the snow line and migrate towards the star; a small percentage instead migrate far away from the star. In both types of planetary migration, the belts are lost. [10] [11] [9] The grand tack hypothesis explains how the giant planets of the Solar System migrated in a unique way that formed the Solar System belts and the near-circular orbits of the planets around the Sun. The Solar System's belts are one key parameter for a planetary system that can support complex life, as circular orbits are among the conditions needed for a habitable zone for complex life. [12] [13] [14] [15] The asteroid and comet belts orbit the Sun from the region of the inner rocky planets out to the outer parts of the Solar System, toward interstellar space. [16] [17] [18] An astronomical unit, or AU, is the distance from Earth to the Sun, approximately 150 million kilometres (93 million miles). [19] Small Solar System objects are classified by their orbits. [20] [21] The Solar System's planets and dwarf planets are listed for distance comparison with the belts; the planets all orbit in near-circular orbits. [22] [23] [24] Dwarf planets other than Ceres are plutoids and have elliptical orbits. [25] [26] [27]
https://en.wikipedia.org/wiki/Solar_System_belts
Solar System models, especially mechanical models called orreries, that illustrate the relative positions and motions of the planets and moons in the Solar System have been built for centuries. While they often showed relative sizes, these models were usually not built to scale. The enormous ratio of interplanetary distances to planetary diameters makes constructing a scale model of the Solar System a challenging task. As one example of the difficulty, the distance between the Earth and the Sun is almost 12,000 times the diameter of the Earth. If the smaller planets are to be easily visible to the naked eye, large outdoor spaces are generally necessary, as is some means of highlighting objects that might otherwise not be noticed from a distance. The Boston Museum of Science has placed bronze models of the planets in major public buildings, all on similar stands with interpretive labels. [1] For example, the model of Jupiter was located in the cavernous South Station waiting area. The properly scaled, basketball-sized model is 1.3 miles (2.14 km) from the model Sun, which is located at the museum, graphically illustrating the immense empty space in the Solar System. The objects in such large models do not move. Traditional orreries often did move, and some used clockworks to display the relative speeds of objects accurately. These can be thought of as being correctly scaled in time, instead of distance. Many towns and institutions have built outdoor scale models of the Solar System, including: Brussels, Belgium; "An Exploration of Scale", National Mall, Washington, D.C. (2001); Kansas City, Missouri (2008); Space Center Houston, Texas (2008); Corpus Christi, Texas (2009); Boulder, Colorado (2021); Palo Alto, California (2022); [49] [50] Broken Arrow, Oklahoma (2022); Ocala, Florida (2022); Calcasieu Parish, Louisiana (2022); Dover, New Hampshire (2023); Spokane, Washington (2022); Memphis, Tennessee (2023); Chalmette, Louisiana (2023); Jonesboro, Arkansas (2023); and Troy, New York (2024). Several sets of geocaching caches have also been laid out as Solar System models.
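The scaling arithmetic behind such models is simple: pick a model Sun diameter, divide by the real Sun's diameter, and apply the same factor to every size and distance. A minimal sketch follows, using real astronomical values but an arbitrarily chosen 1-metre model Sun.

    # Scale-model arithmetic: shrink the Sun to 1 metre and everything else
    # by the same factor. Real values in km; results in model units.
    SUN_DIAMETER_KM = 1_391_400.0
    MODEL_SUN_M = 1.0                                 # chosen model Sun diameter
    SCALE = MODEL_SUN_M / (SUN_DIAMETER_KM * 1000)    # model metres per real metre

    # (diameter in km, mean orbital distance in millions of km)
    planets = {
        "Mercury": (4_879, 57.9),
        "Venus":   (12_104, 108.2),
        "Earth":   (12_742, 149.6),
        "Mars":    (6_779, 227.9),
        "Jupiter": (139_820, 778.5),
        "Saturn":  (116_460, 1_433.5),
        "Uranus":  (50_724, 2_872.5),
        "Neptune": (49_244, 4_495.1),
    }

    for name, (d_km, dist_mkm) in planets.items():
        model_d_mm = d_km * 1000 * SCALE * 1000       # model diameter in mm
        model_dist_m = dist_mkm * 1e9 * SCALE         # model distance in m
        print(f"{name:>8}: {model_d_mm:6.1f} mm at {model_dist_m:8.1f} m")
    # With a 1 m Sun, Earth is a ~9 mm bead about 108 m away, and Neptune
    # sits roughly 3.2 km out, which is why these models need whole towns.

The output makes the "enormous ratio" above concrete: even with the Sun reduced to a beach-ball, the outer planets are centimetre-scale objects kilometres apart.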
https://en.wikipedia.org/wiki/Solar_System_model
Solar air conditioning, or "solar-powered air conditioning", refers to any air conditioning (cooling) system that uses solar power. This can be done through passive solar design, solar thermal energy conversion, and photovoltaic conversion (sunlight to electricity). The U.S. Energy Independence and Security Act of 2007 [1] created 2008 through 2012 funding for a new solar air conditioning research and development program, intended to develop and demonstrate multiple new technology innovations and mass production economies of scale. In the late 19th century, the most common fluid for absorption cooling was a solution of ammonia and water. Today, the combination of lithium bromide and water is also in common use. One end of the system of expansion/condensation pipes is heated, and the other end gets cold enough to make ice. Originally, natural gas was used as the heat source in the late 19th century. Today, propane is used in recreational vehicle absorption chiller refrigerators. Hot water solar thermal energy collectors can also be used as the modern "free energy" heat source. A National Aeronautics and Space Administration (NASA) sponsored report in 1976 surveyed solar energy system applications to air conditioning. Techniques discussed included both solar powered (absorption cycle and heat engine/Rankine cycle) and solar related (heat pump) approaches, along with an extensive bibliography of related literature. [2] Photovoltaics can provide either indirect power for solar air conditioning or, now, direct power to air conditioners. Indirect photovoltaic power for air conditioners consists of whole-house or whole-building solar, which, traditionally for most users, has also meant net metering to the grid. Solar power in this case is inverted to alternating current (AC) to run the appliances in the house or building, including the air conditioner(s). The advantage of this is that the air conditioners do not need any special electronics to accommodate solar, so it is a simple implementation. The disadvantage is that these air conditioners usually have a SEER value of 14 or less, and the supplied solar power suffers some loss in the conversion of DC (direct current) solar to AC even before it reaches the air conditioners. Another disadvantage is that these air conditioners cannot run when the grid is down, since, in effect, the net-metered home or building is a node on the grid, and utilities need to prevent backfeeding power into a dead grid during an outage. Moreover, air conditioners, like many home appliances (e.g., TVs and computers), are beginning to run on DC power internally, so whole-building solar for such units needs to be inverted to alternating current and then rectified back to direct current, further increasing inefficiencies. Off-grid solar arrays instead use batteries to supply whole-house or whole-building solar. Such systems employ a voltage controller to manage battery charging, and the battery power is then inverted to provide alternating current for the home or building. Since they are not grid-tied or net-metered, they can operate after a storm or other event brings down grid power. However, the power, once again, must be converted by inversion from the DC of the solar panels and batteries to AC in order to run the appliances. More recently, true solar-powered photovoltaic air conditioner heat pumps have been developed. Such units run on DC power, and as such they can make direct use of the DC power generated by photovoltaic solar panels.
One mini-split version of these units employs a 48 V DC power bus and a 48 V battery array, usually 4 × 12 V batteries in series (e.g., Hotspot Energy). Unlike the whole-house battery system, though, these batteries only run the air conditioner. The advantage of these systems is that, with enough solar and battery capacity, they can run at night or when it is cloudy. Another mini-split version allows the solar panels to be plugged directly into the outside part of the unit, uses a 310 V DC power bus, and offers optional 120 V plug-in backup grid power (Airspool) to fill in any lack of available solar power. The advantage of these inverter DC air conditioners is the lower cost, while the disadvantage is that they have no way to run without solar unless they are plugged in. Both of these systems make use of variable refrigerant flow technology, with high-efficiency variable-speed DC motors and compressors that require very little power to run, and both also offer heat in addition to air conditioning. A third type of unit is available for larger, usually commercial, buildings and offers both grid and battery backup as well as optional net metering. [citation needed] Like the two smaller units, these units are VRF, but unlike them, there is an option to run heating in one part of the building and air conditioning in another, making use of one outside/condensing unit and multiple inside/evaporative units located in different areas of the building to condition those areas based on specific user needs. [citation needed] Photovoltaics can be combined with geothermal technology, too. An efficient geothermal air conditioning system would require a smaller, less-expensive photovoltaic system. A high-quality geothermal heat pump installation can have a SEER in the range of 20 (±). A 29 kW (100,000 BTU/h) SEER 20 air conditioner would require less than 5 kW of electrical power while operating. There are also new non-compressor-based electrical air conditioning systems with a SEER above 20 coming on the market. New versions of phase-change indirect evaporative coolers use nothing but a fan and a supply of water to cool buildings without adding extra interior humidity (such as at McCarran Airport, Las Vegas, Nevada). In dry, arid climates with relative humidity below 45% (about 40% of the continental U.S.), indirect evaporative coolers can achieve a SEER above 20, and up to SEER 40. A 29 kW (100,000 BTU/h) indirect evaporative cooler would only need enough photovoltaic power for the circulation fan (plus a water supply). A less-expensive partial-power photovoltaic system can reduce (but not eliminate) the monthly amount of electricity purchased from the power grid for air conditioning (and other uses). With American state government subsidies of $2.50 to US$5.00 per photovoltaic watt, the amortized cost of PV-generated electricity can be below $0.15 per kWh. This is currently cost-effective in some areas where power company electricity now costs $0.15 or more. Excess PV power generated when air conditioning is not required can be sold to the power grid in many locations, which can reduce or eliminate the annual net electricity purchase requirement. (See Zero-energy building.) Superior energy efficiency can be designed into new construction (or retrofitted to existing buildings). Since the U.S. Department of Energy was created in 1977, its Weatherization Assistance Program has reduced the heating-and-cooling load on 5.5 million low-income affordable homes by an average of 31%.
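Since SEER is defined as seasonal cooling output in BTU per watt-hour of electricity consumed, the electrical draws quoted above follow directly from the definition. A minimal sketch of the arithmetic, using the article's 100,000 BTU/h example:

    # SEER is cooling output (BTU) per electrical input (Wh), so a unit's
    # average electrical draw is its cooling rate (BTU/h) divided by its SEER.
    def electrical_draw_watts(cooling_btu_per_hour, seer):
        """Average electrical power (W) implied by a seasonal SEER rating."""
        return cooling_btu_per_hour / seer

    cooling = 100_000   # BTU/h, ~29 kW of cooling
    for seer in (14, 20, 40):
        watts = electrical_draw_watts(cooling, seer)
        print(f"SEER {seer:>2}: ~{watts/1000:.1f} kW electrical for 29 kW cooling")
    # SEER 14: ~7.1 kW; SEER 20: ~5.0 kW; SEER 40: ~2.5 kW, matching the
    # "less than 5 kW" figure for a SEER 20 unit quoted above.

Because SEER is a seasonal average rather than an instantaneous rating, this is the sizing-level estimate used when matching a PV array to an air conditioner, not a prediction of peak draw on the hottest day.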
A hundred million American buildings still need improved weatherization. Careless conventional construction practices are still producing inefficient new buildings that need weatherization when they are first occupied. Earth sheltering or earth cooling tubes can take advantage of the ambient temperature of the earth to reduce or eliminate conventional air conditioning requirements. In many climates where the majority of humans live, they can greatly reduce the buildup of undesirable summer heat and also help remove heat from the interior of the building. They increase construction cost but reduce or eliminate the cost of conventional air conditioning equipment. Earth cooling tubes are not cost-effective in hot, humid tropical environments where the ambient earth temperature approaches the human temperature comfort zone. A solar chimney or photovoltaic-powered fan can be used to exhaust undesired heat and draw in cooler, dehumidified air that has passed by surfaces at ambient earth temperature. Control of humidity and condensation are important design issues. A geothermal heat pump uses the ambient earth temperature to improve the SEER for heating and cooling. A deep well recirculates water to extract ambient earth temperature, typically at 8 litres (2 US gal) of water per metric ton of capacity per minute. These "open loop" systems were the most common early systems; however, water quality could damage the coils in the heat pump and shorten the life of the equipment. Another method is a closed-loop system, in which a loop of tubing is run down a well or wells, or laid in trenches in the lawn, to cool an intermediate fluid. When wells are used, they are back-filled with bentonite grout or another grout material to ensure good thermal conductivity to the earth. [3] In the past, the fluid of choice was a 50/50 mixture of water and propylene glycol, because propylene glycol is non-toxic, unlike ethylene glycol (which is used in car radiators). Propylene glycol is viscous, however, and would eventually gum up some parts in the loop(s), so it has fallen out of favor. Today [when?], the most common transfer agent is a mixture of water and ethyl alcohol (ethanol). Ambient earth temperature is much lower than the peak summer air temperature and much higher than the lowest extreme winter air temperature. Water is 25 times more thermally conductive than air, so a water-coupled system is much more efficient than an outside-air heat pump (which becomes less effective when the outside temperature drops in winter). The same type of geothermal well can be used without a heat pump, but with greatly diminished results. Water at ambient earth temperature is pumped through a shrouded radiator (like an automobile radiator), and air is blown across the radiator, which cools without a compressor-based air conditioner. Photovoltaic solar electric panels produce electricity for the water pump and fan, eliminating conventional air-conditioning utility bills. This concept is cost-effective as long as the location has an ambient earth temperature below the human thermal comfort zone (i.e., not the tropics). Air can be passed over common solid desiccants (like silica gel or zeolite) or liquid desiccants (like lithium bromide/chloride) to draw moisture from the air and allow an efficient mechanical or evaporative cooling cycle. The desiccant is then regenerated using solar thermal energy to dehumidify it, in a cost-effective, low-energy-consumption, continuously repeating cycle.
A photovoltaic system can power a low-energy air circulation fan and a motor to slowly rotate a large disk filled with desiccant. Energy recovery ventilation systems provide a controlled way of ventilating a home while minimizing energy loss. Air is passed through an " enthalpy wheel " (often using silica gel) to reduce the cost of heating ventilated air in the winter by transferring heat from the warm inside air being exhausted to the fresh (but cold) supply air. In the summer, the inside air cools the warmer incoming supply air to reduce ventilation cooling costs. [ 5 ] This low-energy fan-and-motor ventilation system can be cost-effectively powered by photovoltaics , with enhanced natural-convection exhaust up a solar chimney ; the downward incoming air flow would be forced convection ( advection ). A desiccant like calcium chloride can be mixed with water to create a recirculating waterfall that dehumidifies a room, using solar thermal energy to regenerate the liquid and a PV-powered low-rate water pump to circulate it. [ 6 ] In active solar cooling, solar thermal collectors provide the input energy for a desiccant cooling system. There are several commercially available systems that blow air through a desiccant-impregnated medium for both the dehumidification and regeneration cycles; solar heat is one way to power the regeneration cycle. In theory, packed towers can be used to form a counter-current flow of air and liquid desiccant, but they are not normally employed in commercially available machines. Preheating the air has been shown to greatly enhance desiccant regeneration. The packed column yields good results as a dehumidifier/regenerator, provided pressure drop can be reduced with the use of suitable packing. [ 7 ] In passive solar cooling, solar thermal energy is not used directly to create a cold environment or drive any direct cooling processes. Instead, solar building design aims at slowing the rate of heat transfer into a building in the summer and improving the removal of unwanted heat. It involves a good understanding of the mechanisms of heat transfer : heat conduction , convective heat transfer , and thermal radiation , the latter primarily from the sun. For example, a sign of poor thermal design is an attic that gets hotter in summer than the peak outside air temperature. This can be significantly reduced or eliminated with a cool roof or a green roof , which can reduce the roof surface temperature by 70 °F (40 °C) in summer. A radiant barrier and an air gap below the roof will block about 97% of downward radiation from roof cladding heated by the sun. Passive solar cooling is much easier to achieve in new construction than by adapting existing buildings. There are many design specifics involved in passive solar cooling. It is a primary element of designing a zero energy building in a hot climate. Closed-loop air conditioning commonly uses water-based absorption pairs such as lithium bromide–water or ammonia–water; an alternative to water-based systems is to use methanol with activated carbon. [ 8 ] Active solar cooling uses solar thermal collectors to provide solar energy to thermally driven chillers (usually adsorption or absorption chillers). [ 9 ] Solar energy heats a fluid that provides heat to the generator of an absorption chiller and is recirculated back to the collectors. The heat provided to the generator drives a cooling cycle that produces chilled water. The chilled water produced is used for large commercial and industrial cooling.
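Returning briefly to the energy-recovery ventilation described above, a hedged sketch of sensible-only heat recovery across an enthalpy wheel; the effectiveness and airflow figures are assumed for illustration and are not taken from any particular product:

```python
# Rough sketch of sensible heat recovery in an enthalpy-wheel ERV.
# Q_recovered = effectiveness * m_dot * c_p * (T_exhaust - T_outdoor).

RHO_AIR = 1.2    # kg/m^3, approximate air density
CP_AIR = 1.005   # kJ/(kg*K), specific heat of air

def recovered_heat_kw(flow_m3_s: float, t_exhaust_c: float,
                      t_outdoor_c: float, effectiveness: float = 0.7) -> float:
    """Heat transferred from outgoing to incoming air (sensible only)."""
    m_dot = flow_m3_s * RHO_AIR                      # kg/s of supply air
    return effectiveness * m_dot * CP_AIR * (t_exhaust_c - t_outdoor_c)

# Winter case: 0.1 m^3/s of ventilation air, 21 C inside, -5 C outside.
print(f"~{recovered_heat_kw(0.1, 21, -5):.1f} kW recovered")  # ~2.2 kW
```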
Solar thermal energy can be used to efficiently cool in the summer and also to heat domestic hot water and buildings in the winter. Single, double or triple iterative absorption cooling cycles are used in different solar-thermal-cooling system designs; the more cycles, the more efficient they are. Absorption chillers operate with less noise and vibration than compressor-based chillers, but their capital costs are relatively high. [ 10 ] Efficient absorption chillers nominally require water of at least 190 °F (88 °C). Common, inexpensive flat-plate solar thermal collectors only produce about 160 °F (71 °C) water. High-temperature flat-plate, concentrating (CSP) or evacuated-tube collectors are needed to produce the higher-temperature transfer fluids required. Several large-scale installations worldwide have proven both technically and economically successful, including, for example, the headquarters of Caixa Geral de Depósitos in Lisbon, with 1,579 square metres (17,000 sq ft) of solar collectors and 545 kW of cooling power, and the Olympic Sailing Village in Qingdao, China. In 2011, the most powerful plant to date (1,500 kW) was commissioned at Singapore's newly constructed United World College. These projects have shown that flat-plate solar collectors specially developed for temperatures over 200 °F (93 °C) (featuring double glazing, increased backside insulation, etc.) can be effective and cost-efficient. [ 11 ] Where water can be heated well above 190 °F (88 °C), it can be stored and used when the sun is not shining. The Audubon Environmental Center at the Ernest E. Debs Regional Park in Los Angeles has an example solar air conditioning installation, [ 12 ] [ 13 ] which failed fairly soon after commissioning and is no longer being maintained. [ citation needed ] The Southern California Gas Co. (The Gas Company) is also testing the practicality of solar thermal cooling systems at their Energy Resource Center (ERC) in Downey, California . Solar collectors from Sopogy and Cogenra were installed on the rooftop at the ERC and are producing cooling for the building's air conditioning system. [ 14 ] Masdar City in the United Arab Emirates is also testing a double-effect absorption cooling plant using Sopogy parabolic trough collectors, [ 15 ] a Mirroxx Fresnel array and TVP Solar high-vacuum solar thermal panels. [ 16 ] A FedEx Ground sorting facility in Davenport, Florida uses a solar thermal air conditioning system to feed cool air into truck trailers parked at loading doors. [ 17 ] For 150 years, absorption chillers have been used to make ice (since before the electric light bulb was invented). [ 18 ] This ice can be stored and used as an "ice battery" for cooling when the sun is not shining, as was done at the Hotel New Otani Tokyo in Japan in 1995. [ 19 ] Mathematical models are available in the public domain for ice-based thermal energy storage performance calculations. [ 20 ] The ISAAC Solar Icemaker is an intermittent solar ammonia-water absorption cycle. The ISAAC uses a parabolic trough solar collector with a compact and efficient design to produce ice with no fuel or electrical input and no moving parts. [ 21 ] The main reasons for employing concentrating collectors in solar cooling systems are: highly efficient air conditioning through coupling with double/triple-effect chillers; and solar refrigeration serving industrial end-users, possibly in combination with process heat and steam. [ 22 ]
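The energetics of these thermally driven chillers can be sketched with a simple balance: cooling output is roughly collector area × irradiance × collector efficiency × thermal COP. In the Python sketch below, the ~50% collector efficiency and ~0.7 single-effect thermal COP are assumed round numbers rather than measured values; with them, the Lisbon figures quoted above come out at the right order of magnitude:

```python
# Illustrative energy balance for a solar-driven absorption chiller.
# Typical thermal COPs run roughly 0.7 (single-effect) to ~1.2
# (double-effect); the collector efficiency is likewise an assumption.

def chiller_cooling_kw(collector_area_m2: float,
                       irradiance_kw_m2: float = 1.0,   # peak sun, assumed
                       collector_eff: float = 0.5,      # assumed
                       thermal_cop: float = 0.7) -> float:
    heat_to_generator = collector_area_m2 * irradiance_kw_m2 * collector_eff
    return heat_to_generator * thermal_cop

# The Lisbon installation cited above pairs 1,579 m^2 of collectors with
# 545 kW of cooling; the toy model lands in the same range:
print(f"~{chiller_cooling_kw(1579):.0f} kW cooling")  # ~553 kW
```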
Concerning industrial applications, several studies in recent years have highlighted a high potential for refrigeration (temperatures below 0 °C) in different areas of the globe (e.g., the Mediterranean [ 23 ] and Central America [ 24 ] ). However, this can be achieved only by ammonia/water absorption chillers, which require high-temperature heat input at the generator, in a range (120–180 °C) that can only be satisfied by concentrating solar collectors. Moreover, several industrial applications require both cooling and steam for their processes, and concentrating solar collectors can be very advantageous there in the sense that their use is maximized. [ citation needed ] The goals of zero-energy buildings include sustainable , green building technologies that can significantly reduce, or eliminate, net annual energy bills. The supreme achievement is the totally off-the-grid autonomous building that does not have to be connected to utility companies. In hot climates with significant cooling degree days , leading-edge solar air conditioning will be an increasingly important critical success factor . [ citation needed ]
https://en.wikipedia.org/wiki/Solar_air_conditioning
A solar cell, also known as a photovoltaic cell ( PV cell ), is an electronic device that converts the energy of light directly into electricity by means of the photovoltaic effect . [ 1 ] It is a type of photoelectric cell, a device whose electrical characteristics (such as current , voltage , or resistance ) vary when it is exposed to light. Individual solar cell devices are often the electrical building blocks of photovoltaic modules , known colloquially as "solar panels". Almost all commercial PV cells consist of crystalline silicon , with a market share of 95%; cadmium telluride thin-film solar cells account for the remainder. [ 2 ] The common single-junction silicon solar cell can produce a maximum open-circuit voltage of approximately 0.5 to 0.6 volts . [ 3 ] Photovoltaic cells may operate under sunlight or artificial light. In addition to producing solar power , they can be used as photodetectors (for example, infrared detectors ) to detect light or other electromagnetic radiation near the visible range, as well as to measure light intensity. The operation of a PV cell requires three basic attributes: the absorption of light to generate electron–hole pairs or excitons; the separation of charge carriers of opposite types; and the separate extraction of those carriers to an external circuit. There are multiple input factors that affect the output power of solar cells, such as temperature , material properties, weather conditions, solar irradiance and more. [ 4 ] A similar type of device, the "photoelectrolytic cell" ( photoelectrochemical cell ), can refer either to a type of photovoltaic cell or to a device that uses light to split water directly into hydrogen and oxygen. In contrast to devices that output electrical power directly, a solar thermal collector absorbs sunlight to produce heat, either for direct use or to spin turbines for electrical power generation . Arrays of solar cells are used to make solar modules that generate a usable amount of direct current (DC) from sunlight . Strings of solar modules create a solar array to generate solar power using solar energy , often using an inverter to convert the solar power to alternating current (AC). Electric vehicles that operate off solar energy and/or sunlight are commonly referred to as solar cars. [ citation needed ] These vehicles use solar panels to convert absorbed light into electrical energy for electric motors, with any excess energy stored in batteries . [ 5 ] Batteries in solar-powered vehicles differ from the starting batteries in standard ICE cars because they are designed to deliver power to the vehicle's electrical components over long durations. [ citation needed ] The first use of photovoltaic cells in vehicular applications came around the mid-1970s. In an effort to increase publicity and awareness of solar-powered transportation, Hans Tholstrup set up the first edition of the World Solar Challenge in 1987, [ citation needed ] a 3,000 km race across the Australian outback to which competitors from industry research groups and top universities around the globe were invited. [ citation needed ] General Motors won the event by a significant margin with their Sunraycer vehicle, which achieved speeds of over 40 mph. [ citation needed ] Contrary to popular belief, however, solar-powered cars are among the oldest alternative-energy vehicles. [ 6 ] Multiple solar cells in an integrated group, all oriented in one plane, constitute a solar photovoltaic panel or module . Photovoltaic modules often have a sheet of glass on the sun-facing side, allowing light to pass while protecting the semiconductor wafers . Connecting solar cells in series adds their voltages, while connecting them in parallel adds their currents.
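That series/parallel rule is easy to make concrete. In the sketch below, the 0.5 V and 8 A per-cell figures are illustrative assumptions in line with the open-circuit voltage quoted earlier:

```python
# Series wiring adds cell voltages; parallel wiring adds cell currents.

def module_output(cell_voltage: float, cell_current: float,
                  n_series: int, n_parallel: int) -> tuple[float, float, float]:
    voltage = cell_voltage * n_series      # volts add along each series string
    current = cell_current * n_parallel    # currents add across parallel strings
    return voltage, current, voltage * current

# A typical "60-cell" module: one series string of 60 cells.
v, i, p = module_output(0.5, 8.0, n_series=60, n_parallel=1)
print(f"{v:.0f} V, {i:.0f} A, {p:.0f} W")  # 30 V, 8 A, 240 W
```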
Parallel cells without bypass or shunt diodes that experience shade can shut down the weaker (less illuminated) parallel string (each string being a number of series-connected cells), causing substantial power loss and possible damage because of the reverse bias applied to the shaded cells by their illuminated partners. [ citation needed ] Solar modules can be interconnected to create an array with a desired peak DC voltage and loading current capacity. This functionality can also be accomplished with various other solar devices that do more than just create the desired voltages and currents, such as MPPTs ( maximum power point trackers ) or module-level power electronics (MLPE) units: microinverters or DC-DC optimizers . To mitigate these shading issues, solar modules are often equipped with bypass diodes that isolate shaded cells, preventing them from affecting the performance of the entire string. These diodes allow the current to bypass the shaded or underperforming cells, thereby minimizing power loss and reducing the risk of damage. [ 8 ] By 2020, the United States cost per watt for a utility-scale system had declined to $0.94. [ 11 ] Solar cells were first used in a prominent application when they were proposed and flown on the Vanguard satellite in 1958, as an alternative to the primary battery power source. By adding cells to the outside of the body, mission time could be extended with no major changes to the spacecraft or its power systems. In 1959 the United States launched Explorer 6 , featuring large wing-shaped solar arrays, which became a common feature in satellites. These arrays consisted of 9,600 Hoffman solar cells . By the 1960s, solar cells were (and still are) the main power source for most Earth-orbiting satellites and a number of probes into the Solar System , since they offered the best power-to-weight ratio . The success of the space solar power market drove the development of higher efficiencies in solar cells, due to limited other power options and the desire for the best possible cells, up until the National Science Foundation "Research Applied to National Needs" program began to push development of solar cells for terrestrial applications. In the early 1990s the technology used for space solar cells diverged from the silicon technology used by terrestrial panels, with the spacecraft application shifting to gallium arsenide -based III-V semiconductor materials, which then evolved into the modern III-V multijunction photovoltaic cells used on spacecraft: lightweight, compact, flexible, and highly efficient. State-of-the-art technology implemented on satellites uses multi-junction photovoltaic cells, which are composed of different p–n junctions with varying bandgaps in order to utilize a wider portion of the Sun's spectrum. Space solar cells also diverged from terrestrial panels in their protective layer, with space applications using flexible laminate layers. Additionally, large satellites require the use of large solar arrays to produce electricity.
These solar arrays need to be broken down to fit within the geometric constraints of the launch vehicle that carries the satellite before it is injected into orbit. Historically, solar cells on satellites consisted of several small terrestrial panels folded together. These small panels would be unfolded into a large panel after the satellite was deployed in its orbit. Newer satellites aim to use flexible, rollable solar arrays that are very lightweight and can be packed into a very small volume. The smaller size and weight of these flexible arrays drastically decreases the overall cost of launching a satellite due to the direct relationship between payload weight and the launch cost of a launch vehicle. [ 12 ] In 2020, the US Naval Research Laboratory conducted its first test of solar power generation in a satellite, the Photovoltaic Radio-frequency Antenna Module (PRAM) experiment aboard the Boeing X-37 . [ 13 ] [ 14 ] The photovoltaic effect was first experimentally demonstrated by French physicist Edmond Becquerel . In 1839, at age 19, he built the world's first photovoltaic cell in his father's laboratory. Willoughby Smith first described the "Effect of Light on Selenium during the passage of an Electric Current" in the 20 February 1873 issue of Nature . In 1883 Charles Fritts built the first solid-state photovoltaic cell by coating the semiconductor selenium with a thin layer of gold to form the junctions; the device was only around 1% efficient. [ 15 ] Pricing and efficiency improvements were gradual over the 1960s. One reason that costs remained high was that space users were willing to pay for the best possible cells, leaving no reason to invest in lower-cost, less-efficient solutions. Price was also determined largely by the semiconductor industry ; its move to integrated circuits in the 1960s led to the availability of larger boules at lower relative prices. As their price fell, the price of the resulting cells did as well. These effects lowered 1971 cell costs to some $100 per watt. [ 26 ] In late 1969 Elliot Berman joined Exxon 's task force, which was looking for projects 30 years in the future, and in April 1973 he founded Solar Power Corporation (SPC), then a wholly owned subsidiary of Exxon. [ 27 ] [ 28 ] [ 29 ] The group concluded that electrical power would be much more expensive by 2000, and felt that the increase in price would make alternative energy sources more attractive. He conducted a market study and concluded that a price of about $20 per watt would create significant demand. [ 27 ] The team sought to reduce costs, and by 1973 they announced a product; SPC convinced Tideland Signal to use its panels to power navigational buoys , initially for the U.S. Coast Guard. [ 28 ] Research into solar power for terrestrial applications became prominent with the U.S. National Science Foundation's Advanced Solar Energy Research and Development Division within the "Research Applied to National Needs" program, which ran from 1969 to 1977 [ 31 ] and funded research on developing solar power for ground electrical power systems. A 1973 conference, the "Cherry Hill Conference", set forth the technology goals required for terrestrial solar power and outlined an ambitious project for achieving them, kicking off an applied research program that would be ongoing for several decades. [ 32 ] The program was eventually taken over by the Energy Research and Development Administration (ERDA), [ 33 ] which was later merged into the U.S. Department of Energy .
Following the 1973 oil crisis , oil companies used their higher profits to start (or buy) solar firms, and were for decades the largest producers. Exxon, ARCO, Shell, Amoco (later purchased by BP) and Mobil all had major solar divisions during the 1970s and 1980s. Technology companies also participated, including General Electric, Motorola, IBM, Tyco and RCA. [ 34 ] Adjusted for inflation, a solar module cost $96 per watt in the mid-1970s. Process improvements and a very large boost in production have brought that figure down by more than 99%, to 30¢ per watt in 2018 [ 37 ] and as low as 20¢ per watt in 2020. [ 38 ] Swanson's law is an observation, similar to Moore's law , that solar cell prices fall 20% for every doubling of industry capacity. It was featured in an article in the British weekly newspaper The Economist in late 2012. [ 39 ] Balance-of-system costs are now higher than those of the solar panels themselves; in 2018 commercial arrays could be built at below $1.00 a watt, fully commissioned. [ 11 ] Over the decades, costs for solar cells and panels have declined for many reasons. During the 1990s, polysilicon ("poly") cells became increasingly popular. These cells offer less efficiency than their monosilicon ("mono") counterparts, but they are grown in large vats that reduce cost. By the mid-2000s, poly was dominant in the low-cost panel market, but more recently mono cells have returned to widespread use due to their efficiency gains. Crystalline silicon panels dominate worldwide markets and are mostly manufactured in China and Taiwan. By late 2011, a drop in European demand had pushed prices for crystalline solar modules down to about $1.09 [ 42 ] per watt, sharply lower than in 2010. Prices continued to fall in 2012, reaching $0.62/watt by the fourth quarter of 2012. [ 43 ] It was anticipated that electricity from PV would be competitive with wholesale electricity costs all across Europe and that the energy payback time of crystalline silicon modules could be reduced to below 0.5 years by 2020. [ 44 ] Falling costs are considered one of the biggest factors in the rapid growth of renewable energy ; as of 2016, solar PV was growing fastest in Asia, with China and Japan accounting for half of worldwide deployment . [ 45 ] Costs of solar photovoltaic electricity fell by ~85% between 2010 (when solar and wind made up 1.7% of global electricity generation) and 2021 (when they made up 8.7%). [ 46 ] Global installed PV capacity reached at least 301 gigawatts in 2016, growing to supply 1.3% of global power that year. [ 47 ] In 2019 solar cells accounted for ~3% of the world's electricity generation, at 720 TWh. [ 48 ] Solar-specific feed-in tariffs vary by country and even within countries. Such tariffs can encourage the development of solar power projects and help achieve grid parity. Grid parity , the point at which photovoltaic electricity is equal to or cheaper than grid power without subsidies, is expected to be achieved first in areas with abundant sun and high electricity costs, such as California and Japan . [ 49 ] In 2007 BP claimed grid parity for Hawaii and other islands that otherwise use diesel fuel to produce electricity. George W. Bush set 2015 as the date for grid parity in the US. [ 50 ] [ 51 ] The Photovoltaic Association reported in 2012 that Australia had reached grid parity (ignoring feed-in tariffs). [ 52 ]
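Swanson's law lends itself to a one-line model: the price after n doublings of cumulative industry capacity is roughly the initial price times 0.8^n. A toy Python projection (the starting price is arbitrary, and the 20% learning rate is an empirical observation, not a physical law):

```python
# Swanson's law: module price falls ~20% per doubling of capacity.

def swanson_price(initial_price: float, capacity_doublings: int,
                  learning_rate: float = 0.20) -> float:
    return initial_price * (1.0 - learning_rate) ** capacity_doublings

price = 1.00  # $/W, arbitrary illustrative starting point
for d in range(5):
    print(f"after {d} doublings: ${swanson_price(price, d):.2f}/W")
# 0 -> $1.00, 1 -> $0.80, 2 -> $0.64, 3 -> $0.51, 4 -> $0.41
```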
The price of solar panels fell steadily for 40 years, interrupted in 2004 when high subsidies in Germany drastically increased demand there and greatly increased the price of purified silicon (which is used in computer chips as well as solar panels). The recession of 2008 and the onset of Chinese manufacturing caused prices to resume their decline. In the four years after January 2008, prices for solar modules in Germany dropped from €3 to €1 per peak watt. During that same time, production capacity surged with annual growth of more than 50%. China increased its solar panel production market share from 8% in 2008 to over 55% in the last quarter of 2010. [ 53 ] By December 2012 the price of Chinese solar panels had dropped to $0.60/Wp (crystalline modules). [ 54 ] (The abbreviation Wp stands for watt of peak capacity, i.e., the maximum capacity under optimal conditions. [ 55 ] ) As of the end of 2016, it was reported that spot prices for assembled solar panels (not cells) had fallen to a record low of US$0.36/Wp. The second-largest supplier, Canadian Solar Inc., had reported costs of US$0.37/Wp in the third quarter of 2016, having dropped $0.02 from the previous quarter, and hence was probably still at least breaking even. Many producers expected costs to drop to the vicinity of $0.30 by the end of 2017. [ 56 ] It was also reported that new solar installations were cheaper than coal-based thermal power plants in some regions of the world, and this was expected to be the case in most of the world within a decade. [ 57 ] A solar cell is made of semiconducting materials , such as silicon , that have been fabricated into a p–n junction . Such junctions are made by doping one side of the device p-type and the other n-type, for example, in the case of silicon, by introducing small concentrations of boron or phosphorus respectively. In operation, photons in sunlight hit the solar cell and are absorbed by the semiconductor. When the photons are absorbed, electrons are excited from the valence band to the conduction band (or from occupied to unoccupied molecular orbitals in the case of an organic solar cell ), producing electron–hole pairs . If the electron–hole pairs are created near the junction between the p-type and n-type materials, the local electric field sweeps them apart to opposite electrodes, producing an excess of electrons on one side and an excess of holes on the other. When the solar cell is unconnected (or the external electrical load is very high), the electrons and holes will ultimately restore equilibrium by diffusing back across the junction against the field and recombining with each other, giving off heat; but if the load is small enough, it is easier for equilibrium to be restored by the excess electrons going around the external circuit, doing useful work along the way. The most commonly known solar cell is configured as a large-area p–n junction made from silicon. Other possible solar cell types are organic solar cells, dye-sensitized solar cells, perovskite solar cells, quantum dot solar cells, etc. The illuminated side of a solar cell generally has a transparent conducting film to allow light to enter the active material and to collect the generated charge carriers. Typically, films with high transmittance and high electrical conductance, such as indium tin oxide , conducting polymers, or conducting nanowire networks, are used for this purpose. [ 58 ]
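The junction behavior just described is commonly formalized as the single-diode model, I = I_L − I_0(exp(V/(n·V_T)) − 1), where I_L is the light-generated current, I_0 the diode saturation current, n the ideality factor, and V_T = kT/q the thermal voltage. A minimal sketch with assumed generic-silicon parameters, chosen so that the open-circuit voltage lands near the 0.5–0.6 V range quoted earlier:

```python
import math

K_B, Q = 1.380649e-23, 1.602176634e-19   # Boltzmann constant, electron charge

def cell_current(v: float, i_light: float = 8.0, i_sat: float = 1e-9,
                 n: float = 1.0, temp_k: float = 300.0) -> float:
    """Single-diode model: I = I_L - I_0 * (exp(V/(n*V_T)) - 1)."""
    v_t = K_B * temp_k / Q                # thermal voltage, ~25.9 mV at 300 K
    return i_light - i_sat * (math.exp(v / (n * v_t)) - 1.0)

v_t = K_B * 300.0 / Q
print(f"Isc ~ {cell_current(0.0):.1f} A")                  # all photocurrent
print(f"Voc ~ {v_t * math.log(8.0 / 1e-9 + 1.0):.2f} V")   # ~0.59 V
```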
Solar cell efficiency may be broken down into reflectance efficiency, thermodynamic efficiency, charge-carrier separation efficiency and conductive efficiency. The overall efficiency is the product of these individual metrics. The power conversion efficiency of a solar cell is the fraction of incident power converted into electricity. [ 59 ] A solar cell has a voltage-dependent efficiency curve, temperature coefficients, and allowable shadow angles. Due to the difficulty of measuring these parameters directly, other parameters are substituted: thermodynamic efficiency, quantum efficiency , integrated quantum efficiency , Voc ratio, and fill factor. Reflectance losses are a portion of quantum efficiency under " external quantum efficiency ". Recombination losses make up another portion of quantum efficiency, Voc ratio, and fill factor. Resistive losses are predominantly categorized under fill factor, but also make up minor portions of quantum efficiency and Voc ratio. The fill factor is the ratio of the actual maximum obtainable power to the product of the open-circuit voltage and short-circuit current . This is a key parameter in evaluating performance. In 2009, typical commercial solar cells had a fill factor > 0.70; grade B cells were usually between 0.4 and 0.7. [ 60 ] Cells with a high fill factor have a low equivalent series resistance and a high equivalent shunt resistance , so less of the current produced by the cell is dissipated in internal losses. Single p–n junction crystalline silicon devices are now approaching the theoretical limiting power efficiency of 33.16%, [ 61 ] noted as the Shockley–Queisser limit in 1961. In the extreme, with an infinite number of layers, the corresponding limit is 86% using concentrated sunlight. [ 62 ] In 2014, three companies broke the record of 25.6% for a silicon solar cell. Panasonic's was the most efficient: the company moved the front contacts to the rear of the panel, eliminating shaded areas, and in addition applied thin silicon films to the (high-quality silicon) wafer's front and back to eliminate defects at or near the wafer surface. [ 63 ] In 2015, a 4-junction GaInP/GaAs//GaInAsP/GaInAs solar cell achieved a new laboratory record efficiency of 46.1% (concentration ratio of sunlight = 312) in a French-German collaboration between the Fraunhofer Institute for Solar Energy Systems (Fraunhofer ISE) , CEA-LETI and SOITEC. [ 64 ] In September 2015, Fraunhofer ISE announced the achievement of an efficiency above 20% for epitaxial wafer cells. The work on optimizing the atmospheric-pressure chemical vapor deposition (APCVD) in-line production chain was done in collaboration with NexWafe GmbH, a company spun off from Fraunhofer ISE to commercialize production. [ 65 ] [ 66 ] For triple-junction thin-film solar cells, the world record is 13.6%, set in June 2015. [ 67 ] In 2016, researchers at Fraunhofer ISE announced a GaInP/GaAs/Si triple-junction solar cell with two terminals reaching 30.2% efficiency without concentration. [ 68 ] In 2017, a team of researchers at the National Renewable Energy Laboratory (NREL), EPFL and CSEM ( Switzerland ) reported record one-sun efficiencies of 32.8% for dual-junction GaInP/GaAs solar cell devices. In addition, the dual-junction device was mechanically stacked with a Si solar cell to achieve a record one-sun efficiency of 35.9% for triple-junction solar cells. [ 69 ] Solar cells are typically named after the semiconducting material of which they are composed.
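As a worked illustration of the fill factor and efficiency definitions above, the sketch below computes both from nominal test values; the "measured" numbers are invented for a generic ~156 mm silicon cell, not taken from any datasheet:

```python
# FF = P_max / (Voc * Isc); efficiency = P_max / (irradiance * area).

def fill_factor(p_max: float, v_oc: float, i_sc: float) -> float:
    return p_max / (v_oc * i_sc)

def efficiency(p_max: float, area_m2: float, irradiance: float = 1000.0) -> float:
    return p_max / (irradiance * area_m2)   # 1000 W/m^2 is standard test sun

v_oc, i_sc, p_max = 0.62, 9.0, 4.4   # V, A, W (assumed test values)
area = 0.0244                         # m^2, a standard 156 mm wafer

print(f"FF  = {fill_factor(p_max, v_oc, i_sc):.2f}")   # ~0.79
print(f"eff = {efficiency(p_max, area):.1%}")          # ~18.0%
```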
Different cell materials have varying characteristics that determine how much of the available sunlight spectrum they can absorb. Some cells are designed to handle the sunlight that reaches the Earth's surface, while others are optimized for use in space . Solar cells can be made of a single layer of light-absorbing material ( single-junction ) or use multiple physical configurations ( multi-junctions ) to take advantage of various absorption and charge separation mechanisms. Solar cells can be classified into first, second and third generation. As of 2016, the most popular and efficient solar cells were those made from thin wafers of silicon, which is also the oldest solar cell technology. [ 72 ] By far the most prevalent bulk material for solar cells is crystalline silicon (c-Si), also known as "solar grade silicon". [ 73 ] Bulk silicon is separated into multiple categories according to crystallinity and crystal size in the resulting ingot , ribbon or wafer . These cells are entirely based around the concept of a p–n junction . Solar cells made of c-Si are made from wafers between 160 and 240 micrometers thick. Monocrystalline silicon (mono-Si) solar cells feature a single-crystal composition that enables electrons to move more freely than in a multi-crystal configuration. Consequently, monocrystalline solar panels deliver higher efficiency than their multicrystalline counterparts. [ 74 ] The corners of the cells look clipped, like an octagon, because the wafer material is cut from cylindrical ingots typically grown by the Czochralski process . Solar panels using mono-Si cells display a distinctive pattern of small white diamonds. Epitaxial wafers of crystalline silicon can be grown on a monocrystalline silicon "seed" wafer by chemical vapor deposition (CVD), and then detached as self-supporting wafers of some standard thickness (e.g., 250 μm) that can be manipulated by hand and directly substituted for wafer cells cut from monocrystalline silicon ingots. Solar cells made with this " kerfless " technique can have efficiencies approaching those of wafer-cut cells, but at appreciably lower cost if the CVD can be done at atmospheric pressure in a high-throughput inline process. [ 65 ] [ 66 ] The surface of epitaxial wafers may be textured to enhance light absorption. [ 75 ] [ 76 ] In June 2015, it was reported that heterojunction solar cells grown epitaxially on n-type monocrystalline silicon wafers had reached an efficiency of 22.5% over a total cell area of 243.4 cm². [ 77 ] Polycrystalline silicon , or multicrystalline silicon (multi-Si), cells are made from cast square ingots: large blocks of molten silicon carefully cooled and solidified. They consist of small crystals, giving the material its typical metal flake effect . Polysilicon cells are the most common type used in photovoltaics and are less expensive, but also less efficient, than those made from monocrystalline silicon. Ribbon silicon is a type of polycrystalline silicon: it is formed by drawing flat thin films from molten silicon, resulting in a polycrystalline structure. These cells are cheaper to make than multi-Si because of a great reduction in silicon waste, as this approach does not require sawing from ingots . [ 78 ] However, they are also less efficient. Mono-like-multi silicon, also called cast-mono, was developed in the 2000s and introduced commercially around 2009. This design uses polycrystalline casting chambers with small "seeds" of mono material. The result is a bulk mono-like material that is polycrystalline around the outsides.
When sliced for processing, the inner sections are high-efficiency mono-like cells (but square instead of "clipped"), while the outer edges are sold as conventional poly. This production method results in mono-like cells at poly-like prices. [ 79 ] Thin-film technologies reduce the amount of active material in a cell. Most designs sandwich the active material between two panes of glass. Since silicon solar panels only use one pane of glass, thin-film panels are approximately twice as heavy as crystalline silicon panels, although they have a smaller ecological impact (determined from life-cycle analysis ). [ 80 ] [ 81 ] Cadmium telluride is the only thin-film material so far to rival crystalline silicon in cost/watt. However, cadmium is highly toxic and tellurium ( anion : "telluride") supplies are limited. The cadmium present in the cells would be toxic if released. However, release is impossible during normal operation of the cells and is unlikely during fires in residential roofs. [ 82 ] A square meter of CdTe contains approximately the same amount of Cd as a single C-cell nickel-cadmium battery , in a more stable and less soluble form. [ 82 ] Copper indium gallium selenide (CIGS) is a direct-bandgap material. It has the highest efficiency (~20%) among all commercially significant thin-film materials (see CIGS solar cell ). Traditional methods of fabrication involve vacuum processes, including co-evaporation and sputtering. Recent developments at IBM and Nanosolar have attempted to lower the cost by using non-vacuum solution processes. [ 83 ] Silicon thin-film cells are mainly deposited by chemical vapor deposition (typically plasma-enhanced, PE-CVD) from silane gas and hydrogen gas. Depending on the deposition parameters, this can yield amorphous silicon (a-Si or a-Si:H), protocrystalline silicon or nanocrystalline silicon (nc-Si or nc-Si:H), also called microcrystalline silicon. [ 84 ] Amorphous silicon is the most well-developed thin-film technology to date. An amorphous silicon (a-Si) solar cell is made of non-crystalline or microcrystalline silicon. Amorphous silicon has a higher bandgap (1.7 eV) than crystalline silicon (c-Si) (1.1 eV), which means it absorbs the visible part of the solar spectrum more strongly than the higher-power-density infrared portion of the spectrum. The production of a-Si thin-film solar cells uses glass as a substrate and deposits a very thin layer of silicon by plasma-enhanced chemical vapor deposition (PECVD). Protocrystalline silicon with a low volume fraction of nanocrystalline silicon is optimal for high open-circuit voltage. [ 85 ] Nc-Si has about the same bandgap as c-Si, and nc-Si and a-Si can advantageously be combined in thin layers, creating a layered cell called a tandem cell. The top cell in a-Si absorbs the visible light and leaves the infrared part of the spectrum for the bottom cell in nc-Si. The semiconductor material gallium arsenide (GaAs) is also used for single-crystalline thin-film solar cells. Although GaAs cells are very expensive [ citation needed ] , they hold the world's record in efficiency for a single-junction solar cell, at 28.8%. [ 86 ] Such cells are typically fabricated on a crystalline silicon wafer [ 87 ] with a 41% fill factor ; by moving to porous silicon, the fill factor can be increased to 56%, at potentially reduced cost. Using less active GaAs material by fabricating nanowires is another potential pathway to cost reduction.
[ 88 ] GaAs is more commonly used in multijunction photovoltaic cells for concentrated photovoltaics (CPV, HCPV) and for solar panels on spacecraft , as the industry favours efficiency over cost for space-based solar power . Based on the literature and theoretical analysis, GaAs's high power conversion efficiency is attributed to several material properties. Multi-junction cells consist of multiple thin films, each essentially a solar cell grown on top of another, typically using metalorganic vapour phase epitaxy . Each layer has a different bandgap energy, allowing it to absorb electromagnetic radiation over a different portion of the spectrum. Multi-junction cells were originally developed for special applications such as satellites and space exploration , but are now used increasingly in terrestrial concentrator photovoltaics (CPV), an emerging technology that uses lenses and curved mirrors to concentrate sunlight onto small, highly efficient multi-junction solar cells. By concentrating sunlight up to a thousand times, high-concentration photovoltaics (HCPV) have the potential to outcompete conventional solar PV in the future. [ 89 ] : 21, 26 Tandem solar cells based on monolithic, series-connected gallium indium phosphide (GaInP), gallium arsenide (GaAs), and germanium (Ge) p–n junctions are increasing sales, despite cost pressures. [ 90 ] Between December 2006 and December 2007, the cost of 4N gallium metal rose from about $350 per kg to $680 per kg, and germanium metal prices rose substantially over the same period, to $1,000–1,200 per kg. The relevant materials include gallium (4N, 6N and 7N Ga), arsenic (4N, 6N and 7N) and germanium, pyrolytic boron nitride (pBN) crucibles for growing crystals, and boron oxide; these products are critical to the entire substrate manufacturing industry. [ citation needed ] A triple-junction cell, for example, may consist of the semiconductors GaAs , Ge , and GaInP2 . [ 91 ] Triple-junction GaAs solar cells were used as the power source of the Dutch four-time World Solar Challenge winners Nuna in 2003, 2005 and 2007, and by the Dutch solar cars Solutra (2005) , Twente One (2007) and 21Revolution (2009). [ citation needed ] GaAs-based multi-junction devices are the most efficient solar cells to date. On 15 October 2012, triple-junction metamorphic cells reached a record high of 44%. [ 92 ] In 2022, researchers at the Fraunhofer Institute for Solar Energy Systems ISE in Freiburg, Germany, demonstrated a record solar cell efficiency of 47.6% under 665-fold sunlight concentration with a four-junction concentrator solar cell. [ 93 ] [ 94 ] In 2016, a new approach was described for producing hybrid photovoltaic wafers combining the high efficiency of III-V multi-junction solar cells with the economies and wealth of experience associated with silicon. The technical complications involved in growing III-V material on silicon at the required high temperatures, a subject of study for some 30 years, are avoided by epitaxial growth of silicon on GaAs at low temperature by plasma-enhanced chemical vapor deposition (PECVD). [ 95 ] Si single-junction solar cells have been widely studied for decades and are reaching their practical efficiency limit of ~26% under 1-sun conditions. [ 96 ] Increasing this efficiency may require adding more cells, with bandgap energy larger than 1.1 eV, to the Si cell, allowing short-wavelength photons to be converted for the generation of additional voltage.
A dual-junction solar cell with a bandgap of 1.6–1.8 eV as a top cell can reduce thermalization loss, produce a high external radiative efficiency and achieve theoretical efficiencies over 45%. [ 97 ] A tandem cell can be fabricated by growing the GaInP and Si cells separately. Growing them separately can overcome the 4% lattice-constant mismatch between Si and the most common III–V layers that prevents direct integration into one cell. The two cells are therefore separated by a transparent glass slide so the lattice mismatch does not cause strain in the system. This creates a cell with four electrical contacts and two junctions that demonstrated an efficiency of 18.1%. With a fill factor (FF) of 76.2%, the Si bottom cell reaches an efficiency of 11.7% (± 0.4) in the tandem device, resulting in a cumulative tandem-cell efficiency of 29.8%. [ 98 ] This efficiency exceeds the theoretical limit of 29.4% [ 99 ] and the record experimental efficiency value of a 1-sun Si solar cell, and is also higher than the record-efficiency 1-sun GaAs device. However, using a GaAs substrate is expensive and not practical. Hence researchers try to make a cell with two electrical contact points and one junction, which does not need a GaAs substrate. This means there will be direct integration of GaInP and Si. Perovskite solar cells are solar cells that include a perovskite -structured material as the active layer. Most commonly, this is a solution-processed hybrid organic-inorganic tin or lead halide based material. Efficiencies have increased from below 5% at their first usage in 2009 to 25.5% in 2020, making them a very rapidly advancing technology and a hot topic in the solar cell field. [ 100 ] Researchers at the University of Rochester reported in 2023 that significant further improvements in cell efficiency can be achieved by utilizing the Purcell effect . [ 101 ] Perovskite solar cells are also forecast to be extremely cheap to scale up, making them a very attractive option for commercialisation. So far most types of perovskite solar cells have not reached sufficient operational stability to be commercialised, although many research groups are investigating ways to solve this. [ 102 ] The energy and environmental sustainability of perovskite solar cells and tandem perovskites have been shown to depend on their structures. [ 103 ] [ 104 ] [ 105 ] Photonic front contacts for light management can improve the performance of perovskite cells via enhanced broadband absorption, while allowing better operational stability due to protection against harmful high-energy (above-visible) radiation. [ 106 ] The inclusion of the toxic element lead in the most efficient perovskite solar cells is a potential problem for commercialisation. [ 107 ] With a transparent rear side, bifacial solar cells can absorb light from both the front and rear sides. Hence, they can produce more electricity than conventional monofacial solar cells. The first patent for bifacial solar cells was filed by Japanese researcher Hiroshi Mori in 1966. [ 108 ] Later, Russia is said to have been the first to deploy bifacial solar cells, in its space program in the 1970s. [ citation needed ] In 1976, the Institute for Solar Energy of the Technical University of Madrid began a research program for the development of bifacial solar cells, led by Prof. Antonio Luque .
Based on 1977 US and Spanish patents by Luque, a practical bifacial cell was proposed with a front face as anode and a rear face as cathode; in previously reported proposals and attempts, both faces were anodic and interconnection between cells was complicated and expensive. [ 109 ] [ 110 ] [ 111 ] In 1980, Andrés Cuevas, a PhD student in Luque's team, demonstrated experimentally a 50% increase in the output power of bifacial solar cells, relative to identically oriented and tilted monofacial ones, when a white background was provided. [ 112 ] In 1981 the company Isofoton was founded in Málaga to produce the developed bifacial cells, thus becoming the first industrialization of this PV cell technology. With an initial production capacity of 300 kW/yr of bifacial solar cells, early landmarks of Isofoton's production were the 20 kWp power plant in San Agustín de Guadalix , built in 1986 for Iberdrola , and an off-grid 20 kWp installation completed by 1988 in the village of Noto Gouye Diama ( Senegal ), funded by Spanish international aid and cooperation programs . Thanks to reduced manufacturing costs, companies have again been producing commercial bifacial modules since 2010. By 2017, there were at least eight certified PV manufacturers providing bifacial modules in North America. The International Technology Roadmap for Photovoltaics (ITRPV) predicted that the global market share of bifacial technology would expand from less than 5% in 2016 to 30% in 2027. [ 113 ] Due to the significant interest in bifacial technology, a recent study investigated the performance and optimization of bifacial solar modules worldwide. [ 114 ] [ 115 ] The results indicate that, across the globe, ground-mounted bifacial modules can offer only a ~10% gain in annual electricity yield compared to their monofacial counterparts for a ground albedo coefficient of 25% (typical for concrete and vegetation groundcovers). However, the gain can be increased to ~30% by elevating the module 1 m above the ground and enhancing the ground albedo coefficient to 50%. Sun et al. also derived a set of empirical equations that can optimize bifacial solar modules analytically. [ 114 ] In addition, there is evidence that bifacial panels work better than traditional panels in snowy environments, as bifacials on dual-axis trackers made 14% more electricity in a year than their monofacial counterparts, and 40% more during the peak winter months. [ 116 ] An online simulation tool is available to model the performance of bifacial modules in any arbitrary location across the world. It can also optimize bifacial modules as a function of tilt angle, azimuth angle, and elevation above the ground. [ 117 ] Intermediate-band photovoltaics in solar cell research provides methods for exceeding the Shockley–Queisser limit on the efficiency of a cell. It introduces an intermediate band (IB) energy level between the valence and conduction bands. Theoretically, introducing an IB allows two photons with energy less than the bandgap to excite an electron from the valence band to the conduction band . This increases the induced photocurrent and thereby the efficiency. [ 118 ] Luque and Martí first derived a theoretical limit for an IB device with one midgap energy level using detailed balance . They assumed no carriers were collected at the IB and that the device was under full concentration.
They found the IB maximum efficiency to be 63.2% for a bandgap of 1.95 eV, with the IB 0.71 eV from either the valence or conduction band, compared to a limiting efficiency of 47% under one-sun illumination. [ 119 ] Several means are under study to realize IB semiconductors with such an optimal three-bandgap configuration, namely via materials engineering (controlled inclusion of deep-level impurities or highly mismatched alloys) and nano-structuring (quantum dots in host hetero-crystals). [ 120 ] In 2014, researchers at the California NanoSystems Institute discovered that using kesterite and perovskite improved electric power conversion efficiency for solar cells. [ 121 ] In December 2022, it was reported that MIT researchers had developed ultralight fabric solar cells. These cells weigh one-hundredth as much as traditional panels while generating 18 times more power per kilogram. Thinner than a human hair, these cells can be laminated onto various surfaces, such as boat sails, tents, tarps, or drone wings, to extend their functionality. Using ink-based materials and scalable techniques, researchers coat the solar cell structure with printable electronic inks, completing the module with screen-printed electrodes . Tested on high-strength fabric, the cells produce 370 watts per kilogram, representing an improvement over conventional solar cells. [ 122 ] Photon upconversion is the process of using two low-energy ( e.g. , infrared) photons to produce one higher-energy photon; downconversion is the process of using one high-energy photon ( e.g. , ultraviolet) to produce two lower-energy photons. Either of these techniques could be used to produce higher-efficiency solar cells by allowing solar photons to be used more efficiently. The difficulty, however, is that the conversion efficiency of existing phosphors exhibiting up- or down-conversion is low, and is typically narrow-band. One upconversion technique is to incorporate lanthanide -doped materials ( Er3+ , Yb3+ , Ho3+ or a combination), taking advantage of their luminescence to convert infrared radiation to visible light. The upconversion process occurs when two infrared photons are absorbed by rare-earth ions to generate a (high-energy) absorbable photon. As an example, the energy-transfer upconversion process (ETU) consists of successive transfer processes between excited ions in the near infrared. The upconverter material could be placed below the solar cell to absorb the infrared light that passes through the silicon. Useful ions are most commonly found in the trivalent state; Er3+ ions have been the most used. Er3+ ions absorb solar radiation around 1.54 μm. Two Er3+ ions that have absorbed this radiation can interact with each other through an upconversion process. The excited ion emits light above the Si bandgap that is absorbed by the solar cell and creates an additional electron–hole pair that can generate current. However, the increase in efficiency has been small. In addition, fluoroindate glasses have low phonon energy and have been proposed as a suitable matrix doped with Ho3+ ions. [ 123 ] Dye-sensitized solar cells (DSSCs) are made of low-cost materials and do not need elaborate manufacturing equipment, so they can be made in a DIY fashion. In bulk they should be significantly less expensive than older solid-state cell designs.
DSSCs can be engineered into flexible sheets, and although their conversion efficiency is less than that of the best thin-film cells , their price/performance ratio may be high enough to allow them to compete with fossil-fuel electrical generation . Typically a ruthenium metalorganic dye (Ru-centered) is used as a monolayer of light-absorbing material, which is adsorbed onto a thin film of titanium dioxide . The dye-sensitized solar cell depends on this mesoporous layer of nanoparticulate titanium dioxide (TiO2) to greatly amplify the surface area (200–300 m²/g TiO2, as compared to approximately 10 m²/g for a flat single crystal), which allows for a greater number of dye molecules per solar cell area (which in turn increases the current). The photogenerated electrons from the light-absorbing dye are passed on to the n-type TiO2, and the holes are absorbed by an electrolyte on the other side of the dye. The circuit is completed by a redox couple in the electrolyte, which can be liquid or solid. This type of cell allows more flexible use of materials and is typically manufactured by screen printing or with ultrasonic nozzles , with the potential for lower processing costs than those used for bulk solar cells. However, the dyes in these cells also suffer from degradation under heat and UV light, and the cell casing is difficult to seal due to the solvents used in assembly. For this reason, researchers have developed solid-state dye-sensitized solar cells that use a solid electrolyte to avoid leakage. [ 124 ] The first commercial shipment of DSSC solar modules occurred in July 2009 from G24i Innovations. [ 125 ] Quantum dot solar cells (QDSCs) are based on the Gratzel cell, or dye-sensitized solar cell, architecture, but employ low-bandgap semiconductor nanoparticles , fabricated with crystallite sizes small enough to form quantum dots (such as CdS , CdSe , Sb2S3 , PbS , etc.), instead of organic or organometallic dyes as light absorbers. Due to the toxicity associated with Cd- and Pb-based compounds, there is also a series of "green" QD sensitizing materials in development (such as CuInS2, CuInSe2 and CuInSeS). [ 126 ] The size quantization of QDs allows the bandgap to be tuned by simply changing particle size. They also have high extinction coefficients and have shown the possibility of multiple exciton generation . [ 127 ] In a QDSC, a mesoporous layer of titanium dioxide nanoparticles forms the backbone of the cell, much like in a DSSC. This TiO2 layer can then be made photoactive by coating it with semiconductor quantum dots using chemical bath deposition , electrophoretic deposition or successive ionic layer adsorption and reaction. The electrical circuit is then completed through the use of a liquid or solid redox couple . The efficiency of QDSCs has increased [ 128 ] to over 5% for both liquid-junction [ 129 ] and solid-state cells, [ 130 ] with a reported peak efficiency of 11.91%. [ 131 ] In an effort to decrease production costs, the Prashant Kamat research group [ 132 ] demonstrated a solar paint, made with TiO2 and CdSe, that can be applied using a one-step method to any conductive surface, with efficiencies over 1%. [ 133 ] However, the absorption of quantum dots (QDs) in QDSCs is weak at room temperature. [ 134 ] Plasmonic nanoparticles (e.g., nanostars) can be utilized to address the weak absorption of QDs. [ 135 ] Adding an external infrared pumping source to excite intraband and interband transitions of QDs is another solution.
[ 134 ] Organic solar cells and polymer solar cells are built from thin films (typically 100 nm) of organic semiconductors , including polymers such as polyphenylene vinylene, and small-molecule compounds like copper phthalocyanine (a blue or green organic pigment), carbon fullerenes and fullerene derivatives such as PCBM . They can be processed from liquid solution, offering the possibility of a simple roll-to-roll printing process and potentially leading to inexpensive, large-scale production. In addition, these cells could be beneficial for some applications where mechanical flexibility and disposability are important. Current cell efficiencies are, however, very low, and practical devices are essentially non-existent. Energy conversion efficiencies achieved to date using conductive polymers are very low compared to inorganic materials. However, Konarka Power Plastic reached an efficiency of 8.3%, [ 136 ] and organic tandem cells reached 11.1% in 2012. [ citation needed ] The active region of an organic device consists of two materials, one electron donor and one electron acceptor. When a photon is converted into an electron-hole pair, typically in the donor material, the charges tend to remain bound in the form of an exciton , unlike in most other solar cell types; they separate when the exciton diffuses to the donor-acceptor interface. The short exciton diffusion lengths of most polymer systems tend to limit the efficiency of such devices. Nanostructured interfaces, sometimes in the form of bulk heterojunctions, can improve performance. [ 137 ] In 2011, MIT and Michigan State researchers developed solar cells with a power efficiency close to 2% and a transparency to the human eye greater than 65%, achieved by selectively absorbing the ultraviolet and near-infrared parts of the spectrum with small-molecule compounds. [ 138 ] [ 139 ] Researchers at UCLA more recently developed an analogous polymer solar cell, following the same approach, that is 70% transparent and has a 4% power conversion efficiency. [ 140 ] [ 141 ] [ 142 ] These lightweight, flexible cells can be produced in bulk at low cost and could be used to create power-generating windows. In 2013, researchers announced polymer cells with some 3% efficiency. They used block copolymers , self-assembling organic materials that arrange themselves into distinct layers. The research focused on P3HT-b-PFTBT, which separates into bands some 16 nanometers wide. [ 143 ] [ 144 ] Adaptive cells change their absorption/reflection characteristics depending on environmental conditions. An adaptive material responds to the intensity and angle of incident light. At the part of the cell where the light is most intense, the cell surface changes from reflective to adaptive, allowing the light to penetrate the cell. The other parts of the cell remain reflective, increasing the retention of the absorbed light within the cell. [ 145 ] In 2014, a system was developed that combined an adaptive surface with a glass substrate that redirects the absorbed light to a light absorber on the edges of the sheet. The system also includes an array of fixed lenses/mirrors to concentrate light onto the adaptive surface. As the day continues, the concentrated light moves along the surface of the cell. That surface switches from reflective to adaptive when the light is most concentrated, and back to reflective after the light moves along.
[ 145 ] Unlike rays incident on a flat surface, light incident on a textured surface does not simply reflect back into the air; rather, some rays are bounced back onto the surface due to its geometry, increasing light absorption and light-to-electricity conversion efficiency. Surface texturing is one technique used to reduce optical losses, primarily in cost-effective, low-light-absorption thin-film solar cells . In combination with an anti-reflective coating , surface texturing can effectively trap light rays within a thin-film silicon solar cell. Consequently, for the same power output, the thickness of solar cells can be decreased thanks to the increased absorption of light. Surface texture geometries and texturing techniques vary. Etching c-Si substrates with anisotropic etchants can produce randomly distributed square-based pyramids on the surface. [ 146 ] Studies show that c-Si wafers can be etched down to form nano-scale inverted pyramids. In 2012, researchers at MIT reported that c-Si films textured with nanoscale inverted pyramids could achieve light absorption comparable to 30-times-thicker planar c-Si. [ 147 ] Multicrystalline solar cells, while easier to manufacture but less efficient, can be surface-textured through isotropic etching or photolithography methods to yield a solar energy conversion efficiency comparable to that of monocrystalline silicon cells. [ 148 ] [ 149 ] This texture effect, as well as its interaction with other interfaces in the PV module, is a challenging optical simulation task, but one efficient method for modeling and optimization is the OPTOS formalism . [ 150 ] Solar cells are commonly encapsulated in a transparent polymeric resin to protect the delicate solar cell regions from coming into contact with moisture, dirt, ice, and other environmental conditions expected during operation. Encapsulants are commonly made from polyvinyl acetate or glass. Most encapsulants are uniform in structure and composition, which increases light collection owing to light trapping from total internal reflection of light within the resin. Research has been conducted into structuring the encapsulant to provide further collection of light. Such encapsulants have included roughened glass surfaces, [ 151 ] diffractive elements, [ 152 ] prism arrays, [ 153 ] air prisms, [ 154 ] v-grooves, [ 155 ] diffuse elements, and multi-directional waveguide arrays. [ 156 ] Prism arrays show an overall 5% increase in total solar energy conversion. [ 154 ] Arrays of vertically aligned broadband waveguides provide a 10% increase at normal incidence, as well as wide-angle collection enhancement of up to 4%, [ 157 ] with optimized structures yielding up to a 20% increase in short-circuit current. [ 158 ] Active coatings that convert infrared light into visible light have shown a 30% increase. [ 159 ] Nanoparticle coatings inducing plasmonic light scattering increase wide-angle conversion efficiency by up to 3%. Optical structures have also been created in encapsulation materials to effectively "cloak" the metallic front contacts. [ 160 ] [ 161 ] Solar cells share some of the same processing and manufacturing techniques as other semiconductor devices. However, the strict requirements for cleanliness and quality control in semiconductor fabrication are more relaxed for solar cells, lowering costs.
Polycrystalline silicon wafers are made by wire-sawing block-cast silicon ingots into wafers 180 to 350 micrometers thick. The wafers are usually lightly p-type -doped. A surface diffusion of n-type dopants is performed on the front side of the wafer. This forms a p–n junction a few hundred nanometers below the surface. Anti-reflection coatings are then typically applied to increase the amount of light coupled into the solar cell. Silicon nitride has gradually replaced titanium dioxide as the preferred material because of its excellent surface passivation qualities: it prevents carrier recombination at the cell surface. A layer several hundred nanometers thick is applied using plasma-enhanced chemical vapor deposition . Some solar cells have textured front surfaces that, like anti-reflection coatings, increase the amount of light reaching the wafer. Such surfaces were first applied to single-crystal silicon, followed somewhat later by multicrystalline silicon. A full-area metal contact is made on the back surface, and a grid-like metal contact made up of fine "fingers" and larger "bus bars" is screen-printed onto the front surface using a silver paste. This is an evolution of the so-called "wet" process for applying electrodes, first described in a US patent filed in 1981 by Bayer AG . [ 162 ] The rear contact is formed by screen-printing a metal paste. To maximize the frontal surface area available for sunlight and improve solar cell efficiency, manufacturers use various rear-contact electrode techniques. The paste is then fired at several hundred degrees Celsius to form metal electrodes in ohmic contact with the silicon. Some companies use an additional electroplating step to increase efficiency. After the metal contacts are made, the solar cells are interconnected by flat wires or metal ribbons and assembled into modules or "solar panels". Solar panels have a sheet of tempered glass on the front and a polymer or glass encapsulation on the back. The type of manufacturing and recycling partly determines how effective a module is in decreasing emissions and having a positive environmental effect. [ 48 ] Such differences in effectiveness could be quantified [ 48 ] to guide production of the most suitable types of products for different purposes in different regions across time. The National Renewable Energy Laboratory tests and validates solar technologies. Three reliable groups certify solar equipment: UL and IEEE (both U.S. standards) and IEC [ citation needed ] . The IEA 's 2022 Special Report highlights China's dominance over the solar PV supply chain , with an investment exceeding US$50 billion and the creation of around 300,000 jobs since 2011. China commands over 80% of all manufacturing stages for solar panels. This control has drastically cut costs but also led to issues like supply-demand imbalances and polysilicon production constraints. Nevertheless, China's strategic policies have reduced solar PV costs by more than 80%, increasing global affordability. In 2021, China's solar PV exports were over US$30 billion. [ 168 ] Meeting global energy and climate targets necessitates a major expansion in solar PV manufacturing, aiming for over 630 GW by 2030 according to the IEA's "Roadmap to Net Zero Emissions by 2050". China's dominance, controlling nearly 95% of key solar PV components and 40% of the world's polysilicon production in Xinjiang, poses risks of supply shortages and cost surges. Demand for critical minerals such as silver may exceed 30% of 2020's global production by 2030. [ 168 ]
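The "few hundred nanometers" junction depth quoted above for the n-type surface diffusion can be reproduced from the standard constant-source diffusion profile N(x) = N_s·erfc(x/(2√(Dt))), with the junction at the depth where the profile falls to the wafer's background doping. The diffusivity, time, and concentrations below are order-of-magnitude assumptions for illustration, not actual process data:

```python
from math import erfc, sqrt

def erfc_inv(y: float) -> float:
    """Invert erfc by bisection (erfc is monotonically decreasing for x >= 0)."""
    lo, hi = 0.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if erfc(mid) > y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

D = 1e-14            # dopant diffusivity in silicon, cm^2/s (assumed)
t = 30 * 60          # 30-minute diffusion, s (assumed)
N_surface = 1e20     # surface dopant concentration, cm^-3 (assumed)
N_background = 1e16  # p-type wafer doping, cm^-3 (assumed)

# N_s * erfc(x_j / (2*sqrt(D*t))) = N_b  =>  x_j = 2*sqrt(D*t) * erfc_inv(N_b / N_s)
x_j_cm = 2 * sqrt(D * t) * erfc_inv(N_background / N_surface)
print(f"junction depth ~ {x_j_cm * 1e7:.0f} nm")  # a few hundred nanometers
```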
In 2021, China's share of solar PV module production reached approximately 70%, an increase from 50% in 2010. Other key producers included Vietnam (5%), Malaysia (4%), Korea (4%), and Thailand (2%), with much of their production capacity developed by Chinese companies aimed at exports, notably to the United States. [ 168 ] As of September 2018, sixty percent of the world's solar photovoltaic modules were made in China. [ 169 ] As of May 2018, the largest photovoltaic plant in the world was located in the Tengger desert in China. [ 170 ] In 2018, China added more photovoltaic installed capacity (in GW) than the next 9 countries combined. [ 171 ] In the first half of 2023, China's production of PV modules exceeded 220 GW, marking an increase of over 62% compared to the same period in 2022. In 2022, China maintained its position as the world's largest PV module producer, holding a dominant market share of 77.8%. [ 172 ] In 2022, Vietnam was the second-largest PV module producer, behind only China, with its production capacity rising to 24.1 GW, a significant 47% increase from the 16.4 GW produced in 2021. Vietnam accounts for 6.4% of the world's photovoltaic production. [ 172 ] In 2022, Malaysia was the third-largest PV module producer, with a production capacity of 10.8 GW, accounting for 2.8% of global production. This placed it behind China, which dominated with 77.8%, and Vietnam, which contributed 6.4%. [ 172 ] Solar energy production in the U.S. doubled from 2013 to 2019. [ 173 ] This was driven first by the falling price of quality silicon, [ 174 ] [ 175 ] [ 176 ] and later simply by the globally plunging cost of photovoltaic modules. [ 170 ] [ 177 ] In 2018, the U.S. added 10.8 GW of installed solar photovoltaic capacity, an increase of 21%. [ 171 ] Latin America : Latin America has emerged as a promising region for solar energy development in recent years, with over 10 GW of installations in 2020. The solar market in Latin America has been driven by abundant solar resources, falling costs, competitive auctions and growing electricity demand. Some of the leading countries for solar energy in Latin America are Brazil, Mexico, Chile and Argentina. However, the solar market in Latin America also faces some challenges, such as political instability, financing gaps and power transmission bottlenecks. [ citation needed ] Middle East and Africa : The Middle East and Africa has also experienced significant growth in solar energy deployment in recent years, with over 8 GW of installations in 2020. The solar market in the Middle East and Africa has been driven by low-cost solar generation, the diversification of energy sources, climate change mitigation, and rural electrification. Some of the notable countries for solar energy in the Middle East and Africa are Saudi Arabia, the United Arab Emirates, Egypt, Morocco and South Africa. However, the solar market in the Middle East and Africa also faces several obstacles, including social unrest, regulatory uncertainty and technical barriers. [ 178 ] Like many other energy generation technologies, the manufacture of solar cells, especially its rapid expansion, has many environmental and supply-chain implications. Global mining may adapt and potentially expand to source the needed minerals, which vary per type of solar cell. [ 179 ] [ 180 ]
Recycling solar panels could be a source for materials that would otherwise need to be mined. [ 48 ] Solar cells degrade over time and lose efficiency. Solar cells in extreme climates, such as desert or polar, are more prone to degradation, due to exposure to harsh UV light and snow loads respectively. [ 181 ] Usually, solar panels are given a lifespan of 25–30 years before decommissioning. [ 182 ] The International Renewable Energy Agency estimated that the amount of solar panel electronic waste generated in 2016 was 43,500–250,000 metric tons. This number is estimated to increase substantially by 2030, reaching an estimated waste volume of 60–78 million metric tons in 2050. [ 183 ] The most widely used solar cells in the market are crystalline solar cells. A product is truly recyclable only if its materials can be recovered and used again. In the 2016 Paris Agreement , 195 countries agreed to reduce their carbon emissions by shifting their focus away from fossil fuels and towards renewable energy sources. Owing to this, solar power is expected to be a major contributor to electricity generation all over the world, and there will consequently be a large volume of solar panels to recycle at the end of their life cycle. Many researchers around the globe have voiced concern about finding ways to reuse silicon cells after recycling. [ 184 ] [ 185 ] [ 186 ] [ 187 ] Additionally, these cells contain hazardous elements/compounds, including lead (Pb), cadmium (Cd) or cadmium sulfide (CdS), selenium (Se), and barium (Ba) as dopants, aside from the valuable silicon (Si), aluminum (Al), silver (Ag), and copper (Cu). If not disposed of with the proper technique, these harmful elements/compounds can have severe effects on human life and wildlife alike. [ 188 ] There are various ways c-Si can be recycled; mainly thermal and chemical separation methods are used, proceeding in two stages. [ 189 ] The first solar panel recycling plant opened in Rousset, France in 2018. It was set to recycle 1,300 tonnes of solar panel waste a year, and can increase its capacity to 4,000 tonnes. [ 190 ] [ 191 ] [ 192 ] If recycling is driven only by market-based prices, rather than also by environmental regulations, the economic incentives for recycling remain uncertain, and as of 2021 the environmental impact of the different recycling techniques developed still needs to be quantified. [ 48 ]
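How the 25–30-year lifespans quoted above relate to power output can be sketched with a constant compound degradation rate; the 0.7% per year figure here is an assumption for illustration only, not an industry-wide measurement:

```python
def remaining_output(years: int, annual_degradation: float = 0.007) -> float:
    """Fraction of initial power remaining after compound degradation."""
    return (1 - annual_degradation) ** years

for years in (10, 25, 30):
    print(f"after {years:2d} years: {remaining_output(years):.0%} of initial output")
# With this assumed rate a panel still delivers roughly 84% of its rated power
# at year 25, so decommissioning is typically an economic rather than a hard limit.
```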
https://en.wikipedia.org/wiki/Solar_cell
A solar chimney – often referred to as a thermal chimney – is a way of improving the natural ventilation of buildings by using convection of air heated by passive solar energy. A simple description of a solar chimney is that of a vertical shaft utilizing solar energy to enhance the natural stack ventilation through a building. The solar chimney has been in use for centuries, particularly in the Middle East and Near East by the Persians , as well as in Europe by the Romans . In its simplest form, the solar chimney consists of a black-painted chimney . During the day solar energy heats the chimney and the air within it, creating an updraft of air in the chimney. The suction created at the chimney's base can be used to ventilate and cool the building below. [ 1 ] In most parts of the world it is easier to harness wind power for such ventilation, as with a windcatcher , but on hot windless days a solar chimney can provide ventilation where otherwise there would be none. There are, however, a number of solar chimney variations built around the same basic design elements. A principle has been proposed for solar power generation, using a large greenhouse at the base rather than relying solely on heating the chimney itself. (For further information on this issue, see Solar updraft tower .) Solar chimneys are painted black so that they absorb the sun's heat more effectively. When the air inside the chimney is heated, it rises and pulls cold air out from under the ground via heat exchange tubes. Solar chimneys, also called heat chimneys or heat stacks, can also be used in architectural settings to decrease the energy used by mechanical systems (systems that heat and cool the building through mechanical means). For decades, air conditioning and mechanical ventilation have been the standard method of environmental control in many building types, especially offices, in developed countries. Pollution and the reallocation of energy supplies have led to a new environmental approach in building design. Innovative technologies along with bioclimatic principles and traditional design strategies are often combined to create new and potentially successful design solutions. The solar chimney is one of these concepts currently explored by scientists as well as designers, mostly through research and experimentation. A solar chimney can serve many purposes. Direct sunlight warms air inside the chimney, causing it to rise out the top and drawing air in from the bottom. This drawing of air can be used to ventilate a home or office, to draw air through a geothermal heat exchange, or to ventilate only a specific area such as a composting toilet. Natural ventilation can be created by providing vents in the upper level of a building to allow warm air to rise by convection and escape to the outside. At the same time cooler air can be drawn in through vents at the lower level. Trees may be planted on that side of the building to provide shade for cooler outside air. This natural ventilation process can be augmented by a solar chimney. The chimney has to be higher than the roof level, and has to be constructed on the wall facing the direction of the Sun. Absorption of heat from the Sun can be increased by using a glazed surface on the side facing the Sun. Heat-absorbing material can be used on the opposing side. The size of the heat-absorbing surface is more important than the diameter of the chimney: a large surface area allows more effective heat exchange with the air to be heated by solar radiation.
Heating of the air within the chimney enhances convection, and hence airflow through the chimney. Openings of the vents in the chimney should face away from the direction of the prevailing wind . To further maximize the cooling effect, the incoming air may be led through underground ducts before it is allowed to enter the building. The solar chimney can be improved by integrating it with a trombe wall . The added advantage of this design is that the system may be reversed during the cold season, providing solar heating instead. A variation of the solar chimney concept is the solar attic . In a hot sunny climate the attic space is often blazingly hot in the summer. In a conventional building this presents a problem, as it leads to the need for increased air conditioning . By integrating the attic space with a solar chimney, the hot air in the attic can be put to work: it can help the convection in the chimney, improving ventilation. [ 4 ] The use of a solar chimney may benefit the natural ventilation and passive cooling strategies of buildings, thus helping to reduce energy use, CO2 emissions and pollution in general. The Building Research Establishment (BRE) office building in Garston, Watford, United Kingdom, incorporates solar-assisted passive ventilation stacks as part of its ventilation strategy. Designed by architects Feilden Clegg Bradley, the BRE offices aim to reduce energy consumption and CO2 emissions by 30% from current best practice guidelines and to sustain comfortable environmental conditions without the use of air conditioning. The passive ventilation stacks, solar shading, and hollow concrete slabs with embedded underfloor cooling are key features of this building. Ventilation and heating systems are controlled by the building management system (BMS), while a degree of user override is provided to adjust conditions to occupants' needs. The building utilizes five vertical shafts as an integral part of the ventilation and cooling strategy. The main components of these stacks are a south-facing glass-block wall, thermal mass walls and stainless steel round exhausts rising a few meters above roof level. The chimneys are connected to the curved hollow concrete floor slabs, which are cooled via night ventilation. Pipes embedded in the floor can provide additional cooling utilizing groundwater. On warm windy days air is drawn in through passages in the curved hollow concrete floor slabs. Air naturally rising out through the stainless steel chimneys by stack ventilation enhances the air flow through the building. The movement of air across the chimney tops enhances the stack effect. During warm, still days, the building relies mostly on the stack effect, while air is taken from the shady north side of the building. Low-energy fans in the tops of the stacks can also be used to improve airflow. Overnight, control systems enable ventilation paths through the hollow concrete slab, removing the heat stored during the day so that the slab remains cold for the following day. The exposed curved ceiling gives more surface area than a flat ceiling would, acting as a heat sink , again providing summer cooling. Research based on actual performance measurements of the passive stacks found that they enhanced the cooling ventilation of the space during warm and still days and may also have the potential to assist night-time cooling due to their thermally massive structure. [ 5 ]
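The airflow such a chimney can drive is conventionally estimated with the stack-effect relation Q = C_d·A·√(2gH·ΔT/T_i). A minimal sketch with assumed geometry and temperatures (the discharge coefficient of 0.6 is a common default for sharp-edged openings):

```python
from math import sqrt

G = 9.81  # gravitational acceleration, m/s^2

def stack_flow(area_m2: float, height_m: float, t_chimney_k: float, t_outside_k: float,
               discharge_coeff: float = 0.6) -> float:
    """Buoyancy-driven volumetric airflow (m^3/s) through a chimney, using the
    standard stack-effect formula with the hot chimney-air temperature t_chimney_k."""
    dt = t_chimney_k - t_outside_k
    return discharge_coeff * area_m2 * sqrt(2 * G * height_m * dt / t_chimney_k)

# Assumed example: 0.25 m^2 inlet, 4 m tall chimney, air solar-heated to 45 C vs 30 C outside
q = stack_flow(0.25, 4.0, 318.15, 303.15)
print(f"{q:.2f} m^3/s (~{q * 3600:.0f} m^3/h)")  # ~0.29 m^3/s
```

Note how the flow grows only with the square root of both chimney height and temperature difference, which is why the heat-absorbing surface area stressed above matters more than modest increases in chimney diameter.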
A technology closely related to the solar chimney is the evaporative down-draft cooltower. In areas with a hot, arid climate this approach may contribute to a sustainable way to provide air conditioning for buildings. The principle is to allow water to evaporate at the top of a tower, either by using evaporative cooling pads or by spraying water. Evaporation cools the incoming air, causing a downdraft of cool air that brings down the temperature inside the building. [ 6 ] Airflow can be increased by using a solar chimney on the opposite side of the building to help vent hot air to the outside. [ 7 ] This concept has been used for the Visitor Center of Zion National Park , which was designed by the High Performance Buildings Research group of the National Renewable Energy Laboratory (NREL). The principle of the downdraft cooltower has been proposed for solar power generation as well. (See Energy tower for more information.) Evaporation of moisture from the pads on top of the Toguna buildings built by the Dogon people of Mali, Africa, contributes to the coolness felt by the men who rest underneath. The women's buildings on the outskirts of town function as more conventional solar chimneys.
https://en.wikipedia.org/wiki/Solar_chimney
In solar observation and imaging , coordinate systems are used to identify and communicate locations on and around the Sun . The Sun is made of plasma , so there are no permanent demarcated points that can be referenced. The Sun is a rotating sphere of plasma at the center of the Solar System. It lacks a solid or liquid surface, so the interface separating its interior and its exterior is usually defined as the boundary where the plasma becomes opaque to visible light, the photosphere . Since plasma is gaseous in nature, this surface has no permanent demarcated points that can be used for reference. Furthermore, its rate of rotation varies with latitude, rotating faster at the equator than at the poles . [ 1 ] [ 2 ] In observations of the solar disk, cardinal directions are typically defined so that the Sun's northern and southern hemispheres point toward Earth's northern and southern celestial poles , respectively, and the Sun's eastern and western hemispheres point toward Earth's eastern and western horizons , respectively. In this scheme, clockwise from north at 90° intervals one encounters west, south, and east, and the direction of solar rotation is from east to west. [ 3 ] [ 4 ] Heliographic coordinate systems are used to identify locations on the Sun's surface. The two most commonly used systems are the Stonyhurst and Carrington systems. Both define latitude as the angular distance from the solar equator, but they differ in how they define longitude . In Stonyhurst coordinates, the longitude is fixed for an observer on Earth, while in Carrington coordinates, the longitude is fixed for the Sun's rotation. [ 5 ] [ 6 ] [ 7 ] [ 8 ] The Stonyhurst heliographic coordinate system, developed at Stonyhurst College in the 1800s, has its origin (where longitude and latitude are both 0°) at the point where the solar equator intersects the central solar meridian as seen from Earth. Longitude in this system is therefore fixed for observers on Earth. [ 8 ] [ 5 ] The Carrington heliographic coordinate system, established by Richard C. Carrington in 1863, rotates with the Sun at a fixed rate based on the observed rotation of low-latitude sunspots. It rotates with a sidereal period of exactly 25.38 days, which corresponds to a mean synodic period of 27.2753 days. [ 9 ] : 221 [ 1 ] [ 2 ] [ 5 ] Whenever the Carrington prime meridian (the line of 0° Carrington longitude) passes the Sun's central meridian as seen from Earth, a new Carrington rotation begins. These rotations are numbered sequentially, with Carrington rotation number 1 starting on 9 November 1853. [ 10 ] [ 11 ] [ 12 ] [ 7 ] : 278 Heliocentric coordinate systems measure spatial positions relative to an origin at the Sun's center. There are four systems in use: the heliocentric inertial (HCI) system, the heliocentric Aries ecliptic (HAE) system, the heliocentric Earth ecliptic (HEE) system, and the heliocentric Earth equatorial (HEEQ) system. Each is defined by two axes, with the third axis completing a right-handed Cartesian triad : in HCI, the Z-axis is the solar rotation axis and the X-axis points toward the solar ascending node on the ecliptic; in HAE, the Z-axis is the ecliptic north pole and the X-axis points toward the First Point of Aries; in HEE, the Z-axis is the ecliptic north pole and the X-axis points along the Sun–Earth line; and in HEEQ, the Z-axis is the solar rotation axis and the X-axis points toward the intersection of the solar equator with the solar central meridian as seen from Earth. [ 1 ] [ 13 ] [ 14 ] [ 15 ]
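Because Carrington rotations advance at a fixed mean synodic period from a known start date, an approximate rotation number can be computed directly from a calendar date. A minimal sketch (the epoch Julian Date below is approximate, and precise work should rely on a published ephemeris):

```python
from datetime import datetime, timezone

CARRINGTON_EPOCH_JD = 2398167.33  # approximate JD of the start of rotation 1 (9 Nov 1853)
SYNODIC_PERIOD_DAYS = 27.2753     # mean synodic rotation period quoted above

def carrington_rotation(when: datetime) -> int:
    """Approximate Carrington rotation number containing the given UTC datetime."""
    jd = when.timestamp() / 86400.0 + 2440587.5  # Unix epoch 1970-01-01 00:00 UTC = JD 2440587.5
    return int((jd - CARRINGTON_EPOCH_JD) // SYNODIC_PERIOD_DAYS) + 1

# Early January 2000 falls in roughly rotation 1958
print(carrington_rotation(datetime(2000, 1, 2, tzinfo=timezone.utc)))
```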
https://en.wikipedia.org/wiki/Solar_coordinate_systems
Solar desalination is a desalination technique powered by solar energy . The two common methods are direct (thermal) and indirect (photovoltaic). [ 1 ] Solar distillation has been used for thousands of years. Early Greek mariners and Persian alchemists produced both freshwater and medicinal distillates. Solar stills were the first method used on a large scale to convert contaminated water into a potable form. [ 2 ] In 1870 the first US patent was granted for a solar distillation device, to Norman Wheeler and Walton Evans. [ 3 ] Two years later in Las Salinas, Chile, Swedish engineer Charles Wilson began building a solar distillation plant to supply freshwater to workers at a saltpeter and silver mine . It operated continuously for 40 years and distilled an average of 22.7 m³ of water a day using the effluent from mining operations as its feed water. [ 4 ] Solar desalination in the United States began in the early 1950s when Congress passed the Conversion of Saline Water Act, which led to the establishment of the Office of Saline Water (OSW) in 1955. OSW's main function was to administer funds for desalination research and development projects. [ 5 ] One of five demonstration plants was located in Daytona Beach, Florida . Many of the projects were aimed at solving water scarcity issues in remote desert and coastal communities. [ 4 ] In the 1960s and 1970s several distillation plants were constructed on the Greek isles with capacities ranging from 2000 to 8500 m³/day. [ 2 ] In 1984 a plant was constructed in Abu-Dhabi with a capacity of 120 m³/day that is still in operation. [ 4 ] In Italy , an open-source design called "the Eliodomestico" by Gabriele Diamanti was developed for personal use at a cost of $50. [ 6 ] Of the estimated 22 million m³ of freshwater produced daily through desalination worldwide, less than 1% is produced using solar energy. [ 2 ] The prevailing methods of desalination, multi-stage flash (MSF) and reverse osmosis (RO), are energy-intensive and rely heavily on fossil fuels. [ 8 ] Because of inexpensive methods of freshwater delivery and abundant low-cost energy resources, solar distillation has been viewed as cost-prohibitive and impractical. [ 2 ] It is estimated that desalination plants powered by conventional fuels consume the equivalent of 203 million tons of fuel a year. [ 2 ] Solar desalination harnesses solar energy to convert saline water into fresh water, making it suitable for human consumption and irrigation, and the process can be categorized based on the type of solar energy source utilized. In direct solar desalination, saline water absorbs solar energy and evaporates, leaving behind salt and other impurities. An example of this is the solar still, in which an enclosed environment allows for the collection and condensation of pure water vapor. Indirect solar desalination, on the other hand, involves the use of solar collectors that capture and transfer solar energy to saline water; this energy is then used to power desalination processes such as humidification–dehumidification (HDH) and diffusion-driven methods. In the direct (distillation) method, a solar collector is coupled with a distilling mechanism. [ 9 ] Solar stills of this type are described in survival guides, provided in marine survival kits, and employed in many small desalination and distillation plants. Water production is proportional to the area of the solar surface and the solar incidence angle, and has an average estimated value of 3–4 litres per square metre (0.074–0.098 US gal/sq ft) per day. [ 2 ]
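The quoted daily yield is consistent with a simple energy balance: daily insolation times collection efficiency, divided by the latent heat of vaporization. In the sketch below, the insolation figure is an assumption typical of a sunny site, and the efficiencies are the 30–40% values cited for simple stills later in this section:

```python
LATENT_HEAT_VAPORIZATION = 2.26e6  # J/kg for water near 100 C

def still_yield_l_per_m2_day(insolation_kwh_m2_day: float, efficiency: float) -> float:
    """Approximate solar-still freshwater output in litres per square metre per day."""
    energy_j = insolation_kwh_m2_day * 3.6e6  # kWh -> J
    return energy_j * efficiency / LATENT_HEAT_VAPORIZATION  # 1 kg of water ~ 1 litre

for eff in (0.30, 0.40):
    print(f"at {eff:.0%} efficiency: {still_yield_l_per_m2_day(5.5, eff):.1f} L/m^2/day")
# -> roughly 2.6 to 3.5 L/m^2/day, in line with the 3-4 L/m^2/day figure above
```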
Because of this proportionality and the relatively high cost of property and material for construction, distillation tends to favor plants with production capacities of less than 200 m³/d (53,000 US gal/d). [ 2 ] The solar still uses the same process as natural rainfall. A transparent cover encloses a pan where saline water is placed; the cover traps solar energy, evaporating the seawater. The vapor condenses on the inner face of the sloping transparent cover, leaving behind salts, inorganic and organic components, and microbes. The direct method achieves values of 4–5 L/m²/day and an efficiency of 30–40%. [ 10 ] Efficiency can be improved to 45% by using a double slope or an additional condenser. [ 11 ] In a wick still, feed water flows slowly through a porous radiation-absorbing pad. This requires less water to be heated and makes it easier to change the angle towards the sun, which saves time and achieves higher temperatures. [ 12 ] A diffusion still is composed of a hot storage tank coupled to a solar collector and the distillation unit. Heating is produced by the thermal diffusion between them. [ 13 ] Increasing the internal temperature using an external energy source can improve productivity. [ citation needed ] Direct methods use thermal energy to vaporize the seawater as part of a two-phase separation. Such methods are relatively simple and require little space, so they are normally used on small systems. However, they have a low production rate due to low operating temperature and pressure, so they are appropriate for systems yielding up to 200 m³/day. [ 14 ] Indirect desalination employs a solar collection array, consisting of photovoltaic and/or fluid-based thermal collectors, and a separate conventional desalination plant. [ 9 ] Many arrangements have been analyzed, experimentally tested and deployed. Categories include multiple-effect humidification (MEH), multi-stage flash distillation (MSF), multiple-effect distillation (MED), multiple-effect boiling (MEB), humidification–dehumidification (HDH), reverse osmosis (RO), and freeze-effect distillation. [ 8 ] Large solar desalination plants typically use indirect methods. [ citation needed ] Indirect solar desalination processes are categorized into single-phase processes (membrane-based) and phase-change processes (non-membrane-based). [ 15 ] Single-phase desalination uses photovoltaics to produce electricity that drives pumps. [ 16 ] Phase-change (or multi-phase) solar desalination is not membrane-based. [ 17 ] Indirect solar desalination systems using photovoltaic (PV) panels and reverse osmosis (RO) have been in use since 2009. Output by 2013 reached 1,600 litres (420 US gal) per hour per system, and 200 litres (53 US gal) per day per square metre of PV panel. [ 18 ] [ 19 ] Utirik Atoll in the Pacific Ocean has been supplied with fresh water this way since 2010. [ 20 ] Single-phase desalination processes include reverse osmosis and membrane distillation , in which membranes filter water from contaminants. [ 15 ] [ 17 ] As of 2014 reverse osmosis (RO) made up about 52% of indirect methods. [ 21 ] [ 22 ] Pumps push salt water through RO modules at high pressure. [ 15 ] [ 21 ] RO systems depend on pressure differences: a pressure of 55–65 bar is required to purify seawater, and an average of 5 kWh/m³ of energy is typically required to run a large-scale RO plant. [ 21 ] Membrane distillation (MD) utilizes the pressure difference across the two sides of a microporous hydrophobic membrane. [ 21 ] [ 23 ]
Fresh water can be extracted through four MD methods: Direct Contact (DCMD), Air Gap (AGMD), Sweeping Gas (SGMD) and Vacuum (VMD). [ 21 ] [ 23 ] Estimated water costs of $15/m³ to $18/m³ support medium-scale solar-MD plants. [ 21 ] [ 24 ] Energy consumption ranges from 200 to 300 kWh/m³. [ 25 ] Phase-change (or multi-phase) solar desalination [ 17 ] [ 22 ] [ 26 ] includes multi-stage flash , multi-effect distillation (MED), and thermal vapor compression (VC) . [ 17 ] It is accomplished by using phase change materials (PCMs) to maximize latent heat storage and maintain high temperatures. [ 27 ] Operating temperatures range from 80–120 °C for MSF, 40–100 °C for VC, and 50–90 °C for the MED method. [ 17 ] [ 26 ] Multi-stage flash (MSF) requires seawater to travel through a series of vacuumed reactors held at successively lower pressures. [ 22 ] Heat is added to capture the latent heat of the vapor. As seawater flows through the reactors, steam is collected and condensed to produce fresh water. [ 22 ] In multi-effect distillation (MED) , seawater flows through successively lower-pressure vessels, reusing latent heat to evaporate seawater for condensation. [ 22 ] MED desalination requires less energy than MSF due to higher efficiency in thermodynamic transfer rates. [ 22 ] [ 26 ] The multi-stage flash (MSF) method is a widely used technology for desalination, particularly in large-scale seawater desalination plants. It is based on the principle of utilizing the evaporation and condensation process to separate saltwater from freshwater. [ 28 ] In the MSF desalination process, seawater is heated and subjected to a series of flashings, or rapid depressurizations, in multiple stages, with each stage consisting of a series of heat exchangers and flash chambers. The MSF method, known for its high energy efficiency through the utilization of the latent heat of vaporization during the flashing process, accounted for approximately 45% of the world's desalination capacity and a dominant 93% of thermal systems as recorded in 2009. [ 2 ] In Margherita di Savoia , Italy, a 50–60 m³/day MSF plant uses a salinity-gradient solar pond. In El Paso , Texas, a similar project produces 19 m³/day. In Kuwait a MSF facility uses parabolic trough collectors to provide solar thermal energy to produce 100 m³ of fresh water a day. [ 8 ] And in Northern China an experimental, automatic, unmanned operation uses 80 m² of vacuum tube solar collectors coupled with a 1 kW wind turbine (to drive several small pumps) to produce 0.8 m³/day. [ 29 ] MSF solar distillation has an output capacity of 6–60 L/m²/day, versus the 3–4 L/m²/day standard output of a solar still. [ 8 ] MSF plants experience poor efficiency during start-up or low-energy periods. Achieving the highest efficiency requires controlled pressure drops across each stage and steady energy input. As a result, solar applications require some form of thermal energy storage to deal with cloud interference, varying solar patterns, nocturnal operation, and seasonal temperature changes. As thermal energy storage capacity increases, a more continuous process can be achieved and production rates approach maximum efficiency. [ 30 ] Indirect solar desalination by a form of humidification/dehumidification is in use in the seawater greenhouse . [ 31 ] Another indirect method, based on crystallization of the saline water, has only been used in demonstration projects, but it has the advantage of a low energy requirement.
Since the latent heat of fusion of water is 6.01 kJ/mol while the latent heat of vaporization at 100 °C is 40.66 kJ/mol, freezing should be cheaper than evaporation in terms of energy cost (a worked comparison is given at the end of this section). Furthermore, the corrosion risk is lower. There is, however, a disadvantage related to the difficulty of mechanically moving mixtures of ice and liquid. The process has not been commercialized yet due to cost and difficulties with refrigeration systems. [ 32 ] The most studied form of this process is refrigeration freezing: a refrigeration cycle is used to cool the water stream to form ice, after which the crystals are separated and melted to obtain fresh water. A recent example of this solar-powered process is the unit constructed in Saudi Arabia by Chicago Bridge and Iron Inc. in the late 1980s, which was shut down because of its inefficiency. [ 33 ] Nevertheless, a recent study of saline groundwater [ 34 ] concluded that a plant capable of producing 1 million gallons per day would produce water at a cost of $1.30/1000 gallons; if true, this would make it cost-competitive with reverse osmosis devices. Inherent design problems face thermal solar desalination projects. First, the system's efficiency is governed by competing heat and mass transfer rates during evaporation and condensation. [ 1 ] Second, the heat of condensation is valuable because it takes large amounts of solar energy to evaporate water and generate saturated, vapor-laden hot air. This energy is, by definition, transferred to the condenser's surface during condensation. With most solar stills, this heat is emitted as waste heat. [ citation needed ] Heat recovery allows the same heat input to be reused, providing several times the water output. [ 1 ] One solution is to reduce the pressure within the reservoir. This can be accomplished using a vacuum pump, and significantly decreases the required heat energy. For example, water at a pressure of 0.1 atmospheres boils at 50 °C (122 °F) rather than 100 °C (212 °F). [ 35 ] The solar humidification–dehumidification (HDH) process (also called the multiple-effect humidification–dehumidification process, solar multistage condensation evaporation cycle (SMCEC) or multiple-effect humidification (MEH)) [ 36 ] mimics the natural water cycle on a shorter time frame by distilling water. Thermal energy produces water vapor that is condensed in a separate chamber. In sophisticated systems, waste heat is minimized by collecting the heat from the condensing water vapor and pre-heating the incoming water source. [ 37 ] In indirect, or single-phase, solar-powered desalination, two systems are combined: a solar energy collection system (e.g. photovoltaic panels) and a desalination system such as reverse osmosis (RO). The main single-phase processes, generally membrane processes, consist of RO and electrodialysis (ED). Single-phase desalination is predominantly accomplished with photovoltaics that produce electricity to drive RO pumps. Over 15,000 desalination plants operate around the world. Nearly 70% use RO, yielding 44% of desalination output. [ 38 ] Alternative methods that use solar thermal collection to provide mechanical energy to drive RO are in development. RO is the most common desalination process due to its efficiency compared to thermal desalination systems, despite the need for water pre-treatment. [ 39 ] Economic and reliability considerations are the main challenges to improving PV-powered RO desalination systems.
However, plummeting PV panel costs are making solar-powered desalination more feasible. [ citation needed ] Solar-powered RO desalination is common in demonstration plants due to the modularity and scalability of both PV and RO systems. An economic analysis [ 40 ] that explored an optimisation strategy [ 41 ] for PV-powered RO reported favorable results. PV converts solar radiation into direct-current (DC) electricity, which powers the RO unit. The intermittent nature of sunlight and its variable intensity throughout the day complicate PV efficiency prediction and limit night-time desalination. Batteries can store solar energy for later use and allow continuous operation; similarly, thermal energy storage systems ensure constant performance after sunset and on cloudy days. [ 42 ] Studies have indicated that intermittent operation can increase biofouling . [ 43 ] However, batteries remain expensive and require ongoing maintenance, and storing and retrieving energy from the battery lowers efficiency. [ 43 ] The reported average cost of RO desalination is US$0.56/m³; using renewable energy, that cost can increase to as much as US$16/m³. [ 38 ] Although renewable energy costs are greater, their use is increasing. Both electrodialysis (ED) and reverse electrodialysis (RED) use selective ion transport through ion exchange membranes (IEMs), driven either by a concentration difference (RED) or an electrical potential (ED). [ 44 ] In ED, an electrical potential is applied to the electrodes; the cations travel toward the cathode and the anions toward the anode. Each exchange membrane allows the passage of only one ion type (cation or anion), so with this arrangement diluted and concentrated salt solutions form in the spaces between the membranes (channels). The configuration of this stack can be either horizontal or vertical, and the feed water passes in parallel through all the cells, providing a continuous flow of permeate and brine. Although this is a well-known process, electrodialysis is not commercially suited for seawater desalination, because it can be used only for brackish water (TDS < 1000 ppm). [ 38 ] Because of the complexity of modeling ion transport phenomena in the channels, performance can be affected by the non-ideal behavior of the exchange membranes. [ 45 ] The basic ED process can be modified into RED, in which the polarity of the electrodes changes periodically, reversing the flow through the membranes. This limits the deposition of colloidal substances, making the process self-cleaning, almost eliminating the need for chemical pre-treatment and making it economically attractive for brackish water. [ 46 ] The use of ED systems began in 1954, while RED was developed in the 1970s. These processes are used in over 1100 plants worldwide. The main advantage of PV in desalination plants is its suitability for small-scale plants. One example is in Japan, on Oshima Island ( Nagasaki ), which has operated since 1986 with 390 PV panels producing 10 m³/day with total dissolved solids (TDS) of about 400 ppm. [ 46 ]
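The latent-heat comparison for freeze desalination promised earlier reduces to a unit conversion: per kilogram, melting ice requires roughly one-seventh the energy of boiling the same mass of water, a theoretical advantage that real plants recover only in part. A minimal sketch using the molar values quoted above:

```python
MOLAR_MASS_WATER = 0.018015  # kg/mol

fusion_kj_mol = 6.01         # latent heat of fusion, kJ/mol (quoted above)
vaporization_kj_mol = 40.66  # latent heat of vaporization at 100 C, kJ/mol (quoted above)

fusion_kj_kg = fusion_kj_mol / MOLAR_MASS_WATER
vaporization_kj_kg = vaporization_kj_mol / MOLAR_MASS_WATER

print(f"fusion:       {fusion_kj_kg:6.0f} kJ/kg")        # ~334 kJ/kg
print(f"vaporization: {vaporization_kj_kg:6.0f} kJ/kg")  # ~2257 kJ/kg
print(f"ratio:        {vaporization_kj_kg / fusion_kj_kg:.1f}x")  # ~6.8x
```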
https://en.wikipedia.org/wiki/Solar_desalination
Solar dryers are devices that use solar energy to dry substances, especially food . Solar dryers use the heat from the Sun to reduce the moisture content of food substances. There are two general types of solar dryers: direct and indirect. [ 1 ] Direct solar dryers expose the substance to be dehydrated to direct sunlight . Historically, food and clothing were dried in the sun by using lines , or by laying the items on rocks or on top of tents. [ 2 ] In Mongolia cheese and meat are still traditionally dried using the top of the ger (tent) as a solar dryer. [ 3 ] In these systems the solar drying is assisted by the movement of the air (wind), which carries the more saturated air away from the items being dried. [ 2 ] More recently, complex drying racks [ 4 ] and solar tents [ 5 ] have been constructed as solar dryers. One modern type of solar dryer has a black absorbing surface which collects the light and converts it to heat; the substance to be dried is placed directly on this surface. These dryers may have enclosures, glass covers and/or vents in order to increase efficiency . [ 6 ] In indirect solar dryers, the black surface heats incoming air rather than directly heating the substance to be dried. This heated air is then passed over the substance to be dried and exits upwards, often through a chimney , taking with it the moisture released from the substance. [ 2 ] They range from the very simple, such as a tilted cold frame with black cloth, [ 7 ] to an insulated brick building with active ventilation and a back-up heating system. [ 8 ] One of the advantages of the indirect system is that it is easier to protect the food, or other substance, from contamination, whether wind-blown or by birds, insects, or animals. [ 2 ] [ 8 ] Also, direct sun can chemically alter some foods, making them less appetizing. [ 2 ] [ 8 ] Solar drying is mostly carried out between 50 and 70 °C. Solar dryers such as Vyom and many other models now use polycarbonate sheets or UV-blocking glass so that the sun's UV rays, which degrade dried food, do not reach the food. Solar dryers not only make drying faster; they also keep out dust, pathogens, bird droppings, and other external agents that affect the quality of the food. Food items such as fruits, vegetables and spices, once solar-dried, can be stored for longer periods of time. [ citation needed ]
https://en.wikipedia.org/wiki/Solar_dryer
The solar dynamo is a physical process that generates the Sun 's magnetic field . It is explained with a variant of the dynamo theory . A naturally occurring electric generator in the Sun's interior produces electric currents and a magnetic field, following the laws of Ampère , Faraday and Ohm , as well as the laws of fluid dynamics , which together form the laws of magnetohydrodynamics . The detailed mechanism of the solar dynamo is not known and is the subject of current research. [ 1 ] A dynamo converts kinetic energy into electric-magnetic energy. An electrically conducting fluid with shear or more complicated motion , such as turbulence, can temporarily amplify a magnetic field through Lenz's law : fluid motion relative to a magnetic field induces electric currents in the fluid that distort the initial field. If the fluid motion is sufficiently complicated, it can sustain its own magnetic field, with advective fluid amplification essentially balancing diffusive or ohmic decay. Such systems are called self-sustaining dynamos . The Sun is a self-sustaining dynamo that converts convective motion and differential rotation within the Sun to electric-magnetic energy. Currently, the geometry and width of the tachocline are hypothesized to play an important role in models of the solar dynamo by winding up the weaker poloidal field to create a much stronger toroidal field. However, recent radio observations of cooler stars and brown dwarfs , which do not have a radiative core and only have a convection zone , have demonstrated that they maintain large-scale, solar-strength magnetic fields and display solar-like activity despite the absence of tachoclines. This suggests that the convection zone alone may be responsible for the function of the solar dynamo. [ 2 ] The most prominent time variation of the solar magnetic field is related to the quasi-periodic 11-year solar cycle , characterized by an increasing and decreasing number and size of sunspots . [ 3 ] [ 4 ] Sunspots are visible as dark patches on the Sun's photosphere and correspond to concentrations of magnetic field. At a typical solar minimum , few or no sunspots are visible. Those that do appear are at high solar latitudes. As the solar cycle progresses towards its maximum , sunspots tend to form closer to the solar equator, following Spörer's law . The 11-year sunspot cycle is half of a 22-year Babcock –Leighton solar dynamo cycle, which corresponds to an oscillatory exchange of energy between toroidal and poloidal solar magnetic fields. At solar-cycle maximum , the external poloidal dipolar magnetic field is near its dynamo-cycle minimum strength, but an internal toroidal quadrupolar field, generated through differential rotation within the tachocline , is near its maximum strength. At this point in the dynamo cycle, buoyant upwelling within the convection zone forces emergence of the toroidal magnetic field through the photosphere, giving rise to pairs of sunspots, roughly aligned east–west with opposite magnetic polarities. The magnetic polarity of sunspot pairs alternates every solar cycle, a phenomenon known as the Hale cycle. [ 5 ] [ 6 ] During the solar cycle's declining phase, energy shifts from the internal toroidal magnetic field to the external poloidal field, and sunspots diminish in number. At solar minimum, the toroidal field is, correspondingly, at minimum strength, sunspots are relatively rare and the poloidal field is at maximum strength. 
During the next cycle, differential rotation converts magnetic energy back from the poloidal to the toroidal field, with a polarity that is opposite to the previous cycle. The process carries on continuously, and in an idealized, simplified scenario, each 11-year sunspot cycle corresponds to a change in the polarity of the Sun's large-scale magnetic field. [ 6 ] [ 7 ] [ 8 ] Long minima of solar activity can be associated with the interaction between double dynamo waves of the solar magnetic field caused by the beating effect of the wave interference. [ 9 ]
https://en.wikipedia.org/wiki/Solar_dynamo
Solar energy conversion describes technologies devoted to the transformation of solar energy to other (useful) forms of energy, including electricity, fuel, and heat. [ 1 ] It covers light-harvesting technologies including traditional semiconductor photovoltaic devices (PVs), emerging photovoltaics, [ 2 ] [ 3 ] [ 4 ] solar fuel generation via electrolysis , artificial photosynthesis , and related forms of photocatalysis directed at the generation of energy-rich molecules. [ 5 ] Fundamental electro-optical aspects of several emerging solar energy conversion technologies for generation of both electricity (photovoltaics) and solar fuels constitute an active area of current research. [ 6 ] The history of solar cells begins in 1876 with William Grylls Adams and one of his undergraduate students. A French scientist by the name of Edmond Becquerel had first discovered the photovoltaic effect in the summer of 1839. [ 7 ] He theorized that certain elements on the periodic table, such as silicon, reacted to exposure to sunlight in very unusual ways. Solar power is created when solar radiation is converted to heat or electricity. English electrical engineer Willoughby Smith , between 1873 and 1876, discovered that selenium produced a high amount of electricity when exposed to light. The use of selenium was highly inefficient, but it proved Becquerel's theory that light could be converted into electricity through the use of various semi-metals on the periodic table, later labelled photo-conductive materials. By 1953, Calvin Fuller, Gerald Pearson, and Daryl Chapin had discovered that using silicon to produce solar cells was extremely efficient, producing a net charge that far exceeded that of selenium. Today solar power has many uses, from heating, electrical production, thermal processes and water treatment to the storage of power, and is highly prevalent in the world of renewable energy. By the 1960s solar power was the standard for powering space-bound satellites. In the early 1970s, solar cell technology became cheaper and more available ($20/watt). Between 1970 and 1990, solar power became more commercially operated, powering railroad crossings, oil rigs, space stations, microwave towers, aircraft, and more. Now, houses and businesses all over the world use solar cells to power electrical devices with a wide variety of uses. Solar power is the dominant technology in the renewable energy field, primarily due to its high efficiency and cost-effectiveness. By the early 1990s, photovoltaic conversion had reached an unprecedented new height. Scientists used solar cells constructed of highly conductive photovoltaic materials such as gallium indium phosphide and gallium arsenide, increasing total efficiency by over 30%. By the end of the century, scientists had created a special type of solar cell that converted upwards of 36% of the sunlight it collected into usable energy. These developments built tremendous momentum not only for solar power, but for renewable energy technologies around the world. Photovoltaics (PV) use silicon solar cells to convert the energy of sunlight into electricity, operating under the photoelectric effect, which results in the emission of electrons. [ 8 ] Concentrated solar power (CSP) uses lenses or mirrors and tracking devices to focus a large area of sunlight into a small beam. Solar power is anticipated to be the world's largest source of electricity by 2050. Solar power plants, such as the Ivanpah Solar Power Facility in the Mojave Desert, produce over 392 MW of power.
Solar projects exceeding 1 GW (1 billion watts) are in development and are anticipated to be the future of solar power in the US. [ citation needed ] The sun delivers an immense amount of energy to the earth in the form of solar radiation. This energy can be used for water heating, space heating, space cooling and process heat generation. Many steam generation systems have adopted sunlight as a primary source for heating feed water, a development that has greatly increased the overall efficiency of boilers and many other types of waste heat recovery systems. Solar cookers use sunlight for cooking, drying and pasteurization . Solar distillation is used in water treatment processes to create potable drinking water, and has supported relief efforts in countries in need through the use of advancing technology. Solar energy conversion has the potential to be a very cost-effective technology. It is inexpensive compared with other non-conventional energy sources. The use of solar energy helps to increase employment and the development of the transportation and agriculture sectors. Solar installations are becoming cheaper and more readily available to countries where energy demand is high but supply is low due to economic circumstances. A 1 GW solar power plant can produce almost 10 times as much power as a fossil fuel combustion power plant that would cost twice as much to establish. Solar power plants have been projected to lead energy production by the year 2050. [ 9 ] Solar energy conversion has the potential for many positive social impacts, especially in rural areas that did not previously have grid-based energy access. In many off-grid areas, solar-electric conversion is the fastest growing form of energy procurement. This is especially true at latitudes within 45° north or south of the Equator, where solar irradiance is more constant throughout the year and where the bulk of the developing world's population lives. From a health perspective, solar home systems can replace kerosene lamps (frequently found in rural areas), which can cause fires and emit pollutants like carbon monoxide (CO), nitric oxides (NOx), and sulfur dioxide (SO2) that adversely affect air quality, can impair lung function, and can increase tuberculosis, asthma, and cancer risks. In such areas, solar energy access has been shown to save rural residents the time and money needed to purchase and transport kerosene, thereby increasing productivity and lengthening business hours. [ 10 ] In addition to energy access, these communities gain energy independence, meaning they are not reliant on a third-party electricity provider. The concept of energy independence is relatively new; for the vast majority of the 20th century, energy analyses were purely technical or financial and did not include social impact analysis. A 1980 study concluded that access to renewable energy would promote values conducive to larger societal benefit as opposed to personal promotion. [ 11 ] While some academics argue that historically the parties in control of energy sources are those that create social hierarchies, [ 12 ] this type of analysis became less "radical" and more mainstream after the introduction of technologies that enabled solar energy conversion. [ citation needed ] Solar energy conversion can impact not just individual customers but whole communities.
In a growing number of neighborhoods across America, the conventional model of independent, non-connected rooftop installations is being replaced by community-sized solar microgrids. The idea of " community solar " first became popular because of issues regarding energy storage. [ 13 ] Because, as of 2018, the wide-scale production of lithium-ion batteries and other storage technologies lagged the progress of rooftop PV installations, a main issue preventing a nationwide shift to rooftop solar energy generation is the lack of a reliable, single-home storage system that would provide contingencies for night-time energy use, cloud cover, curtailments and blackouts. Additionally, financing solar installations for single homes may be more difficult to secure given a smaller project scope and lack of access to funds. A viable alternative is to connect blocks of homes together in a community microgrid using more proven large storage installations, thus lowering barriers to solar adoption. In some cases, a microgrid "web" is made by connecting each independent rooftop PV house to a greater storage facility. Other designs, primarily where rooftop installations are not possible, feature a large combined solar array and storage facility located on an adjacent field. As an added social impact, this form of installation makes solar energy economically viable for multi-family homes and historically low-income neighborhoods. [ 14 ] A potential socioeconomic drawback associated with solar energy conversion is a disruption to the electric utility business model. In America, the economic viability of regional "monopoly" utilities is based on the large aggregation of local customers who balance out each other's variable load. Therefore, the widespread installation of rooftop solar systems that are not connected to the grid poses a threat to the stability of the utility market. This phenomenon is known as grid defection. [ 15 ] The pressure on electric utilities is exacerbated by an aging grid infrastructure that has yet to adapt to the new challenges posed by renewable energy (mainly regarding inertia, reverse power flow and relay protection schemes). However, some analysts make the case that with the steady increase in natural disasters (which destroy vital grid infrastructure), solar microgrid installation may be necessary to ensure emergency energy access. [ 16 ] This emphasis on contingency preparation has expanded the off-grid energy market dramatically in recent years, especially in areas prone to natural disasters. [ citation needed ] Installations can destroy and/or relocate ecological habitats by covering large tracts of land and promoting habitat fragmentation . Solar facilities constructed on Native American reservations have interrupted traditional practices and have also had a negative impact on the local ecosphere. [ 9 ] [ 17 ]
https://en.wikipedia.org/wiki/Solar_energy_conversion
A solar flare is a relatively intense, localized emission of electromagnetic radiation in the Sun 's atmosphere . Flares occur in active regions and are often, but not always, accompanied by coronal mass ejections , solar particle events , and other eruptive solar phenomena . The occurrence of solar flares varies with the 11-year solar cycle . Solar flares are thought to occur when stored magnetic energy in the Sun's atmosphere accelerates charged particles in the surrounding plasma . This results in the emission of electromagnetic radiation across the electromagnetic spectrum . The typical time profile of these emissions features three identifiable phases: a precursor phase , an impulsive phase when particle acceleration dominates, and a gradual phase in which hot plasma injected into the corona by the flare cools by a combination of radiation and conduction of energy back down to the lower atmosphere. The extreme ultraviolet and X-ray radiation from solar flares is absorbed by the daylight side of Earth's upper atmosphere, in particular the ionosphere , and does not reach the surface. This absorption can temporarily increase the ionization of the ionosphere, which may interfere with short-wave radio communication. The prediction of solar flares is an active area of research. Flares also occur on other stars , where the term stellar flare applies. Solar flares are eruptions of electromagnetic radiation originating in the Sun's atmosphere. [ 1 ] They affect all layers of the solar atmosphere ( photosphere , chromosphere , and corona ). [ 2 ] The plasma medium is heated to more than 10⁷ kelvin , while electrons , protons , and heavier ions are accelerated to near the speed of light . [ 3 ] [ 4 ] Flares emit electromagnetic radiation across the electromagnetic spectrum , from radio waves to gamma rays . [ 2 ] Flares occur in active regions , often around sunspots , where intense magnetic fields penetrate the photosphere to link the corona to the solar interior. Flares are powered by the sudden (timescales of minutes to tens of minutes) release of magnetic energy stored in the corona. The same energy releases may also produce coronal mass ejections (CMEs), although the relationship between CMEs and flares is not well understood. [ 5 ] Associated with solar flares are flare sprays. [ 6 ] They involve faster ejections of material than eruptive prominences , [ 7 ] and reach velocities of 20 to 2000 kilometers per second. [ 8 ] Flares occur when accelerated charged particles, mainly electrons, interact with the plasma medium. Evidence suggests that the phenomenon of magnetic reconnection leads to this extreme acceleration of charged particles. [ 9 ] On the Sun, magnetic reconnection may happen on solar arcades – a type of prominence consisting of a series of closely occurring loops following magnetic lines of force. [ 10 ] These lines of force quickly reconnect into a lower arcade of loops, leaving a helix of magnetic field unconnected to the rest of the arcade. The sudden release of energy in this reconnection is the origin of the particle acceleration. The unconnected magnetic helical field and the material that it contains may violently expand outwards, forming a coronal mass ejection. [ 11 ] This also explains why solar flares typically erupt from active regions on the Sun where magnetic fields are much stronger. Although there is a general agreement on the source of a flare's energy, the mechanisms involved are not well understood.
It is not clear how the magnetic energy is transformed into the kinetic energy of the particles, nor is it known how some particles can be accelerated to the GeV range (10⁹ electronvolts ) and beyond. There are also some inconsistencies regarding the total number of accelerated particles, which sometimes seems to be greater than the total number in the coronal loop. [ 12 ] After the eruption of a solar flare, post-eruption loops made of hot plasma begin to form across the neutral line separating regions of opposite magnetic polarity near the flare's source. These loops extend from the photosphere up into the corona and form along the neutral line at increasingly greater distances from the source as time progresses. [ 14 ] The existence of these hot loops is thought to be sustained by prolonged heating present after the eruption and during the flare's decay stage. [ 15 ] In sufficiently powerful flares, typically of C-class or higher, the loops may combine to form an elongated arch-like structure known as a post-eruption arcade . These structures may last anywhere from multiple hours to multiple days after the initial flare. [ 14 ] In some cases, dark sunward-traveling plasma voids known as supra-arcade downflows may form above these arcades. [ 16 ] The frequency of occurrence of solar flares varies with the 11-year solar cycle . It can typically range from several per day during solar maxima to less than one every week during solar minima . Additionally, more powerful flares are less frequent than weaker ones. For example, X10-class (severe) flares occur on average about eight times per cycle, whereas M1-class (minor) flares occur on average about 2000 times per cycle. [ 17 ] In 1984, Erich Rieger and coworkers discovered an approximately 154-day period in the occurrence of gamma-ray-emitting solar flares, present at least since solar cycle 19 . [ 18 ] The period has since been confirmed in most heliophysics data and the interplanetary magnetic field and is commonly known as the Rieger period . The period's resonance harmonics have also been reported from most data types in the heliosphere . The frequency distributions of various flare phenomena can be characterized by power-law distributions . For example, the peak fluxes of radio, extreme ultraviolet, and hard and soft X-ray emissions; total energies; and flare durations (see § Duration ) have been found to follow power-law distributions. [ 19 ] [ 20 ] [ 21 ] [ 22 ] : 23–28 The modern classification system for solar flares uses the letters A, B, C, M, or X, according to the peak flux in watts per square metre (W/m²) of soft X-rays with wavelengths 0.1 to 0.8 nanometres (1 to 8 ångströms ), as measured by GOES satellites in geosynchronous orbit . The strength of an event within a class is noted by a numerical suffix ranging from 1 up to, but excluding, 10, which also serves as the multiplicative factor for that event within the class. Hence, an X2 flare is twice the strength of an X1 flare, and an X3 flare is three times as powerful as an X1. M-class flares are a tenth the size of X-class flares with the same numeric suffix; [ 23 ] an X2 is thus four times more powerful than an M5 flare. [ 24 ] X-class flares with a peak flux that exceeds 10⁻³ W/m² may be noted with a numerical suffix equal to or greater than 10. This system was originally devised in 1970 and included only the letters C, M, and X, chosen to avoid confusion with other optical classification systems.
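The letter-plus-suffix scheme just described maps directly from peak soft X-ray flux to a class label; a minimal sketch follows (it also covers the A and B classes introduced below, and the function name, rounding, and sub-A handling are illustrative choices of ours, not part of any standard tool):

```python
def goes_flare_class(peak_flux: float) -> str:
    """Classify a flare from its GOES peak soft X-ray flux (W/m^2).

    Letter classes are decades of flux (A >= 1e-8, B >= 1e-7, C >= 1e-6,
    M >= 1e-5, X >= 1e-4 W/m^2); the numeric suffix is the flux divided
    by the lower bound of the class, so an X2 (2e-4 W/m^2) is twice an
    X1 and four times an M5 (5e-5 W/m^2). X suffixes may exceed 10.
    """
    for letter, lower in [("X", 1e-4), ("M", 1e-5), ("C", 1e-6), ("B", 1e-7), ("A", 1e-8)]:
        if peak_flux >= lower:
            return f"{letter}{peak_flux / lower:.1f}"
    return f"A{peak_flux / 1e-8:.2f}"  # weaker than A1: suffix below 1

assert goes_flare_class(2e-4) == "X2.0"  # twice an X1
assert goes_flare_class(5e-5) == "M5.0"  # one quarter the flux of an X2
```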
The A and B classes were added in the 1990s as instruments became more sensitive to weaker flares. Around the same time, the backronyms moderate for M-class flares and extreme for X-class flares began to be used. [ 25 ] An earlier classification system, sometimes referred to as flare importance , was based on H-alpha spectral observations and uses both the intensity and the emitting surface. The classification in intensity is qualitative, describing flares as faint (f), normal (n), or brilliant (b). The emitting surface is measured in millionths of the hemisphere. (The total hemisphere area is A_H = 15.5 × 10¹² km².) A flare is then classified with S or a number representing its size together with a letter representing its peak intensity; e.g., Sn is a normal subflare. [ 26 ] A common measure of flare duration is the full width at half maximum (FWHM) time of flux in the soft X-ray bands 0.05 to 0.4 and 0.1 to 0.8 nm measured by GOES. The FWHM time spans from when a flare's flux first reaches halfway between its maximum flux and the background flux to when it again reaches this value as the flare decays. Using this measure, the duration of a flare ranges from approximately tens of seconds to several hours, with a median duration of approximately 6 and 11 minutes in the 0.05 to 0.4 and 0.1 to 0.8 nm bands, respectively. [ 27 ] [ 28 ] Flares can also be classified based on their duration as either impulsive or long duration events ( LDE ). The time threshold separating the two is not well defined. The U.S. Space Weather Prediction Center (SWPC) regards events requiring 30 minutes or more to decay to half maximum as LDEs, whereas Belgium's Solar-Terrestrial Centre of Excellence regards events with duration greater than 60 minutes as LDEs. [ 29 ] [ 30 ] The electromagnetic radiation emitted during a solar flare propagates away from the Sun at the speed of light with intensity inversely proportional to the square of the distance from its source region . The excess ionizing radiation , namely X-ray and extreme ultraviolet (XUV) radiation, is known to affect planetary atmospheres and is of relevance to human space exploration and the search for extraterrestrial life. Solar flares also affect other objects in the Solar System. Research into these effects has primarily focused on the atmosphere of Mars and, to a lesser extent, that of Venus . [ 31 ] The impacts on other planets in the Solar System are little studied in comparison. As of 2024, research on their effects on Mercury has been limited to modeling of the response of ions in the planet's magnetosphere , [ 32 ] and their impact on Jupiter and Saturn has only been studied in the context of X-ray radiation backscattering off of the planets' upper atmospheres. [ 33 ] [ 34 ] Enhanced XUV irradiance during solar flares can result in increased ionization , dissociation , and heating in the ionospheres of Earth and Earth-like planets. On Earth, these changes to the upper atmosphere, collectively referred to as sudden ionospheric disturbances , can interfere with short-wave radio communication and global navigation satellite systems (GNSS) such as GPS , [ 35 ] and subsequent expansion of the upper atmosphere can increase drag on satellites in low Earth orbit , leading to orbital decay over time. [ 36 ] [ 37 ] [ additional citation(s) needed ] Flare-associated XUV photons interact with and ionize neutral constituents of planetary atmospheres via the process of photoionization .
The electrons that are freed in this process, referred to as photoelectrons to distinguish them from the ambient ionospheric electrons, are left with kinetic energies equal to the photon energy in excess of the ionization threshold . In the lower ionosphere, where flare impacts are greatest and transport phenomena are less important, the newly liberated photoelectrons lose energy primarily via thermalization with the ambient electrons and neutral species and via secondary ionization due to collisions with the latter, or so-called photoelectron impact ionization . In the process of thermalization, photoelectrons transfer energy to neutral species, resulting in heating and expansion of the neutral atmosphere. [ 38 ] The greatest increases in ionization occur in the lower ionosphere, where the wavelengths with the greatest relative increase in irradiance (the highly penetrative X-ray wavelengths) are absorbed, corresponding to Earth's E and D layers and Mars's M1 layer. [ 31 ] [ 35 ] [ 39 ] [ 40 ] [ 41 ] The temporary increase in ionization of the daylight side of Earth's atmosphere, in particular the D layer of the ionosphere , can interfere with short-wave radio communications that rely on its level of ionization for skywave propagation. Skywave, or skip, refers to the propagation of radio waves reflected or refracted off of the ionized ionosphere. When ionization is higher than normal, radio waves are degraded or completely absorbed, losing energy through more frequent collisions with free electrons. [ 1 ] [ 35 ] The level of ionization of the atmosphere correlates with the strength of the associated solar flare in soft X-ray radiation, and the Space Weather Prediction Center , part of the United States National Oceanic and Atmospheric Administration , classifies radio blackouts by the peak soft X-ray intensity of the associated flare. During non-flaring or solar-quiet conditions, electric currents flow through the ionosphere's dayside E layer, inducing small-amplitude diurnal variations in the geomagnetic field. These ionospheric currents can be strengthened during large solar flares due to increases in electrical conductivity associated with enhanced ionization of the E and D layers. The subsequent increase in the induced geomagnetic field variation is referred to as a solar flare effect ( sfe ) or, historically, as a magnetic crochet . The latter term derives from the French word crochet , meaning hook, reflecting the hook-like disturbances in magnetic field strength observed by ground-based magnetometers . These disturbances are on the order of a few nanoteslas and last for a few minutes, which is relatively minor compared to those induced during geomagnetic storms. [ 42 ] [ 43 ] For astronauts in low Earth orbit , the expected radiation dose from the electromagnetic radiation emitted during a solar flare is about 0.05 gray , which is not immediately lethal on its own. Of much more concern for astronauts is the particle radiation associated with solar particle events. [ 44 ] [ better source needed ] The impacts of solar flare radiation on Mars are relevant to exploration and the search for life on the planet . Models of its atmosphere indicate that the most energetic solar flares previously recorded may have delivered acute doses of radiation approaching harmful or lethal levels for mammals and other higher organisms on Mars's surface.
Furthermore, flares energetic enough to provide lethal doses, while not yet observed on the Sun, are thought to occur and have been observed on other Sun-like stars . [ 45 ] [ 46 ] [ 47 ] Flares produce radiation across the electromagnetic spectrum, although with varying intensity. They are not very intense in visible light, but they can be very bright at particular spectral lines . They normally produce bremsstrahlung in X-rays and synchrotron radiation in radio. [ 48 ] Solar flares were first observed independently by Richard Carrington and Richard Hodgson on 1 September 1859 by projecting the image of the solar disk produced by an optical telescope through a broad-band filter. [ 50 ] [ 51 ] It was an extraordinarily intense white light flare , a flare emitting a high amount of light in the visual spectrum . [ 50 ] Since flares produce copious amounts of radiation at H-alpha , [ 52 ] adding a narrow (≈1 Å) passband filter centered at this wavelength to the optical telescope allows faint flares to be observed with small telescopes. For years Hα was the main, if not the only, source of information about solar flares. Other passband filters are also used. [ citation needed ] During World War II , on 25 and 26 February 1942, British radar operators observed radiation that Stanley Hey interpreted as solar emission, though their discovery was not made public until the end of the conflict. The same year, Southworth also observed the Sun in radio, but as with Hey, his observations only became known after 1945. In 1943, Grote Reber was the first to report radio astronomical observations of the Sun at 160 MHz. The fast development of radio astronomy revealed new features of solar activity, such as storms and bursts related to flares. Today, ground-based radio telescopes observe the Sun from c. 15 MHz up to 400 GHz. Because the Earth's atmosphere absorbs much of the electromagnetic radiation emitted by the Sun with wavelengths shorter than 300 nm, space-based telescopes allowed for the observation of solar flares in previously unobserved high-energy spectral lines. Since the 1970s, the GOES series of satellites have continuously observed the Sun in soft X-rays, and their observations have become the standard measure of flares, diminishing the importance of the H-alpha classification. Additionally, space-based telescopes allow for the observation of extremely long wavelengths (as long as a few kilometres) which cannot propagate through the ionosphere. The most powerful flare ever observed is thought to be the flare associated with the 1859 Carrington Event. [ 54 ] While no soft X-ray measurements were made at the time, the magnetic crochet associated with the flare was recorded by ground-based magnetometers, allowing the flare's strength to be estimated after the event. Using these magnetometer readings, its soft X-ray class has been estimated to be greater than X10 [ 55 ] and around X45 (±5). [ 56 ] [ 57 ] In modern times, the largest solar flare measured with instruments occurred on 4 November 2003 . This event saturated the GOES detectors, and because of this, its classification is only approximate. Initially, extrapolating the GOES curve, it was estimated to be X28. [ 58 ] Later analysis of the ionospheric effects suggested increasing this estimate to X45. [ 59 ] [ 60 ] This event produced the first clear evidence of a new spectral component above 100 GHz. [ 61 ]
Current methods of flare prediction are problematic, and there is no certain indication that an active region on the Sun will produce a flare. However, many properties of active regions and their sunspots correlate with flaring. For example, magnetically complex regions (based on line-of-sight magnetic field) referred to as delta spots frequently produce the largest flares. A simple scheme of sunspot classification based on the McIntosh system for sunspot groups, or related to a region's fractal complexity, [ 62 ] is commonly used as a starting point for flare prediction. [ 63 ] Predictions are usually stated in terms of probabilities for the occurrence of flares above M- or X-class within 24 or 48 hours, and the U.S. National Oceanic and Atmospheric Administration (NOAA) issues forecasts of this kind. [ 64 ] MAG4 was developed at the University of Alabama in Huntsville, with support from the Space Radiation Analysis Group at Johnson Space Center (NASA/SRAG), for forecasting M- and X-class flares, CMEs, fast CMEs, and solar energetic particle events. [ 65 ] A physics-based method that can predict imminent large solar flares was proposed by the Institute for Space-Earth Environmental Research (ISEE) at Nagoya University. [ 66 ]
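Returning to the duration measure defined earlier, here is a minimal sketch of computing a flare's FWHM duration from a sampled soft X-ray light curve; the uniform sampling, known constant background, toy flare shape, and absence of interpolation are simplifying assumptions of ours, not an operational algorithm.

```python
import numpy as np

def flare_fwhm_minutes(time_min: np.ndarray, flux: np.ndarray, background: float) -> float:
    """Full width at half maximum of a flare light curve.

    The half-maximum level is halfway between the background flux and the
    peak flux; the FWHM spans the first and last samples at or above that
    level (a coarse estimate that skips interpolation between samples).
    """
    half_level = background + 0.5 * (flux.max() - background)
    above = np.where(flux >= half_level)[0]
    return time_min[above[-1]] - time_min[above[0]]

# Toy light curve: fast Gaussian rise, slower exponential decay over background.
t = np.linspace(0, 60, 601)                       # minutes
rise = np.exp(-((t - 10) / 2.0) ** 2) * (t <= 10)
decay = np.exp(-(t - 10) / 8.0) * (t > 10)
flux = 1e-7 + 5e-6 * (rise + decay)               # W/m^2 over a B-level background
print(f"FWHM ≈ {flare_fwhm_minutes(t, flux, 1e-7):.1f} min")  # ~7 min, near the median
```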
https://en.wikipedia.org/wiki/Solar_flare_classification
The solar flux unit (sfu) is a convenient measure of spectral flux density often used in solar radio observations, such as the F10.7 solar activity index: [ 1 ] 1 sfu = 10⁻²² W⋅m⁻²⋅Hz⁻¹. See jansky for further information about related units; one Jy = 10⁻⁴ sfu = 10⁻²⁶ W⋅m⁻²⋅Hz⁻¹ = 10⁻²³ erg⋅s⁻¹⋅cm⁻²⋅Hz⁻¹.
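The fixed ratios above lend themselves to a one-line conversion helper; this sketch (the constant and function names are our own) simply encodes 1 sfu = 10⁻²² W⋅m⁻²⋅Hz⁻¹ and 1 Jy = 10⁻⁴ sfu:

```python
SFU_IN_W_M2_HZ = 1e-22   # 1 solar flux unit in SI spectral flux density
JY_IN_W_M2_HZ = 1e-26    # 1 jansky in SI spectral flux density

def sfu_to_jansky(sfu: float) -> float:
    """Convert solar flux units to janskys (1 sfu = 10^4 Jy)."""
    return sfu * SFU_IN_W_M2_HZ / JY_IN_W_M2_HZ

# The F10.7 index is quoted in sfu; a typical quiet-Sun value is ~70 sfu.
print(sfu_to_jansky(70.0))  # 700000.0 Jy
```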
https://en.wikipedia.org/wiki/Solar_flux_unit
A solar fuel is a synthetic fuel produced using solar energy , through photochemical (i.e., photon activation of certain chemical reactions ), photobiological (i.e., artificial photosynthesis ), electrochemical (i.e., using solar electricity to drive an endergonic reaction such as hydroelectrolysis ), [ 1 ] [ 2 ] [ 3 ] [ 4 ] or thermochemical methods (i.e., through the use of solar heat supplied by concentrated solar thermal energy to drive a chemical reaction). [ 5 ] [ 6 ] Sunlight is the primary energy source , with its radiant energy being transduced to chemical energy stored in bonds , typically by reducing protons to hydrogen or carbon dioxide to organic compounds . A solar fuel can be produced and stored for later use, when sunlight is not available, making it an alternative to fossil fuels and batteries. Examples of such fuels are hydrogen, ammonia, and hydrazine. Diverse photocatalysts are being developed to carry out these reactions in a sustainable, environmentally friendly way. [ 7 ] The world's dependence on the declining reserves of fossil fuels poses not only environmental problems but also geopolitical ones. [ 8 ] Solar fuels, in particular hydrogen, are viewed as an alternative source of energy for replacing fossil fuels, especially where storage is essential. Electricity can be produced directly from sunlight through photovoltaics , but this form of energy is rather inefficient to store compared to hydrogen. [ 7 ] A solar fuel can be produced when and where sunlight is available, and stored and transported for later use, including in situations where direct sunlight is not available. The most widely researched solar fuels are hydrogen, whose only product of use is water, and the products of photochemical carbon dioxide reduction , which are more conventional fuels such as methane and propane. Current research also involves ammonia and related substances (e.g., hydrazine); these can address the challenges of hydrogen by offering a more compact and safer way of storing it, and direct ammonia fuel cells are also being researched. [ 9 ] Solar fuels can be produced via direct or indirect processes. Direct processes harness the energy in sunlight to produce a fuel without intermediary energy conversions; for example, solar thermochemistry uses the heat of the sun directly to heat a receiver adjacent to the solar reactor where a thermochemical process is performed. In contrast, indirect processes have solar energy converted to another form of energy first (such as biomass or electricity) that can then be used to produce a fuel. Indirect processes have been easier to implement but have the disadvantage of being less efficient than direct ones, so newer research focuses more on direct conversion, as well as on fuels that can be used immediately to balance the power grid. [ 7 ] In a solar photoelectrochemical process, hydrogen can be produced by electrolysis . To use sunlight in this process, a photoelectrochemical cell can be used, in which one photosensitized electrode converts light into an electric current that is then used for water splitting . One such type of cell is the dye-sensitized solar cell . [ 10 ] This is an indirect process, since it produces electricity that is then used to form hydrogen.
Another indirect process using sunlight is the conversion of biomass to biofuel using photosynthetic organisms ; however, most of the energy harvested by photosynthesis is used in life-sustaining processes and therefore lost for energy use. [ 7 ] A semiconductor can also be used as the photosensitizer. When a semiconductor is hit by a photon with an energy higher than the bandgap , an electron is excited to the conduction band and a hole is created in the valence band. Due to band bending, the electrons and holes move to the surface, where these charges are used to split the water molecules. Many different materials have been tested, but none so far has met the requirements for practical application. [ 11 ] In a photochemical process, the sunlight is directly used to split water into hydrogen and oxygen. Because the absorption spectrum of water does not overlap with the emission spectrum of the sun, direct dissociation of water cannot take place; a photosensitizer needs to be used. Several such catalysts have been developed as proofs of concept but not yet scaled up for commercial use; nevertheless, their relative simplicity gives the advantage of potential lower cost and increased energy conversion efficiency . [ 7 ] [ 12 ] One such proof of concept is the "artificial leaf" developed by Nocera and coworkers: a combination of metal oxide -based catalysts and a semiconductor solar cell produces hydrogen upon illumination, with oxygen as the only byproduct. [ 13 ] In a photobiological process, the hydrogen is produced using photosynthetic microorganisms (green microalgae and cyanobacteria ) in photobioreactors . Some of these organisms produce hydrogen upon switching culture conditions; for example, Chlamydomonas reinhardtii produces hydrogen anaerobically under sulfur deprivation, that is, when cells are moved from one growth medium to another that does not contain sulfur and are grown without access to atmospheric oxygen. [ 14 ] Another approach was to abolish the activity of the hydrogen-oxidizing (uptake) hydrogenase enzyme in the diazotrophic cyanobacterium Nostoc punctiforme , so that it would not consume the hydrogen naturally produced by the nitrogenase enzyme under nitrogen-fixing conditions. [ 15 ] This N. punctiforme mutant could then produce hydrogen when illuminated with visible light . Another mutant cyanobacterium, Synechocystis , uses genes of the bacterium Rubrivivax gelatinosus CBS to produce hydrogen; the CBS bacterium produces hydrogen through the oxidation of carbon monoxide. Researchers are working to implement these genes into Synechocystis . If these genes can be applied, some effort will still be needed to overcome oxygen inhibition in the production of hydrogen, but it is estimated that this process could capture as much as 10% of the solar energy, which makes photobiological hydrogen production a promising branch of research. Still, the problems in overcoming the short-term nature of algal hydrogen production are many, and research is in its early stages; nevertheless, this work may provide a route to industrializing these renewable and environmentally friendly processes. [ 16 ] In the solar thermochemical [ 17 ] process, water is split into hydrogen and oxygen using direct solar heat, rather than electricity, inside a high-temperature solar reactor [ 18 ] which receives highly concentrated solar flux from a field of heliostats that focus sunlight into the reactor.
The two most promising routes are the two-step cerium oxide cycle and the hybrid copper–chlorine cycle . In the cerium oxide cycle, the first step is the thermal reduction of CeO₂ to Ce₂O₃ at more than 1400 °C; hydrogen is then produced through hydrolysis at around 800 °C. [ 19 ] [ 20 ] The copper–chlorine cycle requires a lower temperature (~500 °C), which makes the process more efficient, but the cycle contains more steps and is also more complex than the cerium oxide cycle. [ 19 ] Because hydrogen manufacture requires continuous performance, the solar thermochemical process includes thermal energy storage . [ 21 ] Another thermochemical method uses solar reforming of methane, a process that replicates the traditional fossil-fuel reforming process but substitutes solar heat. [ 22 ] In a November 2021 publication in Nature , Aldo Steinfeld of the Swiss technological university ETH Zurich reported an artificial photosynthesis process in which carbon dioxide and water vapour absorbed from the air are passed over a cerium oxide catalyst heated by concentrated solar power to produce hydrogen and carbon monoxide, transformed through the Fischer–Tropsch process into complex hydrocarbons forming methanol , a liquid fuel . Scaled up, this could produce the 414 billion litres (414 million m³) of aviation fuel used in 2019 with a collection area of 45,000 km² (17,000 sq mi): 0.5% of the Sahara Desert . [ 23 ] [ 24 ] [ 25 ] One author, Philipp Furler, leads the specialist firm Synhelion , which in 2022 was building a solar fuel production facility at Jülich , west of Cologne , before another in Spain. [ 26 ] Swiss Airlines , part of the Lufthansa Group , was expected to become its first customer in 2023. [ 26 ] Carbon dioxide (CO₂) can be reduced to carbon monoxide (CO) and other more reduced compounds, such as methane , using the appropriate photocatalysts. One early example was the use of tris(bipyridine)ruthenium(II) chloride (Ru(bipy)₃Cl₂) and cobalt chloride (CoCl₂) for CO₂ reduction to CO. [ 27 ] In recent years many new catalysts have been found to reduce CO₂ into CO, after which the CO can be used to make hydrocarbons using, for example, the Fischer–Tropsch process . The most promising system for the solar-powered reduction of CO₂ is the combination of a photovoltaic cell with an electrochemical cell (PV+EC). [ 28 ] [ 29 ] Using solar-driven processes, CO₂ can also be converted to other products, such as formate and alcohols. [ 30 ] [ 31 ] For the photovoltaic cell, the highly efficient GaInP/GaAs/Ge solar cell has been used, but many other series-connected and/or tandem (multi-junction) PV architectures can be employed to deliver the required voltage and current density to drive the CO₂ reduction reactions and provide reasonable product outflow. [ 32 ] The solar cells/panels can be placed in direct contact with the electrolyzer(s), which can bring advantages in terms of system compactness and thermal management of both technologies, [ 32 ] or separately, for instance by placing the PV outdoors exposed to sunlight and the EC systems protected indoors. [ 33 ] The best-performing electrochemical cell at present is the gas diffusion electrode (GDE) flow cell, in which the CO₂ reacts on Ag nanoparticles to produce CO. Solar-to-CO efficiencies of up to 19% have been reached, with minimal loss in activity after 20 hours. [ 29 ] CO can also be produced without a catalyst using microwave-plasma-driven dissociation of CO₂.
This process is relatively efficient, with an electricity-to-CO efficiency of up to 50%, but with a low conversion of around 10%. These low conversions are not ideal, because CO and CO₂ are hard to separate efficiently at large scale. A major advantage of this process is that it can be turned off and on quite rapidly and does not use scarce materials. The (weakly ionised) plasma is produced using microwaves , which accelerate the free electrons in the plasma. These electrons interact with the CO₂ and excite its vibrational modes, which leads to dissociation of the CO₂ into CO. The excitation and dissociation happen fast enough that only a small fraction of the energy is converted to heat, which keeps the efficiency high. The dissociation also produces an oxygen radical , which reacts with CO₂ to form CO and O₂. [ 34 ] Here too, the use of microorganisms has been explored. Using genetic engineering and synthetic biology techniques, parts of or whole biofuel-producing metabolic pathways can be introduced into photosynthetic organisms. One example is the production of 1-butanol in Synechococcus elongatus using enzymes from Clostridium acetobutylicum , Escherichia coli and Treponema denticola . [ 35 ] One example of a large-scale research facility exploring this type of biofuel production is the AlgaePARC at Wageningen University and Research Centre , Netherlands. Hydrogen-rich substances such as ammonia and hydrazine are well suited for storing hydrogen, owing to their energy density: for ammonia, at least 1.3 times that of liquid hydrogen. [ 36 ] Hydrazine is almost twice as energy-dense as liquid hydrogen; a downside, however, is that direct hydrazine fuel cells require dilution of the fuel, which lowers the overall power the fuel cell can deliver. Besides their high volumetric density, ammonia and hydrous hydrazine have low flammability, which makes them superior to hydrogen by lowering storage and transportation costs. [ 37 ] Direct ammonia fuel cells are researched for this exact reason, and recent studies have presented a new integrated solar-based ammonia synthesis and fuel cell system, in which excess solar power is used to synthesize ammonia with an ammonia electrolytic cell (AEC) in combination with a proton exchange membrane (PEM) fuel cell. When a dip in solar power occurs, a direct ammonia fuel cell kicks in, providing the missing energy. This recent research (2020) is an example of efficient use of energy achieved through the temporary storage and use of ammonia as a fuel. Energy stored in ammonia does not degrade over time, as it does in batteries and flywheels , providing long-term energy storage. This compact form of energy has the additional advantage that excess energy can easily be transported to other locations, although this must be done under strict safety measures owing to the toxicity of ammonia to humans. Further research is needed to complement this system with wind energy and hydropower plants to create a hybrid system that limits interruptions in power supply, and the economic performance of the proposed system also needs to be investigated. Some scientists envision a new ammonia economy broadly similar to the oil industry, but with the enormous advantage of inexhaustible, carbon-free power. [ 38 ] This so-called green ammonia is considered a potential fuel for very large ships.
South Korean shipbuilder DSME plans to commercialize such ships by 2025. [ 39 ] Another way of storing energy is with hydrazine . This molecule is related to ammonia and has the potential to be equally useful. It can be created from ammonia and hydrogen peroxide or via chlorine-based oxidations , [ 40 ] making it an even denser energy-storage fuel. The downsides of hydrazine are that it is very toxic and that it reacts quite violently with oxygen, which makes it an ideal fuel for oxygen-poor environments such as space. Recently launched Iridium NEXT satellites use hydrazine as their source of energy. [ 41 ] However toxic, this fuel has great potential, because safety measures can be increased sufficiently to transport hydrazine safely and convert it back into hydrogen and ammonia. Researchers have discovered a way to decompose hydrazine with a photocatalysis system that works over the entire visible-light region. This means that sunlight can be used not only to produce hydrazine but also to produce hydrogen from this fuel. The decomposition of hydrazine is done with a p–n bilayer consisting of fullerene (C₆₀), also known as "buckyballs", an n-type semiconductor, and zinc phthalocyanine (ZnPc), a p-type semiconductor, together forming an organic photocatalysis system. This system uses visible-light irradiation to excite electrons into the n-type semiconductor, creating an electric current. The holes created in the p-type semiconductor are driven toward the Nafion part of the device, which oxidizes hydrazine to nitrogen gas and dissolved hydrogen ions; this occurs in the first compartment of the fuel cell. The hydrogen ions travel through a salt bridge to another compartment, where they are reduced to hydrogen gas by the light-generated electrons from the first compartment, thus creating hydrogen that can be used in fuel cells. [ 42 ] This promising study shows that hydrazine is a solar fuel with great potential in the energy transition . A different approach to hydrazine is the direct fuel cell; concepts for these cells have been developed since the 1960s. [ 43 ] [ 44 ] Recent studies provide much better direct hydrazine fuel cells, for example through the use of hydrogen peroxide as an oxidant. Making the anode basic and the cathode acidic greatly increased the power density, with peaks of around 1 W/cm² at a temperature of 80 °C. As mentioned earlier, the main weakness of direct hydrazine fuel cells is the high toxicity of hydrazine and its derivatives. [ 37 ] However, hydrous hydrazine, a water-like liquid, retains the high hydrogen density and can be stored and transported safely using the existing fuel infrastructure. [ 45 ] Researchers also aim for self-powered fuel cells involving hydrazine. These fuel cells use hydrazine in two ways: as the fuel for a direct fuel cell and as the splitting target. This means that only hydrazine is needed to produce hydrogen with this fuel cell; no external power is required. This is done with iron-doped cobalt sulfide nanosheets; doping with iron decreases the free-energy changes of hydrogen adsorption and hydrazine dehydrogenation . The method has 20-hour stability and 98% Faradaic efficiency , comparable with the best reported claims for self-powered hydrogen-generating cells. [ 46 ]
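The efficiency figures quoted in this article (for example, the ~19% solar-to-CO efficiency of the PV+EC gas diffusion electrode cell, and the 98% Faradaic efficiency just mentioned) combine current density, cell voltage, and Faradaic efficiency. A minimal sketch of that bookkeeping follows; the formula is the standard solar-to-fuel ratio of chemical power out to solar power in, and the specific numbers below are illustrative assumptions, not measurements from the cited work:

```python
def solar_to_fuel_efficiency(current_density_ma_cm2: float,
                             faradaic_efficiency: float,
                             thermodynamic_potential_v: float,
                             solar_power_mw_cm2: float = 100.0) -> float:
    """Solar-to-fuel efficiency: chemical power stored / solar power in.

    Chemical power per area = j * FE * E_thermo, with j in mA/cm^2 and
    E_thermo the equilibrium potential of the fuel-forming reaction
    (1.23 V for water splitting, ~1.34 V for CO2 -> CO).
    """
    chemical_power = current_density_ma_cm2 * faradaic_efficiency * thermodynamic_potential_v
    return chemical_power / solar_power_mw_cm2

# Illustrative: a cell running at 15 mA/cm^2 with 95% Faradaic efficiency
# toward CO under standard 1-sun illumination (100 mW/cm^2).
print(f"{solar_to_fuel_efficiency(15.0, 0.95, 1.34):.1%}")  # ~19.1%
```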
https://en.wikipedia.org/wiki/Solar_fuel
Solar gain (also known as solar heat gain or passive solar gain ) is the increase in thermal energy of a space, object or structure as it absorbs incident solar radiation . The amount of solar gain a space experiences is a function of the total incident solar irradiance and of the ability of any intervening material to transmit or resist the radiation. Objects struck by sunlight absorb its visible and short-wave infrared components, increase in temperature, and then re-radiate that heat at longer infrared wavelengths . Though transparent building materials such as glass allow visible light to pass through almost unimpeded, once that light is converted to long-wave infrared radiation by materials indoors, it is unable to escape back through the window since glass is opaque to those longer wavelengths. The trapped heat thus causes solar gain via a phenomenon known as the greenhouse effect . In buildings, excessive solar gain can lead to overheating within a space, but it can also be used as a passive heating strategy when heat is desired. [ 1 ] Solar gain is most frequently addressed in the design and selection of windows and doors. Because of this, the most common metrics for quantifying solar gain are used as a standard way of reporting the thermal properties of window assemblies. In the United States, the American Society of Heating, Refrigerating and Air-Conditioning Engineers ( ASHRAE ) [ 2 ] and the National Fenestration Rating Council (NFRC) [ 3 ] maintain standards for the calculation and measurement of these values. The shading coefficient (SC) is a measure of the radiative thermal performance of a glass unit (panel or window) in a building . It is defined as the ratio of solar radiation at a given wavelength and angle of incidence passing through a glass unit to the radiation that would pass through a reference window of frameless 3-millimetre (0.12 in) clear float glass. [ 3 ] Since the quantities compared are functions of both wavelength and angle of incidence, the shading coefficient for a window assembly is typically reported for a single wavelength typical of solar radiation entering normal to the plane of glass. This quantity includes both energy that is transmitted directly through the glass and energy that is absorbed by the glass and frame and re-radiated into the space, and is given by the following equation: [ 4 ] {\displaystyle F(\lambda ,\theta )=T(\lambda ,\theta )+N\cdot A(\lambda ,\theta )} Here, λ is the wavelength of radiation and θ is the angle of incidence; T is the transmissivity of the glass, A is its absorptivity, and N is the fraction of absorbed energy that is re-emitted into the space. The overall shading coefficient is thus given by the ratio: {\displaystyle S.C.=F(\lambda ,\theta )_{1}/F(\lambda ,\theta )_{o}} The shading coefficient depends on the radiation properties of the window assembly: the transmissivity T , the absorptivity A (equal, at any given wavelength, to the emissivity), and the reflectivity, dimensionless quantities that together sum to 1. [ 4 ] Factors such as color , tint, and reflective coatings affect these properties, which is what prompted the development of the shading coefficient as a correction factor to account for them.
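A brief numerical sketch of the relations just given may help; the glass property values below are invented for illustration and do not come from any ASHRAE or NFRC table:

```python
def heat_gain_factor(transmissivity: float, absorptivity: float,
                     reemitted_fraction: float) -> float:
    """F = T + N*A: directly transmitted plus absorbed-and-re-emitted energy."""
    return transmissivity + reemitted_fraction * absorptivity

# Reference: frameless 3 mm clear float glass (illustrative property values).
f_reference = heat_gain_factor(0.86, 0.08, 0.30)
# Candidate glazing: a tinted unit that transmits less and absorbs more.
f_tinted = heat_gain_factor(0.60, 0.30, 0.30)

shading_coefficient = f_tinted / f_reference
print(f"SC = {shading_coefficient:.2f}")   # < 1: more shading than the reference
```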
ASHRAE's table of solar heat gain factors [ 2 ] provides the expected solar heat gain for 1⁄8 in (3.2 mm) clear float glass at different latitudes, orientations, and times, which can be multiplied by the shading coefficient to correct for differences in radiation properties. The value of the shading coefficient ranges from 0 to 1: the lower the rating, the less solar heat is transmitted through the glass, and the greater its shading ability. In addition to glass properties, shading devices integrated into the window assembly are also included in the SC calculation. Such devices can reduce the shading coefficient by blocking portions of the glazing with opaque or translucent material, thus reducing the overall transmissivity. [ 5 ] Window design methods have moved away from the shading coefficient and towards the solar heat gain coefficient (SHGC) , which is defined as the fraction of incident solar radiation that actually enters a building through the entire window assembly as heat gain (not just the glass portion). The standard method for calculating the SHGC also uses a more realistic wavelength-by-wavelength method, rather than providing a coefficient for a single wavelength as the shading coefficient does. [ 4 ] Though the shading coefficient is still mentioned in manufacturer product literature and some industry computer software, [ 6 ] it is no longer mentioned as an option in industry-specific texts [ 2 ] or model building codes. [ 7 ] Aside from its inherent inaccuracies, another shortcoming of the SC is its counter-intuitive name, which suggests that high values mean high shading when in reality the opposite is true. Industry technical experts recognized the limitations of SC and pushed towards SHGC in the United States (and the analogous g-value in Europe) before the early 1990s. [ 8 ] A conversion from SC to SHGC is not necessarily straightforward, as they each take into account different heat transfer mechanisms and paths (window assembly vs. glass-only). To perform an approximate conversion from SC to SHGC, multiply the SC value by 0.87. [ 3 ] The g-value (sometimes also called a solar factor or total solar energy transmittance) is the coefficient commonly used in Europe to measure the solar energy transmittance of windows. Despite minor differences in modeling standards compared to the SHGC, the two values are effectively the same. A g-value of 1.0 represents full transmittance of all solar radiation, while 0.0 represents a window with no solar energy transmittance. In practice, though, most g-values range between 0.2 and 0.7, with solar-control glazing having a g-value of less than 0.5. [ 9 ] The SHGC is the successor to the shading coefficient in the United States; it is the ratio of transmitted solar radiation to incident solar radiation of an entire window assembly. It ranges from 0 to 1 and refers to the solar energy transmittance of a window or door as a whole, factoring in the glass, frame material, sash (if present), divided lite bars (if present) and screens (if present). [ 3 ] The transmittance of each component is calculated in a similar manner to the shading coefficient.
However, in contrast to the shading coefficient, the total solar gain is calculated on a wavelength-by-wavelength basis, where the directly transmitted portion of the solar heat gain coefficient is given by: [ 4 ] {\displaystyle T=\int \limits _{350\ nm}^{3500\ nm}T(\lambda )E(\lambda )\,d\lambda } Here T(λ) is the spectral transmittance at a given wavelength in nanometers and E(λ) is the incident solar spectral irradiance (normalized so that it integrates to unity over the solar spectrum). When integrated over the wavelengths of solar short-wave radiation, this yields the total fraction of transmitted solar energy across all solar wavelengths. The product N·A(λ,θ) is then the portion of absorbed and re-emitted energy across all assembly components, not just the glass. It is important to note that the standard SHGC is calculated only for an angle of incidence normal to the window; however, this tends to provide a good estimate over a wide range of angles, up to 30 degrees from normal in most cases. [ 3 ] SHGC can either be estimated through simulation models or measured by recording the total heat flow through a window with a calorimeter chamber. In both cases, NFRC standards outline the test procedure and the calculation of the SHGC. [ 10 ] For dynamic fenestration or operable shading, each possible state can be described by a different SHGC. Though the SHGC is more realistic than the SC, both are only rough approximations when they include complex elements such as shading devices, which offer more precise control over when fenestration is shaded from solar gain than glass treatments do. [ 5 ] Apart from windows, walls and roofs also serve as pathways for solar gain. In these components heat transfer is entirely due to absorptance, conduction, and re-radiation, since all transmittance is blocked in opaque materials. The primary metric for opaque components is the Solar Reflectance Index (SRI), which accounts for both the solar reflectance (albedo) and the emittance of a surface. [ 11 ] Materials with a high SRI will reflect and emit a majority of heat energy, keeping them cooler than other exterior finishes. This is quite significant in the design of roofs, since dark roofing materials can often be as much as 50 °C hotter than the surrounding air temperature, leading to large thermal stresses as well as heat transfer to the interior space. [ 5 ] Solar gain can have either positive or negative effects depending on the climate. In the context of passive solar building design, the aim of the designer is normally to maximize solar gain within the building in the winter (to reduce space heating demand), and to control it in summer (to minimize cooling requirements). Thermal mass may be used to even out the fluctuations during the day, and to some extent between days. Uncontrolled solar gain is undesirable in hot climates due to its potential for overheating a space. To minimize this and reduce cooling loads, several technologies exist for solar gain reduction. SHGC is influenced by the color or tint of glass and its degree of reflectivity . Reflectivity can be modified through the application of reflective metal oxides to the surface of the glass. Low-emissivity coating is a more recently developed option that offers greater specificity in the wavelengths reflected and re-emitted, allowing glass to block mainly short-wave infrared radiation without greatly reducing visible transmittance . [ 3 ]
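A numerical sketch of this wavelength-by-wavelength weighting, using a coarse, made-up spectrum and transmittance curve purely for illustration (real calculations use the tabulated solar spectra referenced by the NFRC standards):

```python
import numpy as np

# Wavelength grid (nm) spanning the solar short-wave range used above.
wavelengths = np.linspace(350, 3500, 316)

# Illustrative spectral irradiance E(lambda): peaked in the visible,
# normalized so that it integrates to 1 and acts as a weighting function.
irradiance = np.exp(-((wavelengths - 550) / 500.0) ** 2)
weights = irradiance / np.trapz(irradiance, wavelengths)

# Illustrative spectral transmittance T(lambda): a low-e-style glazing
# that passes visible light but blocks short-wave infrared.
transmittance = np.where(wavelengths < 780, 0.75, 0.20)

# Directly transmitted portion: integral of T(lambda) E(lambda) dlambda.
t_solar = np.trapz(transmittance * weights, wavelengths)
print(f"Directly transmitted solar fraction ≈ {t_solar:.2f}")
```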
In climate-responsive design for cold and mixed climates , windows are typically sized and positioned to provide solar heat gains during the heating season. To that end, glazing with a relatively high solar heat gain coefficient is often used so as not to block solar heat gains, especially on the sunny side of the house. SHGC also decreases with the number of glass panes used in a window: in triple-glazed windows , SHGC tends to be in the range of 0.33–0.47, while for double-glazed windows it is more often in the range of 0.42–0.55. Different types of glass can be used to increase or decrease solar heat gain through fenestration, and solar gain can also be more finely tuned by the proper orientation of windows and by the addition of shading devices such as overhangs , louvers , fins, porches , and other architectural shading elements. Passive solar heating is a design strategy that attempts to maximize the amount of solar gain in a building when additional heating is desired. It differs from active solar heating , which uses pumps and exterior water tanks to absorb solar energy, in that passive solar systems do not require energy for pumping and store heat directly in the structures and finishes of occupied space. [ 12 ] In direct solar gain systems, the composition and coating of the building glazing can be manipulated to increase the greenhouse effect by optimizing their radiation properties, while their size, position, and shading can be used to optimize solar gain. Solar gain can also be transferred to the building by indirect or isolated solar gain systems. Passive solar designs typically employ large equator-facing windows with a high SHGC and overhangs that block sunlight in summer months and permit it to enter the window in the winter. When placed in the path of admitted sunlight, high-thermal-mass features such as concrete slabs or Trombe walls store large amounts of solar radiation during the day and release it slowly into the space throughout the night. [ 13 ] When designed properly, this can modulate temperature fluctuations. Some current research in this area addresses the tradeoff between opaque thermal mass for storage and transparent glazing for collection through the use of transparent phase change materials that both admit light and store energy without the need for excessive weight. [ 14 ]
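As a worked illustration of how SHGC feeds into design decisions like those above, the instantaneous solar gain through a window can be estimated as irradiance × area × SHGC; this is a simplified steady-state sketch, and the irradiance, area, and mid-range SHGC values are illustrative assumptions rather than figures from the cited standards:

```python
def window_solar_gain_w(irradiance_w_m2: float, area_m2: float, shgc: float) -> float:
    """Instantaneous solar heat gain through a window (watts)."""
    return irradiance_w_m2 * area_m2 * shgc

# A 2 m^2 equator-facing window under 600 W/m^2 of winter sun:
double_glazed = window_solar_gain_w(600, 2.0, 0.50)  # mid-range double glazing
triple_glazed = window_solar_gain_w(600, 2.0, 0.40)  # mid-range triple glazing
print(double_glazed, triple_glazed)  # 600.0 W vs 480.0 W of passive heating
```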
https://en.wikipedia.org/wiki/Solar_gain
The solar humidification–dehumidification method ( HDH ) is a thermal water desalination method. It is based on evaporation of sea water or brackish water and subsequent condensation of the generated humid air, mostly at ambient pressure. This process mimics the natural water cycle , but over a much shorter time frame. The simplest configuration is implemented in the solar still , which evaporates the sea water inside a glass-covered box and condenses the water vapor on the lower side of the glass cover, away from the unevaporated seawater. More sophisticated designs separate the solar heat gain section from the evaporation–condensation chamber. An optimized design comprises separated evaporation and condensation sections, since a significant part of the heat consumed for evaporation can be regained during condensation. An example of such an optimized thermal desalination cycle is the multiple-effect humidification (MEH) method of desalination. [ citation needed ] Solar humidification takes place in every greenhouse: water evaporates from the surfaces of soil, water and plants because of thermal input, so the humidification process is naturally integrated within the architecture of the greenhouse. Several companies, such as Seawater Greenhouse , utilize this inherent feature of a greenhouse in order to conduct desalination inside the atmosphere of the facility. [ citation needed ] The method can be optimized by using various effects in the categories of thermal energy collection and storage for continued nocturnal operation, choice of site location, various evaporation effects, as well as condenser design and the provision of cooling energy to harvest distillate from the moist air. A desalination greenhouse using all of the effects in all categories, with an emphasis on their optimized combination including synergies, is the IBTS Greenhouse . The global water cycle also includes all the sub-effects of HDH, such as increased evaporation over the ocean surface and the enlargement of that surface by wind, which make the natural generation of freshwater on the planet so efficient. [ citation needed ] Successful small-scale agricultural experiments have been conducted in arid regions such as Israel, West Africa, and Peru. The major difficulty lies in effectively concentrating the energy of the sun on a small area to speed up evaporation. [ 1 ]
https://en.wikipedia.org/wiki/Solar_humidification
A solar hydrogen panel is a device for artificial photosynthesis that produces photohydrogen from sunlight and water. The panel uses electrochemical water splitting, in which energy captured from solar panels powers water electrolysis , producing hydrogen and oxygen . The oxygen is discarded into the atmosphere while the hydrogen is collected and stored . Solar hydrogen panels offer a method of capturing solar energy by producing green hydrogen that can be used in industrial and transportation applications. Solar hydrogen panels operate via photovoltaic–electrochemical (PV-EC) water splitting with two components: the photovoltaic cell and the electrochemical cell (or electrolyzer). The photovoltaic cell uses solar energy to generate electricity, which it sends to the electrochemical cell. The electrochemical cell uses electrolysis to split the water electrolyte, creating hydrogen (H₂) at the cathode and oxygen (O₂) at the anode . [ 1 ] With the development of photovoltaic cells and electrolysis devices, the efficiency of solar hydrogen panels has been optimized to over 10%. [ 2 ] In the photovoltaic component, the Shockley–Queisser limit restricts the efficiency of the solar cells. [ 1 ] The efficiency of the electrolytic component depends on the catalyst chosen, with efficiencies ranging from 59 to 70%. [ 1 ] Another method of solar hydrogen generation is the photoelectrochemical cell (PEC), in which solar energy is captured by a semiconductor immersed in a water electrolyte . The photoelectrochemical cell is favored for its lower complexity and cost; however, it has lower efficiency than PV-EC and cannot be contained within a panel. [ 3 ] In 1970, South African electrochemist John Bockris claimed that hydrogen as a fuel source could be supplied by a chemical reaction between water and solar energy. In his 1975 book, Energy: The Solar-Hydrogen Alternative , Bockris formally explained the process by which hydrogen could theoretically be extracted from solar energy, including his suggestions on using hydrogen as a medium of energy and the potential of harnessing the sun to synthesize hydrogen. [ 4 ] The world's first solar-powered hydrogen production plant became operational in 1990 in Neunburg vorm Wald , a town in southern Germany . [ 5 ] [ 6 ] In 2019, chemists and physicists at the University of Tokyo and Tokyo Metropolitan University made improvements in the material construction and efficiency of water-splitting solar panels, demonstrating one square meter of sunlight-exposed area with a solar-to-hydrogen efficiency of 0.4%; the research claims to be viable for scalable and cheap renewable solar hydrogen production. [ 7 ] Also in 2019, scientists at KU Leuven 's Center for Surface Chemistry and Catalysis in Leuven , Belgium, created a solar hydrogen panel which produced hydrogen with a 15% solar-to-hydrogen efficiency, a leap from their maximum efficiency of 0.1% a decade earlier. [ 5 ] [ 8 ] This 15% efficiency is also the current world record for solar hydrogen panels. [ 8 ] The University of Michigan reported developing a panel with a water-to-hydrogen efficiency of 9% in 2023. [ 9 ] While solar hydrogen panels are not yet sophisticated enough to be sold to the general public, multiple companies are leading the market in solar hydrogen panel production. SunHydrogen is a public company that has been working on the development of efficient solar hydrogen panels since 2009. [ 10 ]
On 19 February 2021, exactly two years after the reveal of their world-record panel, KU Leuven launched the Solhyd Project, an effort to make the panel commercially available. [ 11 ] Of the 50 million tons of hydrogen currently produced annually, over 25% is used directly to produce nitrogen-based fertilizers , such as ammonia , nitrate , and urea , via the Haber–Bosch process . [ 12 ] [ 13 ] For ammonia, over 80% of the 175 million tons produced in 2020 was used as fertilizer and feedstock for agricultural growth. [ 13 ] Because the production of nitrogen-based fertilizers will continue to grow to meet the needs of population growth, further developments in solar hydrogen panel technology can help meet the increased hydrogen needs of ammonia production. [ 12 ] Solar hydrogen panel technology can also be beneficial for the space industry. Liquid hydrogen is used as rocket engine fuel, for example in the BE-3PM engine on Blue Origin 's New Shepard suborbital launch vehicle. For orbital launch vehicles, the large mass of fuel and oxidizer required for launch necessitates in-space fuel production for return missions. [ 12 ] Solar hydrogen panels may offer a lightweight option for producing fuel for both hydrogen- and methane -powered rocket engines. [ 12 ] Solar hydrogen panel technologies can be arranged in a distributed approach, in which the site of hydrogen production is independent of the site of energy production. [ 12 ] Existing electrical grids can be used to transport electricity from solar hydrogen panels to hydrogen production plants, avoiding the need for hydrogen transport . [ 12 ] By producing and storing hydrogen during periods of high solar insolation , when the cost of electricity is low, the system could be highly cost- and energy-efficient. [ 12 ] The hydrogen could be stored for use during periods of low solar insolation, when electricity costs are higher. [ 12 ] However, this method would require power electronics for electricity transport, such as DC–DC converters and AC–DC inverters , which further reduce the system's efficiency. [ 12 ] [ 14 ] Further advancements would be needed to reduce the cost of grid electrolysis technologies and increase the efficiency of electricity transport to make the system viable on a larger scale. [ 14 ] Challenges hindering the development and large-scale adoption of this technology mostly relate to the high monetary cost of panel production. [ 15 ] Specifically, the manufacturing of photovoltaic cells remains expensive, keeping the cost of solar-based H₂ production higher than that of H₂ production from fossil fuels. [ 16 ] Environmental impacts of producing these cells include the release of large amounts of CO₂ and SO₂, contributing to global warming and ocean acidification ; these impacts may offset the realized environmental benefits of the technology. [ 16 ] Additional obstacles relate to the lack of infrastructure for hydrogen storage and transportation . As hydrogen possesses a low volumetric energy density and high flammability, [ 16 ] a network of specialized containers and pipelines is required to enable safe, widespread hydrogen production and use. [ 15 ] Another notable challenge is the technology's dependence on sunlight for operation. As solar energy can only be produced during the day, the system undergoes daily startup and shutdown sequences, which hinder the durability and efficiency of the conversion process over time. [ 16 ]
Scientists have not yet found an electrolyzer material durable enough to withstand perturbations such as frequent on/off cycles. [ 17 ]
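To first order, the overall solar-to-hydrogen (STH) efficiency of a PV-EC panel is the product of the photovoltaic efficiency and the electrolyzer efficiency. A minimal sketch of that relation, with illustrative component values chosen to land near the efficiencies reported above (they are assumptions of ours, not measurements from the cited groups):

```python
def solar_to_hydrogen_efficiency(pv_efficiency: float,
                                 electrolyzer_efficiency: float) -> float:
    """First-order STH estimate for a coupled PV + electrolyzer panel.

    Assumes ideal electrical coupling between the two stages; real panels
    lose additional efficiency to operating-point mismatch and wiring.
    """
    return pv_efficiency * electrolyzer_efficiency

# Illustrative: a ~22%-efficient PV stage driving a ~68%-efficient
# electrolyzer (within the 59-70% range quoted above) yields ~15% STH,
# matching the record figure reported for the KU Leuven panel.
print(f"{solar_to_hydrogen_efficiency(0.22, 0.68):.1%}")  # 15.0%
```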
https://en.wikipedia.org/wiki/Solar_hydrogen_panel
Solar irradiance is the power per unit area ( surface power density ) received from the Sun in the form of electromagnetic radiation in the wavelength range of the measuring instrument. Solar irradiance is measured in watts per square metre (W/m²) in SI units . Solar irradiance is often integrated over a given time period in order to report the radiant energy emitted into the surrounding environment ( joules per square metre, J/m²) during that time period. This integrated solar irradiance is called solar irradiation , solar radiation , solar exposure , solar insolation , or insolation . Irradiance may be measured in space or at the Earth's surface after atmospheric absorption and scattering . Irradiance in space is a function of distance from the Sun, the solar cycle , and cross-cycle changes. [ 2 ] Irradiance on the Earth's surface additionally depends on the tilt of the measuring surface, the height of the Sun above the horizon, and atmospheric conditions. [ 3 ] Solar irradiance affects plant metabolism and animal behavior. [ 4 ] The study and measurement of solar irradiance have several important applications, including the prediction of energy generation from solar power plants , the heating and cooling loads of buildings, climate modeling and weather forecasting, passive daytime radiative cooling applications, and space travel. There are several measured types of solar irradiance. Spectral versions of the above irradiances (e.g. spectral TSI , spectral DNI , etc.) are any of the above with units divided either by metre or nanometre (for a spectrum as a function of wavelength) or by hertz (for a spectrum as a function of frequency). [ citation needed ] When one plots such a spectral distribution as a graph, the integral of the function (the area under the curve) is the (non-spectral) irradiance. For example, for a solar cell on the surface of the Earth facing straight up, with DNI in units of W·m⁻²·nm⁻¹ graphed as a function of wavelength (in nm), the unit of the integral (W·m⁻²) is the product of those two units. [ citation needed ] The SI unit of irradiance is watts per square metre (W/m²). The unit of insolation often used in the solar power industry is kilowatt hours per square metre (kWh/m²). [ 11 ] The langley is an alternative unit of insolation: one langley is one thermochemical calorie per square centimetre, or 41,840 J/m². [ 12 ] The average annual solar radiation arriving at the top of the Earth's atmosphere is about 1361 W/m². This represents the power per unit area of solar irradiance across the spherical surface surrounding the Sun with a radius equal to the distance to the Earth (1 AU ). This means that the approximately circular disc of the Earth, as viewed from the Sun, receives a roughly stable 1361 W/m² at all times. The area of this circular disc is πr², in which r is the radius of the Earth. Because the Earth is approximately spherical , it has total area 4πr², meaning that the solar radiation arriving at the top of the atmosphere, averaged over the entire surface of the Earth, is simply divided by four to get 340 W/m². In other words, averaged over the year and the day, the Earth's atmosphere receives 340 W/m² from the Sun. This figure is important in radiative forcing . The distribution of solar radiation at the top of the atmosphere is determined by Earth's sphericity and orbital parameters. This applies to any unidirectional beam incident on a rotating sphere.
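The factor-of-four geometry just described is easy to verify numerically; a small sketch (the variable names are our own):

```python
import math

S0 = 1361.0          # W/m^2, solar constant at 1 AU
r = 6.371e6          # m, mean Earth radius

intercepted = S0 * math.pi * r**2                 # power on Earth's circular disc
surface_avg = intercepted / (4 * math.pi * r**2)  # spread over the full sphere

print(surface_avg)   # 340.25 W/m^2 = S0 / 4, independent of the radius r
```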
Insolation is essential for numerical weather prediction and understanding seasons and climatic change . Application to ice ages is known as Milankovitch cycles . Distribution is based on a fundamental identity from spherical trigonometry , the spherical law of cosines :

{\displaystyle \cos(c)=\cos(a)\cos(b)+\sin(a)\sin(b)\cos(C)}

where a , b and c are arc lengths, in radians, of the sides of a spherical triangle, and C is the angle in the vertex opposite the side which has arc length c . Applied to the calculation of the solar zenith angle Θ , the following substitutions are made in the spherical law of cosines:

{\displaystyle {\begin{aligned}C&=h\\c&=\Theta \\a&={\tfrac {1}{2}}\pi -\varphi \\b&={\tfrac {1}{2}}\pi -\delta \\\cos(\Theta )&=\sin(\varphi )\sin(\delta )+\cos(\varphi )\cos(\delta )\cos(h)\end{aligned}}}

This equation can also be derived from a more general formula: [ 13 ]

{\displaystyle {\begin{aligned}\cos(\Theta )=\sin(\varphi )\sin(\delta )\cos(\beta )&+\sin(\delta )\cos(\varphi )\sin(\beta )\cos(\gamma )+\cos(\varphi )\cos(\delta )\cos(\beta )\cos(h)\\&-\cos(\delta )\sin(\varphi )\sin(\beta )\cos(\gamma )\cos(h)-\cos(\delta )\sin(\beta )\sin(\gamma )\sin(h)\end{aligned}}}

where β is an angle from the horizontal and γ is an azimuth angle . The separation of Earth from the Sun can be denoted R_E and the mean distance can be denoted R_0, approximately 1 astronomical unit (AU). The solar constant is denoted S_0. The solar flux density (insolation) onto a plane tangent to the sphere of the Earth, but above the bulk of the atmosphere (elevation 100 km or greater), is:

{\displaystyle Q={\begin{cases}S_{o}{\frac {R_{o}^{2}}{R_{E}^{2}}}\cos(\Theta )&\cos(\Theta )>0\\0&\cos(\Theta )\leq 0\end{cases}}}

The average of Q over a day is the average of Q over one rotation, with the hour angle progressing from h = π to h = −π :

{\displaystyle {\overline {Q}}^{\text{day}}=-{\frac {1}{2\pi }}\int _{\pi }^{-\pi }Q\,dh}

Let h_0 be the hour angle when Q becomes positive. This could occur at sunrise, when {\displaystyle \Theta ={\tfrac {1}{2}}\pi } , or for h_0 as a solution of

{\displaystyle \sin(\varphi )\sin(\delta )+\cos(\varphi )\cos(\delta )\cos(h_{o})=0}

or

{\displaystyle \cos(h_{o})=-\tan(\varphi )\tan(\delta )}

If tan( φ ) tan( δ ) > 1 , then the sun does not set and is already risen at h = π , so h_o = π . If tan( φ ) tan( δ ) < −1 , the sun does not rise and {\displaystyle {\overline {Q}}^{\text{day}}=0} .
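A short numerical sketch of the two results above, the zenith-angle formula and the sunrise hour angle h₀ with its polar-day and polar-night cases (Python; the function names are my own):

```python
import math

def cos_zenith(lat_deg, decl_deg, hour_angle_deg):
    """cos(Theta) = sin(phi)sin(delta) + cos(phi)cos(delta)cos(h)."""
    phi, delta = math.radians(lat_deg), math.radians(decl_deg)
    h = math.radians(hour_angle_deg)
    return (math.sin(phi) * math.sin(delta)
            + math.cos(phi) * math.cos(delta) * math.cos(h))

def sunrise_hour_angle(lat_deg, decl_deg):
    """h0 from cos(h0) = -tan(phi)tan(delta), clamped for polar cases."""
    x = -math.tan(math.radians(lat_deg)) * math.tan(math.radians(decl_deg))
    if x <= -1.0:
        return math.pi    # sun never sets (polar day)
    if x >= 1.0:
        return 0.0        # sun never rises (polar night)
    return math.acos(x)

# 45 deg N at solar noon on the June solstice (declination ~ +23.44 deg):
print(math.degrees(math.acos(cos_zenith(45, 23.44, 0))))  # zenith ~ 21.6 deg
print(math.degrees(sunrise_hour_angle(45, 23.44)))        # h0 ~ 116 deg
```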
Since {\displaystyle {\frac {R_{o}^{2}}{R_{E}^{2}}}} is nearly constant over the course of a day, it can be taken outside the integral:

{\displaystyle {\begin{aligned}\int _{\pi }^{-\pi }Q\,dh&=\int _{h_{o}}^{-h_{o}}Q\,dh\\[5pt]&=S_{o}{\frac {R_{o}^{2}}{R_{E}^{2}}}\int _{h_{o}}^{-h_{o}}\cos(\Theta )\,dh\\[5pt]&=S_{o}{\frac {R_{o}^{2}}{R_{E}^{2}}}{\Bigg [}h\sin(\varphi )\sin(\delta )+\cos(\varphi )\cos(\delta )\sin(h){\Bigg ]}_{h=h_{o}}^{h=-h_{o}}\\[5pt]&=-2S_{o}{\frac {R_{o}^{2}}{R_{E}^{2}}}\left[h_{o}\sin(\varphi )\sin(\delta )+\cos(\varphi )\cos(\delta )\sin(h_{o})\right]\end{aligned}}}

Therefore:

{\displaystyle {\overline {Q}}^{\text{day}}={\frac {S_{o}}{\pi }}{\frac {R_{o}^{2}}{R_{E}^{2}}}\left[h_{o}\sin(\varphi )\sin(\delta )+\cos(\varphi )\cos(\delta )\sin(h_{o})\right]}

Let θ be the conventional polar angle describing a planetary orbit , with θ = 0 at the March equinox . The declination δ as a function of orbital position is [ 14 ] [ 15 ]

{\displaystyle \delta =\varepsilon \sin(\theta )}

where ε is the obliquity . (Note: the exact formula, valid for any axial tilt, is {\displaystyle \sin(\delta )=\sin(\varepsilon )\sin(\theta )} . [ 16 ] ) The conventional longitude of perihelion ϖ is defined relative to the March equinox, so for the elliptical orbit: [ 17 ]

{\displaystyle R_{E}={\frac {R_{o}(1-e^{2})}{1+e\cos(\theta -\varpi )}}}

or

{\displaystyle {\frac {R_{o}}{R_{E}}}={\frac {1+e\cos(\theta -\varpi )}{1-e^{2}}}}

With knowledge of ϖ , ε and e from astrodynamical calculations [ 18 ] and S_o from a consensus of observations or theory, {\displaystyle {\overline {Q}}^{\text{day}}} can be calculated for any latitude φ and any θ . Because of the elliptical orbit, and as a consequence of Kepler's second law , θ does not progress uniformly with time. Nevertheless, θ = 0° is exactly the time of the March equinox, θ = 90° is exactly the time of the June solstice, θ = 180° is exactly the time of the September equinox, and θ = 270° is exactly the time of the December solstice. A simplified equation for irradiance on a given day is: [ 19 ] [ 20 ]

{\displaystyle Q\approx S_{0}\left(1+0.034\cos \left(2\pi {\frac {n}{365.25}}\right)\right)}

where n is the day of the year.
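The daily-average insolation formula derived above is easy to evaluate numerically. Below is a small Python sketch, taking R_o/R_E = 1 for simplicity (the helper name is my own):

```python
import math

S0 = 1361.0   # solar constant, W/m^2

def daily_mean_insolation(lat_deg, decl_deg, dist_ratio=1.0):
    """Q_day = (S0/pi)(Ro/RE)^2 [h0 sin(phi)sin(delta)
                                 + cos(phi)cos(delta)sin(h0)]."""
    phi, delta = math.radians(lat_deg), math.radians(decl_deg)
    x = -math.tan(phi) * math.tan(delta)
    if x >= 1.0:
        return 0.0                                # polar night: Q_day = 0
    h0 = math.pi if x <= -1.0 else math.acos(x)   # polar day: h0 = pi
    return (S0 / math.pi) * dist_ratio**2 * (
        h0 * math.sin(phi) * math.sin(delta)
        + math.cos(phi) * math.cos(delta) * math.sin(h0))

print(daily_mean_insolation(0, 0))       # equator at equinox: ~433 W/m^2
print(daily_mean_insolation(90, 23.44))  # pole at June solstice: ~541 W/m^2
```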
Total solar irradiance (TSI) [ 21 ] changes slowly on decadal and longer timescales. The variation during solar cycle 21 was about 0.1% (peak-to-peak). [ 22 ] In contrast to older reconstructions, [ 23 ] most recent TSI reconstructions point to an increase of only about 0.05% to 0.1% between the 17th-century Maunder Minimum and the present. [ 24 ] [ 25 ] [ 26 ] Current understanding, based on various lines of evidence, suggests that the lower values for the secular trend are more probable; in particular, a secular trend greater than 2 W/m² is considered highly unlikely. [ 26 ] [ 27 ] [ 28 ] Ultraviolet irradiance (EUV) varies by approximately 1.5 percent from solar maxima to minima, for 200 to 300 nm wavelengths. [ 29 ] However, a proxy study estimated that UV has increased by 3.0% since the Maunder Minimum. [ 30 ] Some variations in insolation are not due to solar changes but rather to the Earth moving between its perihelion and aphelion , or to changes in the latitudinal distribution of radiation. These orbital changes, or Milankovitch cycles , have caused radiance variations of as much as 25% (locally; global average changes are much smaller) over long periods. The most recent significant event was an axial tilt of 24° during boreal summer near the Holocene climatic optimum . Obtaining a time series for {\displaystyle {\overline {Q}}^{\text{day}}} for a particular time of year and a particular latitude is a useful application in the theory of Milankovitch cycles. For example, at the summer solstice the declination δ is equal to the obliquity ε , and the distance from the Sun is

{\displaystyle {\frac {R_{o}}{R_{E}}}=1+e\cos(\theta -\varpi )=1+e\cos \left({\frac {\pi }{2}}-\varpi \right)=1+e\sin(\varpi )}

For this summer-solstice calculation, the role of the elliptical orbit is entirely contained within the important product e sin( ϖ ), the precession index, whose variation dominates the variations in insolation at 65° N when eccentricity is large. For the next 100,000 years, with variations in eccentricity being relatively small, variations in obliquity will dominate. The space-based TSI record comprises measurements from more than ten radiometers and spans three solar cycles. All modern TSI satellite instruments employ active-cavity electrical-substitution radiometry . This technique measures the electrical heating needed to maintain an absorptive blackened cavity in thermal equilibrium with the incident sunlight, which passes through a precision aperture of calibrated area. The aperture is modulated via a shutter . Accuracy uncertainties of <0.01% are required to detect long-term solar irradiance variations, because expected changes are in the range of 0.05–0.15 W/m² per century. [ 31 ] In orbit, radiometric calibrations drift for reasons including solar degradation of the cavity, electronic degradation of the heater, surface degradation of the precision aperture, and varying surface emissions and temperatures that alter thermal backgrounds. These calibrations require compensation to preserve consistent measurements. [ 31 ] For various reasons, the sources do not always agree. The Solar Radiation and Climate Experiment/Total Irradiance Measurement ( SORCE /TIM) TSI values are lower than prior measurements by the Earth Radiometer Budget Experiment (ERBE) on the Earth Radiation Budget Satellite (ERBS), VIRGO on the Solar Heliospheric Observatory (SoHO) and the ACRIM instruments on the Solar Maximum Mission (SMM), Upper Atmosphere Research Satellite (UARS) and ACRIMSAT . Pre-launch ground calibrations relied on component rather than system-level measurements, since irradiance standards at the time lacked sufficient absolute accuracy. [ 31 ] Measurement stability involves exposing different radiometer cavities to different accumulations of solar radiation, to quantify exposure-dependent degradation effects that are then compensated for in the final data. Overlapping observations permit corrections for absolute offsets and validation of instrumental drifts. [ 31 ] Uncertainties of individual observations exceed irradiance variability (~0.1%).
Thus, instrument stability and measurement continuity are relied upon to compute real variations. Long-term radiometer drifts can be mistaken for irradiance variations, which can in turn be misinterpreted as affecting climate. Examples include the issue of the irradiance increase between cycle minima in 1986 and 1996, evident only in the ACRIM composite (and not the model), and the low irradiance levels in the PMOD composite during the 2008 minimum. Although ACRIM I, ACRIM II, ACRIM III, VIRGO and TIM all track degradation with redundant cavities, notable and unexplained differences remain in irradiance and in the modeled influences of sunspots and faculae . Disagreement among overlapping observations indicates unresolved drifts, suggesting that the TSI record is not sufficiently stable to discern solar changes on decadal time scales. Only the ACRIM composite shows irradiance increasing by ~1 W/m² between 1986 and 1996. Notably, the most accurate TSI reconstructions, from empirical and physics-based semi-empirical models using independent inputs, consistently disfavor this increase during the ACRIM gap. [ 31 ] [ 32 ] [ 33 ] [ 34 ] [ 35 ] Recommendations to resolve the instrument discrepancies include validating optical measurement accuracy by comparing ground-based instruments to laboratory references, such as those at the National Institute of Standards and Technology (NIST); having NIST validate aperture-area calibrations using spares from each instrument; and applying diffraction corrections from the view-limiting aperture. [ 31 ] For ACRIM, NIST determined that diffraction from the view-limiting aperture contributes a 0.13% signal not accounted for in the three ACRIM instruments. This correction lowers the reported ACRIM values, bringing ACRIM closer to TIM. In ACRIM and all other instruments but TIM, the precision aperture is deep inside the instrument, with a larger view-limiting aperture at the front. Depending on edge imperfections, the front aperture can directly scatter light into the cavity. This design admits into the front part of the instrument two to three times the amount of light intended to be measured; if not completely absorbed or scattered, this additional light produces erroneously high signals. In contrast, TIM's design places the precision aperture at the front so that only the desired light enters. [ 31 ] Variations from other sources likely include an annual systematic signal in the ACRIM III data that is nearly in phase with the Sun–Earth distance, and 90-day spikes in the VIRGO data coincident with SoHO spacecraft maneuvers that were most apparent during the 2008 solar minimum. TIM's high absolute accuracy creates new opportunities for measuring climate variables. The TSI Radiometer Facility (TRF) is a cryogenic radiometer that operates in a vacuum with controlled light sources. The Laboratory for Atmospheric and Space Physics (LASP) designed and built the system, completed in 2008. It was calibrated for optical power against the NIST Primary Optical Watt Radiometer, a cryogenic radiometer that maintains the NIST radiant-power scale to an uncertainty of 0.02% (1 σ ). As of 2011, TRF was the only facility that approached the desired <0.01% uncertainty for pre-launch validation of solar radiometers measuring irradiance (rather than merely optical power) at solar power levels and under vacuum conditions. [ 31 ] TRF encloses both the reference radiometer and the instrument under test in a common vacuum system that contains a stationary, spatially uniform illuminating beam.
A precision aperture with an area calibrated to 0.0031% (1 σ ) determines the measured portion of the beam. The test instrument's precision aperture is positioned in the same location, without optically altering the beam, for direct comparison to the reference. Variable beam power provides linearity diagnostics, and variable beam diameter diagnoses scattering from different instrument components. [ 31 ] The Glory/TIM and PICARD/PREMOS flight instrument absolute scales are now traceable to the TRF in both optical power and irradiance. The resulting high accuracy reduces the consequences of any future gap in the solar irradiance record. [ 31 ] The most probable value of TSI representative of solar minimum is 1360.9 ± 0.5 W/m², lower than the earlier accepted value of 1365.4 ± 1.3 W/m² established in the 1990s. The new value came from SORCE/TIM and radiometric laboratory tests. Scattered light is a primary cause of the higher irradiance values measured by earlier satellites, in which the precision aperture is located behind a larger, view-limiting aperture. The TIM uses a view-limiting aperture that is smaller than the precision aperture, which precludes this spurious signal. The new estimate reflects better measurement rather than a change in solar output. [ 31 ] A regression-model-based split of the relative proportions of sunspot and facular influences in SORCE/TIM data accounts for 92% of observed variance and tracks the observed trends to within TIM's stability band. This agreement provides further evidence that TSI variations are primarily due to solar surface magnetic activity. [ 31 ] Instrument inaccuracies add significant uncertainty to determinations of Earth's energy balance . The energy imbalance has been variously measured (during a deep solar minimum of 2005–2010) to be +0.58 ± 0.15 W/m², [ 36 ] +0.60 ± 0.17 W/m² [ 37 ] and +0.85 W/m². Estimates from space-based measurements range from +3 to +7 W/m². SORCE/TIM's lower TSI value reduces this discrepancy by 1 W/m². This difference between the new, lower TIM value and earlier TSI measurements corresponds to a climate forcing of −0.8 W/m², which is comparable to the energy imbalance. [ 31 ] Average annual solar radiation arriving at the top of the Earth's atmosphere is roughly 1361 W/m². [ 38 ] The Sun's rays are attenuated as they pass through the atmosphere , leaving a maximum normal surface irradiance of approximately 1000 W/m² at sea level on a clear day. When 1361 W/m² is arriving above the atmosphere (when the Sun is at the zenith in a cloudless sky), direct sun is about 1050 W/m², and global radiation on a horizontal surface at ground level is about 1120 W/m². [ 39 ] The latter figure includes radiation scattered or re-emitted by the atmosphere and surroundings. The actual figure varies with the Sun's angle and atmospheric circumstances. Ignoring clouds, the daily average insolation for the Earth is approximately 6 kWh/m² = 21.6 MJ/m². The output of, for example, a photovoltaic panel partly depends on the angle of the sun relative to the panel. One Sun is a unit of power flux , not a standard value for actual insolation. Sometimes this unit is referred to as a Sol, not to be confused with a sol , meaning one solar day . [ 40 ] Part of the radiation reaching an object is absorbed and the remainder reflected. Usually, the absorbed radiation is converted to thermal energy , increasing the object's temperature.
Humanmade or natural systems, however, can convert part of the absorbed radiation into another form, such as electricity or chemical bonds , as in the case of photovoltaic cells or plants . The proportion of reflected radiation is the object's reflectivity or albedo . Insolation onto a surface is largest when the surface directly faces (is normal to) the sun. As the angle between the surface and the Sun moves away from normal, the insolation is reduced in proportion to the angle's cosine ; see effect of Sun angle on climate . If the angle is measured between the ground and the sunbeam, rather than between the vertical direction and the sunbeam, then the sine rather than the cosine is appropriate. Consider a sunbeam one mile wide arriving from directly overhead, and another arriving at a 30° angle to the horizontal. The sine of a 30° angle is 1/2, whereas the sine of a 90° angle is 1, so the angled sunbeam spreads the light over twice the area; consequently, half as much light falls on each square mile. This projection effect is the main reason why Earth's polar regions are much colder than equatorial regions . On an annual average, the poles receive less insolation than does the equator, because the poles are always angled more away from the Sun than the tropics, and moreover receive no insolation at all for the six months of their respective winters. At a lower angle, the light must also travel through more atmosphere. This attenuates it (by absorption and scattering), further reducing insolation at the surface. Attenuation is governed by the Beer–Lambert law , namely that the transmittance , or fraction of insolation reaching the surface, decreases exponentially in the optical depth or absorbance (the two notions differing only by a constant factor of ln(10) = 2.303 ) of the path of insolation through the atmosphere. For any given short length of the path, the optical depth is proportional to the number of absorbers and scatterers along that length, typically increasing with decreasing altitude. The optical depth of the whole path is then the integral (sum) of those optical depths along the path. When the density of absorbers is layered, that is, depends much more on vertical than horizontal position in the atmosphere, to a good approximation the optical depth is inversely proportional to the projection effect, that is, to the cosine of the zenith angle. Since transmittance decreases exponentially with increasing optical depth, as the sun approaches the horizon there comes a point when absorption dominates projection for the rest of the day. With a relatively high level of absorbers this can be a considerable portion of the late afternoon, and likewise of the early morning. Conversely, in the (hypothetical) total absence of absorption, the optical depth remains zero at all altitudes of the sun, that is, transmittance remains 1, and so only the projection effect applies.
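The interplay between the projection effect and Beer–Lambert attenuation can be made concrete with a short sketch (Python; the vertical optical depth of 0.2 is an assumed, illustrative value):

```python
import math

def surface_transmittance(tau_vertical, zenith_deg):
    """Beer-Lambert transmittance for a layered atmosphere: the slant
    optical depth scales as 1/cos(zenith), so transmittance is
    exp(-tau_vertical / cos(zenith))."""
    mu = math.cos(math.radians(zenith_deg))
    if mu <= 0.0:
        return 0.0                      # sun at or below the horizon
    return math.exp(-tau_vertical / mu)

for z in (0, 60, 80, 85):
    t = surface_transmittance(0.2, z)
    # Projection alone would give cos(z); attenuation multiplies it.
    print(f"zenith {z:2d} deg: transmittance {t:.3f}")
# -> 0.819, 0.670, 0.316, 0.101: absorption overtakes the projection
#    effect as the sun nears the horizon.
```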
Assessment and mapping of solar potential at the global, regional and country levels have been the subject of significant academic and commercial interest. One of the earliest attempts to carry out comprehensive mapping of solar potential for individual countries was the Solar & Wind Resource Assessment (SWERA) project, [ 41 ] funded by the United Nations Environment Program and carried out by the US National Renewable Energy Laboratory (NREL) . The National Aeronautics and Space Administration (NASA) provides data for global solar potential maps through the CERES experiment and the POWER project. Global mapping by many other similar institutes is available on the Global Atlas for Renewable Energy provided by the International Renewable Energy Agency . A number of commercial firms now exist to provide solar resource data to solar power developers, including 3E, Clean Power Research, SoDa Solar Radiation Data, Solargis, Vaisala (previously 3Tier), and Vortex, and these firms have often provided solar potential maps for free. The Global Solar Atlas was launched by the World Bank in January 2017, using data provided by Solargis, to provide a single source for high-quality solar data, maps, and GIS layers covering all countries. Solar radiation maps are built using databases derived from satellite imagery, for example using visible images from the Meteosat Prime satellite. A method is applied to the images to determine solar radiation; one well-validated satellite-to-irradiance model is the SUNY model. [ 42 ] The accuracy of this model is well evaluated. In general, solar irradiance maps are accurate, especially for global horizontal irradiance. Solar irradiation figures are used to plan the deployment of solar power systems . [ 43 ] In many countries, the figures can be obtained from an insolation map or from insolation tables that reflect data over the prior 30–50 years. Different solar power technologies are able to use different components of the total irradiation. While solar photovoltaic panels can convert both direct and diffuse irradiation to electricity, concentrated solar power can operate efficiently only with direct irradiation, making such systems suitable only in locations with relatively low cloud cover. Because solar collector panels are almost always mounted at an angle towards the Sun, insolation figures must be adjusted to find the amount of sunlight falling on the panel. This prevents estimates that are inaccurately low for winter and inaccurately high for summer. [ 44 ] It also means that the amount of sunlight falling on a solar panel at high latitude is not as low, compared to one at the equator, as would appear from considering only insolation on a horizontal surface. Horizontal insolation values range from 800 to 950 kWh/(kWp·y) in Norway to up to 2,900 kWh/(kWp·y) in Australia . But a properly tilted panel at 50° latitude receives 1860 kWh/m²/y, compared to 2370 at the equator. [ 45 ] In fact, under clear skies a solar panel placed horizontally at the north or south pole at midsummer receives more sunlight over 24 hours (cosine of the angle of incidence equal to sin(23.5°), or about 0.40) than a horizontal panel at the equator at the equinox (average cosine equal to 1/ π , or about 0.32). Photovoltaic panels are rated under standard conditions to determine the Wp (peak watts) rating, [ 46 ] which can then be used with insolation, adjusted by factors such as tilt, tracking and shading, to determine the expected output, as in the sketch below. [ 47 ]
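As a worked example of the Wp-based estimate just mentioned, here is a minimal sketch (Python; the 0.75 performance ratio and the 5 kWp array are assumptions for illustration, while the 1860 kWh/m²/y figure is the tilted-panel value quoted above):

```python
def annual_pv_output_kwh(rated_kwp, insolation_kwh_m2_yr,
                         performance_ratio=0.75):
    """Rough annual PV yield. Panels are rated at 1 kW/m^2 under standard
    test conditions, so plane-of-array insolation in kWh/m^2/yr acts as
    'peak sun hours' per year; the performance ratio lumps temperature,
    wiring, soiling and inverter losses together (assumed value)."""
    return rated_kwp * insolation_kwh_m2_yr * performance_ratio

# Hypothetical 5 kWp array on a properly tilted panel at 50 deg latitude:
print(annual_pv_output_kwh(5.0, 1860.0))   # ~6975 kWh per year
```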
In construction, insolation is an important consideration when designing a building for a particular site. [ 48 ] The projection effect can be used to design buildings that are cool in summer and warm in winter by providing vertical windows on the equator-facing side of the building (the south face in the northern hemisphere , or the north face in the southern hemisphere ): this maximizes insolation in the winter months, when the Sun is low in the sky, and minimizes it in the summer, when the Sun is high. (The Sun's north–south path through the sky spans 47° through the year.) In civil engineering and hydrology , numerical models of snowmelt runoff use observations of insolation. This permits estimation of the rate at which water is released from a melting snowpack. Field measurement is accomplished using a pyranometer . Irradiance plays a part in climate modeling and weather forecasting . A non-zero average global net radiation at the top of the atmosphere is indicative of Earth's thermal disequilibrium as imposed by climate forcing . The impact of the lower 2014 TSI value on climate models is unknown. A change of a few tenths of a percent in the absolute TSI level is typically considered to be of minimal consequence for climate simulations, but the new measurements require climate model parameter adjustments. Experiments with GISS Model 3 investigated the sensitivity of model performance to the TSI absolute value during the present and pre-industrial epochs, and describe, for example, how the irradiance reduction is partitioned between the atmosphere and surface and the effects on outgoing radiation. [ 31 ] Assessing the impact of long-term irradiance changes on climate requires greater instrument stability [ 31 ] combined with reliable global surface temperature observations to quantify climate response processes to radiative forcing on decadal time scales. The observed 0.1% irradiance increase imparts 0.22 W/m² of climate forcing, which suggests a transient climate response of 0.6 °C per W/m². This response is larger by a factor of 2 or more than that in the models assessed by the IPCC in 2008, possibly reflecting the models' treatment of ocean heat uptake. [ 31 ] Measuring a surface's capacity to reflect solar irradiance is essential to passive daytime radiative cooling , which has been proposed as a method of reversing local and global temperature increases associated with global warming. [ 49 ] [ 50 ] In order to measure the cooling power of a passive radiative cooling surface, the absorbed powers of both atmospheric and solar radiation must be quantified. On a clear day, solar irradiance can reach 1000 W/m², with a diffuse component between 50 and 100 W/m². The cooling power of a passive daytime radiative cooling surface has been estimated at ~100–150 W/m² on average. [ 51 ] Insolation is the primary variable affecting equilibrium temperature in spacecraft design and planetology . Solar activity and irradiance measurement is a concern for space travel. For example, the American space agency, NASA , launched its Solar Radiation and Climate Experiment (SORCE) satellite with Solar Irradiance Monitors . [ 2 ]
https://en.wikipedia.org/wiki/Solar_irradiance
A solar lamp , also known as a solar light or solar lantern , is a lighting system composed of an LED lamp , solar panels , a battery , a charge controller and possibly an inverter . The lamp operates on electricity from batteries , charged through the use of a solar photovoltaic panel. Solar-powered household lighting can replace other light sources like candles or kerosene lamps . Solar lamps have a lower operating cost than kerosene lamps because renewable energy from the sun is free, unlike fuel. In addition, solar lamps produce no indoor air pollution, unlike kerosene lamps. However, solar lamps generally have a higher initial cost and are weather dependent. Solar lamps for use in rural situations often have the capability of providing a supply of electricity for other devices, such as for charging cell phones . The costs of solar lamps have continued to fall in recent years as the components and lamps have been mass-produced in ever greater numbers. Some solar photovoltaic systems use monocrystalline or polycrystalline silicon panels, while newer technologies use thin-film solar cells . [ 1 ] Since modern solar cells were introduced at Bell Labs in 1954, [ 2 ] advances in solar-cell efficiency at converting light into electric power, together with modern manufacturing techniques and economies of scale, have led to international growth of photovoltaics . The first solar light patent was filed by Maurice E. Paradise in 1955. [ 3 ] As of 2016, LED lamps use only about 10% of the energy an incandescent lamp requires. [ 4 ] Efficient production of LED lamps has led to their increased adoption as an alternative to older electric lighting. Most solar panels are made out of single-crystalline silicon, a semiconductor material. [ 5 ] When light strikes a solar cell , an electric current is produced in the connected electric circuit; this is the photovoltaic effect . [ 5 ] Photovoltaic systems directly convert the energy of sunlight into electricity. Solar panels are made of layers of different materials, in order: glass, encapsulant, crystalline cells, back sheet, junction box and, lastly, frame. The encapsulant keeps out moisture and contaminants, which could otherwise cause problems. [ 6 ] A battery is usually housed within a metal or plastic case. Inside the case are electrodes, the cathode and anode, where chemical reactions occur. A separator between cathode and anode stops the electrodes from reacting together while allowing electrical charge to flow freely between the two. Lastly, the collector conducts the charge from the battery to the outside. [ 7 ] Batteries inside solar lamps usually use gel-electrolyte technology, which performs well in deep discharging and enables use across extreme ranges of temperature. [ citation needed ] They may also use lead-acid, nickel-metal-hydride, nickel-cadmium, or lithium chemistries. This part of the lamp stores energy from the solar panel and provides power at night, when no light energy is available. In general, the efficiency of photovoltaic energy conversion is limited for physical reasons. Around 24% of solar radiation, at long wavelengths, is not absorbed; 33% is lost as heat to the surroundings; and further losses amount to approximately 15–20%. Only about 23% is usefully absorbed, which is why a battery is a crucial part of a solar lamp. [ 8 ]
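The battery's central role can be illustrated with a back-of-the-envelope sizing calculation (a sketch only; the 3 W LED, 5 hours per night, 2 days of autonomy and 50% usable depth of discharge are assumed values, not figures from the text):

```python
def battery_capacity_wh(led_power_w, hours_per_night,
                        days_of_autonomy=2.0, usable_depth=0.5):
    """Battery energy a solar lamp needs: nightly consumption, times the
    number of cloudy days to bridge, divided by the usable fraction of
    capacity (limiting depth of discharge prolongs lead-acid/gel life)."""
    nightly_wh = led_power_w * hours_per_night
    return nightly_wh * days_of_autonomy / usable_depth

# Hypothetical 3 W LED run 5 h per night:
print(battery_capacity_wh(3.0, 5.0))   # 60 Wh, e.g. a 12 V / 5 Ah battery
```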
The charge controller manages the entire working system to protect the battery's state of charge. It ensures that, under any circumstances, including extreme weather conditions with large temperature differences, the battery does not overcharge or over-discharge, which would damage it further. [ citation needed ] This section also includes additional parts such as a light controller, time controller, sound and temperature compensation, lightning protection, reverse-polarity protection and AC transfer switches, which ensure that sensitive back-up loads work normally when an outage occurs. [ citation needed ] LED lights are used due to their high luminous efficiency and long life. Under the control of a DC charge controller, non-contact control automatically turns the light on at dark and off in the daytime. It is sometimes also combined with time controllers that set certain times for the light to switch on and off automatically. [ citation needed ] Solar lamps are easier for customers to install and maintain, as they do not require an electricity cable. Solar lamps can benefit owners through reduced maintenance costs and lower electricity bills. Solar lamps can also be used in areas where there is no electrical grid, or in remote areas that lack a reliable electricity supply. [ citation needed ] Over 1 billion people around the globe lack electric lighting, which contributes to continued poverty. [ citation needed ] Solar energy output is limited by weather and can be less effective in cloudy or wet conditions, or in winter. [ citation needed ] Households switching from kerosene lamps to solar lamps also avoid the health risks associated with kerosene emissions; kerosene often has negative impacts on human lungs. [ 9 ] The use of solar energy minimises the creation of pollution indoors, where kerosene has been linked to cases of health issues. However, photovoltaic panels are made with silicon and can contain toxic metals, including lead, that are difficult to dispose of. [ citation needed ] The use of solar lights improves education for students who live in households without electricity. When the nonprofit Unite to Light donated solar lamps to schools in a remote region of KwaZulu-Natal in South Africa, test scores and pass rates improved by over 30%. [ 10 ] The light gives students added time to study after dark. [ citation needed ] A 2017 experimental study in un-electrified areas of northern Bangladesh found that the use of solar lanterns decreased total household expenditure, increased children's home-study hours and increased school attendance; it did not, however, improve the children's educational achievement to any large extent. [ 11 ] These lights provide a convenient and cost-effective way to light streets at night for pedestrians and drivers, without the need for an AC electrical grid. They may have individual panels for each lamp of a system, or a large central solar panel and battery bank powering multiple lamps. Small solar lamps can be used by homeowners to add ambient lighting to their gardens. These lights come in many form factors, commonly pathway lights and spotlights. [ 12 ] In rural India, solar lamps, commonly called solar lanterns, using either LEDs or CFLs, are being used to replace kerosene lamps and other cheap lighting alternatives. Especially in areas where electricity is otherwise difficult to access, solar lamps are very useful and improve the quality of life. [ 13 ] Africa, which has the lowest electricity-access rate globally at 40%, [ 14 ] has benefited greatly through access to solar lamps and complete home lighting solutions.
In many regions of Africa, inadequate lighting after dusk poses safety risks. Solar lights illuminate dark streets and pathways, enhancing public safety and reducing accidents. [ 15 ] Marine settings are increasingly using LED solar lights as alternatives to conventional lighting. The remote nature of boating and sailing makes power hard to come by, and thus lends itself to self-sufficient technologies like solar boat lighting. [ citation needed ] American investors have been working towards developing a $10-per-unit solar lantern to replace kerosene lamps. [ 16 ] Solar home lighting solutions can be expensive to purchase. Off-grid solar organizations offer solar home lighting systems through innovative financial mechanisms such as the pay-as-you-go (PayGo) model, permitting consumers to power their entire home while paying in affordable monthly installments. Currently, over 40% of all sales of off-grid solar lighting products in Sub-Saharan Africa are conducted through PayGo, reaching almost 50% in Kenya and 65% in Rwanda. [ 14 ]
https://en.wikipedia.org/wiki/Solar_lamp
Solar longitude , commonly abbreviated as L s (pronounced "ell sub ess"), is the longitude of the Sun as seen from a given body, i.e. the position of the Sun on the celestial sphere along the orbital plane of that body. It is also an effective measure of the position of the Earth (or any other Sun-orbiting body) in its orbit around the Sun, [ 1 ] usually taken as zero at the moment of the vernal equinox . [ 2 ] Since it is based on how far the Earth has moved in its orbit since the equinox, it is a measure of what time of the tropical year (the year of seasons) the planet is in, but without the inaccuracies of a calendar date, which is perturbed by leap years and calendar imperfections. Its independence from a calendar also allows it to be used to tell the time of year on other planets, such as Mars. [ 3 ] Solar longitude does not increase linearly with time; the deviation is larger the greater the eccentricity of the orbit, as the dates of multiples of 90° of solar longitude on Mars in the mid-1950s illustrate [ 3 ] (a sketch of the computation follows below). Solar longitude is especially used in the field of meteor showers , because a particular meteor shower is caused by a stream of small particles very close to the elliptical orbit of a comet , or former comet. This means that the shower occurs when Earth reaches a particular point in its own orbit, designated by the solar longitude. For example, after passing the March equinox , the solar longitude (λ ☉ ) of the April Lyrids is 32°. The value of the solar longitude, like any ecliptic longitude, depends on the epoch being used. The solar longitude for a given meteor shower would therefore not be constant if the current date were used as the epoch. For this reason, a standard epoch is used, usually J2000. The Martian year can be divided into 12 Martian months of unequal duration, with the breakpoints being at solar longitudes that are multiples of 30°. [ 4 ]
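The non-linear growth of L s for an eccentric orbit can be sketched by solving Kepler's equation (Python; the Mars constants and the L s of perihelion of roughly 251° are approximate values, and the function is an illustration, not an ephemeris):

```python
import math

def solar_longitude(days_since_perihelion, period_days=686.98,
                    e=0.0934, ls_perihelion_deg=251.0):
    """Approximate Ls (degrees) from time since perihelion."""
    M = 2 * math.pi * days_since_perihelion / period_days  # mean anomaly
    E = M
    for _ in range(20):       # Newton's method for Kepler's equation
        E -= (E - e * math.sin(E) - M) / (1 - e * math.cos(E))
    nu = 2 * math.atan2(math.sqrt(1 + e) * math.sin(E / 2),
                        math.sqrt(1 - e) * math.cos(E / 2))  # true anomaly
    return (math.degrees(nu) + ls_perihelion_deg) % 360

# Ls advances ~6.3 deg in the 10 days after perihelion, but only
# ~4.4 deg in 10 days near aphelion (half an orbit later):
print(solar_longitude(10.0) - solar_longitude(0.0))
print(solar_longitude(353.5) - solar_longitude(343.5))
```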
https://en.wikipedia.org/wiki/Solar_longitude
The solar luminosity ( L ☉ ) is a unit of radiant flux ( power emitted in the form of photons ) conventionally used by astronomers to measure the luminosity of stars , galaxies and other celestial objects in terms of the output of the Sun . One nominal solar luminosity is defined by the International Astronomical Union to be 3.828 × 10²⁶ W. [ 2 ] This corresponds almost exactly to a bolometric absolute magnitude of +4.74. The Sun is a weakly variable star , and its actual luminosity therefore fluctuates. [ 3 ] The major fluctuation is the eleven-year solar cycle (sunspot cycle), which causes a quasi-periodic variation of about ±0.1%. Other variations over the last 200–300 years are thought to be much smaller than this. [ 4 ] Solar luminosity is related to solar irradiance (the solar constant ). Slow changes in the axial tilt of the planet and the shape of its orbit cause cyclical changes to the solar irradiance. The result is orbital forcing that causes the Milankovitch cycles , which determine Earthly glacial cycles. The mean irradiance at the top of the Earth's atmosphere is sometimes known as the solar constant, I ☉ . Irradiance is defined as power per unit area, so the solar luminosity (total power emitted by the Sun) is the irradiance received at the Earth (solar constant) multiplied by the area of the sphere whose radius is the mean distance between the Earth and the Sun:

{\displaystyle L_{\odot }=4\pi kI_{\odot }A^{2}}

where A is the unit distance (the value of the astronomical unit in metres ) and k is a constant (whose value is very close to one) that reflects the fact that the mean distance from the Earth to the Sun is not exactly one astronomical unit.
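Taking k = 1, the formula reproduces the nominal value from the measured solar constant (a minimal Python check):

```python
import math

AU = 1.495978707e11   # astronomical unit, m
I_sun = 1361.0        # solar constant at 1 AU, W/m^2

# L = 4 * pi * A^2 * I, with the correction factor k taken as exactly 1
L_sun = 4 * math.pi * AU**2 * I_sun
print(f"{L_sun:.3e} W")   # ~3.83e26 W, matching the IAU nominal 3.828e26 W
```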
https://en.wikipedia.org/wiki/Solar_luminosity
The solar mass ( M ☉ ) is a standard unit of mass in astronomy , equal to approximately 2 × 10³⁰ kg (2 nonillion kilograms in the US short scale). It is approximately equal to the mass of the Sun . It is often used to indicate the masses of other stars , as well as stellar clusters , nebulae , galaxies and black holes . More precisely, the mass of the Sun is about 1.988 × 10³⁰ kg. The solar mass is about 333,000 times the mass of Earth ( M E ), or 1047 times the mass of Jupiter ( M J ). The value of the gravitational constant was first derived from measurements that were made by Henry Cavendish in 1798 with a torsion balance . [ 2 ] The value he obtained differs by only 1% from the modern value, but was not as precise. [ 3 ] The diurnal parallax of the Sun was accurately measured during the transits of Venus in 1761 and 1769, [ 4 ] yielding a value of 9″ (9 arcseconds , compared to the present value of 8.794148″). From the value of the diurnal parallax, one can determine the distance to the Sun from the geometry of Earth. [ 5 ] [ 6 ] The first known estimate of the solar mass was by Isaac Newton . [ 7 ] In his work Principia (1687), he estimated that the ratio of the mass of Earth to the Sun was about 1/28,700. Later he determined that his value was based upon a faulty value for the solar parallax, which he had used to estimate the distance to the Sun. He corrected his estimated ratio to 1/169,282 in the third edition of the Principia . The current value for the solar parallax is smaller still, yielding an estimated mass ratio of 1/332,946. [ 8 ] As a unit of measurement, the solar mass came into use before the AU and the gravitational constant were precisely measured. This is because the relative mass of another planet in the Solar System , or the combined mass of two binary stars , can be calculated in units of solar mass directly from the orbital radius and orbital period of the planet or stars using Kepler's third law. The mass of the Sun cannot be measured directly, and is instead calculated from other measurable factors, using the equation for the orbital period of a small body orbiting a central mass. [ 9 ] Based on the length of the year, the distance from Earth to the Sun (an astronomical unit or AU), and the gravitational constant ( G ), the mass of the Sun is given by solving Kepler's third law : [ 10 ] [ 11 ]

{\displaystyle M_{\odot }={\frac {4\pi ^{2}\times (1\,\mathrm {AU} )^{3}}{G\times (1\,\mathrm {yr} )^{2}}}}

The value of G is difficult to measure and is only known with limited accuracy ( see Cavendish experiment ). The value of G times the mass of an object, called the standard gravitational parameter , is known for the Sun and several planets to a much higher accuracy than G alone. [ 12 ] As a result, the solar mass is used as the standard mass in the astronomical system of units .
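Evaluating Kepler's third law with standard values for G, the astronomical unit and the sidereal year reproduces the solar mass (a minimal Python check):

```python
import math

G = 6.67430e-11           # gravitational constant, m^3 kg^-1 s^-2
AU = 1.495978707e11       # astronomical unit, m
YEAR = 365.25636 * 86400  # sidereal year, s

# M = 4 pi^2 (1 AU)^3 / (G (1 yr)^2)
M_sun = 4 * math.pi**2 * AU**3 / (G * YEAR**2)
print(f"{M_sun:.4e} kg")  # ~1.99e30 kg
```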
The Sun is losing mass because of fusion reactions occurring within its core, leading to the emission of electromagnetic energy and neutrinos , and through the ejection of matter with the solar wind . It is expelling about (2–3) × 10⁻¹⁴ M ☉ /year. [ 13 ] The mass-loss rate will increase when the Sun enters the red giant stage, climbing to (7–9) × 10⁻¹⁴ M ☉ /year when it reaches the tip of the red-giant branch . This will rise to 10⁻⁶ M ☉ /year on the asymptotic giant branch , before peaking at a rate of 10⁻⁵ to 10⁻⁴ M ☉ /year as the Sun generates a planetary nebula . By the time the Sun becomes a degenerate white dwarf , it will have lost 46% of its starting mass. [ 14 ] The mass of the Sun has been decreasing since the time it formed. This occurs through two processes in nearly equal amounts. First, in the Sun's core , hydrogen is converted into helium through nuclear fusion , in particular the p–p chain , and this reaction converts some mass into energy in the form of gamma-ray photons. Most of this energy eventually radiates away from the Sun. Second, high-energy protons and electrons in the atmosphere of the Sun are ejected directly into outer space as the solar wind and coronal mass ejections . [ 15 ] The original mass of the Sun at the time it reached the main sequence remains uncertain. [ 16 ] The early Sun had much higher mass-loss rates than at present, and it may have lost anywhere from 1–7% of its natal mass over the course of its main-sequence lifetime. [ 17 ] One solar mass, M ☉ , can be converted to a variety of related units. [ 18 ] It is also frequently useful in general relativity to express mass in units of length or time, as in the sketch below. Estimates of the solar mass parameter ( G · M ☉ ) have been published by the IAU Division I Working Group. [ 19 ]
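For the general-relativity usage noted above, setting G = c = 1 turns a mass into a length GM/c² or a time GM/c³ (a small Python sketch):

```python
G = 6.67430e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8     # speed of light, m/s
M_sun = 1.98847e30   # solar mass, kg

length = G * M_sun / c**2   # solar mass expressed as a length
time = G * M_sun / c**3     # ... and as a time
print(f"{length / 1e3:.3f} km")  # ~1.477 km (half the Schwarzschild radius)
print(f"{time * 1e6:.3f} us")    # ~4.93 microseconds
```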
https://en.wikipedia.org/wiki/Solar_mass
A solar myth (from Latin solaris , "solar") is a mythologization of the Sun and its impact on earthly life; such myths are usually closely associated with lunar myths. Contrary to the assumptions of ethnographers of the 19th and early 20th centuries, "primitive", archaic religious and mythological systems show no particularly revered "cult of the Sun". In them, the Sun is perceived as a minor character or even an inanimate object. Among the archaic solar myths are myths about the emergence of the Sun and the destruction of superfluous suns, and about the disappearance and return of the Sun, common among African, Siberian, and Australian peoples. As Vyacheslav Ivanov suggests, twin myths about the Sun and the Moon and the motif of the "heavenly wedding" also appear archaic. In the most ancient versions (in particular, among the Siberian peoples), the Sun in this pair is a woman and the Moon a man. [ 1 ] According to the ethnographer Arthur Hocart , the cult of the Sun comes to the fore in cultures where the role of the "sacred king" is increasing. In Sumerian-Akkadian mythology, the sun god Shamash is still inferior in importance to the moon god, but is already becoming one of the most revered deities. Solar cults play an important role in ancient Egyptian religion. Among the Egyptian solar deities are Ra , Horus , Amun , and Khepri , the scarab god who rolls the Sun across the sky. In the 14th century BC the pharaoh Akhenaten attempted a radical religious reform and introduced in Egypt a single cult of the Aten (originally the personification of the solar disk). [ 2 ] Solar cults occupy an important place in Indo-European mythology, where they are associated with the cult of the horse and the image of the divine twins ( Ashvins , Dioscuri). According to Indo-European ideas, the Sun "travels" (or "is carried") across the sky on a horse-drawn cart, passing through the sky in a day. Examples of Indo-European solar deities are the ancient Indian Surya, the Greek Apollo and Helios , and the Roman Sol. Mithra , one of the main deities of late Zoroastrianism, has a solar origin. Various researchers associate the Slavic gods Dazhbog and Khors with the cult of the Sun; the lack of information on Slavic pre-Christian mythology does not allow these constructions to be unambiguously confirmed or refuted. Developed solar cults existed in South America and Mesoamerica ( Huitzilopochtli , Inti ). The supreme deity in the Japanese pantheon of Shinto is the sun goddess Amaterasu . The Azerbaijani historian Aydin Mammadov writes that beliefs and rituals associated with the cult of the Sun occupy a special place in the pre-Islamic spiritual culture of the Azerbaijani people. The cult of the Sun arose in ancient times out of the natural human need for sunlight and warmth, and it became firmly rooted in people's minds and in their mythologized thinking. In Azerbaijan, the cult of the daytime luminary (the Sun) experienced its heyday in the Bronze Age. According to many researchers, the dolmens and cromlechs known in Azerbaijan are also associated with the cult of the Sun. [ 3 ] Ethnographers of the mythological school of the 18th and 19th centuries attached exaggerated significance to solar myths, declaring various cult heroes and mythological characters to be personifications of the Sun even when they had no real connection with it. These exaggerations in turn prompted parodic essays which ostensibly demonstrated that figures such as Napoleon Bonaparte [ 4 ] and Max Müller [ 5 ] were solar myths.
https://en.wikipedia.org/wiki/Solar_myths
A solar neutrino is a neutrino originating from nuclear fusion in the Sun 's core , and is the most common type of neutrino passing through any source observed on Earth at any particular moment. [ citation needed ] Neutrinos are elementary particles with extremely small rest mass and neutral electric charge . They interact with matter only via the weak interaction and gravity , making their detection very difficult. This led to the now-resolved solar neutrino problem . Much is now known about solar neutrinos, but research in this field is ongoing. The timeline of solar neutrinos and their discovery dates back to the 1960s, beginning with the two astrophysicists John N. Bahcall and Raymond Davis Jr . The experiment, known as the Homestake experiment , named after the town in which it was conducted (Homestake, South Dakota ), aimed to count the solar neutrinos arriving at Earth. Bahcall, using a solar model he developed, concluded that the most effective way to study solar neutrinos would be via the chlorine-argon reaction. [ 1 ] Using his model, Bahcall calculated the number of neutrinos expected to arrive at Earth from the Sun. [ 2 ] Once the theoretical value was determined, the astrophysicists began pursuing experimental confirmation. Davis developed the idea of taking hundreds of thousands of liters of perchloroethylene , a chemical compound made up of carbon and chlorine , and searching for neutrinos with a chlorine-argon detector. [ 1 ] The process was conducted very far underground, hence the decision to conduct the experiment in Homestake, as the town was home to the Homestake Gold Mine. [ 1 ] By conducting the experiment deep underground, Bahcall and Davis were able to avoid cosmic-ray interactions that could affect the process and results. [ 2 ] The entire experiment lasted several years, as it was able to detect only a few chlorine-to-argon conversions each day, and the team did not obtain its first results until 1968. [ 2 ] To their surprise, the experimental value of the solar neutrinos present was less than 20% of the theoretical value Bahcall had calculated. [ 2 ] At the time, it was unknown whether there was an error with the experiment or with the calculations, or whether Bahcall and Davis had not accounted for all variables, but this discrepancy gave birth to what became known as the solar neutrino problem . Davis and Bahcall continued their work to understand where they might have gone wrong or what they were missing, along with other astrophysicists who did their own research on the subject. Many reviewed and redid Bahcall's calculations in the 1970s and 1980s, and although more data made the results more precise, the difference remained. [ 3 ] Davis even repeated his experiment, changing the sensitivity and other factors to make sure nothing was overlooked, but he found nothing, and the results still showed "missing" neutrinos. [ 3 ] By the end of the 1970s, it was widely accepted that the experimental data corresponded to about 39% of the calculated number of neutrinos. [ 2 ] In 1969, Bruno Pontecorvo , an Italian-Russian astrophysicist, suggested that neutrinos might not be fully understood: they might change in some way en route, so that the neutrinos released by the Sun were no longer recognizable as such, in the conventional sense, by the time they reached Earth, where the experiment was conducted.
[ 3 ] Pontecorvo's theory would account for the persistent discrepancy between the experimental and theoretical results. Pontecorvo was never able to prove his theory, but his thinking was on the right track. In 2002, results from an experiment conducted 2100 meters underground at the Sudbury Neutrino Observatory proved and supported Pontecorvo's theory, showing that neutrinos released from the Sun can in fact change form, or flavor, because they are not completely massless. [ 4 ] This discovery of neutrino oscillation solved the solar neutrino problem, nearly 40 years after Davis and Bahcall began studying solar neutrinos. The Super-Kamiokande is a 50,000-ton water Cherenkov detector 2,700 meters (8,900 ft) underground. [ 5 ] The primary uses for this detector in Japan, in addition to neutrino observation, are cosmic-ray observation and searching for proton decay. In 1998, the Super-Kamiokande was the site of the Super-Kamiokande experiment, which led to the discovery of neutrino oscillation, the process by which neutrinos change their flavor, to electron, muon or tau. The Super-Kamiokande experiment began in 1996 and is still active. [ 6 ] In the experiment, the detector spots neutrinos by analyzing the blue Cherenkov light emitted by electrons knocked out of water molecules by neutrinos. [ 7 ] When this blue light is detected, it can be inferred that a neutrino is present, and it is counted. The Sudbury Neutrino Observatory (SNO), a 2,100 m (6,900 ft) underground observatory in Sudbury , Canada, is the other site where neutrino-oscillation research was taking place in the late 1990s and early 2000s. The results from experiments at this observatory, along with those at Super-Kamiokande, are what helped solve the solar neutrino problem. The SNO is a heavy-water Cherenkov detector designed to work in the same way as the Super-Kamiokande: neutrinos reacting with the heavy water produce blue Cherenkov light, signaling the detection of neutrinos to researchers and observers. [ 8 ] The Borexino detector is located at the Laboratori Nazionali del Gran Sasso , Italy. [ 9 ] Borexino is an actively used detector, and experiments are ongoing at the site. The goal of the Borexino experiment is to measure low-energy (typically below 1 MeV) solar neutrinos in real time. [ 9 ] The detector is a complex structure consisting of photomultipliers, electronics, and calibration systems, making it equipped to take proper measurements of low-energy solar neutrinos. [ 9 ] Photomultipliers are used as the detection device in this system because they can detect light from extremely weak signals. [ 10 ] Solar neutrinos can provide direct insight into the core of the Sun, because that is where they originate. [ 1 ] Solar neutrinos leaving the Sun's core reach Earth before light does, because they do not interact with any other particle or subatomic particle along their path, while light ( photons ) bounces around from particle to particle. [ 1 ] The Borexino experiment used this phenomenon to discover that the Sun releases the same amount of energy currently as it did 100,000 years ago. [ 1 ] Solar neutrinos are produced in the core of the Sun through various nuclear fusion reactions, each of which occurs at a particular rate and leads to its own spectrum of neutrino energies. Details of the more prominent of these reactions are described below.
The main contribution comes from the proton–proton chain . The reaction is:

{\displaystyle p+p\rightarrow {}^{2}\mathrm {H} +e^{+}+\nu _{e}}

or in words: two protons fuse into a deuteron, a positron and an electron neutrino. Of all solar neutrinos, approximately 91% are produced from this reaction. [ 11 ] The deuteron will then fuse with another proton to create a ³He nucleus and a gamma ray:

{\displaystyle {}^{2}\mathrm {H} +p\rightarrow {}^{3}\mathrm {He} +\gamma }

The isotope ⁴He can be produced using the ³He from the previous reaction:

{\displaystyle {}^{3}\mathrm {He} +{}^{3}\mathrm {He} \rightarrow {}^{4}\mathrm {He} +2p}

With both helium-3 and helium-4 now in the environment, one helium nucleus of each weight can fuse to produce beryllium:

{\displaystyle {}^{3}\mathrm {He} +{}^{4}\mathrm {He} \rightarrow {}^{7}\mathrm {Be} +\gamma }

Beryllium-7 can follow two different paths from this stage: it could capture an electron, producing the more stable lithium-7 nucleus and an electron neutrino, or alternatively it could capture one of the abundant protons, creating boron-8 . The first reaction, via lithium-7, is:

{\displaystyle {}^{7}\mathrm {Be} +e^{-}\rightarrow {}^{7}\mathrm {Li} +\nu _{e}}

This lithium-yielding reaction produces approximately 7% of the solar neutrinos. [ 11 ] The resulting lithium-7 later combines with a proton to produce two nuclei of helium-4. The alternative reaction is proton capture, which produces boron-8; the boron-8 then beta-plus decays into beryllium-8:

{\displaystyle {}^{7}\mathrm {Be} +p\rightarrow {}^{8}\mathrm {B} +\gamma ,\qquad {}^{8}\mathrm {B} \rightarrow {}^{8}\mathrm {Be} ^{*}+e^{+}+\nu _{e}}

This alternative boron-yielding reaction produces about 0.02% of the solar neutrinos; although these are so few that they would conventionally be neglected, they stand out because of their higher average energies. The asterisk (*) on the beryllium-8 nucleus indicates that it is in an excited, unstable state. The excited beryllium-8 nucleus then splits into two helium-4 nuclei: [ 12 ]

{\displaystyle {}^{8}\mathrm {Be} ^{*}\rightarrow 2\,{}^{4}\mathrm {He} }

The highest flux of solar neutrinos comes directly from the proton–proton interaction; these have low energies, up to 400 keV. There are also several other significant production mechanisms, with energies up to 18 MeV. [ 13 ] At Earth, the solar neutrino flux is around 7×10¹⁰ particles·cm⁻²·s⁻¹. [ 14 ]
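The quoted flux can be checked with a back-of-the-envelope estimate (Python; the simplification that essentially all of the Sun's luminosity comes from pp-chain completions, each releasing roughly 26 MeV of radiated energy and two neutrinos, is a standard textbook assumption, not a figure from the text):

```python
import math

L_sun = 3.828e26        # solar luminosity, W
AU = 1.495978707e11     # Sun-Earth distance, m
MeV = 1.602176634e-13   # joules per MeV

# Each net 4p -> He-4 conversion radiates ~26 MeV and emits 2 neutrinos;
# spread those neutrinos over a sphere of radius 1 AU.
fusions_per_s = L_sun / (26.0 * MeV)
flux_m2 = 2 * fusions_per_s / (4 * math.pi * AU**2)
print(f"{flux_m2 / 1e4:.1e} per cm^2 per s")   # ~6.5e10, vs ~7e10 quoted
```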
In 2012, the Borexino collaboration reported detecting low-energy neutrinos from the proton–electron–proton (pep) reaction, which produces about 1 in 400 deuterium nuclei in the Sun. [ 17 ] [ 18 ] The detector contained 100 metric tons of liquid and saw on average 3 events each day (due to ¹¹C production) from this relatively uncommon thermonuclear reaction. In 2014, Borexino reported a successful direct detection of neutrinos from the pp reaction at a rate of 144±33 per day, consistent with the predicted rate of 131±2 per day expected from the standard solar model prediction that the pp reaction generates 99% of the Sun's luminosity, combined with the collaboration's analysis of the detector's efficiency. [ 19 ] [ 20 ] In 2020, Borexino reported the first detection of CNO cycle neutrinos from deep within the solar core. [ 21 ] Notably, Borexino measured neutrinos of several energies; in this manner it demonstrated experimentally, for the first time, the pattern of solar neutrino oscillations predicted by the theory.

Neutrinos can trigger nuclear reactions. By looking at ancient ores of various ages that have been exposed to solar neutrinos over geologic time, it may be possible to infer the luminosity of the Sun over time, [ 22 ] which, according to the standard solar model, has changed over the eons as the (presently) inert byproduct helium has accumulated in its core.

Wolfgang Pauli was the first to suggest, in 1930, that a particle such as the neutrino might exist in our universe. He believed such a particle to be completely massless, [ 23 ] and this remained the prevailing belief in the astrophysics community until the solar neutrino problem was solved. [ citation needed ] Frederick Reines, from the University of California at Irvine, and Clyde Cowan were the first physicists to detect neutrinos, in 1956. Reines was awarded a share of the 1995 Nobel Prize in Physics for this work. [ 24 ]

Raymond Davis and John Bahcall were the pioneers of solar neutrino studies. While Bahcall never won a Nobel Prize, Davis, along with Masatoshi Koshiba, won the Nobel Prize in Physics in 2002 for their contributions to solving the solar neutrino problem. Pontecorvo, the first to suggest that neutrinos have some mass and can oscillate, never received a Nobel Prize for his contributions, owing to his death in 1993. [ speculation? ] Arthur B. McDonald, a Canadian physicist, was a key contributor to building the Sudbury Neutrino Observatory (SNO) in the mid-1980s and later became the director of the SNO and leader of the team that solved the solar neutrino problem. [ 23 ] McDonald and Japanese physicist Takaaki Kajita shared a Nobel Prize in 2015 for their work on the discovery of neutrino oscillation. [ 23 ]

The critical issue of the solar neutrino problem, which many astrophysicists studied and attempted to solve in the late 1900s and early 2000s, is settled, but unique and novel research continues in this field of astrophysics. One study, published in 2017, aimed to determine the solar neutrino and antineutrino flux at extremely low energies (the keV range). [ 25 ] Processes at these low energies carry vital information about the solar metallicity. [ 25 ] Solar metallicity is the measure of the elements present in the star that are heavier than hydrogen and helium; in this field the reference element is typically iron. [ 26 ] The results from this research yielded findings significantly different from those of past research in terms of the overall flux spectrum. [ 25 ]
The technology to put these findings to the test does not yet exist. [ 25 ] Another study, published in 2017, searched for the solar neutrino effective magnetic moment. [ 27 ] The search used exposure data from the Borexino experiment's second phase, covering 1291.5 days (3.54 years). [ 27 ] The results showed that the shape of the electron recoil spectrum was as expected, with no major deviations from it. [ 27 ]
https://en.wikipedia.org/wiki/Solar_neutrino
The solar neutrino problem concerned a large discrepancy between the flux of solar neutrinos as predicted from the Sun's luminosity and as measured directly. The discrepancy was first observed in the mid-1960s and was resolved around 2002.

The flux of neutrinos at Earth is several tens of billions per square centimetre per second, mostly from the Sun's core. They are nevertheless difficult to detect, because they interact very weakly with matter, typically traversing the whole Earth unimpeded. Of the three types (flavors) of neutrinos known in the Standard Model of particle physics, the Sun produces only electron neutrinos. When neutrino detectors became sensitive enough to measure the flow of electron neutrinos from the Sun, the number detected was much lower than predicted. In various experiments, the deficit was between one half and two thirds.

Particle physicists knew that a mechanism, discussed in 1957 by Bruno Pontecorvo, could explain the deficit in electron neutrinos. [ 1 ] However, they hesitated to accept it for various reasons, including the fact that it required a modification of the accepted Standard Model. Researchers first looked to the solar model for adjustment, but this was ruled out. Today it is accepted that the neutrinos produced in the Sun are not massless particles as predicted by the Standard Model, but rather a superposition of defined-mass eigenstates in different (complex) proportions. That allows a neutrino produced as a pure electron neutrino to change during propagation into a mixture of electron, muon, and tau neutrinos, with a reduced probability of being detected by a detector sensitive only to electron neutrinos. Several neutrino detectors sensitive to different flavors, energies, and travel distances contributed to our present knowledge of neutrinos. In 2002 and 2015, a total of four researchers associated with some of these detectors were awarded the Nobel Prize in Physics.

The Sun performs nuclear fusion via the proton–proton chain reaction, which converts four protons into alpha particles, neutrinos, positrons, and energy (a rough mass–energy bookkeeping for this conversion is sketched at the end of this overview). This energy is released in the form of electromagnetic radiation, as gamma rays, as well as in the form of the kinetic energy of both the charged particles and the neutrinos. The neutrinos travel from the Sun's core to Earth without any appreciable absorption by the Sun's outer layers.

In the late 1960s, Ray Davis and John N. Bahcall's Homestake Experiment was the first to measure the flux of neutrinos from the Sun and detect a deficit. The experiment used a chlorine-based detector. Many subsequent radiochemical and water Cherenkov detectors confirmed the deficit, including the Kamioka Observatory and the Sudbury Neutrino Observatory. The expected number of solar neutrinos was computed using the standard solar model, which Bahcall had helped establish and which gives a detailed account of the Sun's internal operation.

In 2002, Ray Davis and Masatoshi Koshiba won part of the Nobel Prize in Physics for experimental work which found the number of solar neutrinos to be around a third of the number predicted by the standard solar model. [ 2 ] In recognition of the firm evidence provided by the 1998 and 2001 experiments "for neutrino oscillation", Takaaki Kajita from the Super-Kamiokande Observatory and Arthur McDonald from the Sudbury Neutrino Observatory (SNO) were awarded the 2015 Nobel Prize in Physics. [ 3 ] [ 4 ] The Nobel Committee for Physics, however, erred in mentioning neutrino oscillations in regard to the SNO experiment: for the high-energy solar neutrinos observed in that experiment, it is not neutrino oscillation but rather the Mikheyev–Smirnov–Wolfenstein effect that produced the observed results. [ 5 ] [ 6 ] Bruno Pontecorvo was not included in these Nobel Prizes, since he died in 1993.
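The energy budget of the proton–proton chain mentioned above can be checked from the particle masses. The sketch below uses standard mass values in MeV/c²; it is an illustrative bookkeeping exercise, not a calculation from the cited references.

```python
# Mass-energy bookkeeping for 4 p -> He-4 + 2 e+ + 2 nu_e (illustrative;
# standard particle masses in MeV/c^2, not values from the cited references).

M_PROTON = 938.272    # proton mass
M_HE4 = 3727.379      # helium-4 nucleus mass
M_ELECTRON = 0.511    # electron/positron mass

# Mass difference released as kinetic energy, photons, and neutrinos:
q_fusion = 4 * M_PROTON - M_HE4 - 2 * M_ELECTRON   # ~24.7 MeV

# Each of the two positrons annihilates with an ambient electron:
q_annihilation = 2 * (2 * M_ELECTRON)              # ~2.0 MeV

q_total = q_fusion + q_annihilation
print(f"total per completed chain: {q_total:.2f} MeV")  # ~26.7 MeV
# On average only a small share (~2%) is carried off by the two neutrinos;
# the rest eventually emerges as sunlight.
```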
Early attempts to explain the discrepancy proposed that the models of the Sun were wrong, i.e., that the temperature and pressure in the interior of the Sun were substantially different from what was believed. For example, since neutrinos measure the amount of current nuclear fusion, it was suggested that the nuclear processes in the core of the Sun might have temporarily shut down. Since it takes thousands of years for heat energy to move from the core to the surface of the Sun, this would not immediately be apparent.

Advances in helioseismology made it possible to infer the interior temperatures of the Sun; these results agreed with the well-established standard solar model. Detailed observations of the neutrino spectrum from more advanced neutrino observatories produced results which no adjustment of the solar model could accommodate: while the overall lower neutrino flux (which the Homestake experiment results found) required a reduction in the solar core temperature, details in the energy spectrum of the neutrinos required a higher core temperature. This happens because different nuclear reactions, whose rates depend on temperature in different ways, produce neutrinos with different energies. Any adjustment to the solar model worsened at least one aspect of the discrepancies. [ 7 ]

The solar neutrino problem was resolved with an improved understanding of the properties of neutrinos. According to the Standard Model of particle physics, there are three flavors of neutrinos: electron neutrinos, muon neutrinos, and tau neutrinos. Electron neutrinos are the ones produced in the Sun and the ones detected by the above-mentioned experiments, in particular the chlorine-detector Homestake Mine experiment. Through the 1970s, it was widely believed that neutrinos were massless and that their flavors were invariant. However, in 1968 Pontecorvo proposed that if neutrinos had mass, then they could change from one flavor to another. [ 8 ] Thus, the "missing" solar neutrinos could be electron neutrinos which changed into other flavors along the way to Earth, rendering them invisible to the detectors in the Homestake Mine and contemporary neutrino observatories.

The supernova 1987A indicated that neutrinos might have mass, because of the difference in the time of arrival of the neutrinos detected at Kamiokande and IMB. [ 9 ] However, because very few neutrino events were detected, it was difficult to draw any conclusions with certainty. If Kamiokande and IMB had had high-precision timers to measure the travel time of the neutrino burst through the Earth, they could have established more definitively whether or not neutrinos had mass: if neutrinos were massless, they would travel at the speed of light, whereas if they had mass, they would travel at velocities slightly less than that of light. Since the detectors were not intended for supernova neutrino detection, this could not be done.

Strong evidence for neutrino oscillation came in 1998 from the Super-Kamiokande collaboration in Japan.
[ 10 ] It produced observations consistent with muon neutrinos (produced in the upper atmosphere by cosmic rays) changing into tau neutrinos within the Earth: fewer atmospheric neutrinos were detected coming through the Earth than coming directly from above the detector. These observations concerned only muon neutrinos; no tau neutrinos were observed at Super-Kamiokande. The result made it more plausible that the deficit in the electron-flavor neutrinos observed in the (relatively low-energy) Homestake experiment also had to do with neutrino mass and flavor-changing.

One year later, the Sudbury Neutrino Observatory (SNO) started collecting data. That experiment aimed at the ⁸B solar neutrinos, which at around 10 MeV are not much affected by oscillation in either the Sun or the Earth. A large deficit is nevertheless expected due to the Mikheyev–Smirnov–Wolfenstein effect, as had been calculated by Alexei Smirnov in 1985. SNO's unique design employing a large quantity of heavy water as the detection medium was proposed by Herb Chen, also in 1985. [ 11 ] SNO observed electron neutrinos specifically, and all flavors of neutrinos collectively; hence the fraction of electron neutrinos could be calculated. [ 12 ] After extensive statistical analysis, the SNO collaboration determined that fraction to be about 34%, [ 13 ] in perfect agreement with prediction. The total number of detected ⁸B neutrinos also agrees with the then-rough predictions from the solar model. [ 14 ]
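The flavor change underlying these results can be illustrated with the standard two-flavor oscillation formula. The sketch below is simplified: solar ⁸B neutrinos are dominated by the MSW matter effect rather than vacuum oscillation, and the mixing parameters are rounded present-day estimates rather than values from the cited references. In vacuum, the probability that a neutrino born as an electron neutrino is still one after traveling L kilometres with energy E GeV is P = 1 − sin²(2θ)·sin²(1.27·Δm²·L/E), with Δm² in eV².

```python
import math

# Two-flavor neutrino oscillation sketch (illustrative; parameters are
# rounded present-day estimates, not values from the cited references).

SIN2_2THETA = 0.85    # assumed sin^2(2*theta_12)
DM2_EV2 = 7.5e-5      # assumed solar mass-squared splitting, eV^2

def survival_probability(L_km: float, E_GeV: float) -> float:
    """Vacuum P(nu_e -> nu_e) after L_km of flight at energy E_GeV."""
    return 1.0 - SIN2_2THETA * math.sin(1.27 * DM2_EV2 * L_km / E_GeV) ** 2

# For a ~0.3 MeV pp-like neutrino, the first oscillation maximum lies a
# few kilometres from the production point:
E = 3e-4                                          # 0.3 MeV, in GeV
L_max = (math.pi / 2) * E / (1.27 * DM2_EV2)      # km
print(f"first maximum at ~{L_max:.0f} km: P = {survival_probability(L_max, E):.2f}")

# Averaged over many oscillation lengths (the Sun-Earth distance and a
# spread of energies), vacuum survival tends to 1 - sin^2(2*theta)/2:
print(f"averaged vacuum survival: {1 - SIN2_2THETA / 2:.2f}")   # ~0.57

# For the high-energy 8B neutrinos SNO observed, the MSW matter effect
# instead drives the survival probability toward sin^2(theta_12):
sin2_theta = 0.5 * (1 - math.sqrt(1 - SIN2_2THETA))
print(f"MSW limit for 8B: {sin2_theta:.2f}")      # ~0.31, near SNO's ~34%
```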
https://en.wikipedia.org/wiki/Solar_neutrino_problem
Solar observation is the scientific endeavor of studying the Sun and its behavior and relation to the Earth and the remainder of the Solar System. Deliberate solar observation began thousands of years ago. That initial era of direct observation gave way to telescopes in the 1600s, followed by satellites in the twentieth century.

Stratigraphic data suggest that solar cycles have occurred for hundreds of millions of years, if not longer; measuring varves in Precambrian sedimentary rock has revealed repeating peaks in layer thickness corresponding to the cycle. It is possible that the early atmosphere on Earth was more sensitive to solar irradiation than today, so that greater glacial melting (and thicker sediment deposits) could have occurred during years with greater sunspot activity. [ 1 ] [ 2 ] This would presume annual layering; however, alternative (diurnal) explanations have also been proposed. [ 3 ] Analysis of tree rings has revealed a detailed picture of past solar cycles: dendrochronologically dated radiocarbon concentrations have allowed a reconstruction of sunspot activity covering 11,400 years. [ 4 ]

Solar activity and related events have been regularly recorded since the time of the Babylonians. In the 8th century BC, [ 5 ] they described solar eclipses and possibly predicted them from numerological rules. The earliest extant report of sunspots dates back to the Chinese Book of Changes, c. 800 BC. The phrases used in the book translate to "A dou is seen in the Sun" and "A mei is seen in the Sun", where dou and mei would be a darkening or obscuration (based on the context). Observations were regularly noted by Chinese and Korean astronomers at the behest of the emperors, rather than independently. [ 5 ] The first clear mention of a sunspot in Western literature, around 300 BC, was by the ancient Greek scholar Theophrastus, student of Plato and Aristotle and successor to the latter. [ 6 ] The Royal Frankish Annals record that, beginning on 17 March AD 807, a large sunspot was visible for eight days; the annalists incorrectly concluded that it was a transit of Mercury. [ 7 ] The earliest surviving record of deliberate sunspot observation dates from 364 BC, based on comments by Chinese astronomer Gan De in a star catalogue. [ 8 ] By 28 BC, Chinese astronomers were regularly recording sunspot observations in official imperial records. [ 9 ] A large sunspot was observed at the time of Charlemagne's death in AD 813. [ 10 ] Sunspot activity in 1129 was described by John of Worcester, and Averroes provided a description of sunspots later in the 12th century; [ 11 ] however, these observations were also misinterpreted as planetary transits. [ 12 ]

The first unambiguous mention of the solar corona was by Leo Diaconus, a Byzantine historian. He wrote of the 22 December 968 total eclipse, which he experienced in Constantinople (modern-day Istanbul, Turkey): [ 13 ]

at the fourth hour of the day ... darkness covered the earth and all the brightest stars shone forth. And it was possible to see the disk of the Sun, dull and unlit, and a dim and feeble glow like a narrow band shining in a circle around the edge of the disk.

The earliest known record of a sunspot drawing was made in 1128 by John of Worcester. [ 14 ]

In the third year of Lothar, emperor of the Romans, in the twenty-eighth year of King Henry of the English...on Saturday, 8 December, there appeared from the morning right up to the evening two black spheres against the sun.
Another early observation was of solar prominences, described in 1185 in the Russian Chronicle of Novgorod. [ 13 ]

In the evening there was an eclipse of the sun. It was getting very gloomy and stars were seen ... The sun became similar in appearance to the moon and from its horns came out somewhat like live embers.

Giordano Bruno and Johannes Kepler suggested the idea that the Sun rotated on its axis. [ 16 ]

Sunspots were first observed telescopically on 18 December 1610 (Gregorian calendar, not yet adopted in England) by English astronomer Thomas Harriot, as recorded in his notebooks. [ 17 ] On 9 March 1611 (Gregorian calendar, also not yet adopted in East Frisia) they were observed by Frisian medical student Johann Goldsmid (latinised name Johannes Fabricius), who subsequently teamed up with his father David Fabricius, a pastor and astronomer, to make further observations and to publish a description in a pamphlet in June 1611. [ 18 ] Father and son used camera obscura telescopy to get a better view of the solar disk, and like Harriot they made their observations shortly after sunrise and shortly before sunset. Johann was the first to realize that sunspots revealed solar rotation, but he died on 19 March 1616, aged 26, and his father died a year later. Several scientists, such as Johannes Kepler, Simon Marius, and Michael Maestlin, were aware of the Fabricius family's early sunspot work, and indeed Kepler repeatedly referred to it in his writings. However, like that of Harriot, their work was otherwise not well known.

Galileo Galilei almost certainly began telescopic sunspot observations around the same time as Harriot, given that he made his first telescope in 1609 on hearing of the Dutch patent application for the device, and that he had previously managed to make naked-eye observations of sunspots. He is also reported to have shown sunspots to astronomers in Rome, but we have no records of the dates. The records of telescopic sunspot observations that we do have from Galileo begin in 1612, by which time they are of unprecedented quality and detail, as he had by then developed the telescope's design and greatly increased its magnification. [ 19 ] Likewise, Christoph Scheiner had probably been observing the spots using an improved helioscope of his own design. Galileo and Scheiner, neither of whom knew of the work of Harriot or Fabricius, vied for the credit for the discovery. In 1613, in Letters on Sunspots, Galileo refuted Scheiner's 1612 claim that sunspots were planets inside Mercury's orbit, showing that sunspots were surface features. [ 18 ] [ 20 ]

Although the physical aspects of sunspots were not identified until the 20th century, observations continued. [ 21 ] Study was hampered during the 17th century by the low number of sunspots during what is now recognized as an extended period of low solar activity, known as the Maunder Minimum. By the 19th century, sunspot records were sufficient to allow researchers to infer periodic cycles in sunspot activity. In 1845, Henry and Alexander observed the Sun with a thermopile and determined that sunspots emitted less radiation than surrounding areas; higher-than-average emission was later observed from the solar faculae. [ 22 ]

Sunspots had some importance in the debate over the nature of the Solar System. They showed that the Sun rotated, and their comings and goings showed that the Sun changed, contrary to Aristotle, who had taught that all celestial bodies were perfect, unchanging spheres.
Sunspots were rarely recorded between 1650 and 1699. Later analysis revealed the problem to be a reduced number of sunspots, rather than observational lapses. Building upon Gustav Spörer's work, the wife-and-husband team of Annie Maunder and Edward Maunder suggested that the Sun had changed from a period in which sunspots all but disappeared to a renewal of sunspot cycles starting in about 1700. Adding to this understanding of the absence of solar cycles were observations of aurorae, which were absent at the same time, except at the very highest magnetic latitudes. [ 23 ] The lack of a solar corona during solar eclipses was also noted prior to 1715. [ 24 ] The period of low sunspot activity from 1645 to 1717 later became known as the "Maunder Minimum". [ 25 ] Observers such as Johannes Hevelius, Jean Picard and Jean Dominique Cassini confirmed this change. [ 20 ]

After the detection of infrared radiation by William Herschel in 1800 and of ultraviolet radiation by Johann Wilhelm Ritter, solar spectrometry began in 1817, when William Hyde Wollaston noticed that dark lines appeared in the solar spectrum when viewed through a glass prism. Joseph von Fraunhofer later independently discovered the lines, and they were named Fraunhofer lines after him. Other physicists discerned that properties of the solar atmosphere could be determined from them. Notable scientists who advanced spectroscopy were David Brewster, Gustav Kirchhoff, Robert Wilhelm Bunsen and Anders Jonas Ångström. [ 26 ]

The cyclic variation of the number of sunspots was first observed by Samuel Heinrich Schwabe between 1826 and 1843. [ 27 ] Rudolf Wolf studied the historical record in an attempt to establish a history of solar variations, though his data extended back only to 1755. In 1848 he also established a relative sunspot number formulation to compare the work of different astronomers using varying equipment and methodologies, now known as the Wolf (or Zürich) sunspot number. Gustav Spörer later suggested a 70-year period before 1716 in which sunspots were rarely observed as the reason for Wolf's inability to extend the cycles into the 17th century. Also in 1848, Joseph Henry projected an image of the Sun onto a screen and determined that sunspots were cooler than the surrounding surface. [ 28 ] Around 1852, Edward Sabine, Wolf, Jean-Alfred Gautier and Johann von Lamont independently found a link between the solar cycle and geomagnetic activity, sparking the first research into interactions between the Sun and the Earth. [ 29 ]

In the second half of the nineteenth century, Richard Carrington and Spörer independently noted the migration of sunspot activity towards the solar equator as the cycle progresses. This pattern is best visualized in the form of the so-called butterfly diagram, first constructed by Edward Walter Maunder and Annie Scott Dill Maunder in the early twentieth century (see graph). Images of the Sun are divided into latitudinal strips, and the monthly-averaged fractional surface covered by sunspots is calculated for each strip. This is plotted vertically as a color-coded bar, and the process is repeated month after month to produce a time-series diagram (a minimal version of this construction is sketched below). Half a century later, the father-and-son team of Harold and Horace Babcock showed that the solar surface is magnetized even outside of sunspots; that this weaker magnetic field is, to first order, a dipole; and that this dipole undergoes polarity reversals with the same period as the sunspot cycle (see graph below). These observations established that the solar cycle is a spatiotemporal magnetic process unfolding over the Sun as a whole.
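The butterfly-diagram construction described above can be made concrete with a short sketch. The data below are synthetic and the record layout is an assumption for illustration; real diagrams are built from catalogued sunspot observations.

```python
import numpy as np

# Minimal butterfly-diagram construction, following the procedure in the
# text: bin sunspot observations into latitude strips, accumulate the spot
# area per strip for each month, and stack the monthly columns into a
# time-latitude image. The "catalogue" here is synthetic, for illustration.

rng = np.random.default_rng(0)

# Synthetic catalogue of (month_index, latitude_deg, area_fraction) over
# three idealized 11-year cycles, with activity drifting toward the equator.
months = np.arange(3 * 132)                  # 3 cycles of 132 months
phase = (months % 132) / 132.0               # position within each cycle
records = []
for m, p in zip(months, phase):
    n_spots = rng.poisson(5 * np.sin(np.pi * p) ** 2)  # more spots mid-cycle
    for _ in range(n_spots):
        lat = (30 * (1 - p) + rng.normal(0, 3)) * rng.choice([-1, 1])
        records.append((m, lat, rng.uniform(0, 1e-4)))  # tiny area fractions

lat_edges = np.linspace(-50, 50, 51)         # 2-degree latitude strips
image = np.zeros((len(lat_edges) - 1, len(months)))
for m, lat, area in records:
    i = np.searchsorted(lat_edges, lat) - 1
    if 0 <= i < image.shape[0]:
        image[i, m] += area                  # accumulate area per strip/month

# `image` can now be displayed (e.g., with matplotlib's imshow), so that
# each column is one month's color-coded bar of coverage versus latitude,
# and the familiar butterfly wings emerge across each cycle.
```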
The Sun was photographed for the first time on 2 April 1845 by French physicists Louis Fizeau and Léon Foucault. Sunspots, as well as the limb darkening effect, are visible in their daguerreotypes. Photography assisted in the study of solar prominences, granulation and spectroscopy. Charles A. Young first captured a prominence in 1870. Solar eclipses were also photographed, with the most useful early images taken in 1851 by Berkowski and in 1860 by De la Rue's team in Spain. [ 29 ]

Early estimates of the Sun's rotation period varied between 25 and 28 days. The cause of this variation was determined independently in 1858 by Richard C. Carrington and Spörer. They discovered that the latitude with the most sunspots decreases from 40° to 5° during each cycle, and that sunspots at higher latitudes rotate more slowly. The Sun's rotation was thus shown to vary with latitude, implying that its outer layer must be fluid. In 1871 Hermann Vogel, and shortly thereafter Charles Young, confirmed this spectroscopically. Nils Dunér's spectroscopic observations in the 1880s showed a 30% difference in rotation between the Sun's faster equatorial regions and its slower polar regions. [ 29 ]

The first modern, clearly described accounts of a solar flare and of a coronal mass ejection date to 1859 and 1860 respectively. On 1 September 1859, Richard C. Carrington, while observing sunspots, saw patches of increasingly bright light within a group of sunspots, which then dimmed and moved across that area within a few minutes. This event, also reported by R. Hodgson, is a description of a solar flare. The widely viewed total solar eclipse of 18 July 1860 resulted in many drawings depicting an anomalous feature that corresponds with modern CME observations. [ 26 ]

For many centuries, the earthly effects of solar variation were noticed but not understood. For example, displays of auroral light have long been observed at high latitudes, but were not linked to the Sun. In 1724, George Graham reported that the needle of a magnetic compass was regularly deflected from magnetic north over the course of each day. This effect was eventually attributed to overhead electric currents flowing in the ionosphere and magnetosphere by Balfour Stewart in 1882, and confirmed by Arthur Schuster in 1889 from analysis of magnetic observatory data. In 1852, astronomer and British major general Edward Sabine showed that the probability of the occurrence of magnetic storms on Earth was correlated with the number of sunspots, thus demonstrating a novel solar-terrestrial interaction. In 1859, a great magnetic storm caused brilliant auroral displays and disrupted global telegraph operations. Richard Carrington correctly connected the storm with a solar flare that he had observed the day before in the vicinity of a large sunspot group, thus demonstrating that specific solar events could affect the Earth. Kristian Birkeland explained the physics of the aurora by creating artificial aurorae in his laboratory, and predicted the solar wind.

Early in the 20th century, interest in astrophysics grew in America, and multiple observatories were built. [ 30 ] : 320 Solar telescopes (and thus, solar observatories) were installed at Mount Wilson Observatory in California in 1904, [ 30 ] : 324 and in the 1930s at McMath–Hulbert Observatory.
[ 31 ] Interest also grew in other parts of the world, with the establishment of the Kodaikanal Solar Observatory in India at the turn of the century, [ 32 ] the Einsteinturm in Germany in 1924, [ 33 ] and the Solar Tower Telescope at the National Observatory of Japan in 1930. [ 34 ]

Around 1900, researchers began to explore connections between solar variations and Earth's weather. The Smithsonian Astrophysical Observatory (SAO) assigned Abbot and his team to detect changes in the radiation of the Sun. They began by inventing instruments to measure solar radiation. Later, when Abbot was head of the SAO, it established a solar station at Calama, Chile, to complement its data from Mount Wilson Observatory. He detected 27 harmonic periods within the 273-month Hale cycles, including 7-, 13-, and 39-month patterns. He looked for connections to weather by means such as matching opposing solar trends during a month to opposing urban temperature and precipitation trends. With the advent of dendrochronology, scientists such as Glock attempted to connect variation in tree growth to periodic solar variations and to infer long-term secular variability in the solar constant from similar variations in millennial-scale chronologies. [ 35 ]

Until the 1930s, little progress was made on understanding the Sun's corona, as it could only be viewed during infrequent total solar eclipses. Bernard Lyot's 1931 invention of the coronagraph (a telescope with an attachment to block out the direct light of the solar disk) allowed the corona to be studied in full daylight. [ 26 ]

American astronomer George Ellery Hale, as an MIT undergraduate, invented the spectroheliograph, with which he discovered solar vortices. In 1908, Hale used a modified spectroheliograph to show that the spectra of hydrogen exhibited the Zeeman effect whenever the area of view passed over a sunspot on the solar disc. This was the first indication that sunspots were basically magnetic phenomena, which appeared in opposite-polarity pairs. [ 36 ] Hale's subsequent work demonstrated a strong tendency for east-west alignment of magnetic polarities in sunspots, with mirror symmetry across the solar equator, and showed that the magnetic polarity of sunspots in each hemisphere switched orientation from one solar cycle to the next. [ 37 ] This systematic property of sunspot magnetic fields is now commonly referred to as the Hale–Nicholson law, [ 38 ] or in many cases simply Hale's laws.

The introduction of radio revealed periods of extreme static or noise. Severe radar jamming during a large solar event in 1942 led to the discovery of solar radio bursts. Many satellites in Earth orbit or in the heliosphere have deployed solar telescopes and instruments of various kinds for in situ measurements of particles and fields. Skylab, a notable large solar observational facility, grew out of the impetus of the International Geophysical Year campaign and the facilities of NASA. Other spacecraft, in an incomplete list, have included the OSO series, the Solar Maximum Mission, Yohkoh, SOHO, ACE, TRACE, and SDO, among many others; still other spacecraft (such as MESSENGER, Fermi, and NuSTAR) have contributed solar measurements through individual instruments.

Modulation of solar bolometric radiation by magnetically active regions, along with more subtle effects, was confirmed by satellite measurements of the total solar irradiance (TSI) by the ACRIM1 experiment on the Solar Maximum Mission (launched in 1980).
[ 39 ] The modulations were later confirmed in the results of the ERB experiment launched on the Nimbus 7 satellite in 1978. [ 40 ] Satellite observation was continued by ACRIM-3 and other satellites. [ 41 ] Direct irradiance measurements have been available during the last three cycles and are a composite of multiple observing satellites. [ 41 ] [ 42 ] However, the correlation between irradiance measurements and other proxies of solar activity makes it reasonable to estimate solar activity for earlier cycles. Most important among these proxies is the record of sunspot observations, which reaches back to about 1610. Solar radio emissions at 10.7 cm wavelength provide another proxy that can be measured from the ground, since the atmosphere is transparent to such radiation. Other proxy data, such as the abundance of cosmogenic isotopes, have been used to infer solar magnetic activity, and thus likely brightness, over several millennia. Total solar irradiance has been claimed to vary in ways that are not predicted by sunspot changes or radio emissions; these shifts may be the result of inaccurate satellite calibration. [ 43 ] [ 44 ] A long-term trend may exist in solar irradiance. [ 45 ]

The Sun was, until the 1990s, the only star whose surface had been resolved. [ 46 ] Other major achievements included improved understanding of several solar phenomena. [ 47 ] The most powerful flare observed by satellite instrumentation began on 4 November 2003 at 19:29 UTC, and saturated instruments for 11 minutes. Region 486 has been estimated to have produced an X-ray flux of X28. Holographic and visual observations indicate that significant activity continued on the far side of the Sun.

Sunspot and infrared spectral-line measurements made in the latter part of the first decade of the 2000s suggested that sunspot activity might again be disappearing, possibly leading to a new minimum. [ 48 ] From 2007 to 2009, sunspot levels were far below average. In 2008, the Sun was spot-free 73 percent of the time, extreme even for a solar minimum; only 1913 was more pronounced, with no sunspots for 85 percent of that year. The Sun continued to languish through mid-December 2009, when the largest group of sunspots to emerge for several years appeared. Even then, sunspot levels remained well below those of recent cycles. [ 49 ]

In 2006, NASA predicted that the next sunspot maximum would reach a sunspot number between 150 and 200 around the year 2011 (30–50% stronger than cycle 23), followed by a weak maximum around 2022. [ 50 ] [ 51 ] Instead, the sunspot cycle in 2010 was still at its minimum, when it should have been near its maximum, demonstrating the cycle's unusual weakness. [ 52 ] Cycle 24's minimum occurred around December 2008, and the next maximum was predicted to reach a sunspot number of 90 around May 2013. [ 53 ] The monthly mean sunspot number in the northern solar hemisphere peaked in November 2011, while the southern hemisphere appears to have peaked in February 2014, reaching a peak monthly mean of 102; subsequent months declined to around 70 (June 2014). [ 54 ] In October 2014, sunspot AR 12192 became the largest observed since 1990. [ 55 ] The flare that erupted from this sunspot was classified as an X3.1-class solar storm. [ 56 ] Independent scientists of the National Solar Observatory (NSO) and the Air Force Research Laboratory (AFRL) predicted in 2011 that Cycle 25 would be greatly reduced or might not happen at all. [ 57 ]
https://en.wikipedia.org/wiki/Solar_observation
Solar physics is the branch of astrophysics that specializes in the study of the Sun. It intersects with many disciplines of pure physics and astrophysics. Because the Sun is uniquely situated for close-range observing (other stars cannot be resolved with anything like the spatial or temporal resolution that the Sun can), there is a split between the related discipline of observational astrophysics (of distant stars) and observational solar physics. The study of solar physics is also important because it provides a "physical laboratory" for the study of plasma physics. [ 1 ]

The Babylonians kept records of solar eclipses, with the oldest record originating from the ancient city of Ugarit, in modern-day Syria. This record dates to about 1300 BC. [ 2 ] Ancient Chinese astronomers also observed solar phenomena (such as solar eclipses and visible sunspots) for the purpose of keeping track of calendars, which were based on lunar and solar cycles. Unfortunately, records kept before 720 BC are very vague and offer no useful information; after 720 BC, however, 37 solar eclipses were noted over the course of 240 years. [ 3 ]

Astronomical knowledge flourished in the Islamic world during medieval times. Many observatories were built in cities from Damascus to Baghdad, where detailed astronomical observations were taken. In particular, a few solar parameters were measured and detailed observations of the Sun were made. Solar observations were taken for the purpose of navigation, but mostly for timekeeping: Islam requires its followers to pray five times a day, at specific positions of the Sun in the sky, so accurate observations of the Sun and of its trajectory across the sky were needed. In the late 10th century, the Iranian astronomer Abu-Mahmud Khojandi built a massive observatory near Tehran, where he took accurate measurements of a series of meridian transits of the Sun, which he later used to calculate the obliquity of the ecliptic. [ 4 ]

Following the fall of the Western Roman Empire, Western Europe was cut off from all sources of ancient scientific knowledge, especially those written in Greek. This, along with de-urbanisation and diseases such as the Black Death, led to a decline in scientific knowledge in medieval Europe, especially in the early Middle Ages. During this period, observations of the Sun were made either in relation to the zodiac or to assist in building places of worship such as churches and cathedrals. [ 5 ]

In astronomy, the Renaissance period started with the work of Nicolaus Copernicus. He proposed that planets revolve around the Sun and not around the Earth, as was believed at the time; this model is known as the heliocentric model. [ 6 ] His work was later expanded by Johannes Kepler and Galileo Galilei. In particular, Galileo used his new telescope to look at the Sun and in 1610 discovered sunspots on its surface. In the autumn of 1611, Johannes Fabricius wrote the first book on sunspots, De Maculis in Sole Observatis ("On the spots observed in the Sun"). [ 7 ]

Modern-day solar physics is focused on understanding the many phenomena observed with the help of modern telescopes and satellites. Of particular interest are the structure of the solar photosphere, the coronal heating problem, and sunspots. [ citation needed ] The Solar Physics Division of the American Astronomical Society boasts 555 members (as of May 2007), compared to several thousand in the parent organization.
[ 8 ] A major thrust of current (2009) effort in the field of solar physics is the integrated understanding of the entire Solar System, including the Sun and its effects throughout interplanetary space within the heliosphere and on planets and planetary atmospheres. Studies of phenomena that affect multiple systems in the heliosphere, or that are considered to fit within a heliospheric context, are called heliophysics, a new coinage that entered usage in the early years of the current millennium.

Helios-A and Helios-B are a pair of spacecraft launched in December 1974 and January 1976 from Cape Canaveral as a joint venture between the German Aerospace Center and NASA. Their orbits approached the Sun more closely than Mercury's. The spacecraft included instruments to measure the solar wind, magnetic fields, cosmic rays, and interplanetary dust. Helios-A continued to transmit data until 1986. [ 9 ] [ 10 ]

The Solar and Heliospheric Observatory (SOHO) is a joint project between NASA and ESA that was launched in December 1995. It was launched to probe the interior of the Sun, to make observations of the solar wind and phenomena associated with it, and to investigate the outer layers of the Sun. [ 11 ] A publicly funded mission led by the Japan Aerospace Exploration Agency, the Hinode satellite, launched in 2006, consists of a coordinated set of optical, extreme-ultraviolet and X-ray instruments. These investigate the interaction between the solar corona and the Sun's magnetic field. [ 12 ] [ 13 ]

The Solar Dynamics Observatory (SDO) was launched by NASA in February 2010 from Cape Canaveral. The main goals of the mission are to understand how solar activity arises and how it affects life on Earth, by determining how the Sun's magnetic field is generated and structured and how the stored magnetic energy is converted and released into space. [ 14 ] The Parker Solar Probe (PSP) was launched in 2018 with the mission of making detailed observations of the outer solar corona. It has made the closest approaches to the Sun of any artificial object. [ 15 ]

The Advanced Technology Solar Telescope (ATST) is a solar telescope facility under construction in Maui. Twenty-two institutions are collaborating on the ATST project, with the main funding agency being the National Science Foundation. [ 16 ] Sunspot Solar Observatory (SSO) operates the Richard B. Dunn Solar Telescope (DST) on behalf of the NSF. The Big Bear Solar Observatory in California houses several telescopes, including the New Solar Telescope (NST), a 1.6-meter, clear-aperture, off-axis Gregorian telescope. The NST saw first light in December 2008. Until the ATST comes online, the NST remains the largest solar telescope in the world. The Big Bear Observatory is one of several facilities operated by the Center for Solar-Terrestrial Research at the New Jersey Institute of Technology (NJIT). [ 17 ]

The Extreme Ultraviolet Normal Incidence Spectrograph (EUNIS) is a two-channel imaging spectrograph that first flew in 2006. It observes the solar corona with high spectral resolution. So far, it has provided information on the nature of coronal bright points, cool transients and coronal loop arcades. Data from it have also helped calibrate SOHO and a few other telescopes. [ 18 ]
https://en.wikipedia.org/wiki/Solar_physics
Solar radiation modification (SRM), or solar geoengineering, is a group of large-scale approaches to reduce global warming by increasing the amount of sunlight that is reflected away from Earth and back to space. It is not intended to replace efforts to reduce greenhouse gas emissions, [ 1 ] but rather to complement them as a potential way to limit global warming. [ 2 ] : 1489 SRM is a form of geoengineering or climate engineering.

The most-researched SRM method is stratospheric aerosol injection (SAI), in which small reflective particles would be introduced into the upper atmosphere to reflect sunlight. [ 3 ] : 350 Other approaches include marine cloud brightening (MCB), which would increase the reflectivity of clouds over the oceans, and constructing a space sunshade or space mirror to reduce the amount of sunlight reaching Earth.

Climate models have consistently shown that SRM could reduce global warming and many effects of climate change, [ 4 ] [ 5 ] [ 6 ] including some potential climate tipping points. [ 7 ] However, its effects would vary by region and season, and the resulting climate would differ from one that had not experienced warming. Scientific understanding of these regional effects, including potential environmental risks and side effects, remains limited. [ 2 ] : 1491–1492

SRM also raises complex political, social, and ethical issues. Some worry that its development could reduce the urgency of cutting emissions. Its relatively low direct costs and technical feasibility suggest that it could, in theory, be deployed unilaterally, prompting concerns about international governance. Currently, no comprehensive global framework exists to regulate SRM research or deployment. Interest in SRM has grown in recent years, [ 8 ] driven by continued global warming and slow progress in emissions reductions. This has led to increased scientific research, policy debate, and public discussion, although SRM remains controversial. SRM is also known as sunlight reflection methods, solar climate engineering, albedo modification, and solar radiation management.

The interest in solar radiation modification arises from ongoing global warming, which poses increasing risks to both human and natural systems. [ 10 ] In principle, achieving net-zero emissions through emissions reductions and carbon dioxide removal (CDR) could halt global warming. However, emissions reductions have consistently fallen short of targets, and large-scale CDR may not be feasible. [ 11 ] [ 12 ] The 2024 UN Environment Programme (UNEP) Emissions Gap Report said that current policies would likely lead to 3.1°C of global warming, while countries' commitments and pledges to reduce emissions would likely lead to 1.9°C of warming. [ 13 ] : xviii

SRM aims to increase Earth's brightness (albedo) by modifying the atmosphere or surface to reflect more sunlight. A 1% increase in planetary albedo could reduce radiative forcing by 2.35 W/m², offsetting most of the warming from current greenhouse gas concentrations, and a 2% increase could counteract the warming effect of a doubling of atmospheric carbon dioxide. [ 4 ] : 625 (A simple zero-dimensional version of this albedo-forcing arithmetic is sketched below.) Unlike emissions reduction or CDR, SRM could reduce global temperatures within months of deployment. [ 14 ] : vii [ 5 ] : 14 This rapid effect means SRM could help limit the worst climate impacts while emissions reductions and CDR are scaled up. However, SRM would not reduce atmospheric carbon dioxide concentrations, meaning that ocean acidification and other climate change effects would persist.
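The relationship between albedo and radiative forcing referenced above can be illustrated with a zero-dimensional estimate. The sketch below uses standard round values and ignores atmospheric adjustments, so it will not reproduce the cited assessment figures exactly.

```python
# Zero-dimensional albedo-forcing sketch (illustrative; standard round
# values, ignoring atmospheric adjustments, so it will not reproduce the
# assessment figures cited in the text exactly).

S0 = 1361.0     # total solar irradiance, W/m^2
ALBEDO = 0.30   # approximate present-day planetary albedo

def forcing_from_albedo_change(delta_a: float) -> float:
    """Extra reflected sunlight (negative forcing, W/m^2) from raising
    the planetary albedo by delta_a, averaged over the sphere (S0/4)."""
    return (S0 / 4.0) * delta_a

# Sample albedo increases and the forcing they would offset:
for delta_a in (0.003, 0.011):
    print(f"albedo +{delta_a:.3f}: ~{forcing_from_albedo_change(delta_a):.1f} W/m^2")

# Albedo change needed to offset the canonical ~3.7 W/m^2 forcing from a
# doubling of CO2:
delta_needed = 3.7 * 4.0 / S0
print(f"to offset 2xCO2: delta albedo ~{delta_needed:.3f} "
      f"(~{100 * delta_needed / ALBEDO:.0f}% relative increase on 0.30)")
```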
The IPCC Sixth Assessment Report emphasizes that SRM is not a substitute for emissions reductions or CDR, stating: "There is high agreement in the literature that for addressing climate change risks, SRM cannot be the main policy response to climate change and is, at best, a supplement to achieving sustained net zero or net negative CO₂ emission levels globally." [ 2 ] : 1489

Global dimming provides evidence of SRM's potential efficacy, while also underscoring the urgency of addressing human-caused climate change. Industrial processes have increased the quantity of aerosols in the troposphere, the lower atmosphere. This has cooled the planet, offsetting some global warming, [ 4 ] : 855–857 both through the aerosols' own reflectivity (the basis for stratospheric aerosol injection) and by increasing clouds' reflectivity (the basis for marine cloud brightening). [ 4 ] : 860–861 As regulation has reduced tropospheric aerosols, global dimming has decreased and the planet has warmed at a faster rate. [ 4 ] : 851–853

In 1965, during the administration of U.S. President Lyndon B. Johnson, the President's Science Advisory Committee delivered Restoring the Quality of Our Environment, the first report to warn of the harmful effects of carbon dioxide emissions from fossil fuel use. To counteract global warming, the report mentioned "deliberately bringing about countervailing climatic changes," including "raising the albedo, or reflectivity, of the Earth". [ 15 ] [ 16 ] In 1974, Russian climatologist Mikhail Budyko suggested that if global warming ever became a serious threat, it could be countered by releasing aerosols into the stratosphere. He proposed that aircraft burning sulfur could generate aerosols that would reflect sunlight away from the Earth, cooling the planet. [ 17 ] [ 18 ] : 38

Along with carbon dioxide removal, SRM was discussed under the broader concept of geoengineering in a 1992 climate change report from the US National Academies. [ 19 ] The first modeled results of, and review article on, SRM were published in 2000. [ 20 ] [ 21 ] In 2006, Nobel laureate Paul Crutzen published an influential paper arguing that, given the lack of adequate greenhouse gas emissions reductions, research on the feasibility and environmental consequences of climate engineering should not be dismissed. [ 22 ] Several major reports have since evaluated the potential benefits and risks of SRM. In the late 2010s, SRM was increasingly distinguished from carbon dioxide removal, and "geoengineering" and similar terms were used less often. [ 24 ] [ 3 ] : 550

The atmospheric methods for SRM include stratospheric aerosol injection (SAI), marine cloud brightening (MCB), and cirrus cloud thinning (CCT). [ 4 ] : 624

For stratospheric aerosol injection (SAI), small particles would be introduced into the upper atmosphere to reflect sunlight and induce global dimming. Of all the proposed SRM methods, SAI has received the most sustained attention. The IPCC concluded in 2021 that SAI "is the most-researched SRM method, with high agreement that it could limit warming to below 1.5 °C." [ 3 ] : 350 This technique would replicate the natural cooling observed following large volcanic eruptions. [ 4 ] : 627 Sulfates are the most commonly proposed aerosol, due to their natural occurrence in volcanic eruptions. Alternative substances, including calcium carbonate and titanium dioxide, have also been suggested. [ 4 ] : 624 Custom-designed aircraft are considered the most feasible delivery method, with artillery and balloons occasionally proposed. [ 26 ]
SAI could produce up to 8 W/m² of negative radiative forcing. [ 4 ] : 624 The World Meteorological Organization's 2022 Scientific Assessment of Ozone Depletion stated that "Stratospheric Aerosol Injection (SAI) has the potential to limit the rise in global surface temperatures by increasing the concentrations of particles in the stratosphere... However, SAI comes with significant risks and can cause unintended consequences." [ 6 ] : 21 A key concern with SAI is its potential to delay the recovery of the ozone layer, depending on which aerosols are used. [ 6 ] : 21

Marine cloud brightening (MCB), also known as marine cloud seeding or marine cloud engineering, may be a way to make stratocumulus clouds over the sea brighter, thus reflecting more sunlight back into space in order to limit global warming. It is one of two such methods that might feasibly have a substantial climate impact, but it operates lower in the atmosphere than stratospheric aerosol injection. [ 28 ] It may be able to keep local areas from overheating. If used on a large scale, it might increase the Earth's albedo and so, in combination with greenhouse gas emissions reduction, limit climate change and its risks to people and the environment. If implemented, the cooling effect would be expected to be felt rapidly and to be reversible on fairly short time scales. However, technical barriers to large-scale marine cloud brightening remain, and it could not offset all the current warming. [ 29 ] [ 30 ] As clouds are complicated and poorly understood, the risks of marine cloud brightening remain unclear as of 2025.

Cirrus cloud thinning (CCT) involves seeding cirrus clouds to reduce their optical thickness and decrease cloud lifetime, allowing more outgoing longwave radiation to escape into space. [ 4 ] : 628 Cirrus clouds generally have a net warming effect, so by dispersing them through targeted interventions, CCT could enhance Earth's ability to radiate heat away. However, the method remains highly uncertain, as some studies suggest CCT could cause net warming rather than cooling due to complex cloud-aerosol interactions. [ 35 ] This method is often grouped with SRM despite working primarily by increasing outgoing radiation rather than reducing incoming shortwave radiation. [ 4 ] : 624

The IPCC describes surface-based albedo modification as "increase ocean albedo by creating microbubbles;... paint the roof of buildings white...; increase albedo of agriculture land, add reflective material to increase sea ice albedo." [ 4 ] : 624 Surface-based approaches would be localized and would have limited global impact. [ 4 ] : 624 While urban cooling could be achieved through reflective roofs and pavement, large-scale desert albedo modification could significantly alter regional precipitation patterns. [ 4 ] : 629 Covering glaciers with reflective materials has been proposed to slow melting, though feasibility and effectiveness at scale remain uncertain. [ 4 ] : 629

Space-based SRM involves deploying mirrors, reflective particles, or shading structures in low Earth orbit, geosynchronous orbit, or near the L1 Lagrange point between Earth and the Sun. Unlike atmospheric methods, space-based approaches would not directly interfere with Earth's climate systems. Historically, proposals have included orbiting mirrors, space dust clouds, and electromagnetically tethered reflectors.
The Royal Society (2009) and later assessments concluded that while space-based methods may be viable in the future, costs and deployment challenges make them infeasible for near-term climate intervention. [ 23 ] [ 26 ] Assessments conclude that space-based SRM is not feasible at reasonable cost. [ 26 ] : 12 The most recent IPCC Assessment Report (in 2021) did not consider these methods. [ 4 ]

SRM could have relatively low direct financial costs of deployment compared to the projected economic damages of unmitigated climate change. [ 2 ] : 1492, 1494 These costs could be on the order of billions to tens of billions of US dollars per degree of cooling. [ 5 ] : 36 Stratospheric aerosol injection (SAI) is the most studied method and has the most cost estimates: UNEP reported a cost of $18 billion per degree, [ 5 ] : 32 although individual studies have estimated that SAI deployment could cost between $5 billion and $10 billion per year. [ 36 ] MCB could cost, according to UNEP, $1 to 2 billion per W/m² of negative radiative forcing, [ 5 ] : 32 which implies $1.5 to 3 billion per degree. Cirrus cloud thinning (CCT) is even less studied, and no formal cost estimates exist. [ 5 ] : 32

Modelling studies have consistently concluded that moderate SRM use would significantly reduce many of the impacts of global warming, including changes to average and extreme temperature, extreme precipitation, Arctic and terrestrial ice, cyclone intensity and frequency, and the Atlantic Meridional Overturning Circulation. [ 4 ] : 625 SRM would take effect rapidly, unlike mitigation or carbon dioxide removal, making it the only known method that could lower global temperatures within months. [ 5 ] : 14

The IPCC Sixth Assessment Report states: "SRM could offset some of the effects of increasing greenhouse gases on global and regional climate, including the carbon and water cycles. However, there would be substantial residual or overcompensating climate change at the regional scales and seasonal timescales, and large uncertainties associated with aerosol–cloud–radiation interactions persist. The cooling caused by SRM would increase the global land and ocean CO₂ sinks, but this would not stop CO₂ from increasing in the atmosphere or affect the resulting ocean acidification under continued anthropogenic emissions." [ 4 ] : 69

SRM could partially offset agricultural losses arising from climate change. [ 26 ] : 66 The CO₂ fertilization effect, which enhances plant growth under high CO₂ levels, would continue under SRM. Some studies indicate that SRM might improve crop yields, while others suggest that reducing overall sunlight could slightly decrease agricultural productivity. [ 37 ] [ 38 ] Some studies suggest that SRM could prevent coral decline and mass bleaching events by reducing sea surface temperatures. [ 26 ] : 67

SRM would not perfectly reverse climate change effects. Differences in regional precipitation patterns, cloud cover, and atmospheric circulation could persist, with some regions experiencing overcompensation or residual warming and cooling effects. [ 4 ] : 625 This is because greenhouse gases exert their warming across the whole globe and throughout the year, whereas SRM reflects light more effectively at low latitudes and in the hemispheric summer (due to the sunlight's angle of incidence), and only during daytime. Deployment regimes might be able to compensate for some of this heterogeneity by changing and optimizing injection rates by latitude and season. [ 4 ] : 627
Models indicate that SRM would reverse warming-induced changes to precipitation more effectively than changes to temperature. [ 4 ] : 625–626 Therefore, using SRM to fully return global mean temperature to a preindustrial level would overcorrect for precipitation changes. This has led to claims that it would dry the planet or even cause drought, [ 39 ] [ citation needed ] but this would depend on the intensity (i.e., the radiative forcing) of SRM. Furthermore, soil moisture is more important for plants than average annual precipitation, and because SRM would reduce evaporation, it compensates more precisely for changes to soil moisture than for average annual precipitation. [ 4 ] : 627

The intensity of tropical monsoons is increased by climate change and would generally be decreased by SRM, especially SAI. [ 4 ] : 624 [ 40 ] : 458–459 A net reduction in tropical monsoon intensity might manifest at moderate use of SRM, although the effect of this on humans and ecosystems would to some degree be mitigated by the averted heat. [ 40 ] : 458–459 Ultimately the impact would depend on the particular implementation regime. [ 4 ] : 625

SRM would change the ratio between direct and diffuse solar radiation, affecting plant life and solar energy. Visible light, useful for photosynthesis, is reduced proportionally more than the infrared portion of the solar spectrum due to the mechanism of Mie scattering. [ 41 ] As a result, deployment of atmospheric SRM would affect the growth rates of plants, with the expected impact differing between canopy and subcanopy plants. [ 2 ] : 1491 [ 26 ] : 62–63, 66 Uniformly reduced net shortwave radiation would reduce solar power output, [ 26 ] : 61, 66 but the real-world impact would be complex.

SAI would affect stratospheric ozone, which protects organisms from harmful ultraviolet radiation, with the effect depending on the characteristics of deployment. [ 4 ] : 624, 627–628 [ 6 ] Sulfates, the most commonly proposed aerosol, would delay the ongoing recovery of stratospheric ozone. SRM does not directly influence atmospheric carbon dioxide concentration and thus does not reduce ocean acidification. [ 2 ] : 1492 While not a risk of SRM per se, this indicates a critical limitation of relying on it to the exclusion of emissions reduction.

While climate models indicate that SRM could reduce many global warming hazards, limitations in model accuracy, aerosol-cloud interactions, and the response of regional climate systems remain key uncertainties. [ 4 ] : 624–625 Therefore, much uncertainty remains about some of SRM's likely effects. [ 4 ] : 624–625 Most of the evidence regarding SRM's expected effects comes from climate models and volcanic eruptions. Some uncertainties in climate models (such as aerosol microphysics, stratospheric dynamics, and sub-grid-scale mixing) are particularly relevant to SRM and are a target for future research. [ 42 ] Volcanoes are an imperfect analogue, as they release material into the stratosphere in a single pulse, as opposed to sustained injection. [ 5 ] : 11

A 2023 UNEP report concluded that while an operational SRM deployment could reduce some climate hazards, it would also introduce new risks to ecosystems and human societies. [ 5 ] : 15 Ecosystem impacts are not yet well understood.
An EU report concluded: "The potential effects on societies and especially ecosystems of SAI and SD are identified as a critical knowledge gap, with studies emphasising that the impacts and risks would vary based on the implementation scenario, geographic region and specific characteristics of ecosystems. SAI implementation may prevent some of the consequences of climate change on societies and ecosystems but it could also have unintended, and potentially unexpected, impacts." [ 26 ] : 65 Terrestrial ecosystems could experience uncertain shifts in composition and plant productivity. [ 26 ] : 62, 65 SRM raises a variety of governance issues. The IPCC lists these potential objectives of SRM governance: (i) Guard against potential risks and harm; (ii) Enable appropriate research and development of scientific knowledge; (iii) Legitimise any future research or policymaking through active and informed public and expert community engagement; (iv) Ensure that SRM is considered only as a part of a broader, mitigation-centred portfolio of responses to climate change. [ 2 ] : 1494 A common concern regarding SRM research and potential deployment is that it might reduce political and social momentum for climate change mitigation , especially the reduction of greenhouse gas emissions. [ 2 ] : 1493 This hypothesis is often called " moral hazard ." The likelihood and significance of moral hazard effects remain uncertain and contested among experts. Some have argued that this is unlikely and, even if true, is not a compelling reason to forgo researching and evaluating SRM if it could greatly reduce global warming and its impacts, [ 43 ] while others see the prospect as a reason not to pursue SRM. [ 44 ] Empirical evidence from game-theoretic modeling, opinion surveys, and behavioral experiments is inconclusive. [ 26 ] : 99 A recent review article calls the evidence for mitigation displacement "weak" but notes that these research methods fail to account for "the precise concern that real political decisions under interest-group mobilization will cut emissions too little in the presence of SRM." [ 45 ] : 355 Another common concern is that SRM's high leverage, low apparent direct costs (at least for SAI), and technical feasibility, together with issues of power and jurisdiction, suggest that uni- or minilateral use is possible without international agreement or sufficient understanding of its expected effects. [ 2 ] : 1494–1495 A key issue is under what governance regime(s) such use could be controlled, monitored, and supervised. Yet leaders of countries and other actors may disagree as to whether, how, and to what degree SRM should be used. This could result in suboptimal deployments and create international tensions, especially if local harms were perceived. [ 2 ] : 1494 Experts diverge on whether uni- or minilateral use is likely and whether effective governance would be feasible, [ 46 ] [ 47 ] [ 48 ] and on whether nonstate actors could deploy SRM at a significant scale. [ 49 ] [ 50 ] This is further complicated in two important ways. First, since SRM technologies are still emerging, there is a concern that premature regulations might be either "too restrictive or too permissive," failing to adapt adequately to future political, technological, or geophysical developments. [ 2 ] : 1494 Second, because international law is generally consensual, any governance regime would need to particularly engage and secure cooperation from countries that perceive themselves as potential users of SRM.
[ 26 ] : 153 If SRM were masking significant warming and abruptly ceased without resumption within a short period (roughly a year), the climate would rapidly warm toward levels that would have existed without SRM, a phenomenon sometimes called "termination shock." [ 2 ] : 1493 A sudden and sustained termination of SRM in a world of high atmospheric greenhouse-gas concentrations would trigger rapid global temperature rise, intensified precipitation changes, sea level rise, land drying, weakened carbon sinks, and accelerated CO₂ accumulation. [ 4 ] : 629 The IPCC notes that a gradual phase-out of SRM combined with mitigation would reduce the impacts of SRM's termination. [ 4 ] : 629 Furthermore, some scholars argue that this risk might be manageable, as states would have strong incentives to resume deployment if necessary, and maintaining backup SRM infrastructure could enhance system resilience and provide a buffer against abrupt cessation. [ 51 ] [ 52 ] A large-scale deployment of SRM would likely require a multi-decade to century-long commitment to maintain its intended climate effects. [ 5 ] : 8–10 [ 26 ] : 14 This may be necessary to achieve sustained cooling, particularly as greenhouse-gas concentrations continue to rise due to continued net emissions and carbon dioxide's long atmospheric lifetime. There is currently no dedicated, formal law specifically governing SRM research, development, or deployment, though certain multilateral agreements, rules of customary international law, national and European laws, and nonbinding legal documents contain provisions that may be applicable to some SRM activities. [ 2 ] : 1493, 1495 The UN Framework Convention on Climate Change and its related treaties do not address SRM, though it could be considered within the framework of the Paris Agreement's goal to limit global warming to well below 2°C, with efforts to stay within 1.5°C. [ 26 ] : 163 While the UNFCCC is founded on the precautionary principle , [ 5 ] : 137 its specific implications for SRM remain uncertain. [ 26 ] : 163–167 The UN Convention on the Law of the Sea could support SRM research by permitting legitimate scientific activities and encouraging studies that assess SRM's effects on the marine environment. Its provisions to protect the marine environment may justify SRM research aimed at mitigating climate impacts on oceans, such as efforts to reduce warming or protect coral reefs. However, UNCLOS could also impose constraints on large-scale outdoor activities, particularly if activities under a state's jurisdiction risk polluting or harming marine ecosystems. Additionally, because SRM does not directly address ocean acidification, its alignment with UNCLOS' environmental protection objectives remains uncertain. [ 14 ] : 101–102 The Environmental Modification Convention is the only international treaty that directly regulates deliberate manipulation of natural processes with "widespread, long-lasting or severe effects" of a transboundary nature. SRM falls within ENMOD's definition of environmental modification techniques and is therefore subject to its prohibition on military or hostile use. At the same time, the treaty states that it "shall not hinder the use of environmental modification techniques for peaceful purposes." ENMOD also encourages the exchange of information and international cooperation on peaceful environmental modification, with parties "in a position to do so" expected to support scientific and economic collaboration.
[ 26 ] : 162 The Vienna Convention for the Protection of the Ozone Layer and its Montreal Protocol obligate parties to take measures to reduce or prevent human activities that could harmfully modify the ozone layer, as some forms of SAI might. Article 2 specifically requires states to cooperate to "protect human health and the environment against adverse effects resulting or likely to result from human activities which modify or are likely to modify the ozone layer." [ 26 ] : 162 The rule of prevention of transboundary harm under customary international law obligates states to prevent significant transboundary environmental harm and to reduce the risks thereof. This rule would be relevant to large-scale outdoor SRM activities if they were to present a risk of causing significant transboundary harm to human health, ecosystems, or the climate system. Under this rule, states must exercise due diligence to prevent significant transboundary environmental harm by conducting environmental impact assessments, notifying and consulting affected states, and cooperating in good faith to mitigate risks. Failure to meet these obligations could result in state responsibility for harm caused by activities within their jurisdiction. Scholars have debated whether SRM research and deployment should be held to different legal standards. Furthermore, international cooperation obligations may require states to collaborate on impact assessments, data sharing, and governance mechanisms. [ 26 ] : 156–161 The International Law Commission developed draft guidelines for the protection of the atmosphere. One guideline states, in its entirety: Activities aimed at intentional large-scale modification of the atmosphere should only be conducted with prudence and caution, and subject to any applicable rules of international law, including those relating to environmental impact assessment. [ 53 ] The Conference of the Parties to the Convention on Biological Diversity has made several decisions regarding "climate related geoengineering," which would include SRM. That of 2010 established "a comprehensive non-binding normative framework" [ 54 ] : 106 for "climate-related geoengineering activities that may affect biodiversity," requesting that such activities be justified by the need to gather specific scientific data, undergo prior environmental assessment, and be subject to effective regulatory oversight. [ 14 ] : 96–97 [ 26 ] : 161–162 The Parties' 2016 decision called for "more transdisciplinary research and sharing of knowledge... in order to better understand the impacts of climate-related geoengineering." [ 26 ] : 161–162 [ 55 ] As with international law, existing areas of national and subnational law—such as environmental regulation , tort liability, and intellectual property —would govern certain aspects of SRM. For example, in the US, [ 14 ] : 91–96 under the National Environmental Policy Act and similar state laws, federally sponsored or authorized outdoor SRM research may require environmental review if it poses a risk of significant physical impacts, though small-scale experiments are often exempt. Several federal regulatory statutes, including the Clean Air Act , Clean Water Act , Ocean Dumping Act , and Federal Aviation Administration rules, may apply to SRM field experiments depending on their design, particularly regarding emissions into air or water and the use of aircraft.
Outdoor experiments could also expose researchers to tort liability under state common law theories such as negligence , strict liability , or nuisance , though plaintiffs may face challenges in proving causation and demonstrating that potential harms outweigh societal benefits. Intellectual property law, particularly patent rights, may influence the development of SRM technologies by incentivizing innovation while potentially limiting access, although current patent activity in the field remains limited. The Ministry of Environment and Natural Resources of Mexico announced in 2023 that it would prohibit SRM experiments in that country. [ 56 ] In 2025, several US states have implemented or are considering prohibitions on "geoengineering." However, these are aimed not at SRM per se but at purported chemtrails or weather modification. [ 57 ] Groups of academics, research networks, and the broader SRM research community have developed multiple sets of principles or guidelines to help govern SRM activities. [ 14 ] : 106 [ 26 ] : 134 Of these, the Oxford Principles (which address SRM and carbon dioxide removal as "geoengineering") are the most prominent. [ 25 ] : 21 More recently, the American Geophysical Union issued an ethical framework for researching "climate intervention" (again, SRM and carbon dioxide removal). [ 59 ] [ 60 ] An article in MIT Technology Review stated in 2017: "Few serious scientists would argue that we should begin deploying geoengineering anytime soon." [ 61 ] Support for SRM research has come from scientists, NGOs, international organizations, and governments. The leading argument in support of SRM research is that there are large and immediate risks from climate change, and SRM is the only known way to quickly stop (or reverse) warming. Leading this effort have been some well-known climate scientists, some of whom have endorsed one or both public letters that support further SRM research. [ 62 ] [ 63 ] For example, in a 2025 publication, James Hansen and others said "Research on purposeful global cooling should be pursued, as recommended by the U.S. National Academy of Sciences". [ 43 ] A number of scientific and other large organizations have also called for further research on SRM. Two sign-on letters in 2023 from scientists and other experts have called for expanded "responsible SRM research". One seeks to "objectively evaluate the potential for SRM to reduce climate risks and impacts, to understand and minimize the risks of SRM approaches, and to identify the information required for governance". It was endorsed by "more than 110 physical and biological scientists studying climate and climate impacts about the role of physical sciences research." [ 73 ] Another called for "balance in research and assessment of solar radiation modification" and was endorsed by about 150 experts, mostly scientists. [ 74 ] Some nongovernmental organizations actively support SRM research and governance dialogues. Environmental Defense Fund is developing an SRM research program. [ 75 ] [ 76 ] The Degrees Initiative is a UK registered charity, established to build capacity in developing countries to evaluate SRM. [ 77 ] It works toward "changing the global environment in which SRM is evaluated, ensuring informed and confident representation from developing countries."
[ 77 ] A researcher from the German NGO Geoengineering Monitor argues that this charity is "imposing its research agenda onto the Global South" and is "predominantly funded by foundations run by technology and finance billionaires based in the Global North". [ 78 ] Operaatio Arktis is a Finnish youth climate organisation that supports research into solar radiation modification alongside mitigation and carbon sequestration as a potential means to preserve polar ice caps and prevent tipping points. [ 79 ] SilverLining is an American organization that advances SRM research as part of "climate interventions to reduce near-term climate risks and impacts." [ 80 ] It is funded by "philanthropic foundations and individual donors focused on climate change". [ 80 ] [ 81 ] One of their funders is the Quadrature Climate Foundation, which "plans to provide $40 million for work in this field over the next three years" (as of 2024). [ 82 ] The Alliance for Just Deliberation on Solar Geoengineering advances "just and inclusive deliberation" regarding SRM, in particular by engaging civil society organizations in the Global South and supporting a broader conversation on SRM governance. [ 83 ] The Carnegie Climate Governance Initiative worked to catalyze the governance of SRM and carbon dioxide removal, [ 84 ] although it ended operations in 2023. The Climate Overshoot Commission is a group of global, eminent, and independent figures. [ 85 ] It investigated and developed a comprehensive strategy to reduce climate risks . The Commission recommended additional research on SRM alongside a moratorium on deployment and large-scale outdoor experiments. It also concluded that "governance of SRM research should be expanded". [ 86 ] : 15 Campaigners have claimed that the fossil fuel lobby advocates for SRM research. [ 87 ] [ 88 ] However, researchers have pointed out the lack of evidence in support of this claim. [ 89 ] Opposition to SRM research and deployment has come from activist non-governmental organizations (NGOs), academics, [ 48 ] and U.S. Republican policymakers. [ 90 ] [ 91 ] [ 92 ] [ 93 ] Common concerns include that SRM could undermine efforts to reduce greenhouse gas emissions, prove difficult to govern at a global scale, or trigger international tensions and conflict. Opponents often emphasize that strong mitigation would also deliver public health and environmental co-benefits , such as reduced air pollution , which might be deprioritized if SRM gains traction. [ 94 ] The ETC Group , an NGO focused on the socioeconomic and ecological impacts of emerging technologies, was a pioneer in opposing SRM research. [ 95 ] It was later joined by the Heinrich Böll Foundation , [ 96 ] a German political organization affiliated with the Green Party , and the Center for International Environmental Law . [ 97 ] Climate Action Network , a global network of organizations promoting climate action, also opposes outdoor experiments and the use of SRM. [ 94 ] In 2021, researchers at Harvard University paused plans for a small-scale SRM field experiment in Sweden after opposition from the Saami Council , an Indigenous advocacy group. The Council objected to a test flight over their ancestral land. [ 98 ] [ 99 ] Although the flight would not have released any material, the Saami Council criticized the lack of consultation and expressed broader concerns about the ethics and risks of SRM.
A coalition of scholars and advocates has proposed an "International Non-Use Agreement on Solar Geoengineering," calling for governments to prohibit funding, experimentation, patenting , deployment, and institutional legitimization of SRM, which they argue is too risky, politically ungovernable, and likely to undermine mitigation. As of December 2024, their effort has been supported by nearly 540 academics [ 100 ] and 60 advocacy organizations. [ 101 ] Although the campaign describes the former as "scientists," [ 102 ] the large majority are social scientists. Their campaign was launched with an essay in an academic journal, Wiley Interdisciplinary Reviews (WIREs): Climate Change . [ 48 ] The initiative does not disclose its ultimate funding source. [ 103 ] The same journal later published two follow-up items. First, the publisher, Wiley , attached an editorial note to the essay acknowledging a conflict of interest in the peer review process. Mike Hulme , the journal's editor-in-chief who oversaw the review of the article for WIREs Climate Change, had co-authored an earlier version of the article, which had been rejected by another journal. Wiley concluded that this was a conflict of interest. Hulme resigned as editor-in-chief during the publisher's investigation. [ 104 ] Second, in a published response, a group of scholars argued that the "Non-Use Agreement" campaign misrepresents the state of research and exaggerates the risks of experimentation. They contend that such an agreement would stifle legitimate scientific inquiry, marginalize voices from developing countries, and hinder the responsible governance of emerging technologies. [ 105 ] Since 2024, and especially following the re-election of Donald Trump as U.S. President, lawmakers in at least 28 U.S. states have introduced or supported bills to prohibit SRM or related practices. [ 106 ] These efforts often target SRM [ 90 ] and weather modification [ 91 ] specifically. These bills are influenced by the chemtrails conspiracy theory . [ 92 ] In April 2024, Tennessee enacted such a bill, approved along party lines [ 107 ] and signed into law by Governor Bill Lee . [ 108 ] Members of the Trump administration have endorsed the effort. Robert F. Kennedy Jr. , Secretary of Health and Human Services in the Trump administration, posted on X : "24 States move to ban geoengineering our climate by dousing our citizens, our waterways and landscapes with toxins. This is a movement every MAHA ( Make America Healthy Again ) needs to support. HHS will do its part." [ 92 ] When the U.S. Environmental Protection Agency took action against the startup Make Sunsets (see below), EPA Administrator Lee Zeldin was quoted in the agency's press release, stating: "The idea that individuals, supported by venture capitalists, are putting criteria air pollutants into the air to sell 'cooling' credits shows how climate extremism has overtaken common sense." [ 109 ] Before 2019, total research funding worldwide remained modest, at less than 10 million US dollars annually. [ 110 ] Almost all research into SRM consisted of computer modeling or laboratory tests, [ 111 ] and there were calls for more research funding because the science was poorly understood. [ 112 ] [ 14 ] : 17 A 2022 study that investigated where funding for SRM research came from globally concluded that there are "close ties to mostly US financial and technological capital as well as a number of billionaire philanthropists".
[ 113 ] As of 2024, there is a Lighthouse Activity under the World Climate Research Programme called Research on Climate Intervention. This will include research on all possible climate interventions (another term for climate engineering): "large-scale Carbon Dioxide Removal (CDR; also known as Greenhouse Gas Removal, or Negative Emissions Technologies) and Solar Radiation Modification (SRM; also known as Solar Reflection Modification, Albedo Modification, or Radiative Forcing Management)". [ 72 ] In 2025, the UK government invested more than 60 million pounds in SRM research, including outdoor geoengineering experiments, making it one of the largest SRM research funders in the world. [ 114 ] Few countries have an explicit governmental position on SRM. Those that do, such as the United Kingdom [ 115 ] and Germany, [ 116 ] : 58 support some SRM research even if they do not see it as a current climate policy option. For example, the German Federal Government has an explicit position on SRM and stated in a 2023 strategy document on climate foreign policy: "Due to the uncertainties, implications and risks, the German Government is not currently considering solar radiation management (SRM) as a climate policy option". The document also stated: "Nonetheless, in accordance with the precautionary principle we will continue to analyse and assess the extensive scientific, technological, political, social and ethical risks and implications of SRM, in the context of technology-neutral basic research as distinguished from technology development for use at scale". [ 116 ] : 58 Some countries, such as the U.S., U.K., Argentina, Germany, China, Finland, Norway, and Japan, as well as the European Union, have funded SRM research. [ 110 ] NOAA in the United States spent $22 million from 2019 to 2022, with only a few outdoor tests carried out. [ 117 ] As of 2024, NOAA provides about $11 million a year through its solar geoengineering research program. [ 82 ] As of 2025, the federal US government does not have a policy on SRM. [ 118 ] In late 2024, the Advanced Research and Invention Agency (ARIA), a British funding agency, announced that research funds totaling 57 million pounds (about $75 million) would be made available to support projects exploring "Climate Cooling". [ 119 ] This includes outdoor experiments: "This programme aims to answer fundamental questions as to the practicality, measurability, controllability and possible (side-)effects of such approaches through indoor and (where necessary) small, controlled, outdoor experiments." [ 120 ] Successful applicants will be announced in 2025. [ 121 ] ARIA's programme was called a "'dangerous distraction' from cutting emissions" by some senior scientists. The programme, together with another programme worth 10 million pounds funded by the Natural Environment Research Council (NERC, which is also backed by the UK government), makes the UK "one of the biggest funders of geoengineering research in the world". [ 114 ] [ 122 ] There are also research activities on SRM that are funded by philanthropy . According to Bloomberg News , as of 2024 several American billionaires are funding research into SRM: "A growing number of Silicon Valley founders and investors are backing research into blocking the sun by spraying reflective particles high in the atmosphere or making clouds brighter."
[ 123 ] The article listed the following billionaires as notable geoengineering research supporters: Mike Schroepfer , Sam Altman , Matt Cohler , Rachel Pritzker , Bill Gates , and Dustin Moskovitz . [ 123 ] SRM research initiatives, or non-profit knowledge hubs , include, for example, SRM360, which is "supporting an informed, evidence-based discussion of sunlight reflection methods (SRM)". [ 124 ] Funding comes from the LAD Climate Fund. [ 125 ] [ 126 ] Another example is Reflective, which is "a philanthropically-funded initiative focused on sunlight reflection research and technology development". [ 127 ] It is funded "entirely by grants or donations from a number of leading philanthropies focused on addressing climate change": Outlier Projects, Navigation Fund, Astera Institute, Open Philanthropy , Crankstart, Matt Cohler, and Richard and Sabine Wood. [ 127 ] Make Sunsets [ 128 ] is a private startup that sells "cooling credits" for its small-scale SRM activities, claiming that each US$10 credit offsets the warming effect of one ton of carbon dioxide for a year. [ 129 ] The firm releases balloons containing helium and sulfur dioxide . Make Sunsets conducted some of its first activities in Mexico, prompting the Mexican government to announce its intention to prohibit SRM experiments within its borders. [ 130 ] Even those who advocate for more research into SRM have criticized Make Sunsets' undertaking. [ 131 ] In April 2025, the U.S. Environmental Protection Agency demanded information from the startup regarding its releases of sulfur dioxide into the atmosphere. [ 109 ] Overall, public opinion on SRM is nascent, ambivalent, and context-dependent, with greater support for research than for deployment. [ 26 ] : 100 Public awareness of SRM remains low globally, with 75–80% of respondents in recent multi-country surveys reporting little to no familiarity. [ 26 ] : 96 Despite this, social science research on public attitudes toward SRM is growing and diversifying, although the UK, US, and Germany still dominate the existing academic literature. [ 26 ] : 92 Public opinion in the Global South remains less well examined, though several studies thus far consistently find greater openness to SRM there, where climate impacts are perceived as more immediate. [ 26 ] : 100–101 Methodologically, research has shifted toward large-scale surveys, but concerns remain about the durability of preferences given low baseline knowledge. [ 26 ] : 98 Across studies, public views are shaped by values, perceived climate risk, and how SRM is framed. Common concerns include the fear of displacing mitigation, the unnaturalness of intervening in climate systems, justice and equity, and a desire to inform and consult with the public prior to use. [ 26 ] : 99–100 SRM is generally viewed less favorably than greenhouse-gas emissions reduction and carbon dioxide removal. [ 26 ] : 99 Europeans tend to be more averse, especially in central and northern countries (e.g. Germany, Austria, Switzerland), while southern European and Global South populations are more accepting, particularly when facing high climate vulnerability. [ 26 ] : 98 Some studies also highlight links between SRM and conspiracy theories, such as chemtrails , which can further complicate public understanding.
[ 26 ] : 100 The chemtrail conspiracy theory ( /ˈkɛmtreɪl/ ) is the erroneous [ 132 ] belief that long-lasting condensation trails left in the sky by high-flying aircraft are actually "chemtrails" consisting of chemical or biological agents , sprayed for nefarious purposes undisclosed to the general public. [ 133 ] Believers in this conspiracy theory say that while normal contrails dissipate relatively quickly, contrails that linger must contain additional substances. [ 134 ] [ 135 ] Those who subscribe to the theory speculate that the purpose of the chemical release may be solar radiation management , [ 134 ] weather modification , psychological manipulation , human population control , biological or chemical warfare, or testing of biological or chemical agents on a population, and that the trails are causing respiratory illnesses and other health problems. [ 133 ] [ 136 ]
https://en.wikipedia.org/wiki/Solar_radiation_modification
Solar radio emission refers to radio waves that are naturally produced by the Sun , primarily from the lower and upper layers of the atmosphere called the chromosphere and corona , respectively. The Sun produces radio emissions through four known mechanisms, each of which operates primarily by converting the energy of moving electrons into electromagnetic radiation . The four emission mechanisms are thermal bremsstrahlung (braking) emission, gyromagnetic emission, plasma emission, and electron- cyclotron maser emission. The first two are incoherent mechanisms, which means that they are the summation of radiation generated independently by many individual particles. These mechanisms are primarily responsible for the persistent "background" emissions that slowly vary as structures in the atmosphere evolve. The latter two processes are coherent mechanisms, which refers to special cases where radiation is efficiently produced at a particular set of frequencies. Coherent mechanisms can produce much larger brightness temperatures (intensities) and are primarily responsible for the intense spikes of radiation called solar radio bursts, which are byproducts of the same processes that lead to other forms of solar activity like solar flares and coronal mass ejections . Radio emission from the Sun was first reported in the scientific literature by Grote Reber in 1944. [ 1 ] These were observations of 160 MHz (about 1.9 m wavelength) emission emanating from the chromosphere . However, the earliest known observation was in 1942 during World War II by British radar operators who detected an intense low-frequency solar radio burst; that information was kept secret as potentially useful in evading enemy radar, but was later described in a scientific journal after the war. [ 2 ] One of the most significant discoveries from early solar radio astronomers such as Joseph Pawsey was that the Sun produces much more radio emission than expected from standard black body radiation . [ 3 ] The explanation for this was proposed by Vitaly Ginzburg in 1946, who suggested that thermal bremsstrahlung emission from a million-degree corona was responsible. [ 4 ] The existence of such extraordinarily high temperatures in the corona had previously been indicated by optical spectroscopy observations, but the idea remained controversial until it was later confirmed by the radio data. [ 5 ] Prior to 1950, observations were conducted mainly using antennas that recorded the intensity of the whole Sun at a single radio frequency. [ 6 ] Observers such as Ruby Payne-Scott and Paul Wild used simultaneous observations at numerous frequencies to find that the onset times of radio bursts varied depending on frequency, suggesting that radio bursts were related to disturbances that propagate outward, away from the Sun, through different layers of plasma with different densities. [ 7 ] These findings motivated the development of radiospectrographs that were capable of continuously observing the Sun over a range of frequencies. This type of observation is called a dynamic spectrum , and much of the terminology used to describe solar radio emission relates to features observed in dynamic spectra, such as the classification of solar radio bursts. [ 8 ] Examples of dynamic spectra are shown below in the radio burst section. Notable contemporary solar radiospectrographs include the Radio Solar Telescope Network , the e-CALLISTO network, and the WAVES instrument on-board the Wind spacecraft .
Radiospectrographs do not produce images, however, and so they cannot be used to locate features spatially. This can make it very difficult to understand where a specific component of the solar radio emission is coming from and how it relates to features seen at other wavelengths. Producing a radio image of the Sun requires an interferometer, which in radio astronomy means an array of many telescopes that operate together as a single telescope to produce an image. This technique is a sub-type of interferometry called aperture synthesis . Beginning in the 1950s, a number of simple interferometers were developed that could provide limited tracking of radio bursts. [ 6 ] This also included the invention of sea interferometry , which was used to associate radio activity with sunspots . [ 9 ] Routine imaging of the radio Sun began in 1967 with the commissioning of the Culgoora Radioheliograph, which operated until 1986. [ 10 ] A radioheliograph is simply an interferometer that is dedicated to observing the Sun. In addition to Culgoora, notable examples include the Clark Lake Radioheliograph, [ 11 ] Nançay Radioheliograph , Nobeyama Radioheliograph , Gauribidanur Radioheliograph , Siberian Radioheliograph , and Chinese Spectral Radioheliograph. [ 12 ] Additionally, interferometers that are used for other astrophysical observations can also be used to observe the Sun. General-purpose radio telescopes that also perform solar observations include the Very Large Array , Atacama Large Millimeter Array , Murchison Widefield Array , and Low-Frequency Array . The collage above shows antennas from several low-frequency radio telescopes used to observe the Sun. All of the processes described below produce radio frequencies that depend on the properties of the plasma where the radiation originates, particularly electron density and magnetic field strength. Two plasma physics parameters are particularly important in this context: the electron plasma frequency,

$$f_p = \frac{1}{2\pi}\sqrt{\frac{4\pi n_e e^2}{m_e}} \approx 8980\,\sqrt{n_e}\ \text{Hz} \qquad \text{(Equation 1)}$$

and the electron gyrofrequency,

$$f_B = \frac{e B}{2\pi m_e c} \approx 2.8\times10^{6}\,B\ \text{Hz} \qquad \text{(Equation 2)}$$

where $n_e$ is the electron density in cm⁻³, $B$ is the magnetic field strength in Gauss (G), $e$ is the electron charge , $m_e$ is the electron mass , and $c$ is the speed of light . The relative sizes of these two frequencies largely determine which emission mechanism will dominate in a particular environment. For example, high-frequency gyromagnetic emission dominates in the chromosphere, where the magnetic field strengths are comparatively large, whereas low-frequency thermal bremsstrahlung and plasma emission dominate in the corona, where the magnetic field strengths and densities are generally lower than in the chromosphere. [ 13 ] In the images below, the first four on the upper left are dominated by gyromagnetic emission from the chromosphere, transition region , and low corona, while the three images on the right are dominated by thermal bremsstrahlung emission from the corona, [ 14 ] with lower frequencies being generated at larger heights above the surface. Bremsstrahlung emission, from the German for "braking radiation", refers to electromagnetic waves produced when a charged particle accelerates and some of its kinetic energy is converted into radiation. [ 15 ] Thermal bremsstrahlung refers to radiation from a plasma in thermal equilibrium and is primarily driven by Coulomb collisions in which an electron is deflected by the electric field of an ion .
This is often referred to as free-free emission for a fully ionized plasma like the solar corona because it involves collisions of "free" particles, as opposed to electrons transitioning between bound states in an atom. This is the main source of quiescent background emission from the corona, where quiescent means outside of radio burst periods. [ 16 ] The radio frequency of bremsstrahlung emission is related to a plasma's electron density through the electron plasma frequency ($f_p$) from Equation 1 . [ 17 ] A plasma with a density $n_e$ can produce emission only at or below the corresponding $f_p$. [ 18 ] Density in the corona generally decreases with height above the visible "surface", or photosphere , meaning that lower-frequency emission is produced higher in the atmosphere, and the Sun appears larger at lower frequencies. This type of emission is most prominent below 300 MHz due to typical coronal densities, but particularly dense structures in the corona and chromosphere can generate bremsstrahlung emission with frequencies into the GHz range. [ 19 ] Gyromagnetic emission is also produced from the kinetic energy of a charged particle, generally an electron. However, in this case, an external magnetic field causes the particle's trajectory to exhibit a spiral gyromotion, resulting in a centripetal acceleration that in turn produces the electromagnetic waves . [ 16 ] Different terminology is used for the same basic phenomenon depending on how fast the particle is spiraling around the magnetic field, owing to the different mathematics required to describe the physics. Gyroresonance emission refers to slower, non- relativistic speeds and is also called magneto-bremsstrahlung or cyclotron emission. Gyrosynchrotron corresponds to the mildly relativistic case, where the particles rotate at a small but significant fraction of light speed, and synchrotron emission refers to the relativistic case where the speeds approach that of light. Gyroresonance and gyrosynchrotron are most important in the solar context, although there may be special cases in which synchrotron emission also operates. [ 20 ] For any sub-type, gyromagnetic emission occurs near the electron gyrofrequency ($f_B$) from Equation 2 or one of its harmonics . This mechanism dominates when the magnetic field strengths are large such that $f_B > f_p$. This is mainly true in the chromosphere, where gyroresonance emission is the primary source of quiescent (non-burst) radio emission, producing microwave radiation in the GHz range. [ 13 ] Gyroresonance emission can also be observed from the densest structures in the corona, where it can be used to measure the coronal magnetic field strength. [ 21 ] Gyrosynchrotron emission is responsible for certain types of microwave radio bursts from the chromosphere and is also likely responsible for certain types of coronal radio bursts. [ 22 ] Plasma emission refers to a set of related processes that partially convert the energy of Langmuir waves into radiation. [ 23 ] It is the most common form of coherent radio emission from the Sun and is commonly accepted as the emission mechanism for most types of solar radio bursts, which can exceed the background radiation level by several orders of magnitude for brief periods. [ 16 ]
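The regime-setting role of Equations 1 and 2 can be made concrete with a short calculation. The following Python sketch is a minimal illustration: the density and field values are assumed, order-of-magnitude figures for a sunspot-associated chromosphere and the quiet corona, not measurements from this article. It also inverts Equation 1 to show why fundamental plasma emission at a given observed frequency originates from plasma four times denser than harmonic emission, a point used in the plasma emission discussion below.

```python
import math

def f_p(n_e_cm3: float) -> float:
    """Electron plasma frequency in Hz for n_e in cm^-3 (Equation 1)."""
    return 8980.0 * math.sqrt(n_e_cm3)

def f_B(B_gauss: float) -> float:
    """Electron gyrofrequency in Hz for B in Gauss (Equation 2)."""
    return 2.8e6 * B_gauss

# Assumed, order-of-magnitude plasma parameters for illustration only:
regions = {
    "chromosphere above a sunspot": dict(n_e=1e10, B=1500.0),
    "quiet corona":                 dict(n_e=1e8,  B=5.0),
}

for name, p in regions.items():
    fp, fb = f_p(p["n_e"]), f_B(p["B"])
    regime = "gyromagnetic (f_B > f_p)" if fb > fp else "bremsstrahlung/plasma (f_p > f_B)"
    print(f"{name}: f_p = {fp/1e9:.2f} GHz, f_B = {fb/1e9:.2f} GHz -> {regime}")

# Inverting Equation 1: a 100 MHz burst seen as *fundamental* emission
# requires f_p = 100 MHz at the source, while *harmonic* emission at the
# same observed frequency needs only f_p = 50 MHz.
f_obs = 100e6
n_fundamental = (f_obs / 8980.0) ** 2      # cm^-3 at the fundamental source
n_harmonic = (f_obs / 2.0 / 8980.0) ** 2   # cm^-3 at the harmonic source
print(n_fundamental / n_harmonic)          # -> 4.0
```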
Langmuir waves , also called electron plasma waves or simply plasma oscillations , are electron density oscillations that occur when a plasma is perturbed so that a population of electrons is displaced relative to the ions. [ 24 ] Once displaced, the Coulomb force pulls the electrons back toward and ultimately past the ions, leading them to oscillate back and forth. Langmuir waves are produced in the solar corona by a plasma instability that occurs when a beam of nonthermal (fast-moving) electrons moves through the ambient plasma. [ 25 ] The electron beam may be accelerated either by magnetic reconnection , the process that underpins solar flares , or by a shock wave , and these two basic processes operate in different contexts to produce different types of solar radio bursts. [ 26 ] The instability that generates Langmuir waves is the two-stream instability , which is also called the beam or bump-on-tail instability in cases such as this, where an electron beam is injected into a plasma, creating a "bump" on the high-energy tail of the plasma's particle velocity distribution. [ 23 ] This bump facilitates exponential Langmuir wave growth in the ambient plasma through the transfer of energy from the electron beam into specific Langmuir wave modes. A small fraction of the Langmuir wave energy can then be converted into electromagnetic radiation through interactions with other wave modes, namely ion sound waves . [ 23 ] A flowchart of the plasma emission stages is shown on the right. Depending on these wave interactions, coherent radio emission may be produced at the fundamental electron plasma frequency ($f_p$; Equation 1 ) or its harmonic ($2f_p$). [ 27 ] [ 28 ] Emission at $f_p$ is often referred to as fundamental plasma emission , while emission at $2f_p$ is called harmonic plasma emission . This distinction is important because the two types have different observed properties and imply different plasma conditions. For example, fundamental plasma emission exhibits a much larger circular polarization fraction [ 29 ] and originates from plasma that is four times denser than harmonic plasma emission. [ 30 ] The final, and least common, solar radio emission mechanism is electron-cyclotron maser emission (ECME). Maser is an acronym for "microwave amplification by stimulated emission of radiation", which originally referred to a laboratory device that can produce intense radiation of a specific frequency through stimulated emission . Stimulated emission is a process by which a group of atoms are moved into higher energy levels (above thermal equilibrium ) and then stimulated to release that extra energy all at once. Such population inversions can occur naturally to produce astrophysical masers , which are sources of very intense radiation of specific spectral lines . [ 31 ] Electron-cyclotron maser emission, however, does not involve population inversions of atomic energy levels. [ 32 ] The term maser was adopted here as an analogy and is somewhat of a misnomer . In ECME, the injection of nonthermal, semi-relativistic electrons into a plasma produces a population inversion analogous to that of a maser in the sense that a high-energy population is added to an equilibrium distribution.
This is very similar to the beginning of the plasma emission process described in the previous section, but when the plasma density is low and/or the magnetic field strength is high such that $f_B > f_p$ (Equations 1 and 2 ), energy from the nonthermal electrons cannot efficiently be converted into Langmuir waves. [ 32 ] This leads instead to direct emission at $f_B$ through a plasma instability that is expressed analytically as a negative absorption coefficient (i.e. positive growth rate) for a particular particle distribution, most famously the loss-cone distribution. [ 33 ] [ 23 ] [ 34 ] ECME is the accepted mechanism for microwave spike bursts from the chromosphere [ 16 ] and is sometimes invoked to explain features of coronal radio bursts that cannot be explained by plasma emission or gyrosynchrotron emission. [ 35 ] [ 36 ] Magnetoionic theory describes the propagation of electromagnetic waves in environments where an ionized plasma is subjected to an external magnetic field, such as the solar corona and Earth's ionosphere . [ 37 ] [ 18 ] The corona is generally treated with the "cold plasma approach," which assumes that the characteristic velocities of the waves are much faster than the thermal velocities of the plasma particles. [ 17 ] [ 38 ] This assumption allows thermal effects to be neglected, and most approaches also ignore the motions of ions and assume that the particles do not interact through collisions. Under these approximations, the dispersion equation for electromagnetic waves includes two free-space modes that can escape the plasma as radiation (radio waves). These are called the ordinary ($o$) and extraordinary ($x$) modes. [ 18 ] The ordinary mode is "ordinary" in the sense that the plasma response is the same as if there were no magnetic field, while the $x$-mode has a somewhat different refractive index . Importantly, the two modes are polarized in opposite senses that depend on the angle with respect to the magnetic field. A quasi-circular approximation generally applies, in which case both modes are 100% circularly polarized with opposite senses. [ 18 ] The $x$- and $o$-modes are produced at different rates depending on the emission mechanism and plasma parameters, which leads to a net circular polarization signal. For example, thermal bremsstrahlung slightly favors the $x$-mode, while plasma emission heavily favors the $o$-mode. [ 29 ] This makes circular polarization an extremely important property for studies of solar radio emission, as it can be used to help understand how the radiation was produced. While circular polarization is most prevalent in solar radio observations, it is also possible to produce linear polarizations in certain circumstances. [ 39 ] However, the presence of intense magnetic fields leads to Faraday rotation that distorts linearly-polarized signals, making them extremely difficult or impossible to detect. [ 40 ] It is, however, possible to detect linearly-polarized background astrophysical sources that are occulted by the corona, [ 41 ] in which case the impact of Faraday rotation can be used to measure the coronal magnetic field strength. [ 42 ] The appearance of solar radio emission, particularly at low frequencies, is heavily influenced by propagation effects. [ 43 ]
A propagation effect is anything that impacts the path or state of an electromagnetic wave after it is produced. These effects therefore depend on whatever media the wave passed through before being observed. The most dramatic impacts on solar radio emission occur in the corona and in Earth's ionosphere . There are three primary effects: refraction, scattering, and mode coupling. Refraction is the bending of light's path as it enters a new medium or passes through a material with varying density. The density of the corona generally decreases with distance from the Sun, which causes radio waves to refract toward the radial direction. [ 44 ] [ 45 ] When solar radio emission enters Earth's ionosphere, refraction may also severely distort the source's apparent location depending on the viewing angle and ionospheric conditions. [ 46 ] The $x$- and $o$-modes discussed in the previous section also have slightly different refractive indices , which can lead to separation of the two modes. [ 29 ] The counterpart to refraction is reflection . A radio wave can be reflected in the solar atmosphere when it encounters a region of particularly high density compared to where it was produced, and such reflections can occur many times before a radio wave escapes the atmosphere. This process of many successive reflections is called scattering , and it has many important consequences. [ 47 ] Scattering increases the apparent size of the entire Sun and of compact sources within it, which is called angular broadening . [ 48 ] [ 49 ] Scattering increases the cone angle over which directed emission can be observed, which can even allow for the observation of low-frequency radio bursts that occurred on the far side of the Sun. [ 50 ] Because the high-density fibers that are primarily responsible for scattering are not randomly aligned and are generally radial, random scattering against them may also systematically shift the observed location of a radio burst to a larger height than where it was actually produced. [ 51 ] [ 30 ] Finally, scattering tends to depolarize emission and is likely why radio bursts often exhibit much lower circular polarization fractions than standard theories predict. [ 52 ] Mode coupling refers to polarization state changes of the $x$- and $o$-modes in response to different plasma conditions. [ 53 ] If a radio wave passes through a region where the magnetic field orientation is nearly perpendicular to the direction of travel, which is called a quasi-transverse region, [ 54 ] the polarization sign (i.e. left or right; positive or negative) may flip depending on the radio frequency and plasma parameters. [ 55 ] This concept is crucial to interpreting polarization observations of solar microwave radiation [ 56 ] [ 57 ] and may also be important for certain low-frequency radio bursts. [ 58 ] Solar radio bursts are brief periods during which the Sun's radio emission is elevated above the background level. [ 16 ] They are signatures of the same processes that lead to the more widely-known forms of solar activity such as sunspots , solar flares, and coronal mass ejections . [ 17 ] Radio bursts can exceed the background radiation level only slightly or by several orders of magnitude (e.g.
by 10 to 10,000 times) depending on a variety of factors that include the amount of energy released, the plasma parameters of the source region, the viewing geometry, and the media through which the radiation propagated before being observed. Most types of solar radio bursts are produced by the plasma emission mechanism operating in different contexts, although some are caused by (gyro)synchrotron and/or electron-cyclotron maser emission. Solar radio bursts are classified largely based on how they appear in dynamic spectrum observations from radiospectrographs. The first three types, shown in the image on the right, were defined by Paul Wild and Lindsay McCready in 1950 using the earliest radiospectrograph observations of metric (low-frequency) bursts. [ 8 ] This classification scheme is based primarily on how a burst's frequency drifts over time. Types IV and V were added within a few years of the initial three, and a number of other types and sub-types have since been identified. Type I bursts are radiation spikes that last around one second and occur over a relatively narrow frequency range ($\Delta f/f \approx 0.025$) with little-to-no discernible drift in frequency. [ 59 ] They tend to occur in groups called noise storms that are often superimposed on enhanced continuum (broad-spectrum) emission over the same frequency range. [ 60 ] While each individual Type I burst does not drift in frequency, a chain of Type I bursts in a noise storm may slowly drift from higher to lower frequencies over a few minutes. Noise storms can last from hours to weeks, and they are generally observed at relatively low frequencies between around 50 and 500 MHz. Noise storms are associated with active regions . [ 61 ] Active regions are regions in the solar atmosphere with high concentrations of magnetic fields, and they include a sunspot at their base in the photosphere except in cases where the magnetic fields are fairly weak. [ 62 ] The association with active regions has been known for decades, but the conditions required to produce noise storms remain mysterious. Not all active regions that produce other forms of activity such as flares generate noise storms, and unlike other types of solar radio bursts, it is often difficult to identify non-radio signatures of Type I bursts. [ 63 ] [ 64 ] The emission mechanism for Type I bursts is generally agreed to be fundamental plasma emission, owing to the high circular polarization fractions that are frequently observed. However, there is no consensus yet on what process accelerates the electrons needed to stimulate plasma emission. The leading ideas are minor magnetic reconnection events or shock waves driven by upward-propagating waves. [ 65 ] [ 66 ] Since the year 2000, different magnetic reconnection scenarios have generally been favored. One scenario involves reconnection between the open and closed magnetic fields at the boundaries of active regions, [ 67 ] and another involves moving magnetic features in the photosphere. [ 68 ] Type II bursts exhibit a relatively slow drift from high to low frequencies of around 0.05 MHz per second, [ 69 ] typically over the course of a few minutes. [ 70 ] They often exhibit two distinct bands of emission that correspond to fundamental and harmonic plasma emission emanating from the same region. [ 71 ]
Type II bursts are associated with coronal mass ejections (CMEs) and are produced at the leading edge of a CME, where a shock wave accelerates the electrons responsible for stimulating plasma emission. [ 72 ] The frequency drifts from higher to lower values because it depends on the electron density, and the shock propagates outward, away from the Sun, through lower and lower densities. By using a model for the Sun's atmospheric density, the frequency drift rate can then be used to estimate the speed of the shock wave (a numerical sketch of this inversion is given below). This exercise typically results in speeds of around 1000 km/s, which matches that of CME shocks determined from other methods. [ 73 ] While plasma emission is the accepted mechanism, Type II bursts do not exhibit significant amounts of circular polarization as would be expected from standard plasma emission theory. [ 74 ] The reason for this is unknown, but a leading hypothesis is that the polarization level is suppressed by dispersion effects related to having an inhomogeneous magnetic field near a magnetohydrodynamic shock. [ 75 ] Type II bursts sometimes exhibit fine structures called herringbone bursts that emanate from the main burst, as it appears in a dynamic spectrum, and extend to lower frequencies. Herringbone structures are believed to result from shock-accelerated electrons that were able to escape far beyond the shock region to excite Langmuir waves in plasma of lower density than the primary burst region. [ 76 ] [ 77 ] Like Type II bursts, Type IIIs also drift from high to low frequencies and are widely attributed to the plasma emission mechanism. [ 78 ] However, Type III bursts drift much more rapidly, at around 100 MHz per second, and must therefore be related to disturbances that move more quickly than the shock waves responsible for Type IIs. [ 79 ] Type III bursts are associated with electron beams that are accelerated to small fractions of light speed (≈ 0.1 to 0.3 c) by magnetic reconnection, the process responsible for solar flares. In the image below, the chain of color contours shows the locations of three Type III bursts at different frequencies. The progression from violet to red corresponds to the trajectories of electron beams moving away from the Sun and exciting lower and lower frequency plasma emission as they encounter lower and lower densities. Given that they are ultimately caused by magnetic reconnection, Type IIIs are strongly associated with X-ray flares and are indeed observed during nearly all large flares. [ 80 ] However, small-to-moderate X-ray flares do not always exhibit Type III bursts and vice versa due to the somewhat different conditions that are required for the high- and low-energy emission to be produced and observed. [ 81 ] [ 82 ] Type III bursts can occur alone, in small groups, or in chains referred to as Type III storms that may last many minutes. They are often subdivided into two types, coronal and interplanetary Type III bursts. [ 78 ] Coronal refers to the case for which an electron beam is traveling in the corona within a few solar radii of the photosphere. They typically start at frequencies in the hundreds of MHz and drift down to tens of MHz over a few seconds. The electron beams that excite the radiation travel along specific magnetic field lines that may be closed or open to interplanetary space. [ 83 ]
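The drift-to-speed inversion mentioned above for Type II bursts can be sketched numerically. The snippet below is a minimal illustration rather than a published analysis pipeline: it assumes the classic Newkirk (1961) coronal density model and fundamental plasma emission, and the input values (an 80 MHz burst drifting at -0.2 MHz/s) are chosen for illustration, not taken from a specific observation.

```python
import math

R_SUN_KM = 6.957e5  # solar radius in km

def newkirk_density(r: float) -> float:
    """Newkirk (1961) coronal density model: n_e in cm^-3, r in solar radii."""
    return 4.2e4 * 10.0 ** (4.32 / r)

def plasma_freq(n_e: float) -> float:
    """Fundamental plasma frequency in Hz for n_e in cm^-3 (Equation 1)."""
    return 8980.0 * math.sqrt(n_e)

def radius_for_freq(f_hz: float) -> float:
    """Height (solar radii) where the model's f_p equals f_hz (bisection)."""
    lo, hi = 1.01, 10.0  # f_p decreases monotonically with r in this range
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if plasma_freq(newkirk_density(mid)) > f_hz:
            lo = mid  # still too dense -> emission site lies farther out
        else:
            hi = mid
    return 0.5 * (lo + hi)

def exciter_speed(f_hz: float, drift_hz_per_s: float, dr: float = 1e-4) -> float:
    """Radial speed (km/s) implied by a frequency drift df/dt, via v = (df/dt)/(df/dr)."""
    r = radius_for_freq(f_hz)
    dfdr = (plasma_freq(newkirk_density(r + dr)) -
            plasma_freq(newkirk_density(r - dr))) / (2.0 * dr)  # Hz per R_sun
    return drift_hz_per_s / dfdr * R_SUN_KM

# Illustrative Type II-like burst: 80 MHz drifting downward at 0.2 MHz/s
print(f"{exciter_speed(80e6, -0.2e6):.0f} km/s")  # several hundred km/s
# Type III bursts drift ~1000x faster at comparable frequencies, implying
# exciter speeds that are an appreciable fraction of the speed of light.
```

The derived speed scales linearly with the measured drift rate and depends on the chosen density model, which is one reason published Type II shock speeds from this method span several hundred to over a thousand km/s.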
Electron beams that escape into interplanetary space may excite Langmuir waves in the solar wind plasma to produce interplanetary Type III bursts that can extend down to 20 kHz and below for beams that reach 1 Astronomical Unit and beyond. [ 78 ] The very low frequencies of interplanetary bursts are below the ionospheric cutoff (≈ 10 MHz), meaning they are blocked by Earth's ionosphere and are observable only from space. Direct, in situ observations of the electrons and Langmuir waves (plasma oscillations) associated with interplanetary Type III bursts are among the most important pieces of evidence for the plasma emission theory of solar radio bursts. [ 84 ] [ 85 ] Type III bursts exhibit moderate levels of circular polarization, typically less than 50%. [ 86 ] This is lower than expected from plasma emission and is likely due to depolarization from scattering by density inhomogeneities and other propagation effects. [ 52 ] Type IV bursts are spikes of broad-band continuum emission that include a few distinct sub-types associated with different phenomena and different emission mechanisms. The first type to be defined was the moving Type IV burst, which requires imaging observations (i.e. interferometry) to detect. [ 87 ] They are characterized by an outward-moving continuum source that is often preceded by a Type II burst in association with a coronal mass ejection (CME). [ 75 ] The emission mechanism for Type IV bursts is generally attributed to gyrosynchrotron emission, plasma emission, or some combination of both that results from fast-moving electrons trapped within the magnetic fields of an erupting CME. [ 16 ] [ 88 ] Stationary Type IV bursts are more common and are not associated with CMEs. [ 75 ] They are broad-band continuum emissions associated either with solar flares or with Type I bursts. [ 16 ] Flare-associated Type IV bursts are also called flare continuum bursts, and they typically begin at or shortly after a flare's impulsive phase. Larger flares often include a storm continuum phase that follows the flare continuum. [ 89 ] The storm continuum can last from hours to days and may transition into an ordinary Type I noise storm in long-duration events. [ 6 ] Both flare and storm continuum Type IV bursts are attributed to plasma emission, but the storm continuum exhibits much larger degrees of circular polarization for reasons that are not fully known. [ 16 ] Type V bursts are the least common of the standard five types. [ 75 ] They are continuum emissions that last from one to a few minutes immediately after a group of Type III bursts, generally occurring below around 120 MHz. [ 16 ] Type Vs are generally thought to be caused by harmonic plasma emission associated with the same streams of electrons responsible for the associated Type III bursts. [ 90 ] They sometimes exhibit significant positional offsets from the Type III bursts, which may be due to the electrons traveling along somewhat different magnetic field structures. [ 91 ] Type V bursts persist for much longer than Type IIIs because they are driven by a slower and less-collimated electron population, which produces broader-band emission and also leads to a reversal in the circular polarization sign from that of the associated Type III bursts due to the different Langmuir wave distribution. [ 92 ] While plasma emission is the commonly-accepted mechanism, electron-cyclotron maser emission has also been proposed. [ 93 ]
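The frequency limits quoted above map directly onto densities through Equation 1, and inverting it provides a quick consistency check. The values below use only the approximate form of Equation 1; the interpretive comments reflect typical textbook densities and are assumptions rather than figures from this article.

```python
# Inverting Equation 1 (f_p ~ 8980 * sqrt(n_e) Hz, n_e in cm^-3):
for f_hz, label in ((20e3, "interplanetary Type III near 1 AU"),
                    (10e6, "ionospheric cutoff")):
    n_e = (f_hz / 8980.0) ** 2
    print(f"f_p = {f_hz/1e6:8.3f} MHz ({label}) -> n_e ~ {n_e:,.0f} cm^-3")
#  0.020 MHz -> ~5 cm^-3, consistent with measured solar wind densities
#              near 1 AU sampled in situ by spacecraft
# 10.000 MHz -> ~1.2e6 cm^-3, comparable to peak electron densities in
#              Earth's ionospheric F layer, which sets the ~10 MHz cutoff
```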
In addition to the classic five types, there are a number of additional types of solar radio bursts. These include variations of the standard types, fine structures within another type, and entirely distinct phenomena. Variant examples include Type J and Type U bursts, which are Type III bursts whose frequency drift reverses to go from lower to higher frequencies, suggesting that an electron beam first traveled away from and then back toward the Sun along a closed magnetic field trajectory. [ 78 ] Fine-structure bursts include zebra patterns [ 94 ] and fibre bursts [ 95 ] that may be observed within Type IV bursts, along with the herringbone bursts [ 76 ] that sometimes accompany Type IIs. Type S bursts, which last only milliseconds, are an example of a distinct class. [ 96 ] There are also a variety of high-frequency microwave burst types, such as microwave Type IV bursts, impulsive bursts, postbursts, and spike bursts. [ 97 ]

Due to its proximity to Earth, the Sun is the brightest source of astronomical radio emission. However, other stars also produce radio emission and may produce much more intense radiation in absolute terms than is observed from the Sun. For "normal" main sequence stars, the mechanisms that produce stellar radio emission are the same as those that produce solar radio emission. [ 16 ] However, emission from " radio stars " may exhibit significantly different properties compared to the Sun, and the relative importance of the different mechanisms may change depending on the properties of the star, particularly its size and rotation rate , the latter of which largely determines the strength of a star's magnetic field . Notable examples of stellar radio emission include quiescent steady emission from stellar chromospheres and coronae, radio bursts from flare stars , radio emission from massive stellar winds , and radio emission associated with close binary stars . [ 16 ] Pre-main-sequence stars such as T Tauri stars also exhibit radio emission through reasonably well-understood processes, namely gyrosynchrotron and electron cyclotron maser emission. [ 98 ] Different radio emission processes also exist for certain pre-main-sequence stars , along with post-main-sequence stars such as neutron stars . [ 16 ] These objects have very high rotation rates, which lead to very intense magnetic fields that are capable of accelerating large numbers of particles to highly relativistic speeds. Of particular interest is the fact that there is no consensus yet on the coherent radio emission mechanism responsible for pulsars , which cannot be explained by the two well-established coherent mechanisms discussed here, plasma emission and electron cyclotron maser emission. [ 99 ] Proposed mechanisms for pulsar radio emission include coherent curvature emission, relativistic plasma emission, anomalous Doppler emission, and linear acceleration emission or free-electron maser emission. [ 99 ] All of these processes still involve the transfer of energy from moving electrons into radiation. However, in this case the electrons are moving at nearly the speed of light, and the debate revolves around what process accelerates these electrons and how their energy is converted into radiation. [ 100 ]
https://en.wikipedia.org/wiki/Solar_radio_emission
Solar reforming is the sunlight-driven conversion of diverse carbon waste resources (including solid, liquid, and gaseous waste streams such as biomass , plastics , industrial by-products, atmospheric carbon dioxide , etc.) into sustainable fuels (or energy vectors) and value-added chemicals. It encompasses a set of ideas focused on the use of solar energy. [ 1 ] Solar reforming offers an attractive and unifying solution to the contemporary challenges of climate change and environmental pollution by creating a sustainable circular network of waste upcycling, clean fuel (and chemical) generation, and the consequent mitigation of greenhouse gas emissions (in alignment with the United Nations Sustainable Development Goals ). [ 1 ]

The earliest sunlight-driven reforming of waste-derived substrates (now referred to as photoreforming or PC reforming, which forms a small sub-section of solar reforming; see Definition and classifications section) involved the use of a TiO 2 semiconductor photocatalyst (generally loaded with a hydrogen evolution co-catalyst such as Pt). Kawai and Sakata from the Institute for Molecular Science , Okazaki, Japan reported in the 1980s that the organics derived from different solid waste matter could be used as electron donors to drive the generation of hydrogen gas over TiO 2 photocatalyst composites. [ 2 ] [ 3 ] In 2017, Wakerley, Kuehnel and Reisner at the University of Cambridge , UK demonstrated the photocatalytic production of hydrogen from raw lignocellulosic biomass substrates in the presence of visible-light responsive CdS|CdO x quantum dots under alkaline conditions. [ 4 ] This was followed by the utilization of less-toxic, carbon-based, visible-light absorbing photocatalyst composites (for example, carbon-nitride based systems) for biomass and plastics photoreforming to hydrogen and organics by Kasap, Uekert and Reisner. [ 5 ] [ 6 ] In addition to variations of carbon nitride, other photocatalyst composite systems based on graphene oxides , MXenes , co-ordination polymers and metal chalcogenides were reported during this period. [ 7 ] [ 8 ] [ 9 ] [ 10 ] [ 11 ] [ 12 ] [ 13 ] [ 14 ] A major limitation of PC reforming is the use of conventional harsh alkaline pre-treatment conditions (pH >13 and high temperatures) for polymeric substrates such as condensation plastics , accounting for more than 80% of the operation costs. [ 15 ] This was circumvented with the introduction of a new chemoenzymatic reforming pathway in 2023 by Bhattacharjee, Guo, Reisner and Hollfelder, which employed near-neutral pH and moderate temperatures for pre-treating plastics and nanoplastics. [ 16 ] In 2020, Jiao and Xie reported the photocatalytic conversion of addition plastics such as polyethylene and polypropylene to high energy-density C 2 fuels over a Nb 2 O 5 catalyst under natural conditions. [ 17 ] The photocatalytic process (referred to as PC reforming; see Categorization and configurations section below) offers simple, one-pot and facile deployment, but has several major limitations, making it challenging for commercial implementation. [ 15 ] In 2021, sunlight-driven photoelectrochemical (PEC) systems/technologies operating with no external bias or voltage input were introduced by Bhattacharjee and Reisner at the University of Cambridge . [ 18 ]
These PEC reforming (see Categorization and configurations section) systems reformed diverse pre-treated waste streams (such as lignocellulose and PET plastics ) to selective value-added chemicals with the simultaneous generation of green hydrogen , achieving areal production rates 100–10,000 times higher than conventional photocatalytic processes. [ 18 ] In 2023, Bhattacharjee, Rahaman and Reisner extended the PEC platform to a solar reactor which could reduce the greenhouse gas CO 2 to different energy vectors ( CO , syngas , or formate , depending on the type of catalyst integrated) and convert waste PET plastics to glycolic acid at the same time. [ 19 ] This further inspired the direct capture and conversion of CO 2 to products from flue gas and air ( direct air capture ) in a PEC reforming process (with simultaneous plastic conversion). [ 20 ] Choi and Ryu demonstrated a polyoxometalate -mediated PEC process to achieve biomass conversion with unassisted hydrogen production in 2022. [ 21 ] Similarly, in 2023, Pan and Chu reported a PEC cell for renewable formate production from sunlight, CO 2 and biomass-derived sugars. [ 22 ] In 2025, Andrei, Roh and Yang at the University of California, Berkeley demonstrated solar-driven hydrocarbon synthesis by interfacing copper nanoflower catalysts on perovskite -based artificial leaves. The devices can produce ethane and ethylene at high rates by coupling CO 2 reduction with the oxidation of glycerol into value-added chemicals, which replaces the thermodynamically demanding O 2 evolution reaction. [ 23 ] [ 24 ] These developments have led solar reforming (and electroreforming, where renewable electricity drives redox processes) to gradually emerge as an active area of exploration.

Solar reforming is the sunlight-driven transformation of waste substrates to valuable products (such as sustainable fuels and chemicals), as defined by scientists Subhajit Bhattacharjee, Stuart Linley, and Erwin Reisner in their 2024 Nature Reviews Chemistry article, where they conceptualized and formalized the field by introducing its concepts, classification, configurations and metrics. [ 1 ] It generally operates without external heating or pressure, and also introduces a thermodynamic advantage over traditional green hydrogen or CO 2 -reduction fuel-producing methods such as water splitting or CO 2 splitting, respectively. Depending on solar spectrum utilization, solar reforming can be classified into two categories: "solar catalytic reforming" and "solar thermal reforming". [ 1 ] Solar catalytic reforming refers to transformation processes primarily driven by ultraviolet (UV) or visible light . [ 1 ] It also includes the subset of 'photoreforming', which encompasses the utilization of high-energy photons in the UV or near-UV region of the solar spectrum (for example, by semiconductor photocatalysts such as TiO 2 ). Solar thermal reforming, on the other hand, exploits the infrared (IR) region for waste upcycling to generate products of high economic value. [ 1 ] An important aspect of solar reforming is value creation, meaning that the overall value created by product formation must be greater than the value of the substrate destroyed. [ 1 ] In terms of deployment architectures, solar catalytic reforming can be further categorized into photocatalytic reforming (PC reforming), photoelectrochemical reforming (PEC reforming), and photovoltaic-electrochemical reforming (PV-EC reforming). [ 1 ]
Solar reforming offers several advantages over conventional methods of waste management and fuel/chemical production. It is a less energy-intensive and lower-carbon alternative to waste reforming methods such as pyrolysis and gasification , which require high energy input. [ 1 ] Solar reforming also provides several benefits over traditional green hydrogen production methods such as water splitting (H 2 O → H 2 + 1/2 O 2 ; ΔG° = 237 kJ mol −1 ). It offers a thermodynamic advantage over water splitting by replacing the energetically and kinetically demanding water oxidation half-reaction (E 0 = +1.23 V vs. the reversible hydrogen electrode (RHE)) with the energetically neutral oxidation of waste-derived organics (C x H y O z + (2x − z)H 2 O → (2x − z + y/2)H 2 + x CO 2 ; ΔG° ≈ 0 kJ mol −1 ). [ 1 ] This results in better performance in terms of higher production rates, and the advantage also translates to other processes that depend on water oxidation as the counter-reaction, such as CO 2 splitting. Furthermore, the concentrated streams of hydrogen produced by solar reforming are safer than the explosive mixtures of oxygen and hydrogen from traditional water splitting, which otherwise require additional separation costs. [ 1 ] The added economic advantage of forming two different valuable products simultaneously (for example, gaseous reductive fuels and liquid oxidative chemicals) makes solar reforming suitable for commercial applications. [ 1 ]

Solar reforming encompasses a range of technological processes and configurations, and suitable performance metrics are therefore needed to evaluate its commercial viability. In artificial photosynthesis , the most common metric is the solar-to-fuel conversion efficiency (η STF ) as shown below, where r is the product formation rate, ΔG is the Gibbs free energy change during the process, A is the sunlight irradiation area and P is the total light intensity flux. [ 1 ] [ 25 ] The η STF can be adopted as a metric for solar reforming, but with certain considerations. Since the ΔG values for solar reforming processes are very low (ΔG ≈ 0 kJ mol −1 ), the η STF is by definition close to zero, despite high production rates and quantum yields . However, replacing the ΔG of product formation (during solar reforming) with that of product utilisation (|ΔG use |, such as the combustion of the hydrogen fuel generated) can give a better representation of the process efficiency. [ 1 ]

$$\eta_{\mathrm{STF}} = \frac{r_{\mathrm{SR}}\ (\mathrm{mol\ s^{-1}}) \times \Delta G_{\mathrm{SR}}\ (\mathrm{J\ mol^{-1}})}{P_{\mathrm{total}}\ (\mathrm{W\ m^{-2}}) \times A\ (\mathrm{m^{2}})}$$
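As a concrete illustration of the |ΔG use | convention described above, the sketch below computes η STF for a hypothetical photoreforming experiment, crediting the hydrogen with its combustion free energy (≈ 237 kJ mol −1 ). All of the input numbers are invented for illustration, not measurements from the cited literature.

```python
# Hypothetical example: eta_STF evaluated with the Delta-G of product *use*
# (H2 combustion), per the convention above. All inputs are assumed values.

DG_USE_H2 = 237_000.0  # J mol^-1, |Delta G| of H2 combustion at standard conditions

def eta_stf(rate_mol_per_s, dg_j_per_mol, irradiance_w_per_m2, area_m2):
    """Solar-to-fuel efficiency: (rate x Delta G) / (P x A)."""
    return rate_mol_per_s * dg_j_per_mol / (irradiance_w_per_m2 * area_m2)

# Assumed: 1 micromole of H2 per hour from a 1 cm^2 absorber under 1 sun (1000 W m^-2)
rate = 1e-6 / 3600.0  # mol s^-1
eff = eta_stf(rate, DG_USE_H2, 1000.0, 1e-4)
print(f"eta_STF ~ {eff:.2%}")  # ~0.07% for these assumed numbers
```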
Since solar reforming is highly dependent on the light harvester and its area of photon collection, a more technologically relevant metric is the areal production rate (r areal ) as shown below, where n is the moles of product formed, A is the sunlight irradiation area and t is the time. [ 1 ]

$$r_{\mathrm{areal}} = \frac{n_{\mathrm{product}}\ (\mathrm{mol})}{A\ (\mathrm{m^{2}}) \times t\ (\mathrm{h})}$$

Although r areal is a more consistent metric for solar reforming, it neglects some key parameters such as the type of waste utilized, pre-treatment costs, product value, scaling, other process and separation costs, deployment variables, etc. [ 1 ] Therefore, a more adaptable and robust metric is the solar-to-value creation rate (r STV ), which can encompass all of these factors and provide a more holistic and practical picture from the economic or commercial point of view. [ 1 ] The simplified equation for r STV is shown below, where C i and C k are the costs of product i and substrate k, respectively, C p is the pre-treatment cost for waste substrate k, and n i and n k are the amounts (in moles) of product i formed and substrate k consumed during solar reforming, respectively. Note that the metric is adaptable and can be expanded to include other relevant parameters as applicable; a worked numerical sketch of both metrics appears at the end of this section. [ 1 ]

$$r_{\mathrm{STV}} = \frac{\sum_{i=1}^{M} C_{i}\ (\$\ \mathrm{mol^{-1}}) \times n_{i}\ (\mathrm{mol}) \;-\; \sum_{k=1}^{N} \left(C_{k} + C_{p}\right)\ (\$\ \mathrm{mol^{-1}}) \times n_{k}\ (\mathrm{mol})}{A\ (\mathrm{m^{2}}) \times t\ (\mathrm{h})}$$

Solar reforming depends on the properties of the light absorber and the catalysts involved, and on their selection, screening, and integration to generate maximum value. The design and deployment of solar reforming technologies dictate the efficiency, scale, and target substrates/products. In this context, solar reforming (more specifically, solar catalytic reforming) can be classified into three architectures: PC reforming, PEC reforming, and PV-EC reforming. [ 1 ]

An important concept introduced in the context of solar reforming is the 'photon economy', which, as defined by Bhattacharjee, Linley and Reisner, is the maximum utilization of all incident photons for maximizing product formation and value creation. [ 1 ] An ideal solar reforming process is one where the light absorber absorbs incident UV and visible photons with maximum quantum yield , generating a high charge-carrier concentration to drive the redox half-reactions at the maximum rate. The residual, non-absorbed low-energy IR photons may then be used for boosting reaction kinetics, waste pre-treatment or other means of value creation (for example, desalination , [ 36 ] etc.). Therefore, proper light and thermal management through various means (such as solar concentrators and thermoelectric modules, among others) is encouraged, yielding an approach that is both atom-economical and photon-economical and extracts maximum value from solar reforming processes.

Deployment of any solar reforming architecture (PC, PEC, or PV-EC) is speculative and depends on many factors. [ 1 ] Solar reforming need not be limited to the conventional chemical pathways discussed here, and may also include other relevant industrial processes such as light-driven organic transformations, flow photochemistry, and integration with industrial electrolysis, among others. [ 1 ] The products of conventional solar reforming, such as green hydrogen and other platform chemicals, have a broad value-chain.
It is also now understood that sustainable fuel- and chemical-producing technologies of the future will rely on biomass, plastics, and CO 2 as key carbon feedstocks to replace fossil fuels . [ 37 ] Therefore, with sunlight being abundant and the cheapest source of energy, solar reforming is well-positioned to drive decarbonization and facilitate the transition from a linear to a circular economy in the coming decades. [ 1 ]
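To make the value-creation accounting of the metrics above concrete, the following sketch evaluates r areal and a simplified r STV for a hypothetical PEC reforming run. All prices, amounts, and areas are invented placeholders, not literature values.

```python
# Hypothetical worked example of the r_areal and r_STV metrics defined above.
# Prices, amounts, and areas are invented placeholders, not literature values.

def r_areal(n_product_mol, area_m2, time_h):
    """Areal production rate in mol m^-2 h^-1."""
    return n_product_mol / (area_m2 * time_h)

def r_stv(products, substrates, area_m2, time_h):
    """Solar-to-value creation rate in $ m^-2 h^-1.

    products:   list of (price $ per mol, moles formed)
    substrates: list of (cost $ per mol, pre-treatment cost $ per mol, moles consumed)
    """
    value = sum(price * n for price, n in products)
    cost = sum((c_k + c_p) * n for c_k, c_p, n in substrates)
    return (value - cost) / (area_m2 * time_h)

# Assumed run: 0.02 mol H2 plus 0.01 mol of an oxidation product from
# 0.01 mol of substrate on a 0.01 m^2 device over 10 h of illumination.
products = [(0.05, 0.02), (0.80, 0.01)]   # ($/mol, mol)
substrates = [(0.02, 0.01, 0.01)]         # ($/mol, $/mol pre-treatment, mol)
print(f"r_areal(H2) = {r_areal(0.02, 0.01, 10):.2f} mol m^-2 h^-1")
print(f"r_STV       = {r_stv(products, substrates, 0.01, 10):.3f} $ m^-2 h^-1")
```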
https://en.wikipedia.org/wiki/Solar_reforming
A solar still distills water containing dissolved impurities by using the heat of the Sun to evaporate the water so that the vapour may be cooled, condensed, and collected, thereby purifying it. Solar stills are used in areas where drinking water is unavailable, so that clean water can be obtained from dirty water or from plants by exposing them to sunlight. Still types include large-scale concentrated solar stills and condensation traps . In a solar still, impure water is contained outside the collector, where it is evaporated by sunlight shining through a transparent collector. The pure water vapour condenses on the cool inside surface and drips into a tank. Distillation replicates the way nature makes rain. The sun's energy heats water to the point of evaporation. As the water evaporates, its vapour rises, condensing into water again as it cools. This process leaves behind impurities, such as salts and heavy metals, and eliminates microbiological organisms. The result is pure (potable) water.

Condensation traps have been in use since the pre-Incan peoples inhabited the Andes . [ citation needed ] In 1952, the United States military developed a portable solar still for pilots stranded in the ocean. It featured an inflatable 610-millimetre (24 in) floating plastic ball with a flexible tube in the side. An inner bag hangs from attachment points on the outer bag. Seawater is poured into the inner bag from an opening in the ball's neck. Fresh water is taken out using the side tube. Output ranged from 1.4 litres (1.5 US qt) to 2.4 litres (2.5 US qt) of fresh water per day. [ 1 ] Similar stills are included in some life raft survival kits , though manual reverse osmosis desalinators have mostly replaced them. [ 2 ] Today, a method for gathering water in moisture traps is taught in the Argentinian Army for use by specialist units expected to conduct extended patrols of more than a week's duration in the arid border areas of the Andes. [ citation needed ]

A collector is placed at the bottom of a pit. Branches are placed vertically in the pit, long enough to extend over the edge of the pit and form a funnel that directs water into the collector. A lid is then built over this funnel, using more branches, leaves, grasses, etc. Water is collected each morning. This method relies on the formation of dew or frost on the receptacle, funnel, and lid. The forming dew collects on and runs down the outside of the funnel and into the receptacle. This water would typically evaporate with the morning sun and thus vanish, but the lid traps the evaporating water and raises the humidity within the trap, reducing the amount of water lost. The shade produced by the lid also reduces the temperature within the trap, which further reduces the rate of water loss to evaporation.

A solar still can be constructed with two to four stones, plastic film or transparent glass , a central weight to form the funnel, and a container for the condensate. [ 3 ] Better materials improve efficiency. A single sheet of plastic can replace the branches and leaves. Greater efficiency arises because the plastic is waterproof, preventing water vapour from escaping. The sheet is attached to the ground on all sides with stones or earth. Weighting the centre of the sheet forms the funnel, and condensate runs down it into the receptacle. One study of pit distillation found that angling the lid at 30 degrees captured the most water. The optimal water depth was about 25 millimetres (1 in). [ 4 ]

During photosynthesis, plants release water through transpiration .
Water can be obtained by enclosing a leafy tree branch in clear plastic, [ 5 ] capturing the water vapour released by the tree. [ 6 ] The plastic allows photosynthesis to continue. In a 2009 study, varying the angle of the plastic and increasing the internal temperature relative to the outside temperature improved output volumes. Unless relieved, the vapour pressure around the branch can rise so high that the leaves can no longer transpire, requiring the water to be removed frequently. Alternatively, clumps of grass or small bushes can be placed inside the bag. The foliage must be replaced at regular intervals, particularly if it is uprooted. Efficiency is greatest when the bag receives maximum sunshine. Soft, pulpy roots yield the greatest amount of liquid for the least amount of effort.

The wick-type solar still is a vapour-tight glass-topped box with an angled roof. [ 7 ] Water is poured in from the top. It is heated by sunlight and evaporates, condenses on the underside of the glass, and runs into a connecting pipe at the bottom. Wicks separate the water into banks to increase the surface area: the more wicks, the more heat reaches the water. To absorb more heat, the wicks can be blackened. Glass absorbs less heat than plastic at higher temperatures, although glass is not as flexible. A plastic net can catch the water before it falls into the container and give it more time to heat. When distilling brine or other polluted water, adding a dye can increase the amount of solar radiation absorbed.

A reverse still uses the temperature difference between solar-heated ambient air and the device to condense ambient water vapour. One such device produces water without external power. It features an inverted cone on top to deflect ambient heat in the air and to keep sunlight off the upper surface of the box. This surface is a sheet of glass coated with multiple layers of a polymer and silver. [ 8 ] It reflects sunlight to reduce surface heating. Residual heat that is not reflected is re-emitted at a specific ( infrared ) wavelength so that it passes through the atmosphere into space. The box can be as much as 15 °C (27 °F) cooler than the ambient temperature. That stimulates condensation, which gathers on the ceiling. The ceiling is coated in a superhydrophobic material, so that the condensate forms into droplets and falls into a collector. A test system yielded 4.6 ml (0.16 US fl oz) of water per day using a 10 cm (3.9 in) surface, or approximately 1.3 L/m 2 (0.28 gal/ft 2 ) per day. [ 8 ]

An inclined solar still operates by allowing short-wave solar radiation to pass through a transparent glass plate while trapping the long-wave radiation emitted by the heated sand and water inside the still. [ 9 ] This trapped heat raises the water temperature, increasing the evaporation rate. The resulting water vapour condenses on the inner surface of the glass plate and is collected using a channel. This type of still is used to produce potable water from brackish sources and has been examined for its effectiveness in defluoridation. A variation of this method, known as earth–water distillation, uses wet sand or soil to extract water in arid regions. Sand is used within the inclined still to retain a stable water layer, preventing overflow; without sand, feed water would spill over if its free surface rose above the collection channel. [ 9 ]

Condensation traps can extend or supplement existing water sources or supplies.
A trap measuring 40 cm (16 in) in diameter by 30 cm (12 in) deep yields around 100 to 150 mL (3.4 to 5.1 US fl oz) per day. Urinating into the pit before adding the receptacle allows some of the urine's water content to be recovered. A pit still may be too inefficient as a survival still because of the energy and water expended in its construction. [ 10 ] In desert environments, water needs can exceed 3.8 litres (1 US gal) per day for a person at rest, while still production may average only 240 millilitres (8 US fl oz). [ 10 ] [ 11 ] Several days of water collection may be required to equal the water lost during construction. [ 11 ]

Solar stills are used in cases where rain, piped, or well water is impractical, such as in remote homes or during power outages. [ 9 ] In subtropical hurricane target areas that can lose power for days, solar distillation can provide an alternative source of clean water. Solar-powered desalination systems can be installed in remote locations with little or no infrastructure or energy grid. Solar stills are affordable, eco-friendly, and considered effective among conventional distillation techniques. They are very effective, especially at supplying fresh water to islanders. This makes them well suited for use in rural areas or developing countries where access to clean water is limited. [ 12 ] [ 13 ] Solar stills have been used by ocean-stranded pilots and included in life raft emergency kits. [ 1 ] Using a condensation trap to distill urine removes the urea and salt, recycling the body's water. [ 14 ] Solar stills have also been used for the treatment of municipal wastewater , [ 15 ] the dewatering of sewage sludge , [ 16 ] and olive mill wastewater management. [ 17 ]

Research indicates that ions such as F − and NO 3 − can be present in distillates from solar stills. Imaging and distillation experiments were performed to investigate this phenomenon. [ 18 ] White dots were observed in the vapour space above the interface of hot water poured into containers. The concentrations of ions such as F − and SO 4 2− in distillates from both thermal and solar distillation experiments were found to be similar when using deionized water as well as fluoride solutions with concentrations of 100 and 10,000 mg/L. These findings suggest that aerosols enter the distillation system through leaks, acting as nuclei for water vapour condensation. The water-soluble components of aerosols dissolve in the forming droplets, some of which are carried into the distillate by buoyancy-driven convection.
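The survival-math caveat above can be made explicit. The sketch below compares an assumed construction sweat loss against the daily yield figures quoted above; the sweat-loss value is an illustrative assumption, while the 240 mL/day average yield and the 3.8 L/day requirement come from the text.

```python
# Water budget for a survival pit still, using the figures quoted above.
# The construction sweat loss is an assumed illustrative value.

daily_yield_l = 0.240   # average still production (L/day), from the text
daily_need_l = 3.8      # desert requirement for a person at rest (L/day), from the text
sweat_loss_l = 1.0      # assumed water lost digging and building the still

days_to_break_even = sweat_loss_l / daily_yield_l
print(f"~{days_to_break_even:.1f} days of collection just to repay construction losses")
print(f"fraction of daily need met: {daily_yield_l / daily_need_l:.1%}")  # ~6%
```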
https://en.wikipedia.org/wiki/Solar_still
A solar symbol is a symbol representing the Sun . Common solar symbols include circles (with or without rays), crosses, and spirals. In religious iconography, personifications of the Sun or solar attributes are often indicated by means of a halo or a radiate crown . When the systematic study of comparative mythology first became popular in the 19th century, scholarly opinion tended to over-interpret historical myths and iconography in terms of "solar symbolism". This was especially the case with Max Müller and his followers beginning in the 1860s in the context of Indo-European studies . [ 1 ] Many "solar symbols" claimed in the 19th century, such as the swastika , triskele , Sun cross , etc., have tended to be interpreted more conservatively in scholarship since the later 20th century. [ 2 ]

The basic element of most solar symbols is the circular solar disk. The disk can be modified in various ways, notably by adding rays (found in the Bronze Age in Egyptian depictions of Aten ) or a cross . In the ancient Near East, the solar disk could also be modified by the addition of the Uraeus (rearing cobra), and in ancient Mesopotamia it was shown with wings . Egyptian hieroglyphs have a large inventory of solar symbolism because of the central position of solar deities ( Ra , Horus , Aten etc.) in ancient Egyptian religion . The "Sun" logogram in early Chinese writing , beginning with the oracle bone script (c. 12th century BC), also shows the solar disk with a central dot (analogous to the Egyptian hieroglyph); under the influence of the writing brush, this character evolved into a square shape (modern 日 ).

In the Greek and European world, until approximately the 16th century, the astrological symbol for the Sun was a disk with a single ray ( U+1F71A 🜚 ALCHEMICAL SYMBOL FOR GOLD ). This is the form, for example, in Johannes Kamateros ' 12th-century Compendium of Astrology. [ 4 ] The modern astronomical symbol for the Sun, a circled dot ( U+2609 ☉ SUN ), was first used in the Renaissance.

A circular disk with alternating triangular and wavy rays emanating from it is a frequent symbol or artistic depiction of the sun. The ancient Mesopotamian "star of Shamash " could be represented with either eight wavy rays, or with four wavy and four triangular rays. The Vergina Sun (also known as the Star of Vergina, Macedonian Star, or Argead Star) is a rayed solar symbol appearing in ancient Greek art from the 6th to 2nd centuries BC. The Vergina Sun appears in art variously with sixteen, twelve, or eight triangular rays. Bianchini's planisphere , produced in the 2nd century, [ 5 ] has a circlet with rays radiating from it. [ 6 ]

The iconographic tradition of depicting the Sun with rays and with a human face developed in the Western tradition in the high medieval period and became widespread in the Renaissance , harking back to the Sun god ( Sol/Helios ) wearing a radiate crown in classical antiquity. The sunburst was the badge of king Edward III of England , and has thus become the badge of office of Windsor Herald .

The modern pictogram representing the Sun as a circle with rays, often eight in number (indicated by either straight lines or triangles; Unicode Miscellaneous Symbols ☀ U+2600; ☼ U+263C), indicates "clear weather" in weather forecasts , originally in television forecasts in the 1970s. [ 8 ] The Unicode 6.0 Miscellaneous Symbols and Pictographs block (October 2010) introduced another set of weather pictograms, including "white sun" without rays ( U+1F323 🌣 ), as well as "sun with face" ( U+1F31E 🌞︎︎ ).
Two pictograms resembling the Sun with rays are used to represent luminance settings in display devices . They have been encoded in Unicode since version 6.0 in the Miscellaneous Symbols and Pictographs block, under U+1F505 as "low brightness symbol" ( 🔅 ) and U+1F506 as "high brightness symbol" ( 🔆 ). [ 9 ]

The " sun cross ", "solar cross", or "wheel cross" (🜨) is often considered to represent the four seasons and the tropical year, and therefore the Sun (though as an astronomical symbol it represented the Earth). [ a ] In the prehistoric religion of Bronze Age Europe , crosses in circles appear frequently on artifacts identified as cult items. An example from the Nordic Bronze Age is the "miniature standard" with amber inlay revealing a cross shape when held against the light ( National Museum of Denmark ). [ 10 ] The Bronze Age symbol has also been connected with the spoked chariot wheel , which at the time was four-spoked (compare the Linear B ideogram 243 "wheel" 𐃏 ). In the context of a culture that celebrated the Sun chariot , the wheel may thus have had a solar connotation (cf. the Trundholm sun chariot ). The Arevakhach ("solar cross") symbol often found in Armenian memorial stelae is claimed as an ancient Armenian solar symbol of eternity and light. [ 11 ] Some Sámi shaman drums bear the Beaivi Sámi sun symbol, which resembles a sun cross .

The swastika has been a long-standing symbol of good fortune in Eurasian cultures; its appropriation by the Nazi Party from 1920 to 1945 is a brief moment in its history. It may be derived from the sun cross, [ 12 ] and is another solar symbol in some contexts. [ 13 ] It is used among Buddhists ( manji ), Jains , and Hindus , and in many other cultures, though not necessarily as a solar symbol.

The " Black Sun " (German: Schwarze Sonne ) is a 'sun wheel' with twelve-fold rotational symmetry . The design was incorporated as a mosaic into a floor of Wewelsburg Castle during the Nazi era and may have been inspired by Alemannic Iron Age swastika-like designs in Migration-period Zierscheiben . [ 14 ] It has been adopted by modern Satanist groups and neo-Nazis . The "Kolovrat", or in Polish Słoneczko , represents the Sun in Slavic neopaganism .

Official insignia which incorporate rayed solar symbols include the flag of Uruguay , the flag of Kiribati , some versions of the flag of Argentina , the Irish Defence Forces cap badge , and the 1959–1965 coat of arms of Iraq . The depictions of the sun on the flags of the Republic of China (Taiwan) , Kazakhstan , Kurdistan , the Brazilian state of Pernambuco , and Nepal have only straight (triangular) rays; that of Kyrgyzstan has only curvy rays; while that of the Philippines has short diverging rays grouped into threes. Another rayed form of the sun has simple radial lines dividing the background into two colors, as in the military flags of Japan and the flag of North Macedonia , and in the top parts of the flags of Tibet and Arizona . The flag of New Mexico is based on the Zia sun symbol, which has four groups of four parallel rays emanating symmetrically from a central circle.
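Since the discussion above leans on specific Unicode code points, a short sketch can confirm them programmatically. The snippet below prints the registered character name for each sun-related code point mentioned in the text, using only Python's standard unicodedata module (names may be absent on older Unicode databases, hence the fallback):

```python
import unicodedata

# Sun-related code points cited in the text above.
CODEPOINTS = [0x2600, 0x263C, 0x2609, 0x1F31E, 0x1F323, 0x1F505, 0x1F506, 0x1F71A]

for cp in CODEPOINTS:
    ch = chr(cp)
    name = unicodedata.name(ch, "<unnamed>")  # fallback for old Unicode databases
    print(f"U+{cp:04X}  {ch}  {name}")
# e.g. U+1F505  🔅  LOW BRIGHTNESS SYMBOL
```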
https://en.wikipedia.org/wiki/Solar_symbol