Development Testing is a software development process that involves the synchronized application of a broad spectrum of defect prevention and detection strategies in order to reduce software development risks, time, and costs. It is performed by the software developer or engineer during the construction phase of the software development lifecycle. Rather than replacing traditional QA practices, it augments them. Development Testing aims to eliminate construction errors before code is promoted to QA; this strategy is intended to increase the quality of the resulting software as well as the efficiency of the overall development and QA process.
In contrast, some emerging software disciplines, such as extreme programming and the agile software development movement, adhere to a "test-driven software development" model. In this process, unit tests are written first by the software engineers (often with pair programming in the extreme programming methodology). The tests are expected to fail initially; as code is written, it passes incrementally larger portions of the test suites. The test suites are continuously updated as new failure conditions and corner cases are discovered, and they are integrated with any regression tests that are developed. Unit tests are maintained along with the rest of the software source code and are generally integrated into the build process (with inherently interactive tests being relegated to a partially manual build acceptance process). The ultimate goal of this test process is continuous integration, where software updates can be published to the public frequently.
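As a minimal sketch of this test-first workflow (the function name and behaviour are hypothetical illustrations, not taken from any particular project), the unit test is written before the code it exercises, fails at first, and then passes once the implementation is filled in:

```python
# test_slugify.py -- in test-driven development these tests are written first,
# run red (fail) while slugify() is unimplemented, then turn green.
# Hypothetical example: names and behaviour are illustrative only.

def slugify(title: str) -> str:
    """Implementation written after the tests: lowercase, hyphen-separated."""
    return "-".join(title.lower().split())

def test_slugify_basic():
    assert slugify("Hello World") == "hello-world"

def test_slugify_collapses_whitespace():
    # Corner case added to the suite later, as the text describes.
    assert slugify("  Hello   World ") == "hello-world"

if __name__ == "__main__":
    test_slugify_basic()
    test_slugify_collapses_whitespace()
    print("all tests pass")
```

In a continuous-integration setup, tests like these would run on every commit, typically via a test runner such as pytest.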
Bottom-up testing is an approach to integration testing where the lowest-level components (modules, procedures, and functions) are tested first, then integrated and used to facilitate the testing of higher-level components. After the integration testing of lower-level integrated modules, the next level of modules is formed and can be used for integration testing. The process is repeated until the components at the top of the hierarchy are tested. This approach is helpful only when all or most of the modules of the same development level are ready.[citation needed] This method also helps to determine the levels of software developed and makes it easier to report testing progress in the form of a percentage.[citation needed]
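To make the level-by-level progression concrete, here is a small sketch (the modules are hypothetical, chosen only to illustrate the idea): the leaf functions are tested in isolation first, and the integration test of the higher-level component then builds on the already-verified pieces.

```python
# Bottom-up integration sketch with hypothetical modules.

def parse_price(text: str) -> float:
    """Lowest-level unit: extract a numeric price from text like '$10.00'."""
    return float(text.strip().lstrip("$"))

def apply_tax(price: float, rate: float) -> float:
    """Lowest-level unit: add tax at the given rate."""
    return round(price * (1 + rate), 2)

def total_from_text(text: str, rate: float) -> float:
    """Higher-level component composed of the two units above."""
    return apply_tax(parse_price(text), rate)

# Level 1: test the leaves in isolation.
assert parse_price("$10.00") == 10.0
assert apply_tax(10.0, 0.2) == 12.0

# Level 2: integration test of the component built on the tested leaves.
assert total_from_text("$10.00", 0.2) == 12.0
print("levels 1 and 2 pass")
```

Reporting progress as a percentage then amounts to counting how many modules at each level have passed their tests.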
It has been proved that each class is strictly included in the next. For instance, testing where we assume that the behavior of the implementation under test can be denoted by a deterministic finite-state machine for some known finite sets of inputs and outputs, and with some known number of states, belongs to Class I (and all subsequent classes). However, if the number of states is not known, then it only belongs to all classes from Class II on. If the implementation under test must be a deterministic finite-state machine failing the specification for a single trace (and its continuations), and its number of states is unknown, then it only belongs to classes from Class III on. Testing temporal machines, where transitions are triggered if inputs are produced within some real-bounded interval, only belongs to classes from Class IV on, whereas testing many non-deterministic systems only belongs to Class V (but not all, and some even belong to Class I). Inclusion in Class I does not require the simplicity of the assumed computation model: some testing cases involving implementations written in any programming language, and testing implementations defined as machines depending on continuous magnitudes, have been proved to be in Class I. Other elaborated cases, such as the testing framework by Matthew Hennessy under must semantics, and temporal machines with rational timeouts, belong to Class II.
Several certification programs exist to support the professional aspirations of software testers and quality assurance specialists. No certification now offered actually requires the applicant to show their ability to test software. No certification is based on a widely accepted body of knowledge. This has led some to declare that the testing field is not ready for certification. Certification itself cannot measure an individual's productivity, their skill, or practical knowledge, and cannot guarantee their competence or professionalism as a tester.
Software testing is a part of the software quality assurance (SQA) process. In SQA, software process specialists and auditors are concerned with the software development process rather than just the artifacts such as documentation, code and systems. They examine and change the software engineering process itself to reduce the number of faults that end up in the delivered software: the so-called "defect rate". What constitutes an "acceptable defect rate" depends on the nature of the software; a flight simulator video game would have a much higher defect tolerance than software for an actual airplane. Although there are close links with SQA, testing departments often exist independently, and there may be no SQA function in some companies.[citation needed]
Germany is a federal republic consisting of sixteen federal states (German: Bundesland, or Land).[a] Since today's Germany was formed from an earlier collection of several states, it has a federal constitution, and the constituent states retain a measure of sovereignty. With an emphasis on geographical conditions, Berlin and Hamburg are frequently called Stadtstaaten (city-states), as is the Free Hanseatic City of Bremen, which in fact includes the cities of Bremen and Bremerhaven. The remaining 13 states are called Flächenländer (literally: area states).
The Federal Republic of Germany was created in 1949 through the unification of the western states (which had previously been under American, British, and French administration) in the aftermath of World War II. Initially, in 1949, the states of the Federal Republic were Baden, Bavaria (in German: Bayern), Bremen, Hamburg, Hesse (Hessen), Lower Saxony (Niedersachsen), North Rhine-Westphalia (Nordrhein-Westfalen), Rhineland-Palatinate (Rheinland-Pfalz), Schleswig-Holstein, Württemberg-Baden, and Württemberg-Hohenzollern. West Berlin, while officially not part of the Federal Republic, was largely integrated and considered a de facto state.
In 1952, following a referendum, Baden, Württemberg-Baden, and Württemberg-Hohenzollern merged into Baden-Württemberg. In 1957, the Saar Protectorate rejoined the Federal Republic as the Saarland. German reunification in 1990, in which the German Democratic Republic (East Germany) acceded to the Federal Republic, resulted in the addition of the re-established eastern states of Brandenburg, Mecklenburg-West Pomerania (in German Mecklenburg-Vorpommern), Saxony (Sachsen), Saxony-Anhalt (Sachsen-Anhalt), and Thuringia (Thüringen), as well as the reunification of West and East Berlin into Berlin and its establishment as a full and equal state. A regional referendum in 1996 to merge Berlin with surrounding Brandenburg as "Berlin-Brandenburg" failed to reach the necessary majority vote in Brandenburg, while a majority of Berliners voted in favour of the merger.
Federalism is one of the entrenched constitutional principles of Germany. According to the German constitution (called Grundgesetz or in English Basic Law), some topics, such as foreign affairs and defense, are the exclusive responsibility of the federation (i.e., the federal level), while others fall under the shared authority of the states and the federation; the states retain residual legislative authority for all other areas, including "culture", which in Germany includes not only topics such as financial promotion of arts and sciences, but also most forms of education and job training. Though international relations including international treaties are primarily the responsibility of the federal level, the constituent states have certain limited powers in this area: in matters that affect them directly, the states defend their interests at the federal level through the Bundesrat (literally Federal Council, the upper house of the German Federal Parliament) and in areas where they have legislative authority they have limited powers to conclude international treaties "with the consent of the federal government".
The use of the term Länder (Lands) dates back to the Weimar Constitution of 1919. Before this time, the constituent states of the German Empire were called Staaten (States). Today, it is very common to use the term Bundesland (Federal Land). However, this term is used officially neither by the constitution of 1919 nor by the Basic Law (Constitution) of 1949. Three Länder call themselves Freistaaten (Free States, an old-fashioned German expression for republic): Bavaria (since 1919), Saxony (originally since 1919 and again since 1990), and Thuringia (since 1994). There is little continuity between the current states and their predecessors of the Weimar Republic, with the exception of the three free states and the two city-states of Hamburg and Bremen.
A new delimitation of the federal territory continues to be debated in Germany, though "Some scholars note that there are significant differences among the American states and regional governments in other federations without serious calls for territorial changes ...", as political scientist Arthur B. Gunlicks remarks. He summarizes the main arguments for boundary reform in Germany: "... the German system of dual federalism requires strong Länder that have the administrative and fiscal capacity to implement legislation and pay for it from own source revenues. Too many Länder also make coordination among them and with the federation more complicated ...". But several proposals have failed so far; territorial reform remains a controversial topic in German politics and public perception.
Federalism has a long tradition in German history. The Holy Roman Empire comprised more than 300 petty states around 1796. The number of territories was greatly reduced during the Napoleonic Wars (1796–1814). After the Congress of Vienna (1815), 39 states formed the German Confederation. The Confederation was dissolved after the Austro-Prussian War and replaced by a North German Federation under Prussian hegemony; this war left Prussia dominant in Germany, and German nationalism would compel the remaining independent states to ally with Prussia in the Franco-Prussian War of 1870–71 and then to accede to the crowning of King Wilhelm of Prussia as German Emperor. The new German Empire included 25 states (three of them Hanseatic cities) and the imperial territory of Alsace-Lorraine. The empire was dominated by Prussia, which controlled 65% of the territory and 62% of the population. After the territorial losses of the Treaty of Versailles, the remaining states continued as republics of a new German federation. These states were gradually de facto abolished and reduced to provinces under the Nazi regime via the Gleichschaltung process, as the states were administratively largely superseded by the Nazi Gau system.
During the Allied occupation of Germany after World War II, internal borders were redrawn by the Allied military governments. No single state comprised more than 30% of either population or territory; this was intended to prevent any one state from being as dominant within Germany as Prussia had been in the past. Initially, only seven of the pre-war states remained: Baden (in part), Bavaria (reduced in size), Bremen, Hamburg, Hesse (enlarged), Saxony, and Thuringia. The states with hyphenated names, such as Rhineland-Palatinate, North Rhine-Westphalia, and Saxony-Anhalt, owed their existence to the occupation powers and were created out of mergers of former Prussian provinces and smaller states. Former German territory that lay east of the Oder-Neisse line fell under either Polish or Soviet administration, but attempts were made, at least symbolically, not to abandon sovereignty over it well into the 1960s. However, no attempts were made to establish new states in these territories, as they lay outside the jurisdiction of West Germany at that time.
Upon its founding in 1949, West Germany had eleven states. These were reduced to nine in 1952 when three south-western states (South Baden, Württemberg-Hohenzollern, and Württemberg-Baden) merged to form Baden-Württemberg. From 1957, when the French-occupied Saar Protectorate was returned and formed into the Saarland, the Federal Republic consisted of ten states, which are referred to as the "Old States" today. West Berlin was under the sovereignty of the Western Allies and neither a Western German state nor part of one. However, it was in many ways de facto integrated with West Germany under a special status.
Later, the constitution was amended to state that the citizens of the 16 states had successfully achieved the unity of Germany in free self-determination and that the Basic Law thus applied to the entire German people. Article 23, which had allowed "any other parts of Germany" to join, was rephrased. It had been used in 1957 to reintegrate the Saar Protectorate as the Saarland into the Federal Republic, and this was used as a model for German reunification in 1990. The amended article now defines the participation of the Federal Council and the 16 German states in matters concerning the European Union.
A new delimitation of the federal territory has been discussed since the Federal Republic was founded in 1949 and even before. Committees and expert commissions advocated a reduction of the number of states; academics (Rutz, Miegel, Ottnad etc.) and politicians (Döring, Apel, and others) made proposals – some of them far-reaching – for redrawing boundaries, but hardly anything came of these public discussions. Territorial reform is sometimes advocated by the richer states as a means to avoid or reduce fiscal transfers.
The debate on a new delimitation of the German territory started in 1919 as part of discussions about the new constitution. Hugo Preuss, the father of the Weimar Constitution, drafted a plan to divide the German Reich into 14 roughly equal-sized states. His proposal was turned down due to opposition from the states and concerns of the government. Article 18 of the constitution enabled a new delimitation of the German territory but set high hurdles: three fifths of the votes cast, and at least a majority of the population, were necessary to decide on the alteration of territory. In fact, until 1933 there were only four changes in the configuration of the German states: the seven Thuringian states were merged in 1920, whereby Coburg opted for Bavaria, Pyrmont joined Prussia in 1922, and Waldeck did so in 1929. Any later plans to break up the dominant Prussia into smaller states failed because political circumstances were not favorable to state reforms.
After the Nazi Party seized power in January 1933, the Länder increasingly lost importance; they became administrative regions of a centralised country. Three changes are of particular note: on January 1, 1934, Mecklenburg-Schwerin was united with the neighbouring Mecklenburg-Strelitz; and, by the Greater Hamburg Act (Groß-Hamburg-Gesetz), effective April 1, 1937, the area of the city-state was extended, while Lübeck lost its independence and became part of the Prussian province of Schleswig-Holstein.
As the premiers did not come to an agreement on this question, the Parliamentary Council was supposed to address the issue. Its provisions are reflected in Article 29. There was a binding provision for a new delimitation of the federal territory: the federal territory must be revised (paragraph 1). Moreover, in territories or parts of territories whose affiliation with a Land had changed after 8 May 1945 without a referendum, people were allowed to petition for a revision of the current status within a year of the promulgation of the Basic Law (paragraph 2). If at least one tenth of those entitled to vote in Bundestag elections were in favour of a revision, the federal government had to include the proposal in its legislation. A referendum was then required in each territory or part of a territory whose affiliation was to be changed (paragraph 3). The proposal was not to take effect if, within any of the affected territories, a majority rejected the change; in this case, the bill had to be introduced again and, after passing, had to be confirmed by referendum in the Federal Republic as a whole (paragraph 4). The reorganization was to be completed within three years after the Basic Law had come into force (paragraph 6).
In the Paris Agreements of 23 October 1954, France offered to establish an independent "Saarland", under the auspices of the Western European Union (WEU), but on 23 October 1955 in the Saar Statute referendum the Saar electorate rejected this plan by 67.7% to 32.3% (out of a 96.5% turnout: 423,434 against, 201,975 for) despite the public support of Federal German Chancellor Konrad Adenauer for the plan. The rejection of the plan by the Saarlanders was interpreted as support for the Saar to join the Federal Republic of Germany.
Paragraph 6 of Article 29 stated that if a petition was successful, a referendum should be held within three years. Since the deadline passed on 5 May 1958 without anything happening, the Hesse state government filed a constitutional complaint with the Federal Constitutional Court in October 1958. The complaint was dismissed in July 1961 on the grounds that Article 29 had made the new delimitation of the federal territory an exclusively federal matter. At the same time, the Court reaffirmed the requirement for a territorial revision as a binding order to the relevant constitutional bodies.
In his investiture address, given on 28 October 1969 in Bonn, Chancellor Willy Brandt announced that the government would consider Article 29 of the Basic Law as a binding order. An expert commission was established, named after its chairman, the former Secretary of State Professor Werner Ernst. After two years of work, the experts delivered their report in 1973. It provided alternative proposals for both northern Germany and the centre and southwest. In the north, either a single new state consisting of Schleswig-Holstein, Hamburg, Bremen and Lower Saxony would be created (solution A), or two new states, one in the northeast consisting of Schleswig-Holstein, Hamburg and the northern part of Lower Saxony (from Cuxhaven to Lüchow-Dannenberg) and one in the northwest consisting of Bremen and the rest of Lower Saxony (solution B). In the centre and southwest, Rhineland-Palatinate (with the exception of the Germersheim district but including the Rhine-Neckar region) would be merged with Hesse and the Saarland (solution C); the district of Germersheim would then become part of Baden-Württemberg.
The Basic Law of the Federal Republic of Germany, the federal constitution, stipulates that the structure of each Federal State's government must "conform to the principles of republican, democratic, and social government, based on the rule of law" (Article 28). Most of the states are governed by a cabinet led by a Ministerpräsident (Minister-President), together with a unicameral legislative body known as the Landtag (State Diet). The states are parliamentary republics and the relationship between their legislative and executive branches mirrors that of the federal system: the legislatures are popularly elected for four or five years (depending on the state), and the Minister-President is then chosen by a majority vote among the Landtag's members. The Minister-President appoints a cabinet to run the state's agencies and to carry out the executive duties of the state's government.
The governments in Berlin, Bremen and Hamburg are designated by the term Senate. In the three free states of Bavaria, Saxony, and Thuringia, the government is referred to as the State Government (Staatsregierung); in the other ten states, the term Land Government (Landesregierung) is used. Before January 1, 2000, Bavaria had a bicameral parliament, with a popularly elected Landtag and a Senate made up of representatives of the state's major social and economic groups. The Senate was abolished following a referendum in 1998. The states of Berlin, Bremen, and Hamburg are governed slightly differently from the other states. In each of those cities, the executive branch consists of a Senate of approximately eight members, selected by the state's parliament; the senators carry out duties equivalent to those of the ministers in the larger states. The equivalent of the Minister-President is the Senatspräsident (President of the Senate) in Bremen, the Erster Bürgermeister (First Mayor) in Hamburg, and the Regierender Bürgermeister (Governing Mayor) in Berlin. The parliament for Berlin is called the Abgeordnetenhaus (House of Representatives), while Bremen and Hamburg both have a Bürgerschaft. The parliaments in the remaining 13 states are referred to as Landtag (State Parliament).
The districts of Germany (Kreise) are administrative districts. Every state except the city-states of Berlin, Hamburg, and Bremen consists of rural districts (Landkreise); district-free towns or cities (Kreisfreie Städte, in Baden-Württemberg also called "urban districts", or Stadtkreise), i.e. cities that constitute a district in their own right; or local associations of a special kind (Kommunalverbände besonderer Art), see below. The state Free Hanseatic City of Bremen consists of two urban districts, while Berlin and Hamburg are states and urban districts at the same time.
Local associations of a special kind are an amalgamation of one or more Landkreise with one or more Kreisfreie Städte to form a replacement of the aforementioned administrative entities at the district level. They are intended to implement simplification of administration at that level. Typically, a district-free city or town and its urban hinterland are grouped into such an association, or Kommunalverband besonderer Art. Such an organization requires the issuing of special laws by the governing state, since they are not covered by the normal administrative structure of the respective states.
Municipalities (Gemeinden): every rural district and every Amt is subdivided into municipalities, while every urban district is a municipality in its own right. There are (as of 6 March 2009) 12,141 municipalities, which are the smallest administrative units in Germany. Cities and towns are municipalities as well, and also hold city rights or town rights (Stadtrechte). Nowadays, this is mostly just the right to be called a city or town; in former times, however, there were many other privileges, including the right to impose local taxes or to allow industry only within city limits.
The municipalities have two major policy responsibilities. First, they administer programs authorized by the federal or state government. Such programs typically relate to youth, schools, public health, and social assistance. Second, Article 28(2) of the Basic Law guarantees the municipalities "the right to regulate on their own responsibility all the affairs of the local community within the limits set by law." Under this broad statement of competence, local governments can justify a wide range of activities. For instance, many municipalities develop and expand the economic infrastructure of their communities through the development of industrial trading estates.
In southwestern Germany, territorial revision seemed to be a top priority, since the border between the French and American occupation zones was set along the Autobahn Karlsruhe-Stuttgart-Ulm (today the A8). Article 118 stated: "The division of the territory comprising Baden, Württemberg-Baden and Württemberg-Hohenzollern into Länder may be revised, without regard to the provisions of Article 29, by agreement between the Länder concerned. If no agreement is reached, the revision shall be effected by a federal law, which shall provide for an advisory referendum." Since no agreement was reached, a referendum was held on 9 December 1951 in four different voting districts, three of which approved the merger (South Baden refused but was overruled, as the total of all votes was decisive). On 25 April 1952, the three former states merged to form Baden-Württemberg.
Many applications of silicate glasses derive from their optical transparency, which gives rise to one of silicate glasses' primary uses as window panes. Glass will transmit, reflect and refract light; these qualities can be enhanced by cutting and polishing to make optical lenses, prisms, fine glassware, and optical fibers for high speed data transmission by light. Glass can be colored by adding metallic salts, and can also be painted and printed with vitreous enamels. These qualities have led to the extensive use of glass in the manufacture of art objects and in particular, stained glass windows. Although brittle, silicate glass is extremely durable, and many examples of glass fragments exist from early glass-making cultures. Because glass can be formed or molded into any shape, and also because it is a sterile product, it has been traditionally used for vessels: bowls, vases, bottles, jars and drinking glasses. In its most solid forms it has also been used for paperweights, marbles, and beads. When extruded as glass fiber and matted as glass wool in a way to trap air, it becomes a thermal insulating material, and when these glass fibers are embedded into an organic polymer plastic, they are a key structural reinforcement part of the composite material fiberglass. Some objects historically were so commonly made of silicate glass that they are simply called by the name of the material, such as drinking glasses and reading glasses.
Most common glass contains other ingredients to change its properties. Lead glass or flint glass is more 'brilliant' because the increased refractive index causes noticeably more specular reflection and increased optical dispersion. Adding barium also increases the refractive index. Thorium oxide gives glass a high refractive index and low dispersion and was formerly used in producing high-quality lenses, but due to its radioactivity has been replaced by lanthanum oxide in modern eyeglasses.[citation needed] Iron can be incorporated into glass to absorb infrared energy, for example in heat absorbing filters for movie projectors, while cerium(IV) oxide can be used for glass that absorbs UV wavelengths.
Fused quartz is a glass made from chemically pure SiO2 (silica). It has excellent thermal shock characteristics, being able to survive immersion in water while red hot. However, its high melting temperature (1723 °C) and viscosity make it difficult to work with. Normally, other substances are added to simplify processing. One is sodium carbonate (Na2CO3, "soda"), which lowers the glass transition temperature. The soda makes the glass water-soluble, which is usually undesirable, so lime (calcium oxide [CaO], generally obtained from limestone), some magnesium oxide (MgO) and aluminium oxide (Al2O3) are added to provide better chemical durability. The resulting glass contains about 70 to 74% silica by weight and is called a soda-lime glass. Soda-lime glasses account for about 90% of manufactured glass.
Following the glass batch preparation and mixing, the raw materials are transported to the furnace. Soda-lime glass for mass production is melted in gas fired units. Smaller scale furnaces for specialty glasses include electric melters, pot furnaces, and day tanks. After melting, homogenization and refining (removal of bubbles), the glass is formed. Flat glass for windows and similar applications is formed by the float glass process, developed between 1953 and 1957 by Sir Alastair Pilkington and Kenneth Bickerstaff of the UK's Pilkington Brothers, who created a continuous ribbon of glass using a molten tin bath on which the molten glass flows unhindered under the influence of gravity. The top surface of the glass is subjected to nitrogen under pressure to obtain a polished finish. Container glass for common bottles and jars is formed by blowing and pressing methods. This glass is often slightly modified chemically (with more alumina and calcium oxide) for greater water resistance. Further glass forming techniques are summarized in the table Glass forming techniques.
Glass has the ability to refract, reflect, and transmit light following geometrical optics, without scattering it. It is used in the manufacture of lenses and windows. Common glass has a refractive index around 1.5. This may be modified by adding low-density materials such as boron, which lowers the index of refraction (see crown glass), or increased (to as much as 1.8) with high-density materials such as (classically) lead oxide (see flint glass and lead glass) or, in modern uses, less toxic oxides of zirconium, titanium, or barium. These high-index glasses (inaccurately known as "crystal" when used in glass vessels) cause more chromatic dispersion of light, and are prized for their diamond-like optical properties.
The most familiar, and historically the oldest, types of glass are "silicate glasses" based on the chemical compound silica (silicon dioxide, or quartz), the primary constituent of sand. The term glass, in popular usage, is often used to refer only to this type of material, which is familiar from use as window glass and in glass bottles. Of the many silica-based glasses that exist, ordinary glazing and container glass is formed from a specific type called soda-lime glass, composed of approximately 75% silicon dioxide (SiO2), sodium oxide (Na2O) from sodium carbonate (Na2CO3), calcium oxide, also called lime (CaO), and several minor additives. A very clear and durable quartz glass can be made from pure silica, but the high melting point and very narrow glass transition of quartz make glassblowing and hot working difficult. In glasses like soda-lime, the compounds added to quartz are used to lower the melting temperature and improve workability, at a cost in toughness, thermal stability, and optical transmittance.
Glass is in widespread use largely due to the production of glass compositions that are transparent to visible light. In contrast, polycrystalline materials do not generally transmit visible light. The individual crystallites may be transparent, but their facets (grain boundaries) reflect or scatter light, resulting in diffuse reflection. Glass does not contain the internal subdivisions associated with grain boundaries in polycrystals and hence does not scatter light in the same manner as a polycrystalline material. The surface of a glass is often smooth, since during glass formation the molecules of the supercooled liquid are not forced into rigid crystal geometries and can follow surface tension, which imposes a microscopically smooth surface. These properties, which give glass its clearness, can be retained even if glass is partially light-absorbing, i.e., colored.
Naturally occurring glass, especially the volcanic glass obsidian, has been used by many Stone Age societies across the globe for the production of sharp cutting tools and, due to its limited source areas, was extensively traded. But in general, archaeological evidence suggests that the first true glass was made in coastal north Syria, Mesopotamia or ancient Egypt. The earliest known glass objects, of the mid third millennium BCE, were beads, perhaps initially created as accidental by-products of metal-working (slags) or during the production of faience, a pre-glass vitreous material made by a process similar to glazing.
Color in glass may be obtained by addition of electrically charged ions (or color centers) that are homogeneously distributed, and by precipitation of finely dispersed particles (such as in photochromic glasses). Ordinary soda-lime glass appears colorless to the naked eye when it is thin, although iron(II) oxide (FeO) impurities of up to 0.1 wt% produce a green tint, which can be viewed in thick pieces or with the aid of scientific instruments. Further FeO and Cr2O3 additions may be used for the production of green bottles. Sulfur, together with carbon and iron salts, is used to form iron polysulfides and produce amber glass ranging from yellowish to almost black. A glass melt can also acquire an amber color from a reducing combustion atmosphere. Manganese dioxide can be added in small amounts to remove the green tint given by iron(II) oxide. Glass used in art glass or studio glass is colored using closely guarded recipes that involve specific combinations of metal oxides, melting temperatures and 'cook' times. Most colored glass used in the art market is manufactured in volume by vendors who serve this market, although there are some glassmakers with the ability to make their own color from raw materials.
Glass remained a luxury material, and the disasters that overtook Late Bronze Age civilizations seem to have brought glass-making to a halt. Indigenous development of glass technology in South Asia may have begun in 1730 BCE. In ancient China, though, glassmaking seems to have a late start, compared to ceramics and metal work. The term glass developed in the late Roman Empire. It was in the Roman glassmaking center at Trier, now in modern Germany, that the late-Latin term glesum originated, probably from a Germanic word for a transparent, lustrous substance. Glass objects have been recovered across the Roman empire in domestic, industrial and funerary contexts.[citation needed]
Glass was used extensively during the Middle Ages. Anglo-Saxon glass has been found across England during archaeological excavations of both settlement and cemetery sites. Glass in the Anglo-Saxon period was used in the manufacture of a range of objects including vessels, beads and windows, and was also used in jewelry. From the 10th century onwards, glass was employed in the stained glass windows of churches and cathedrals, with famous examples at Chartres Cathedral and the Basilica of Saint Denis. By the 14th century, architects were designing buildings with walls of stained glass, such as Sainte-Chapelle, Paris (1203–1248), and the east end of Gloucester Cathedral. Stained glass had a major revival with Gothic Revival architecture in the 19th century. With the Renaissance, and a change in architectural style, the use of large stained glass windows became less prevalent. The use of domestic stained glass increased until most substantial houses had glass windows. These were initially small panes leaded together, but with changes in technology, glass could be manufactured relatively cheaply in increasingly larger sheets. This led to larger window panes and, in the 20th century, to much larger windows in ordinary domestic and commercial buildings.
In the 20th century, new types of glass such as laminated glass, reinforced glass and glass bricks increased the use of glass as a building material and resulted in new applications of glass. Multi-storey buildings are frequently constructed with curtain walls made almost entirely of glass. Similarly, laminated glass has been widely applied to vehicles for windscreens. While glass containers have always been used for storage and are valued for their hygienic properties, glass has been utilized increasingly in industry. Optical glass for spectacles has been used since the late Middle Ages. The production of lenses has become increasingly proficient, aiding astronomers as well as having other applications in medicine and science. Glass is also employed as the aperture cover in many solar energy systems.
From the 19th century, there was a revival of many ancient glass-making techniques, including cameo glass, achieved for the first time since the Roman Empire and initially mostly used for pieces in a neo-classical style. The Art Nouveau movement made great use of glass, with René Lalique, Émile Gallé, and Daum of Nancy producing colored vases and similar pieces, often in cameo glass and also using luster techniques. Louis Comfort Tiffany in America specialized in stained glass, both secular and religious, and his famous lamps. The early 20th century saw the large-scale factory production of glass art by firms such as Waterford and Lalique. From about 1960 onwards there have been an increasing number of small studios hand-producing glass artworks, and glass artists began to class themselves as, in effect, sculptors working in glass, and their works as part of the fine arts.
Addition of lead(II) oxide lowers the melting point, lowers the viscosity of the melt, and increases the refractive index. Lead oxide also facilitates the solubility of other metal oxides and is used in colored glasses. The viscosity decrease of a lead glass melt is very significant (roughly 100 times in comparison with soda glasses); this allows easier removal of bubbles and working at lower temperatures, hence its frequent use as an additive in vitreous enamels and glass solders. The large ionic radius of the Pb2+ ion renders it highly immobile in the matrix and hinders the movement of other ions; lead glasses therefore have high electrical resistance, about two orders of magnitude higher than soda-lime glass (10^8.5 vs 10^6.5 Ω·cm, DC at 250 °C). For more details, see lead glass.
There are three classes of components for oxide glasses: network formers, intermediates, and modifiers. The network formers (silicon, boron, germanium) form a highly cross-linked network of chemical bonds. The intermediates (titanium, aluminium, zirconium, beryllium, magnesium, zinc) can act as both network formers and modifiers, according to the glass composition. The modifiers (calcium, lead, lithium, sodium, potassium) alter the network structure; they are usually present as ions, compensated by nearby non-bridging oxygen atoms, bound by one covalent bond to the glass network and holding one negative charge to compensate for the positive ion nearby. Some elements can play multiple roles; e.g. lead can act either as a network former (Pb4+ replacing Si4+) or as a modifier.
The alkali metal ions are small and mobile; their presence in glass allows a degree of electrical conductivity, especially in the molten state or at high temperature. Their mobility decreases the chemical resistance of the glass, allowing leaching by water and facilitating corrosion. Alkaline earth ions, with their two positive charges and requirement for two non-bridging oxygen ions to compensate for their charge, are much less mobile themselves and also hinder the diffusion of other ions, especially the alkalis. The most common commercial glasses contain both alkali and alkaline earth ions (usually sodium and calcium), for easier processing and satisfying corrosion resistance. Corrosion resistance of glass can be increased by dealkalization, the removal of the alkali ions from the glass surface by reaction with, e.g., sulfur or fluorine compounds. The presence of alkali metal ions also has a detrimental effect on the loss tangent of the glass and on its electrical resistance; glasses manufactured for electronics (sealing, vacuum tubes, lamps, etc.) have to take this into account.
New chemical glass compositions or new treatment techniques can be initially investigated in small-scale laboratory experiments. The raw materials for laboratory-scale glass melts are often different from those used in mass production because the cost factor has a low priority. In the laboratory mostly pure chemicals are used. Care must be taken that the raw materials have not reacted with moisture or other chemicals in the environment (such as alkali or alkaline earth metal oxides and hydroxides, or boron oxide), or that the impurities are quantified (loss on ignition). Evaporation losses during glass melting should be considered during the selection of the raw materials, e.g., sodium selenite may be preferred over easily evaporating SeO2. Also, more readily reacting raw materials may be preferred over relatively inert ones, such as Al(OH)3 over Al2O3. Usually, the melts are carried out in platinum crucibles to reduce contamination from the crucible material. Glass homogeneity is achieved by homogenizing the raw materials mixture (glass batch), by stirring the melt, and by crushing and re-melting the first melt. The obtained glass is usually annealed to prevent breakage during processing.
In the past, small batches of amorphous metals with high surface area configurations (ribbons, wires, films, etc.) have been produced through the implementation of extremely rapid rates of cooling. This was initially termed "splat cooling" by doctoral student W. Klement at Caltech, who showed that cooling rates on the order of millions of degrees per second are sufficient to impede the formation of crystals, so that the metallic atoms become "locked into" a glassy state. Amorphous metal wires have been produced by sputtering molten metal onto a spinning metal disk. More recently a number of alloys have been produced in layers with thickness exceeding 1 millimeter. These are known as bulk metallic glasses (BMG). Liquidmetal Technologies sells a number of zirconium-based BMGs. Batches of amorphous steel have also been produced that demonstrate mechanical properties far exceeding those found in conventional steel alloys.
In 2004, NIST researchers presented evidence that an isotropic non-crystalline metallic phase (dubbed "q-glass") could be grown from the melt. This phase is the first phase, or "primary phase," to form in the Al-Fe-Si system during rapid cooling. Interestingly, experimental evidence indicates that this phase forms by a first-order transition. Transmission electron microscopy (TEM) images show that the q-glass nucleates from the melt as discrete particles, which grow spherically with a uniform growth rate in all directions. The diffraction pattern shows it to be an isotropic glassy phase. Yet there is a nucleation barrier, which implies an interfacial discontinuity (or internal surface) between the glass and the melt.
Glass-ceramic materials share many properties with both non-crystalline glass and crystalline ceramics. They are formed as a glass, and then partially crystallized by heat treatment. For example, the microstructure of whiteware ceramics frequently contains both amorphous and crystalline phases. Crystalline grains are often embedded within a non-crystalline intergranular phase of grain boundaries. When applied to whiteware ceramics, vitreous means the material has an extremely low permeability to liquids, often but not always water, when determined by a specified test regime.
The term mainly refers to a mix of lithium and aluminosilicates that yields an array of materials with interesting thermomechanical properties. The most commercially important of these have the distinction of being impervious to thermal shock. Thus, glass-ceramics have become extremely useful for countertop cooking. The negative thermal expansion coefficient (CTE) of the crystalline ceramic phase can be balanced with the positive CTE of the glassy phase. At a certain point (~70% crystalline) the glass-ceramic has a net CTE near zero. This type of glass-ceramic exhibits excellent mechanical properties and can sustain repeated and quick temperature changes up to 1000 °C.
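The near-zero expansion can be illustrated with a simple linear rule-of-mixtures estimate (an idealization introduced here for illustration; real glass-ceramic design uses more detailed models, and the example CTE values are hypothetical):

$$\alpha_{\text{net}} = x\,\alpha_{\text{crystal}} + (1 - x)\,\alpha_{\text{glass}}$$

With, say, $\alpha_{\text{crystal}} \approx -2\times10^{-6}\,\text{K}^{-1}$ and $\alpha_{\text{glass}} \approx 5\times10^{-6}\,\text{K}^{-1}$, the net CTE crosses zero at a crystalline fraction $x = \alpha_{\text{glass}}/(\alpha_{\text{glass}} - \alpha_{\text{crystal}}) \approx 0.7$, consistent with the ~70% figure quoted above.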
Mass production of glass window panes in the early twentieth century caused a similar effect. In glass factories, molten glass was poured onto a large cooling table and allowed to spread. The resulting glass is thicker at the location of the pour, located at the center of the large sheet. These sheets were cut into smaller window panes with nonuniform thickness, typically with the location of the pour centered in one of the panes (known as "bull's-eyes") for decorative effect. Modern glass intended for windows is produced as float glass and is very uniform in thickness.
The observation that old windows are sometimes found to be thicker at the bottom than at the top is often offered as supporting evidence for the view that glass flows over a timescale of centuries, the assumption being that the glass has exhibited the liquid property of flowing from one shape to another. This assumption is incorrect, as once solidified, glass stops flowing. The reason for the observation is that in the past, when panes of glass were commonly made by glassblowers, the technique used was to spin molten glass so as to create a round, mostly flat and even plate (the crown glass process, described above). This plate was then cut to fit a window. The pieces were not absolutely flat; the edges of the disk became a different thickness as the glass spun. When installed in a window frame, the glass would be placed with the thicker side down both for the sake of stability and to prevent water accumulating in the lead cames at the bottom of the window. Occasionally such glass has been found installed with the thicker side at the top, left or right.
In physics, the standard definition of a glass (or vitreous solid) is a solid formed by rapid melt quenching. The term glass is often used to describe any amorphous solid that exhibits a glass transition temperature Tg. If the cooling is sufficiently rapid (relative to the characteristic crystallization time) then crystallization is prevented and instead the disordered atomic configuration of the supercooled liquid is frozen into the solid state at Tg. The tendency for a material to form a glass while quenched is called glass-forming ability. This ability can be predicted by the rigidity theory. Generally, the structure of a glass exists in a metastable state with respect to its crystalline form, although in certain circumstances, for example in atactic polymers, there is no crystalline analogue of the amorphous phase.
Some people consider glass to be a liquid due to its lack of a first-order phase transition, in which certain thermodynamic variables such as volume, entropy and enthalpy would be discontinuous through the glass transition range. The glass transition may be described as analogous to a second-order phase transition, in which the intensive thermodynamic variables such as the thermal expansivity and heat capacity are discontinuous. Nonetheless, the equilibrium theory of phase transformations does not entirely hold for glass, and hence the glass transition cannot be classed as one of the classical equilibrium phase transformations in solids.
Although the atomic structure of glass shares characteristics of the structure in a supercooled liquid, glass tends to behave as a solid below its glass transition temperature. A supercooled liquid behaves as a liquid, but it is below the freezing point of the material, and in some cases will crystallize almost instantly if a crystal is added as a core. The change in heat capacity at a glass transition and a melting transition of comparable materials are typically of the same order of magnitude, indicating that the change in active degrees of freedom is comparable as well. Both in a glass and in a crystal it is mostly only the vibrational degrees of freedom that remain active, whereas rotational and translational motion is arrested. This helps to explain why both crystalline and non-crystalline solids exhibit rigidity on most experimental time scales.
First recognized in 1900 by Max Planck, it was originally the proportionality constant between the minimal increment of energy, E, of a hypothetical electrically charged oscillator in a cavity that contained black body radiation, and the frequency, f, of its associated electromagnetic wave. In 1905 the value E, the minimal energy increment of a hypothetical oscillator, was theoretically associated by Einstein with a "quantum" or minimal element of the energy of the electromagnetic wave itself. The light quantum behaved in some respects as an electrically neutral particle, as opposed to an electromagnetic wave. It was eventually called the photon.
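Written out, the proportionality described above is the Planck relation:

$$E = hf$$

where $h$ is the Planck constant, $E$ the minimal energy increment (later identified with the photon energy) and $f$ the frequency of the associated electromagnetic wave.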
Classical statistical mechanics requires the existence of h (but does not define its value). Eventually, following upon Planck's discovery, it was recognized that physical action cannot take on an arbitrary value. Instead, it must be some multiple of a very small quantity, the "quantum of action", now called the Planck constant. Classical physics cannot explain this fact. In many cases, such as for monochromatic light or for atoms, this quantum of action also implies that only certain energy levels are allowed, and values in between are forbidden.
Equivalently, the smallness of the Planck constant reflects the fact that everyday objects and systems are made of a large number of particles. For example, green light with a wavelength of 555 nanometres (the approximate wavelength to which human eyes are most sensitive) has a frequency of 540 THz (540×10¹² Hz). Each photon has an energy E = hf = 3.58×10⁻¹⁹ J. That is a very small amount of energy in terms of everyday experience, but everyday experience is not concerned with individual photons any more than with individual atoms or molecules. An amount of light compatible with everyday experience is the energy of one mole of photons; its energy can be computed by multiplying the photon energy by the Avogadro constant, NA ≈ 6.022×10²³ mol⁻¹. The result is that green light of wavelength 555 nm has an energy of 216 kJ/mol, a typical energy of everyday life.
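These figures can be verified with a few lines of arithmetic; the following sketch simply reproduces the computation described above (constants rounded to four significant figures):

```python
# Worked check of the photon-energy figures quoted for 555 nm green light.
h = 6.626e-34    # Planck constant, J*s
c = 2.998e8      # speed of light, m/s
N_A = 6.022e23   # Avogadro constant, 1/mol

wavelength = 555e-9                # m
freq = c / wavelength              # ~5.402e14 Hz = 540 THz
E_photon = h * freq                # ~3.58e-19 J per photon
E_mole = E_photon * N_A / 1000     # kJ per mole of photons

print(f"frequency:     {freq:.3e} Hz")
print(f"photon energy: {E_photon:.3e} J")
print(f"per mole:      {E_mole:.0f} kJ/mol")   # prints ~216
```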
In the last years of the nineteenth century, Planck was investigating the problem of black-body radiation first posed by Kirchhoff some forty years earlier. It is well known that hot objects glow, and that hotter objects glow brighter than cooler ones. The electromagnetic field obeys laws of motion similarly to a mass on a spring, and can come to thermal equilibrium with hot atoms. The hot object in equilibrium with light absorbs just as much light as it emits. If the object is black, meaning it absorbs all the light that hits it, then its thermal light emission is maximized.
The assumption that black-body radiation is thermal leads to an accurate prediction: the total amount of emitted energy goes up with the temperature according to a definite rule, the Stefan–Boltzmann law (1879–84). But it was also known that the colour of the light given off by a hot object changes with the temperature, so that "white hot" is hotter than "red hot". Nevertheless, Wilhelm Wien discovered the mathematical relationship between the peaks of the curves at different temperatures, by using the principle of adiabatic invariance. At each different temperature, the curve is moved over by Wien's displacement law (1893). Wien also proposed an approximation for the spectrum of the object, which was correct at high frequencies (short wavelength) but not at low frequencies (long wavelength). It still was not clear why the spectrum of a hot object had the form that it has.
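Wien's displacement law can be stated compactly: the wavelength of the emission peak is inversely proportional to the temperature,

$$\lambda_{\max} = \frac{b}{T}, \qquad b \approx 2.898\times10^{-3}\ \text{m·K},$$

so a hotter body's curve peaks at shorter (bluer) wavelengths, consistent with "white hot" being hotter than "red hot".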
Prior to Planck's work, it had been assumed that the energy of a body could take on any value whatsoever – that it was a continuous variable. The Rayleigh–Jeans law makes close predictions at one end of the spectrum (low frequencies), but the results diverge more and more strongly at high frequencies. To make Planck's law, which correctly predicts blackbody emissions, it was necessary to multiply the classical expression by a complex factor that involves h in both the numerator and the denominator. The influence of h in this complex factor would not disappear if it were set to zero or to any other value. Making an equation out of Planck's law that would reproduce the Rayleigh–Jeans law could not be done by changing the values of h, of the Boltzmann constant, or of any other constant or variable in the equation. In this case the picture given by classical physics is not duplicated by a range of results in the quantum picture.
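In modern notation (standard formulas, not spelled out in the passage above), the two laws for the spectral radiance are

$$B_\nu(T) = \frac{2h\nu^3}{c^2}\,\frac{1}{e^{h\nu/k_B T} - 1} \quad \text{(Planck)}, \qquad B_\nu(T) = \frac{2\nu^2 k_B T}{c^2} \quad \text{(Rayleigh–Jeans)}.$$

For $h\nu \ll k_B T$ the exponential expands as $e^{h\nu/k_B T} - 1 \approx h\nu/k_B T$, so the $h$ in the numerator cancels the $h$ in the denominator and Planck's law reduces to the Rayleigh–Jeans form; at high frequencies the exponential suppresses the classical divergence.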
The black-body problem was revisited in 1905, when Rayleigh and Jeans (on the one hand) and Einstein (on the other hand) independently proved that classical electromagnetism could never account for the observed spectrum. These proofs are commonly known as the "ultraviolet catastrophe", a name coined by Paul Ehrenfest in 1911. They contributed greatly (along with Einstein's work on the photoelectric effect) to convincing physicists that Planck's postulate of quantized energy levels was more than a mere mathematical formalism. The very first Solvay Conference in 1911 was devoted to "the theory of radiation and quanta". Max Planck received the 1918 Nobel Prize in Physics "in recognition of the services he rendered to the advancement of Physics by his discovery of energy quanta".
The photoelectric effect is the emission of electrons (called "photoelectrons") from a surface when light is shone on it. It was first observed by Alexandre Edmond Becquerel in 1839, although credit is usually reserved for Heinrich Hertz, who published the first thorough investigation in 1887. Another particularly thorough investigation was published by Philipp Lenard in 1902. Einstein's 1905 paper discussing the effect in terms of light quanta would earn him the Nobel Prize in 1921, after his predictions had been confirmed by the experimental work of Robert Andrews Millikan. The Nobel committee awarded the prize for his work on the photoelectric effect, rather than relativity, both because of a bias against purely theoretical physics not grounded in discovery or experiment, and because of dissent amongst its members as to the actual proof that relativity was real.
Prior to Einstein's paper, electromagnetic radiation such as visible light was considered to behave as a wave: hence the use of the terms "frequency" and "wavelength" to characterise different types of radiation. The energy transferred by a wave in a given time is called its intensity. The light from a theatre spotlight is more intense than the light from a domestic lightbulb; that is to say that the spotlight gives out more energy per unit time and per unit space (and hence consumes more electricity) than the ordinary bulb, even though the colour of the light might be very similar. Other waves, such as sound or the waves crashing against a seafront, also have their own intensity. However, the energy account of the photoelectric effect did not seem to agree with the wave description of light.
The "photoelectrons" emitted as a result of the photoelectric effect have a certain kinetic energy, which can be measured. This kinetic energy (for each photoelectron) is independent of the intensity of the light, but depends linearly on the frequency; and if the frequency is too low (corresponding to a photon energy that is less than the work function of the material), no photoelectrons are emitted at all, unless a plurality of photons, whose energetic sum is greater than the energy of the photoelectrons, acts virtually simultaneously (multiphoton effect) Assuming the frequency is high enough to cause the photoelectric effect, a rise in intensity of the light source causes more photoelectrons to be emitted with the same kinetic energy, rather than the same number of photoelectrons to be emitted with higher kinetic energy. |
Niels Bohr introduced the first quantized model of the atom in 1913, in an attempt to overcome a major shortcoming of Rutherford's classical model. In classical electrodynamics, a charge moving in a circle should radiate electromagnetic radiation. If that charge were an electron orbiting a nucleus, the radiation would cause it to lose energy and spiral down into the nucleus. Bohr solved this paradox with explicit reference to Planck's work: an electron in a Bohr atom could only have certain defined energies

$$E_n = -\frac{h c R_\infty}{n^2}$$

where $R_\infty$ is the experimentally determined Rydberg constant and $n$ is a positive integer labelling the orbit.
Bohr also introduced the quantity $\hbar = h/2\pi$, now known as the reduced Planck constant, as the quantum of angular momentum. At first, Bohr thought that this was the angular momentum of each electron in an atom: this proved incorrect and, despite developments by Sommerfeld and others, an accurate description of the electron angular momentum proved beyond the Bohr model. The correct quantization rules for electrons – in which the energy reduces to the Bohr model equation in the case of the hydrogen atom – were given by Heisenberg's matrix mechanics in 1925 and the Schrödinger wave equation in 1926: the reduced Planck constant remains the fundamental quantum of angular momentum. In modern terms, if J is the total angular momentum of a system with rotational invariance, and Jz the angular momentum measured along any given direction, these quantities can only take on the values

$$J^2 = j(j+1)\hbar^2, \qquad J_z = m\hbar, \qquad j = 0, \tfrac{1}{2}, 1, \tfrac{3}{2}, \ldots, \quad m \in \{-j, -j+1, \ldots, j\}.$$
Heisenberg's uncertainty principle states that the position uncertainty Δx and the momentum uncertainty Δp of a particle cannot both be made arbitrarily small:

$$\Delta x\, \Delta p \ge \frac{\hbar}{2}$$

where the uncertainty is given as the standard deviation of the measured value from its expected value. There are a number of other such pairs of physically measurable values which obey a similar rule. One example is time vs. energy. The either-or nature of uncertainty forces measurement attempts to choose between trade-offs, and given that they are quanta, the trade-offs often take the form of either-or (as in Fourier analysis), rather than the compromises and gray areas of time series analysis.
The Bohr magneton and the nuclear magneton are units which are used to describe the magnetic properties of the electron and atomic nuclei respectively. The Bohr magneton is the magnetic moment which would be expected for an electron if it behaved as a spinning charge according to classical electrodynamics. It is defined in terms of the reduced Planck constant, the elementary charge and the electron mass, all of which depend on the Planck constant: the final dependence on h^1/2 (r² > 0.995) can be found by expanding the variables.
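Explicitly, μB = eħ/(2me), where e is the elementary charge and me the electron mass, while the nuclear magneton replaces the electron mass with the proton mass, μN = eħ/(2mp); it is therefore smaller than the Bohr magneton by the proton-to-electron mass ratio of about 1836.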
In principle, the Planck constant could be determined by examining the spectrum of a black-body radiator or the kinetic energy of photoelectrons, and this is how its value was first calculated in the early twentieth century. In practice, these are no longer the most accurate methods. The CODATA value quoted here is based on three watt-balance measurements of KJ²RK and one inter-laboratory determination of the molar volume of silicon, but is mostly determined by a 2007 watt-balance measurement made at the U.S. National Institute of Standards and Technology (NIST). Five other measurements by three different methods were initially considered, but not included in the final refinement as they were too imprecise to affect the result.
There are both practical and theoretical difficulties in determining h. The practical difficulties can be illustrated by the fact that the two most accurate methods, the watt balance and the X-ray crystal density method, do not appear to agree with one another. The most likely reason is that the measurement uncertainty for one (or both) of the methods has been estimated too low – it is (or they are) not as precise as is currently believed – but for the time being there is no indication which method is at fault. |
The theoretical difficulties arise from the fact that all of the methods except the X-ray crystal density method rely on the theoretical basis of the Josephson effect and the quantum Hall effect. If these theories are slightly inaccurate – though there is no evidence at present to suggest they are – the methods would not give accurate values for the Planck constant. More importantly, the values of the Planck constant obtained in this way cannot be used as tests of the theories without falling into a circular argument. Fortunately, there are other statistical ways of testing the theories, and the theories have yet to be refuted. |
A watt balance is an instrument for comparing two powers, one of which is measured in SI watts and the other of which is measured in conventional electrical units. From the definition of the conventional watt W90, this gives a measure of the product KJ²RK in SI units, where RK is the von Klitzing constant which appears in the quantum Hall effect. If the theoretical treatments of the Josephson effect and the quantum Hall effect are valid, and in particular assuming that RK = h/e², the measurement of KJ²RK is a direct determination of the Planck constant.
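The algebra behind this is direct: the Josephson constant is KJ = 2e/h, so KJ²RK = (4e²/h²)(h/e²) = 4/h, and hence h = 4/(KJ²RK); the elementary charge cancels out of the product.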
The gyromagnetic ratio γ is the constant of proportionality between the frequency ν of nuclear magnetic resonance (or electron paramagnetic resonance for electrons) and the applied magnetic field B: ν = γB. It is difficult to measure gyromagnetic ratios precisely because of the difficulties in precisely measuring B, but the value for protons in water at 25 °C is known to better than one part per million. The protons are said to be "shielded" from the applied magnetic field by the electrons in the water molecule, the same effect that gives rise to chemical shift in NMR spectroscopy, and this is indicated by a prime on the symbol for the gyromagnetic ratio, γ′p. The gyromagnetic ratio is related to the shielded proton magnetic moment μ′p, the spin number I (I = 1⁄2 for protons) and the reduced Planck constant.
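For a spin-1⁄2 nucleus the resonance condition is hν = 2μ′pB, so with the convention ν = γB used here, γ′p = 2μ′p/h; expressed with angular frequencies the same ratio is 2μ′p/ħ. Numerically this gives the familiar proton NMR frequency of about 42.58 MHz per tesla.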
A further complication is that the measurement of γ′p involves the measurement of an electric current: this is invariably measured in conventional amperes rather than in SI amperes, so a conversion factor is required. The symbol Γ′p-90 is used for the measured gyromagnetic ratio using conventional electrical units. In addition, there are two methods of measuring the value, a "low-field" method and a "high-field" method, and the conversion factors are different in the two cases. Only the high-field value Γ′p-90(hi) is of interest in determining the Planck constant. |
The Faraday constant F is the charge of one mole of electrons, equal to the Avogadro constant NA multiplied by the elementary charge e. It can be determined by careful electrolysis experiments, measuring the amount of silver dissolved from an electrode in a given time and for a given electric current. In practice, it is measured in conventional electrical units, and so given the symbol F90. Substituting the definitions of NA and e, and converting from conventional electrical units to SI units, gives the relation to the Planck constant. |
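Schematically, F = NAe; expanding NA = Ar(e)Mucα²/(2R∞h) (which follows from the Rydberg constant relation me = 2R∞h/(cα²)) and e = KJh/2, and converting from conventional to SI electrical units using KJ²RK = 4/h, gives h = cα²Ar(e)Mu/(R∞KJ-90RK-90F90), so that a measurement of F90 fixes h.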
The X-ray crystal density method is primarily a method for determining the Avogadro constant NA but as the Avogadro constant is related to the Planck constant it also determines a value for h. The principle behind the method is to determine NA as the ratio between the volume of the unit cell of a crystal, measured by X-ray crystallography, and the molar volume of the substance. Crystals of silicon are used, as they are available in high quality and purity by the technology developed for the semiconductor industry. The unit cell volume is calculated from the spacing between two crystal planes referred to as d220. The molar volume Vm(Si) requires a knowledge of the density of the crystal and the atomic weight of the silicon used. The Planck constant is given by |
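h = √8·cα²Ar(e)Mu·d220³/(2R∞Vm(Si)), which follows from NA = Vm(Si)/(√8·d220³) – the cubic unit cell of silicon, of side √8·d220, contains eight atoms – combined with the standard relation h = cα²Ar(e)Mu/(2R∞NA) linking h to the Avogadro constant; here α is the fine-structure constant, Ar(e) the relative atomic mass of the electron, Mu the molar mass constant and R∞ the Rydberg constant.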
There are a number of proposals to redefine certain of the SI base units in terms of fundamental physical constants. This has already been done for the metre, which is defined in terms of a fixed value of the speed of light. The most urgent unit on the list for redefinition is the kilogram, whose value has been fixed for all science since 1889 by the mass of a small cylinder of platinum–iridium alloy kept in a vault just outside Paris. Nobody knows whether the mass of the International Prototype Kilogram has changed since 1889: by definition, its mass expressed in kilograms is exactly 1 kg, unchanged, and therein lies one of the problems. It is known, however, that over such a timescale the many similar Pt–Ir alloy cylinders kept in national laboratories around the world have drifted in relative mass by several tens of parts per million, however carefully they are stored, and the more so the more they have been taken out and used as mass standards. A change of several tens of micrograms in one kilogram is equivalent to the current uncertainty in the value of the Planck constant in SI units.
The legal process to change the definition of the kilogram is already underway, but it had been decided that no final decision would be made before the next meeting of the General Conference on Weights and Measures in 2011. (For more detailed information, see kilogram definitions.) The Planck constant is a leading contender to form the basis of the new definition, although not the only one. Possible new definitions include "the mass of a body at rest whose equivalent energy equals the energy of photons whose frequencies sum to 1.35639274×10⁵⁰ Hz", or simply "the kilogram is defined so that the Planck constant equals 6.62606896×10⁻³⁴ J⋅s".
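The two candidate definitions are numerically equivalent, since E = mc² and E = hν imply that one kilogram corresponds to a summed photon frequency of c²/h. A quick check (a sketch, using the value of h quoted above):

    # Summed photon frequency equivalent to 1 kg: nu = c^2 / h
    c = 299792458.0         # speed of light in m/s (exact by definition)
    h = 6.62606896e-34      # Planck constant in J*s, as quoted in the proposed definition

    print(c**2 / h)         # ~1.35639274e50 Hz, matching the frequency-based definition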
Public policy and political leadership help to "level the playing field" and drive the wider acceptance of renewable energy technologies. Countries such as Germany, Denmark, and Spain have led the way in implementing innovative policies, which have driven most of the growth over the past decade. As of 2014, Germany has a commitment to the "Energiewende" transition to a sustainable energy economy, and Denmark has a commitment to 100% renewable energy by 2050. There are now 144 countries with renewable energy policy targets.
Total investment in renewable energy (including small hydro-electric projects) was $244 billion in 2012, down 12% from 2011 mainly due to dramatically lower solar prices and weakened US and EU markets. As a share of total investment in power plants, wind and solar PV grew from 14% in 2000 to over 60% in 2012. The top countries for investment in recent years were China, Germany, Spain, the United States, Italy, and Brazil. Renewable energy companies include BrightSource Energy, First Solar, Gamesa, GE Energy, Goldwind, Sinovel, Trina Solar, Vestas and Yingli. |
EU member countries have shown support for ambitious renewable energy goals. In 2010, Eurobarometer polled the twenty-seven EU member states about the target "to increase the share of renewable energy in the EU by 20 percent by 2020". Most people in all twenty-seven countries either approved of the target or called for it to go further. Across the EU, 57 percent thought the proposed goal was "about right" and 16 percent thought it was "too modest." In comparison, 19 percent said it was "too ambitious". |
By the end of 2011, total renewable power capacity worldwide exceeded 1,360 GW, up 8% on the previous year. Renewables producing electricity accounted for almost half of the 208 GW of capacity added globally during 2011, with wind and solar photovoltaics (PV) accounting for almost 40% and 30% of that new renewable capacity, respectively. Based on REN21's 2014 report, renewables contributed 19 percent of global energy consumption in 2012 and 22 percent of electricity generation in 2013. This energy consumption is divided as 9% coming from traditional biomass, 4.2% as heat energy (non-biomass), 3.8% as hydroelectricity and 2% as electricity from wind, solar, geothermal, and biomass.
During the five years from the end of 2004 to the end of 2009, worldwide renewable energy capacity grew at rates of 10–60 percent annually for many technologies. In 2011, UN Under-Secretary-General Achim Steiner said: "The continuing growth in this core segment of the green economy is not happening by chance. The combination of government target-setting, policy support and stimulus funds is underpinning the renewable industry's rise and bringing the much needed transformation of our global energy system within reach." He added: "Renewable energies are expanding both in terms of investment, projects and geographical spread. In doing so, they are making an increasing contribution to combating climate change, countering energy poverty and energy insecurity".
According to a 2011 projection by the International Energy Agency, solar power plants may produce most of the world's electricity within 50 years, significantly reducing the emissions of greenhouse gases that harm the environment. The IEA has said: "Photovoltaic and solar-thermal plants may meet most of the world's demand for electricity by 2060 – and half of all energy needs – with wind, hydropower and biomass plants supplying much of the remaining generation". "Photovoltaic and concentrated solar power together can become the major source of electricity". |
In 2013, China led the world in renewable energy production, with a total capacity of 378 GW, mainly from hydroelectric and wind power. As of 2014, China leads the world in the production and use of wind power, solar photovoltaic power and smart grid technologies, generating almost as much hydro, wind and solar energy as all of France's and Germany's power plants combined. China's renewable energy sector is growing faster than its fossil fuel and nuclear power capacity. Since 2005, production of solar cells in China has expanded 100-fold. As Chinese renewable manufacturing has grown, the costs of renewable energy technologies have dropped. Innovation has helped, but the main driver of reduced costs has been market expansion.
Renewable energy technologies are getting cheaper, through technological change and through the benefits of mass production and market competition. A 2011 IEA report said: "A portfolio of renewable energy technologies is becoming cost-competitive in an increasingly broad range of circumstances, in some cases providing investment opportunities without the need for specific economic support," and added that "cost reductions in critical technologies, such as wind and solar, are set to continue." As of 2011, there had been substantial reductions in the cost of solar and wind technologies.
Renewable energy is also the most economic solution for new grid-connected capacity in areas with good resources. As the cost of renewable power falls, the scope of economically viable applications increases. Renewable technologies are now often the most economic solution for new generating capacity. Where "oil-fired generation is the predominant power generation source (e.g. on islands, off-grid and in some countries) a lower-cost renewable solution almost always exists today". As of 2012, renewable power generation technologies accounted for around half of all new power generation capacity additions globally. In 2011, additions included 41 gigawatts (GW) of new wind power capacity, 30 GW of PV, 25 GW of hydroelectricity, 6 GW of biomass, 0.5 GW of CSP, and 0.1 GW of geothermal power.
Biomass for heat and power is a fully mature technology which offers a ready disposal mechanism for municipal, agricultural, and industrial organic wastes. However, the industry has remained relatively stagnant over the decade to 2007, even though demand for biomass (mostly wood) continues to grow in many developing countries. One of the problems of biomass is that material directly combusted in cook stoves produces pollutants, leading to severe health and environmental consequences, although improved cook stove programmes are alleviating some of these effects. First-generation biomass technologies can be economically competitive, but may still require deployment support to overcome public acceptance and small-scale issues. |
Hydroelectricity is electricity generated by hydropower, that is, by harnessing the gravitational force of falling or flowing water. It is the most widely used form of renewable energy, accounting for 16 percent of global electricity generation (3,427 terawatt-hours in 2010), with production expected to grow by about 3.1% each year for the next 25 years. Hydroelectric plants have the advantage of being long-lived: many existing plants have operated for more than 100 years.
Hydropower is produced in 150 countries, with the Asia-Pacific region generating 32 percent of global hydropower in 2010. China is the largest hydroelectricity producer, with 721 terawatt-hours of production in 2010, representing around 17 percent of domestic electricity use. There are now three hydroelectricity plants larger than 10 GW: the Three Gorges Dam in China, Itaipu Dam across the Brazil/Paraguay border, and Guri Dam in Venezuela. The cost of hydroelectricity is low, making it a competitive source of renewable electricity. The average cost of electricity from a hydro plant larger than 10 megawatts is 3 to 5 U.S. cents per kilowatt-hour. |
Geothermal power capacity grew from around 1 GW in 1975 to almost 10 GW in 2008. The United States is the world leader in terms of installed capacity, representing 3.1 GW. Other countries with significant installed capacity include the Philippines (1.9 GW), Indonesia (1.2 GW), Mexico (1.0 GW), Italy (0.8 GW), Iceland (0.6 GW), Japan (0.5 GW), and New Zealand (0.5 GW). In some countries, geothermal power accounts for a significant share of the total electricity supply, such as in the Philippines, where geothermal represented 17 percent of the total power mix at the end of 2008. |
Many solar photovoltaic power stations have been built, mainly in Europe. As of July 2012, the largest photovoltaic (PV) power plants in the world are the Agua Caliente Solar Project (USA, 247 MW), Charanka Solar Park (India, 214 MW), Golmud Solar Park (China, 200 MW), Perovo Solar Park (Russia, 100 MW), Sarnia Photovoltaic Power Plant (Canada, 97 MW), Brandenburg-Briest Solarpark (Germany, 91 MW), Solarpark Finow Tower (Germany, 84.7 MW), Montalto di Castro Photovoltaic Power Station (Italy, 84.2 MW), Eggebek Solar Park (Germany, 83.6 MW), Senftenberg Solarpark (Germany, 82 MW), Finsterwalde Solar Park (Germany, 80.7 MW), Okhotnykovo Solar Park (Russia, 80 MW), Lopburi Solar Farm (Thailand, 73.16 MW), Rovigo Photovoltaic Power Plant (Italy, 72 MW), and the Lieberose Photovoltaic Park (Germany, 71.8 MW).
There are also many large plants under construction. The Desert Sunlight Solar Farm under construction in Riverside County, California and Topaz Solar Farm being built in San Luis Obispo County, California are both 550 MW solar parks that will use thin-film solar photovoltaic modules made by First Solar. The Blythe Solar Power Project is a 500 MW photovoltaic station under construction in Riverside County, California. The California Valley Solar Ranch (CVSR) is a 250 megawatt (MW) solar photovoltaic power plant, which is being built by SunPower in the Carrizo Plain, northeast of California Valley. The 230 MW Antelope Valley Solar Ranch is a First Solar photovoltaic project which is under construction in the Antelope Valley area of the Western Mojave Desert, and due to be completed in 2013. The Mesquite Solar project is a photovoltaic solar power plant being built in Arlington, Maricopa County, Arizona, owned by Sempra Generation. Phase 1 will have a nameplate capacity of 150 megawatts. |
Some of the second-generation renewables, such as wind power, have high potential and have already realised relatively low production costs. Global wind power installations increased by 35,800 MW in 2010, bringing total installed capacity up to 194,400 MW, a 22.5% increase on the 158,700 MW installed at the end of 2009. The increase for 2010 represents investments totalling €47.3 billion (US$65 billion), and for the first time more than half of all new wind power was added outside of the traditional markets of Europe and North America, mainly driven by the continuing boom in China, which accounted for nearly half of all of the installations at 16,500 MW. China now has 42,300 MW of wind power installed. Wind power accounts for approximately 19% of electricity generated in Denmark, 9% in Spain and Portugal, and 6% in Germany and the Republic of Ireland. In the Australian state of South Australia, wind power, championed by Premier Mike Rann (2002–2011), now comprises 26% of the state's electricity generation, edging out coal-fired power. At the end of 2011 South Australia, with 7.2% of Australia's population, had 54% of the nation's installed wind power capacity. Wind power's share of worldwide electricity usage at the end of 2014 was 3.1%.
As of 2014, the wind industry in the USA is able to produce more power at lower cost by using taller wind turbines with longer blades, capturing the faster winds at higher elevations. This has opened up new opportunities: in Indiana, Michigan, and Ohio, the price of power from wind turbines built 300 to 400 feet above the ground can now compete with that from conventional fossil fuels such as coal. Prices have fallen to about 4 cents per kilowatt-hour in some cases, and utilities have been increasing the amount of wind energy in their portfolios, saying it is their cheapest option.
Solar thermal power stations include the 354 megawatt (MW) Solar Energy Generating Systems power plant in the USA, Solnova Solar Power Station (Spain, 150 MW), Andasol solar power station (Spain, 100 MW), Nevada Solar One (USA, 64 MW), PS20 solar power tower (Spain, 20 MW), and the PS10 solar power tower (Spain, 11 MW). The 370 MW Ivanpah Solar Power Facility, located in California's Mojave Desert, is the world's largest solar-thermal power plant project currently under construction. Many other plants are under construction or planned, mainly in Spain and the USA. In developing countries, three World Bank projects for integrated solar thermal/combined-cycle gas-turbine power plants in Egypt, Mexico, and Morocco have been approved. |
Nearly all the gasoline sold in the United States today is mixed with 10 percent ethanol, a mix known as E10, and motor vehicle manufacturers already produce vehicles designed to run on much higher ethanol blends. Ford, DaimlerChrysler, and GM are among the automobile companies that sell flexible-fuel cars, trucks, and minivans that can use gasoline and ethanol blends ranging from pure gasoline up to 85% ethanol (E85). The challenge is to expand the market for biofuels beyond the farm states where they have been most popular to date. The Energy Policy Act of 2005, which calls for 7.5 billion US gallons (28,000,000 m³) of biofuels to be used annually by 2012, will also help to expand the market.
According to the International Energy Agency, cellulosic ethanol biorefineries could allow biofuels to play a much bigger role in the future than organizations such as the IEA previously thought. Cellulosic ethanol can be made from plant matter composed primarily of inedible cellulose fibers that form the stems and branches of most plants. Crop residues (such as corn stalks, wheat straw and rice straw), wood waste, and municipal solid waste are potential sources of cellulosic biomass. Dedicated energy crops, such as switchgrass, are also promising cellulose sources that can be sustainably produced in many regions. |