A cut-off low (or cutoff low), sometimes referred to as the weatherman's woe, is defined by the National Weather Service as "a closed upper-level low which has become completely displaced (cut off) from basic westerly current, and moves independently of that current". Cut-off lows form in mid-latitudes (usually in the subtropics, or between 20° and 45°) and can remain nearly stationary for days. Formation A cut-off low is a cold-core low in which the wind in the upper levels of the troposphere is "cut off" from the primary westerly winds of the jet stream. They are formed when a trough in the upper-air flow pinches off and separates into a closed circulation. A cut-off low is defined by concentric isotherms around the core of the low. Because they are a feature of the mid- to upper troposphere, they may not be visible on a surface weather analysis. Because they are separated from the main westerly flow, cut-off lows can move slowly and erratically. In certain arrangements, known as a block or a blocking pattern, they can remain in place for long periods of time. Whilst cut-off lows can form at any time of the year, they are more common in autumn, winter and spring in much of the areas affected, particularly in Australia and the Mediterranean Basin, when a mass of polar air is brought towards more southern regions (or northern, in the Southern Hemisphere) by the jet stream moving between 5 and 9 km altitude. Characteristics The diameter of a cut-off low can vary from a few hundred to a thousand kilometers. The air within it is homogeneous, with no front line separating it from the surrounding masses, yet it has a decisive influence on the weather. It most often leads to an atmospheric blocking circulation in which an upper-level low forms. High cold drops, between 1,000 and 10,000 m, are regions of low stability, while low cold drops are regions of relatively stable air. Composed of very cold air of polar origin, a cut-off low typically has a horizontal extent of 300 to 1,000 km and sits 5,000 to 10,000 meters above sea level. Movement A cut-off low moves slowly, typically over a confined region, where it produces heavy rainfall. Cut-off lows are volatile, baroclinic systems that meander to the west with strong convergence and an ascending motion, especially when they are deepening. A cut-off low can persist from a few days to more than a week. It may be absorbed into the general circulation while another forms in the same place a few days later. The evolution and movement of a cold low, like any weather blockage, is therefore uncertain. Effects Cut-off lows typically create unsettled weather and, in the warm season, they may produce numerous thunderstorms. Because a cut-off low moves slowly, typically over a confined region, it can produce heavy rainfall and result in severe flooding. For example, a cut-off low was responsible for the July 2021 floods in Europe. Regional impacts In southeastern Australia, cut-off lows can be associated with Australian east coast lows, which are subtropical cyclones or extratropical cyclones that originate from the east. In eastern Australia, a cut-off low can bring widespread, accumulating snowfall in low-lying areas as well as elevated regions in the subtropics. In Spain, cut-off lows are often referred to by the name "cold drop". See also Cold-core low References Meteorological phenomena Cold Weather events Atmospheric dynamics
wiki
Blacktop or asphalt concrete is a composite material used to surface roads. The word blacktop can also refer directly to a paved road. Blacktop or Black Top may also refer to: Black Top Records, a record label Blacktop Peak, a mountain in California Black Top, a British jazz duo of Orphy Robinson and Pat Thomas See also Macadam, a type of road construction Tarmacadam, a road surfacing material of macadam surfaces, tar, and sand
wiki
The white-tailed mountain vole (Alticola albicauda) is a species of vole in the family Cricetidae. It is found in India and Pakistan. References Alticola Rodents of Pakistan Rodents of India Mammals described in 1894 Taxonomy articles created by Polbot Taxobox binomials not recognized by IUCN
wiki
Exemption may refer to: Tax exemption, which allows a certain amount of income or other value to be legally excluded to avoid or reduce taxation Exemption (Catholic canon law), the whole or partial release of an ecclesiastical person, corporation, or institution in the Roman Catholic Church from the authority of the ecclesiastical superior next higher in rank Stauropegic exemption, a specific type of ecclesiastical exemption in Eastern Christianity Grandfather clause, an exemption that allows a pre-existing condition to continue, even if such a condition is now prohibited from being begun anew Exempt employee, one who is exempt from the Fair Labor Standards Act, i.e. is not entitled to overtime pay and other worker's benefits stated in the FLSA Loophole, a weakness or exception that allows a system, such as a law or security, to be circumvented or otherwise avoided
wiki
Jambalaya may refer to: Jambalaya (food) – a dish from the Cajun region of the United States. Jambalaya (song) – a 1952 song by Hank Williams
wiki
A library catalog (or library catalogue in British English) is a register of all bibliographic items found in a library or group of libraries, such as a network of libraries at several locations. A catalog for a group of libraries is also called a union catalog. A bibliographic item can be any information entity (e.g., books, computer files, graphics, realia, cartographic materials, etc.) that is considered library material (e.g., a single novel in an anthology), or a group of library materials (e.g., a trilogy), or linked from the catalog (e.g., a webpage) as far as it is relevant to the catalog and to the users (patrons) of the library. The card catalog was a familiar sight to library users for generations, but it has been effectively replaced by the online public access catalog (OPAC). Some still refer to the online catalog as a "card catalog". Some libraries with OPAC access still have card catalogs on site, but these are now strictly a secondary resource and are seldom updated. Many libraries that retain their physical card catalog will post a sign advising the last year that the card catalog was updated. Some libraries have eliminated their card catalog in favor of the OPAC in order to save space for other uses, such as additional shelving. The largest international library catalog in the world is the WorldCat union catalog managed by the non-profit library cooperative OCLC. In January 2021, WorldCat had over 500,000,000 catalog records and over 3 billion library holdings. Goal Antonio Genesio Maria Panizzi in 1841 and Charles Ammi Cutter in 1876 undertook pioneering work in the definition of early cataloging rule sets formulated according to theoretical models. Cutter made an explicit statement regarding the objectives of a bibliographic system in his Rules for a Printed Dictionary Catalog. According to Cutter, those objectives were: 1. to enable a person to find a book of which any of the following is known (Identifying objective): the author, the title, the subject, or the date of publication; 2. to show what the library has (Collocating objective): by a given author, on a given subject, or in a given kind of literature; 3. to assist in the choice of a book (Evaluating objective): as to its edition (bibliographically) and as to its character (literary or topical). These objectives can still be recognized in more modern definitions formulated throughout the 20th century. Other influential pioneers in this area were Shiyali Ramamrita Ranganathan and Seymour Lubetzky. Cutter's objectives were revised by Lubetzky and the Conference on Cataloging Principles (CCP) in Paris in 1960/1961, resulting in the Paris Principles (PP). A more recent attempt to describe a library catalog's functions was made in 1998 with Functional Requirements for Bibliographic Records (FRBR), which defines four user tasks: find, identify, select, and obtain. A catalog also serves as an inventory of the library's contents. If an item is not found in the catalog, the user may continue their search at another library. Catalog card A catalog card is an individual entry in a library catalog containing bibliographic information, including the author's name, title, and location. Eventually the mechanization of the modern era brought the efficiencies of card catalogs. It was around 1780 that the first card catalog appeared in Vienna. 
It solved the problems of the structural catalogs in marble and clay from ancient times and the later codex—handwritten and bound—catalogs, which were manifestly inflexible and costly to edit to reflect a changing collection. The first cards may have been French playing cards, which in the 1700s were blank on one side. In November 1789, during the dechristianization campaign of the French Revolution, the process of collecting all books from religious houses was initiated. Using these books in a new system of public libraries required an inventory of all books. The backs of the playing cards contained the bibliographic information for each book, and this inventory became known as the "French Cataloging Code of 1791". English inventor Francis Ronalds began using a catalog of cards to manage his growing book collection around 1815, which has been described as the first practical use of the system. In the mid-1800s, Natale Battezzati, an Italian publisher, developed a card system for booksellers in which cards represented authors, titles and subjects. Very shortly afterward, Melvil Dewey and other American librarians began to champion the card catalog because of its great expandability. In some libraries, books were cataloged by size, while other libraries organized them only by the author's name. This made finding a book difficult. The first issue of Library Journal, the official publication of the American Library Association (ALA), made clear that the most pressing issues facing libraries were the lack of a standardized catalog and an agency to administer a centralized catalog. Responding to the standardization matter, the ALA formed a committee that quickly recommended the "Harvard College-size" cards as used at Harvard and the Boston Athenaeum. However, in the same report, the committee also suggested that a somewhat larger card would be preferable. By the end of the nineteenth century, the bigger card won out, mainly due to the fact that the card was already the "postal size" used for postcards. Melvil Dewey saw well beyond the importance of standardized cards and sought to outfit virtually all facets of library operations. To that end, he established a Supplies Department as part of the ALA, later to become a stand-alone company renamed the Library Bureau. In one of its early distribution catalogs, the bureau pointed out that "no other business had been organized with the definite purpose of supplying libraries". With a focus on machine-cut index cards and the trays and cabinets to contain them, the Library Bureau became a veritable furniture store, selling tables, chairs, shelves and display cases, as well as date stamps, newspaper holders, hole punchers, paperweights, and virtually anything else a library could possibly need. With this one-stop shopping service, Dewey left an enduring mark on libraries across the country. Uniformity spread from library to library. Dewey and others devised a system where books were organized by subject, then alphabetized based on the author's name. Each book was assigned a call number which identified the subject and location, with a decimal point dividing different sections of the call number. The call number on the card matched a number written on the spine of each book. In 1860, Ezra Abbot began designing a card catalog that was easily accessible and secure for keeping the cards in order; he managed this by placing the cards on edge between two wooden blocks. 
He published his findings in the annual report of the library for 1863, and they were adopted by many American libraries. Work on the catalog began in 1862, and within the first year 35,762 catalog cards had been created. Catalog cards were the Harvard College size. One of the first acts of the newly formed American Library Association in 1908 was to set standards for the size of the cards used in American libraries, thus making the manufacture of cards, and of the cabinets that held them, uniform. OCLC, a major supplier of catalog cards, printed its last one in October 2015. In a physical catalog, the information about each item is on a separate card, which is placed in order in the catalog drawer depending on the type of record. For a non-fiction record, Charles A. Cutter's classification system would help the patron find the book they wanted quickly. Cutter's classification system is as follows: A: encyclopedias, periodicals, society publications; B–D: philosophy, psychology, religion; E–G: biography, history, geography, travels; H–K: social sciences, law; L–T: science, technology; X–Z: philology, book arts, bibliography. Types Traditionally, there are the following types of catalog: Author catalog: a formal catalog, sorted alphabetically according to the names of authors, editors, illustrators, etc. Subject catalog: a catalog sorted by subject. Title catalog: a formal catalog, sorted alphabetically according to the titles of the entries. Dictionary catalog: a catalog in which all entries (author, title, subject, series) are interfiled in a single alphabetical order. This was a widespread form of card catalog in North American libraries prior to the introduction of the computer-based catalog. Keyword catalog: a subject catalog, sorted alphabetically according to some system of keywords. Mixed alphabetic catalog forms: sometimes, one finds a mixed author / title, or an author / title / keyword catalog. Systematic catalog: a subject catalog, sorted according to some systematic subdivision of subjects; also called a classified catalog. Shelf list catalog: a formal catalog with entries sorted in the same order as bibliographic items are shelved. This catalog may also serve as the primary inventory for the library. History The earliest librarians created rules for how to record the details of the catalog. By 700 BCE the Assyrians followed the rules set down by the Babylonians. The seventh-century BCE Library of Ashurbanipal was led by the librarian Ibnissaru, who prescribed a catalog of clay tablets by subject. Subject catalogs were the rule of the day, and author catalogs were unknown at that time. The frequent use of subject-only catalogs hints that there was a code of practice among early catalog librarians and that they followed some set of rules for subject assignment and the recording of the details of each item. These rules created efficiency through consistency—the catalog librarian knew how to record each item without reinventing the rules each time, and the reader knew what to expect with each visit. The task of recording the contents of libraries is more than an instinct or a compulsive tic exercised by librarians; it began as a way to broadcast to readers what is available among the stacks of materials. The tradition of open stacks of printed books is paradigmatic to modern American library users, but ancient libraries featured stacks of clay tablets or pre-paper scrolls that resisted browsing. 
Gottfried van Swieten introduced the world's first card catalog around 1780 while serving as Prefect of the Imperial Library in Vienna. During the early modern period, libraries were organized through the direction of the librarian in charge. There was no universal method, so some books were organized by language or book material, for example, but most scholarly libraries had recognizable categories (like philosophy, saints, mathematics). The first library to list titles alphabetically under each subject was the Sorbonne library in Paris. Library catalogs originated as manuscript lists, arranged by format (folio, quarto, etc.) or in a rough alphabetical arrangement by author. Before printing, librarians had to enter new acquisitions into the margins of the catalog list until a new one was created. Because of the nature of creating texts at this time, most catalogs were not able to keep up with new acquisitions. When the printing press became well-established, strict cataloging became necessary because of the influx of printed materials. Printed catalogs, sometimes called dictionary catalogs, began to be published in the early modern period and enabled scholars outside a library to gain an idea of its contents. Copies of these in the library itself would sometimes be interleaved with blank leaves on which additions could be recorded, or bound as guardbooks in which slips of paper were bound in for new entries. Slips could also be kept loose in cardboard or tin boxes, stored on shelves. The first card catalogs appeared in the late 19th century after the standardization of the 5 in. × 3 in. card for personal filing systems, enabling much more flexibility, and towards the end of the 20th century the online public access catalog was developed (see below). These gradually became more common as some libraries progressively abandoned such other catalog formats as paper slips (either loose or in sheaf catalog form) and guardbooks. The beginning of the Library of Congress's catalog card service in 1911 led to the use of these cards in the majority of American libraries. An equivalent scheme in the United Kingdom was operated by the British National Bibliography from 1956 and was subscribed to by many public and other libraries. c. Seventh century BCE: the royal Library of Ashurbanipal at Nineveh had 30,000 clay tablets, in several languages, organized according to shape and separated by content. Ashurbanipal sent scribes to transcribe works in other libraries within the kingdom. c. Third century BCE: Pinakes by Callimachus at the Library of Alexandria was arguably the first library catalog. 9th century: Libraries of Carolingian schools and monasteries employ library catalog systems to organize and loan out books. c. 10th century: The library of the Persian city of Shiraz had over 300 rooms and thorough catalogs to help locate texts; these were kept in the storage chambers of the library and covered every topic imaginable. c. 1246: The library at Amiens Cathedral in France uses call numbers associated with the location of books. c. 1542–1605: The Mughal emperor Akbar was a warrior, sportsman, and famous cataloger. He organized a catalog of the Imperial Library's 24,000 texts, and he did most of the classifying himself. 1595: The Nomenclator of Leiden University Library appears, the first printed catalog of an institutional library. Renaissance era: In Paris, the Sorbonne Library was one of the first libraries to list titles alphabetically based on the subject they fell under. 
This became a new organizational method for catalogs. Early 1600s: Sir Thomas Bodley divided cataloging into three different categories: history, poesy, and philosophy. 1674: Thomas Hyde's catalog for the Bodleian Library. 1791: The French Cataloging Code of 1791. 1815: Thomas Jefferson sells his personal library to the US government to establish the Library of Congress. He had organized his library by adapting Francis Bacon's organization of knowledge, specifically using Memory, Reason, and Imagination as his three areas, which were then broken down into 44 subdivisions. 1874/1886: the "Wroclaw instructions" by Karl Dziatzko. 1899: Preußische Instruktionen (PI) (English: Prussian instructions) for scientific libraries in German-speaking countries and beyond. 1932: DIN 1505. 1938: the "Berlin instructions" (BA) for public libraries in Germany. 1961: Paris Principles (PP), internationally agreed-upon principles for cataloging. 1967: Anglo-American Cataloguing Rules (AACR). 1971: International Standard Bibliographic Description (ISBD). 1976/1977: Regeln für die alphabetische Katalogisierung (RAK) (English: Rules for alphabetical cataloging) in Germany and Austria. More on the early history of library catalogs was collected by Strout in 1956. Sorting In a title catalog, one can distinguish two sort orders: In the grammatical sort order (used mainly in older catalogs), the most important word of the title is the first sort term. The importance of a word is measured by grammatical rules; for example, the first noun may be defined to be the most important word. In the mechanical sort order, the first word of the title is the first sort term. Most new catalogs use this scheme, but still include a trace of the grammatical sort order: they neglect an article (The, A, etc.) at the beginning of the title (a small illustrative sketch of such a filing key appears at the end of this entry). The grammatical sort order has the advantage that often, the most important word of the title is also a good keyword (question 3), and it is the word most users remember first when their memory is incomplete. However, it has the disadvantage that many elaborate grammatical rules are needed, so that only expert users may be able to search the catalog without help from a librarian. In some catalogs, persons' names are standardized (i.e., the name of the person is always cataloged and sorted in a standard form), even if it appears differently in the library material. This standardization is achieved by a process called authority control. Simply put, authority control is the establishment and maintenance of consistent forms of terms – such as names, subjects, and titles – to be used as headings in bibliographic records. An advantage of authority control is that it is easier to answer question 2 (Which works of some author does the library have?). On the other hand, it may be more difficult to answer question 1 (Does the library have some specific material?) if the material spells the author's name in a peculiar variant. For the cataloger, it may be too much work to check whether Smith, J. is Smith, John or Smith, Jack. For some works, even the title can be standardized. The technical term for this is uniform title. For example, translations and re-editions are sometimes sorted under their original title. In many catalogs, parts of the Bible are sorted under the standard name of the book(s) they contain. The plays of William Shakespeare are another frequently cited example of the role played by a uniform title in the library catalog. Many complications about alphabetic sorting of entries arise. 
Some examples: Some languages have sorting conventions that differ from those of the catalog's language. For example, some Dutch catalogs sort IJ as Y. Should an English catalog follow suit? And should a Dutch catalog sort non-Dutch words the same way? There are also pseudo-ligatures which sometimes come at the beginning of a word, such as Œdipus. See also Collation and Locale (computer software). Some titles contain numbers, for example 2001: A Space Odyssey. Should they be sorted as numbers, or spelled out as Two thousand and one? (Book titles that begin with non-numeral, non-alphabetic glyphs such as #1 are similarly very difficult. Books which have diacritics in the first letter are a similar but far more common problem; casefolding of the title is standard, but stripping the diacritics off can change the meaning of the words.) de Balzac, Honoré or Balzac, Honoré de? Ortega y Gasset, José or Gasset, José Ortega y? (In the first example, "de Balzac" is the legal and cultural last name; splitting it apart would be the equivalent of listing a book about tennis under "-enroe, John Mac-", for instance. In the second example, culturally and legally the last name is "Ortega y Gasset", which is sometimes shortened to simply "Ortega" as the masculine last name; again, splitting is culturally incorrect by the standards of the culture of the author, but defies the normal understanding of what a 'last name' is—i.e. the final word in the ordered list of names that define a person—in cultures where multi-word last names are rare. See also authors such as Sun Tzu, where in the author's culture the surname is traditionally printed first, and thus the 'last name' in terms of order is in fact the person's first name culturally.) Classification In a subject catalog, one has to decide on which classification system to use. The cataloger will select appropriate subject headings for the bibliographic item and a unique classification number (sometimes known as a "call number") which is used not only for identification but also for the purposes of shelving, placing items with similar subjects near one another, which aids in browsing by library users, who are thus often able to take advantage of serendipity in their search process. Online catalogs Online cataloging, through such systems as the Dynix software developed in 1983 and used widely through the late 1990s, has greatly enhanced the usability of catalogs, thanks to the rise of MARC standards (an acronym for MAchine-Readable Cataloging) in the 1960s. Rules governing the creation of MARC catalog records include not only formal cataloging rules such as the Anglo-American Cataloguing Rules, second edition (AACR2), and Resource Description and Access (RDA), but also rules specific to MARC, available from both the U.S. Library of Congress and from OCLC, which builds and maintains WorldCat. MARC was originally used to automate the creation of physical catalog cards, but its use evolved into direct access to the MARC computer files during the search process. OPACs have enhanced usability over traditional card formats because: The online catalog does not need to be sorted statically; the user can choose author, title, keyword, or systematic order dynamically. Most online catalogs allow searching for any word in a title or other field, increasing the ways to find a record. Many online catalogs allow links between several variants of an author's name. 
The elimination of paper cards has made the information more accessible to many people with disabilities, such as the visually impaired, wheelchair users, and those who suffer from mold allergies or other paper- or building-related problems. Physical storage space is considerably reduced. Updates are significantly more efficient. See also Cataloging International Standard Bibliographic Description (ISBD) Social cataloging application References Sources Further reading Library equipment
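The filing rules discussed above (mechanical sort order, neglected leading articles, diacritic folding) can be made concrete in a few lines of code. The following is a minimal, hypothetical sketch, not any standard cataloging implementation; real filing rules (e.g., ALA filing rules) are considerably more involved, and the article-stripping here handles English only.

```python
import unicodedata

LEADING_ARTICLES = {"the", "a", "an"}  # English only; real rules are per-language

def mechanical_sort_key(title: str) -> str:
    """Build a filing key: fold case and diacritics, then drop a leading article."""
    # Fold diacritics: decompose (NFD), then drop combining marks,
    # so "Édouard" files under plain "Edouard".
    decomposed = unicodedata.normalize("NFD", title)
    folded = "".join(ch for ch in decomposed if not unicodedata.combining(ch))
    words = folded.casefold().split()
    if words and words[0] in LEADING_ARTICLES:
        words = words[1:]  # "The Hobbit" files under "hobbit"
    return " ".join(words)

titles = ["The Hobbit", "A Tale of Two Cities", "Édouard Manet", "2001: A Space Odyssey"]
for t in sorted(titles, key=mechanical_sort_key):
    print(t)
```

Note that the hard cases the article raises remain deliberately unsolved here: numerals sort as digit strings, the ligature Œ does not decompose under Unicode normalization, and multi-word surnames are a cataloging policy decision rather than a string operation.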
wiki
Aretha's Greatest Hits is an album by the American soul singer Aretha Franklin, released in 1971 by Atlantic Records. Track listing Side A Side B External links
wiki
Is This Love may refer to: "Is This Love" (Daryl Braithwaite song) "Is This Love?" (Clap Your Hands Say Yeah song) "Is This Love?" (The Fireman song) "Is This Love" (Aiden Grimshaw song) "Is This Love" (Bob Marley & The Wailers song) "Is This Love?" (Alison Moyet song) "Is This Love?" (Bonnie Pink song) "Is This Love" (Survivor song) "Is This Love" (Whitesnake song) "Is This Love?", a song by Daði Freyr Pétursson "Is This Love", a song by Chris Brown from his 2005 album Chris Brown "Is This Love ('09)", a song by Eminem, featuring 50 Cent, from his 2022 album Curtain Call 2 "Step You/Is This Love?", a 2005 song by Ayumi Hamasaki See also Is It Love? (disambiguation)
wiki
Pallidol is a resveratrol dimer. It is found in red wine, in Cissus pallida, and in Parthenocissus laetevirens. References External links Pallidol on www.phenol-explorer.eu Alcohols Antioxidants Phytochemicals
wiki
Reverse image search is a content-based image retrieval (CBIR) query technique that involves providing the CBIR system with a sample image that it will then base its search upon; in terms of information retrieval, the sample image itself formulates the query. In particular, reverse image search is characterized by a lack of search terms. This effectively removes the need for a user to guess at keywords or terms that may or may not return a correct result. Reverse image search also allows users to discover content that is related to a specific sample image, to gauge the popularity of an image, and to discover manipulated versions and derivative works. Uses Reverse image search may be used to: Locate the source of an image. Find higher resolution versions. Discover webpages where the image appears. Find the content creator. Get information about an image. Algorithms Commonly used reverse image search algorithms include: Scale-invariant feature transform - to extract local features of an image Maximally stable extremal regions Vocabulary tree Application in popular search systems Yandex Yandex Images offers a global reverse image and photo search. The site uses the standard content-based image retrieval (CBIR) technology used by many other sites, but additionally uses artificial intelligence-based technology to locate further results based on the query. Users can drag and drop images to the toolbar for the site to complete a search on the internet for similar-looking images. Yandex Images searches some obscure social media sites in addition to more common ones, offering content owners a means of tracking unauthorized use of their image or photo intellectual property. Google Images Google's Search by Image is a feature that uses reverse image search and allows users to search for related images by uploading an image or copying the image URL. Google accomplishes this by analyzing the submitted picture and constructing a mathematical model of it. The model is then compared with other images in Google's databases before matching and similar results are returned. When available, Google also uses metadata about the image, such as its description. In 2022 the feature was replaced by Google Lens as the default visual search method on Google, and the old Search by Image function remains available within Google Lens. TinEye TinEye is a search engine specialized for reverse image search. Upon submission of an image, TinEye creates a "unique and compact digital signature or fingerprint" of said image and matches it with other indexed images. This procedure is able to match even heavily edited versions of the submitted image, but will not usually return similar images in the results. Pixsy Pixsy reverse image search technology detects image matches on the public internet for images uploaded to the Pixsy platform. New matches are automatically detected and alerts are sent to the user. For unauthorized use, Pixsy offers a compensation recovery service for commercial use of the image owner's work. Pixsy partners with over 25 law firms and attorneys around the world to bring resolution for copyright infringement. Pixsy is the strategic image monitoring service for the Flickr platform and its users. eBay eBay ShopBot uses reverse image search to find products from a user-uploaded photo. eBay uses a ResNet-50 network for category recognition; image hashes are stored in Google Bigtable; Apache Spark jobs are operated by Google Cloud Dataproc for image hash extraction; and the image ranking service is deployed by Kubernetes. 
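As a concrete illustration of the local-feature approach listed under Algorithms above, here is a minimal sketch of SIFT matching with OpenCV. It is a generic example of the technique, not the pipeline of any engine named in this article, and the file names are placeholder assumptions.

```python
# Minimal local-feature matching sketch (OpenCV >= 4.4, where SIFT is in the main build).
import cv2

query = cv2.imread("query.jpg", cv2.IMREAD_GRAYSCALE)          # sample image
candidate = cv2.imread("candidate.jpg", cv2.IMREAD_GRAYSCALE)  # indexed image

sift = cv2.SIFT_create()
kp_q, des_q = sift.detectAndCompute(query, None)      # keypoints + 128-d descriptors
kp_c, des_c = sift.detectAndCompute(candidate, None)

# Brute-force matcher with Lowe's ratio test to keep only distinctive matches.
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des_q, des_c, k=2)
good = []
for pair in matches:
    if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
        good.append(pair[0])

# A crude similarity score: fraction of query keypoints with a distinctive match.
score = len(good) / max(len(kp_q), 1)
print(f"{len(good)} good matches, score = {score:.2f}")
```

A production engine would not compare images pairwise like this; descriptors are typically quantized into a vocabulary tree or an approximate-nearest-neighbor index so that a query touches only a small fraction of the collection.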
SK Planet SK Planet uses reverse image search to find related fashion items on its e-commerce website. It developed its vision encoder network based on the TensorFlow Inception-v3 model, tuned for speed of convergence and generalization in production usage. A recurrent neural network is used for multi-class classification, and fashion-product region-of-interest detection is based on Faster R-CNN. SK Planet's reverse image search system was built in less than 100 man-months. Alibaba Alibaba released the Pailitao application in 2014. Pailitao (literally "shopping through a camera") allows users to search for items on Alibaba's e-commerce platform by taking a photo of the query object. The Pailitao application uses a deep CNN model with branches for joint detection and feature learning to discover the detection mask and exact discriminative feature without background disturbance. GoogLeNet V1 is employed as the base model for category prediction and feature learning. Pinterest Pinterest acquired the startup company VisualGraph in 2014 and introduced visual search on its platform. In 2015, Pinterest published a paper at the ACM Conference on Knowledge Discovery and Data Mining and disclosed the architecture of the system. The pipeline uses Apache Hadoop, the open-source Caffe convolutional neural network framework, Cascading for batch processing, PinLater for messaging, and Apache HBase for storage. Image characteristics, including local features, deep features, salient color signatures and salient pixels, are extracted from user uploads. The system is operated by Amazon EC2, and requires only a cluster of 5 GPU instances to handle daily image uploads onto Pinterest. By using reverse image search, Pinterest is able to extract visual features from fashion objects (e.g. shoes, dress, glasses, bag, watch, pants, shorts, bikini, earrings) and offer product recommendations that look similar. LykDat LykDat uses reverse image search to find fashion products across various online stores on the web. LykDat also provides a Twitter bot that helps users carry out reverse image searches of photos they find within Twitter. JD.com JD.com disclosed the design and implementation of its real-time visual search system at the Middleware '18 conference. The peer-reviewed paper focuses on the algorithms used by JD's distributed hierarchical image feature extraction, indexing and retrieval system, which has 300 million daily active users. The system was able to sustain 80 million updates to its database per hour when it was deployed in production in 2018. Bing Microsoft Bing published the architecture of its reverse image search system at the KDD '18 conference. The paper states that a variety of features from a query image submitted by a user are used to describe its content, including deep neural network encoders, category recognition features, face recognition features, color features and duplicate detection features. Research systems Microsoft Research Asia's Beijing Lab published a paper in the Proceedings of the IEEE on the Arista-SS (Similar Search) and Arista-DS (Duplicate Search) systems. Arista-DS performs only duplicate-search algorithms, such as principal component analysis on global image features, to lower computational and memory costs. Arista-DS is able to perform duplicate search on 2 billion images with 10 servers, but with the trade-off of not detecting near-duplicates. Open-source implementations In 2007, the Puzzle library was released under the ISC license. 
Puzzle is designed to offer reverse image search of visually similar images, even after the images have been resized, re-compressed, recolored and/or slightly modified. The image-match open-source project was released in 2016. The project, licensed under the Apache License, implements a reverse image search engine written in Python. Both the Puzzle library and the image-match project use algorithms published at an IEEE ICIP conference. In 2019, a book published by O'Reilly documented how a simple reverse image search system can be built in a few hours. The book covers image feature extraction and similarity search, together with more advanced topics including scalability using GPUs and tuning for search accuracy. The code for the system was made available freely on GitHub. Production reverse image search systems Google Images and Google Lens Bing Yandex Images See also Content-based image retrieval Visual search engine FindFace References Applications of computer vision Image search
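The "compact digital signature" style of duplicate detection described above (TinEye's fingerprints, Arista-DS's global features) can be illustrated with a difference hash, one of the simplest perceptual hashes. This is a generic sketch using Pillow; it assumes nothing about any particular engine's actual fingerprint, and the file names are placeholders.

```python
# Difference hash (dHash): a tiny perceptual fingerprint for near-duplicate detection.
from PIL import Image

def dhash(path: str, hash_size: int = 8) -> int:
    # Shrink aggressively and drop color: survives re-compression and recoloring.
    img = Image.open(path).convert("L").resize((hash_size + 1, hash_size))
    pixels = list(img.getdata())  # row-major, width = hash_size + 1
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (left > right)  # 1 if brightness increases to the right
    return bits  # 64-bit fingerprint for the default hash_size

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

# Images whose hashes differ in only a few bits are likely duplicates.
h1, h2 = dhash("original.jpg"), dhash("rescaled_copy.jpg")
print("near-duplicate" if hamming(h1, h2) <= 10 else "different")
```

Unlike local features, such global hashes are cheap to index at billion-image scale (the trade-off Arista-DS makes explicitly), but they miss crops and heavy edits.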
wiki
This is a list of the current 63 members of the Althing (Icelandic Parliament), from 2021 until the present. Election results References Lists of Members of the Althing Iceland
wiki
The Kashmir field mouse (Apodemus rusiges) is a species of rodent in the family Muridae. It is found in India, Nepal, and Pakistan. References Rats of Asia Rodents of India Mammals of Pakistan Mammals of Nepal Apodemus Mammals described in 1913 Taxonomy articles created by Polbot
wiki
Sunrise (or sunup) is the moment when the upper rim of the Sun appears on the horizon in the morning. The term can also refer to the entire process of the solar disk crossing the horizon and its accompanying atmospheric effects. Terminology Although the Sun appears to "rise" from the horizon, it is actually the Earth's motion that causes the Sun to appear. The illusion of a moving Sun results from Earth observers being in a rotating reference frame; this apparent motion caused many cultures to have mythologies and religions built around the geocentric model, which prevailed until astronomer Nicolaus Copernicus formulated his heliocentric model in the 16th century. Architect Buckminster Fuller proposed the terms "sunsight" and "sunclipse" to better represent the heliocentric model, though the terms have not entered into common language. Astronomically, sunrise occurs for only an instant: the moment at which the upper limb of the Sun appears tangent to the horizon. However, the term sunrise commonly refers to periods of time both before and after this point: Twilight, the period in the morning during which the sky is brightening, but the Sun is not yet visible. The beginning of morning twilight is called astronomical dawn. The period after the Sun rises during which striking colors and atmospheric effects are still seen. Measurement Angle with respect to horizon The stage of sunrise known as false sunrise actually occurs before the Sun truly reaches the horizon, because Earth's atmosphere refracts the Sun's image. At the horizon, the average amount of refraction is 34 arcminutes, though this amount varies based on atmospheric conditions. Also, unlike most other solar measurements, sunrise occurs when the Sun's upper limb, rather than its center, appears to cross the horizon. The apparent radius of the Sun at the horizon is 16 arcminutes. These two angles combine to define sunrise as occurring when the Sun's center is 50 arcminutes below the horizon, or 90.83° from the zenith. Time of day The timing of sunrise varies throughout the year and is also affected by the viewer's latitude and longitude, altitude, and time zone. These changes are driven by the axial tilt of Earth, the daily rotation of the Earth, the planet's movement in its annual elliptical orbit around the Sun, and the Earth and Moon's paired revolutions around each other. The analemma can be used to make approximate predictions of the time of sunrise. In late winter and spring, sunrise as seen from temperate latitudes occurs earlier each day, reaching its earliest time near the summer solstice, although the exact date varies by latitude. After this point, the time of sunrise gets later each day, reaching its latest sometime around the winter solstice. The offset between the dates of the solstice and the earliest or latest sunrise time is caused by the eccentricity of Earth's orbit and the tilt of its axis, and is described by the analemma, which can be used to predict the dates. Variations in atmospheric refraction can alter the time of sunrise by changing its apparent position. Near the poles, the time-of-day variation is exaggerated, since the Sun crosses the horizon at a very shallow angle and thus rises more slowly. Accounting for atmospheric refraction and measuring from the leading edge slightly increases the average duration of day relative to night. 
The sunrise equation, however, which is used to derive the time of sunrise and sunset, uses the Sun's physical center for calculation, neglecting atmospheric refraction and the non-zero angle subtended by the solar disc. Location on the horizon Neglecting the effects of refraction and the Sun's non-zero size, whenever sunrise occurs, in temperate regions it is always in the northeast quadrant from the March equinox to the September equinox and in the southeast quadrant from the September equinox to the March equinox. Sunrises occur approximately due east on the March and September equinoxes for all viewers on Earth. Exact calculations of the azimuths of sunrise on other dates are complex, but they can be estimated with reasonable accuracy by using the analemma. The figure on the right is calculated using the solar geometry routine in Ref. as follows: For a given latitude and a given date, calculate the declination of the Sun using longitude and solar noon time as inputs to the routine; Calculate the sunrise hour angle using the sunrise equation; Calculate the sunrise time, which is the solar noon time minus the sunrise hour angle in degrees divided by 15; Use the sunrise time as input to the solar geometry routine to get the solar azimuth angle at sunrise. (A worked numerical sketch of this procedure appears at the end of this entry.) Hemispheric symmetry An interesting feature in the figure on the right is the apparent hemispheric symmetry in regions where daily sunrise and sunset actually occur. This symmetry becomes clear if the hemispheric relation in the sunrise equation is applied to the x- and y-components of the solar vector presented in Ref. Appearance Colors Air molecules and airborne particles scatter white sunlight as it passes through the Earth's atmosphere. This is done by a combination of Rayleigh scattering and Mie scattering. As a ray of white sunlight travels through the atmosphere to an observer, some of the colors are scattered out of the beam by air molecules and airborne particles, changing the final color of the beam the viewer sees. Because the shorter wavelength components, such as blue and green, scatter more strongly, these colors are preferentially removed from the beam. At sunrise and sunset, when the path through the atmosphere is longer, the blue and green components are removed almost completely, leaving the longer-wavelength orange and red hues seen at those times. The remaining reddened sunlight can then be scattered by cloud droplets and other relatively large particles to light up the horizon red and orange. The removal of the shorter wavelengths of light is due to Rayleigh scattering by air molecules and particles much smaller than the wavelength of visible light (less than 50 nm in diameter). The scattering by cloud droplets and other particles with diameters comparable to or larger than the sunlight's wavelengths (more than 600 nm) is due to Mie scattering and is not strongly wavelength-dependent. Mie scattering is responsible for the light scattered by clouds, and also for the daytime halo of white light around the Sun (forward scattering of white light). Sunset colors are typically more brilliant than sunrise colors, because the evening air contains more particles than morning air. Ash from volcanic eruptions, trapped within the troposphere, tends to mute sunset and sunrise colors, while volcanic ejecta that is instead lofted into the stratosphere (as thin clouds of tiny sulfuric acid droplets) can yield beautiful post-sunset colors called afterglows and pre-sunrise glows. 
A number of eruptions, including those of Mount Pinatubo in 1991 and Krakatoa in 1883, have produced sufficiently high stratospheric sulfuric acid clouds to yield remarkable sunset afterglows (and pre-sunrise glows) around the world. The high-altitude clouds serve to reflect strongly reddened sunlight still striking the stratosphere after sunset down to the surface. Optical illusions and other phenomena Atmospheric refraction causes the Sun to be seen while it is still below the horizon. Light from the lower edge of the Sun's disk is refracted more than light from the upper edge. This reduces the apparent height of the Sun when it appears just above the horizon. The width is not affected, so the Sun appears wider than it is high. The Sun appears larger at sunrise than it does while higher in the sky, in a manner similar to the Moon illusion. The Sun appears to rise above the horizon and circle the Earth, but it is actually the Earth that is rotating, with the Sun remaining fixed. This effect results from the fact that an observer on Earth is in a rotating reference frame. Occasionally a false sunrise occurs, a particular kind of parhelion belonging to the halo family of optical phenomena. Sometimes just before sunrise or after sunset, a green flash can be seen. This is an optical phenomenon in which a green spot is visible above the Sun, usually for no more than a second or two. See also Analemma Day Daytime Dusk Earth's shadow, visible at sunrise First sunrise Golden hour (photography) Noon Red sky at morning Sunrise equation Sunset References External links Full physical explanation of sky color, in simple terms An Excel workbook with VBA functions for sunrise, sunset, solar noon, twilight (dawn and dusk), and solar position (azimuth and elevation) Sun data for various cities Articles containing video clips Daily events Morning Earth phenomena Parts of a day Solar phenomena
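As a worked illustration of the four-step procedure above, the following minimal sketch computes an approximate sunrise time from the sunrise equation, using the 90.833° zenith (the 50-arcminute correction discussed under Measurement). The declination formula is a common low-accuracy approximation, good to roughly a degree, and the equation of time is ignored; this is illustrative, not an almanac-grade routine.

```python
import math

def approx_sunrise_utc(lat_deg: float, lon_deg: float, day_of_year: int) -> float:
    """Approximate sunrise in hours UTC, neglecting the equation of time."""
    # Rough solar declination in degrees (~1 degree accuracy).
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))

    lat, dec = math.radians(lat_deg), math.radians(decl)
    zenith = math.radians(90.833)  # center 50' below horizon: refraction + semidiameter

    # Sunrise equation: cos(h0) = (cos z - sin(lat) sin(dec)) / (cos(lat) cos(dec))
    cos_h0 = (math.cos(zenith) - math.sin(lat) * math.sin(dec)) / (
        math.cos(lat) * math.cos(dec))
    if cos_h0 > 1:
        raise ValueError("polar night: no sunrise on this date")
    if cos_h0 < -1:
        raise ValueError("polar day: no sunset on this date")
    h0 = math.degrees(math.acos(cos_h0))  # hour angle at sunrise, in degrees

    solar_noon_utc = 12.0 - lon_deg / 15.0  # mean solar noon for this longitude
    return solar_noon_utc - h0 / 15.0       # sunrise = noon minus hour angle / 15

# Example: London (51.5 N, 0.1 W) near the June solstice (day 172).
print(f"{approx_sunrise_utc(51.5, -0.1, 172):.2f} h UTC")
```

For London at the June solstice this gives about 3.7 h UTC (roughly 03:40), within a few minutes of published tables; most of the residual error comes from ignoring the equation of time.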
wiki
Zutta may refer to: Derekh Eretz Zutta, non-canonical tractate of the Babylonian Talmud Devarim Zutta, midrash to Deuteronomy which is no longer extant except in references by later authorities Seder Olam Zutta, anonymous chronicle, called "Zuṭa" to distinguish it from the older Seder 'Olam Rabbah Shir ha-Shirim Zutta, midrash, or, rather, homiletic commentary, on Canticles Sifre Zutta, midrash on the Book of Numbers
wiki
The name 2017 German Darts Masters was used for two darts tournaments organised by the Professional Darts Corporation in 2017: The 2017 German Darts Masters (European Tour), an event held in Jena in April 2017 as part of the 2017 European Tour The 2017 German Darts Masters (World Series of Darts), an event held in Düsseldorf in October 2017 as part of the 2017 World Series of Darts
wiki
Rubelsanto Airport is an airport serving the village of Rubelsanto in Alta Verapaz Department, Guatemala. The Rubelsanto non-directional beacon (Ident: RUB) is on the field. See also Transport in Guatemala List of airports in Guatemala References External links OurAirports - Rubelsanto OpenStreetMap - Rubelsanto Airports in Guatemala Alta Verapaz Department
wiki
Frau Margot is an opera in 3 acts by composer Thomas Pasatieri. The work uses an English language libretto by Frank Corsaro which is based on Corsaro's play Lyric Suite. The opera's premiere was presented by the Fort Worth Opera on June 2, 2007. Corsaro directed the production which used sets by Alison Nalder and costumes by Steven Bryant. A recording of this production was released on CD by Albany Records. Roles References 2007 operas English-language operas Operas Operas by Thomas Pasatieri Operas based on plays
wiki
Realistic silicone masks have been used in crimes throughout the world. In China, criminals can obtain silicone masks cheaply from the internet and have used them for criminal activities. Silicone masks have been used as disguises to conceal the wearer's identity while perpetrating crimes. Incidents See also Anti-mask law Police impersonation Ghostface (identity) Guy Fawkes mask References Silicone mask Masks in law Silicon
wiki
Beryllium compounds
wiki
Sunset, also known as sundown, is the daily disappearance of the Sun below the horizon due to Earth's rotation. As viewed from everywhere on Earth except the North and South poles, the Sun sets due west at the moment of both the spring and autumn equinoxes. As viewed from the Northern Hemisphere, the Sun sets to the northwest (or not at all) in the spring and summer, and to the southwest in the autumn and winter; these seasons are reversed for the Southern Hemisphere. The time of sunset is defined in astronomy as the moment when the upper limb of the Sun disappears below the horizon. Near the horizon, atmospheric refraction causes sunlight rays to be distorted to such an extent that geometrically the solar disk is already about one diameter below the horizon when a sunset is observed. Sunset is distinct from twilight, which is divided into three stages. The first is civil twilight, which begins once the Sun has disappeared below the horizon and continues until it descends to 6 degrees below the horizon. The second phase is nautical twilight, between 6 and 12 degrees below the horizon. The third phase is astronomical twilight, which is the period when the Sun is between 12 and 18 degrees below the horizon. Dusk is at the very end of astronomical twilight, and is the darkest moment of twilight just before night. Finally, night occurs when the Sun reaches 18 degrees below the horizon and no longer illuminates the sky. Locations further north than the Arctic Circle and further south than the Antarctic Circle experience no full sunset or sunrise on at least one day of the year, when the polar day or the polar night persists continuously for 24 hours. Occurrence The time of sunset varies throughout the year, and is determined by the viewer's position on Earth, specified by latitude and longitude, altitude, and time zone. Small daily changes and noticeable semi-annual changes in the timing of sunsets are driven by the axial tilt of Earth, the daily rotation of the Earth, the planet's movement in its annual elliptical orbit around the Sun, and the Earth and Moon's paired revolutions around each other. During winter and spring, the days get longer and sunsets occur later every day until the day of the latest sunset, which occurs after the summer solstice. In the Northern Hemisphere, the latest sunset occurs late in June or in early July, but not on the summer solstice of June 21. The exact date depends on the viewer's latitude, and is connected with the Earth's slower movement near aphelion, around July 4. Likewise, the earliest sunset does not occur on the winter solstice, but rather about two weeks earlier, again depending on the viewer's latitude. In the Northern Hemisphere, it occurs in early December or late November, influenced by the Earth's faster movement near its perihelion, which occurs around January 3. The same phenomenon exists in the Southern Hemisphere, but with the respective dates reversed, with the earliest sunsets occurring some time before June 21 in winter, and the latest sunsets occurring some time after December 21 in summer, again depending on one's southern latitude. For a few weeks surrounding both solstices, both sunrise and sunset get slightly later each day. Even on the equator, sunrise and sunset shift several minutes back and forth through the year, along with solar noon. These effects are plotted by an analemma. 
Neglecting atmospheric refraction and the Sun's non-zero size, whenever and wherever sunset occurs, it is always in the northwest quadrant from the March equinox to the September equinox, and in the southwest quadrant from the September equinox to the March equinox. Sunsets occur almost exactly due west on the equinoxes for all viewers on Earth. Exact calculations of the azimuths of sunset on other dates are complex, but they can be estimated with reasonable accuracy by using the analemma. As sunrise and sunset are calculated from the leading and trailing edges of the Sun, respectively, and not the center, the duration of daytime is slightly longer than nighttime (by about 10 minutes, as seen from temperate latitudes). Further, because the light from the Sun is refracted as it passes through the Earth's atmosphere, the Sun is still visible after it is geometrically below the horizon. Refraction also affects the apparent shape of the Sun when it is very close to the horizon. It makes things appear higher in the sky than they really are. Light from the bottom edge of the Sun's disk is refracted more than light from the top, since refraction increases as the angle of elevation decreases. This raises the apparent position of the bottom edge more than the top, reducing the apparent height of the solar disk. Its width is unaltered, so the disk appears wider than it is high. (In reality, the Sun is almost exactly spherical.) The Sun also appears larger on the horizon, an optical illusion, similar to the Moon illusion. Locations north of the Arctic Circle and south of the Antarctic Circle experience no sunset or sunrise on at least one day of the year, when the polar day or the polar night persists continuously for 24 hours. Location on the horizon Approximate locations of sunset on the horizon (azimuth), as described above, can be found in Refs. The figure on the right is calculated using the solar geometry routine as follows: For a given latitude and a given date, calculate the declination of the Sun using longitude and solar noon time as inputs to the routine; Calculate the sunset hour angle using the sunset equation; Calculate the sunset time, which is the solar noon time plus the sunset hour angle in degrees divided by 15; Use the sunset time as input to the solar geometry routine to get the solar azimuth angle at sunset. (A small azimuth sketch illustrating this appears at the end of this entry.) An interesting feature in the figure on the right is the apparent hemispheric symmetry in regions where daily sunrise and sunset actually occur. This symmetry becomes clear if the hemispheric relation in the sunrise equation is applied to the x- and y-components of the solar vector presented in Ref. Colors As a ray of white sunlight travels through the atmosphere to an observer, some of the colors are scattered out of the beam by air molecules and airborne particles, changing the final color of the beam the viewer sees. Because the shorter wavelength components, such as blue and green, scatter more strongly, these colors are preferentially removed from the beam. At sunrise and sunset, when the path through the atmosphere is longer, the blue and green components are removed almost completely, leaving the longer-wavelength orange and red hues we see at those times. The remaining reddened sunlight can then be scattered by cloud droplets and other relatively large particles to light up the horizon red and orange. 
The removal of the shorter wavelengths of light is due to Rayleigh scattering by air molecules and particles much smaller than the wavelength of visible light (less than 50 nm in diameter). The scattering by cloud droplets and other particles with diameters comparable to or larger than the sunlight's wavelengths (> 600 nm) is due to Mie scattering and is not strongly wavelength-dependent. Mie scattering is responsible for the light scattered by clouds, and also for the daytime halo of white light around the Sun (forward scattering of white light). Sunset colors are typically more brilliant than sunrise colors, because the evening air contains more particles than morning air. Sometimes just before sunrise or after sunset a green flash can be seen. Ash from volcanic eruptions, trapped within the troposphere, tends to mute sunset and sunrise colors, while volcanic ejecta that is instead lofted into the stratosphere (as thin clouds of tiny sulfuric acid droplets) can yield beautiful post-sunset colors called afterglows and pre-sunrise glows. A number of eruptions, including those of Mount Pinatubo in 1991 and Krakatoa in 1883, have produced sufficiently high stratospheric clouds of sulfuric acid to yield remarkable sunset afterglows (and pre-sunrise glows) around the world. The high-altitude clouds serve to reflect strongly reddened sunlight still striking the stratosphere after sunset down to the surface. Some of the most varied colors at sunset can be found in the opposite or eastern sky after the Sun has set, during twilight. Depending on weather conditions and the types of clouds present, these colors have a wide spectrum, and can produce unusual results. Names of compass points In some languages, points of the compass bear names etymologically derived from words for sunrise and sunset. The English words "orient" and "occident", meaning "east" and "west", respectively, are descended from Latin words meaning "sunrise" and "sunset". The word "levant", related e.g. to French "(se) lever" meaning "lift" or "rise" (and also to English "elevate"), is also used to describe the east. In Polish, the word for east, wschód (vskhud), is derived from the morpheme "ws" – meaning "up", and "chód" – signifying "move" (from the verb chodzić – meaning "walk, move"), from the act of the Sun coming up from behind the horizon. The Polish word for west, zachód (zakhud), is similar but with the word "za" at the start, meaning "behind", from the act of the Sun going behind the horizon. In Russian, the word for west, запад (zapad), is derived from the words за – meaning "behind", and пад – signifying "fall" (from the verb падать – padat'), from the act of the Sun falling behind the horizon. In Hebrew, the word for east is 'מזרח', which derives from the word for rising, and the word for west is 'מערב', which derives from the word for setting. Historical view The 16th-century astronomer Nicolaus Copernicus was the first to present to the world a detailed and eventually widely accepted mathematical model supporting the premise that the Earth is moving and the Sun actually stays still, despite the impression, from our point of view, of a moving Sun. Planets Sunsets on other planets appear different because of differences in the planets' distances from the Sun and in their non-existent or differing atmospheric compositions. Mars On Mars, the setting Sun appears about two-thirds the size it does from Earth, due to the greater distance between Mars and the Sun. 
The colors of a Martian sunset differ from those on Earth: they are typically hues of blue, though some Martian sunsets last significantly longer and appear far redder than is typical on Earth. Mars has a thin atmosphere with very little oxygen and nitrogen, so the light scattering there is not dominated by Rayleigh scattering. Instead, the air is full of red dust, blown into the atmosphere by high winds, so the sky color is mainly determined by Mie scattering, resulting in more blue hues near the setting Sun than in an Earth sunset. One study also reported that Martian dust high in the atmosphere can reflect sunlight up to two hours after the Sun has set, casting a diffuse glow across the surface of Mars. Gallery See also Afterglow Analemma Astronomy on Mars Dawn Daytime Diffuse sky radiation Dusk Earth's shadow, visible at sunset Golden hour Sundown town Sunrise Sunrise equation Twilight References External links Full physical explanation in simple terms The colors of twilight and sunset The physics of Sunsets - more detailed explanation including the role of clouds Geolocation service to calculate the time of sunrise and sunset Earth phenomena Parts of a day Solar phenomena Daily events Evening 
wiki
UNSC may refer to: United Nations United Nations Security Council, the most powerful organ of the United Nations, charged with maintaining peace and security between nations United Nations Special Commission, an organisation which performed inspections in Iraq (correctly abbreviated UNSCOM) United Nations Scientific Committee on the Effects of Atomic Radiation (correctly abbreviated UNSCEAR) United Nations Statistical Commission, oversees the work of the United Nations Statistics Division (UNSD) United Nations System Staff College, a provider of learning and training for UN staff (correctly abbreviated UNSSC) Fiction United Nations Space Command, a fictional organization in the Halo series of games and novels
wiki
Nocturne is a 1980 short film written and directed by Lars von Trier. Plot A woman wakes up in a room in the middle of the night; unable to fall back asleep, she telephones a friend. Production Accolades Prize for best film at the Munich International Festival of Film Schools. Notes External links Films directed by Lars von Trier
wiki
How To may refer to: An owner's manual A tutorial A user guide How To: Absurd Scientific Advice for Common Real-World Problems, a 2019 book by Randall Munroe How To with John Wilson, a 2020 HBO comedy docuseries How 2, an educational television series HOWTO documents, part of the Linux Documentation Project HowTo, a satirical wiki project, see HowTo.tv, a video website See also Method (disambiguation) wikiHow HowToBasic
wiki
Ski orienteering (SkiO) is a cross-country skiing endurance winter racing sport and one of the four orienteering disciplines recognized by the IOF. A successful ski orienteer combines high physical endurance, strength and excellent technical skiing skills with the ability to navigate and make the best route choices while skiing at a high speed. Standard orienteering maps are commonly used, but since 2019, a separate mapping standard, ISSkiOM, has been produced which recommends a subset of the symbols used in other disciplines. Ski-orienteering maps use green symbols to indicate trails and tracks, with different symbols to indicate their navigability in snow; other symbols indicate whether any roads are snow-covered or clear. Navigation tactics are similar to those of mountain bike orienteering. Standard skate-skiing equipment is used, along with a map holder attached to the chest. Compared to cross-country skiing, upper body strength is more important because of the double-poling needed along narrow snow trails. Events Ski orienteering events are designed to test both the physical strength and the navigation skills of the athletes. Ski orienteers use the map to navigate a dense ski track network in order to visit a number of control points in the shortest possible time. The track network is printed on the map, and there is no marked route in the terrain. The control points must be visited in the right order. The map gives all the information the athlete needs in order to decide which route is the fastest, including the quality and width of the tracks. The athlete has to make hundreds of route-choice decisions at high speed during every race: even a momentary lapse of concentration may cost a medal. Ski orienteering is time-measured and objective. The clock is the judge: the fastest time wins. The electronic card verifies that the athlete has visited all control points in the right order. International competitions The World Ski Orienteering Championships is the official event to award the titles of World Champions in Ski Orienteering. The World Championships is organized every odd year. The programme includes Sprint, Middle and Long Distance competitions, and a Relay for both men and women. The World Cup is the official series of events to find the world's best ski orienteers over a season. The World Cup is organized every even year. The Junior World Ski Orienteering Championships and World Masters Ski Orienteering Championships are organized annually. World-wide sport Ski orienteering is practiced on four continents. The events take place in the natural environment, over a variety of outdoor terrains, from city parks to countryside fields, forests and mountain sides - wherever there is snow. The leading ski orienteering regions are Asia, Europe and North America. National teams from 35 countries are expected to participate in the next World Ski Orienteering Championships, to be held in Sweden in March 2011. Ski orienteering is on the programme of the Asian Winter Games and the CISM World Military Winter Games. The IOF has applied for the inclusion of ski orienteering in the 2018 Olympic Winter Games and will also apply to FISU for inclusion in the 2013 Winter Universiades. 
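Each leg between controls, as described above, is essentially a shortest-path query over the weighted track network, with wide, fast tracks getting lower traversal times. A minimal sketch in Python; the junction names, times, and network are invented for illustration, and real route choice would also weigh gradient, snow condition, and the skier's own strengths:

```python
import heapq

def fastest_route(tracks, start, control):
    """Dijkstra over a track network whose edge weights are minutes.

    `tracks` maps a junction to (neighbour, minutes) pairs, where the
    minutes already fold in track width and quality (a wide groomed
    track is faster than a narrow snow trail).
    """
    best = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        t, node = heapq.heappop(heap)
        if node == control:
            route = [node]                      # rebuild junction sequence
            while node in prev:
                node = prev[node]
                route.append(node)
            return t, route[::-1]
        if t > best.get(node, float("inf")):
            continue                            # stale heap entry
        for nxt, minutes in tracks.get(node, []):
            nt = t + minutes
            if nt < best.get(nxt, float("inf")):
                best[nxt] = nt
                prev[nxt] = node
                heapq.heappush(heap, (nt, nxt))
    return float("inf"), []

# Invented example: the direct narrow trail (6 min) loses to a slightly
# longer detour on fast, wide tracks (2 min + 3 min).
tracks = {
    "C1": [("C2", 6.0), ("J", 2.0)],
    "J":  [("C2", 3.0)],
}
print(fastest_route(tracks, "C1", "C2"))  # (5.0, ['C1', 'J', 'C2'])
```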
World Rankings The highest-ranked male ski orienteers are (points in parentheses):
1. Erik Rost (6995)
2. Sergey Gorlanov (6967)
3. Lars Moholdt (6963)
4. Vladislav Kiselev (6818)
5. Eduard Khrennikov (6746)
6. Stanimir Belomazhev (6706)
7. Gion Schnyder (6641)
8. Erik Blomgren (6631)
9. Tuomas Kotro (6552)
10. Tero Linnainmaa (6544)
11. Jyri Uusitalo (6506)
12. Andrey Lamov (6501)
13. Jorgen Madslien (6481)
14. Bjornar Kvale (6445)
15. Petr Horvat (6372)
16. Martin Hammarberg (6356)
17. Janne Hakkinen (6345)
18. Oyvind Wiggen (6226)
19. Jorgen Baklid (6224)
20. Jakub Skoda (6221)
21. Mattis Jaama (5959)
22. Radek Laciga (5930)
23. Audun Heimdal (5910)
24. Martin Penchev (5712)
25. Rasmus Wickbom (5642)
Equipment A person taking part in competitions in ski orienteering is equipped with: Clothing adequate for cross-country skiing, boots, skis, and ski poles. An orienteering map provided by the organizer, showing the control points which must be visited in order. The map is designed to give all the information the competitor needs to decide which route is the fastest, such as the quality of the tracks, gradient and distance. Green lines on the map show trails suited to racing on skis. Depending on the thickness and continuity of the lines, the competitor makes decisions about which route is the fastest between control points. Map holder: a map holder attached to the chest makes it possible to view the map while skiing at full speed. Optionally, a light compass is attached to the map holder or to the skier's arm. An electronic punching chip (see orienteering control point). Bid for inclusion in 2018 Winter Olympic Games The International Orienteering Federation (IOF) had applied for ski orienteering to be included in the programme of the 2018 Olympic Winter Games. However, this was unsuccessful. In the past few years, ski orienteering has grown considerably in terms of global spread. The growth has been boosted by the inclusion of ski orienteering into the Asian Winter Games and the CISM World Military Winter Games. References External links International Orienteering Federation Ski orienteering presentation (Background material to the Press Release: Ski orienteering bids for inclusion in the 2018 Olympic Winter Games) (August 31, 2010) (June 11, 2008) Orienteering Orienteering Multisports Orienteering Snow sports
wiki
Owen Lloyd "Ox" Parry (November 17, 1914 – March 2, 1976) was a professional American football tackle who played three seasons for the New York Giants. References 1914 births 1976 deaths American football tackles Baylor Bears football players New York Giants players Players of American football from San Antonio
wiki
Wind egg or variants may refer to: Cock egg, an egg without a yolk and/or a shell Windegg (disambiguation)
wiki
The Crucifixion Standard (Italian - Stendardo della Crocifissione) is a double-sided c.1502-1505 tempera on panel painting by Luca Signorelli, produced late in his career and now on the high altar of Sant'Antonio Abate church in Sansepolcro. The reverse shows Anthony the Great and John the Evangelist with brothers kneeling before them in hierarchical proportion, whilst the front shows the Crucifixion with Anthony, John, Mary Magdalene and the Virgin Mary. Gallery References 1505 paintings Paintings by Luca Signorelli Paintings in Tuscany Signorelli Paintings of the Virgin Mary Paintings depicting Mary Magdalene Paintings of Anthony the Great
wiki
Hyponatremia or hyponatraemia is a low concentration of sodium in the blood. It is generally defined as a sodium concentration of less than 135 mmol/L (135 mEq/L), with severe hyponatremia being below 120 mEq/L. Symptoms can be absent, mild, or severe. Mild symptoms include a decreased ability to think, headaches, nausea, and poor balance. Severe symptoms include confusion, seizures, and coma; death can ensue. The causes of hyponatremia are typically classified by a person's body fluid status into low volume, normal volume, or high volume. Low volume hyponatremia can occur from diarrhea, vomiting, diuretics, and sweating. Normal volume hyponatremia is divided into cases with dilute urine and concentrated urine. Cases in which the urine is dilute include adrenal insufficiency, hypothyroidism, and drinking too much water or too much beer. Cases in which the urine is concentrated include syndrome of inappropriate antidiuretic hormone secretion (SIADH). High volume hyponatremia can occur from heart failure, liver failure, and kidney failure. Conditions that can lead to falsely low sodium measurements include high blood protein levels such as in multiple myeloma, high blood fat levels, and high blood sugar. Treatment is based on the underlying cause. Correcting hyponatremia too quickly can lead to complications. Rapid partial correction with 3% hypertonic saline is only recommended in those with significant symptoms and occasionally those in whom the condition was of rapid onset. Low volume hyponatremia is typically treated with intravenous normal saline. SIADH is typically treated by correcting the underlying cause and with fluid restriction, while high volume hyponatremia is typically treated with both fluid restriction and a diet low in salt. Correction should generally be gradual in those in whom the low levels have been present for more than two days. Hyponatremia is the most common type of electrolyte imbalance, and is often found in older adults. It occurs in about 20% of those admitted to hospital and 10% of people during or after an endurance sporting event. Among those in hospital, hyponatremia is associated with an increased risk of death. The economic costs of hyponatremia are estimated at $2.6 billion per annum in the United States. Signs and symptoms Signs and symptoms of hyponatremia include nausea and vomiting, headache, short-term memory loss, confusion, lethargy, fatigue, loss of appetite, irritability, muscle weakness, spasms or cramps, seizures, and decreased consciousness or coma. Lower levels of plasma sodium are associated with more severe symptoms. However, mild hyponatremia (plasma sodium levels at 131–135 mmol/L) may be associated with complications and subtle symptoms (for example, increased falls, altered posture and gait, reduced attention, impaired cognition, and possibly higher rates of death). Neurological symptoms typically occur with very low levels of plasma sodium (usually <115 mmol/L). When sodium levels in the blood become very low, water enters the brain cells and causes them to swell (cerebral edema). This results in increased pressure in the skull and causes hyponatremic encephalopathy. As pressure increases in the skull, herniation of the brain can occur, which is a squeezing of the brain across the internal structures of the skull. This can lead to headache, nausea, vomiting, confusion, seizures, brain stem compression and respiratory arrest, and non-cardiogenic accumulation of fluid in the lungs. This is usually fatal if not immediately treated. 
Symptom severity depends on how fast and how severe the drop in blood sodium level is. A gradual drop, even to very low levels, may be tolerated well if it occurs over several days or weeks, because of neuronal adaptation. The presence of underlying neurological disease such as a seizure disorder or non-neurological metabolic abnormalities, also affects the severity of neurologic symptoms. Chronic hyponatremia can lead to such complications as neurological impairments. These neurological impairments most often affect gait (walking) and attention, and can lead to increased reaction time and falls. Hyponatremia, by interfering with bone metabolism, has been linked with a doubled risk of osteoporosis and an increased risk of bone fracture. Causes The specific causes of hyponatremia are generally divided into those with low tonicity (lower than normal concentration of solutes), without low tonicity, and falsely low sodiums. Those with low tonicity are then grouped by whether the person has high fluid volume, normal fluid volume, or low fluid volume. Too little sodium in the diet alone is very rarely the cause of hyponatremia. High volume Both sodium and water content increase: Increase in sodium content leads to hypervolemia and water content to hyponatremia. Cirrhosis of the liver Congestive heart failure Nephrotic syndrome in the kidneys Excessive drinking of fluids Normal volume There is volume expansion in the body, no edema, but hyponatremia occurs SIADH (and its many causes) Hypothyroidism Not enough ACTH Beer potomania Normal physiologic change of pregnancy Reset osmostat Low volume Hypovolemia (extracellular volume loss) is due to total body sodium loss. Hyponatremia is caused by a relatively smaller loss in total body water. Any cause of hypovolemia such as prolonged vomiting, decreased oral intake, severe diarrhea Diuretic use (due to the diuretic causing a volume depleted state and thence ADH release, and not a direct result of diuretic-induced urine sodium loss) Addison's disease and congenital adrenal hyperplasia in which the adrenal glands do not produce enough steroid hormones (combined glucocorticoid and mineralocorticoid deficiency) Isolated hyperchlorhidrosis (Carbonic anhydrase XII deficiency), a rare genetic disorder which results in a lifelong tendency to lose excessive amounts of sodium by sweating. Pancreatitis Prolonged exercise and sweating, combined with drinking water without electrolytes is the cause of exercise-associated hyponatremia (EAH). It is common in marathon runners and participants of other endurance events. The use of MDMA (ecstasy) can result in hyponatremia. Medication Antipsychotics have been reported to cause hyponatremia in a review of medical articles from 1946 to 2016. Available evidence suggests that all classes of psychotropics, i.e., antidepressants, antipsychotics, mood stabilizers, and sedative/hypnotics can lead to hyponatremia. Age is a significant factor for drug induced hyponatremia. Other causes Miscellaneous causes that are not included under the above classification scheme include the following: False or pseudo hyponatremia is caused by a false lab measurement of sodium due to massive increases in blood triglyceride levels or extreme elevation of immunoglobulins as may occur in multiple myeloma. Hyponatremia with elevated tonicity can occur with high blood sugar, causing a shift of excess free water into the serum. 
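The high-blood-sugar case above is common enough that clinicians apply a correction before treating a low measured sodium as "true" hyponatremia. A frequently quoted rule of thumb adds roughly 1.6 mmol/L of sodium per 100 mg/dL of glucose above 100 mg/dL; the exact factor is debated (figures up to 2.4 appear in the literature), so the sketch below is illustrative only, not clinical guidance:

```python
def corrected_sodium(measured_na_mmol_l, glucose_mg_dl, factor=1.6):
    """Rule-of-thumb correction for hyperglycemia-related hyponatremia.

    Adds `factor` mmol/L of sodium per 100 mg/dL of glucose above a
    100 mg/dL baseline. Illustrative only; published correction
    factors range from about 1.6 to 2.4.
    """
    excess_glucose = max(0.0, glucose_mg_dl - 100.0) / 100.0
    return measured_na_mmol_l + factor * excess_glucose

# A measured sodium of 128 mmol/L with glucose at 600 mg/dL corrects
# to about 136: largely an osmotic water shift, not a sodium problem.
print(corrected_sodium(128, 600))  # 136.0
```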
Pathophysiology The causes of and treatments for hyponatremia can only be understood by having a grasp of the size of the body fluid compartments and subcompartments and their regulation; how under normal circumstances the body is able to maintain the sodium concentration within a narrow range (homeostasis of body fluid osmolality); conditions can cause that feedback system to malfunction (pathophysiology); and the consequences of the malfunction of that system on the size and solute concentration of the fluid compartments. Normal homeostasis There is a hypothalamic-kidney feedback system which normally maintains the concentration of the serum sodium within a narrow range. This system operates as follows: in some of the cells of the hypothalamus, there are osmoreceptors which respond to an elevated serum sodium in body fluids by signalling the posterior pituitary gland to secrete antidiuretic hormone (ADH) (vasopressin). ADH then enters the bloodstream and signals the kidney to bring back sufficient solute-free water from the fluid in the kidney tubules to dilute the serum sodium back to normal, and this turns off the osmoreceptors in the hypothalamus. Also, thirst is stimulated. Normally, when mild hyponatremia begins to occur, that is, the serum sodium begins to fall below 135 mEq/L, there is no secretion of ADH, and the kidney stops returning water to the body from the kidney tubule. Also, no thirst is experienced. These two act in concert to raise the serum sodium to the normal range. Hyponatremia Hyponatremia occurs 1) when the hypothalamic-kidney feedback loop is overwhelmed by increased fluid intake, 2) the feedback loop malfunctions such that ADH is always "turned on", 3) the receptors in the kidney are always "open" regardless of there being no signal from ADH to be open; or 4) there is an increased ADH even though there is no normal stimulus (elevated serum sodium) for ADH to be increased. Hyponatremia occurs in one of two ways: either the osmoreceptor-aquaporin feedback loop is overwhelmed, or it is interrupted. If it is interrupted, it is either related or not related to ADH. If the feedback system is overwhelmed, this is water intoxication with maximally dilute urine and is caused by 1) pathological water drinking (psychogenic polydipsia), 2) beer potomania, 3) overzealous intravenous solute free water infusion, or 4) infantile water intoxication. "Impairment of urine diluting ability related to ADH" occurs in nine situations: 1) arterial volume depletion 2) hemodynamically mediated, 3) congestive heart failure, 4) cirrhosis, 5) nephrosis, 6) spinal cord disease, 7) Addison's disease, 8) cerebral salt wasting, and 9) syndrome of inappropriate antidiuretic hormone secretion (SIADH). If the feed-back system is normal, but an impairment of urine diluting ability unrelated to ADH occurs, this is 1) oliguric kidney failure, 2) tubular interstitial kidney disease, 3) diuretics, or 4) nephrogenic syndrome of antidiuresis. Sodium is the primary positively charged ion outside of the cell and cannot cross from the interstitial space into the cell. This is because charged sodium ions attract around them up to 25 water molecules, thereby creating a large polar structure too large to pass through the cell membrane: "channels" or "pumps" are required. Cell swelling also produces activation of volume-regulated anion channels which is related to the release of taurine and glutamate from astrocytes. 
Diagnosis The history, physical exam, and laboratory testing are required to determine the underlying cause of hyponatremia. A blood test demonstrating a serum sodium less than 135 mmol/L is diagnostic for hyponatremia. The history and physical exam are necessary to help determine if the person is hypovolemic, euvolemic, or hypervolemic, which has important implications in determining the underlying cause. An assessment is also made to determine if the person is experiencing symptoms from their hyponatremia. These include assessments of alertness, concentration, and orientation. False hyponatremia False hyponatremia, also known as spurious, pseudo, hypertonic, or artifactual hyponatremia is when the lab tests read low sodium levels but there is no hypotonicity. In hypertonic hyponatremia, resorption of water by molecules such as glucose (hyperglycemia or diabetes) or mannitol (hypertonic infusion) occurs. In isotonic hyponatremia a measurement error due to high blood triglyceride level (most common) or paraproteinemia occurs. It occurs when using techniques that measure the amount of sodium in a specified volume of serum/plasma, or that dilute the sample before analysis. True hyponatremia True hyponatremia, also known as hypotonic hyponatremia, is the most common type. It is often simply referred to as "hyponatremia." Hypotonic hyponatremia is categorized in 3 ways based on the person's blood volume status. Each category represents a different underlying reason for the increase in ADH that led to the water retention and thence hyponatremia: High volume hyponatremia, wherein there is decreased effective circulating volume (less blood flowing in the body) even though total body volume is increased (by the presence of edema or swelling, especially in the ankles). The decreased effective circulating volume stimulates the release of anti-diuretic hormone (ADH), which in turn leads to water retention. Hypervolemic hyponatremia is most commonly the result of congestive heart failure, liver failure, or kidney disease. Normal volume hyponatremia, wherein the increase in ADH is secondary to either physiologic but excessive ADH release (as occurs with nausea or severe pain) or inappropriate and non-physiologic secretion of ADH, that is, syndrome of inappropriate antidiuretic hormone hypersecretion (SIADH). Often categorized under euvolemic is hyponatremia due to inadequate urine solute (not enough chemicals or electrolytes to produce urine) as occurs in beer potomania or "tea and toast" hyponatremia, hyponatremia due to hypothyroidism or central adrenal insufficiency, and those rare instances of hyponatremia that are truly secondary to excess water intake. Low volume hyponatremia, wherein ADH secretion is stimulated by or associated with volume depletion (not enough water in the body) due to decreased effective circulating volume. Acute versus chronic Chronic hyponatremia is when sodium levels drop gradually over several days or weeks and symptoms and complications are typically moderate. Chronic hyponatremia is often called asymptomatic hyponatremia in clinical settings because it is thought to have no symptoms; however, emerging data suggests that "asymptomatic" hyponatremia is not actually asymptomatic. Acute hyponatremia is when sodium levels drop rapidly, resulting in potentially dangerous effects, such as rapid brain swelling, which can result in coma and death. Treatment The treatment of hyponatremia depends on the underlying cause. 
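The volume-status categories above map onto the first-line approaches summarized earlier in the article; a schematic lookup only (labels simplified to this article's three categories, not a clinical decision tool):

```python
# Schematic first-line approaches by volume status, as summarized in
# this article; real management also depends on symptom severity and
# the rate of onset, as the following paragraphs stress.
FIRST_LINE = {
    "hypovolemic": "intravenous normal saline",
    "euvolemic (SIADH)": "treat the underlying cause plus fluid restriction",
    "hypervolemic": "fluid restriction plus a low-salt diet",
}

def first_line(volume_status: str) -> str:
    return FIRST_LINE.get(volume_status, "reassess: category unrecognized")

print(first_line("hypervolemic"))  # fluid restriction plus a low-salt diet
```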
How quickly treatment is required depends on a person's symptoms. Fluids are typically the cornerstone of initial management. In those with severe disease an increase in sodium of about 5 mmol/L over one to four hours is recommended. A rapid rise in serum sodium is anticipated in certain groups when the cause of the hyponatremia is addressed thus warranting closer monitoring in order to avoid overly rapid correction of the blood sodium concentration. These groups include persons who have hypovolemic hyponatremia and receive intravenous fluids (thus correcting their hypovolemia), persons with adrenal insufficiency who receive hydrocortisone, persons in whom a medication causing increased ADH release has been stopped, and persons who have hyponatremia due to decreased salt and/or solute intake in their diet who are treated with a higher solute diet. If large volumes of dilute urine are seen, this can be a warning sign that overcorrection is imminent in these individuals. Sodium deficit = (140 – serum sodium) x total body water Total body water = kilograms of body weight x 0.6 Fluids Options include: Mild and asymptomatic hyponatremia is treated with adequate solute intake (including salt and protein) and fluid restriction starting at 500 millilitres per day (mL/d) of water with adjustments based on serum sodium levels. Long-term fluid restriction of 1,200–1,800 mL/d may maintain the person in a symptom-free state. Moderate and/or symptomatic hyponatremia is treated by raising the serum sodium level by 0.5 to 1 mmol per liter per hour for a total of 8 mmol per liter during the first day with the use of furosemide and replacing sodium and potassium losses with 0.9% saline. Severe hyponatremia or severe symptoms (confusion, convulsions, or coma): consider hypertonic saline (3%) 1–2 mL/kg IV in 3–4 h. Hypertonic saline may lead to a rapid dilute diuresis and fall in the serum sodium. It should not be used in those with an expanded extracellular fluid volume. Electrolyte abnormalities In persons with hyponatremia due to low blood volume (hypovolemia) from diuretics with simultaneous low blood potassium levels, correction of the low potassium level can assist with correction of hyponatremia. Medications American and European guidelines come to different conclusions regarding the use of medications. In the United States they are recommended in those with SIADH, cirrhosis, or heart failure who fail limiting fluid intake. In Europe they are not generally recommended. There is tentative evidence that vasopressin receptor antagonists (vaptans), such as conivaptan, may be slightly more effective than fluid restriction in those with high volume or normal volume hyponatremia. They should not be used in people with low volume. They may also be used in people with chronic hyponatremia due to SIADH that is insufficiently responsive to fluid restriction and/or sodium tablets. Demeclocycline, while sometimes used for SIADH, has significant side effects including potential kidney problems and sun sensitivity. In many people it has no benefit while in others it can result in overcorrection and high blood sodium levels. Daily use of urea by mouth, while not commonly used due to the taste, has tentative evidence in SIADH. However, it is not available in many areas of the world. Precautions Raising the serum sodium concentration too rapidly may cause osmotic demyelination syndrome. Rapid correction of sodium levels can also lead to central pontine myelinolysis (CPM). 
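A worked instance of the sodium-deficit estimate quoted under Treatment above; the 0.6 total-body-water fraction is the conventional adult figure used in the text, and it varies with age, sex, and body composition, so this is arithmetic illustration, not dosing guidance:

```python
def sodium_deficit_mmol(weight_kg, serum_na_mmol_l, tbw_fraction=0.6):
    """Sodium deficit = (140 - serum Na) x total body water,
    with total body water = body weight x 0.6, as in the text."""
    total_body_water_l = weight_kg * tbw_fraction
    return (140.0 - serum_na_mmol_l) * total_body_water_l

# A 70 kg person with a serum sodium of 125 mmol/L:
# TBW = 70 x 0.6 = 42 L, deficit = (140 - 125) x 42 = 630 mmol.
print(sodium_deficit_mmol(70, 125))  # 630.0
```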
It is recommended not to raise the serum sodium by more than 10 mEq/L/day. Epidemiology Hyponatremia is the most commonly seen water–electrolyte imbalance. The disorder is more frequent in females, the elderly, and in people who are hospitalized. The number of cases of hyponatremia depends largely on the population. In hospital it affects about 15–20% of people; however, only 3–5% of people who are hospitalized have a sodium level less than 130 mmol/L. Hyponatremia has been reported in up to 30% of the elderly in nursing homes and is also present in approximately 30% of people who are depressed on selective serotonin reuptake inhibitors. People who have hyponatremia who require hospitalisation have a longer length of stay (with associated increased costs) and also have a higher likelihood of requiring readmission. This is particularly the case in men and in the elderly. References Further reading External links Hyponatremia at the Mayo Clinic Sodium at Lab Tests Online ICD-10 code for Hyponatremia - Diagnosis Code Electrolyte disturbances Mineral deficiencies Sodium Wikipedia medicine articles ready to translate Wikipedia neurology articles ready to translate Wilderness medical emergencies
wiki
Daks or variation, may refer to: Events Daks Day, or Groundhog Day Daks Tournament, a golf tournament in England, UK Companies, business, organizations DAKS, a British fashion house DAKS Simpson, a department store in Piccadilly, Westminster, London, England, UK Dakota Short Line (DAKS), see List of reporting marks: D Other uses daks, or sweatpants Daks Davidson (born 1998), an Australian AFLW player of Australian-rules football See also DAK (disambiguation) DACS (disambiguation) Dack (disambiguation)
wiki
Opus Dei is a personal prelature of the Catholic Church. Opus Dei may also refer to: Opus Dei (album), an album by Laibach Opus Dei (book), a 2005 book by John L. Allen Jr. Opus Dei, prayers in the Liturgy of the Hours of the Catholic Church
wiki
Kansai International Airport is the primary international airport in the Greater Osaka Area of Japan and the closest international airport to the cities of Osaka, Kyoto, and Kobe. It is located on an artificial island in the middle of Osaka Bay off the Honshu shore, southwest of Ōsaka Station, within three municipalities: Izumisano (north), Sennan (south), and Tajiri (central), in Osaka Prefecture. Kansai opened on 4 September 1994 to relieve overcrowding at the original Osaka International Airport, referred to as Itami Airport, which is closer to the city of Osaka. It consists of two terminals: Terminal 1 and Terminal 2. Terminal 1, designed by Italian architect Renzo Piano, is the longest airport terminal in the world. The airport serves as an international hub for All Nippon Airways, Japan Airlines, and Nippon Cargo Airlines, and also serves as a hub for Peach, the first international low-cost carrier in Japan. In 2016, 25.2 million passengers used the airport, making it the 30th busiest airport in Asia and third busiest in Japan. The freight volume was 802,162 tonnes total: 757,414 t international (18th in the world) and 44,748 t domestic. The second runway was opened on 2 August 2007. Kansai Airport has become an Asian hub, with 780 weekly flights to Asia and Australasia (including 119 freight), 59 weekly flights to Europe and the Middle East (5 freight), and 80 weekly flights to North America (42 freight). In 2020, Kansai received Skytrax's awards for Best Airport Staff in Asia, World's Best Airport Staff, and World's Best Airport for Baggage Delivery. History In the 1960s, when the Kansai region was rapidly losing trade to Tokyo, planners proposed a new airport near Kobe and Osaka. The city's original international airport, Itami Airport, located in the densely populated suburbs of Itami and Toyonaka, was surrounded by buildings; it could not be expanded, and many of its neighbours had filed complaints because of noise pollution problems. After the protests surrounding New Tokyo International Airport (now Narita International Airport), which was built with expropriated land in a rural part of Chiba Prefecture, planners decided to build the airport offshore. The new airport was part of a number of new developments to revitalize Osaka, which had been losing economic and cultural ground to Tokyo for most of the century. Initially, the airport was planned to be built near Kobe, but the city of Kobe refused the plan, so the airport was moved to a more southerly location on Osaka Bay. There it could be open 24 hours per day, unlike its predecessor in the city. Construction An artificial island, long and wide, was proposed. Engineers needed to overcome the extremely high risks of earthquakes and typhoons (with storm surges of up to ). The water depth is on top of of soft Holocene clay, which holds 70% water. A million sand drains were built into the clay to remove water and solidify the clay. Construction started in 1987. The sea wall was finished in 1989 (made of rock and 48,000 tetrapods). Three mountains were excavated for , and was used to construct island 1. Over three years, 10,000 workers using 80 ships took 10 million man-hours to complete the layer of earth over the sea floor and inside the sea wall. In 1990, a bridge was completed to connect the island to the mainland at Rinku Town, at a cost of $1 billion. 
Completion of the artificial island increased the area of Osaka Prefecture just enough so that it is no longer the smallest prefecture in Japan (Kagawa Prefecture is now the smallest). The bidding and construction of the airport was a source of international trade friction during the late 1980s and early 1990s. Prime Minister Yasuhiro Nakasone responded to American concerns, particularly from Senator Frank Murkowski, that bids would be rigged in Japanese companies' favour by providing special offices for prospective international contractors, which ultimately did little to ease the participation of foreign contractors in the bidding process. Later, foreign airlines complained that two-thirds of the departure hall counter space had been allocated to Japanese carriers, disproportionately to the actual carriage of passengers through the airport. The island had been predicted to sink by the most optimistic estimate as the weight of the material used for construction compressed the seabed silts. However, by 1999, the island had sunk – almost 50% more than predicted. The project became the most expensive civil works project in modern history after twenty years of planning, three years of construction and US$15bn of investment. Much of what was learned went into the successful artificial islands in silt deposits for New Kitakyushu Airport, Kobe Airport, and Chūbu Centrair International Airport. The lessons of Kansai Airport were also applied in the construction of Hong Kong International Airport. In 1991, the terminal construction commenced. To compensate for the sinking of the island, adjustable columns were designed to support the terminal building. These are extended by inserting thick metal plates at their bases. Government officials proposed reducing the length of the terminal to cut costs, but architect Renzo Piano insisted on keeping the terminal at its full planned length. The airport was opened on 4 September 1994. On 17 January 1995, Japan was struck by the Great Hanshin earthquake, the epicenter of which was about away from KIX and killed 6,434 people on Japan's main island of Honshū. Due to its earthquake engineering, the airport emerged unscathed, mostly due to the use of sliding joints. Even the glass in the windows remained intact. On 22 September 1998, the airport survived a typhoon with wind speeds over . On 19 April 2001, the airport was one of ten structures given the "Civil Engineering Monument of the Millennium" award by the American Society of Civil Engineers. , the total cost of Kansai Airport was $20 billion including land reclamation, two runways, terminals, and facilities. Most additional costs were initially due to the island sinking, expected due to the soft soils of Osaka Bay. After construction the rate of sinking was considered so severe that the airport was widely criticized as a geotechnical engineering disaster. The sink rate fell from per year during 1994 to per year in 2008. Operation Opened on 4 September 1994, the airport serves as a hub for several airlines such as All Nippon Airways, Japan Airlines, and Nippon Cargo Airlines. It is the international gateway for Japan's Kansai region, which contains the major cities of Kyoto, Kobe, and Osaka. Other Kansai domestic flights fly from the older but more conveniently located Osaka International Airport in Itami, or from the newer Kobe Airport. The airport had been deeply in debt, losing $560 million in interest every year. 
Airlines had been kept away by high landing fees (about $7,500 for a Boeing 747), the second most expensive in the world after Narita's. In the early years of the airport's operation, excessive terminal rent and utility bills for on-site concessions also drove up operating costs: some estimates before opening held that a cup of coffee would have to cost US$10. Osaka business owners pressed the government to take a greater burden of the construction cost to keep the airport attractive to passengers and airlines. On 17 February 2005, Chubu Centrair International Airport opened in Nagoya, just east of Osaka. The opening of the airport was expected to increase competition between Japan's international airports. Despite this, passenger totals were up 11% in 2005 over 2004, and international passengers increased to 3.06 million in 2006, up 10% over 2005. Adding to the competition were the opening of Kobe Airport, less than away, in 2006 and the lengthening of the runway at Tokushima Airport in Shikoku in 2007. The main rationale behind the expansions was to compete with Incheon International Airport and Hong Kong International Airport as a gateway to Asia, as Tokyo area airports were severely congested. Kansai saw a 5% year-on-year increase in international traffic in summer 2013, largely supported by low-cost carrier traffic to Taiwan and Southeast Asia overcoming a decrease in traffic to China and South Korea. The airport authority was allotted four billion yen in government support for fiscal year 2013, and the Ministry of Land, Infrastructure, and Transport and the Ministry of Finance agreed to reduce this amount in stages through fiscal year 2015, although local governments in the Kansai region have pressed for continued subsidies. Kansai has been marketed as an alternative to Narita Airport for international travelers from the Greater Tokyo Area. By flying to Kansai from Haneda Airport and connecting to international flights there, travelers can save the additional time required to get to Narita: up to one and a half hours for many residents of Kanagawa Prefecture and southern Tokyo. Expansion The airport was at its limit during peak times, owing especially to freight flights, so a portion of Phase II expansion—the second runway—was made a priority. Thus, in 2003, believing that the sinking problem was almost over, the airport operators started to construct a second runway and terminal. The second runway opened on 2 August 2007, but with the originally planned terminal portion postponed. This lowered the project cost to JPY¥910 billion (approx. US$8 billion), saving ¥650 billion from the first estimate. The additional runway development, which was opened in time for the IAAF world athletics championships in Osaka, has expanded the airport size to . The second runway is used for landings and when there are incidents prohibiting takeoff from runway A. The new runway allowed the airport to start 24-hour operations in September 2007. A new terminal building opened in late 2012. There are additional plans for several new aprons, a third runway (06C/24C) with a length of , a new cargo terminal and expanding the airport size to . However, the Japanese government has currently postponed these plans for economic reasons. 
Relationship with Itami Airport Since July 2008, Osaka Prefecture governor Toru Hashimoto has been a vocal critic of Itami Airport, arguing that the Chuo Shinkansen maglev line will make much of its domestic role irrelevant, and that its domestic functions should be transferred to Kansai Airport in conjunction with upgraded high-speed access to Kansai from central Osaka. In 2009, Hashimoto also publicly proposed moving the functions of Marine Corps Air Station Futenma to Kansai Airport as a possible solution for the political crisis surrounding the base. In May 2011, the Diet of Japan passed legislation to form a new Kansai International Airport Corporation using the state's existing equity stake in Kansai Airport and its property holdings at Itami Airport. The move was aimed at offsetting Kansai Airport's debt burden. The merger of the Itami and Kansai airport authorities was completed in July 2012. Shortly following the merger, Kansai Airport announced a 5% reduction in landing fees effective October 2012, with additional reductions during overnight hours when the airport is underutilized, and further discounts planned for the future, including subsidies for new airlines and routes. These moves were intended to bring Kansai's fees closer to the level of Narita International Airport, where landing fees were around 20% lower than Kansai's, and to improve competitiveness with other Asian hubs such as Incheon International Airport in South Korea. Since its formation, the new operating company has also made efforts toward international expansion, bidding for operating concessions at Yangon International Airport and Hanthawaddy International Airport in Myanmar. KIAC conducted a public tender to sell the operating rights for Kansai and Itami Airport in May 2015. Orix and Vinci SA were the sole bidders for the 45-year contract, at a price of around $18 billion. The new operating company, Kansai Airports, took over on 1 April 2016. It is 80% owned by Orix and Vinci, with the remaining 20% owned by Kansai-based enterprises such as Hankyu Hanshin Holdings and Panasonic. Typhoon Jebi On 4 September 2018, the airport was hit by Typhoon Jebi. The airport had to pause operations after seawater surges inundated the island; runways were hit, and the water reached up to the engines of some aircraft. The situation was further exacerbated when a large tanker crashed into the bridge that links the airport to the mainland, effectively stranding the people remaining at the airport. All flights at the airport were cancelled until 6 September, on which date Prime Minister Shinzō Abe announced the airport would partially resume domestic operations. Train services to the airport resumed from 18 September 2018 after repair works to the Kansai Airport Line and Nankai Airport Line were completed, and the airport resumed regular operations on 1 October 2018. Repairs to the damaged section of the Sky Gate Bridge R were finally completed on 8 April 2019, restoring traffic both to and from the mainland completely. Terminals Terminal 1 The main KIX passenger terminal, Terminal 1, is a single four-storey building designed by Renzo Piano Building Workshop (Renzo Piano and Noriaki Okabe). At a total length of from end to end, Terminal 1 is the longest airport terminal in the world. It has a sophisticated people mover system called the Wing Shuttle, which moves passengers from one end of the pier to the other. The terminal's roof is shaped like an airfoil. 
This shape is used to promote air circulation through the building: giant air conditioning ducts blow air upwards at one side of the terminal, circulate the air across the curvature of the ceiling, and collect it through intakes at the other side. Mobiles are suspended in the ticketing hall to take advantage of the flowing air. The ticketing hall overlooks the international departures concourse, and the two are separated by a glass partition. During Kansai's early days, visitors were known to throw objects over the partition to friends in the corridor below. The partition was eventually modified to halt this practice. On June 23, 2017, a game experience area known as "Nintendo Check In" opened in the terminal's promotion space. In this area, guests arriving at Terminal 1 can play Nintendo Switch games free of charge. There is a statue of Mario at the experience area, along with Super Mario Cappy caps from Super Mario Odyssey for passengers to take photos with, and Amiibo figurines are also on display. In the northern and southern arrival routes of Terminal 1, there are decorations of Nintendo characters such as Mario, Luigi, and Princess Peach welcoming arriving passengers. Terminal 2 Terminal 2 is a low-cost carrier (LCC) terminal designed to attract more LCCs by providing lower landing fees than Terminal 1. It is exclusively occupied by Peach, Spring Airlines, and Jeju Air. Other LCCs serving Kansai, such as Jetstar Airways, Jetstar Japan, and Cebu Pacific, use the main Terminal 1. Peach requested that Terminal 2 have a simplified design in order to minimize operating costs. The terminal is a single-storey building, thus eliminating the cost of elevators. Passageways to aircraft have no air conditioning. The terminal also has no jet bridges, having one boarding gate for domestic departures and one boarding gate for international departures. In case of rain, passengers are lent umbrellas to use as they walk to the aircraft. Terminal 2 is not directly connected to Terminal 1 or to Kansai Airport Station. Free shuttle buses run between the two terminals, and between Terminal 2 and the railway and ferry stations. It is also possible to walk between the terminals through the KIX Sora Park, a four-hectare park located adjacent to Terminal 2. Statistics Airlines and destinations Passenger Cargo Ground transportation Rail Kansai International Airport is connected to Rinku Town and the mainland only by the Sky Gate Bridge R, a combined road and railroad bridge. The lower, railroad level of the bridge is used by two railroad operators: JR West and Nankai Electric Railway. JR West operates the Haruka limited express train services for Kansai Airport Station from Tennōji, Ōsaka, Shin-Ōsaka, and Kyoto Station. JR West also offers services for Kansai Airport Station from Ōsaka, Kyōbashi and several stations on the way, with connecting train service to Wakayama available at Hineno Station. Various connections, such as buses, subways, trams, and other railroads, are available at each station. Nankai operates the rapi:t, a limited express train service to Namba Station on the southern edge of downtown Osaka. Osaka Metro connections are available at Namba and Tengachaya Station. Rail connections to and from Kansai Airport are expected to further improve access to and from Umeda with the opening of the Naniwasuji Line in 2031. 
Bus Kansai Airport Transportation Enterprise and other bus operators offer scheduled express bus services, called "Airport Limousines", for Kansai International Airport. Parking Two six-storey parking structures, called P1 and P2, are located above a railroad terminal station, while two other, two-level parking facilities, called P3 and P4, are situated next to "Aeroplaza", a hotel complex. The airport is only accessible from the Sky Gate Bridge R, a part of the Kansai Airport Expressway. The expressway immediately connects to Hanshin Expressway Route 5 (the "Wangan Route") and the Hanwa Expressway. Ferry service In July 2007, high-speed ferry service began. OM Kobe operates the "Bay Shuttle" between Kobe Airport and KIX. The journey takes about thirty minutes. Other facilities An office building houses the head office of the airport operating company on its fourth floor and the Peach Aviation head office on its fifth floor. Aeroplaza is located on the west side of Kansai Airport Station; it includes a hotel, restaurants, rental car counters, and other businesses. Hotel Nikko Kansai Airport (north portion of Kansai Airport). The Peach Aviation head office was previously located on a third floor in the central portion of Kansai Airport. Central power station (KEPCO) energy center, 40 MW. JAL Cargo import and export facilities (in the southern portion). Japan Coast Guard Kansai Airport air base. Japan Coast Guard Special Security Team base. Osaka international post office (carrying about 19,000 tonnes per year of international postal matter). Oil tanker berths (three berths) and fuel supply center. Airport access bridge ("The Sky Gate Bridge R"), which as of 2013 is the longest truss bridge in the world. See also Kansai Airports Kobe Airport Itami Airport References Further reading Hausler, E. and N. Sitar. "Performance of Soil Improvement Techniques in Earthquakes." (Archive) (Report in Progress) Pacific Earthquake Engineering Research Center, University of California Berkeley. External links History of KIX at Kansai Airports Kansai International Airport Land Co., Ltd. Kansai International Airport Project by Focchi Group 1994 establishments in Japan Airports established in 1994 Airports in Japan Artificial island airports Artificial islands of Japan Buildings and structures in Osaka Prefecture Kansai region Ove Arup buildings and structures Renzo Piano buildings Transport in Osaka Prefecture Izumisano Sennan, Osaka Tajiri, Osaka
wiki
The Bank of Calcutta (a precursor to the present State Bank of India) was founded on 2 June 1806, mainly to fund General Wellesley's wars against Tipu Sultan and the Marathas. It was the tenth oldest bank in India and was renamed the Bank of Bengal on 2 January 1809. History The bank opened branches at Rangoon (1861), Patna (1862), Mirzapur (1862), and Benares (1862). When it became known that the bank intended to open a branch at Dacca, negotiations began that resulted in the Bank of Bengal amalgamating the Dacca Bank (founded 1846) in 1862. A branch at Cawnpore followed. Famous Customers Among the bank's renowned customers were scholar and politician Dadabhai Naoroji, scientist Jagadish Chandra Bose, India's first President Rajendra Prasad, Nobel laureate Rabindranath Tagore, and educationalist Ishwar Chandra Vidyasagar. Work The bank was risk averse and would not lend for more than three months, leading local businessmen, both British and Indian, to launch private banks, many of which failed. The most storied bank failure was the Union Bank (1828), founded by Dwarakanath Tagore in partnership with British companies. The Bank of Calcutta and the two other Presidency banks (the Bank of Bombay and the Bank of Madras) amalgamated on 27 January 1921. The reorganized banking entity assumed the name Imperial Bank of India. In 1955, the Reserve Bank of India, the central banking organization of India, acquired a controlling interest in the Imperial Bank of India, and on 30 April 1955 the Imperial Bank of India was renamed the State Bank of India. See also Banking in India History of banking Citations and references Citations References Further reading 1806 establishments in British India 1861 establishments in Burma 1861 establishments in the British Empire 19th century in Kolkata Banks established in 1806 Banks disestablished in 1921 Banks based in Kolkata Calcutta Bengal Presidency Indian companies established in 1806
wiki
A deposit is the act of placing cash (or cash equivalent) with some entity, most commonly with a financial institution, such as a bank. The deposit is a credit for the party (individual or organization) who placed it, and it may be taken back (withdrawn) in accordance with the terms agreed at time of deposit, transferred to some other party, or used for a purchase at a later date. Deposits are usually the main source of funding for banks. Types Demand deposit A demand deposit is a deposit that can be withdrawn or otherwise debited on short notice. Transaction accounts (known as "checking" or "current" accounts depending on the country) can be used to pay other parties, while savings accounts are typically payable only to the depositor or another bank account, and may have limits on the frequency of withdrawal. Time deposit Deposits kept for a specific time period are called time deposits or, often, term deposits. A term deposit (or time deposit) bears a fixed term and a fixed interest rate: Fixed deposit in India Certificate of deposit in the U.S. and Canada Overnight lending usually occurs from noon to noon, using a special rate. A deposit may also be given as security or in part payment. Special deposit Normally any money deposited to a bank becomes property of the bank, for which it is liable to return the same monetary value, but not the same money. This is the foundation of fractional-reserve banking, since the bank can lend out the money that it owns while owing an obligation to the depositor. A special deposit is one made under an agreement to hold the deposit separately from the bank's assets, so that the same assets can be returned. Items placed in a safe deposit box are examples of special deposits. See also Deposit slip Passbook Deposit account Security deposit References Banking terms
wiki
Alico may refer to: Alico Arena, an athletics facility at Florida Gulf Coast University donated by Alico (company) ALICO Building, in Waco, Texas American Life Insurance Company, now part of MetLife Platani (river), a river in Sicily
wiki
'Detroit Red' is a variable apple cultivar, possibly the same as 'Detroit Black', that gives fruit of mediocre quality, somewhat unreliably or biennially. It was grown at Monticello by Thomas Jefferson. References Apple cultivars
wiki
Dusk occurs at the darkest stage of twilight, or at the very end of astronomical twilight after sunset and just before nightfall. During the early to intermediate stages of twilight, there may be enough light in the sky under clear conditions to read outdoors without artificial illumination; however, at the end of civil twilight (when the Earth has rotated to a point at which the center of the Sun's disk is 6° below the local horizon), such lighting is required to read outside. The term dusk usually refers to astronomical dusk, or the darkest part of twilight before night begins. Technical definitions The time of dusk is the moment at the very end of astronomical twilight, just before the minimum brightness of the night sky sets in, and may be thought of as the darkest part of evening twilight. Technically, however, the three stages of dusk are as follows: At civil dusk, the center of the Sun's disc goes 6° below the horizon in the evening. It marks the end of civil twilight, which begins at sunset. At this time objects are still distinguishable and, depending on weather conditions, some stars and planets may start to become visible to the naked eye. The sky has many colors at this time, such as orange and red. Beyond this point artificial light may be needed to carry out outdoor activities, depending on atmospheric conditions and location. At nautical dusk, the center of the Sun's disc reaches 12° below the horizon in the evening. It marks the end of nautical twilight, which begins at civil dusk. At this time, objects are less distinguishable, and stars and planets appear to brighten. At astronomical dusk, the Sun's position is 18° below the horizon in the evening. It marks the end of astronomical twilight, which begins at nautical dusk. After this time the Sun no longer illuminates the sky, and thus no longer interferes with astronomical observations. Media Dusk can be used to create an ominous tone and has been used as a title for many projects. One instance is Dusk (video game), a 2018 first-person shooter by New Blood Interactive whose levels evoke the lighting of the actual time of day. Gallery See also Dawn Sunrise Sunset Twilight References Earth phenomena Parts of a day Night
wiki
Athlon XP was formerly AMD's top processor series; it has since been succeeded by the Athlon 64. See also AMD Athlon 64 External links X86 microprocessors AMD
wiki
The Order of the Württemberg Crown (Orden der Württembergischen Krone) was an order of chivalry in Württemberg. History First established in 1702 as the St.-Hubertus-Jagdorden (Order of St Hubert), it was renamed the "Ritterorden vom Goldenen Adler" (Order of the Golden Eagle) by Frederick I in 1807, and on 23 September 1818 it was renewed and restructured (at the same time as the civil orders) by William I as the "Order of the Württemberg Crown", initially with 3 classes (grand cross, komtur, knight). In 1918 the order was expanded and changed. Its motto reads: Furchtlos und treu (fearless and loyal). Until 1913 the higher classes were restricted to the nobility. In descending order, its ranks were: Grand cross for sovereigns Grand cross Commander with star (since 1889) Commander Honour cross (Ehrenkreuz; Steckkreuz since 1892) Knight (since 1892 with golden lions, and since 1864 also with a crown, as a special honour) Gold service medal (Verdienstmedaille) Silver service medal (Verdienstmedaille, abolished 1892) Insignia Cross The order's cross was a white enameled Maltese cross with gold lions in its four angles. The lions came as standard for the grand cross and Komtur, but appeared on knights' crosses only as a special honour. On the upper arm, a golden crown was secured by means of two gold bands, from which – except in the honour cross in its pin-back (Steckkreuz) form – the cross hung. The medallion was gold and blue on the front, and in the middle were the golden initials of King Frederick I and a crown; on the back was a golden crown, on red. Since 1866, all grades could be awarded with swords. With the changes of 1890, the swords were granted only with awards of a higher class. Since 1892 the lowest grades (knight 2nd class in 1870–1886, after that the honour cross) could also have the special honours of golden lions and (since 1864) a crown added. Stars The star of the grand cross was a silver 8-pointed star in whose middle was a reduced cross in a medallion with a circular motto in the centre. Sovereigns received the star in gold. The Komtur star (from 1889 reserved for the Commander with star) was a 4-pointed silver star whose rays went through the angles of the cross. Ribbon The ribbon was carmine red with black stripes and carmine borders. Members of reigning houses received insignia of the grand cross with a ribbon in scarlet. Awards Many awards were made – in the First World War alone, the numbers were: Gold service medal (Verdienstmedaille): 141 Knight's cross with swords: about 350 in total Knight's cross with swords and lions: 80 Honour cross (Ehrenkreuz) with swords: about 160 Komtur with swords: 75 Komtur with star and swords: 6 Grand cross with swords: 6 As an extraordinary instance, the grand cross "in Brillanten" (with diamonds) was granted to Reichskanzler Otto von Bismarck in 1871. 
Grand Crosses Prince Adalbert of Prussia (1811–1873) Prince Adalbert of Prussia (1884–1948) Duke Adam of Württemberg Adolf I, Prince of Schaumburg-Lippe Duke Adolf Friedrich of Mecklenburg Prince Adolf of Schaumburg-Lippe Adolphe, Grand Duke of Luxembourg Adolphus Frederick V, Grand Duke of Mecklenburg-Strelitz Albert I, Prince of Monaco Prince Albert of Prussia (1809–1872) Albert of Saxony Prince Albert of Saxony (1875–1900) Archduke Albrecht, Duke of Teschen Albert, Prince Consort Prince Albert of Prussia (1837–1906) Albrecht, Duke of Württemberg Alexander II of Russia Alexander III of Russia Alexander Frederick, Landgrave of Hesse Alexander of Battenberg Prince Alexander of Hesse and by Rhine Duke Alexander of Oldenburg Duke Alexander of Württemberg (1771–1833) Duke Alexander of Württemberg (1804–1881) Duke Alexander of Württemberg (1804–1885) Alexander, Prince of Orange Grand Duke Alexei Alexandrovich of Russia Alexis, Prince of Bentheim and Steinfurt Alfonso XIII Alfred, Duke of Saxe-Coburg and Gotha Alfred, 2nd Prince of Montenuovo Prince Arnulf of Bavaria Alexander Cambridge, 1st Earl of Athlone Prince August of Württemberg August, Prince of Hohenlohe-Öhringen Maximilian de Beauharnais, 3rd Duke of Leuchtenberg Prince Bernhard of Saxe-Weimar-Eisenach (1792–1862) Theobald von Bethmann Hollweg Friedrich Ferdinand von Beust Hans Alexis von Biehler Friedrich Wilhelm von Bismarck Herbert von Bismarck Otto von Bismarck Leonhard Graf von Blumenthal Jérôme Bonaparte Jérôme Napoléon Bonaparte Felix Graf von Bothmer Paul Bronsart von Schellendorff Walther Bronsart von Schellendorff Bernhard von Bülow Count Karl Ferdinand von Buol Adolphus Cambridge, 1st Marquess of Cambridge Leo von Caprivi Carl, Duke of Württemberg Carol I of Romania Jean-Baptiste de Nompère de Champagny Charles I of Württemberg Charles Augustus, Hereditary Grand Duke of Saxe-Weimar-Eisenach (1844–1894) Prince Charles of Prussia Archduke Charles Stephen of Austria Chlodwig, Prince of Hohenlohe-Schillingsfürst Christian IX of Denmark Constantine I of Greece Constantine, Prince of Hohenzollern-Hechingen Duke Constantine Petrovich of Oldenburg Diane, Duchess of Württemberg Grand Duke Dmitry Konstantinovich of Russia Eduard, Duke of Anhalt Edward VII Prince Edward of Saxe-Weimar Prince Eitel Friedrich of Prussia Ernest II, Duke of Saxe-Coburg and Gotha Ernest Louis, Grand Duke of Hesse Ernst I, Prince of Hohenlohe-Langenburg Ernst I, Duke of Saxe-Altenburg Ernst II, Prince of Hohenlohe-Langenburg Ernst Gunther, Duke of Schleswig-Holstein Ernst II, Duke of Saxe-Altenburg Ernst Leopold, 4th Prince of Leiningen Max von Fabeck Géza Fejérváry Ferdinand IV, Grand Duke of Tuscany Archduke Ferdinand Karl of Austria Francis II of the Two Sicilies Francisco de Asís, Duke of Cádiz Frederic von Franquemont Archduke Franz Ferdinand of Austria Franz Joseph I of Austria Archduke Franz Karl of Austria Prince Franz of Bavaria Frederick Augustus II, Grand Duke of Oldenburg Frederick Augustus III of Saxony Frederick Francis II, Grand Duke of Mecklenburg-Schwerin Frederick Francis III, Grand Duke of Mecklenburg-Schwerin Frederick I, Duke of Anhalt Frederick I, Grand Duke of Baden Frederick III, German Emperor Prince Frederick of Württemberg Prince Frederick of the Netherlands Friedrich II, Duke of Anhalt Friedrich Ferdinand, Duke of Schleswig-Holstein Friedrich Hermann Otto, Prince of Hohenzollern-Hechingen Prince Friedrich Leopold of Prussia Archduke Friedrich, Duke of Teschen Charles Egon II, Prince of Fürstenberg Maximilian Egon II, Prince of 
Fürstenberg Georg II, Duke of Saxe-Meiningen George I of Greece George V of Hanover George V George, King of Saxony George Victor, Prince of Waldeck and Pyrmont Friedrich von Georgi Friedrich von Gerok (officer) Heinrich von Gossler Gustaf V Wilhelm von Hahnke Max von Hausen Samu Hazai Heinrich XXVII, Prince Reuss Younger Line Prince Henry of Prussia (1862–1929) Heinrich VII, Prince Reuss of Köstritz Prince Henry of the Netherlands (1820–1879) Hermann, Prince of Hohenlohe-Langenburg Prince Hermann of Saxe-Weimar-Eisenach (1825–1901) Philip, Landgrave of Hesse-Homburg Paul von Hindenburg Prince Konrad of Hohenlohe-Schillingsfürst Henning von Holtzendorff Dietrich von Hülsen-Haeseler Prince Joachim of Prussia Prince Johann Georg of Saxony John of Saxony Duke John Albert of Mecklenburg Archduke Joseph Karl of Austria Joseph, Duke of Saxe-Altenburg Georg von Kameke Karl Anton, Prince of Hohenzollern Prince Karl of Bavaria (1874–1927) Prince Karl Theodor of Bavaria Karl Theodor, Duke in Bavaria Karl, Prince of Hohenzollern-Sigmaringen Hans von Koester Grand Duke Konstantin Konstantinovich of Russia Grand Duke Konstantin Nikolayevich of Russia Konstantin of Hohenlohe-Schillingsfürst Hermann Kövess von Kövessháza Leopold II of Belgium Archduke Leopold Ferdinand of Austria Prince Leopold, Duke of Albany Prince Leopold of Bavaria Eugen Maximilianovich, 5th Duke of Leuchtenberg George Maximilianovich, 6th Duke of Leuchtenberg Louis III, Grand Duke of Hesse Louis II, Grand Duke of Baden Prince Louis of Battenberg Ludwig I of Bavaria Ludwig III of Bavaria Archduke Ludwig Viktor of Austria Luís I of Portugal Luitpold, Prince Regent of Bavaria Maximilian Karl, 6th Prince of Thurn and Taxis Prince Maximilian of Baden Julius von Mayer Emperor Meiji Grand Duke Michael Nikolaevich of Russia Milan I of Serbia Helmuth von Moltke the Elder Georg Alexander von Müller Napoleon III Duke Nicholas of Württemberg Nicholas I of Russia Nicholas II of Russia Nicholas Alexandrovich, Tsesarevich of Russia Grand Duke Nicholas Nikolaevich of Russia (1831–1891) Grand Duke Nicholas Nikolaevich of Russia (1856–1929) Prince Nikolaus Wilhelm of Nassau Alexey Fyodorovich Orlov Oscar II Archduke Otto of Austria (1865–1906) Prince Paul of Württemberg Duke Paul Wilhelm of Württemberg Peter II, Grand Duke of Oldenburg Duke Peter of Oldenburg Philipp Albrecht, Duke of Württemberg Duke Philipp of Württemberg Prince Philippe, Count of Flanders Hans von Plessen Moritz Karl Ernst von Prittwitz Joseph Radetzky von Radetz Antoni Wilhelm Radziwiłł Archduke Rainer Ferdinand of Austria Duke Robert of Württemberg Albrecht von Roon Prince Rudolf of Liechtenstein Rudolf, Crown Prince of Austria Rupprecht, Crown Prince of Bavaria Prince William of Schaumburg-Lippe Sigismund von Schlichting Alfred von Schlieffen Ludwig von Schröder Grand Duke Sergei Alexandrovich of Russia Archduke Stephen of Austria (Palatine of Hungary) Rudolf Stöger-Steiner von Steinstätten Otto Graf zu Stolberg-Wernigerode Ludwig Freiherr von und zu der Tann-Rathsamhausen Francis, Duke of Teck Alfred von Tirpitz Umberto I of Italy Victor Emmanuel III of Italy Grand Duke Vladimir Alexandrovich of Russia Illarion Vorontsov-Dashkov Alfred von Waldersee Karl von Weizsäcker August von Werder Wilhelm II, German Emperor Wilhelm Karl, Duke of Urach Prince Wilhelm of Prussia (1783–1851) Prince Wilhelm of Saxe-Weimar-Eisenach Wilhelm, Duke of Urach Prince William of Baden William I, German Emperor William II of Württemberg William IV William Ernest, Grand Duke of Saxe-Weimar-Eisenach 
Duke William Frederick Philip of Württemberg Prince William of Baden (1829–1897) Duke William of Württemberg William, Duke of Brunswick William, Prince of Hohenzollern William, Prince of Wied Duke Eugen of Württemberg (1788–1857) Duke Eugen of Württemberg (1820–1875) Duke Eugen of Württemberg (1846–1877) Duke Ferdinand Frederick Augustus of Württemberg Ferdinand von Zeppelin Commanders Erwin Bälz Paul von Bruns Victor von Bruns Adolf von Deines Karl Ludwig d'Elsa Christian Wilhelm von Faber du Faur Maximilian Vogel von Falckenstein Wilhelm von Gümbel Jakob von Hartmann Eberhard von Hofacker Johann Baptist von Keller Carl Friedrich Kielmeyer Wilhelm Frederick von Ludwig Karl von Luz August von Mackensen Curt von Morgen Christian Friedrich von Otto Friedrich von Payer Friedrich August von Quenstedt Rudolf von Roth Gustav Rümelin Hans von Seeckt Gustav von Senden-Bibran Christoph von Sigwart Bertel Thorvaldsen Karl Heinrich Weizsäcker Sir James Wylie, 1st Baronet Honour Crosses Hermann Bauer Paul Clemens von Baumgarten Alexander von Brill William G. S. Cadogan Max Eyth Wilhelm Groener Paul Grützner Erich von Gündell Carl Magnus von Hell Adolf Wild von Hohenborn Ewald von Lochow Robert von Ostertag Eduard Pfeiffer Edmund Pfleiderer Hubert von Rebeur-Paschwitz Walther Reinhardt Max von Schillings Kilian von Steiner Hermann Vöchting Oskar von Watter Knights Fedor von Bock Theodor Endres Alexander von Falkenhausen Hans von Feldmann Victor Franke Hans von Gronau Eduard von Kallee Fritz von Loßberg Eberhard Graf von Schmettow Otto von Stülpnagel Gold Service Medal Silver Service Medal Unclassified Rudolf von Brudermann Fevzi Çakmak Kurt Eberhard Justus Hecker Otto Keller (philologist) Otto von Moser Christian Wirth Nikola Zhekov Bibliography Jörg Nimmergut: Handbuch Deutsche Orden. Saarbrücken 1989, pp. 315–320. Jörg Nimmergut: Deutsche Orden und Ehrenzeichen 1800–1945, Bd. III: Sachsen – Württemberg I. München 1999, pp. 1677–1704. Crown (Wurttemberg), Order of the Orders, decorations, and medals of Württemberg 1702 establishments in the Holy Roman Empire Awards established in 1702
wiki
People Grace Kelly (1929–1982), American actress and Princess of Monaco; Grace Kelly (born 1992), jazz musician; (1877–1950), painter and art critic. Works Grace of Monaco (Grace de Monaco), a 2014 film directed by Olivier Dahan; Grace Kelly, a 2007 song by Mika, the single from his debut album Life in Cartoon Motion.
wiki
Mexicana may refer to: a woman born in Mexico Mexicana de Aviación, a former airline of Mexico Mexicana (ship), a topsail schooner built in 1791 by the Spanish Navy Mexicana (film), a 1945 American film Mexicana (genus), a genus of monogenean parasites Mexicana (website), a web portal Mexicana (Mexicana Con Orgullo), a Mexican soft drink See also Mexicano (disambiguation)
wiki
The Rosh Hashanah seder is a Jewish custom (minhag) in which symbolic foods are eaten, and psalms and blessings recited, during the festive evening meal of Rosh Hashanah. The symbolic foods served at the seder are known as simanim (signs), and each is eaten in a set order together with a short blessing or prayer. History According to author Rahel Musleah, the tradition of holding a seder on Rosh Hashanah is at least 2000 years old. It has especially been practiced among the Sephardi communities of the Mediterranean region. Foods The following foods are traditionally eaten, though individual customs vary: Beets Dates Leeks Pomegranates Pumpkins Beans Most commonly, a piece of apple is dipped in honey. See also Apples and honey References External links Rosh Hashanah seder according to Sephardi customs Rosh Hashanah Jewish festive meals
wiki
The Trojan Horse is a 1940 thriller novel by the British writer Hammond Innes. A London lawyer decides to help a German inventor suspected of murder. References Bibliography James Vinson & D. L. Kirkpatrick. Contemporary Novelists. St. James Press, 1986. 1940 British novels Novels by Hammond Innes British thriller novels Novels set in London William Collins, Sons books
wiki
The State of California operates a low-cost automobile liability insurance program (LCA) that helps people whose income is below a certain level to purchase insurance at greatly reduced rates. The objective is to give all residents of California the opportunity to be insured by providing affordable options. Applicants must meet certain income requirements; for example, a single person's income cannot exceed 250% of the poverty level. A few other states have similar programs. The rates, or premiums, vary by county in the State of California. The down payment is 15% of the nominal premium, and the remaining balance is divided into six installments, paid bi-monthly (every other month). Details on the LCA program can be found on the CA DMV website. References Insurance in the United States
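As a rough illustration of the payment structure described above, the following minimal Python sketch computes the down payment and installments; the premium figure is hypothetical, and actual premiums vary by county.

def lca_payment_schedule(nominal_premium, down_payment_rate=0.15, installments=6):
    # Illustrative plan: a percentage down payment up front, with the
    # remaining balance split into equal installments paid every other month
    down_payment = nominal_premium * down_payment_rate
    installment = (nominal_premium - down_payment) / installments
    return down_payment, installment

down, per_installment = lca_payment_schedule(400.00)   # hypothetical $400 premium
print(f"Down payment: ${down:.2f}")                    # $60.00
print(f"Six payments of ${per_installment:.2f} each")  # $56.67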
wiki
In biology, a subculture is either a new cell culture or a microbiological culture made by transferring some or all cells from a previous culture to fresh growth medium. This action is called subculturing or passaging the cells. Subculturing is used to prolong the lifespan and/or increase the number of cells or microorganisms in the culture. Role Cell lines and microorganisms cannot be held in culture indefinitely due to the gradual rise in toxic metabolites, use of nutrients and increase in cell number due to growth. Once nutrients are depleted and levels of toxic byproducts increase, the bacteria in the overnight culture enter the stationary phase, where proliferation is greatly reduced or ceased (the cell density value plateaus). When microorganisms from this overnight culture are transferred into fresh media, nutrients trigger the growth of the microorganism and it goes through the lag phase, a period of slow growth and adaptation to the new environment, and then the log phase, a period where the cells grow exponentially. Subculture is therefore used to produce a new culture with a lower density of cells than the originating culture, fresh nutrients and no toxic metabolites, allowing continued growth of the cells without risk of cell death. Subculture is important for both proliferating (e.g. a microorganism like E. coli) and non-proliferating (e.g. terminally differentiated white blood cells) cells. Subculturing can also be used for growth curve calculations (e.g., generation time) and obtaining log-phase microorganisms for experiments (e.g., bacterial transformation); a worked example of this arithmetic is sketched at the end of this article. Typically, subculture is from a culture of a certain volume into fresh growth medium of equal volume; this allows long-term maintenance of the cell line. Subculture into a larger volume of growth medium is used to increase the number of cells for use in, for example, an industrial process or scientific experiment. Passage number It is often important to record the approximate number of divisions cells have had in culture by recording the number of passages or subcultures. In the case of plant tissue cells, somaclonal variation may arise over long periods in culture. Similarly, in mammalian cell lines chromosomal aberrations have a tendency to increase over time. For microorganisms there is a tendency to adapt to culture conditions, which are rarely precisely like the microorganism's natural environment; this can alter their biology. Protocols for passaging The protocol for subculturing cells depends heavily on the properties of the cells involved. Non-adherent cells Many cell types, in particular many microorganisms, grow in solution and not attached to a surface. These cell types can be subcultured by simply taking a small volume of the parent culture and diluting it in fresh growth medium. Cell density in these cultures is normally measured in cells per milliliter for large eukaryotic cells, or as optical density at 600 nm for smaller cells like bacteria. The cells will often have a preferred range of densities for optimal growth, and subculture will normally try to keep the cells in this range. Adherent cells Adherent cells, for example many mammalian cell lines, grow attached to a surface such as the bottom of the culture flask. These cell types have to be detached from the surface before they can be subcultured. For adherent cells, cell density is normally measured in terms of confluency, the percentage of the growth surface covered by cells. The cells will often have a preferred range of confluencies for optimal growth; for example, mammalian cell lines like HeLa or RAW 264.7 generally prefer confluencies over 10% but under 100%, and subculture will normally try to keep the cells in this range. For subculture, cells may be detached by one of several methods, including trypsin treatment to break down the proteins responsible for surface adherence, chelating calcium ions with EDTA, which disrupts some protein adherence mechanisms, or mechanical methods like repeated washing or use of a cell scraper. The detached cells are then resuspended in fresh growth medium and allowed to settle back onto their growth surface. See also Trypsinization References Microbiology terms
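To make the growth-curve and dilution arithmetic described above concrete, here is a minimal Python sketch; the function names and example numbers are illustrative only, not a laboratory protocol.

import math

def generation_time(od_start, od_end, hours):
    # Doubling time estimated from two OD600 readings taken during log phase
    doublings = math.log2(od_end / od_start)
    return hours / doublings

def subculture_volume(od_current, od_target, final_volume_ml):
    # Volume of parent culture to transfer so the fresh culture starts at
    # the target density (simple dilution: C1*V1 = C2*V2)
    return od_target * final_volume_ml / od_current

# Example: a culture grows from OD600 0.1 to 0.8 in 3 hours,
# then is passaged into 50 mL of fresh medium at a starting OD600 of 0.05.
print(generation_time(0.1, 0.8, 3.0))    # 1.0 hour per doubling
print(subculture_volume(0.8, 0.05, 50))  # 3.125 mL of parent culture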
wiki
Year of the Horse is a 1997 documentary film directed by Jim Jarmusch, covering a rock concert tour by Neil Young and Crazy Horse. Production Plot Notes External links American documentary films Music documentaries Films directed by Jim Jarmusch
wiki
Chibchanomys is a genus of rodent in the family Cricetidae. It contains the following species: Las Cajas water mouse (Chibchanomys orcesi) Chibchan water mouse (Chibchanomys trichotis) References Rodent genera Rodents of South America Taxonomy articles created by Polbot
wiki
Club Paradise is a 1986 American comedy film. Club Paradise may also refer to: "Club Paradise", a song from the album Take Care by Canadian recording artist Drake Club Paradise Tour, a concert tour by Drake Sensation Hunters (1945 film), a film also known as Club Paradise
wiki
Broad Street is a terminal station of the New York City Subway, on the J and Z lines. It is located in the borough of Manhattan in New York City, beyond the Fulton Street station. It was opened on May 30, 1931. References External links Subway stations in Manhattan
wiki
Opteron is AMD's line of x86-64 server processors. x86 microprocessors AMD
wiki
Induction sealing is the process of bonding thermoplastic materials by induction heating. This involves controlled heating of an electrically conducting object (usually aluminum foil) by electromagnetic induction, through heat generated in the object by eddy currents. Induction sealing is used in many types of manufacturing. In packaging it is used for package fabrication, such as forming tubes from flexible materials and attaching plastic closures to package forms. Probably the most common use of induction sealing is cap sealing, a non-contact method of heating an inner seal to hermetically seal the top of plastic and glass containers. This sealing process takes place after the container has been filled and capped. Sealing process The closure is supplied to the bottler with an aluminum foil layer liner already inserted. Although there are various liners to choose from, a typical induction liner is multi-layered. The top layer is a paper pulp that is generally spot-glued to the cap. The next layer is wax that is used to bond a layer of aluminum foil to the pulp. The bottom layer is a polymer film laminated to the foil. After the cap or closure is applied, the container passes under an induction coil, which emits an oscillating electromagnetic field. As the container passes under the induction coil (sealing head) the conductive aluminum foil liner begins to heat due to eddy currents. The heat melts the wax, which is absorbed into the pulp backing and releases the foil from the cap. The polymer film also heats and flows onto the lip of the container. When cooled, the polymer creates a bond with the container, resulting in a hermetically sealed product. Neither the container nor its contents are negatively affected, and the heat generated does not harm the contents. It is possible to overheat the foil, causing damage to the seal layer and to any protective barriers. This could result in faulty seals, even weeks after the initial sealing process, so proper sizing of the induction sealing system is vital to determine the exact setup necessary to run a particular product. Sealing can be done with either a hand-held unit or on a conveyor system. A more recent development (which suits a small number of applications better) allows for induction sealing to be used to apply a foil seal to a container without the need for a closure. In this case, foil is supplied pre-cut or in a reel. Where supplied in a reel, it is die cut and transferred onto the container neck. When the foil is in place, it is pressed down by the seal head, the induction cycle is activated and the seal is bonded to the container. This process is known as direct application or sometimes "capless" induction sealing. Reasons that induction sealing may be useful There are a variety of reasons companies choose to use induction sealing: Tamper evidence Leak prevention Freshness retention Protection against package pilferage Sustainability Production speed Tamper evidence With the U.S. Food and Drug Administration (FDA) regulations concerning tamper-resistant packaging, pharmaceutical packagers must find ways to comply as outlined in Sec. 450.500 Tamper-Resistant Packaging Requirements for Certain over-the-counter (OTC) Human Drug Products (CPG 7132a.17). Induction sealing systems meet or exceed these government regulations. As stated in section 6 of Packaging Systems: "…6. CONTAINER MOUTH INNER SEALS. Paper, thermal plastic, plastic film, foil, or a combination thereof, is sealed to the mouth of a container (e.g., bottle) under the cap. 
The seal must be torn or broken to open the container and remove the product. The seal cannot be removed and reapplied without leaving visible evidence of entry. Seals applied by heat induction to plastic containers appear to offer a higher degree of tamper-resistance than those that depend on an adhesive to create the bond…" Leak prevention/protection Some shipping companies require liquid chemical products to be sealed prior to shipping to prevent hazardous chemicals from spilling on other shipments. Freshness Induction sealing keeps unwanted pollutants from seeping into food products, and may assist in extending the shelf life of certain products. Pilferage protection Induction-sealed containers help protect against pilferage, since removing the liner leaves a noticeable residue on plastic containers. Pharmaceutical companies purchase liners that will purposely leave liner film/foil residue on bottles. Food companies that use induction seals do not want the liner residue, as it could potentially interfere with the product itself upon dispensing. They, in turn, put a notice on the product that it has been induction-sealed for the consumer's protection, letting the consumer know it was sealed upon leaving the factory and that they should check for an intact seal before using. Sustainability In some applications, induction sealing can be considered to contribute towards sustainability goals by allowing lower bottle weights, as the pack relies on the presence of an induction foil seal for its security rather than a mechanically strong bottle neck and closure. Induction heating analysis Some manufacturers have produced devices which can monitor the magnetic field strength present at the induction head (either directly or indirectly via such mechanisms as pick-up coils), dynamically predicting the heating effect in the foil. Such devices provide quantifiable data post-weld in a production environment where uniformity – particularly in parameters such as foil peel-off strength – is important. Analysers may be portable or designed to work in conjunction with conveyor belt systems. High-speed power analysis techniques (voltage and current measurement in near real time) can be used to intercept power delivery from mains to generator or generator to head in order to calculate the energy delivered to the foil and the statistical profile of that process. As the thermal capacity of the foil is typically static, such information as true power, apparent power and power factor may be used to predict foil heating with good relevance to final weld parameters, and in a dynamic manner (a sketch of this calculation appears at the end of this article). Many other derivative parameters may be calculated for each weld, yielding a confidence in a production environment that is notably more difficult to achieve in conduction transfer systems, where analysis, if present, is generally post-weld, as the relatively large combined thermal mass of the heating and conduction elements impairs rapid temperature change. Inductive heating with quantitative feedback, such as that provided by power analysis techniques, further allows for the possibility of dynamic adjustments in the energy delivery profile to the target. This opens the possibility of feed-forward systems where the induction generator properties are adjusted in near real time as the heating process proceeds, allowing for a specific heating profile track and subsequent compliance feedback – something that is not generally practical for conduction heating processes. Benefits of induction vs. conduction sealing Conduction sealing requires a hard metal plate to make perfect contact with the container being sealed. Conduction sealing systems delay production time because of required system warm-up time. They also have complex temperature sensors and heaters. Unlike conduction sealing systems, induction sealing systems require very little power resources, deliver instant startup time, and have a sealing head which can conform to "out of specification" containers when sealing. Induction sealing also offers advantages when sealing to glass: using a conduction sealer to seal a simple foil structure to glass gives no tolerance or compressibility to allow for any irregularity in the glass surface finish. With an induction sealer, the contact face can be of a compressible material, ensuring a perfect bond each time. Variety of products that use induction sealing Pharmaceutical Nutraceutical Food Dairy Beverage Cosmetics, health & beauty Automotive petroleum products Chemical Agricultural and ag chem. Animal care and medicines Sporting goods supplies Children's toys (clays, bubbles, etc.) Pastes Paints Home remodeling products Musical instrument supplies (cleaners, resins, lubricants, polishes) Dental Personal pleasure products Hunting / fishing aids Computer aids / inks Laundry detergent / products Manufacturing shop supplies School supply products Inks, dyes, carbon products Condoms History 1957–1958 – Original concept and method for induction sealing is conceived and proven by Jack Palmer (a process engineer at that time for the FR Corporation, Bronx, NY) as a means of solving liquid leakage from polyethylene bottles during shipment 1960 – A patent is awarded to Jack Palmer, in which his concept and process of induction sealing is made public Mid-1960s – Induction sealing is used worldwide 1973 – First solid state cap sealer introduced 1982 – Chicago Tylenol murders 1983 – First transistorized air-cooled power supply for induction cap sealing 1985 – Universal coil technology debuted 1992 – Water-cooled, IGBT-based sealer introduced 1997 – Waterless cap sealers introduced (half the size and relatively maintenance free) 2004 – 6 kW system introduced References Further reading Yam, K. L., "Encyclopedia of Packaging Technology", John Wiley & Sons, 2009. External links FDA's regulations concerning tamper-resistant packaging http://www.enerconind.co.uk/ Induction heating Packaging machinery
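To illustrate the kind of calculation such a power analyser performs, the following minimal Python sketch derives true power, apparent power and power factor from synchronously sampled voltage and current waveforms. It is an illustration under idealized assumptions (perfect synchronous sampling over whole cycles); the function names are not from any particular instrument's API.

import numpy as np

def power_metrics(v, i):
    # True power (W), apparent power (VA) and power factor from
    # synchronously sampled voltage (V) and current (A) waveforms
    v = np.asarray(v, dtype=float)
    i = np.asarray(i, dtype=float)
    true_power = np.mean(v * i)  # average instantaneous power
    apparent_power = np.sqrt(np.mean(v**2)) * np.sqrt(np.mean(i**2))  # Vrms * Irms
    return true_power, apparent_power, true_power / apparent_power

def energy_delivered(v, i, sample_rate_hz):
    # Integrate instantaneous power over the weld to estimate energy in joules
    return float(np.sum(np.asarray(v) * np.asarray(i)) / sample_rate_hz)

# Example: 50 Hz mains-like waveforms with a 30 degree current lag,
# sampled at 10 kHz over exactly five cycles.
t = np.arange(0, 0.1, 1e-4)
v = 325 * np.sin(2 * np.pi * 50 * t)
i = 10 * np.sin(2 * np.pi * 50 * t - np.pi / 6)
print(power_metrics(v, i))  # power factor close to cos(30 deg), about 0.866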
wiki
In physics, Planck's law describes the spectral density of electromagnetic radiation emitted by a black body in thermal equilibrium at a given temperature T, when there is no net flow of matter or energy between the body and its environment. At the end of the 19th century, physicists were unable to explain why the observed spectrum of black-body radiation, which by then had been accurately measured, diverged significantly at higher frequencies from that predicted by existing theories. In 1900, German physicist Max Planck heuristically derived a formula for the observed spectrum by assuming that a hypothetical electrically charged oscillator in a cavity that contained black-body radiation could only change its energy in a minimal increment, E = hν, that was proportional to the frequency of its associated electromagnetic wave. This resolved the problem of the ultraviolet catastrophe predicted by classical physics. This discovery was a pioneering insight of modern physics and is of fundamental importance to quantum theory. The law Every physical body spontaneously and continuously emits electromagnetic radiation, and the spectral radiance of a body, B_ν(ν, T), describes the spectral emissive power per unit area, per unit solid angle, per unit frequency for particular radiation frequencies. The relationship given by Planck's radiation law, given below, shows that with increasing temperature, the total radiated energy of a body increases and the peak of the emitted spectrum shifts to shorter wavelengths. According to this, the spectral radiance of a body for frequency ν at absolute temperature T is given by

B_ν(ν, T) = (2hν³/c²) · 1/(e^{hν/(k_B T)} − 1),

where k_B is the Boltzmann constant, h is the Planck constant, and c is the speed of light in the medium, whether material or vacuum. The SI units of B_ν are W·m−2·sr−1·Hz−1. The spectral radiance can also be expressed per unit wavelength instead of per unit frequency. In addition, the law may be expressed in other terms, such as the number of photons emitted at a certain wavelength, or the energy density in a volume of radiation. In the limit of low frequencies (i.e. long wavelengths), Planck's law tends to the Rayleigh–Jeans law, while in the limit of high frequencies (i.e. small wavelengths) it tends to the Wien approximation. Max Planck developed the law in 1900 with only empirically determined constants, and later showed that, expressed as an energy distribution, it is the unique stable distribution for radiation in thermodynamic equilibrium. As an energy distribution, it is one of a family of thermal equilibrium distributions which include the Bose–Einstein distribution, the Fermi–Dirac distribution and the Maxwell–Boltzmann distribution. Black-body radiation A black-body is an idealised object which absorbs and emits all radiation frequencies. Near thermodynamic equilibrium, the emitted radiation is closely described by Planck's law and because of its dependence on temperature, Planck radiation is said to be thermal radiation, such that the higher the temperature of a body the more radiation it emits at every wavelength. Planck radiation has a maximum intensity at a wavelength that depends on the temperature of the body. For example, at room temperature (~300 K), a body emits thermal radiation that is mostly infrared and invisible. At higher temperatures the amount of infrared radiation increases and can be felt as heat, and more visible radiation is emitted so the body glows visibly red. 
At higher temperatures, the body is bright yellow or blue-white and emits significant amounts of short wavelength radiation, including ultraviolet and even x-rays. The surface of the sun (~5,778 K) emits large amounts of both infrared and ultraviolet radiation; its emission is peaked in the visible spectrum. This shift due to temperature is called Wien's displacement law. Planck radiation is the greatest amount of radiation that any body at thermal equilibrium can emit from its surface, whatever its chemical composition or surface structure. The passage of radiation across an interface between media can be characterized by the emissivity of the interface (the ratio of the actual radiance to the theoretical Planck radiance), usually denoted by the symbol ε. It is in general dependent on chemical composition and physical structure, on temperature, on the wavelength, on the angle of passage, and on the polarization. The emissivity of a natural interface is always between ε = 0 and 1. A body that interfaces with another medium which both has ε = 1 and absorbs all the radiation incident upon it is said to be a black body. The surface of a black body can be modelled by a small hole in the wall of a large enclosure which is maintained at a uniform temperature with opaque walls that, at every wavelength, are not perfectly reflective. At equilibrium, the radiation inside this enclosure is described by Planck's law, as is the radiation leaving the small hole. Just as the Maxwell–Boltzmann distribution is the unique maximum entropy energy distribution for a gas of material particles at thermal equilibrium, so is Planck's distribution for a gas of photons. By contrast to a material gas where the masses and number of particles play a role, the spectral radiance, pressure and energy density of a photon gas at thermal equilibrium are entirely determined by the temperature. If the photon gas is not Planckian, the second law of thermodynamics guarantees that interactions (between photons and other particles or even, at sufficiently high temperatures, between the photons themselves) will cause the photon energy distribution to change and approach the Planck distribution. In such an approach to thermodynamic equilibrium, photons are created or annihilated in the right numbers and with the right energies to fill the cavity with a Planck distribution until they reach the equilibrium temperature. It is as if the gas is a mixture of sub-gases, one for every band of wavelengths, and each sub-gas eventually attains the common temperature. The quantity B_ν(ν, T) is the spectral radiance as a function of temperature and frequency. It has units of W·m−2·sr−1·Hz−1 in the SI system. An infinitesimal amount of power

dP = B_ν(ν, T) cos θ dA dΩ dν

is radiated in the direction described by the angle θ from the surface normal from infinitesimal surface area dA into infinitesimal solid angle dΩ in an infinitesimal frequency band of width dν centered on frequency ν. The total power radiated into any solid angle is the integral of B_ν(ν, T) cos θ over those three quantities, and is given by the Stefan–Boltzmann law. The spectral radiance of Planckian radiation from a black body has the same value for every direction and angle of polarization, and so the black body is said to be a Lambertian radiator. Different forms Planck's law can be encountered in several forms depending on the conventions and preferences of different scientific fields. The various forms of the law for spectral radiance are summarized below. 
In terms of frequency and of wavelength, the law for spectral radiance reads

B_ν(ν, T) = (2hν³/c²) · 1/(e^{hν/(k_B T)} − 1)
B_λ(λ, T) = (2hc²/λ⁵) · 1/(e^{hc/(λ k_B T)} − 1),

and in terms of angular frequency ω = 2πν, using the reduced Planck constant ħ = h/2π,

B_ω(ω, T) = (ħω³/(4π³c²)) · 1/(e^{ħω/(k_B T)} − 1).

Forms expressed with h are most often encountered in experimental fields, while those expressed with ħ are most often encountered in theoretical fields. These distributions represent the spectral radiance of blackbodies—the power emitted from the emitting surface, per unit projected area of emitting surface, per unit solid angle, per spectral unit (frequency, wavelength, wavenumber or their angular equivalents). Since the radiance is isotropic (i.e. independent of direction), the power emitted at an angle to the normal is proportional to the projected area, and therefore to the cosine of that angle as per Lambert's cosine law, and is unpolarized. Correspondence between spectral variable forms Different spectral variables require different corresponding forms of expression of the law. In general, one may not convert between the various forms of Planck's law simply by substituting one variable for another, because this would not take into account that the different forms have different units. Wavelength and frequency units are reciprocal. Corresponding forms of expression are related because they express one and the same physical fact: for a particular physical spectral increment, a corresponding particular physical energy increment is radiated. This is so whether it is expressed in terms of an increment of frequency, dν, or, correspondingly, of wavelength, dλ. Introduction of a minus sign can indicate that an increment of frequency corresponds with decrement of wavelength. In order to convert the corresponding forms so that they express the same quantity in the same units we multiply by the spectral increment. Then, for a particular spectral increment, the particular physical energy increment may be written

B_λ(λ, T) dλ = −B_ν(ν(λ), T) dν,

which leads to

B_λ(λ, T) = −(dν/dλ) B_ν(ν(λ), T).

Also, ν(λ) = c/λ, so that dν/dλ = −c/λ². Substitution gives the correspondence between the frequency and wavelength forms, with their different dimensions and units. Consequently,

B_λ(λ, T) = (c/λ²) B_ν(c/λ, T).

Evidently, the location of the peak of the spectral distribution for Planck's law depends on the choice of spectral variable. Nevertheless, in a manner of speaking, this formula means that the shape of the spectral distribution is independent of temperature, according to Wien's displacement law, as detailed below in the sub-section Percentiles of the section Properties. Spectral energy density form Planck's law can also be written in terms of the spectral energy density u_ν by multiplying B_ν by 4π/c:

u_ν(ν, T) = (4π/c) B_ν(ν, T) = (8πhν³/c³) · 1/(e^{hν/(k_B T)} − 1).

These distributions have units of energy per volume per spectral unit. First and second radiation constants In the above variants of Planck's law, the wavelength and wavenumber variants use the terms 2hc² and hc/k_B, which comprise physical constants only. Consequently, these terms can be considered as physical constants themselves, and are therefore referred to as the first radiation constant c1L and the second radiation constant c2, with

c1L = 2hc² and c2 = hc/k_B.

Using the radiation constants, the wavelength variant of Planck's law can be simplified to

L(λ, T) = (c1L/λ⁵) · 1/(e^{c2/(λT)} − 1),

and the wavenumber variant can be simplified correspondingly. L is used here instead of B because it is the SI symbol for spectral radiance. The L in c1L refers to that. This reference is necessary because Planck's law can be reformulated to give spectral radiant exitance M(λ, T) rather than spectral radiance L(λ, T), in which case c1 replaces c1L, with

c1 = 2πhc²,

so that Planck's law for spectral radiant exitance can be written as

M(λ, T) = (c1/λ⁵) · 1/(e^{c2/(λT)} − 1).

As measuring techniques have improved, the General Conference on Weights and Measures has revised its estimate of c2; see Planckian locus § International Temperature Scale for details. 
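As a quick numerical illustration of the formulas above, the following minimal Python sketch evaluates the frequency and wavelength forms of the law and checks the correspondence B_λ(λ, T) = (c/λ²) B_ν(c/λ, T). The constants are rounded CODATA values; the script is illustrative only.

import math

h = 6.62607015e-34   # Planck constant, J·s
c = 2.99792458e8     # speed of light in vacuum, m/s
kB = 1.380649e-23    # Boltzmann constant, J/K

def B_nu(nu, T):
    # Spectral radiance per unit frequency, W·m−2·sr−1·Hz−1
    return (2 * h * nu**3 / c**2) / math.expm1(h * nu / (kB * T))

def B_lam(lam, T):
    # Spectral radiance per unit wavelength, W·m−2·sr−1·m−1
    return (2 * h * c**2 / lam**5) / math.expm1(h * c / (lam * kB * T))

T = 5778.0      # approximate solar surface temperature, K
lam = 500e-9    # green light, m

# The two forms agree through the Jacobian factor c/λ²:
print(B_lam(lam, T))                  # ≈ 2.6e13 W·m−3·sr−1
print(B_nu(c / lam, T) * c / lam**2)  # same value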
Physics Planck's law describes the unique and characteristic spectral distribution for electromagnetic radiation in thermodynamic equilibrium, when there is no net flow of matter or energy. Its physics is most easily understood by considering the radiation in a cavity with rigid opaque walls. Motion of the walls can affect the radiation. If the walls are not opaque, then the thermodynamic equilibrium is not isolated. It is of interest to explain how the thermodynamic equilibrium is attained. There are two main cases: (a) when the approach to thermodynamic equilibrium is in the presence of matter, when the walls of the cavity are imperfectly reflective for every wavelength or when the walls are perfectly reflective while the cavity contains a small black body (this was the main case considered by Planck); or (b) when the approach to equilibrium is in the absence of matter, when the walls are perfectly reflective for all wavelengths and the cavity contains no matter. For matter not enclosed in such a cavity, thermal radiation can be approximately explained by appropriate use of Planck's law. Classical physics led, via the equipartition theorem, to the ultraviolet catastrophe, a prediction that the total blackbody radiation intensity was infinite. If supplemented by the classically unjustifiable assumption that for some reason the radiation is finite, classical thermodynamics provides an account of some aspects of the Planck distribution, such as the Stefan–Boltzmann law, and the Wien displacement law. For the case of the presence of matter, quantum mechanics provides a good account, as found below in the section headed Einstein coefficients. This was the case considered by Einstein, and is nowadays used for quantum optics. For the case of the absence of matter, quantum field theory is necessary, because non-relativistic quantum mechanics with fixed particle numbers does not provide a sufficient account. Photons Quantum theoretical explanation of Planck's law views the radiation as a gas of massless, uncharged, bosonic particles, namely photons, in thermodynamic equilibrium. Photons are viewed as the carriers of the electromagnetic interaction between electrically charged elementary particles. Photon numbers are not conserved. Photons are created or annihilated in the right numbers and with the right energies to fill the cavity with the Planck distribution. For a photon gas in thermodynamic equilibrium, the internal energy density is entirely determined by the temperature; moreover, the pressure is entirely determined by the internal energy density. This is unlike the case of thermodynamic equilibrium for material gases, for which the internal energy is determined not only by the temperature, but also, independently, by the respective numbers of the different molecules, and independently again, by the specific characteristics of the different molecules. For different material gases at given temperature, the pressure and internal energy density can vary independently, because different molecules can carry independently different excitation energies. Planck's law arises as a limit of the Bose–Einstein distribution, the energy distribution describing non-interactive bosons in thermodynamic equilibrium. In the case of massless bosons such as photons and gluons, the chemical potential is zero and the Bose–Einstein distribution reduces to the Planck distribution. 
There is another fundamental equilibrium energy distribution: the Fermi–Dirac distribution, which describes fermions, such as electrons, in thermal equilibrium. The two distributions differ because multiple bosons can occupy the same quantum state, while multiple fermions cannot. At low densities, the number of available quantum states per particle is large, and this difference becomes irrelevant. In the low density limit, the Bose–Einstein and the Fermi–Dirac distribution each reduce to the Maxwell–Boltzmann distribution. Kirchhoff's law of thermal radiation Kirchhoff's law of thermal radiation is a succinct account of a complicated physical situation. The following is an introductory sketch of that situation, and is very far from being a rigorous physical argument. The purpose here is only to summarize the main physical factors in the situation, and the main conclusions. Spectral dependence of thermal radiation There is a difference between conductive heat transfer and radiative heat transfer. Radiative heat transfer can be filtered to pass only a definite band of radiative frequencies. It is generally known that the hotter a body becomes, the more heat it radiates at every frequency. In a cavity in an opaque body with rigid walls that are not perfectly reflective at any frequency, in thermodynamic equilibrium, there is only one temperature, and it must be shared in common by the radiation of every frequency. One may imagine two such cavities, each in its own isolated radiative and thermodynamic equilibrium. One may imagine an optical device that allows radiative heat transfer between the two cavities, filtered to pass only a definite band of radiative frequencies. If the values of the spectral radiances of the radiations in the cavities differ in that frequency band, heat may be expected to pass from the hotter to the colder. One might propose to use such a filtered transfer of heat in such a band to drive a heat engine. If the two bodies are at the same temperature, the second law of thermodynamics does not allow the heat engine to work. It may be inferred that for a temperature common to the two bodies, the values of the spectral radiances in the pass-band must also be common. This must hold for every frequency band. This became clear to Balfour Stewart and later to Kirchhoff. Balfour Stewart found experimentally that of all surfaces, one of lamp-black emitted the greatest amount of thermal radiation for every quality of radiation, judged by various filters. Thinking theoretically, Kirchhoff went a little further, and pointed out that this implied that the spectral radiance, as a function of radiative frequency, of any such cavity in thermodynamic equilibrium must be a unique universal function of temperature. He postulated an ideal black body that interfaced with its surrounds in just such a way as to absorb all the radiation that falls on it. By the Helmholtz reciprocity principle, radiation from the interior of such a body would pass unimpeded, directly to its surrounds without reflection at the interface. In thermodynamic equilibrium, the thermal radiation emitted from such a body would have that unique universal spectral radiance as a function of temperature. This insight is the root of Kirchhoff's law of thermal radiation. Relation between absorptivity and emissivity One may imagine a small homogeneous spherical material body labeled X at a temperature T_X, lying in a radiation field within a large cavity with walls of material labeled Y at a temperature T_Y. 
The body X emits its own thermal radiation. At a particular frequency ν, the radiation emitted from a particular cross-section through the centre of X in one sense in a direction normal to that cross-section may be denoted I_{ν,X}(T_X), characteristically for the material of X. At that frequency ν, the radiative power from the walls into that cross-section in the opposite sense in that direction may be denoted I_{ν,Y}(T_Y), for the wall temperature T_Y. For the material of X, defining the absorptivity α_{ν,X,Y}(T_X, T_Y) as the fraction of that incident radiation absorbed by X, that incident energy is absorbed at a rate α_{ν,X,Y}(T_X, T_Y) I_{ν,Y}(T_Y). The rate q(ν, T_X, T_Y) of accumulation of energy in one sense into the cross-section of the body can then be expressed

q(ν, T_X, T_Y) = α_{ν,X,Y}(T_X, T_Y) I_{ν,Y}(T_Y) − I_{ν,X}(T_X).

Kirchhoff's seminal insight, mentioned just above, was that, at thermodynamic equilibrium at temperature T, there exists a unique universal radiative distribution, nowadays denoted B_ν(T), that is independent of the chemical characteristics of the materials X and Y, that leads to a very valuable understanding of the radiative exchange equilibrium of any body at all, as follows. When there is thermodynamic equilibrium at temperature T, the cavity radiation from the walls has that unique universal value, so that I_{ν,Y}(T_Y) = B_ν(T). Further, one may define the emissivity ε_{ν,X}(T_X) of the material of the body X just so that at thermodynamic equilibrium at temperature T_X = T, one has I_{ν,X}(T_X) = I_{ν,X}(T) = ε_{ν,X}(T) B_ν(T). When thermal equilibrium prevails at temperature T = T_X = T_Y, the rate of accumulation of energy vanishes so that q(ν, T_X, T_Y) = 0. It follows that in thermodynamic equilibrium, when T = T_X = T_Y,

0 = α_{ν,X,Y}(T, T) B_ν(T) − ε_{ν,X}(T) B_ν(T).

Kirchhoff pointed out that it follows that in thermodynamic equilibrium, when T = T_X = T_Y,

α_{ν,X,Y}(T, T) = ε_{ν,X}(T).

Introducing the special notation α_{ν,X}(T) for the absorptivity of material X at thermodynamic equilibrium at temperature T (justified by a discovery of Einstein, as indicated below), one further has the equality

α_{ν,X}(T) = ε_{ν,X}(T)

at thermodynamic equilibrium. The equality of absorptivity and emissivity here demonstrated is specific for thermodynamic equilibrium at temperature T and is in general not to be expected to hold when conditions of thermodynamic equilibrium do not hold. The emissivity and absorptivity are each separately properties of the molecules of the material but they depend differently upon the distributions of states of molecular excitation on the occasion, because of a phenomenon known as "stimulated emission", that was discovered by Einstein. On occasions when the material is in thermodynamic equilibrium or in a state known as local thermodynamic equilibrium, the emissivity and absorptivity become equal. Very strong incident radiation or other factors can disrupt thermodynamic equilibrium or local thermodynamic equilibrium. Local thermodynamic equilibrium in a gas means that molecular collisions far outweigh light emission and absorption in determining the distributions of states of molecular excitation. Kirchhoff pointed out that he did not know the precise character of B_ν(T), but he thought it important that it should be found out. Four decades after Kirchhoff's insight of the general principles of its existence and character, Planck's contribution was to determine the precise mathematical expression of that equilibrium distribution B_ν(T). Black body In physics, one considers an ideal black body, here labeled B, defined as one that completely absorbs all of the electromagnetic radiation falling upon it at every frequency ν (hence the term "black"). 
According to Kirchhoff's law of thermal radiation, this entails that, for every frequency ν, at thermodynamic equilibrium at temperature T, one has α_{ν,B}(T) = ε_{ν,B}(T) = 1, so that the thermal radiation from a black body is always equal to the full amount specified by Planck's law. No physical body can emit thermal radiation that exceeds that of a black body, since if it were in equilibrium with a radiation field, it would be emitting more energy than was incident upon it. Though perfectly black materials do not exist, in practice a black surface can be accurately approximated. As to its material interior, a body of condensed matter, liquid, solid, or plasma, with a definite interface with its surroundings, is completely black to radiation if it is completely opaque. That means that it absorbs all of the radiation that penetrates the interface of the body with its surroundings, and enters the body. This is not too difficult to achieve in practice. On the other hand, a perfectly black interface is not found in nature. A perfectly black interface reflects no radiation, but transmits all that falls on it, from either side. The best practical way to make an effectively black interface is to simulate an 'interface' by a small hole in the wall of a large cavity in a completely opaque rigid body of material that does not reflect perfectly at any frequency, with its walls at a controlled temperature. Beyond these requirements, the component material of the walls is unrestricted. Radiation entering the hole has almost no possibility of escaping the cavity without being absorbed by multiple impacts with its walls. Lambert's cosine law As explained by Planck, a radiating body has an interior consisting of matter, and an interface with its contiguous neighbouring material medium, which is usually the medium from within which the radiation from the surface of the body is observed. The interface is not composed of physical matter but is a theoretical conception, a mathematical two-dimensional surface, a joint property of the two contiguous media, strictly speaking belonging to neither separately. Such an interface can neither absorb nor emit, because it is not composed of physical matter; but it is the site of reflection and transmission of radiation, because it is a surface of discontinuity of optical properties. The reflection and transmission of radiation at the interface obey the Stokes–Helmholtz reciprocity principle. At any point in the interior of a black body located inside a cavity in thermodynamic equilibrium at temperature T the radiation is homogeneous, isotropic and unpolarized. A black body absorbs all and reflects none of the electromagnetic radiation incident upon it. According to the Helmholtz reciprocity principle, radiation from the interior of a black body is not reflected at its surface, but is fully transmitted to its exterior. Because of the isotropy of the radiation in the body's interior, the spectral radiance of radiation transmitted from its interior to its exterior through its surface is independent of direction. This is expressed by saying that radiation from the surface of a black body in thermodynamic equilibrium obeys Lambert's cosine law. 
This means that the spectral flux dΦ(dA, θ, dΩ, dν) from a given infinitesimal element of area dA of the actual emitting surface of the black body, detected from a given direction that makes an angle θ with the normal to the actual emitting surface at dA, into an element of solid angle of detection dΩ centred on the direction indicated by θ, in an element of frequency bandwidth dν, can be represented as

dΦ(dA, θ, dΩ, dν) = L⁰(dA, dν) cos θ dA dΩ dν,

where L⁰(dA, dν) denotes the flux, per unit area per unit frequency per unit solid angle, that area dA would show if it were measured in its normal direction θ = 0. The factor cos θ is present because the area to which the spectral radiance refers directly is the projection, of the actual emitting surface area, onto a plane perpendicular to the direction indicated by θ. This is the reason for the name cosine law. Taking into account the independence of direction of the spectral radiance of radiation from the surface of a black body in thermodynamic equilibrium, one has L⁰(dA, dν) = B_ν(ν, T) and so

dΦ(dA, θ, dΩ, dν) = B_ν(ν, T) cos θ dA dΩ dν.

Thus Lambert's cosine law expresses the independence of direction of the spectral radiance of the surface of a black body in thermodynamic equilibrium. Stefan–Boltzmann law The total power emitted per unit area at the surface of a black body (P) may be found by integrating the black body spectral flux found from Lambert's law over all frequencies, and over the solid angles corresponding to a hemisphere above the surface. The infinitesimal solid angle can be expressed in spherical polar coordinates:

dΩ = sin θ dθ dφ.

So that:

P = ∫₀^∞ dν ∫₀^{π/2} dθ ∫₀^{2π} dφ B_ν(ν, T) cos θ sin θ = σT⁴,

where σ = 2π⁵k_B⁴/(15h³c²) ≈ 5.670×10⁻⁸ W·m−2·K−4 is known as the Stefan–Boltzmann constant. Radiative transfer The equation of radiative transfer describes the way in which radiation is affected as it travels through a material medium. For the special case in which the material medium is in thermodynamic equilibrium in the neighborhood of a point in the medium, Planck's law is of special importance. For simplicity, we can consider the linear steady state, without scattering. The equation of radiative transfer states that for a beam of light going through a small distance ds, energy is conserved: the change in the (spectral) radiance of that beam (I_ν) is equal to the amount removed by the material medium plus the amount gained from the material medium. If the radiation field is in equilibrium with the material medium, these two contributions will be equal. The material medium will have a certain emission coefficient and absorption coefficient. The absorption coefficient is the fractional change in the intensity of the light beam as it travels the distance ds, and has units of length−1. It is composed of two parts, the decrease due to absorption and the increase due to stimulated emission. Stimulated emission is emission by the material body which is caused by and is proportional to the incoming radiation. It is included in the absorption term because, like absorption, it is proportional to the intensity of the incoming radiation. Since the amount of absorption will generally vary linearly as the density ρ of the material, we may define a "mass absorption coefficient" κ_ν which is a property of the material itself. The change in intensity of a light beam due to absorption as it traverses a small distance ds will then be

dI_ν = −κ_ν ρ I_ν ds.

The "mass emission coefficient" j_ν is equal to the radiance per unit volume of a small volume element divided by its mass (since, as for the mass absorption coefficient, the emission is proportional to the emitting mass) and has units of power⋅solid angle−1⋅frequency−1⋅density−1. Like the mass absorption coefficient, it too is a property of the material itself. 
The change in a light beam as it traverses a small distance ds will then be

dI_ν = j_ν ρ ds.

The equation of radiative transfer will then be the sum of these two contributions:

dI_ν/ds = j_ν ρ − κ_ν ρ I_ν.

If the radiation field is in equilibrium with the material medium, then the radiation will be homogeneous (independent of position) so that dI_ν/ds = 0 and:

j_ν = κ_ν B_ν(ν, T),

which is another statement of Kirchhoff's law, relating two material properties of the medium, and which yields the radiative transfer equation at a point around which the medium is in thermodynamic equilibrium:

dI_ν/ds = κ_ν ρ (B_ν(ν, T) − I_ν).

Einstein coefficients The principle of detailed balance states that, at thermodynamic equilibrium, each elementary process is equilibrated by its reverse process. In 1916, Albert Einstein applied this principle on an atomic level to the case of an atom radiating and absorbing radiation due to transitions between two particular energy levels, giving a deeper insight into the equation of radiative transfer and Kirchhoff's law for this type of radiation. If level 1 is the lower energy level with energy E₁, and level 2 is the upper energy level with energy E₂, then the frequency ν of the radiation radiated or absorbed will be determined by Bohr's frequency condition:

hν = E₂ − E₁.

If n₁ and n₂ are the number densities of the atom in states 1 and 2 respectively, then the rate of change of these densities in time will be due to three processes: spontaneous emission (at a rate A₂₁n₂), stimulated emission (at a rate B₂₁n₂u_ν) and photo-absorption (at a rate B₁₂n₁u_ν), where u_ν is the spectral energy density of the radiation field. The three parameters A₂₁, B₂₁ and B₁₂, known as the Einstein coefficients, are associated with the photon frequency ν produced by the transition between two energy levels (states). As a result, each line in a spectrum has its own set of associated coefficients. When the atoms and the radiation field are in equilibrium, the radiance will be given by Planck's law and, by the principle of detailed balance, the sum of these rates must be zero:

0 = A₂₁n₂ + B₂₁n₂u_ν − B₁₂n₁u_ν.

Since the atoms are also in equilibrium, the populations of the two levels are related by the Boltzmann factor:

n₂/n₁ = (g₂/g₁) e^{−hν/(k_B T)},

where g₁ and g₂ are the multiplicities of the respective energy levels. Combining the above two equations with the requirement that they be valid at any temperature yields two relationships between the Einstein coefficients:

A₂₁/B₂₁ = 8πhν³/c³ and B₂₁/B₁₂ = g₁/g₂,

so that knowledge of one coefficient will yield the other two. For the case of isotropic absorption and emission, the emission coefficient (j_ν) and absorption coefficient (κ_ν) defined in the radiative transfer section above, can be expressed in terms of the Einstein coefficients. The relationships between the Einstein coefficients will yield the expression of Kirchhoff's law expressed in the Radiative transfer section above, namely that

j_ν = κ_ν B_ν(ν, T).

These coefficients apply to both atoms and molecules. Properties Peaks The distributions expressed per unit frequency or wavenumber peak at a photon energy of

E = [3 + W(−3e⁻³)] k_B T ≈ 2.821 k_B T,

where W is the Lambert W function and e is Euler's number. The distributions expressed per unit wavelength, however, peak at a different energy

E = [5 + W(−5e⁻⁵)] k_B T ≈ 4.965 k_B T.

The reason for this is that, as mentioned above, one cannot go from (for example) B_ν to B_λ simply by substituting ν by c/λ. In addition, one must also multiply the result of the substitution by |dν/dλ| = c/λ². This factor shifts the peak of the distribution to higher energies. These peaks are the mode energy of a photon, when binned using equal-size bins of frequency or wavelength, respectively. Dividing hc by these energy expressions gives the wavelength of the peak. The spectral radiance at these peaks scales as T³ in the frequency form and as T⁵ in the wavelength form. Meanwhile, the average energy of a photon from a blackbody is

E = [π⁴/(30 ζ(3))] k_B T ≈ 2.701 k_B T,

where ζ is the Riemann zeta function. Approximations In the limit of low frequencies (i.e. 
long wavelengths), Planck's law becomes the Rayleigh–Jeans law or The radiance increases as the square of the frequency, illustrating the ultraviolet catastrophe. In the limit of high frequencies (i.e. small wavelengths) Planck's law tends to the Wien approximation: or Both approximations were known to Planck before he developed his law. He was led by these two approximations to develop a law which incorporated both limits, which ultimately became Planck's law. Percentiles There is no closed-form expression for the integral of the Planck formula between two wavelengths, but there are infinite sum expressions. where This series converges for all positive values of , but only slowly at low , where another series works better, based on the generating function of the Bernoulli numbers: This series does not converge for because of poles at The integral over all frequencies is where is the Riemann zeta function. Similar series exist for the number of photons per unit area per steradian: The integral over all frequencies is now Wien's displacement law in its stronger form states that the shape of Planck's law is independent of temperature. It is therefore possible to list the percentile points of the total radiation as well as the peaks for wavelength and frequency, in a form which gives the wavelength when divided by temperature . The second column of the following table lists the corresponding values of , that is, those values of for which the wavelength is micrometers at the radiance percentile point given by the corresponding entry in the first column. That is, 0.01% of the radiation is at a wavelength below  µm, 20% below , etc. The wavelength and frequency peaks are in bold and occur at 25.0% and 64.6% respectively. The 41.8% point is the wavelength-frequency-neutral peak (i.e. the peak in power per unit change in logarithm of wavelength or frequency). These are the points at which the respective Planck-law functions , and , respectively, divided by attain their maxima. The much smaller gap in ratio of wavelengths between 0.1% and 0.01% (1110 is 22% more than 910) than between 99.9% and 99.99% (113374 is 120% more than 51613) reflects the exponential decay of energy at short wavelengths (left end) and polynomial decay at long. Which peak to use depends on the application. The conventional choice is the wavelength peak at 25.0% given by Wien's displacement law in its weak form. For some purposes the median or 50% point dividing the total radiation into two-halves may be more suitable. The latter is closer to the frequency peak than to the wavelength peak because the radiance drops exponentially at short wavelengths and only polynomially at long. The neutral peak occurs at a shorter wavelength than the median for the same reason. Solar radiation can be compared to black-body radiation at about 5778 K (but see graph). The table on the right shows how the radiation of a black body at this temperature is partitioned, and also how sunlight is partitioned for comparison. Also for comparison a planet modeled as a black body is shown, radiating at a nominal 288 K (15 °C) as a representative value of the Earth's highly variable temperature. Its wavelengths are more than twenty times that of the Sun, tabulated in the third column in micrometers (thousands of nanometers). That is, only 1% of the Sun's radiation is at wavelengths shorter than 296 nm, and only 1% at longer than 3728 nm. Expressed in micrometers this puts 98% of the Sun's radiation in the range from 0.296 to 3.728 µm. 
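The peak formulas, the low- and high-frequency limits, the total integral, and the 98% solar band quoted in the passages above lend themselves to a quick numerical check. The following is a minimal sketch, not part of the original article, assuming Python with NumPy and SciPy available; all physical constants are taken from scipy.constants.

```python
# Numerical checks of Planck-law properties quoted in the text above:
# the Lambert-W peak conditions, the Rayleigh-Jeans and Wien limits,
# the total integral, and the 98% band for a 5778 K black body.
import numpy as np
from scipy.constants import h, c, k as kB, sigma  # sigma: Stefan-Boltzmann
from scipy.integrate import quad
from scipy.special import lambertw

def planck_nu(nu, T):
    """Spectral radiance B_nu in W m^-2 sr^-1 Hz^-1."""
    return (2 * h * nu**3 / c**2) / np.expm1(h * nu / (kB * T))

def planck_lam(lam, T):
    """Spectral radiance B_lambda in W m^-2 sr^-1 m^-1."""
    return (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * kB * T))

T = 5778.0  # K, the solar value used in the text

# Peak conditions: x = h*nu/(kB*T) at the maximum of each distribution.
x_nu = 3 + lambertw(-3 * np.exp(-3)).real    # ~2.821 for B_nu
x_lam = 5 + lambertw(-5 * np.exp(-5)).real   # ~4.965 for B_lambda
lam_peak = h * c / (x_lam * kB * T)
print(f"B_lambda peak: {lam_peak * 1e9:.0f} nm; "
      f"Wien constant b = {lam_peak * T:.4e} m K")  # ~2.898e-3 m K

# Low- and high-frequency limits (Rayleigh-Jeans and Wien).
for nu in (1e11, 1e15):
    rj = 2 * nu**2 * kB * T / c**2
    wien = (2 * h * nu**3 / c**2) * np.exp(-h * nu / (kB * T))
    print(f"nu={nu:.0e}: Planck={planck_nu(nu, T):.3e} "
          f"RJ={rj:.3e} Wien={wien:.3e}")

# The integral over all wavelengths equals sigma*T^4/pi (total radiance),
# and 98% of it lies between 0.296 and 3.728 micrometres at 5778 K.
total = sigma * T**4 / np.pi
band, _ = quad(planck_lam, 296e-9, 3728e-9, args=(T,))
print(f"band fraction = {band / total:.3f}")  # ~0.98
```

Running the same band integral with limits 5.03e-6 and 79.5e-6 metres at 288 K reproduces, to within rounding, the planetary figure discussed next.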
The corresponding 98% of energy radiated from a 288 K planet is from 5.03 to 79.5 µm, well above the range of solar radiation (or below if expressed in terms of frequencies instead of wavelengths ). A consequence of this more-than-order-of-magnitude difference in wavelength between solar and planetary radiation is that filters designed to pass one and block the other are easy to construct. For example, windows fabricated of ordinary glass or transparent plastic pass at least 80% of the incoming 5778 K solar radiation, which is below 1.2 µm in wavelength, while blocking over 99% of the outgoing 288 K thermal radiation from 5 µm upwards, wavelengths at which most kinds of glass and plastic of construction-grade thickness are effectively opaque. The Sun's radiation is that arriving at the top of the atmosphere (TOA). As can be read from the table, radiation below 400 nm, or ultraviolet, is about 8%, while that above 700 nm, or infrared, starts at about the 48% point and so accounts for 52% of the total. Hence only 40% of the TOA insolation is visible to the human eye. The atmosphere shifts these percentages substantially in favor of visible light as it absorbs most of the ultraviolet and significant amounts of infrared. Derivation Consider a cube of side with conducting walls filled with electromagnetic radiation in thermal equilibrium at temperature . If there is a small hole in one of the walls, the radiation emitted from the hole will be characteristic of a perfect black body. We will first calculate the spectral energy density within the cavity and then determine the spectral radiance of the emitted radiation. At the walls of the cube, the parallel component of the electric field and the orthogonal component of the magnetic field must vanish. Analogous to the wave function of a particle in a box, one finds that the fields are superpositions of periodic functions. The three wavelengths , , and , in the three directions orthogonal to the walls can be: where the are positive integers. For each set of integers there are two linearly independent solutions (known as modes). The two modes for each set of these correspond to the two polarization states of the photon which has a spin of 1. According to quantum theory, the total energy of a mode is given by: The number can be interpreted as the number of photons in the mode. For the energy of the mode is not zero. This vacuum energy of the electromagnetic field is responsible for the Casimir effect. In the following we will calculate the internal energy of the box at absolute temperature . According to statistical mechanics, the equilibrium probability distribution over the energy levels of a particular mode is given by: Here The denominator , is the partition function of a single mode and makes properly normalized: Here we have implicitly defined which is the energy of a single photon. As explained here, the average energy in a mode can be expressed in terms of the partition function: This formula, apart from the first vacuum energy term, is a special case of the general formula for particles obeying Bose–Einstein statistics. Since there is no restriction on the total number of photons, the chemical potential is zero. If we measure the energy relative to the ground state, the total energy in the box follows by summing over all allowed single photon states. This can be done exactly in the thermodynamic limit as approaches infinity. In this limit, becomes continuous and we can then integrate over this parameter. 
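As an aside on the average-energy step just described, the following minimal sketch (not from the article; same Python/SciPy assumptions as the previous snippet) evaluates the Bose–Einstein mean occupation and mean energy of a single cavity mode, showing the crossover from the equipartition value $k_\mathrm{B}T$ at low frequency to exponential suppression at high frequency.

```python
# Mean photon number and mean energy of a single cavity mode of
# frequency nu at temperature T, from the Bose-Einstein distribution
# (the vacuum-energy term is dropped, as in the text).
import numpy as np
from scipy.constants import h, k as kB

def mean_occupation(nu, T):
    """<n> = 1 / (exp(h*nu/(kB*T)) - 1)."""
    return 1.0 / np.expm1(h * nu / (kB * T))

T = 300.0  # K, an arbitrary illustrative temperature
for nu in (1e10, 1e12, 1e14):
    n_avg = mean_occupation(nu, T)
    e_avg = h * nu * n_avg
    # At low frequency e_avg -> kB*T (equipartition); at high frequency
    # it is exponentially suppressed, avoiding the ultraviolet catastrophe.
    print(f"nu={nu:.0e} Hz: <n>={n_avg:.3e}, <E>/kBT={e_avg / (kB * T):.3e}")
```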
To calculate the energy in the box in this way, we need to evaluate how many photon states there are in a given energy range. If we write the total number of single photon states with energies between and as , where is the density of states (which is evaluated below), then we can write: To calculate the density of states we rewrite equation () as follows: where is the norm of the vector : For every vector with integer components larger than or equal to zero, there are two photon states. This means that the number of photon states in a certain region of -space is twice the volume of that region. An energy range of corresponds to shell of thickness in -space. Because the components of have to be positive, this shell spans an octant of a sphere. The number of photon states , in an energy range , is thus given by: Inserting this in Eq. () gives: From this equation one can derive the spectral energy density as a function of frequency and as a function of wavelength : where And: where This is also a spectral energy density function with units of energy per unit wavelength per unit volume. Integrals of this type for Bose and Fermi gases can be expressed in terms of polylogarithms. In this case, however, it is possible to calculate the integral in closed form using only elementary functions. Substituting in Eq. (), makes the integration variable dimensionless giving: where is a Bose–Einstein integral given by: The total electromagnetic energy inside the box is thus given by: where is the volume of the box. The combination has the value . This is not the Stefan–Boltzmann law (which provides the total energy radiated by a black body per unit surface area per unit time), but it can be written more compactly using the Stefan–Boltzmann constant , giving The constant is sometimes called the radiation constant. Since the radiation is the same in all directions, and propagates at the speed of light (), the spectral radiance of radiation exiting the small hole is which yields It can be converted to an expression for in wavelength units by substituting by and evaluating Dimensional analysis shows that the unit of steradians, shown in the denominator of the right hand side of the equation above, is generated in and carried through the derivation but does not appear in any of the dimensions for any element on the left-hand-side of the equation. This derivation is based on . History Balfour Stewart In 1858, Balfour Stewart described his experiments on the thermal radiative emissive and absorptive powers of polished plates of various substances, compared with the powers of lamp-black surfaces, at the same temperature. Stewart chose lamp-black surfaces as his reference because of various previous experimental findings, especially those of Pierre Prevost and of John Leslie. He wrote "Lamp-black, which absorbs all the rays that fall upon it, and therefore possesses the greatest possible absorbing power, will possess also the greatest possible radiating power." Stewart measured radiated power with a thermo-pile and sensitive galvanometer read with a microscope. He was concerned with selective thermal radiation, which he investigated with plates of substances that radiated and absorbed selectively for different qualities of radiation rather than maximally for all qualities of radiation. He discussed the experiments in terms of rays which could be reflected and refracted, and which obeyed the Helmholtz reciprocity principle (though he did not use an eponym for it). 
He did not in this paper mention that the qualities of the rays might be described by their wavelengths, nor did he use spectrally resolving apparatus such as prisms or diffraction gratings. His work was quantitative within these constraints. He made his measurements in a room temperature environment, and quickly so as to catch his bodies in a condition near the thermal equilibrium in which they had been prepared by heating to equilibrium with boiling water. His measurements confirmed that substances that emit and absorb selectively respect the principle of selective equality of emission and absorption at thermal equilibrium. Stewart offered a theoretical proof that this should be the case separately for every selected quality of thermal radiation, but his mathematics was not rigorously valid. According to historian D. M. Siegel: "He was not a practitioner of the more sophisticated techniques of nineteenth-century mathematical physics; he did not even make use of the functional notation in dealing with spectral distributions." He made no mention of thermodynamics in this paper, though he did refer to conservation of vis viva. He proposed that his measurements implied that radiation was both absorbed and emitted by particles of matter throughout depths of the media in which it propagated. He applied the Helmholtz reciprocity principle to account for the material interface processes as distinct from the processes in the interior material. He concluded that his experiments showed that, in the interior of an enclosure in thermal equilibrium, the radiant heat, reflected and emitted combined, leaving any part of the surface, regardless of its substance, was the same as would have left that same portion of the surface if it had been composed of lamp-black. He did not mention the possibility of ideally perfectly reflective walls; in particular he noted that highly polished real physical metals absorb very slightly. Gustav Kirchhoff In 1859, not knowing of Stewart's work, Gustav Robert Kirchhoff reported the coincidence of the wavelengths of spectrally resolved lines of absorption and of emission of visible light. Importantly for thermal physics, he also observed that bright lines or dark lines were apparent depending on the temperature difference between emitter and absorber. Kirchhoff then went on to consider bodies that emit and absorb heat radiation, in an opaque enclosure or cavity, in equilibrium at temperature . Here is used a notation different from Kirchhoff's. Here, the emitting power denotes a dimensioned quantity, the total radiation emitted by a body labeled by index at temperature . The total absorption ratio of that body is dimensionless, the ratio of absorbed to incident radiation in the cavity at temperature . (In contrast with Balfour Stewart's, Kirchhoff's definition of his absorption ratio did not refer in particular to a lamp-black surface as the source of the incident radiation.) Thus the ratio of emitting power to absorption ratio is a dimensioned quantity, with the dimensions of emitting power, because is dimensionless. Also here the wavelength-specific emitting power of the body at temperature is denoted by and the wavelength-specific absorption ratio by . Again, the ratio of emitting power to absorption ratio is a dimensioned quantity, with the dimensions of emitting power. 
In a second report made in 1859, Kirchhoff announced a new general principle or law for which he offered a theoretical and mathematical proof, though he did not offer quantitative measurements of radiation powers. His theoretical proof was and still is considered by some writers to be invalid. His principle, however, has endured: it was that for heat rays of the same wavelength, in equilibrium at a given temperature, the wavelength-specific ratio of emitting power to absorption ratio has one and the same common value for all bodies that emit and absorb at that wavelength. In symbols, the law stated that the wavelength-specific ratio has one and the same value for all bodies, that is for all values of index . In this report there was no mention of black bodies. In 1860, still not knowing of Stewart's measurements for selected qualities of radiation, Kirchhoff pointed out that it was long established experimentally that for total heat radiation, of unselected quality, emitted and absorbed by a body in equilibrium, the dimensioned total radiation ratio , has one and the same value common to all bodies, that is, for every value of the material index . Again without measurements of radiative powers or other new experimental data, Kirchhoff then offered a fresh theoretical proof of his new principle of the universality of the value of the wavelength-specific ratio at thermal equilibrium. His fresh theoretical proof was and still is considered by some writers to be invalid. But more importantly, it relied on a new theoretical postulate of "perfectly black bodies", which is the reason why one speaks of Kirchhoff's law. Such black bodies showed complete absorption in their infinitely thin most superficial surface. They correspond to Balfour Stewart's reference bodies, with internal radiation, coated with lamp-black. They were not the more realistic perfectly black bodies later considered by Planck. Planck's black bodies radiated and absorbed only by the material in their interiors; their interfaces with contiguous media were only mathematical surfaces, capable neither of absorption nor emission, but only of reflecting and transmitting with refraction. Kirchhoff's proof considered an arbitrary non-ideal body labeled as well as various perfect black bodies labeled . It required that the bodies be kept in a cavity in thermal equilibrium at temperature . His proof intended to show that the ratio was independent of the nature of the non-ideal body, however partly transparent or partly reflective it was. His proof first argued that for wavelength and at temperature , at thermal equilibrium, all perfectly black bodies of the same size and shape have the one and the same common value of emissive power , with the dimensions of power. His proof noted that the dimensionless wavelength-specific absorption ratio of a perfectly black body is by definition exactly 1. Then for a perfectly black body, the wavelength-specific ratio of emissive power to absorption ratio is again just , with the dimensions of power. Kirchhoff considered, successively, thermal equilibrium with the arbitrary non-ideal body, and with a perfectly black body of the same size and shape, in place in his cavity in equilibrium at temperature . He argued that the flows of heat radiation must be the same in each case. 
Thus he argued that at thermal equilibrium the ratio was equal to , which may now be denoted , a continuous function, dependent only on at fixed temperature , and an increasing function of at fixed wavelength , at low temperatures vanishing for visible but not for longer wavelengths, with positive values for visible wavelengths at higher temperatures, which does not depend on the nature of the arbitrary non-ideal body. (Geometrical factors, taken into detailed account by Kirchhoff, have been ignored in the foregoing.) Thus Kirchhoff's law of thermal radiation can be stated: For any material at all, radiating and absorbing in thermodynamic equilibrium at any given temperature , for every wavelength , the ratio of emissive power to absorptive ratio has one universal value, which is characteristic of a perfect black body, and is an emissive power which we here represent by . (For our notation , Kirchhoff's original notation was simply .) Kirchhoff announced that the determination of the function was a problem of the highest importance, though he recognized that there would be experimental difficulties to be overcome. He supposed that like other functions that do not depend on the properties of individual bodies, it would be a simple function. That function has occasionally been called 'Kirchhoff's (emission, universal) function', though its precise mathematical form would not be known for another forty years, till it was discovered by Planck in 1900. The theoretical proof for Kirchhoff's universality principle was worked on and debated by various physicists over the same time, and later. Kirchhoff stated later in 1860 that his theoretical proof was better than Balfour Stewart's, and in some respects it was so. Kirchhoff's 1860 paper did not mention the second law of thermodynamics, and of course did not mention the concept of entropy which had not at that time been established. In a more considered account in a book in 1862, Kirchhoff mentioned the connection of his law with "Carnot's principle", which is a form of the second law. According to Helge Kragh, "Quantum theory owes its origin to the study of thermal radiation, in particular to the "blackbody" radiation that Robert Kirchhoff had first defined in 1859–1860." Empirical and theoretical ingredients for the scientific induction of Planck's law In 1860, Kirchhoff predicted experimental difficulties for the empirical determination of the function that described the dependence of the black-body spectrum as a function only of temperature and wavelength. And so it turned out. It took some forty years of development of improved methods of measurement of electromagnetic radiation to get a reliable result. In 1865, John Tyndall described radiation from electrically heated filaments and from carbon arcs as visible and invisible. Tyndall spectrally decomposed the radiation by use of a rock salt prism, which passed heat as well as visible rays, and measured the radiation intensity by means of a thermopile. In 1880, André-Prosper-Paul Crova published a diagram of the three-dimensional appearance of the graph of the strength of thermal radiation as a function of wavelength and temperature. He determined the spectral variable by use of prisms. He analyzed the surface through what he called "isothermal" curves, sections for a single temperature, with a spectral variable on the abscissa and a power variable on the ordinate. He put smooth curves through his experimental data points. 
They had one peak at a spectral value characteristic for the temperature, and fell either side of it towards the horizontal axis. Such spectral sections are widely shown even today. In a series of papers from 1881 to 1886, Langley reported measurements of the spectrum of heat radiation, using diffraction gratings and prisms, and the most sensitive detectors that he could make. He reported that there was a peak intensity that increased with temperature, that the shape of the spectrum was not symmetrical about the peak, that there was a strong fall-off of intensity when the wavelength was shorter than an approximate cut-off value for each temperature, that the approximate cut-off wavelength decreased with increasing temperature, and that the wavelength of the peak intensity decreased with temperature, so that the intensity increased strongly with temperature for short wavelengths that were longer than the approximate cut-off for the temperature. Having read Langley, in 1888, Russian physicist V.A. Michelson published a consideration of the idea that the unknown Kirchhoff radiation function could be explained physically and stated mathematically in terms of "complete irregularity of the vibrations of ... atoms". At this time, Planck was not studying radiation closely, and believed in neither atoms nor statistical physics. Michelson produced a formula for the spectrum for temperature: where denotes specific radiative intensity at wavelength and temperature , and where and are empirical constants. In 1898, Otto Lummer and Ferdinand Kurlbaum published an account of their cavity radiation source. Their design has been used largely unchanged for radiation measurements to the present day. It was a platinum box, divided by diaphragms, with its interior blackened with iron oxide. It was an important ingredient for the progressively improved measurements that led to the discovery of Planck's law. A version described in 1901 had its interior blackened with a mixture of chromium, nickel, and cobalt oxides. The importance of the Lummer and Kurlbaum cavity radiation source was that it was an experimentally accessible source of black-body radiation, as distinct from radiation from a simply exposed incandescent solid body, which had been the nearest available experimental approximation to black-body radiation over a suitable range of temperatures. The simply exposed incandescent solid bodies, that had been used before, emitted radiation with departures from the black-body spectrum that made it impossible to find the true black-body spectrum from experiments. Planck's views before the empirical facts led him to find his eventual law Planck first turned his attention to the problem of black-body radiation in 1897. Theoretical and empirical progress enabled Lummer and Pringsheim to write in 1899 that available experimental evidence was approximately consistent with the specific intensity law where and denote empirically measurable constants, and where and denote wavelength and temperature respectively. For theoretical reasons, Planck at that time accepted this formulation, which has an effective cut-off of short wavelengths. Gustav Kirchhoff was Max Planck's teacher and surmised that there was a universal law for blackbody radiation and this was called "Kirchhoff's challenge". Planck, a theorist, believed that Wilhelm Wien had discovered this law and Planck expanded on Wien's work presenting it in 1899 to the meeting of the German Physical Society. 
Experimentalists Otto Lummer, Ferdinand Kurlbaum, Ernst Pringsheim Sr., and Heinrich Rubens did experiments that appeared to support Wien's law, especially at high frequencies (short wavelengths), which Planck so wholly endorsed at the German Physical Society that it began to be called the Wien-Planck law. However, by September 1900, the experimentalists had proven beyond a doubt that the Wien-Planck law failed at the longer wavelengths. They would present their data on October 19. Planck was informed by his friend Rubens and quickly created a formula within a few days. In June of that same year, Lord Rayleigh had created a formula that would work for long, lower-frequency wavelengths based on the widely accepted theory of equipartition. So Planck submitted a formula combining both Rayleigh's law (or a similar equipartition theory) and Wien's law, weighted to one or the other law depending on wavelength, to match the experimental data. However, although this equation worked, Planck himself said that unless he could explain the formula, derived from a "lucky intuition", as one of "true meaning" in physics, it did not have true significance. Planck explained that thereafter followed the hardest work of his life. Planck did not believe in atoms, nor did he think the second law of thermodynamics should be statistical, because probability does not provide an absolute answer, and Boltzmann's entropy law rested on the hypothesis of atoms and was statistical. But Planck was unable to find a way to reconcile his black-body equation with continuous laws such as Maxwell's wave equations. So in what Planck called "an act of desperation", he turned to Boltzmann's atomic law of entropy, as it was the only one that made his equation work. Therefore, he used the Boltzmann constant k and his new constant h to explain the black-body radiation law, which became widely known through his published paper. Finding the empirical law Max Planck produced his law on 19 October 1900 as an improvement upon the Wien approximation, published in 1896 by Wilhelm Wien, which fit the experimental data at short wavelengths (high frequencies) but deviated from it at long wavelengths (low frequencies). In June 1900, based on heuristic theoretical considerations, Rayleigh had suggested a formula that he proposed might be checked experimentally. The suggestion was that the Stewart–Kirchhoff universal function might be of the form $c_1 T \lambda^{-4} \exp(-c_2/(\lambda T))$. This was not the celebrated Rayleigh–Jeans formula $8\pi k_\mathrm{B} T \lambda^{-4}$, which did not emerge until 1905, though it did reduce to the latter for long wavelengths, which are the relevant ones here. According to Klein, one may speculate that it is likely that Planck had seen this suggestion though he did not mention it in his papers of 1900 and 1901. Planck would have been aware of various other proposed formulas which had been offered. On 7 October 1900, Rubens told Planck that in the complementary domain (long wavelength, low frequency), and only there, Rayleigh's 1900 formula fitted the observed data well. For long wavelengths, Rayleigh's 1900 heuristic formula approximately meant that energy was proportional to temperature, $U_\lambda \propto T$. It is known that $dS/dU_\lambda = 1/T$ and this leads to $dS/dU_\lambda \propto 1/U_\lambda$ and thence to $d^2S/dU_\lambda^2 \propto -1/U_\lambda^2$ for long wavelengths. But for short wavelengths, the Wien formula leads to $1/T \propto -\ln U_\lambda + \text{const.}$ and thence to $d^2S/dU_\lambda^2 \propto -1/U_\lambda$ for short wavelengths. Planck perhaps patched together these two heuristic formulas, for long and for short wavelengths, to produce a formula
$$\frac{d^2S}{dU_\lambda^2} = \frac{\alpha}{U_\lambda(\beta + U_\lambda)}.$$
This led Planck to the formula
$$B_\lambda(T) = \frac{C\lambda^{-5}}{e^{c/(\lambda T)} - 1},$$
where Planck used the symbols $C$ and $c$ to denote empirical fitting constants.
Planck sent this result to Rubens, who compared it with his and Kurlbaum's observational data and found that it fitted for all wavelengths remarkably well. On 19 October 1900, Rubens and Kurlbaum briefly reported the fit to the data, and Planck added a short presentation to give a theoretical sketch to account for his formula. Within a week, Rubens and Kurlbaum gave a fuller report of their measurements confirming Planck's law. Their technique for spectral resolution of the longer wavelength radiation was called the residual ray method. The rays were repeatedly reflected from polished crystal surfaces, and the rays that made it all the way through the process were 'residual', and were of wavelengths preferentially reflected by crystals of suitably specific materials. Trying to find a physical explanation of the law Once Planck had discovered the empirically fitting function, he constructed a physical derivation of this law. His thinking revolved around entropy rather than being directly about temperature. Planck considered a cavity with perfectly reflective walls; inside the cavity, there are finitely many distinct but identically constituted resonant oscillatory bodies of definite magnitude, with several such oscillators at each of finitely many characteristic frequencies. These hypothetical oscillators were for Planck purely imaginary theoretical investigative probes, and he said of them that such oscillators do not need to "really exist somewhere in nature, provided their existence and their properties are consistent with the laws of thermodynamics and electrodynamics.". Planck did not attribute any definite physical significance to his hypothesis of resonant oscillators but rather proposed it as a mathematical device that enabled him to derive a single expression for the black body spectrum that matched the empirical data at all wavelengths. He tentatively mentioned the possible connection of such oscillators with atoms. In a sense, the oscillators corresponded to Planck's speck of carbon; the size of the speck could be small regardless of the size of the cavity, provided the speck effectively transduced energy between radiative wavelength modes. Partly following a heuristic method of calculation pioneered by Boltzmann for gas molecules, Planck considered the possible ways of distributing electromagnetic energy over the different modes of his hypothetical charged material oscillators. This acceptance of the probabilistic approach, following Boltzmann, for Planck was a radical change from his former position, which till then had deliberately opposed such thinking proposed by Boltzmann. In Planck's words, "I considered the [quantum hypothesis] a purely formal assumption, and I did not give it much thought except for this: that I had obtained a positive result under any circumstances and at whatever cost." Heuristically, Boltzmann had distributed the energy in arbitrary merely mathematical quanta , which he had proceeded to make tend to zero in magnitude, because the finite magnitude had served only to allow definite counting for the sake of mathematical calculation of probabilities, and had no physical significance. 
Referring to a new universal constant of nature, , Planck supposed that, in the several oscillators of each of the finitely many characteristic frequencies, the total energy was distributed to each in an integer multiple of a definite physical unit of energy, , not arbitrary as in Boltzmann's method, but now for Planck, in a new departure, characteristic of the respective characteristic frequency. His new universal constant of nature, , is now known as the Planck constant. Planck explained further that the respective definite unit, , of energy should be proportional to the respective characteristic oscillation frequency of the hypothetical oscillator, and in 1901 he expressed this with the constant of proportionality : Planck did not propose that light propagating in free space is quantized. The idea of quantization of the free electromagnetic field was developed later, and eventually incorporated into what we now know as quantum field theory. In 1906, Planck acknowledged that his imaginary resonators, having linear dynamics, did not provide a physical explanation for energy transduction between frequencies. Present-day physics explains the transduction between frequencies in the presence of atoms by their quantum excitability, following Einstein. Planck believed that in a cavity with perfectly reflecting walls and with no matter present, the electromagnetic field cannot exchange energy between frequency components. This is because of the linearity of Maxwell's equations. Present-day quantum field theory predicts that, in the absence of matter, the electromagnetic field obeys nonlinear equations and in that sense does self-interact. Such interaction in the absence of matter has not yet been directly measured because it would require very high intensities and very sensitive and low-noise detectors, which are still in the process of being constructed. Planck believed that a field with no interactions neither obeys nor violates the classical principle of equipartition of energy, and instead remains exactly as it was when introduced, rather than evolving into a black body field. Thus, the linearity of his mechanical assumptions precluded Planck from having a mechanical explanation of the maximization of the entropy of the thermodynamic equilibrium thermal radiation field. This is why he had to resort to Boltzmann's probabilistic arguments. Some recent proposals in the possible physical explanation of the Planck constant suggest that, following de Broglie's spirit of wave-particle duality, if, regarding the radiation as a wave packet, the Planck constant is determined by the physical properties of the vacuum and a critical amount of disturbance in the electromagnetic field. Planck's law may be regarded as fulfilling the prediction of Gustav Kirchhoff that his law of thermal radiation was of the highest importance. In his mature presentation of his own law, Planck offered a thorough and detailed theoretical proof for Kirchhoff's law, theoretical proof of which until then had been sometimes debated, partly because it was said to rely on unphysical theoretical objects, such as Kirchhoff's perfectly absorbing infinitely thin black surface. 
Subsequent events It was not until five years after Planck made his heuristic assumption of abstract elements of energy or of action that Albert Einstein conceived of really existing quanta of light in 1905 as a revolutionary explanation of black-body radiation, of photoluminescence, of the photoelectric effect, and of the ionization of gases by ultraviolet light. In 1905, "Einstein believed that Planck's theory could not be made to agree with the idea of light quanta, a mistake he corrected in 1906." Contrary to Planck's beliefs of the time, Einstein proposed a model and formula whereby light was emitted, absorbed, and propagated in free space in energy quanta localized in points of space. As an introduction to his reasoning, Einstein recapitulated Planck's model of hypothetical resonant material electric oscillators as sources and sinks of radiation, but then he offered a new argument, disconnected from that model, but partly based on a thermodynamic argument of Wien, in which Planck's formula played no role. Einstein gave the energy content of such quanta in the form . Thus Einstein was contradicting the undulatory theory of light held by Planck. In 1910, criticizing a manuscript sent to him by Planck, knowing that Planck was a steady supporter of Einstein's theory of special relativity, Einstein wrote to Planck: "To me it seems absurd to have energy continuously distributed in space without assuming an aether." According to Thomas Kuhn, it was not till 1908 that Planck more or less accepted part of Einstein's arguments for physical as distinct from abstract mathematical discreteness in thermal radiation physics. Still in 1908, considering Einstein's proposal of quantal propagation, Planck opined that such a revolutionary step was perhaps unnecessary. Until then, Planck had been consistent in thinking that discreteness of action quanta was to be found neither in his resonant oscillators nor in the propagation of thermal radiation. Kuhn wrote that, in Planck's earlier papers and in his 1906 monograph, there is no "mention of discontinuity, [nor] of talk of a restriction on oscillator energy, [nor of] any formula like ." Kuhn pointed out that his study of Planck's papers of 1900 and 1901, and of his monograph of 1906, had led him to "heretical" conclusions, contrary to the widespread assumptions of others who saw Planck's writing only from the perspective of later, anachronistic, viewpoints. Kuhn's conclusions, finding a period till 1908, when Planck consistently held his 'first theory', have been accepted by other historians. In the second edition of his monograph, in 1912, Planck sustained his dissent from Einstein's proposal of light quanta. He proposed in some detail that absorption of light by his virtual material resonators might be continuous, occurring at a constant rate in equilibrium, as distinct from quantal absorption. Only emission was quantal. This has at times been called Planck's "second theory". It was not till 1919 that Planck in the third edition of his monograph more or less accepted his 'third theory', that both emission and absorption of light were quantal. The colourful term "ultraviolet catastrophe" was given by Paul Ehrenfest in 1911 to the paradoxical result that the total energy in the cavity tends to infinity when the equipartition theorem of classical statistical mechanics is (mistakenly) applied to black-body radiation. 
But this had not been part of Planck's thinking, because he had not tried to apply the doctrine of equipartition: when he made his discovery in 1900, he had not noticed any sort of "catastrophe". It was first noted by Lord Rayleigh in 1900, and then in 1901 by Sir James Jeans; and later, in 1905, by Einstein when he wanted to support the idea that light propagates as discrete packets, later called 'photons', and by Rayleigh and by Jeans. In 1913, Bohr gave another formula with a further different physical meaning to the quantity . In contrast to Planck's and Einstein's formulas, Bohr's formula referred explicitly and categorically to energy levels of atoms. Bohr's formula was where and denote the energy levels of quantum states of an atom, with quantum numbers and . The symbol denotes the frequency of a quantum of radiation that can be emitted or absorbed as the atom passes between those two quantum states. In contrast to Planck's model, the frequency has no immediate relation to frequencies that might describe those quantum states themselves. Later, in 1924, Satyendra Nath Bose developed the theory of the statistical mechanics of photons, which allowed a theoretical derivation of Planck's law. The actual word 'photon' was invented still later, by G.N. Lewis in 1926, who mistakenly believed that photons were conserved, contrary to Bose–Einstein statistics; nevertheless the word 'photon' was adopted to express the Einstein postulate of the packet nature of light propagation. In an electromagnetic field isolated in a vacuum in a vessel with perfectly reflective walls, such as was considered by Planck, indeed the photons would be conserved according to Einstein's 1905 model, but Lewis was referring to a field of photons considered as a system closed with respect to ponderable matter but open to exchange of electromagnetic energy with a surrounding system of ponderable matter, and he mistakenly imagined that still the photons were conserved, being stored inside atoms. Ultimately, Planck's law of black-body radiation contributed to Einstein's concept of quanta of light carrying linear momentum, which became the fundamental basis for the development of quantum mechanics. The above-mentioned linearity of Planck's mechanical assumptions, not allowing for energetic interactions between frequency components, was superseded in 1925 by Heisenberg's original quantum mechanics. In his paper submitted on 29 July 1925, Heisenberg's theory accounted for Bohr's above-mentioned formula of 1913. It admitted non-linear oscillators as models of atomic quantum states, allowing energetic interaction between their own multiple internal discrete Fourier frequency components, on the occasions of emission or absorption of quanta of radiation. The frequency of a quantum of radiation was that of a definite coupling between internal atomic meta-stable oscillatory quantum states. At that time, Heisenberg knew nothing of matrix algebra, but Max Born read the manuscript of Heisenberg's paper and recognized the matrix character of Heisenberg's theory. Then Born and Jordan published an explicitly matrix theory of quantum mechanics, based on, but in form distinctly different from, Heisenberg's original quantum mechanics; it is the Born and Jordan matrix theory that is today called matrix mechanics. 
Heisenberg's explanation of the Planck oscillators, as non-linear effects apparent as Fourier modes of transient processes of emission or absorption of radiation, showed why Planck's oscillators, viewed as enduring physical objects such as might be envisaged by classical physics, did not give an adequate explanation of the phenomena. Nowadays, as a statement of the energy of a light quantum, often one finds the formula $E = \hbar\omega$, where $\hbar = h/2\pi$ denotes the reduced Planck constant and $\omega$ denotes angular frequency, and less often the equivalent formula $E = h\nu$. This statement about a really existing and propagating light quantum, based on Einstein's, has a physical meaning different from that of Planck's above statement about the abstract energy units to be distributed amongst his hypothetical resonant material oscillators. An article by Helge Kragh published in Physics World gives an account of this history. See also Emissivity Radiance Sakuma–Hattori equation Wien's displacement law References External links Summary of Radiation Radiation of a Blackbody – interactive simulation to play with Planck's law Scienceworld entry on Planck's Law Statistical mechanics Foundational quantum physics Max Planck Old quantum theory 1900 in science 1900 in Germany
wiki
Orra or ORRA may refer to: People: Orra White Hitchcock (1796–1863), American scientific illustrator Orra E. Monnette (1873–1936), American attorney, author and banker Other: Orra Jewellery, Indian jewellery store chain Oria, Apulia, village in southern Italy Oriental Rug Retailers of America Obamacare Repeal Reconciliation Act of 2017, a proposed name for the American Health Care Act of 2017 See also Ora (disambiguation)
wiki
The Land God Gave to Cain is a 1958 thriller novel by the British writer Hammond Innes. It was released in the United States by the publisher Knopf. After a plane crash in a remote part of Labrador, a British pilot heads out to investigate, prompted by radio messages his father has overheard. References Bibliography James Vinson & D. L. Kirkpatrick. Contemporary Novelists. St. James Press, 1986. 1958 British novels Novels by Hammond Innes British thriller novels Novels set in Canada William Collins, Sons books
wiki
"Revelation (Mother Earth)" is a single by Ozzy Osbourne, released in 1982. Tracks "Revelation (Mother Earth)" "Iron Man" (live) Songs performed by Ozzy Osbourne
wiki
Tormento di sant'Antonio (The Torment of Saint Anthony) may refer to: Tormento di sant'Antonio, a painting by Michelangelo Tormento di sant'Antonio, a painting by Giovanni Gerolamo Savoldo Related pages Tentazioni di sant'Antonio Tentazioni di sant'Antonio abate
wiki
Monazite is a primarily reddish-brown phosphate mineral that contains rare-earth elements. Due to variability in composition, monazite is considered a group of minerals. The most common species of the group is monazite-(Ce), that is, the cerium-dominant member of the group. It occurs usually in small isolated crystals. It has a hardness of 5.0 to 5.5 on the Mohs scale of mineral hardness and is relatively dense, about 4.6 to 5.7 g/cm3. The five most common species of monazite, distinguished by the relative amounts of the rare-earth elements in the mineral, are: monazite-(Ce), (Ce,La,Nd,Th)PO4 (the most common member), monazite-(La), (La,Ce,Nd)PO4, monazite-(Nd), (Nd,La,Ce)PO4, monazite-(Sm), (Sm,Gd,Ce,Th)PO4, monazite-(Pr), (Pr,Ce,Nd,Th)PO4. The elements in parentheses are listed in the order of their relative proportion within the mineral: lanthanum is the most common rare-earth element in monazite-(La), and so forth. Silica (SiO2) is present in trace amounts, as well as small amounts of uranium and thorium. Due to the alpha decay of thorium and uranium, monazite contains a significant amount of helium, which can be extracted by heating. The following analyses are of monazite from: (I.) Burke County, North Carolina, USA; (II.) Arendal, Norway; (III.) Emmaville, New South Wales, Australia. Monazite is an important ore for thorium, lanthanum, and cerium. It is often found in placer deposits. India, Madagascar, and South Africa have large deposits of monazite sands. The deposits in India are particularly rich in monazite. Monazite is radioactive due to the presence of thorium and, less commonly, uranium. The radiogenic decay of uranium and thorium to lead enables monazite to be dated through monazite geochronology. Monazite crystals often have multiple distinct zones that formed through successive geologic events that led to monazite crystallization. These domains can be dated to gain insight into the geologic history of the host rocks. The name monazite comes from the Greek μονάζειν (monazein, "to be solitary"), via German Monazit, in allusion to its isolated crystals. Structure All monazites adopt the same structure, meaning that the connectivity of the atoms is very similar to other compounds of the type M(III)PO4. The M(III) centers have a distorted coordination sphere, being surrounded by eight oxides with M–O distances around 2.6 Å in length. The phosphate anion is tetrahedral, as usual. The same structural motif is observed for lead chromate (PbCrO4). Mining history Monazite sand from Brazil was first noticed in sand carried in ship's ballast by Carl Auer von Welsbach in the 1880s. Von Welsbach was looking for thorium for his newly invented incandescent mantles. Monazite sand was quickly adopted as the thorium source and became the foundation of the rare-earth industry. Monazite sand was also briefly mined in North Carolina, but, shortly thereafter, extensive deposits in southern India were found. Brazilian and Indian monazite dominated the industry before World War II, after which major mining activity transferred to South Africa. There are also large monazite deposits in Australia. Monazite was long the only significant source of commercial lanthanides, but owing to concern over the disposal of the radioactive daughter products of thorium, bastnäsite came to displace monazite in the production of lanthanides in the 1960s due to its much lower thorium content.
Mineralization and extraction Because of their high density, monazite minerals concentrate in alluvial sands when released by the weathering of pegmatites. These so-called placer deposits are often beach or fossil beach sands and contain other heavy minerals of commercial interest such as zircon and ilmenite. Monazite can be isolated as a nearly pure concentrate by the use of gravity, magnetic, and electrostatic separation. Monazite sand deposits are prevalently of the monazite-(Ce) composition. Typically, the lanthanides in such monazites contain about 45–48% cerium, about 24% lanthanum, about 17% neodymium, about 5% praseodymium, and minor quantities of samarium, gadolinium, and yttrium. Europium concentrations tend to be low, about 0.05%. South African "rock" monazite, from Steenkampskraal, was processed in the 1950s and early 1960s by the Lindsay Chemical Division of American Potash and Chemical Corporation, at the time the largest producer of lanthanides in the world. Steenkampskraal monazite provided a supply of the complete set of lanthanides. Very low concentrations of the heaviest lanthanides in monazite justified the term "rare" earth for these elements, with prices to match. Thorium content of monazite is variable and sometimes can be up to 20–30%. Monazite from certain carbonatites or from Bolivian tin ore veins is essentially thorium-free. However, commercial monazite sands typically contain between 6 and 12% thorium oxide. Acid cracking The original process for "cracking" monazite so as to extract the thorium and lanthanide content was to heat it with concentrated sulfuric acid to temperatures between for several hours. Variations in the ratio of acid to ore, the extent of heating, and the extent to which water was added afterwards led to several different processes to separate thorium from the lanthanides. One of the processes caused the thorium to precipitate out as a phosphate or pyrophosphate in crude form, leaving a solution of lanthanide sulfates, from which the lanthanides could be easily precipitated as a double sodium sulfate. The acid methods led to the generation of considerable acid waste, and loss of the phosphate content of the ore. Alkaline cracking A more recent process uses hot sodium hydroxide solution (73%) at about . This process allows the valuable phosphate content of the ore to be recovered as crystalline trisodium phosphate. The lanthanide/thorium hydroxide mixture can be treated with hydrochloric acid to provide a solution of lanthanide chlorides, and an insoluble sludge of the less-basic thorium hydroxide. Extraction of rare-earth metals from monazite ore The extraction of rare-earth metals from monazite ore begins with digestion with sulfuric acid followed by aqueous extraction. The process requires many neutralizations and filtrations. The final products yielded for this process are thorium-phosphate concentrate, RE hydroxides, and uranium concentrate. Depending on the relative market prices of uranium, thorium, and rare earth elements as well as the availability of customers and the logistics of delivering to them, some or all of those products may be economical to sell or further process into a marketable form, while others constitute tailings for disposal. Products of the uranium and thorium decay series, particularly radium will be present in trace amounts and form a radiotoxic hazard. 
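The scale of the radium burden mentioned above can be estimated with a secular-equilibrium argument, in which the activity of a long-lived parent equals that of its daughter. This is a rough, illustrative calculation, not taken from the source; the half-lives used are standard values that also appear in the adjacent text.

```python
# Secular-equilibrium estimate of Ra-226 mass per tonne of uranium.
# Equal activities imply N_Ra / N_U = t_half(Ra-226) / t_half(U-238).
half_life_u238 = 4.468e9   # years
half_life_ra226 = 1600.0   # years

atom_ratio = half_life_ra226 / half_life_u238
mass_ratio = atom_ratio * (226.0 / 238.0)   # atomic-mass correction

mg_per_tonne = mass_ratio * 1e9             # one tonne is 1e9 mg
print(f"~{mg_per_tonne:.0f} mg Ra-226 per tonne of U-238")  # ~340 mg
```

The result, roughly 340 mg of radium-226 per tonne of uranium, is consistent with the "above 300 milligrams per metric ton" figure given below.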
While radium-228 (a product of thorium decay) will be present only in extremely minute amounts (less than one milligram per metric ton of thorium), and will decay away with a half-life of roughly 5.75 years, radium-226 will be present at a ratio above 300 milligrams per metric ton of uranium and, due to its long half-life (~1600 years), will essentially remain with the residue. As radium forms the least soluble alkaline earth metal sulfate known, radium sulfate will be present among the solid filtration products after sulfuric acid has been added. References Further reading J. C. Bailar et al., Comprehensive Inorganic Chemistry, Pergamon Press, 1973. R. J. Callow, The Industrial Chemistry of the Lanthanons, Yttrium, Thorium and Uranium, Pergamon Press, 1967. Gupta, C. K. and N. Krishnamurthy, Extractive Metallurgy of Rare Earths, CRC Press, 2005. Gupta, C. K., and T. K. Mukherjee, Hydrometallurgy in Extraction Processes, Boca Raton, Florida: CRC Press, 1990. Price List, Lindsay Chemical Division, American Potash and Chemical Corporation, 1960. R. C. Vickery, Chemistry of the Lanthanons, Butterworths and Academic Press, 1953. External links Monazite An Unusual State Of Matter Poem about monazite by Roald Hoffmann "British Monazite Mine, Shelby, N.C." in Durwood Barbour Collection of North Carolina Postcards (P077), North Carolina Collection Photographic Archives, Wilson Library, UNC-Chapel Hill; the third in a series of videos about a Monazite beach in Brazil. Monazite, thorium, and mesothorium (1915) Lanthanide minerals Thorium minerals Phosphate minerals Monoclinic minerals Minerals in space group 14
wiki
Goodbye Cruel World may refer to: Music Goodbye Cruel World (Elvis Costello album), an album by Elvis Costello and the Attractions Goodbye Cruel World (Custard album), an album by Custard Goodbye Cruel World, a 1999 album by Brutal Truth "Goodbye Cruel World" (James Darren song), a song by James Darren "Goodbye Cruel World" (Pink Floyd song), a song by Pink Floyd "Goodbye Cruel World" (Shakespears Sister song), a song by Shakespears Sister Other uses Goodbye Cruel World (TV series), a 1992 British drama starring Sue Johnston and Alun Armstrong See also Suicide note
wiki
Infinifilm, first introduced in 2001, was New Line Cinema's brand of specialized DVDs containing a feature that notified viewers of special features on the disc applicable to the scene currently playing, such as interviews, behind-the-scenes footage or deleted scenes. If the user chose to watch one of these special features, the movie would be paused and the special feature would then be played. After that, the user would be returned to the point in the movie where they left off. The last Infinifilm DVD release was The Number 23 in 2007. Many of the label's films that New Line ported to Blu-ray with the original Infinifilm content were rebranded as "Focus Points." New Line then abandoned Infinifilm, most likely due to its 2008 absorption into Warner Bros. Warner Bros. Discovery brands Home video lines Audiovisual introductions in 2001
wiki
Jewish on Campus is a student-led Jewish nonprofit organization dedicated to addressing discrimination against Jewish college students. History In 2020, Jewish on Campus was co-created by Jewish college students Isaac de Castro and Julia Jassey as an Instagram page meant to bring awareness to individual incidents of antisemitism on college campuses. After gaining popularity online, however, it transformed into a nonprofit organization. As an organization, Jewish on Campus expanded its work to include a platform for student journalism, data collection and analysis to better understand the state of antisemitism on campuses, and an ambassador program to unify Jewish students across the United States and Canada and to make statements representative of American Jewish and Canadian Jewish students. In November 2021, Jewish on Campus became a partner organization of the World Jewish Congress. See also Universities and antisemitism AMCHA Initiative References External links Jewish organizations Anti-racist organizations Anti-racist organizations in the United States Opposition to antisemitism Civil liberties advocacy groups in the United States Jewish organizations established in the 21st century 2020 establishments in the United States Student organizations established in the 21st century Student organizations in the United States
wiki
A drainage basin is an area of land where all flowing surface water converges to a single point, such as a river mouth, or flows into another body of water, such as a lake or ocean. A basin is separated from adjacent basins by a perimeter, the drainage divide, made up of a succession of elevated features, such as ridges and hills. A basin may consist of smaller basins that merge at river confluences, forming a hierarchical pattern. Other terms for a drainage basin are catchment area, catchment basin, drainage area, river basin, water basin, and impluvium. In North America, such a basin is commonly called a watershed, though in other English-speaking places, "watershed" is used only in its original sense, that of a drainage divide. A drainage basin's boundaries are determined by watershed delineation, a common task in environmental engineering and science. In a closed drainage basin, or endorheic basin, rather than flowing to the ocean, water converges toward the interior of the basin, known as a sink, which may be a permanent lake, a dry lake, or a point where surface water is lost underground. Drainage basins are similar but not identical to hydrologic units, which are drainage areas delineated so as to nest into a multi-level hierarchical drainage system. Hydrologic units are defined to allow multiple inlets, outlets, or sinks. In a strict sense, all drainage basins are hydrologic units but not all hydrologic units are drainage basins. Major drainage basins of the world Ocean basins The major ocean basins are as follows. About 48.71% of the world's land drains to the Atlantic Ocean. In North America, surface water drains to the Atlantic via the Saint Lawrence River and Great Lakes basins, the Eastern Seaboard of the United States, the Canadian Maritimes, and most of Newfoundland and Labrador. Nearly all of South America east of the Andes also drains to the Atlantic, as does most of Western and Central Europe and the greatest portion of western Sub-Saharan Africa, as well as Western Sahara and part of Morocco. The two major mediterranean seas of the world also flow to the Atlantic: The Caribbean Sea and Gulf of Mexico basin includes most of the U.S. interior between the Appalachian and Rocky Mountains, a small part of the Canadian provinces of Alberta and Saskatchewan, eastern Central America, the islands of the Caribbean and the Gulf, and a small part of northern South America. The Mediterranean Sea basin, with the Black Sea, includes much of North Africa, east-central Africa (through the Nile River), Southern, Central, and Eastern Europe, Turkey, and the coastal areas of Israel, Lebanon, and Syria. The Arctic Ocean drains most of Western and Northern Canada east of the Continental Divide, northern Alaska and parts of North Dakota, South Dakota, Minnesota, and Montana in the United States, the north shore of the Scandinavian peninsula in Europe, central and northern Russia, and parts of Kazakhstan and Mongolia in Asia, which totals about 17% of the world's land. Just over 13% of the land in the world drains to the Pacific Ocean. Its basin includes much of China, eastern and southeastern Russia, Japan, the Korean Peninsula, most of Indochina, Indonesia and Malaysia, the Philippines, all of the Pacific Islands, the northeast coast of Australia, and Canada and the United States west of the Continental Divide (including most of Alaska), as well as western Central America and South America west of the Andes. The Indian Ocean's drainage basin also comprises about 13% of Earth's land.
It drains the eastern coast of Africa, the coasts of the Red Sea and the Persian Gulf, the Indian subcontinent, Burma, and most parts of Australia. The Southern Ocean drains Antarctica, which comprises approximately eight percent of the Earth's land. Largest river basins The five largest river basins by area, from largest to smallest, are those of the Amazon (7 million km²), the Congo (4 million km²), the Nile (3.4 million km²), the Mississippi (3.22 million km²), and the Río de la Plata (3.17 million km²). The three rivers that drain the most water, from most to least, are the Amazon, Ganges, and Congo rivers. Endorheic drainage basins Endorheic drainage basins are inland basins that do not drain to an ocean. Around 18% of all land drains to endorheic lakes, seas or sinks. The largest of these consists of much of the interior of Asia, which drains into the Caspian Sea, the Aral Sea, and numerous smaller lakes. Other endorheic regions include the Great Basin in the United States, much of the Sahara Desert, the drainage basin of the Okavango River (Kalahari Basin), highlands near the African Great Lakes, the interiors of Australia and the Arabian Peninsula, and parts of Mexico and the Andes. Some of these, such as the Great Basin, are not single drainage basins but collections of separate, adjacent closed basins. In endorheic bodies of standing water where evaporation is the primary means of water loss, the water is typically more saline than the oceans. An extreme example of this is the Dead Sea. Importance Geopolitical boundaries Drainage basins have been historically important for determining territorial boundaries, particularly in regions where trade by water has been important. For example, the English crown gave the Hudson's Bay Company a monopoly on the fur trade in the entire Hudson Bay basin, an area called Rupert's Land. Bioregional political organization today includes agreements of states (e.g., international treaties and, within the US, interstate compacts) or other political entities in a particular drainage basin to manage the body or bodies of water into which it drains. Examples of such interstate compacts are the Great Lakes Commission and the Tahoe Regional Planning Agency. Hydrology In hydrology, the drainage basin is a logical unit of focus for studying the movement of water within the hydrological cycle. The process of finding a drainage boundary is referred to as watershed delineation. Finding the area and extent of a drainage basin is an important step in many areas of science and engineering; a minimal sketch of the idea on a gridded flow model is given below.
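The following is a minimal sketch of the delineation idea on a gridded flow model, assuming a simplified D8-style scheme in which every cell drains to exactly one neighbouring cell; the cell labels, flow directions, and function name are hypothetical, not taken from any real dataset or delineation tool.

```python
# Sketch of grid-based watershed delineation: starting from the outlet,
# walk the drainage graph upstream and collect every contributing cell.
from collections import deque

def delineate(flow_to, outlet):
    """flow_to: dict mapping each cell to the single cell it drains into.
    Returns the set of cells whose water eventually reaches the outlet."""
    upstream = {}
    for cell, target in flow_to.items():
        upstream.setdefault(target, []).append(cell)
    basin, queue = {outlet}, deque([outlet])
    while queue:
        for cell in upstream.get(queue.popleft(), []):
            if cell not in basin:
                basin.add(cell)
                queue.append(cell)
    return basin

# Toy grid: cells A, B, C drain toward outlet O; D and E drain elsewhere.
flow_to = {"A": "B", "B": "O", "C": "B", "D": "E", "E": "X"}
print(sorted(delineate(flow_to, "O")))  # ['A', 'B', 'C', 'O']
```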
The majority of water that discharges from the basin outlet originated as precipitation falling on the basin. A portion of the water that enters the groundwater system beneath the drainage basin may flow towards the outlet of another drainage basin, because groundwater flow directions do not always match those of the overlying drainage network. Measurement of the discharge of water from a basin may be made by a stream gauge located at the basin's outlet. Depending on the conditions of the drainage basin, some rainfall seeps directly into the ground as it falls. This water will either remain underground, slowly making its way downhill toward the basin outlet, or it will permeate deeper into the soil and consolidate into groundwater aquifers. As water flows through the basin, it can form tributaries that change the structure of the land. There are three main types of drainage pattern, determined by the underlying rock and terrain. Rock that erodes easily forms dendritic patterns, which are seen most often. The two other types of patterns that form are trellis patterns and rectangular patterns. Rain gauge data is used to measure total precipitation over a drainage basin, and there are different ways to interpret that data. If the gauges are many and evenly distributed over an area of uniform precipitation, the arithmetic mean method will give good results. In the Thiessen polygon method, the drainage basin is divided into polygons, with the rain gauge in the middle of each polygon assumed to be representative of the rainfall on the area of land included in its polygon; a minimal sketch of this computation is given below. These polygons are made by drawing lines between adjacent gauges; the perpendicular bisectors of those lines then form the polygon boundaries. In the isohyetal method, contours of equal precipitation (isohyets) are drawn over the gauges on a map; calculating the area between these curves and adding up the volume of water is time-consuming. Isochrone maps can be used to show the time taken for runoff water within a drainage basin to reach a lake, reservoir or outlet, assuming constant and uniform effective rainfall.
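This is a minimal sketch of the Thiessen polygon computation described above, assuming the polygons have already been constructed and their areas measured; the gauge areas and rainfall values are invented for illustration.

```python
# Area-weighted basin precipitation, as in the Thiessen polygon method.
def thiessen_mean(gauges):
    """gauges: list of (polygon_area_km2, precipitation_mm) pairs."""
    total_area = sum(area for area, _ in gauges)
    return sum(area * precip for area, precip in gauges) / total_area

gauges = [
    (120.0, 42.5),  # polygon area (km^2), observed rainfall (mm)
    (80.0, 55.0),
    (200.0, 38.0),
]
print(f"Basin-average precipitation: {thiessen_mean(gauges):.1f} mm")
```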
Geomorphology Drainage basins are the principal hydrologic unit considered in fluvial geomorphology. A drainage basin is the source of the water and sediment that move from higher elevations through the river system to lower elevations, reshaping channel forms along the way. Ecology Drainage basins are important in ecology. As water flows over the ground and along rivers, it can pick up nutrients, sediment, and pollutants. With the water, they are transported towards the outlet of the basin, and can affect the ecological processes along the way as well as in the receiving water source. Modern use of artificial fertilizers, containing nitrogen, phosphorus, and potassium, has affected the mouths of drainage basins. The minerals are carried by the drainage basin to the mouth and may accumulate there, disturbing the natural mineral balance. This can cause eutrophication, in which plant growth is accelerated by the added nutrients. Resource management Because drainage basins are coherent entities in a hydrological sense, it has become common to manage water resources on the basis of individual basins. In the U.S. state of Minnesota, governmental entities that perform this function are called "watershed districts". In New Zealand, they are called catchment boards. Comparable community groups based in Ontario, Canada, are called conservation authorities. In North America, this function is referred to as "watershed management". In Brazil, the National Policy of Water Resources, regulated by Act n° 9.433 of 1997, establishes the drainage basin as the territorial division of Brazilian water management. When a river basin crosses at least one political border, either a border within a nation or an international boundary, it is identified as a transboundary river, and management of such basins becomes the responsibility of the countries sharing it. The Nile Basin Initiative, the OMVS for the Senegal River, and the Mekong River Commission are a few examples of arrangements involving management of shared river basins. Management of shared drainage basins is also seen as a way to build lasting peaceful relationships among countries. Catchment factors The catchment is the most significant factor determining the amount or likelihood of flooding. Catchment factors are: topography, shape, size, soil type, and land use (paved or roofed areas). Catchment topography and shape determine the time taken for rain to reach the river, while catchment size, soil type, and development determine the amount of water to reach the river. Topography Generally, topography plays a big part in how fast runoff will reach a river. Rain that falls in steep mountainous areas will reach the primary river in the drainage basin faster than rain that falls on flat or gently sloping areas. Shape Shape will contribute to the speed with which the runoff reaches a river. A long thin catchment will take longer to drain than a circular catchment. Size Size will help determine the amount of water reaching the river: the larger the catchment, the greater the potential for flooding. Size can be estimated from the length and width of the drainage basin. Soil type Soil type will help determine how much water from the drainage area reaches the river. Certain soil types such as sandy soils are very free-draining, and rainfall on sandy soil is likely to be absorbed by the ground. However, soils containing clay can be almost impermeable, so rainfall on clay soils will run off and contribute to flood volumes. After prolonged rainfall, even free-draining soils can become saturated, meaning that any further rainfall will reach the river rather than being absorbed by the ground. If the surface is impermeable, precipitation will create surface run-off, leading to a higher risk of flooding; if the ground is permeable, the precipitation will infiltrate the soil. Land use Land use can contribute to the volume of water reaching the river, in a similar way to clay soils. For example, rainfall on roofs, pavements, and roads will be collected by rivers with almost no absorption into the groundwater. See also Catchment hydrology References Citations Sources DeBarry, Paul A. (2004). Watersheds: Processes, Assessment and Management. John Wiley & Sons. External links Instructional video: Manual watershed delineation is a five-step process Instructional video: To delineate a watershed you must identify land surface features from topographic contours Science week catchment factsheet Catchment Modelling Toolkit Water Evaluation And Planning System (WEAP) - modeling hydrologic processes in a drainage basin New Mexico State University - Water Task Force Recommended Watershed Terminology Watershed Condition Classification Technical Guide United States Forest Service Science in Your Watershed, USGS Studying Watersheds: A Confluence of Important Ideas Water Sustainability Project Sustainable water management through demand management and ecological governance, with the POLIS Project at the University of Victoria Map of the Earth's primary watersheds, WRI What is a watershed and why should I care? Cycleau - A project looking at approaches to managing catchments in North West Europe flash animation of how rain falling onto the landscape will drain into a river depending on the terrain StarHydro – software tool that covers concepts of fluvial geomorphology and watershed hydrology EPA Surf your watershed Florida Watersheds and River Basins - Florida DEP Fluvial landforms Freshwater ecology Geomorphology Hydrology Rivers Water and the environment Water streams
wiki
The brush-tailed rabbit rat (Conilurus penicillatus) is a species of rodent in the family Muridae. It is found in Australia and Papua New Guinea. Description The brush-tailed rabbit-rat is one of three Conilurus species that were extant in Australia prior to European colonisation, and represents the sole surviving species of the genus. The other two species, the white-footed rabbit-rat (C. albipes) and the Capricorn rabbit-rat (C. capricornensis), are now extinct. Morphological analysis established three subspecies of C. penicillatus, of which one occurs on Papua New Guinea and two are present within Australia: one on the Tiwi Islands off the coast of the Northern Territory, and another on the Australian mainland. Behaviour The brush-tailed rabbit-rat is a semi-arboreal, nocturnal species that spends some of its time foraging on the ground. Individuals tend to den in trees such as Eucalyptus miniata and Eucalyptus tetrodonta, as well as in hollow logs on the ground. The species makes use of smaller hollows, and hollows that are closer to the ground, than other co-occurring and larger-bodied mammal species such as the common brushtail possum (Trichosurus vulpecula) and the black-footed tree-rat (Mesembriomys gouldii). This may make the brush-tailed rabbit-rat more susceptible than these species to predation and to destruction by high-intensity savanna fires. Distribution The brush-tailed rabbit-rat has a small, poorly known distribution in Papua New Guinea, and a larger distribution within Australia. On the Australian mainland, the species has substantially declined, with a study in the Northern Territory finding that its extent of occurrence has declined by more than 65%. The same study found that the species is contracting towards geographic areas that are wetter and lower-lying than where it was found historically. Population declines are not limited to the mainland, with one study finding a 64% reduction in trapping success on the Tiwi Islands between 2002 and 2015. The brush-tailed rabbit-rat was formerly much more common and widespread than it is currently. The species has very few contemporary records from the Western Australian portion of its distribution, but was formerly known from the Mitchell Plateau region of the Kimberley, with sparse records from other areas (e.g. Prince Regent National Park). In the Northern Territory, there have been no mainland records from outside of the Cobourg Peninsula in more than ten years. The species was reintroduced to the Darwin region; however, this reintroduction attempt failed, and the species is also considered extirpated from Kakadu National Park (where many vertebrate species have declined despite the 'protected' status of the region). Population genomic analysis of the two Australian subspecies found high levels of differentiation among populations, including between the Tiwi Islands of Bathurst and Melville. The same study showed a substantial reduction in relatedness among individuals over distances of 5 km, although significant values of spatial autocorrelation of genotypes persisted for distances of more than 100 km. This suggests that individuals tend to disperse much smaller distances than the co-occurring northern quoll (Dasyurus hallucatus), for which significant spatial autocorrelation exists at 500 km. Genetic diversity of the brush-tailed rabbit-rat was found to be highest on Melville Island, followed by the Cobourg Peninsula, and lowest on Bathurst Island and at the Mitchell Plateau.
References Conilurus Mammals of the Northern Territory Mammals of Western Australia Rodents of Australia Mammals described in 1842 Taxonomy articles created by Polbot Rodents of New Guinea
wiki
A nosegay, posy, or tussie-mussie is a small flower bouquet, typically given as a gift. They have existed in some form since at least medieval times, when they were carried or worn around the head or bodice. Doilies are traditionally used to bind the stems in these arrangements. Alternatively, "posy holders", available in a variety of shapes and materials (although often silver), enable the wearing of these arrangements "at the waist, in the hair, or secured with a brooch". The term nosegay arose in fifteenth-century Middle English as a combination of nose and gay (the latter then meaning "ornament"). A nosegay is, thus, an ornament that appeals to the nose or nostril. The term tussie-mussie comes from the reign of Queen Victoria (1837–1901), when the small bouquets became a popular fashion accessory. Typically, tussie-mussies include floral symbolism from the language of flowers, and therefore may be used to send a message to the recipient. In modern times the term specifically refers to small bouquets in a conical metal holder, or to the holder itself, particularly when used at a white wedding. See also Corsage Floral design Floristry Ring a Ring o' Roses Sachet References Fashion accessories Floristry Flowers
wiki
Caramelized pork and eggs is a Cambodian and Vietnamese dish traditionally consisting of small pieces of marinated pork and boiled eggs braised in coconut juice. Although it is a familiar part of an everyday meal amongst the Khmer Krom and Vietnamese in Southern Vietnam, it is also one of the traditional dishes during Vietnamese New Year. Before it is served for general consumption, the food is offered to deceased ancestors or family members on altars. In Vietnam, rice is commonly served alongside this dish. It is similar to tau yu bak (豆油肉), a traditional Hokkien dish. See also Tết References Cambodian cuisine Vietnamese cuisine Egg dishes
wiki
The French Mondain is a breed of fancy pigeon developed over many years of selective breeding. French Mondains, along with other varieties of domesticated pigeons, are all descendants of the rock pigeon (Columba livia). The breed was originally developed in France as a utility pigeon. American and European styles The French Mondain is available in two different styles. The American and European French Mondain are actually different breeds that share the same name. Pieter A. H. Du Toit of southern Africa is the owner of the world champion French Mondain pigeon, Mufasa, which won the International Pigeon Award in Brussels, Belgium, in 2010; he is regarded as a leading breeder of and expert on the breed. See also List of pigeon breeds References Pigeon breeds Pigeon breeds originating in France External links French Mondain Pigeon: Breed Guide - Pigeonpedia
wiki
Pushover or Push Over may refer to: Pushover EP, by Australian singer Lisa Miller Pushover (film), a 1954 film noir starring Fred MacMurray Pushover (video game), a 1992 game from Ocean Pushover analysis, a type of seismic analysis "Push Over", a segment game from The Price Is Right "Pushover", a song by Etta James from the 1963 album Etta James Top Ten "Pushover", a song by The Long Winters from the 2006 album Putting the Days to Bed Pushover try, a try scored from a set-piece scrum in rugby union; see Scrum (rugby union)#Awarding Push Over (band), an American post-hardcore band
wiki
Psychosociology or psycho-sociology is the study of problems common to psychology and sociology, particularly the way individual behavior is influenced by the groups the person belongs to. For example, in the study of criminals, psychology studies the personality of the criminal as shaped by the criminal's upbringing. Sociology studies the behavior of the group itself: the methods the criminal group uses to recruit members and the way the group changes over time. Psychosociology studies the criminal's behavior as created by the group they belong to, such as the young people living on the same neighborhood block. There are many social factors that can affect the psychology of others. An example of this is social cliques: whether or not one is accepted into a desired clique changes the way one thinks of oneself and the people around one. Friendships formed at a young age likewise shape not only psychological development but also social skills and social behavior. The same goes for common laws in society: whether an individual's group decides to obey them or not affects that individual's outlook on the law and on their group as a whole. The way people act or speak dictates the way others see them in a society. For example, individuals can see authority in many different ways depending on their experiences and what others have told them; because of this, people's opinions and ideas about authority differ according to their social knowledge of it. Every individual has their own unique and personal psychological thought process, which they use to analyze the world around them. People internalize and process sociological factors in ways relative to their psychological thought process. This relationship is reciprocal: society can alter and shape the ways that people think, while at the same time society can be influenced by the externalized psychological thinking of the individuals within it. Because of this, one can see how psychology is instrumental in helping the sociologist view the widespread effects of social facts within an individual's behavior. References Interdisciplinary branches of psychology Interdisciplinary subfields of sociology
wiki
New wave reggae may refer to: 2 Tone New wave
wiki
The tamalito (plural "tamalitos") is a common dish prepared by the Maya of Belize. Tamalitos have the appearance of tamales, being wrapped in leaves, but contain no meat. Preparation Tamalitos are prepared using fresh corn ("maiz"), preferably corn harvested within the past one or two days; the fresher the corn, the sweeter and softer the tamalito. Twenty ears of fresh corn yield about fifteen tamalitos. See also List of dumplings References Peruvian cuisine Culture of Amazonas Region Dumplings
wiki
Michael or Mike McDonald may refer to: Michael McDonald (musician) (born 1952), American blue-eyed soul singer Mike McDonald (footballer) (born 1950), footballer for Stoke City and a number of Scottish clubs Mike McDonald (American football) (born 1958), American football player Michael McDonald (costume designer) (born c. 1963), American costume designer Michael McDonald (comedian) (born 1964), American actor-comedian Michael McDonald (kickboxer) (born 1965), Canadian K-1 fighter Michael McDonald (basketball) (born 1969), American basketball player Michael McDonald (runner) (born 1975), Jamaican runner Michael McDonald (poker player) (born 1989), Canadian poker player Michael McDonald (MMA fighter) (born 1991), American mixed martial artist Michael McDonald (rugby union) (born 1999), Australian rugby union player Michael Cassius McDonald (1839–1907), American crime boss and political boss Michael Phillip McDonald, Australian judge See also Michael MacDonald (disambiguation)
wiki
Niall O'Brien may refer to: Niall O'Brien (actor) (1946–2009), Irish actor Niall O'Brien (cricketer) (born 1981), Irish cricketer Niall O'Brien (hurler) (born 1994), Irish hurler Niall O'Brien (priest) (1939–2004), Irish Columban missionary priest See also Neil O'Brien (disambiguation)
wiki
The paired (right and left) laterodorsal thalamic veins each originate from the lateral dorsal part of the corresponding half of the thalamus. Benno Shlesinger in 1976 classified these veins as belonging to the lateral group of thalamic veins. References Thalamic veins
wiki
Ararirá River is a river of Amazonas state in north-western Brazil. See also List of rivers of Amazonas References Brazilian Ministry of Transport Rivers of Amazonas (Brazilian state) Tributaries of the Rio Negro (Amazon)
wiki
Omar Núñez (full name: Omar Yasser Núñez Munguía) is an Olympic swimmer from Nicaragua. He swam for Nicaragua at the: Olympics: 2008, 2012 World Championships: 2001, 2003, 2005, 2007, 2009, 2011 Pan American Games: 2003, 2007, 2011 Central American and Caribbean Games: 2002, 2006 Short Course Worlds: 2008, 2010 Central American Sports Games: 1997, 2001, 2006 (silver and bronze medals), 2010 External links Living people Nicaraguan male swimmers Olympic swimmers of Nicaragua Swimmers at the 2008 Summer Olympics Swimmers at the 2012 Summer Olympics Swimmers at the 2003 Pan American Games Swimmers at the 2007 Pan American Games Swimmers at the 2011 Pan American Games Pan American Games competitors for Nicaragua Year of birth missing (living people)
wiki
Themistians may refer to: Asteroids of the Themis family Agnoetae, Christian sect founded by Themistius Calonymus
wiki
Mortality rate, or death rate, is a measure of the number of deaths (in general, or due to a specific cause) in a particular population, scaled to the size of that population, per unit of time. Mortality rate is typically expressed in units of deaths per 1,000 individuals per year; thus, a mortality rate of 9.5 (out of 1,000) in a population of 1,000 would mean 9.5 deaths per year in that entire population, or 0.95% of the total. It is distinct from "morbidity", which is either the prevalence or incidence of a disease, and also from the incidence rate (the number of newly appearing cases of the disease per unit of time). An important specific mortality rate measure is the crude death rate, which looks at mortality from all causes in a given time interval for a given population. The CIA, for instance, estimates that the crude death rate globally will be 7.7 deaths per 1,000 people in a population per year. In a generic form, mortality rates can be calculated as (d / p) × 10^n, where d represents the deaths from whatever cause of interest is specified that occur within a given time period, p represents the size of the population in which the deaths occur (however this population is defined or limited), and 10^n is the conversion factor from the resulting fraction to another unit (e.g., multiplying by 10^3 = 1,000 gives the mortality rate per 1,000 individuals). A worked sketch of this calculation is given below.
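As a worked illustration of the generic formula, the short sketch below reproduces the United States calculation quoted later in this article (about 2,419,900 deaths in a population of about 290,810,000 in 2003); the function name is illustrative rather than part of any standard statistics library.

```python
def mortality_rate(deaths, population, per=100_000):
    # Generic form: (d / p) * 10^n, with 10^n supplied as `per`.
    return deaths / population * per

# US figures for 2003, as cited in the text.
print(round(mortality_rate(2_419_900, 290_810_000)))  # ~832 deaths per 100,000
```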
Crude death rate, globally The crude death rate is defined as "the mortality rate from all causes of death for a population," calculated as the "[t]otal number of deaths during a given time interval" divided by the "[m]id-interval population", per 1,000 or 100,000; for instance, the population of the U.S. was around 290,810,000 in 2003, and in that year, approximately 2,419,900 deaths occurred in total, giving a crude death (mortality) rate of 832 deaths per 100,000. The CIA estimates that the U.S. crude death rate will be 8.3 per 1,000, while it estimates that the global rate will be 7.7 per 1,000. According to the World Health Organization, the ten leading causes of death, globally, in 2016, for both sexes and all ages, by crude death rate per 100,000 population, were: ischaemic heart disease, 126; stroke, 77; chronic obstructive pulmonary disease, 41; lower respiratory infections, 40; Alzheimer's disease and other dementias, 27; trachea, bronchus and lung cancers, 23; diabetes mellitus, 21; road injury, 19; diarrhoeal diseases, 19; and tuberculosis, 17. Mortality rates may also be measured per thousand: the number of people of a given group who die per thousand people. Decreasing mortality rates are one of the reasons for population increase, and the development of medical science and other technologies has reduced mortality rates in all the countries of the world for some decades. In 1990, the mortality rate of children under 5 years of age was 144 per thousand, but by 2015 the child mortality rate had fallen to 38 per thousand. Related measures of mortality Other, more specific measures of mortality are also used. For any of these, a "sex-specific mortality rate" refers to "a mortality rate among either males or females", where the calculation involves both "numerator and denominator... limited to the one sex". Use in epidemiology In most cases there are few if any ways to obtain exact mortality rates, so epidemiologists use estimation to predict correct mortality rates. Mortality rates are usually difficult to predict due to language barriers, health infrastructure related issues, conflict, and other reasons. Maternal mortality has additional challenges, especially as they pertain to stillbirths, abortions, and multiple births. In some countries, during the 1920s, a stillbirth was defined as "a birth of at least twenty weeks' gestation in which the child shows no evidence of life after complete birth". In most countries, however, a stillbirth was defined as "the birth of a fetus, after 28 weeks of pregnancy, in which pulmonary respiration does not occur". Census data and vital statistics Ideally, all mortality estimation would be done using vital statistics and census data. Census data give detailed information about the population at risk of death, while vital statistics provide information about live births and deaths in the population. Often, census data and vital statistics are not available. This is common in developing countries, countries that are in conflict, areas where natural disasters have caused mass displacement, and other areas where there is a humanitarian crisis. Household surveys Household surveys or interviews are another way in which mortality rates are often assessed. There are several methods to estimate mortality in different segments of the population. One such example is the sisterhood method, in which researchers estimate maternal mortality by contacting women in populations of interest, asking whether they have a sister who has reached child-bearing age (usually 15), and conducting an interview or written questions about possible deaths among sisters. The sisterhood method, however, does not work in cases where sisters may have died before the sister being interviewed was born. Orphanhood surveys estimate mortality by asking children about the mortality of their parents. This approach has often been criticized as producing adult mortality estimates that are biased for several reasons. The adoption effect is one such instance: orphans often do not realize that they are adopted, and interviewers may not realize that an adoptive or foster parent is not the child's biological parent. There is also the issue of parents being reported on by multiple children, while some adults have no children and thus are not counted in mortality estimates. Widowhood surveys estimate adult mortality by asking respondents about a deceased husband or wife. One limitation of the widowhood survey surrounds the issue of divorce, where people may be more likely to report that they are widowed in places where there is great social stigma around being a divorcee. Another limitation is that multiple marriages introduce biased estimates, so individuals are often asked about their first marriage. Biases will be significant if there is an association between the deaths of spouses, as in countries with large AIDS epidemics. Sampling Sampling refers to the selection of a subset of the population of interest to efficiently gain information about the entire population. Samples should be representative of the population of interest. Cluster sampling is an approach to non-probability sampling in which each member of the population is assigned to a group (cluster); clusters are then randomly selected, and all members of selected clusters are included in the sample. Often combined with stratification techniques (in which case it is called multistage sampling), cluster sampling is the approach most often used by epidemiologists. In areas of forced migration, there is more significant sampling error.
Thus cluster sampling is not the ideal choice in such areas. Mortality statistics Causes of death vary greatly between developed and less developed countries; see also list of causes of death by rate for worldwide statistics. According to Jean Ziegler (the United Nations Special Rapporteur on the Right to Food from 2000 to March 2008), mortality due to malnutrition accounted for 58% of total mortality in 2006: "In the world, approximately 62 million people, all causes of death combined, die each year. In 2006, more than 36 million died of hunger or diseases due to deficiencies in micronutrients". Of the roughly 150,000 people who die each day across the globe, about two thirds (100,000 per day) die of age-related causes. In industrialized nations, the proportion is much higher, reaching 90%. Economics Scholars have stated that there is a significant relationship between a low standard of living resulting from low income and increased mortality rates. A low standard of living is more likely to result in malnutrition, which can make people more susceptible to disease and more likely to die from these diseases. A lower standard of living may lead to a lack of hygiene and sanitation, increased exposure to and spread of disease, and a lack of access to proper medical care and facilities. Poor health can in turn contribute to low and reduced incomes, which can create a loop known as the health-poverty trap. Indian economist and philosopher Amartya Sen has stated that mortality rates can serve as an indicator of economic success and failure. Historically, mortality rates have been adversely affected by short-term price increases. Studies have shown that mortality rates increase at a rate concurrent with increases in food prices. These effects have a greater impact on vulnerable, lower-income populations than they do on populations with a higher standard of living. In more recent times, higher mortality rates have been less tied to socio-economic levels within a given society, but have differed more between low- and high-income countries. It is now found that national income, which is directly tied to standard of living within a country, is the largest factor in mortality rates being higher in low-income countries. Preventable mortality Mortality rates from preventable causes are especially pronounced for children under 5 years old, particularly in lower-income, developing countries. These children have a much greater chance of dying of diseases that have become mostly preventable in higher-income parts of the world: more children die of malaria, respiratory infections, diarrhea, perinatal conditions, and measles in developing nations. Data show that after the age of 5 these preventable causes level out between high- and low-income countries. See also Biodemography Compensation law of mortality Demography Gompertz–Makeham law of mortality List of causes of death by rate List of countries by birth rate List of countries by death rate List of countries by life expectancy Maximum life span Micromort Mortality displacement Risk adjusted mortality rate Vital statistics Medical statistics Weekend effect World population References Sources Crude death rate (per 1,000 population) based on World Population Prospects The 2008 Revision, United Nations. Retrieved 22 June 2010 Rank Order – Death rate in CIA World Factbook Mortality in The Medical Dictionary, Medterms.
Retrieved 22 June 2010 "WISQARS Leading Causes of Death Reports, 1999–2007", US Centers for Disease Control Retrieved 22 June 2010 Edmond Halley, An Estimate of the Degrees of the Mortality of Mankind (1693) External links DeathRiskRankings: Calculates risk of dying in the next year using MicroMorts and displays risk rankings for up to 66 causes of death Data regarding death rates by age and cause in the United States (from Data360) Complex Emergency Database (CE-DAT): Mortality data from conflict-affected populations Human Mortality Database: Historic mortality data from developed nations Deaths this year OUR WORLD IN DATA: Number of deaths per year, World Population Demography Epidemiology Medical aspects of death Actuarial science Population ecology Medical statistics Temporal rates
wiki
Nelson's collared lemming (Dicrostonyx nelsoni) is a species of rodent in the family Cricetidae. It is found in western and southwestern Alaska in the United States. References Musser, G. G., and M. D. Carleton. (2005). Superfamily Muroidea. pp. 894–1531 in Mammal Species of the World: A Taxonomic and Geographic Reference. D. E. Wilson and D. M. Reeder, eds. Johns Hopkins University Press, Baltimore. Dicrostonyx Mammals described in 1900 Arctic land animals Mammals of the United States Endemic fauna of the United States Endemic fauna of Alaska Mammals of the Arctic Taxonomy articles created by Polbot Taxa named by Clinton Hart Merriam
wiki
The Arctic lemming (Dicrostonyx torquatus) is a species of rodent in the family Cricetidae. Although generally classified as a "least concern" species, the Novaya Zemlya subspecies (Dicrostonyx torquatus ungulatus) is considered a vulnerable species under Russian nature conservation legislation (included in the Red Book of the Russian Federation since 1998). Biology It is found only in Arctic biomes in the Russian Federation, and it is the most common mammal on Severnaya Zemlya. Specimens were once found in England, but the species is now extirpated there. For the most part, lemmings of the genus Lemmus can coexist with those of the genus Dicrostonyx. Arctic lemmings migrate when population density becomes too great, and they resort to swimming in search of a new habitat. Lemming population cycles, and the periodic local disappearance of lemmings in the Arctic, have been shown to cause fluctuations in local breeding among geese and waders. Recovery of lemmings after years of low density is associated with a period of successful breeding and the rearing of their young under the snow. The diet of the Arctic lemming has been studied and found to consist of 86% dicotyledons, 14% monocotyledons, and less than 1% mosses. The diet of a family of lemmings consists mostly of Saliceae, although Poaceae are also eaten. They are a well-studied example of a cyclic predator-prey relationship. Terns in the Arctic target lemmings that move in groups; after attacks, lemmings seek shelter in holes or elsewhere outside the terns' territory to avoid additional attacks. Environment During the winter, Arctic lemmings make nests that help maintain thermoregulation, protect their young, and aid their survival against predators. One of their predators is the Arctic fox, which finds it difficult to hunt lemmings because they burrow deep within the snow; the fox must dig through the snow in order to reach them. When snow is scarce and there is little for the lemmings to nest or burrow in, lemming populations can periodically disappear as a result of hunting by predators and the animals' inability to protect themselves. References External links Dicrostonyx Mammals of the Arctic Mammals of Russia Endemic fauna of Russia Arctic land animals Mammals described in 1778 Taxonomy articles created by Polbot
wiki
Places Stoney (Kansas) Stoney Pond, an artificial lake near Bucks Corners (New York) Stoney (lunar crater) Stoney (Martian crater) Other Stoney (album), an album by Post Malone Stoney (musician) Stoney language, a Siouan language spoken in Canada Stoney (drink) Place name disambiguation pages
wiki
The Auati-Paraná Canal () is a natural canal of Amazonas state in north-western Brazil. It is a distributary that leaves the Solimões River and joins the Japurá River. Course The Auati-Paraná, also called the Ati-Paraná or Ati-Paranã, is sometimes called a river, sometimes a paraná (channel) and sometimes a canal. The last term seems most appropriate, since the natural canal leaves one river and joins another. The canal divides the lower western Amazon plateau to the north from the Amazon plain. The canal forms the boundary between the Auatí-Paraná Extractive Reserve, created in 2001, on the north bank, and the Mamirauá Sustainable Development Reserve on the south bank. The canal is a body of white water, but almost all the streams that flow into it from the extractive reserve are black water. See also List of rivers of Amazonas References Sources Rivers of Amazonas (Brazilian state)
wiki
The Commission for Social Care Inspection was a non-departmental public body and the single, independent inspectorate for social care in England. Its sponsor department was the Department of Health of the United Kingdom government. It incorporated the work formerly done by the Social Services Inspectorate (SSI), the SSI/Audit Commission Joint Review Team and the National Care Standards Commission (NCSC). History The Commission brought together the inspection, regulation and review of all social care services into one organisation. It was created by the Health and Social Care (Community Health and Standards) Act 2003 and became fully operational on 1 April 2004. The Commission received grant in aid from the Department of Health and also raised part of its running costs by charging regulatory fees. The fees were set out in The Commission for Social Care Inspection (Fees and Frequency of Inspections) Regulations 2004. From 1 April 2007 the regulation of Children's Services (Fostering and Adoption Agencies, Boarding Schools and Children's Homes) no longer fell within the remit of the CSCI. These functions were then carried out by Ofsted. The Commission was abolished on 31 March 2009 and was succeeded by the Care Quality Commission. Commissioners Chair - Dame Denise Platt DBE Chief Inspector - Paul Snell Commissioner - John Knight Commissioner - Professor Jim Mansell Commissioner - Dr. Olu Olasode Commissioner - Peter Westland CBE Commissioner - Beryl Seaman CBE See also Social Work Inspection Agency, the equivalent organization in Scotland at one time References Social care in England 2004 establishments in England Defunct organisations based in England Social work organizations Government of England Government agencies established in 2004 Government agencies disestablished in 2009 Defunct public bodies of the United Kingdom
wiki
МШД is a Russian abbreviation that may refer to: "MasterChef. Deti" (MasterChef Junior), a culinary show featuring children. "Moy shumny dom" (The Loud House), an American animated series by Nickelodeon Animation Studio. Mys Shmidta, an urban-type settlement in Egvekinot Urban Okrug, Chukotka Autonomous Okrug, Russia. Magazin shagovoy dostupnosti, a convenience store within walking distance.
wiki
This article lists the wetlands of the Gambia designated under the Ramsar Convention. Statistics The Ramsar Convention has entered into force in the Gambia; the country counts 3 Ramsar sites. List Annexes References Related articles Ramsar Convention List of wetlands of international importance under the Ramsar Convention Environment of the Gambia External links Gambia-related lists
wiki
Stefanos Tsitsipas defeated Dominic Thiem in the final, 6–7(6–8), 6–2, 7–6(7–4), to win the singles tennis title at the 2019 ATP Finals. Tsitsipas was making his tournament debut. It marked the first instance since 2005, and only the fourth instance overall, that the Tour Finals champion was determined via a final-set tiebreak. Alexander Zverev was the defending champion, but was defeated by Thiem in the semifinals. Alongside Tsitsipas, Daniil Medvedev and Matteo Berrettini made their tournament debuts. This marked the final Tour Finals appearance for six-time champion Roger Federer; aged 38, he lost in the semifinals to Tsitsipas. Rafael Nadal secured the year-end No. 1 ranking for the fifth time after Novak Djokovic was eliminated in the round-robin stage. Seeds Alternates Draw Finals Group Andre Agassi Group Björn Borg Standings are determined by: 1. number of wins; 2. number of matches played; 3. in two-player ties, head-to-head results; 4. in three-player ties, percentage of sets won, then head-to-head results (if two players remain tied in percentage of sets won and the third differs), or percentage of games won if all three players have the same percentage of sets won, then head-to-head results; 5. ATP rankings. References External links Official website Draw Singles
wiki
The Yijin Jing () is a manual containing a series of exercises, coordinated with breathing, intended to dramatically enhance physical health when practiced consistently. In Chinese yi means "change", jin means "tendons and sinews", while jing means "methods". While some consider these exercises as a form of Qigong, yijin jing is a relatively intense practice that aims to strengthen muscles and tendons, promote strength and flexibility, increase speed and stamina, and improve balance and coordination of the body. These exercises are notable for their incorporation as key elements of the physical conditioning used in Shaolin training. In the modern day, many translations and distinct sets of exercises are derived from the original (the provenance of which is the subject of some debate). See also Baduanjin Liu Zi Jue Qigong Notes References Qigong Chinese martial arts Warrior code Shaolin Temple
wiki
Trampolining or trampoline gymnastics is a competitive Olympic sport in which athletes perform acrobatics while bouncing on a trampoline. In competition, these can range from simple jumps in the straight, pike, tuck, or straddle position to more complex combinations of forward and/or backward somersaults and twists. Scoring is based on the difficulty of the routine and on the total seconds spent in the air. Points are deducted for bad form and for horizontal displacement from the center of the bed. Outside of the Olympics, competitions are referred to as gym sport, trampoline gymnastics, or gymnastics, which includes the events of trampoline, synchronised trampoline, double mini trampoline and tumbling. Origins In the early 1930s, George Nissen observed trapeze artistes performing tricks when bouncing off the safety net. He made the first modern trampoline in his garage to reproduce this on a smaller scale and used it to help with his diving and tumbling activities. He formed a company to build trampolines for sale and used a variant of the Spanish word trampolín (diving board) as a trademark. He used the trampoline to entertain audiences and also let them participate in his demonstrations as part of his marketing strategy. This was the beginning of a new sport. In the United States, trampolining was quickly introduced into school physical education programs and was also used in private entertainment centers. However, following a number of injuries and lawsuits caused by insufficient supervision and inadequate training, trampolining is now mostly conducted in specialist gyms with certified trainers. This has caused a large reduction in the number of competitive athletes in the United States and a consequent decline from the earlier American prominence in the sport. Elsewhere in the world the sport was most strongly adopted in Europe and the former Soviet Union. Since trampolining became an Olympic sport in 2000, many more countries have started developing programs. Basic landing positions Competitive trampolining routines consist of 10 contacts with the trampoline bed, combining varying rotations, twists and shapes, with take-off and landing in one of four positions: feet, seat, front, or back. A routine must always start and finish on feet. In addition to the 10 contacts with the bed in a routine, competitors must start their routine within 60 seconds after presenting to the judges. They are also permitted up to one "out bounce", a straight jump to control their height at the end of a routine, before sticking the landing. The trampolinist must stop completely, meaning that the bed must also stop moving, and hold still for a count of 3 seconds before moving. Basic shapes In competitions, moves must usually be performed in one of three basic shapes: tuck, pike, or straight. A fourth 'shape', known as 'puck' because it appears to be a hybrid of pike and tuck, is often used in multiple twisting somersaults; it is typically used in place of a 'tuck' and in competition would normally be judged as an open tuck shape. A straddle, or straddled pike, is a variant of a pike with arms and legs spread wide, and is only recognized as a shaped jump, not as part of any somersault moves. Rotation is performed about the body's longitudinal and lateral axes, producing twists and somersaults respectively. Twists are done in multiples of a half and somersaults in multiples of a quarter.
For example, a barani ball out move consists of a take-off from the back followed by a tucked 1¼ front somersault combined with a ½ twist, to land on feet. Rotation around the dorso-ventral axis is also possible (producing side-somersaults and "turntables"), but these are not generally considered to be valid moves within competitions and carry no 'tariff' for difficulty. Trampoline skills can be written in FIG (Fédération Internationale de Gymnastique) shorthand. FIG shorthand consists of one digit signifying the number of quarter rotations, followed by digits representing the number of half twists in each somersault, and a symbol representing the position of the skill: "/" represents a straight position, "<" represents a pike position, and "ο" represents a tuck position. For example, 42/ is a back somersault with a full twist in the straight position, 800ο is a double back somersault with no twists in the tuck position, and 821/ is a double somersault in the straight position with a full twist in the first somersault and a half twist in the second. A minimal parsing sketch of this notation is given below.
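As an illustration of the shorthand, here is a small decoder sketch; it handles only simple well-formed strings of the kind described above (a single leading rotation digit, one twist digit per somersault, one position symbol), uses a Latin "o" in place of the "ο" symbol, and is not an official FIG tool.

```python
# Sketch of a FIG-shorthand decoder for simple skills such as "42/",
# "800o" and "821/".
POSITIONS = {"/": "straight", "<": "pike", "o": "tuck"}

def decode(skill):
    rotation, twists, position = skill[0], skill[1:-1], skill[-1]
    return {
        "somersaults": int(rotation) / 4,         # quarter rotations -> somersaults
        "half_twists": [int(t) for t in twists],  # one entry per somersault
        "position": POSITIONS[position],
    }

print(decode("42/"))   # 1 somersault, one full (2 half) twist, straight
print(decode("800o"))  # 2 somersaults, no twists, tuck
```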
Competition Individual The first individual trampolining competitions were held in colleges and schools in the US and then in Europe. In the early years of competition there was no defined format, with performers often completing lengthy routines and even remounting if they fell off partway through. Gradually competitions became more codified, such that by the 1950s the 10-bounce routine was the norm, paving the way for the first World Championships, which were organised by Ted Blake of Nissen and held in London in 1964. The first World Champions were both American: Dan Millman and Judy Wills Cline. Kurt Baechler of Switzerland and Ted Blake of England were the European pioneers, and the first ever televised National Championships were held in England in 1958. Soon after the first World Championships, an inaugural meeting of prominent trampolinists was held in Frankfurt to explore the formation of an International Trampoline Federation. In 1965 in Twickenham, the Federation was formally recognised as the international governing body for the sport. In 1969, the first European Championship was held in Paris; Paul Luxon of London was the winner at the age of 18, and the ladies' winner was Ute Czech of Germany. From that time until 2010, European and World Championships took place in alternate years, the European in odd years and the World in even years. Now the World Championships are held annually. In 1973, Ted Blake organised the first World Age Group Competition (WAG) in the newly opened Picketts Lock Sports Centre; these now run alongside the World Championships. Blake also used the first WAG as an opportunity to organise a World Trampoline Safety Conference, held in the Bloomsbury Hotel, London, in order to codify safety concerns. There is also a World Cup circuit involving a number of international competitions every year, as well as international matches between teams from several countries. At first the Americans were successful at World Championship level, but soon European competitors began to dominate the sport, and for a number of years athletes from countries that made up the former Soviet Union often dominated. Germany and France have been the other strong nations in trampolining, and the first four ranking places in world trampolining used to go to the USSR, France, Britain and Germany. In recent years, Canada has also produced Olympic medalists and World champions, due in large part to contributions made to the sport by Dave Ross, who pioneered the sport in Canada almost 30 years ago and has consistently produced Olympic and World Cup athletes and champions. Since trampolining became an Olympic sport, China has also made a very successful effort to develop world-class trampoline gymnasts; their first major success came at the 2007 Men's World Championship, followed by Men's and Women's gold medals and a bronze at the 2008 Olympic Games in Beijing. Since then they have won both World Championships and several Olympic medals. Synchronized In synchronized trampolining, two athletes perform exactly the same routine of ten skills at the same time on two adjacent trampolines. Each athlete is scored separately by a pair of judges for their form, in the same manner as for individual competitions. Additional judges score the pair for synchronization: fewer points are deducted for lack of synchronization if the pair are bouncing at the same height at the same time. The degree of difficulty of the routine is determined in the same way as for individual trampoline routines, and the points are added to the score to determine the winner. Double mini A double mini-trampoline is smaller than a regulation competition trampoline. It has a sloped end and a flat bed. The gymnasts run up and jump onto the sloping end and then jump onto the flat part before dismounting onto a mat. Skills are performed during the jumps or as they dismount. A double mini-trampoline competition consists of two types of pass. In the first, known as a mounter pass, the athlete performs one skill in the jump from the sloping end to the flat bed and a second skill as they dismount from the flat bed to the landing mat. In the second, known as a spotter pass, the athlete does a straight jump from the sloping end to the flat bed to gain height, then, after landing on the flat, performs the first skill, and after landing on the flat a second time, performs a second skill as they dismount. These skills are similar to those performed on a regular trampoline, except that there is forward movement along the trampoline. The form and difficulty are judged in a similar manner as for trampolining, but there are additional deductions for failing to land cleanly (without stepping) or landing outside a designated area on the mat. Tumbling Tumbling gymnastics is a further discipline of gymnastics competed at national and international events, usually alongside trampoline events. Instead of a sprung trampoline, competitors perform a single, long, complex tumbling and somersaulting combination along a straight, sprung runway, leading to a high final somersault onto a landing mat. The skills involved are very similar to those used in floor exercise or vault routines in artistic gymnastics, but with an extra emphasis on continuity and directional accuracy. Tumbling is not an Olympic Games event, but has been held as part of the European Games, as well as at individual World and continental championships. Format The International Trampoline Federation became part of the Fédération Internationale de Gymnastique in 1999. FIG is now the international governing body for the sport, which is paired with tumbling as the skill sets overlap. International competitions are run under the rules of FIG.
Individual national gymnastics organizations can make local variations to the rules in matters such as the compulsory and optional routines and the number of rounds for national and local competitions. As part of the agreement to merge the FIT with FIG, individual trampolining was accepted into the Summer Olympic Games from 2000 as an additional gymnastic sport. The currently accepted basic format for individual trampoline competitions usually consists of two or three routines, one of which may involve a compulsory set of skills. The skills consist of various combinations of somersaults, shaped bounces, body landings and twists performed in various body positions such as the tuck, pike or straight position. The routines are performed on a standard 14-foot-by-7-foot regulation-sized trampoline with a central marker. Each routine consists of the athlete performing ten different skills, starting and finishing on the feet. Scoring The routine is marked out of 10 by five judges, with deductions for incomplete moves or poor form. Usually the highest and lowest scores are discarded. Additional points can be added depending on the difficulty of the skills being performed. The degree of difficulty (DD, or tariff) is calculated by adding a factor for each half turn (twist) or quarter somersault. Difficulty is important in a routine; however, there are differences of opinion between coaches as to whether it is better to increase the difficulty of a routine, given that this usually reduces the form score, or to improve execution scores by displaying better form in an easier routine. In senior level competitions, a "Time of Flight" (ToF) score was added to the overall score from 2010. This benefits athletes who can maintain greater height during their routines. "Time of Flight" is the time spent in the air from the moment the athlete leaves the mat until the time they make contact again, measured with electronic timing equipment. The score given is the sum, in seconds, of the time of all completed jumps. Time of Flight is now used in most competitions, including club, county and regional events, as it is a key factor in judging. In 2017, the method of determining the horizontal displacement from the centre was changed: new markings were added to the bed and zones were set up, with deductions based on the distance from the centre of the trampoline bed. The displacement score is 10 minus the sum of all the landing-zone deductions. The displacement is measured electronically where the equipment is available, or else by two judges observing the landing zones. The total score is the degree of difficulty (DD) performed, plus the total Time of Flight (ToF), minus standardized deductions for poor form and mistakes, minus the horizontal displacement deductions; a worked sketch of this combination is given below.
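The sketch below combines the scoring components just described; the routine values are invented for illustration, and the exact deduction scheme varies between rulebooks and competition levels.

```python
# Illustrative trampoline total score:
# total = difficulty + time of flight - form deductions - displacement deductions.
def total_score(difficulty, flight_times, form_deductions, zone_deductions):
    time_of_flight = sum(flight_times)   # seconds, summed across all ten skills
    displacement = sum(zone_deductions)  # landing-zone penalties
    return difficulty + time_of_flight - sum(form_deductions) - displacement

# Hypothetical routine: DD 14.2, ~1.7 s per skill, small form and landing errors.
print(round(total_score(14.2, [1.7] * 10, [0.1, 0.2, 0.1], [0.0, 0.1]), 2))  # 30.7
```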
The top women competitors usually perform routines with a DD greater than 14.50. The women's synchronised trampoline pair of Karen Cockburn and Rosannagh Maclennan, also of Canada, completed a new world record DD of 14.20 at the same April 2, 2007, Lake Placid World Cup.

Safety
Although trampoline competitors are highly trained, they are attempting to perform complex manoeuvres that can lead to accidents and falls. Trampolines used in competitions have their springs covered in pads to reduce the chance of injury when landing off the bed. They also have padded end decks, the areas onto which athletes are most likely to fall off the trampoline. The rules for international competitions (updated by FIG in 2006) also require 200 mm thick mats on the floor for 2 metres around each trampoline, and for there to be four spotters whose task is to attempt to catch, or reduce the impact of, an athlete falling off the side of the trampoline bed. The floor-matting rules are typically adopted by national bodies, but not always in full; for example, in the UK the requirement for national and regional competitions is still 2 m of coverage, but of only 20–25 mm thick matting. Teenage trampoline athletes are at higher risk of injury with higher training loads. Among Olympic athletes at the 2008, 2012 and 2016 Games, the injury rate for trampoline gymnasts was about half that for artistic gymnasts.

References

Sources
Some original material extracted from Bounce 2000 information booklet: David Allen, Brisbane, Queensland, Australia.

External links
Everything about trampolining and acrobatic sports
Important Trampolining Safety Tips

Gymnastics
Individual sports
Summer Olympic disciplines
Acrobatic sports
Jumping sports
wiki
Elbert Smith may refer to:
Elbert A. Smith (1871–1959), American leader in the Reorganized Church of Jesus Christ of Latter Day Saints
Elbert B. Smith (1920–2013), American historian and author
Elbert H. Smith, 19th-century American poet
Elbert S. Smith (1911–1983), American politician
Elbert Smith, an actor in the horror films The Blob and 4D Man
wiki
"The Historical Praise of Reason" (original French title: "Éloge historique de la Raison") is a panegyric in the form of a biography, written by the philosopher Voltaire in 1774. Synopsis This fable in the form of a panegyric tells the story of the allegorical figure of Reason, who, after hiding in a well for years, finally emerges and realizes that her reign may have returned. External links English translation by Adi. S Bharat in 'Pusteblume' journal of translation. "Éloge historique de la Raison" (in French). Wikisource. "Éloge historique de la Raison" audiobook Works by Voltaire Panegyrics
wiki
Knuckle threads or round threads are an unusual, highly rounded thread form. The large space between the rounded crests and roots gives debris somewhere to shift without interfering with the thread, making this form resistant to debris and thread damage.

Standards
Knuckle threads with a 30-degree thread angle and flat flanks are standardized in DIN 405 for inch pitches and diameters ranging from 8 mm to 200 mm. A more recent standard, DIN 20400, uses metric thread pitches and lists diameters from 10 mm to 300 mm. As DIN is a German organization, many DIN thread charts write numbers with a comma as the decimal marker. For a thread pitch p, the crest and root rounding radius is slightly less than p/4, and approximately the middle third of each thread flank is flat. The American Petroleum Institute (API) specification 5B lists a "round thread" for tubing, often referred to as "8 round" or "8rd" for 8 threads per inch. The only other pitch listed is 10 threads per inch. The thread angle at the flank is 60 degrees, and the crest and root rounding radius is approximately p/6 for threads of pitch p. Both the thread angle and the rounding radius are more like those of ordinary ISO threads than of the DIN form. For 0.125 inch thread pitch (8 threads per inch), the API round thread root radius is 0.017 inch and the crest radius is 0.020 inch. API threads taper at 3/4 inch of diameter per foot of length. Many knuckle threads follow neither the DIN nor the API standard, and are custom designed and fabricated for the particular application. (A worked example of this arithmetic appears below.)

Applications
This thread form's good debris tolerance is the source of its use in oilfields, where it provides a leak-free connection in field use. Many European automobiles use a knuckle thread for their "tow eye", an external threaded recess into which a steel towing lug can be fitted. For example, some Porsche cars use DIN 405 Rd 20 x 1/8", and some BMWs use a thread similar to DIN 405 Rd 16 x 1/8". Other applications use a knuckle thread's rounded edges to reduce the stress on softer materials at a point of connection. Some linear actuators use a knuckle thread to reduce the wear of the steel leadscrew against a plastic sliding nut. Firefighting respirator standard EN 148-1 "Respiratory protective devices - Threads for facepieces" lists a knuckle thread to connect metal and rubber parts. The rounded crest and root of knuckle threads resemble the Edison screw used on light bulbs, although bulbs have a much shallower thread angle than most knuckle threads. The root profile of knuckle threads resembles a ball screw thread, although the flank and crest of ball screw threads are often truncated.

References

External links
Knuckle thread

Threading (manufacturing)
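A small Python sketch of the rules of thumb quoted in the Standards section above (rounding radius near p/4 for the DIN form, near p/6 for the API form, and the API taper of 3/4 inch of diameter per foot). The values are illustrative approximations only; the published DIN 405 and API 5B tables remain the authority:

```python
def knuckle_radius(pitch, style="din"):
    """Approximate crest/root rounding radius for a knuckle thread.

    pitch -- thread pitch (any length unit)
    style -- "din" for the ~p/4 DIN 405 form, "api" for the ~p/6 API form
    """
    return pitch / 4 if style == "din" else pitch / 6

# API "8 round" tubing thread: 8 threads per inch -> pitch of 0.125 inch.
p = 1 / 8
print(round(knuckle_radius(p, "api"), 4))   # ~0.0208 in; the spec tables give
                                            # 0.017 in (root), 0.020 in (crest)

# API taper: 3/4 inch of diameter per foot, i.e. 0.0625 in per inch of length.
length = 4.0                                # inches of engaged thread
print(0.75 / 12 * length)                   # diameter change over that length
```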
wiki
This timeline of online dating services also includes broader events related to technology-assisted dating (not just online dating). Where there are similar services, only major ones or the first of their kind are listed.

Dominance of online dating
A 2017 survey tracked the change in how Americans have met their spouses and romantic partners since 1940. The results showed a steep increase in the proportion of couples whose first interaction occurred through online media.

See also
Comparison of online dating services

References

Online dating services
wiki
This is a comparison of English dictionaries, which are dictionaries of the English language. The dictionaries listed here are categorized into "full-size" dictionaries (which extensively cover the language and are targeted at native speakers), "collegiate" (which are smaller and often contain other biographical or geographical information useful to college students), and "learner's" (which are smaller still, are targeted at English language learners, and all use the International Phonetic Alphabet to indicate pronunciation).

Full-size
These dictionaries generally aim for extensive coverage of the language for native speakers. They typically cover only one variety of English.

Collegiate
These dictionaries generally contain fewer entries (and fewer definitions per entry) than their full-size counterparts, but may contain additional material, such as biographical or geopolitical information, that would be useful to a college student. They may be revised more often and thus contain more up-to-date usage. Sometimes the term collegiate or college is used merely to indicate a physically smaller, more economically printed dictionary.

Learner's
These dictionaries generally contain fewer entries than full-size or collegiate dictionaries, but contain additional information that would be useful to a learner of English, such as more extensive usage notes, example sentences or phrases, collocations, and both British and American pronunciations (sometimes multiple variants of the latter). In addition, definitions are usually restricted to a simpler core vocabulary than that expected of a native speaker. All use the IPA to indicate pronunciation.

Notes

English dictionaries
wiki
Para table tennis is a parasport which follows the rules set by the International Table Tennis Federation (ITTF). The usual table tennis rules are in effect, with slight modifications for wheelchair athletes. Athletes with a range of disabilities can take part. Athletes receive classifications between 1 and 11. Classes 1–5 are for those in wheelchairs and classes 6–10 are for those whose disabilities allow them to play standing. Within those groups, a higher classification means the athlete has more function. Class 11 is defined for players with an intellectual disability.

Classification
The roles of classification are to determine eligibility to compete for athletes with a disability and to group athletes equitably for competition purposes. Athletes are grouped by reference to the functional ability that results from their impairment.

Sitting classes
Class 1: No sitting balance, with severe reduction of function in the playing arm.
Class 2: No sitting balance, with reduction of function in the playing arm.
Class 3: No sitting balance, although the upper part of the trunk may show activity. Normal arms, although slight motor losses may be found in the playing hand without significant effect on table tennis skills. The non-playing arm keeps the trunk in position.
Class 4: Sitting balance exists, although it is not optimal because of missing anchorage (stabilisation) of the pelvis.
Class 5: Normal function of the trunk muscles.

Standing classes
Class 6: Severe impairments of the legs and arms.
Class 7: Very severe impairments of the legs (poor static and dynamic balance), or severe to moderate impairments of the playing arm, or a combination of arm and leg impairments less severe than in class 6.
Class 8: Moderate impairments of the legs, or moderate impairments of the playing arm (considering that elbow and shoulder control is very important), or moderate cerebral palsy, hemiplegia or diplegia with a good playing arm.
Class 9: Mild impairments of the leg(s), or mild impairments of the playing arm, or severe impairments of the non-playing arm, or mild cerebral palsy with hemiparesis or monoplegia.
Class 10: Very mild impairments of the legs, or very mild impairment of the playing arm, or severe to moderate impairment of the non-playing arm, or moderate impairment of the trunk.
Class 11: For players with an intellectual disability.

Laws of table tennis in wheelchair play
There are no exceptions to the laws of table tennis for standing players with a disability; all players play according to the laws and regulations of the ITTF. The umpire may relax the requirements for a correct service if compliance is prevented by physical disability.

Service
If the receiver is in a wheelchair, the service shall be a let under the following circumstances:
After touching the receiver's court, the ball returns in the direction of the net.
The ball comes to rest on the receiver's court.
In singles, the ball leaves the receiver's court after touching it by either of its sidelines.
If the receiver strikes the ball before it crosses a sideline or takes a second bounce on his or her side of the playing surface, the service is considered good and no let is called.

Doubles
When two players in wheelchairs are a pair playing doubles, the server shall first make a service and the receiver shall then make a return, but thereafter either player of the pair may make returns. However, no part of a player's wheelchair shall protrude beyond the imaginary extension of the center line of the table. If it does, the umpire shall award the point to the opposing pair.
Limb positions
If both players or pairs are in wheelchairs, the player or pair scores a point if:
the opponent does not maintain minimum contact with the seat or cushion(s), with the back of the thigh, when the ball is struck;
the opponent touches the table with either hand before striking the ball;
the opponent's footrest or foot touches the floor during play.

Wheelchairs
Wheelchairs must have at least two large wheels and one small wheel. If a wheel becomes dislodged so that the player's wheelchair has no more than two wheels, the rally must be stopped immediately and a point awarded to the opponent. The height of one or at most two cushions is limited to 15 cm in playing conditions, with no other addition to the wheelchair. In team and class events, no part of the body above the knees may be attached to the chair, as this could improve balance.

Equipment and playing conditions
A player may not normally wear any part of a tracksuit during play. A player with a physical disability, either in a wheelchair or standing, may wear the trousers portion of a tracksuit during play, but jeans are not permitted. Table legs must be at least 40 cm from the end line of the table for wheelchair players. In international competitions, the playing space is not less than 14 m long and 7 m wide, and the flooring shall not be concrete. The space for wheelchair events may be reduced to 8 m long and 6 m wide, and for wheelchair events the flooring may be concrete, which is prohibited on other occasions.

Competitions
Five levels of international competition are sanctioned. Each tournament's factor is counted in the tournament credit system used for qualification for some tournaments. Players who participate in regional championships (Fa50) earn 50 credit points. For the 2012 Summer Paralympics, the required tournament credit of 80 points had to be earned during a period starting on 4 November 2010 and ending on 31 December 2011. A minimum tournament credit is also required for qualification for the World Para Table Tennis Championships.

Notable players
Ibrahim Hamadtou, an Egyptian Class 6 champion, is one of the best-known Paralympic table tennis players, since a YouTube video of him went viral. Hamadtou has won many awards, including silver medals at the African Para Table Tennis Championships held in 2011 and 2013. He holds his racket with his mouth. Natalia Partyka, a four-time Olympian representing Poland, participates in Class 10 events at para table tennis tournaments. Born without a right hand and forearm, she takes part in competitions for able-bodied athletes as well as in competitions for athletes with disabilities. Partyka reached the last 32 of the London 2012 Olympic women's table tennis tournament.

References

External links
ITTF Para Table Tennis
Table Tennis on International Paralympic Committee website

Table tennis
wiki
The room synchronization technique is a form of concurrency control in computer science. The room synchronization problem involves supporting a set of m mutually exclusive "rooms", where any number of users can execute code simultaneously in a shared room (any one of them), but no two users can simultaneously execute code in separate rooms. Room synchronization can be used to implement asynchronous parallel queues and stacks with constant-time access (assuming a fetch-and-add operation). A sketch of the room abstraction appears below.

References
G. E. Blelloch, P. Cheng, P. B. Gibbons, Room synchronizations, Annual ACM Symposium on Parallel Algorithms and Architectures 2001, 122–133

See also
Monitor (synchronization)
The Single Threaded Apartment Model in Microsoft's Component Object Model#Threading, as used by Visual Basic

Concurrency control
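The following is a minimal, lock-based Python sketch of the room abstraction described above, for illustration only: it enforces the invariant that only one room is occupied at a time, but it is not the wait-free, constant-time construction of Blelloch, Cheng and Gibbons, and it makes no fairness guarantees. All names are hypothetical:

```python
import threading

class Rooms:
    """Mutually exclusive "rooms": any number of users may be inside the
    same room at once, but two different rooms are never occupied together."""

    def __init__(self, num_rooms: int):
        self.num_rooms = num_rooms
        self._cv = threading.Condition()
        self._active_room = None   # room currently occupied, or None
        self._occupants = 0        # users inside the active room

    def enter(self, room: int):
        with self._cv:
            # Wait until all rooms are empty or our room is already active.
            while self._occupants > 0 and self._active_room != room:
                self._cv.wait()
            self._active_room = room
            self._occupants += 1

    def exit(self):
        with self._cv:
            self._occupants -= 1
            if self._occupants == 0:
                self._active_room = None
                self._cv.notify_all()  # users of any room may now compete

# Usage, echoing the queue example: one room for enqueuers, one for
# dequeuers, so the two phases never interleave even though many users
# run concurrently within each room.
rooms = Rooms(2)
rooms.enter(0)   # join the "enqueue" room
rooms.exit()     # leave it, allowing the "dequeue" room to be entered
```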
wiki
Tunebia is a genus of crustaceans in the class Malacostraca (the higher crustaceans).

Species
Tunebia hatagumoana (Sakai, 1961)
Tunebia tutelina (C. G. S. Tan & Ng, 1994)

Xanthidae
wiki
Web archiving is the process of collecting portions of the World Wide Web to ensure the information is preserved in an archive for future researchers, historians, and the public.

Web archive may also refer to:
Webarchive, file format for saving and reviewing complete web pages using the Safari web browser
Web ARChive, archive format
Wayback Machine, digital archive of the World Wide Web and other information on the Internet
Web archive file, a file that archives all the content of one web page within it
Web application archive, or WAR, file format

See also
Internet archive (disambiguation)
wiki
Harmonic series may refer to either of two related concepts: Harmonic series (mathematics) Harmonic series (music)
wiki
The Yulungshan vole, Yulong Chinese vole, Yulongxuen Chinese vole, or Yulongxuen red-backed vole (Eothenomys proditor) is a species of rodent in the family Cricetidae, endemic to Jade Dragon Snow Mountain in the Sichuan–Yunnan border region of China.

References
Musser, G. G. and M. D. Carleton. 2005. Superfamily Muroidea. pp. 894–1531 in Mammal Species of the World: A Taxonomic and Geographic Reference. D. E. Wilson and D. M. Reeder, eds. Johns Hopkins University Press, Baltimore.

Eothenomys
Rodents of China
Endemic fauna of China
Mammals described in 1923
Taxa named by Martin Hinton
Taxonomy articles created by Polbot
wiki
Gran Sport may refer to: Buick Gran Sport, 1970s sports car Maserati Gran Sport, 2000s sports car Alfa Romeo Gran Sport Quattroruote, 1960s car
wiki
Gerbillurus is a genus of rodent in the family Muridae. It contains the following species: Hairy-footed gerbil (Gerbillurus paeba) Namib brush-tailed gerbil (Gerbillurus setzeri) Dune hairy-footed gerbil (Gerbillurus tytonis) Bushy-tailed hairy-footed gerbil (Gerbillurus vallinus) References Rodent genera Taxonomy articles created by Polbot
wiki
The bushy-tailed hairy-footed gerbil (Gerbillurus vallinus) is a species of rodent found in Namibia and South Africa. Its natural habitats are dry savanna, temperate grassland, and hot deserts. It is threatened by habitat loss. References Gerbillurus Rodents of Africa Mammals described in 1918 Taxa named by Oldfield Thomas Taxonomy articles created by Polbot
wiki
A machine tool is a machine for handling or machining metal or other rigid materials, usually by cutting, boring, grinding, shearing, or other forms of deformation. Machine tools employ some sort of tool that does the cutting or shaping. All machine tools have some means of constraining the workpiece and provide a guided movement of the parts of the machine. Thus, the relative movement between the workpiece and the cutting tool (which is called the toolpath) is controlled or constrained by the machine to at least some extent, rather than being entirely "offhand" or "freehand". A machine tool is, in this sense, a power-driven metal-cutting machine that manages the relative motion between the cutting tool and the job needed to change the size and shape of the job material. The precise definition of the term machine tool varies among users, as discussed below. While all machine tools are "machines that help people to make things", not all factory machines are machine tools. Today machine tools are typically powered by means other than human muscle (e.g., electrically, hydraulically, or via line shaft), and are used to make manufactured parts (components) in various ways that include cutting or certain other kinds of deformation. With their inherent precision, machine tools enabled the economical production of interchangeable parts.

Nomenclature and key concepts, interrelated
Many historians of technology consider that true machine tools were born when the toolpath first became guided by the machine itself in some way, at least to some extent, so that direct, freehand human guidance of the toolpath (with hands, feet, or mouth) was no longer the only guidance used in the cutting or forming process. In this view of the definition, the term, arising at a time when all tools up till then had been hand tools, simply provided a label for "tools that were machines instead of hand tools". Early lathes, those prior to the late medieval period, and modern woodworking lathes and potter's wheels may or may not fall under this definition, depending on how one views the headstock spindle itself; but the earliest historical records of a lathe with direct mechanical control of the cutting tool's path are of a screw-cutting lathe dating to about 1483. This lathe "produced screw threads out of wood and employed a true compound slide rest".

The mechanical toolpath guidance grew out of various root concepts:
First is the spindle concept itself, which constrains workpiece or tool movement to rotation around a fixed axis. This ancient concept predates machine tools per se; the earliest lathes and potter's wheels incorporated it for the workpiece, but the movement of the tool itself on these machines was entirely freehand.
The machine slide (tool way), which has many forms, such as dovetail ways, box ways, or cylindrical column ways. Machine slides constrain tool or workpiece movement linearly. If a stop is added, the length of the line can also be accurately controlled. (Machine slides are essentially a subset of linear bearings, although the language used to classify these various machine elements may be defined differently by some users in some contexts, and some elements may be distinguished by contrasting with others.)
Tracing, which involves following the contours of a model or template and transferring the resulting motion to the toolpath.
Cam operation, which is related in principle to tracing but can be a step or two removed from the traced element matching the reproduced element's final shape.
For example, several cams, no one of which directly matches the desired output shape, can actuate a complex toolpath by creating component vectors that add up to a net toolpath. The van der Waals force between like materials is high; freehand manufacture of square plates produces only square, flat, machine-tool-building reference components, accurate to millionths of an inch, but of nearly no variety. The process of feature replication allows the flatness and squareness of a milling machine cross-slide assembly, or the roundness, lack of taper, and squareness of the two axes of a lathe, to be transferred to a machined workpiece with accuracy and precision better than a thousandth of an inch, though not as fine as millionths of an inch. As the fit between sliding parts of a made product, machine, or machine tool approaches this critical thousandth-of-an-inch measurement, lubrication and capillary action combine to prevent the van der Waals force from welding like metals together, extending the lubricated life of sliding parts by a factor of thousands to millions; the disaster of oil depletion in the conventional automotive engine is an accessible demonstration of the need, and in aerospace design, like-to-unlike material pairing is used along with solid lubricants to prevent van der Waals welding from destroying mating surfaces. Given the modulus of elasticity of metals, the range of fit tolerances near one thousandth of an inch correlates to the relevant range of constraint between, at one extreme, permanent assembly of two mating parts and, at the other, a free sliding fit of those same two parts.

Abstractly programmable toolpath guidance began with mechanical solutions, such as in musical box cams and Jacquard looms. The convergence of programmable mechanical control with machine tool toolpath control was delayed many decades, in part because the programmable control methods of musical boxes and looms lacked the rigidity for machine tool toolpaths. Later, electromechanical solutions (such as servos) and soon electronic solutions (including computers) were added, leading to numerical control and computer numerical control. When considering the difference between freehand toolpaths and machine-constrained toolpaths, the concepts of accuracy and precision, efficiency, and productivity become important in understanding why the machine-constrained option adds value.

Matter-additive, matter-preserving, and matter-subtractive manufacturing can proceed in sixteen ways: firstly, the work may be held either in a hand or a clamp; secondly, the tool may be held either in a hand or a clamp; thirdly, the energy can come either from the hand(s) holding the tool and/or the work, or from some external source, including for example a foot treadle operated by the same worker, or a motor, without limitation; and finally, the control can come either from the hand(s) holding the tool and/or the work, or from some other source, including computer numerical control. With two choices for each of four parameters, the enumeration yields sixteen types of manufacturing (2 × 2 × 2 × 2 = 16, as sketched below), where matter-additive might mean painting on canvas as readily as it might mean 3D printing under computer control, matter-preserving might mean forging at the coal fire as readily as stamping license plates, and matter-subtractive might mean casually whittling a pencil point as readily as it might mean precision grinding the final form of a laser-deposited turbine blade.
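A minimal Python sketch of that sixteen-way enumeration: it is simply the Cartesian product of the four binary parameters named above. The labels are paraphrases for illustration, not standard terminology:

```python
from itertools import product

# The four binary parameters described above.
PARAMETERS = [
    ("work held by",   ["hand", "clamp"]),
    ("tool held by",   ["hand", "clamp"]),
    ("energy from",    ["the worker", "an external source"]),
    ("control from",   ["the worker", "an external source (e.g. CNC)"]),
]

combinations = list(product(*(options for _, options in PARAMETERS)))
assert len(combinations) == 2 ** 4  # sixteen types, as stated above

for i, combo in enumerate(combinations, start=1):
    description = "; ".join(
        f"{name} {choice}" for (name, _), choice in zip(PARAMETERS, combo)
    )
    print(f"{i:2d}. {description}")
```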
Humans are generally quite talented in their freehand movements; the drawings, paintings, and sculptures of artists such as Michelangelo or Leonardo da Vinci, and of countless other talented people, show that human freehand toolpath has great potential. The value that machine tools added to these human talents is in the areas of rigidity (constraining the toolpath despite thousands of newtons (pounds) of force fighting against the constraint), accuracy and precision, efficiency, and productivity. With a machine tool, toolpaths that no human muscle could constrain can be constrained; and toolpaths that are technically possible with freehand methods, but would require tremendous time and skill to execute, can instead be executed quickly and easily, even by people with little freehand talent (because the machine takes care of it). The latter aspect of machine tools is often referred to by historians of technology as "building the skill into the tool", in contrast to the toolpath-constraining skill being in the person who wields the tool. As an example, it is physically possible to make interchangeable screws, bolts, and nuts entirely with freehand toolpaths. But it is economically practical to make them only with machine tools. In the 1930s, the U.S. National Bureau of Economic Research (NBER) referenced the definition of a machine tool as "any machine operating by other than hand power which employs a tool to work on metal". The narrowest colloquial sense of the term reserves it only for machines that perform metal cutting—in other words, the many kinds of [conventional] machining and grinding. These processes are a type of deformation that produces swarf. However, economists use a slightly broader sense that also includes metal deformation of other types that squeeze the metal into shape without cutting off swarf, such as rolling, stamping with dies, shearing, swaging, riveting, and others. Thus presses are usually included in the economic definition of machine tools. For example, this is the breadth of definition used by Max Holland in his history of Burgmaster and Houdaille, which is also a history of the machine tool industry in general from the 1940s through the 1980s; he was reflecting the sense of the term used by Houdaille itself and other firms in the industry. Many reports on machine tool export and import and similar economic topics use this broader definition. The colloquial sense implying [conventional] metal cutting is also growing obsolete because of changing technology over the decades. The many more recently developed processes labeled "machining", such as electrical discharge machining, electrochemical machining, electron beam machining, photochemical machining, and ultrasonic machining, or even plasma cutting and water jet cutting, are often performed by machines that could most logically be called machine tools. In addition, some of the newly developed additive manufacturing processes, which are not about cutting away material but rather about adding it, are done by machines that are likely to end up labeled, in some cases, as machine tools. In fact, machine tool builders are already developing machines that include both subtractive and additive manufacturing in one work envelope, and retrofits of existing machines are underway. The natural language use of the terms varies, with subtle connotative boundaries. 
Many speakers resist using the term "machine tool" to refer to woodworking machinery (joiners, table saws, routing stations, and so on), but it is difficult to maintain any true logical dividing line, and therefore many speakers accept a broad definition. It is common to hear machinists refer to their machine tools simply as "machines". Usually the mass noun "machinery" encompasses them, but sometimes it is used to imply only those machines that are being excluded from the definition of "machine tool". This is why the machines in a food-processing plant, such as conveyors, mixers, vessels, dividers, and so on, may be labeled "machinery", while the machines in the factory's tool and die department are instead called "machine tools" in contradistinction. Regarding the 1930s NBER definition quoted above, one could argue that its specificity to metal is obsolete, as it is quite common today for particular lathes, milling machines, and machining centers (definitely machine tools) to work exclusively on plastic-cutting jobs throughout their whole working lifespan. Thus the NBER definition above could be expanded to say "which employs a tool to work on metal or other materials of high hardness". And its specificity to "operating by other than hand power" is also problematic, as machine tools can be powered by people if appropriately set up, such as with a treadle (for a lathe) or a hand lever (for a shaper). Hand-powered shapers are clearly "the 'same thing' as shapers with electric motors except smaller", and it is trivial to power a micro lathe with a hand-cranked belt pulley instead of an electric motor. Thus one can question whether power source is truly a key distinguishing concept; but for economic purposes, the NBER's definition made sense, because most of the commercial value of the existence of machine tools comes about via those that are powered by electricity, hydraulics, and so on. Such are the vagaries of natural language and controlled vocabulary, both of which have their places in the business world.

History
Forerunners of machine tools included bow drills and potter's wheels, which had existed in ancient Egypt prior to 2500 BC, and lathes, known to have existed in multiple regions of Europe since at least 1000 to 500 BC. But it was not until the later Middle Ages and the Age of Enlightenment that the modern concept of a machine tool (a class of machines used as tools in the making of metal parts, incorporating machine-guided toolpath) began to evolve. Clockmakers of the Middle Ages and Renaissance men such as Leonardo da Vinci helped expand humans' technological milieu toward the preconditions for industrial machine tools. During the 18th and 19th centuries, and even in many cases in the 20th, the builders of machine tools tended to be the same people who would then use them to produce the end products (manufactured goods). However, from these roots also evolved an industry of machine tool builders as we define them today, meaning people who specialize in building machine tools for sale to others. Historians of machine tools often focus on a handful of major industries that most spurred machine tool development. In order of historical emergence, they have been firearms (small arms and artillery); clocks; textile machinery; steam engines (stationary, marine, rail, and otherwise) (the story of how Watt's need for an accurate cylinder spurred Wilkinson's boring machine is discussed by Roe); sewing machines; bicycles; automobiles; and aircraft.
Others could be included in this list as well, but they tend to be connected with the root causes already listed. For example, rolling-element bearings are an industry of themselves, but this industry's main drivers of development were the vehicles already listed (trains, bicycles, automobiles, and aircraft), and other industries, such as tractors, farm implements, and tanks, borrowed heavily from those same parent industries. Machine tools filled a need created by textile machinery during the Industrial Revolution in England in the middle to late 1700s. Until that time, machinery was made mostly from wood, often including gearing and shafts. The increase in mechanization required more metal parts, which were usually made of cast iron or wrought iron. Cast iron could be cast in molds for larger parts, such as engine cylinders and gears, but was difficult to work with a file and could not be hammered. Red-hot wrought iron could be hammered into shapes. Room-temperature wrought iron was worked with a file and chisel and could be made into gears and other complex parts; however, hand working lacked precision and was a slow and expensive process.

James Watt was unable to obtain an accurately bored cylinder for his first steam engine, trying for several years until John Wilkinson invented a suitable boring machine in 1774, which bored Boulton & Watt's first commercial engine in 1776. The advance in the accuracy of machine tools can be traced to Henry Maudslay and was refined by Joseph Whitworth. That Maudslay had established the manufacture and use of master plane gages in his shop (Maudslay & Field), located on Westminster Road south of the Thames in London, about 1809 was attested to by James Nasmyth, who was employed by Maudslay in 1829 and documented their use in his autobiography.

The process by which the master plane gages were produced dates back to antiquity but was refined to an unprecedented degree in the Maudslay shop. The process begins with three square plates, each given an identification (e.g., 1, 2 and 3). The first step is to rub plates 1 and 2 together with a marking medium (called bluing today), revealing the high spots, which are removed by hand scraping with a steel scraper until no irregularities are visible. This alone would not necessarily produce true plane surfaces: it could instead produce a mated "ball and socket" fit, one plate slightly concave and the other slightly convex, because such a fit, like two perfect planes, can slide over itself and reveal no high spots. The rubbing and marking are repeated after rotating 2 relative to 1 by 90 degrees to eliminate concave-convex "potato-chip" curvature. Next, plate number 3 is compared and scraped to conform to plate number 1 in the same two trials. In this manner plates number 2 and 3 would be identical. Next, plates number 2 and 3 are checked against each other to determine what condition exists: both plates could be "balls" or "sockets" or "chips", or a combination thereof. These are then scraped until no high spots exist and then compared to plate number 1. Repeating this process of comparing and scraping the three plates could produce plane surfaces accurate to within millionths of an inch (the thickness of the marking medium). The traditional method of producing the surface gages used an abrasive powder rubbed between the plates to remove the high spots, but it was Whitworth who contributed the refinement of replacing the grinding with hand scraping.
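The pairing schedule just described lends itself to a schematic outline. The Python sketch below is illustration only: it prints the sequence of compare-and-scrape operations, with the bluing and scraping themselves left as stand-in print statements, since they are hand work:

```python
PAIRINGS = [(1, 2), (1, 3), (2, 3)]  # the comparison order described above

def compare_and_scrape(reference, plate):
    """Stand-in for the manual step: blue the reference plate, rub the
    other plate on it, scrape that plate's revealed high spots, then
    repeat with the plate rotated 90 degrees to remove the concave-convex
    'potato chip' curvature."""
    for trial in ("as placed", "rotated 90 degrees"):
        print(f"  plates {reference}/{plate}: mark, scrape high spots ({trial})")

# Each generation scrapes plates 2 and 3 to conform to plate 1, then
# cross-checks 2 against 3; in practice the loop runs until the plates
# agree to within the thickness of the marking medium.
for generation in (1, 2, 3):
    print(f"generation {generation}:")
    for a, b in PAIRINGS:
        compare_and_scrape(a, b)
```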
Sometime after 1825, Whitworth went to work for Maudslay, and it was there that Whitworth perfected the hand scraping of master surface plane gages. In his paper presented to the British Association for the Advancement of Science at Glasgow in 1840, Whitworth pointed out the inherent inaccuracy of grinding, due to the uncontrolled and thus unequal distribution of the abrasive material between the plates, which produced uneven removal of material from the plates. With the creation of master plane gages of such high accuracy, all critical components of machine tools (i.e., guiding surfaces such as machine ways) could then be compared against them and scraped to the desired accuracy.

The first machine tools offered for sale (i.e., commercially available) were constructed by Matthew Murray in England around 1800. Others, such as Henry Maudslay, James Nasmyth, and Joseph Whitworth, soon followed the path of expanding their entrepreneurship from manufactured end products and millwright work into the realm of building machine tools for sale. Important early machine tools included the slide rest lathe, screw-cutting lathe, turret lathe, milling machine, pattern-tracing lathe, shaper, and metal planer, which were all in use before 1840. With these machine tools the decades-old objective of producing interchangeable parts was finally realized. An important early example of something now taken for granted was the standardization of screw fasteners such as nuts and bolts. Before about the beginning of the 19th century, these were used in pairs, and even screws of the same machine were generally not interchangeable. Methods were developed to cut screw threads to a greater precision than that of the feed screw in the lathe being used. This led to the bar length standards of the 19th and early 20th centuries.

American production of machine tools was a critical factor in the Allies' victory in World War II. Production of machine tools tripled in the United States during the war. No war was more industrialized than World War II, and it has been written that the war was won as much by machine shops as by machine guns. The production of machine tools is concentrated in about 10 countries worldwide: China, Japan, Germany, Italy, South Korea, Taiwan, Switzerland, the US, Austria, Spain, and a few others. Machine tool innovation continues in several public and private research centers worldwide.

Drive power sources
"All the turning of the iron for the cotton machinery built by Mr. Slater was done with hand chisels or tools in lathes turned by cranks with hand power," recalled David Wilkinson. Machine tools can be powered from a variety of sources. Human and animal power (via cranks, treadles, treadmills, or treadwheels) were used in the past, as was water power (via water wheel); however, following the development of high-pressure steam engines in the mid 19th century, factories increasingly used steam power. Factories also used hydraulic and pneumatic power. Many small workshops continued to use water, human, and animal power until electrification after 1900. Today most machine tools are powered by electricity; hydraulic and pneumatic power are sometimes used, but this is uncommon.

Automatic control
Machine tools can be operated manually or under automatic control. Early machines used flywheels to stabilize their motion and had complex systems of gears and levers to control the machine and the piece being worked on. Soon after World War II, the numerical control (NC) machine was developed.
NC machines used a series of numbers punched on paper tape or punched cards to control their motion. In the 1960s, computers were added to give even more flexibility to the process. Such machines became known as computerized numerical control (CNC) machines. NC and CNC machines could precisely repeat sequences over and over, and could produce much more complex pieces than even the most skilled tool operators. Before long, the machines could automatically change the specific cutting and shaping tools that were being used. For example, a drill machine might contain a magazine with a variety of drill bits for producing holes of various sizes. Previously, machine operators would usually have to either manually change the bit or move the work piece to another station to perform these different operations. The next logical step was to combine several different machine tools together, all under computer control. These are known as machining centers, and they have dramatically changed the way parts are made.

Examples
Examples of machine tools are:
Broaching machine
Drill press
Gear shaper
Hobbing machine
Hone
Lathe
Screw machines
Milling machine
Shear (sheet metal)
Shaper
Bandsaw
Saws
Planer
Stewart platform mills
Grinding machines
Multitasking machines (MTMs): CNC machine tools with many axes that combine turning, milling, grinding, and material handling into one highly automated machine tool

When fabricating or shaping parts, several techniques are used to remove unwanted metal. Among these are:
Electrical discharge machining
Grinding (abrasive cutting)
Multiple edge cutting tools
Single edge cutting tools

Other techniques are used to add desired material. Devices that fabricate components by selective addition of material are called rapid prototyping machines.

Machine tool manufacturing industry
The worldwide market for machine tools was approximately $81 billion in production in 2014, according to a survey by the market research firm Gardner Research. The largest producer of machine tools was China, with $23.8 billion of production, followed by Germany and Japan neck and neck with $12.9 billion and $12.88 billion respectively. South Korea and Italy rounded out the top five producers with revenue of $5.6 billion and $5 billion respectively.

See also

References

Bibliography
A history most specifically of Burgmaster, which specialized in turret drills; but in telling Burgmaster's story, and that of its acquirer Houdaille, Holland provides a history of the machine tool industry in general between World War II and the 1980s that ranks with Noble's coverage of the same era (Noble 1984) as a seminal history. Later republished under the title From Industry to Alchemy: Burgmaster, a Machine Tool Company.
The Moore family firm, the Moore Special Tool Company, independently invented the jig borer (contemporaneously with its Swiss invention), and Moore's monograph is a seminal classic of the principles of machine tool design and construction that yield the highest possible accuracy and precision in machine tools (second only to that of metrological machines). The Moore firm epitomized the art and science of the tool and die maker.
A seminal classic of machine tool history. Extensively cited by later works.
Collection of previously published monographs bound as one volume. A collection of seminal classics of machine tool history.

Further reading
A memoir that contains quite a bit of general history of the industry.
A monograph with a focus on history, economics, and import and export policy.
Original 1976 publication: LCCN 75-046133. One of the most detailed histories of the machine tool industry from the late 18th century through 1932. Not comprehensive in terms of firm names and sales statistics (which Floud focuses on), but extremely detailed in exploring the development and spread of practicable interchangeability, and the thinking behind the intermediate steps. Extensively cited by later works.
One of the most detailed histories of the machine tool industry from World War II through the early 1980s, relayed in the context of the social impact of evolving automation via NC and CNC.
A biography of a machine tool builder that also contains some general history of the industry.
Ryder, Thomas and Son, Machines to Make Machines 1865 to 1968, a centenary booklet (Derby: Bemrose & Sons, 1968).

External links
Milestones in the History of Machine Tools

Industrial machinery
Machines
Machining
Tools
Woodworking
wiki