Eight-bar blues
In music, an eight-bar blues is a typical blues chord progression, "the second most common blues form," "common to folk, rock, and jazz forms of the blues," taking eight bars to the verse.
Examples include "Sitting on Top of the World" and "Key to the Highway", "Trouble in Mind" and "Stagolee". "Heartbreak Hotel", "How Long Blues", "Ain't Nobody's Business", "Cherry Red", "It Hurts Me Too", "Worried Life Blues", and "Get a Haircut" are all eight-bar blues standards.
One variant using this progression is to couple one eight-bar blues melody with a different eight-bar blues bridge to create a blues variant of the standard 32-bar song. "Walking By Myself", "I Want a Little Girl" and "(Romancing) In The Dark" are examples of this form. See also blues ballad.
Eight-bar blues progressions have more variations than the more rigidly defined twelve-bar format. The move to the IV chord usually happens at bar 3 (as opposed to bar 5 in the twelve-bar form); however, "the I chord moving to the V chord right away, in the second measure, is a characteristic of the eight-bar blues."
In the following examples each box represents a 'bar' of music (the specific time signature is not relevant). The chord in the box is played for the full bar. If two chords are in the box they are each played for half a bar, etc. The chords are represented as scale degrees in Roman numeral analysis. Roman numerals are used so the musician may understand the progression of the chords regardless of the key it is played in.
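As a minimal illustration (not part of the original article, and not reproducing its chord grids), the following Python sketch realizes a hypothetical eight-bar progression in several keys, showing why Roman numerals make a progression key-independent:

```python
# Minimal sketch: Roman numerals name scale degrees, so one progression
# can be realized in any key. The progression below is illustrative only.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
DEGREE_SEMITONES = {"I": 0, "II": 2, "III": 4, "IV": 5, "V": 7, "VI": 9, "VII": 11}

def realize(numeral: str, key: str) -> str:
    """Return the chord root for a major-key scale degree in the given key."""
    return NOTES[(NOTES.index(key) + DEGREE_SEMITONES[numeral]) % 12]

eight_bars = ["I", "I", "IV", "IV", "I", "V", "I", "V"]  # hypothetical example
for key in ("C", "G", "A"):
    print(key, [realize(bar, key) for bar in eight_bars])
```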
"Worried Life Blues" (probably the most common eight bar blues progression):
"Heartbreak Hotel" (variation with the I on the first half):
J. B. Lenoir's "Slow Down" and "Key to the Highway" (variation with the V at bar 2):
"Get a Haircut" by George Thorogood (simple progression):
Jimmy Rogers' "Walkin' By Myself" (somewhat unorthodox example of the form):
Howlin' Wolf's version of "Sitting on Top of the World" is actually a nine-bar blues that adds an extra V chord at the end of the progression. The song uses movement between major and dominant 7th chords, and between major and minor fourth chords:
The first four bars of the progression used by Wolf are also used in Nina Simone's 1965 version of "Trouble in Mind", but with a more up-tempo beat than "Sitting on Top of the World":
The progression may be created by dropping the first four bars from the twelve-bar blues, as in the solo section of Bonnie Raitt's "Love Me Like a Man" and Buddy Guy's "Mary Had a Little Lamb":
There are at least a few very successful songs built on more unusual chord progressions as well. For example, "Ain't Nobody's Business", at least as performed by Freddie King, uses a I–III–IV–iv progression across the first four bars. The band Radiohead uses the same four-bar progression for the bulk of the song "Creep".
The same chord progression can also be called a sixteen-bar blues, if each symbol above is taken to occupy two bars rather than one. Examples are "Nine Pound Hammer" and Ray Charles's original instrumental "Sweet Sixteen Bars".
Edward Waring
Edward Waring (c. 1736 – 15 August 1798) was a British mathematician. He entered Magdalene College, Cambridge, as a sizar and became senior wrangler in 1757. He was elected a Fellow of Magdalene and in 1760 Lucasian Professor of Mathematics, holding the chair until his death. He made the assertion known as Waring's problem, without proof, in his "Meditationes Algebraicae". Waring was elected a Fellow of the Royal Society in 1763 and awarded the Copley Medal in 1784.
Waring was the eldest son of John and Elizabeth Waring, a prosperous farming couple. He received his early education at Shrewsbury School under a Mr Hotchkin and was admitted as a sizar at Magdalene College, Cambridge, on 24 March 1753, being also a Millington exhibitioner. His extraordinary talent for mathematics was recognised from his early years in Cambridge. In 1757 he graduated BA as senior wrangler and on 24 April 1758 was elected to a fellowship at Magdalene. He belonged to the Hyson Club, whose members included William Paley.
At the end of 1759 Waring published the first chapter of "Miscellanea Analytica". On 28 January the next year he was appointed Lucasian professor of mathematics, one of the highest positions in Cambridge. William Samuel Powell, then tutor in St John's College, Cambridge, opposed Waring's election and instead supported the candidacy of William Ludlam. In the polemic with Powell, Waring was backed by John Wilson. Waring was in fact very young and did not yet hold the MA necessary to qualify for the Lucasian chair, but this was granted to him in 1760 by royal mandate. In 1762 he published the full "Miscellanea Analytica", mainly devoted to the theory of numbers and algebraic equations. In 1763 he was elected to the Royal Society. He was awarded its Copley Medal in 1784 but withdrew from the society in 1795, after he had reached sixty, 'on account of [his] age'. Waring was also a member of the academies of sciences of Göttingen and Bologna. In 1767 he took an MD degree, but his activity in medicine was quite limited. He carried out dissections with Richard Watson, professor of chemistry and later bishop of Llandaff. From about 1770 he was physician at Addenbrooke's Hospital at Cambridge, and he also practised at St Ives, Huntingdonshire, where he lived for some years after 1767. His career as a physician was not very successful since he was seriously short-sighted and a very shy man.
Waring had a younger brother, Humphrey, who obtained a fellowship at Magdalene in 1775. In 1776 Waring married Mary Oswell, sister of a draper in Shrewsbury; they moved to Shrewsbury and then retired to Plealey, 8 miles out of the town, where Waring owned an estate of 215 acres in 1797.
Waring wrote a number of papers in the "Philosophical Transactions of the Royal Society", dealing with the resolution of algebraic equations, number theory, series, approximation of roots, interpolation, the geometry of conic sections, and dynamics. The "Meditationes Algebraicae" (1770), where many of the results published in "Miscellanea Analytica" were reworked and expanded, was described by Joseph-Louis Lagrange as 'a work full of excellent researches'. In this work Waring published many theorems concerning the solution of algebraic equations which attracted the attention of continental mathematicians, but his best results are in number theory. Included in this work was the so-called Goldbach conjecture (every even integer is the sum of two primes), and also the following conjecture: every odd integer is a prime or the sum of three primes. Lagrange had proved that every positive integer is the sum of not more than four squares; Waring suggested that every positive integer is either a cube or the sum of not more than nine cubes. He also advanced the hypothesis that every positive integer is either a biquadrate (fourth power) or the sum of not more than nineteen biquadrates. These hypotheses form what is known as Waring's problem. He also published a theorem, due to his friend John Wilson, concerning prime numbers; it was later proven rigorously by Lagrange.
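In modern notation, these conjectures, now known collectively as Waring's problem and proved in general by David Hilbert in 1909, read as follows (a standard formulation, not Waring's own wording):

```latex
% Waring's problem: for every integer k >= 2 there exists a least g(k) such
% that every natural number n is a sum of at most g(k) k-th powers:
\[
  \forall n \in \mathbb{N}\;\; \exists\, x_1, \dots, x_{g(k)} \in \mathbb{Z}_{\ge 0}:
  \qquad n = x_1^{k} + x_2^{k} + \cdots + x_{g(k)}^{k}.
\]
% The cases mentioned above: g(2) = 4 (Lagrange's four-square theorem),
% and Waring's conjectured values g(3) = 9 and g(4) = 19.
```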
In "Proprietates Algebraicarum Curvarum" (1772) Waring reissued in a much revised form the first four chapters of the second part of "Miscellanea Analytica". He devoted himself to the classification of higher plane curves, improving results obtained by Isaac Newton, James Stirling, Leonhard Euler, and Gabriel Cramer. In 1794 he published a few copies of a philosophical work entitled "An Essay on the Principles of Human Knowledge", which were circulated among his friends.
Waring's mathematical style is highly analytical. In fact he criticised those British mathematicians who adhered too strictly to geometry. It is indicative that he was one of the subscribers of John Landen's "Residual Analysis" (1764), one of the works in which the tradition of the Newtonian fluxional calculus was more severely criticised. In the preface of "Meditationes Analyticae" Waring showed a good knowledge of continental mathematicians such as Alexis Clairaut, Jean le Rond d'Alembert, and Euler. He lamented the fact that in Great Britain mathematics was cultivated with less interest than on the continent, and clearly desired to be considered as highly as the great names in continental mathematics—there is no doubt that he was reading their work at a level never reached by any other eighteenth-century British mathematician. Most notably, at the end of chapter three of "Meditationes Analyticae" Waring presents some partial fluxional equations (partial differential equations in Leibnizian terminology); such equations are a mathematical instrument of great importance in the study of continuous bodies which was almost completely neglected in Britain before Waring's researches. One of the most interesting results in "Meditationes Analyticae" is a test for the convergence of series generally attributed to d'Alembert (the 'ratio test'). The theory of convergence of series (the object of which is to establish when the summation of an infinite number of terms can be said to have a finite 'sum') was not much advanced in the eighteenth century.
Waring's work was known both in Britain and on the continent, but it is difficult to evaluate his impact on the development of mathematics. His work on algebraic equations contained in "Miscellanea Analytica" was translated into Italian by Vincenzo Riccati in 1770. Waring's style is not systematic and his exposition is often obscure. It seems that he never lectured and did not habitually correspond with other mathematicians. After Jérôme Lalande in 1796 observed, in "Notice sur la vie de Condorcet", that in 1764 there was not a single first-rate analyst in England, Waring's reply, published after his death as 'Original letter of Dr Waring' in the "Monthly Magazine", stated that he had given 'somewhere between three and four hundred new propositions of one kind or another'.
During his last years he sank into a deep religious melancholy, and a violent cold caused his death, in Plealey, on 15 August 1798. He was buried in the churchyard at Fitz, Shropshire.
Eden Phillpotts
Eden Phillpotts (4 November 1862 – 29 December 1960) was an English author, poet and dramatist. He was born in Mount Abu, India, was educated in Plymouth, Devon, and worked as an insurance officer for 10 years before studying for the stage and eventually becoming a writer.
Eden Phillpotts was a great-nephew of Henry Phillpotts, Bishop of Exeter. His father Henry Phillpotts was a son of the bishop's younger brother Thomas Phillpotts. James Surtees Phillpotts, the reforming headmaster of Bedford School, was his second cousin.
Eden Phillpotts was born on 4 November 1862 at Mount Abu in Rajasthan. His father Henry was an officer in the Indian Army, while his mother Adelaide was the daughter of an Indian Civil Service officer posted in Madras, George Jenkins Waters.
Henry Phillpotts died in 1865, leaving Adelaide a widow at the age of 21. With her three small sons, of whom Eden was the eldest, she returned to England and settled in Plymouth.
Phillpotts was educated at Mannamead School in Plymouth. At school he showed no signs of a literary bent. In 1879, aged 17, he left home and went to London to earn his living. He found a job as a clerk with the Sun Fire Office.
Phillpotts' ambition was to be an actor and he attended evening classes at a drama school for two years. He came to the conclusion that he would never make a name as an actor but might have success as a writer. In his spare time out of office hours he proceeded to create a stream of small works which he was able to sell. In due course he left the insurance company to concentrate on his writing, while also working part-time as assistant editor for the weekly Black and White Magazine.
Eden Phillpotts maintained a steady output of three or four books a year for the next half century. He produced poetry, short stories, novels, plays and mystery tales. Many of his novels were about rural Devon life and some of his plays were distinguished by their effective use of regional dialect.
Eden Phillpotts died at his home in Broadclyst near Exeter, Devon, on 29 December 1960.
Phillpotts was for many years the President of the Dartmoor Preservation Association and cared passionately about the conservation of Dartmoor. He was an agnostic and a supporter of the Rationalist Press Association.
Phillpotts was a friend of Agatha Christie, who was an admirer of his work and a regular visitor to his home. In her autobiography she expressed gratitude for his early advice on fiction writing and quoted some of it. Jorge Luis Borges was another Phillpotts admirer. Borges mentioned him numerous times, wrote at least two reviews of his novels, and included him in his "Personal Library", a collection of works selected to reflect his personal literary preferences.
Phillpotts allegedly sexually abused his daughter Adelaide. In a 1976 interview for a book about her father, Adelaide described an incestuous "relationship" with him that she said lasted from the age of five or six until her early thirties, when he remarried. When she herself finally married, at the age of 55, her father never forgave her and never communicated with her again.
Phillpotts wrote a great many books with a Dartmoor setting. One of his novels, "Widecombe Fair", inspired by an annual fair at the village of Widecombe-in-the-Moor, provided the scenario for his comic play "The Farmer's Wife" (1916). It went on to become a 1928 silent film of the same name, directed by Alfred Hitchcock. It was followed by a 1941 remake, directed by Norman Lee and Leslie Arliss. It became a BBC TV drama in 1955, directed by Owen Reed, with Jan Stewer playing Churdles Ash. The BBC had broadcast the play in 1934.
He co-wrote several plays with his daughter Adelaide Phillpotts, including "The Farmer's Wife" and "Yellow Sands" (1926); she later claimed their relationship was incestuous. Eden is best known as the author of many novels, plays and poems about Dartmoor. His Dartmoor cycle of 18 novels and two volumes of short stories still has many avid readers, despite the fact that many titles are out of print.
Phillpotts also wrote a series of novels, each set against the background of a different trade or industry. Titles include "Brunel's Tower" (a pottery) and "Storm in a Teacup" (hand-papermaking). Among his other works is "The Grey Room", the plot of which is centered on a haunted room in an English manor house. He also wrote a number of other mystery novels, both under his own name and under the pseudonym Harrington Hext. These include "The Thing at Their Heels", "The Red Redmaynes", "The Monster", "The Clue from the Stars", and "The Captain's Curio". "The Human Boy" was a collection of schoolboy stories in the same genre as Rudyard Kipling's "Stalky & Co.", though different in mood and style. Late in his long writing career he wrote a few books of interest to science fiction and fantasy readers, the most noteworthy being "Saurus", which involves an alien reptilian observing human life.
Eric Partridge praised the immediacy and impact of his dialect writing.
Novels
Short Fiction Books
Poetry
Plays
Nonfiction
Earned value management
Earned value management (EVM), earned value project management, or earned value performance management (EVPM) is a project management technique for measuring project performance and progress in an objective manner.
Earned value management combines measurements of the project management triangle (scope, time, and costs) in a single integrated system, enabling accurate forecasts of project performance problems, which is an important contribution to project management.
Early EVM research showed that the areas of planning and control are significantly impacted by its use; and similarly, using the methodology improves both scope definition as well as the analysis of overall project performance. More recent research studies have shown that the principles of EVM are positive predictors of project success. Popularity of EVM has grown in recent years beyond government contracting, a sector in which its importance continues to rise (e.g. recent new DFARS rules), in part because EVM can also surface in and help substantiate contract disputes.
Essential features of any EVM implementation include: a project plan that identifies the work to be accomplished; a valuation of that planned work, called planned value (PV) or budgeted cost of work scheduled (BCWS); and pre-defined "earning rules" (also called metrics) to quantify the accomplishment of work, called earned value (EV) or budgeted cost of work performed (BCWP).
EVM implementations for large or complex projects include many more features, such as indicators and forecasts of cost performance (over budget or under budget) and schedule performance (behind schedule or ahead of schedule). However, the most basic requirement of an EVM system is that it quantifies progress using PV and EV.
Project A has been approved for a duration of one year with a budget of X. The plan calls for spending 50% of the approved budget, and for 50% of the work to be complete, in the first six months. If, six months after the start of the project, the project manager reports that 50% of the budget has been spent, one might initially think that the project is perfectly on plan. In reality, however, the information provided is not sufficient for such a conclusion. The project may have spent 50% of the budget while finishing only 25% of the work, which would mean the project is doing poorly; or it may have spent 50% of the budget while completing 75% of the work, which would mean it is doing better than planned. EVM is meant to address such issues.
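A minimal sketch of this scenario in Python (the budget figure is hypothetical, and CPI/SPI are the standard EVM indices defined later in this article):

```python
# Why percent of budget spent alone says nothing about project health.
# Six months in, planned value (PV) and actual cost (AC) are both 50% of
# budget X, but the verdict depends entirely on earned value (EV).
BUDGET = 100_000    # the approved budget "X" (hypothetical figure)
pv = 0.50 * BUDGET  # work planned to be done by now
ac = 0.50 * BUDGET  # money actually spent by now

for pct_complete in (0.25, 0.50, 0.75):
    ev = pct_complete * BUDGET  # value of work actually done
    cpi = ev / ac               # cost performance index
    spi = ev / pv               # schedule performance index
    print(f"{pct_complete:.0%} complete -> CPI={cpi:.2f}, SPI={spi:.2f}")
# CPI/SPI below 1 mean over budget / behind schedule; above 1, the opposite.
```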
EVM emerged as a financial analysis specialty in United States Government programs in the 1960s, but it has since become a significant branch of project management and cost engineering. Project management research investigating the contribution of EVM to project success suggests a moderately strong positive relationship.
Implementations of EVM can be scaled to fit projects of all sizes and complexities.
The genesis of EVM occurred in industrial manufacturing at the turn of the 20th century, based largely on the principle of "earned time" popularized by Frank and Lillian Gilbreth, but the concept took root in the United States Department of Defense in the 1960s. The original concept was called PERT/COST, but it was considered overly burdensome (not very adaptable) by contractors who were mandated to use it, and many variations of it began to proliferate among various procurement programs. In 1967, the DoD established a criterion-based approach, using a set of 35 criteria, called the Cost/Schedule Control Systems Criteria (C/SCSC). In the 1970s and early 1980s, a subculture of C/SCSC analysis grew, but the technique was often ignored or even actively resisted by project managers in both government and industry. C/SCSC was often considered a financial control tool that could be delegated to analytical specialists.
In 1979, EVM was introduced to the architecture and engineering industry in a "Public Works Magazine" article by David Burstein, a project manager with a national engineering firm. This technique has been taught ever since as part of the project management training program presented by PSMJ Resources, an international training and consulting firm that specializes in the engineering and architecture industry.
In the late 1980s and early 1990s, EVM emerged as a project management methodology to be understood and used by managers and executives, not just EVM specialists. In 1989, EVM leadership was elevated to the Undersecretary of Defense for Acquisition, thus making EVM an element of program management and procurement. In 1991, Secretary of Defense Dick Cheney canceled the Navy A-12 Avenger II Program because of performance problems detected by EVM. This demonstrated conclusively that EVM mattered to secretary-level leadership. In the 1990s, many U.S. Government regulations were eliminated or streamlined. However, EVM not only survived the acquisition reform movement, but became strongly associated with the acquisition reform movement itself. Most notably, from 1995 to 1998, ownership of EVM criteria (reduced to 32) was transferred to industry by adoption of ANSI EIA 748-A standard.
The use of EVM expanded beyond the U.S. Department of Defense. It was adopted by the National Aeronautics and Space Administration, United States Department of Energy and other technology-related agencies. Many industrialized nations also began to utilize EVM in their own procurement programs.
An overview of EVM was included in the Project Management Institute's first PMBOK Guide in 1987 and was expanded in subsequent editions. In the most recent edition of the PMBOK guide, EVM is listed among the general tools and techniques for processes to control project costs.
The construction industry was an early commercial adopter of EVM. Closer integration of EVM with the practice of project management accelerated in the 1990s. In 1999, the Performance Management Association merged with the Project Management Institute (PMI) to become PMI's first college, the College of Performance Management. The United States Office of Management and Budget began to mandate the use of EVM across all government agencies, and, for the first time, for certain internally managed projects (not just for contractors). EVM also received greater attention by publicly traded companies in response to the Sarbanes-Oxley Act of 2002.
In Australia EVM has been codified as standards AS 4817-2003 and AS 4817-2006.
It is helpful to see an example of project tracking that does not include earned value performance management. Consider a project that has been planned in detail, including a time-phased spend plan for all elements of work. Figure 1 shows the cumulative budget (cost) for this project as a function of time (the blue line, labeled PV). It also shows the cumulative actual cost of the project (red line, labeled AC) through week 8. To those unfamiliar with EVM, it might appear that this project was over budget through week 4 and then under budget from week 6 through week 8. However, what is missing from this chart is any understanding of how much work has been accomplished during the project. If the project was actually completed at week 8, then the project would actually be well under budget and well ahead of schedule. If, on the other hand, the project is only 10% complete at week 8, the project is significantly over budget and behind schedule. A method is needed to measure technical performance objectively and quantitatively, and that is what EVM accomplishes.
Consider the same project, except this time the project plan includes pre-defined methods of quantifying the accomplishment of work. At the end of each week, the project manager identifies every detailed element of work that has been completed, and sums the EV for each of these completed elements. Earned value may be accumulated monthly, weekly, or as progress is made.
\[ \mathrm{EV} = \sum_{i} \left(\%\,\mathrm{complete}\right)_i \times \mathrm{PV}_i \]
EV is calculated by multiplying the percent complete of each task (completed or in progress) by its planned value.
Figure 2 shows the EV curve (in green) along with the PV curve from Figure 1. The chart indicates that technical performance (i.e. progress) started more rapidly than planned, but slowed significantly and fell behind schedule at week 7 and 8. This chart illustrates the schedule performance aspect of EVM. It is complementary to critical path or critical chain schedule management.
Figure 3 shows the same EV curve (green) with the actual cost data from Figure 1 (in red). It can be seen that the project was actually under budget, relative to the amount of work accomplished, since the start of the project. This is a much better conclusion than might be derived from Figure 1.
Figure 4 shows all three curves together – which is a typical EVM line chart. The best way to read these three-line charts is to identify the EV curve first, then compare it to PV (for schedule performance) and AC (for cost performance). It can be seen from this illustration that a true understanding of cost performance and schedule performance "relies first on measuring technical performance objectively." This is the "foundational principle" of EVM.
The "foundational principle" of EVM, mentioned above, does not depend on the size or complexity of the project. However, the "implementations" of EVM can vary significantly depending on the circumstances. In many cases, organizations establish an all-or-nothing threshold; projects above the threshold require a full-featured (complex) EVM system and projects below the threshold are exempted. Another approach that is gaining favor is to scale EVM implementation according to the project at hand and skill level of the project team.
There are many more small and simple projects than there are large and complex ones, yet historically only the largest and most complex have enjoyed the benefits of EVM. Still, lightweight implementations of EVM are achievable by any person who has basic spreadsheet skills. In fact, spreadsheet implementations are an excellent way to learn basic EVM skills.
The "first step" is to define the work. This is typically done in a hierarchical arrangement called a work breakdown structure (WBS) although the simplest projects may use a simple list of tasks. In either case, it is important that the WBS or list be comprehensive. It is also important that the elements be mutually exclusive, so that work is easily categorized in one and only one element of work. The most detailed elements of a WBS hierarchy (or the items in a list) are called activities (or tasks).
The "second step" is to assign a value, called planned value (PV), to each activity. For large projects, PV is almost always an allocation of the total project budget, and may be in units of currency (e.g. dollar, euro or naira) or in labor hours, or both. However, in very simple projects, each activity may be assigned a weighted “point value" which might not be a budget number. Assigning weighted values and achieving consensus on all PV quantities yields an important benefit of EVM, because it exposes misunderstandings and miscommunications about the scope of the project, and resolving these differences should always occur as early as possible. Some terminal elements can not be known (planned) in great detail in advance, and that is expected, because they can be further refined at a later time.
The "third step" is to define "earning rules" for each activity. The simplest method is to apply just one earning rule, such as the 0/100 rule, to all activities. Using the 0/100 rule, no credit is earned for an element of work until it is finished. A related rule is called the 50/50 rule, which means 50% credit is earned when an element of work is started, and the remaining 50% is earned upon completion. Other fixed earning rules such as a 25/75 rule or 20/80 rule are gaining favor, because they assign more weight to finishing work than for starting it, but they also motivate the project team to identify when an element of work is started, which can improve awareness of work-in-progress. These simple earning rules work well for small or simple projects because generally each activity tends to be fairly short in duration.
These initial three steps define the minimal amount of planning for simplified EVM. The "final step" is to execute the project according to the plan and measure progress. When activities are started or finished, EV is accumulated according to the earning rule. This is typically done at regular intervals (e.g. weekly or monthly), but there is no reason why EV cannot be accumulated in near real-time, when work elements are started/completed. In fact, waiting to update EV only once per month (simply because that is when cost data are available) only detracts from a primary benefit of using EVM, which is to create a technical performance scoreboard for the project team.
In a lightweight implementation such as described here, the project manager has not accumulated cost nor defined a detailed project schedule network (i.e. using a critical path or critical chain methodology). While such omissions are inappropriate for managing large projects, they are a common and reasonable occurrence in many very small or simple projects. Any project can benefit from using EV alone as a real-time score of progress. One useful result of this very simple approach (without schedule models and actual cost accumulation) is to compare EV curves of similar projects, as illustrated in Figure 5. In this example, the progress of three residential construction projects is compared by aligning the starting dates. If these three home construction projects were measured with the same PV valuations, the "relative" schedule performance of the projects can be easily compared.
The actual critical path is ultimately the determining factor of every project's duration. Because earned value schedule metrics take no account of critical path data, big budget activities that are not on the critical path have the potential to dwarf the impact of performing small budget critical path activities. This can lead to "gaming" the SV and Schedule Performance Index or SPI metrics by ignoring critical path activities in favor of big-budget activities that may have much float. This can sometimes even lead to performing activities out-of-sequence just to improve the schedule tracking metrics, which can cause major problems with quality.
A simple two-step process has been suggested to fix this: first, value activities for schedule metrics on their as-late-as-possible dates from the backward pass of the critical path algorithm, so that float carries no early credit; and second, allow earned value credit toward schedule metrics no earlier than the reporting period in which the activity is scheduled, unless it is on the project's current critical path.
In this way, the distorting aspect of float would be eliminated. There would be no benefit to performing a non-critical activity with much float until it is due in proper sequence. Also, an activity would not generate a negative schedule variance until it had used up its float. Under this method, one way of gaming the schedule metrics would be eliminated. The only way of generating a positive schedule variance (or SPI over 1.0) would be by completing work on the current critical path ahead of schedule, which is in fact the only way for a project to get ahead of schedule.
In addition to managing technical and schedule performance, large and complex projects require that cost performance be monitored and reviewed at regular intervals. To measure cost performance, planned value (or BCWS, budgeted cost of work scheduled) and earned value (or BCWP, budgeted cost of work performed) must be in units of currency (the same units in which actual costs are measured).
In large implementations, the planned value curve is commonly called a Performance Measurement Baseline (PMB) and may be arranged in control accounts, summary-level planning packages, planning packages and work packages.
In large projects, establishing control accounts is the primary method of delegating responsibility and authority to various parts of the performing organization. Control accounts are cells of a responsibility assignment (RACI) matrix, which is the intersection of the project WBS and the organizational breakdown structure (OBS). Control accounts are assigned to Control Account Managers (CAMs).
Large projects require more elaborate processes for controlling baseline revisions, more thorough integration with subcontractor EVM systems, and more elaborate management of procured materials.
In the United States, the primary standard for full-featured EVM systems is the ANSI/EIA-748A standard, published in May 1998 and reaffirmed in August 2002. The standard defines 32 criteria for full-featured EVM system compliance. As of the year 2007, a draft of ANSI/EIA-748B, a revision to the original is available from ANSI. Other countries have established similar standards.
In addition to using BCWS and BCWP, implementations prior to 1998 often used the term actual cost of work performed (ACWP) instead of AC. Additional acronyms and formulas include: budget at completion (BAC), the total PV at project completion; cost variance, CV = EV − AC; schedule variance, SV = EV − PV; cost performance index, CPI = EV / AC; schedule performance index, SPI = EV / PV; estimate at completion, commonly computed as EAC = BAC / CPI; and estimate to complete, ETC = EAC − AC.
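These derived metrics follow mechanically from PV, EV, and AC; a minimal Python sketch (field names and sample numbers are illustrative, not from any standard) is:

```python
# Minimal sketch of the standard derived EVM metrics listed above.
from dataclasses import dataclass

@dataclass
class EvmSnapshot:
    pv: float   # planned value (BCWS)
    ev: float   # earned value (BCWP)
    ac: float   # actual cost (ACWP / AC)
    bac: float  # budget at completion

    def cv(self) -> float:   # cost variance
        return self.ev - self.ac

    def sv(self) -> float:   # schedule variance
        return self.ev - self.pv

    def cpi(self) -> float:  # cost performance index
        return self.ev / self.ac

    def spi(self) -> float:  # schedule performance index
        return self.ev / self.pv

    def eac(self) -> float:  # estimate at completion, assuming CPI holds
        return self.bac / self.cpi()

snap = EvmSnapshot(pv=50.0, ev=40.0, ac=45.0, bac=100.0)
print(snap.cv(), snap.sv(), round(snap.cpi(), 2), round(snap.eac(), 1))
# -> -5.0 -10.0 0.89 112.5
```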
Proponents of EVM note a number of issues with implementing it, and further limitations may be inherent to the concept itself.
Because EVM requires quantification of a project plan, it is often perceived to be inapplicable to discovery-driven or Agile software development projects. For example, it may be impossible to plan certain research projects far in advance, because research itself uncovers some opportunities (research paths) and actively eliminates others. However, another school of thought holds that all work can be planned, even if in weekly timeboxes or other short increments.
Traditional EVM is not intended for non-discrete (continuous) effort. In traditional EVM standards, non-discrete effort is called "level of effort" (LOE). If a project plan contains a significant portion of LOE, and the LOE is intermixed with discrete effort, EVM results will be contaminated. This is another area of EVM research.
Traditional definitions of EVM typically assume that project accounting and project network schedule management are prerequisites to achieving any benefit from EVM. Many small projects don't satisfy either of these prerequisites, but they too can benefit from EVM, as described for simple implementations, above. Other projects can be planned with a project network, but do not have access to true and timely actual cost data. In practice, the collection of true and timely actual cost data can be the most difficult aspect of EVM. Such projects can benefit from EVM, as described for intermediate implementations, above, and Earned Schedule.
As a means of overcoming objections to EVM's lack of connection to qualitative performance issues, the Naval Air Systems Command (NAVAIR) PEO(A) organization initiated a project in the late 1990s to integrate true technical achievement into EVM projections by utilizing risk profiles. These risk profiles anticipate opportunities that may be revealed and possibly be exploited as development and testing proceeds. The published research resulted in a Technical Performance Management (TPM) methodology and software application that is still used by many DoD agencies in informing EVM estimates with technical achievement.
The research was peer-reviewed and was the recipient of the Defense Acquisition University Acquisition Research Symposium 1997 Acker Award for excellence in the exchange of information in the field of acquisition research.
There is the difficulty inherent for any periodic monitoring of synchronizing data timing: actual deliveries, actual invoicing, and the date the EVM analysis is done are all independent, so that some items have arrived but their invoicing has not and by the time analysis is delivered the data will likely be weeks behind events. This may limit EVM to a less tactical or less definitive role where use is combined with other forms to explain why or add recent news and manage future expectations.
There is a measurement limitation for how precisely EVM can be used, stemming from classic conflict between accuracy and precision, as the mathematics can calculate deceptively far beyond the precision of the measurements of data and the approximation that is the plan estimation. The limitation on estimation is commonly understood (such as the ninety-ninety rule in software) but is not visible in any margin of error. The limitations on measurement are largely a form of digitization error as EVM measurements ultimately can be no finer than by item, which may be the Work Breakdown Structure terminal element size, to the scale of reporting period, typically end summary of a month, and by the means of delivery measure. (The delivery measure may be actual deliveries, may include estimates of partial work done at the end of month subject to estimation limits, and typically does not include QC check or risk offsets.)
As traditionally implemented, earned value management deals with, and is based in, budget and cost. It has no relationship to the investment value or benefit for which the project has been funded and undertaken. Yet due to the use of the word “value” in the name, this fact is often misunderstood. However, earned value metrics can be used to compute the cost and schedule inputs to Devaux's Index of Project Performance (the DIPP), which integrates schedule and cost performance with the planned investment value of the project's scope across the project management triangle.
Electron microscope
An electron microscope is a microscope that uses a beam of accelerated electrons as a source of illumination. As the wavelength of an electron can be up to 100,000 times shorter than that of visible light photons, electron microscopes have a higher resolving power than light microscopes and can reveal the structure of smaller objects. A scanning transmission electron microscope has achieved better than 50 pm resolution in annular dark-field imaging mode and magnifications of up to about 10,000,000× whereas most light microscopes are limited by diffraction to about 200 nm resolution and useful magnifications below 2000×.
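The resolving-power comparison follows from the electron's de Broglie wavelength; the standard expression for an electron accelerated through a potential U, with its relativistic correction, is (not stated explicitly in this article):

```latex
% De Broglie wavelength of an electron accelerated through potential U:
\[
  \lambda = \frac{h}{\sqrt{\,2 m_0 e U \left(1 + \dfrac{eU}{2 m_0 c^{2}}\right)}}
\]
% For U = 100 kV this gives lambda of about 3.7 pm, roughly 150,000 times
% shorter than 550 nm green light, consistent with the figure quoted above.
```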
Electron microscopes use shaped magnetic fields to form electron optical lens systems that are analogous to the glass lenses of an optical light microscope.
Electron microscopes are used to investigate the ultrastructure of a wide range of biological and inorganic specimens including microorganisms, cells, large molecules, biopsy samples, metals, and crystals. Industrially, electron microscopes are often used for quality control and failure analysis. Modern electron microscopes produce electron micrographs using specialized digital cameras and frame grabbers to capture the images.
In 1926 Hans Busch developed the electromagnetic lens.
According to Dennis Gabor, the physicist Leó Szilárd tried in 1928 to convince him to build an electron microscope, for which he had filed a patent. The first prototype electron microscope, capable of four-hundred-power magnification, was developed in 1931 by the physicist Ernst Ruska and the electrical engineer Max Knoll. The apparatus was the first practical demonstration of the principles of electron microscopy. In May of the same year, Reinhold Rudenberg, the scientific director of Siemens-Schuckertwerke, obtained a patent for an electron microscope. In 1932, Ernst Lubcke of Siemens & Halske built and obtained images from a prototype electron microscope, applying the concepts described in Rudenberg's patent.
In the following year, 1933, Ruska built the first electron microscope that exceeded the resolution attainable with an optical (light) microscope. Four years later, in 1937, Siemens financed the work of Ernst Ruska and Bodo von Borries, and employed Helmut Ruska, Ernst's brother, to develop applications for the microscope, especially with biological specimens. Also in 1937, Manfred von Ardenne pioneered the scanning electron microscope. Siemens produced the first commercial electron microscope in 1938. The first North American electron microscope was constructed in 1938, at the University of Toronto, by Eli Franklin Burton and students Cecil Hall, James Hillier, and Albert Prebus. Siemens produced a transmission electron microscope (TEM) in 1939. Although current transmission electron microscopes are capable of two million-power magnification, as scientific instruments, they remain based upon Ruska's prototype.
The original form of the electron microscope, the transmission electron microscope (TEM), uses a high voltage electron beam to illuminate the specimen and create an image. The electron beam is produced by an electron gun, commonly fitted with a tungsten filament cathode as the electron source. The electron beam is accelerated by an anode typically at +100 kV (40 to 400 kV) with respect to the cathode, focused by electrostatic and electromagnetic lenses, and transmitted through the specimen that is in part transparent to electrons and in part scatters them out of the beam. When it emerges from the specimen, the electron beam carries information about the structure of the specimen that is magnified by the objective lens system of the microscope. The spatial variation in this information (the "image") may be viewed by projecting the magnified electron image onto a fluorescent viewing screen coated with a phosphor or scintillator material such as zinc sulfide. Alternatively, the image can be photographically recorded by exposing a photographic film or plate directly to the electron beam, or a high-resolution phosphor may be coupled by means of a lens optical system or a fibre optic light-guide to the sensor of a digital camera. The image detected by the digital camera may be displayed on a monitor or computer.
The resolution of TEMs is limited primarily by spherical aberration, but a new generation of hardware correctors can reduce spherical aberration to increase the resolution in high-resolution transmission electron microscopy (HRTEM) to below 0.5 angstrom (50 picometres), enabling magnifications above 50 million times. The ability of HRTEM to determine the positions of atoms within materials is useful for nano-technologies research and development.
Transmission electron microscopes are often used in electron diffraction mode. The advantages of electron diffraction over X-ray crystallography are that the specimen need not be a single crystal or even a polycrystalline powder, and also that the Fourier transform reconstruction of the object's magnified structure occurs physically and thus avoids the need for solving the phase problem faced by the X-ray crystallographers after obtaining their X-ray diffraction patterns.
One major disadvantage of the transmission electron microscope is the need for extremely thin sections of the specimens, typically about 100 nanometers. Creating these thin sections for biological and materials specimens is technically very challenging. Semiconductor thin sections can be made using a focused ion beam. Biological tissue specimens are chemically fixed, dehydrated and embedded in a polymer resin to stabilize them sufficiently to allow ultrathin sectioning. Sections of biological specimens, organic polymers, and similar materials may require staining with heavy atom labels in order to achieve the required image contrast.
One application of TEM is serial-section electron microscopy (ssEM), for example in analyzing the connectivity in volumetric samples of brain tissue by imaging many thin sections in sequence.
The SEM produces images by probing the specimen with a focused electron beam that is scanned across a rectangular area of the specimen (raster scanning). When the electron beam interacts with the specimen, it loses energy by a variety of mechanisms. The lost energy is converted into alternative forms such as heat, emission of low-energy secondary electrons and high-energy backscattered electrons, light emission (cathodoluminescence) or X-ray emission, all of which provide signals carrying information about the properties of the specimen surface, such as its topography and composition. The image displayed by an SEM maps the varying intensity of any of these signals into the image in a position corresponding to the position of the beam on the specimen when the signal was generated. In the SEM image of an ant shown below and to the right, the image was constructed from signals produced by a secondary electron detector, the normal or conventional imaging mode in most SEMs.
Generally, the image resolution of an SEM is lower than that of a TEM. However, because the SEM images the surface of a sample rather than its interior, the electrons do not have to travel through the sample. This reduces the need for extensive sample preparation to thin the specimen to electron transparency. The SEM is able to image bulk samples that can fit on its stage and still be maneuvered, with a height up to the working distance being used, often 4 millimeters for high-resolution images. The SEM also has a great depth of field, and so can produce images that are good representations of the three-dimensional surface shape of the sample. Another advantage of SEMs comes with environmental scanning electron microscopes (ESEM) that can produce images of good quality and resolution with hydrated samples or in low, rather than high, vacuum or under chamber gases. This facilitates imaging unfixed biological samples that are unstable in the high vacuum of conventional electron microscopes.
In the reflection electron microscope (REM) as in the TEM, an electron beam is incident on a surface but instead of using the transmission (TEM) or secondary electrons (SEM), the reflected beam of elastically scattered electrons is detected. This technique is typically coupled with reflection high energy electron diffraction (RHEED) and "reflection high-energy loss spectroscopy (RHELS)". Another variation is spin-polarized low-energy electron microscopy (SPLEEM), which is used for looking at the microstructure of magnetic domains.
The STEM rasters a focused incident probe across a specimen that (as with the TEM) has been thinned to facilitate detection of electrons scattered "through" the specimen. The high resolution of the TEM is thus possible in STEM. The focusing action (and aberrations) occur before the electrons hit the specimen in the STEM, but afterward in the TEM. The STEM's use of SEM-like beam rastering simplifies annular dark-field imaging and other analytical techniques, but also means that image data is acquired in serial rather than in parallel fashion. Often a TEM can be equipped with the scanning option, and then it can function both as TEM and STEM.
In scanning tunneling microscopy (STM), a conductive tip held at a voltage is brought near a surface, and a profile can be obtained from the tunneling probability of an electron between the tip and the sample, since that probability is a function of distance.
In their most common configurations, electron microscopes produce images with a single brightness value per pixel, with the results usually rendered in grayscale. However, often these images are then colorized through the use of feature-detection software, or simply by hand-editing using a graphics editor. This may be done to clarify structure or for aesthetic effect and generally does not add new information about the specimen.
In some configurations information about several specimen properties is gathered per pixel, usually by the use of multiple detectors. In SEM, the attributes of topography and material contrast can be obtained by a pair of backscattered electron detectors and such attributes can be superimposed in a single color image by assigning a different primary color to each attribute. Similarly, a combination of backscattered and secondary electron signals can be assigned to different colors and superimposed on a single color micrograph displaying simultaneously the properties of the specimen.
Some types of detectors used in SEM have analytical capabilities, and can provide several items of data at each pixel. Examples are the Energy-dispersive X-ray spectroscopy (EDS) detectors used in elemental analysis and Cathodoluminescence microscope (CL) systems that analyse the intensity and spectrum of electron-induced luminescence in (for example) geological specimens. In SEM systems using these detectors, it is common to color code the signals and superimpose them in a single color image, so that differences in the distribution of the various components of the specimen can be seen clearly and compared. Optionally, the standard secondary electron image can be merged with the one or more compositional channels, so that the specimen's structure and composition can be compared. Such images can be made while maintaining the full integrity of the original signal, which is not modified in any way.
Materials to be viewed under an electron microscope may require processing to produce a suitable sample. The technique required varies depending on the specimen and the analysis required:
Electron microscopes are expensive to build and maintain, but the capital and running costs of confocal light microscope systems now overlaps with those of basic electron microscopes. Microscopes designed to achieve high resolutions must be housed in stable buildings (sometimes underground) with special services such as magnetic field canceling systems.
The samples largely have to be viewed in vacuum, as the molecules that make up air would scatter the electrons. An exception is liquid-phase electron microscopy, using either a closed liquid cell or an environmental chamber, for example in the environmental scanning electron microscope, which allows hydrated samples to be viewed in a low-pressure wet environment. Various techniques for in situ electron microscopy of gaseous samples have been developed as well.
Scanning electron microscopes operating in conventional high-vacuum mode usually image conductive specimens; therefore non-conductive materials require conductive coating (gold/palladium alloy, carbon, osmium, etc.). The low-voltage mode of modern microscopes makes possible the observation of non-conductive specimens without coating. Non-conductive materials can be imaged also by a variable pressure (or environmental) scanning electron microscope.
Small, stable specimens such as carbon nanotubes, diatom frustules and small mineral crystals (asbestos fibres, for example) require no special treatment before being examined in the electron microscope. Samples of hydrated materials, including almost all biological specimens have to be prepared in various ways to stabilize them, reduce their thickness (ultrathin sectioning) and increase their electron optical contrast (staining). These processes may result in "artifacts", but these can usually be identified by comparing the results obtained by using radically different specimen preparation methods. Since the 1980s, analysis of cryofixed, vitrified specimens has also become increasingly used by scientists, further confirming the validity of this technique.
Biology and life sciences
List of recently extinct bird species
Over 190 species of birds have become extinct since 1500, and the rate of extinction seems to be increasing. The situation is exemplified by Hawaii, where 30% of all known recently extinct bird taxa originally lived. Other areas, such as Guam, have also been hit hard; Guam has lost over 60% of its native bird taxa in the last 30 years, many of them due to the introduced brown tree snake.
Currently there are approximately 10,000 living species of birds, with an estimated 1,200 considered to be under threat of extinction.
Island species in general, and flightless island species in particular, are most at risk. The disproportionate number of rails in the list reflects the tendency of that family to lose the ability to fly when geographically isolated. Even more rails became extinct before they could be described by scientists; these taxa are listed in List of Late Quaternary prehistoric bird species.
The extinction dates given below are usually approximations of the actual date of extinction. In some cases, more exact dates are given as it is sometimes possible to pinpoint the date of extinction to a specific year or even day (the San Benedicto rock wren is possibly the most extreme example—its extinction could be timed with an accuracy of maybe half an hour). Extinction dates in the literature are usually the dates of the last verified record (credible observation or specimen taken); for many Pacific birds that became extinct shortly after European contact, however, this leaves an uncertainty period of over a century, because the islands on which they lived were only rarely visited by scientists.
Ducks, geese and swans
Quails and relatives
See also Bokaak "bustard" under Gruiformes below
Shorebirds, gulls and auks
Rails and allies - probably paraphyletic
Grebes
Pelicans and related birds
Boobies and related birds
Petrels, shearwaters, albatrosses and storm petrels
Penguins
Pigeons, doves and dodos
For the "Réunion solitaire", see Réunion ibis.
Parrots
Cuckoos
Birds of prey
Typical owls and barn owls
Caprimulgidae - nightjars and nighthawks
Reclusive ground-nesting birds that sally out at night to hunt for large insects and similar prey. They are easily located by the males' song, but this is not given all year. Habitat destruction represents currently the biggest threat, while island populations are threatened by introduced mammalian predators, notably dogs, cats, pigs and mongooses.
Swifts and hummingbirds
Kingfishers and related birds
Woodpeckers and related birds
Perching birds
Furnariidae – ovenbirds
Acanthisittidae – New Zealand "wrens"
Mohoidae – Hawaiian "honeyeaters". Family established in 2008, previously in Meliphagidae.
Meliphagidae – honeyeaters and Australian chats
Acanthizidae – scrubwrens, thornbills, and gerygones
Pachycephalidae – whistlers, shrike-thrushes, pitohuis and allies
Dicruridae – monarch flycatchers and allies
Oriolidae – Old World orioles and allies
Callaeidae – New Zealand wattlebirds
Hirundinidae – swallows and martins
Acrocephalidae – marsh and tree warblers
Muscicapidae – Old World flycatchers and chats
Megaluridae – megalurid warblers or grass warblers
Cisticolidae – cisticolas and allies
Zosteropidae – white-eyes - probably belonging to Timaliidae
Timaliidae – Old World babblers
Pycnonotidae – bulbuls
Sylvioidea "incertae sedis"
Sturnidae – starlings
Turdidae – thrushes and relatives
Mimidae – mockingbirds and thrashers
Estrildidae – estrildid finches (waxbills, munias, etc.)
Icteridae – grackles
Parulidae – New World warblers
Ploceidae – weavers
Fringillidae – true finches and Hawaiian honeycreepers
Emberizidae – buntings and American sparrows
Extinction of subspecies is a subject very dependent on guesswork. National and international conservation projects and research publications such as redlists usually focus on species as a whole. Reliable information on the status of threatened subspecies usually has to be assembled piecemeal from published observations, such as regional checklists. Therefore, the following listing contains a high proportion of taxa that may still exist, but are listed here due to any combination of absence of recent records, a known threat such as habitat destruction, or an observed decline.
Ratites and related birds
Tinamous
Ducks, geese and swans
Quails and relatives
Shorebirds, gulls and auks
Rails and allies - probably paraphyletic
Herons and related birds - possibly paraphyletic
Sandgrouses
Pigeons, doves and dodos
Parrots
Cuckoos
Birds of prey
Typical owls and barn owls
Nightjars and allies
Swifts and hummingbirds
Kingfishers and related birds
Woodpeckers and related birds
Perching birds
Pittidae – pittas
Tyrannidae – tyrant flycatchers
Furnariidae – ovenbirds
Formicariidae – antpittas and antthrushes
Maluridae – Australasian "wrens"
Pardalotidae – pardalotes, scrubwrens, thornbills and gerygones
Petroicidae – Australasian "robins"
Cinclosomatidae – whipbirds and allies
Artamidae – woodswallows, currawongs and allies
Monarchidae – monarch flycatchers
Rhipiduridae – fantails
Campephagidae – cuckoo-shrikes and trillers
Oriolidae – orioles and figbird
Corvidae – crows, ravens, magpies and jays
Callaeidae – New Zealand wattlebirds
Regulidae – kinglets
Hirundinidae – swallows and martins
Phylloscopidae – phylloscopid warblers or leaf-warblers
Cettiidae – cettiid warblers or typical bush-warblers
Acrocephalidae – acrocephalid warblers or marsh- and tree warblers
Pycnonotidae – bulbuls
Cisticolidae – cisticolas and allies
Sylviidae – sylviid ("true") warblers and parrotbills
Zosteropidae – white-eyes; probably belonging to Timaliidae
Timaliidae – Old World babblers
"African warblers"
Sylvioidea "incertae sedis"
Troglodytidae – wrens
Paridae – tits, chickadees and titmice
Cinclidae – dippers
Muscicapidae – Old World flycatchers and chats
Turdidae – thrushes and allies
Mimidae – mockingbirds and thrashers
Estrildidae – estrildid finches (waxbills, munias, etc.)
Fringillidae – true finches and Hawaiian honeycreepers
Icteridae – grackles
Parulidae – New World warblers
Thraupidae – tanagers
Emberizidae – buntings and American sparrows
Eli Whitney
Eli Whitney (December 8, 1765 – January 8, 1825) was an American inventor, widely known for inventing the cotton gin, one of the key inventions of the Industrial Revolution, which shaped the economy of the Antebellum South.
Whitney's invention made upland short cotton into a profitable crop, which strengthened the economic foundation of slavery in the United States.
Despite the social and economic impact of his invention, Whitney lost much of his profit in legal battles over patent infringement for the cotton gin. Thereafter, he turned his attention to securing contracts with the government for the manufacture of muskets for the newly formed United States Army. He continued making arms and inventing until his death in 1825.
Whitney was born in Westborough, Massachusetts, on December 8, 1765, the eldest child of Eli Whitney Sr., a prosperous farmer, and his wife Elizabeth Fay, also of Westborough.
Although the younger Eli, born in 1765, could technically be called a "Junior", history has never known him as such. He was famous during his lifetime and afterward by the name "Eli Whitney". His son, born in 1820, also named Eli, was well known during his lifetime and afterward by the name "Eli Whitney, Jr."
Whitney's mother, Elizabeth Fay, died in 1777, when he was 11. At age 14 he operated a profitable nail manufacturing operation in his father's workshop during the Revolutionary War.
Because his stepmother opposed his wish to attend college, Whitney worked as a farm laborer and school teacher to save money. He prepared for Yale at Leicester Academy (now Becker College) and under the tutelage of Rev. Elizur Goodrich of Durham, Connecticut; he entered Yale in the fall of 1789 and graduated Phi Beta Kappa in 1792. Whitney expected to study law but, finding himself short of funds, accepted an offer to go to South Carolina as a private tutor.
Instead of reaching his destination, he was convinced to visit Georgia. In the closing years of the 18th century, Georgia was a magnet for New Englanders seeking their fortunes (its Revolutionary-era governor had been Lyman Hall, a migrant from Connecticut). When he initially sailed for South Carolina, among his shipmates were the widow (Catherine Littlefield Greene) and family of the Revolutionary hero Gen. Nathanael Greene of Rhode Island. Mrs. Greene invited Whitney to visit her Georgia plantation, Mulberry Grove. Her plantation manager and husband-to-be was Phineas Miller, another Connecticut migrant and Yale graduate (class of 1785), who would become Whitney's business partner.
Whitney is most famous for two innovations which came to have significant impacts on the United States in the mid-19th century: the cotton gin (1793) and his advocacy of interchangeable parts. In the South, the cotton gin revolutionized the way cotton was harvested and reinvigorated slavery. Conversely, in the North the adoption of interchangeable parts revolutionized the manufacturing industry, contributing greatly to the U.S. victory in the Civil War.
The cotton gin is a mechanical device that removes the seeds from cotton, a process that had previously been extremely labor-intensive. The word "gin" is short for "engine". While staying at Mulberry Grove, Whitney constructed several ingenious household devices, which led Mrs. Greene to introduce him to some businessmen who were discussing the desirability of a machine to separate the short-staple upland cotton from its seeds, work that was then done by hand at the rate of a pound of lint a day. In a few weeks Whitney produced a model. The cotton gin was a wooden drum stuck with hooks that pulled the cotton fibers through a mesh. The cotton seeds would not fit through the mesh and fell outside. Whitney occasionally told a story wherein he was pondering an improved method of seeding the cotton when he was inspired by observing a cat attempting to pull a chicken through a fence, able to pull through only some of the feathers.
A single cotton gin could generate up to of cleaned cotton daily. This contributed to the economic development of the Southern United States, a prime cotton growing area; some historians believe that this invention allowed for the African slavery system in the Southern United States to become more sustainable at a critical point in its development.
Whitney applied for the patent for his cotton gin on October 28, 1793, and received the patent (later numbered as X72) on March 14, 1794, but it was not validated until 1807. Whitney and his partner, Miller, did not intend to sell the gins. Rather, like the proprietors of grist and sawmills, they expected to charge farmers for cleaning their cotton – two-fifths of the value, paid in cotton. Resentment at this scheme, the mechanical simplicity of the device, and the primitive state of patent law made infringement inevitable. Whitney and Miller could not build enough gins to meet demand, so gins from other makers found ready sale. Ultimately, patent infringement lawsuits consumed the profits (one patent, later annulled, was granted in 1796 to Hodgen Holmes for a gin which substituted circular saws for the spikes) and their cotton gin company went out of business in 1797. One oft-overlooked point is that there were drawbacks to Whitney's first design. There is significant evidence that the design flaws were solved by his sponsor, Mrs. Greene, but Whitney gave her no public credit or recognition.
After validation of the patent, the legislature of South Carolina voted $50,000 for the rights for that state, while North Carolina levied a license tax for five years, from which about $30,000 was realized. There is a claim that Tennessee paid, perhaps, $10,000.
While the cotton gin did not earn Whitney the fortune he had hoped for, it did give him fame. It has been argued by some historians that Whitney's cotton gin was an important if unintended cause of the American Civil War. After Whitney's invention, the plantation slavery industry was rejuvenated, eventually culminating in the Civil War.
The cotton gin transformed Southern agriculture and the national economy. Southern cotton found ready markets in Europe and in the burgeoning textile mills of New England. Cotton exports from the U.S. boomed after the cotton gin's appearance – from less than in 1793 to by 1810. Cotton was a staple that could be stored for long periods and shipped long distances, unlike most agricultural products. It became the U.S.'s chief export, representing over half the value of U.S. exports from 1820 to 1860.
Paradoxically, the cotton gin, a labor-saving device, helped preserve and prolong slavery in the United States for another 70 years. Before the 1790s, slave labor was primarily employed in growing rice, tobacco, and indigo, none of which were especially profitable anymore. Neither was cotton, due to the difficulty of seed removal. But with the invention of the gin, growing cotton with slave labor became highly profitable – the chief source of wealth in the American South, and the basis of frontier settlement from Georgia to Texas. "King Cotton" became a dominant economic force, and slavery was sustained as a key institution of Southern society.
Eli Whitney has often been incorrectly credited with inventing the idea of interchangeable parts, which he championed for years as a maker of muskets; however, the idea predated Whitney, and Whitney's role in it was one of promotion and popularizing, not invention. Successful implementation of the idea eluded Whitney until near the end of his life, occurring first in others' armories.
Attempts at interchangeability of parts can be traced back as far as the Punic Wars through both archaeological remains of boats now in Museo Archeologico Baglio Anselmi and contemporary written accounts. In modern times the idea developed over decades among many people. An early leader was Jean-Baptiste Vaquette de Gribeauval, an 18th-century French artillerist who created a fair amount of standardization of artillery pieces, although not true interchangeability of parts. He inspired others, including Honoré Blanc and Louis de Tousard, to work further on the idea, and on shoulder weapons as well as artillery. In the 19th century these efforts produced the "armory system," or American system of manufacturing. Certain other New Englanders, including Captain John H. Hall and Simeon North, arrived at successful interchangeability before Whitney's armory did. The Whitney armory finally succeeded not long after his death in 1825.
The motives behind Whitney's acceptance of a contract to manufacture muskets in 1798 were mostly monetary. By the late 1790s, Whitney was on the verge of bankruptcy and the cotton gin litigation had left him deeply in debt. His New Haven cotton gin factory had burned to the ground, and litigation sapped his remaining resources. The French Revolution had ignited new conflicts between Great Britain, France, and the United States. The new American government, realizing the need to prepare for war, began to rearm. The War Department issued contracts for the manufacture of 10,000 muskets. Whitney, who had never made a gun in his life, obtained a contract in January 1798 to deliver 10,000 to 15,000 muskets in 1800. He had not mentioned interchangeable parts at that time. Ten months later, the Treasury Secretary, Oliver Wolcott, Jr., sent him a "foreign pamphlet on arms manufacturing techniques," possibly one of Honoré Blanc's reports, after which Whitney first began to talk about interchangeability.
In May 1798, Congress voted for legislation setting aside eight hundred thousand dollars to pay for small arms and cannons in case war with France erupted. It offered a $5,000 incentive, with an additional $5,000 once that money was exhausted, to anyone able to accurately produce arms for the government. Because the cotton gin had not brought Whitney the rewards he believed it promised, he accepted the offer. Although the contract was for one year, Whitney did not deliver the arms until 1809, using multiple excuses for the delay. Historians have since found that during 1801–1806, Whitney took the money and headed to South Carolina in order to profit from the cotton gin.
Although Whitney's demonstration of 1801 appeared to show the feasibility of creating interchangeable parts, Merritt Roe Smith concludes that it was "staged" and "duped government authorities" into believing that he had been successful. The charade gained him time and resources toward achieving that goal.
When the government complained that Whitney's price per musket compared unfavorably with those produced in government armories, he was able to calculate an actual price per musket by including fixed costs such as insurance and machinery, which the government had not accounted for. He thus made early contributions to both the concept of cost accounting and that of economic efficiency in manufacturing.
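The costing argument can be made concrete with a toy calculation: allocating fixed costs across annual output raises the true per-unit cost above the direct cost alone. A minimal sketch in Python, with entirely hypothetical figures (no actual prices from Whitney's contract are used):

```python
# Toy illustration of folding fixed costs into a per-unit price.
# All figures below are hypothetical, chosen only for the arithmetic.
fixed_costs = 10_000.0         # machinery, insurance, buildings (per year)
direct_cost_per_musket = 9.0   # labor and materials per musket
muskets_per_year = 2_000

true_cost = direct_cost_per_musket + fixed_costs / muskets_per_year
print(f"direct cost only: ${direct_cost_per_musket:.2f}")
print(f"with fixed costs: ${true_cost:.2f}")  # $14.00 per musket
```

Comparing only direct costs would make the government armories look cheaper; folding in the fixed costs, as Whitney argued, gives the real price per musket.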
Machine tool historian Joseph W. Roe credited Whitney with inventing the first milling machine circa 1818. Subsequent work by other historians (Woodbury; Smith; Muir; Battison [cited by Baida]) suggests that Whitney was among a group of contemporaries all developing milling machines at about the same time (1814 to 1818), and that the others were more important to the innovation than Whitney was. (The machine that excited Roe may not have been built until 1825, after Whitney's death.) Therefore, no one person can properly be described as the inventor of the milling machine.
Despite his humble origins, Whitney was keenly aware of the value of social and political connections. In building his arms business, he took full advantage of the access that his status as a Yale alumnus gave him to other well-placed graduates, such as Oliver Wolcott, Jr., Secretary of the Treasury (class of 1778), and James Hillhouse, a New Haven developer and political leader.
His 1817 marriage to Henrietta Edwards, granddaughter of the famed evangelist Jonathan Edwards, daughter of Pierpont Edwards, head of the Democratic Party in Connecticut, and first cousin of Yale's president, Timothy Dwight, the state's leading Federalist, further tied him to Connecticut's ruling elite. In a business dependent on government contracts, such connections were essential to success.
Whitney died of prostate cancer on January 8, 1825, in New Haven, Connecticut, just a month after his 59th birthday. He left a widow and his four children behind. During the course of his illness, he reportedly invented and constructed several devices to mechanically ease his pain.
The Eli Whitney Students Program, Yale University's admissions program for non-traditional students, is named in honor of Whitney, who not only began his studies there when he was 23, but also went on to graduate Phi Beta Kappa in just three years. | https://en.wikipedia.org/wiki?curid=9732 |
Electromagnetic field
An electromagnetic field (also EM field) is a classical (i.e. non-quantum) field produced by moving electric charges. It is the field described by classical electrodynamics and is the classical counterpart to the quantized electromagnetic field tensor in quantum electrodynamics. The electromagnetic field propagates at the speed of light (in fact, this field can be identified "as" light) and interacts with charges and currents. Its quantum counterpart is one of the four fundamental forces of nature (the others are gravitation, the weak interaction, and the strong interaction).
The field can be viewed as the combination of an electric field and a magnetic field. The electric field is produced by stationary charges, and the magnetic field by moving charges (currents); these two are often described as the sources of the field. The way in which charges and currents interact with the electromagnetic field is described by Maxwell's equations and the Lorentz force law. For charges moving slowly compared with the speed of light, the force exerted by the electric field is typically much stronger than that exerted by the magnetic field.
From a classical perspective in the history of electromagnetism, the electromagnetic field can be regarded as a smooth, continuous field, propagated in a wavelike manner. By contrast, from the perspective of quantum field theory, this field is seen as quantized, meaning that the free quantum field (i.e. non-interacting field) can be expressed as the Fourier sum of creation and annihilation operators in energy-momentum space, while the effects of the interacting quantum field may be analyzed in perturbation theory via the S-matrix with the aid of a whole host of mathematical techniques such as the Dyson series, Wick's theorem, correlation functions, time-evolution operators, Feynman diagrams etc. Note that the quantized field is still spatially continuous; its "energy states" however are discrete (the field's energy states must not be confused with its "energy values", which are continuous; the quantum field's creation operators create multiple "discrete" states of energy called photons).
The electromagnetic field may be viewed in two distinct ways: a continuous structure or a discrete structure.
Classically, electric and magnetic fields are thought of as being produced by smooth motions of charged objects. For example, oscillating charges produce variations in electric and magnetic fields that may be viewed in a 'smooth', continuous, wavelike fashion. In this case, energy is viewed as being transferred continuously through the electromagnetic field between any two locations. For instance, the metal atoms in a radio transmitter appear to transfer energy continuously. This view is useful to a certain extent (radiation of low frequency), but problems are found at high frequencies (see ultraviolet catastrophe).
The electromagnetic field may be thought of in a more 'coarse' way. Experiments reveal that in some circumstances electromagnetic energy transfer is better described as being carried in the form of packets called quanta (in this case, photons) with a fixed frequency. Planck's relation links the photon energy "E" of a photon to its frequency "f" through the equation:
E = hf
where "h" is Planck's constant, and "f" is the frequency of the photon . Although modern quantum optics tells us that there also is a semi-classical explanation of the photoelectric effect—the emission of electrons from metallic surfaces subjected to electromagnetic radiation—the photon was historically (although not strictly necessarily) used to explain certain observations. It is found that increasing the intensity of the incident radiation (so long as one remains in the linear regime) increases only the number of electrons ejected, and has almost no effect on the energy distribution of their ejection. Only the frequency of the radiation is relevant to the energy of the ejected electrons.
This quantum picture of the electromagnetic field (which treats it as analogous to harmonic oscillators) has proven very successful, giving rise to quantum electrodynamics, a quantum field theory describing the interaction of electromagnetic radiation with charged matter. It also gives rise to quantum optics, which is different from quantum electrodynamics in that the matter itself is modelled using quantum mechanics rather than quantum field theory.
In the past, electrically charged objects were thought to produce two different, unrelated types of field associated with their charge property. An electric field is produced when the charge is stationary with respect to an observer measuring the properties of the charge, and a magnetic field as well as an electric field is produced when the charge moves, creating an electric current with respect to this observer. Over time, it was realized that the electric and magnetic fields are better thought of as two parts of a greater whole—the electromagnetic field. Until 1820, when the Danish physicist H. C. Ørsted showed the effect of electric current on a compass needle, electricity and magnetism had been viewed as unrelated phenomena. In 1831, Michael Faraday made the seminal observation that time-varying magnetic fields could induce electric currents and then, in 1864, James Clerk Maxwell published his famous paper "A Dynamical Theory of the Electromagnetic Field".
Once this electromagnetic field has been produced from a given charge distribution, other charged or magnetised objects in this field may experience a force. If these other charges and currents are comparable in size to the sources producing the above electromagnetic field, then a new net electromagnetic field will be produced. Thus, the electromagnetic field may be viewed as a dynamic entity that causes other charges and currents to move, and which is also affected by them. These interactions are described by Maxwell's equations and the Lorentz force law. This discussion ignores the radiation reaction force.
The behavior of the electromagnetic field can be divided into four different parts of a loop: the electric and magnetic fields are generated by moving electric charges; the electric and magnetic fields interact with each other; the electric and magnetic fields produce forces on electric charges; and the electric charges move in space, regenerating the fields.
A common misunderstanding is that (a) the quanta of the fields act in the same manner as (b) the charged particles, such as electrons, that generate the fields. In our everyday world, electrons travel slowly through conductors with a drift velocity of a fraction of a centimeter (or inch) per second, and through a vacuum tube at speeds of around 1 thousand km/s, but fields propagate at the speed of light, approximately 300 thousand kilometers (or 186 thousand miles) per second. The drift speed of charge carriers in a conductor is thus smaller than the field propagation speed by many orders of magnitude. Maxwell's equations relate (a) the presence and movement of charged particles with (b) the generation of fields. Those fields can then affect the force on, and can then move, other slowly moving charged particles. Charged particles can move at relativistic speeds nearing field propagation speeds, but, as Albert Einstein showed, this requires enormous field energies, which are not present in our everyday experiences with electricity, magnetism, matter, and time and space.
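The size of that gap can be checked with a back-of-the-envelope drift-velocity calculation from the standard relation I = nqAv. A minimal sketch in Python; the carrier density, wire size, and current below are illustrative textbook values, not figures from this article:

```python
E_CHARGE = 1.602e-19   # elementary charge, C
N_COPPER = 8.5e28      # free-electron density of copper, electrons/m^3
C_LIGHT = 3.0e8        # field propagation speed in vacuum, m/s

current = 1.0          # current in the wire, A
area = 1.0e-6          # wire cross-section, m^2 (i.e. 1 mm^2)

# I = n * q * A * v  =>  v = I / (n * q * A)
v_drift = current / (N_COPPER * E_CHARGE * area)
print(f"drift velocity    ~ {v_drift:.2e} m/s")        # ~7e-5 m/s
print(f"field/drift ratio ~ {C_LIGHT / v_drift:.1e}")  # ~4e12
```

For these values the fields outpace the drifting electrons by roughly twelve orders of magnitude, which is why a signal appears at the far end of a circuit almost instantly even though individual electrons barely move.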
Phenomena from everyday experience can be attributed to each part of this feedback loop: charges generate fields, the fields exert forces on other charges, and those charges move, in turn regenerating the fields.
There are different mathematical ways of representing the electromagnetic field. The first one views the electric and magnetic fields as three-dimensional vector fields. These vector fields each have a value defined at every point of space and time and are thus often regarded as functions of the space and time coordinates. As such, they are often written as E(x, y, z, t) (electric field) and B(x, y, z, t) (magnetic field).
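This functional view translates directly into code. A minimal sketch in Python of E(x, y, z, t) for the simplest case, a static point charge at the origin (via Coulomb's law); the charge value and observation point are arbitrary illustrative choices:

```python
import math

EPS0 = 8.8541878128e-12  # permittivity of free space, F/m

def e_field_point_charge(q, x, y, z, t=0.0):
    """Electric field vector (Ex, Ey, Ez) of a static charge q at the origin.

    The field is static, so t is accepted (to match E(x, y, z, t)) but unused.
    """
    r2 = x * x + y * y + z * z
    k = q / (4.0 * math.pi * EPS0 * r2 * math.sqrt(r2))  # q/(4*pi*eps0*r^3)
    return (k * x, k * y, k * z)

# Field of a 1 nC charge, 10 cm away along the x-axis:
print(e_field_point_charge(1e-9, 0.1, 0.0, 0.0))  # ~(899, 0, 0) V/m
```

A time-varying field would simply make the returned vector depend on t as well; the point is that the field assigns a vector to every point of space and time.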
If only the electric field (E) is non-zero, and is constant in time, the field is said to be an electrostatic field. Similarly, if only the magnetic field (B) is non-zero and is constant in time, the field is said to be a magnetostatic field. However, if either the electric or magnetic field has a time-dependence, then both fields must be considered together as a coupled electromagnetic field using Maxwell's equations.
With the advent of special relativity, physical laws became amenable to the formalism of tensors. Maxwell's equations can be written in tensor form, generally viewed by physicists as a more elegant means of expressing physical laws.
The behaviour of electric and magnetic fields, whether in cases of electrostatics, magnetostatics, or electrodynamics (electromagnetic fields), is governed by Maxwell's equations. In the vector field formalism, these are:
∇ · E = ρ/ε₀ (Gauss's law)
∇ · B = 0 (Gauss's law for magnetism)
∇ × E = −∂B/∂t (Faraday's law)
∇ × B = μ₀J + μ₀ε₀ ∂E/∂t (Ampère–Maxwell law)
where ρ is the charge density, which can (and often does) depend on time and position, ε₀ is the permittivity of free space, μ₀ is the permeability of free space, and J is the current density vector, also a function of time and position. The units used above are the standard SI units. Inside a linear material, Maxwell's equations change by switching the permeability and permittivity of free space with the permeability and permittivity of the linear material in question. Inside other materials which possess more complex responses to electromagnetic fields, these terms are often represented by complex numbers, or tensors.
The Lorentz force law governs the interaction of the electromagnetic field with charged matter.
When a field crosses the boundary between two different media, the properties of the field change according to the boundary conditions at the interface. These conditions are derived from Maxwell's equations.
The tangential components of the electric and magnetic fields as they relate on the boundary of two media are as follows:
n × (E₂ − E₁) = 0
n × (H₂ − H₁) = j_s
where n is the unit normal to the boundary, H is the magnetic field strength (B/μ in a linear medium), and j_s is the free surface current density (zero when no free surface currents are present).
The angle of refraction of an electric field between media is related to the permittivity ε of each medium:
tan θ₁ / tan θ₂ = ε₁ / ε₂
The angle of refraction of a magnetic field between media is related to the permeability μ of each medium:
tan θ₁ / tan θ₂ = μ₁ / μ₂
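Both relations have the same form, so one small helper covers the two cases. A minimal sketch in Python; the incidence angle and material constants below are arbitrary illustrative choices:

```python
import math

def refracted_angle(theta1_deg, k1, k2):
    """Field-line angle (from the normal) in medium 2.

    Uses tan(theta1)/tan(theta2) = k1/k2, where k is the relative
    permittivity (for E fields) or permeability (for B fields).
    """
    t2 = math.atan(math.tan(math.radians(theta1_deg)) * k2 / k1)
    return math.degrees(t2)

# E-field lines crossing from vacuum (eps_r = 1) into a dielectric (eps_r = 5):
print(refracted_angle(30.0, 1.0, 5.0))  # ~70.9 degrees: lines bend away from the normal
```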
The two Maxwell equations, Faraday's Law and the Ampère-Maxwell Law, illustrate a very practical feature of the electromagnetic field. Faraday's Law may be stated roughly as 'a changing magnetic field creates an electric field'. This is the principle behind the electric generator.
Ampere's Law roughly states that 'a changing electric field creates a magnetic field'. Thus, this law can be applied to generate a magnetic field and run an electric motor.
Maxwell's equations take the form of an electromagnetic wave in a volume of space not containing charges or currents (free space) – that is, where ρ and J are zero. Under these conditions, the electric and magnetic fields satisfy the electromagnetic wave equation:
(∇² − (1/c²) ∂²/∂t²) E = 0
(∇² − (1/c²) ∂²/∂t²) B = 0
where c = 1/√(μ₀ε₀) is the speed of light in free space.
James Clerk Maxwell was the first to obtain this relationship by his completion of Maxwell's equations with the addition of a displacement current term to Ampere's circuital law.
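That relationship can be checked numerically: plugging the measured values of the vacuum permeability and permittivity into c = 1/√(μ₀ε₀) reproduces the speed of light. A minimal sketch in Python, using the standard SI constants:

```python
import math

MU0 = 1.25663706212e-6    # permeability of free space, H/m
EPS0 = 8.8541878128e-12   # permittivity of free space, F/m

c = 1.0 / math.sqrt(MU0 * EPS0)
print(f"c = {c:.6e} m/s")  # ~2.998e8 m/s, matching the measured speed of light
```

It was this numerical agreement that led Maxwell to identify light itself as an electromagnetic wave.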
As electromagnetism is one of the four fundamental forces of nature, it is useful to compare the electromagnetic field with the gravitational, strong and weak fields. The word 'force' is sometimes replaced by 'interaction' because modern particle physics models electromagnetism as an exchange of particles known as gauge bosons.
Sources of electromagnetic fields consist of two types of charge – positive and negative. This contrasts with the sources of the gravitational field, which are masses. Masses are sometimes described as "gravitational charges", the important feature of them being that there are only positive masses and no negative masses. Further, gravity differs from electromagnetism in that positive masses attract other positive masses whereas like charges in electromagnetism repel each other.
The relative strengths and ranges of the four interactions are approximately as follows (strengths normalized to the strong interaction at the scale of a proton):
Interaction – Relative strength – Range
Strong – 1 – about 10⁻¹⁵ m
Electromagnetic – about 1/137 – infinite (falls off as 1/r²)
Weak – about 10⁻⁶ – about 10⁻¹⁸ m
Gravitation – about 10⁻³⁸ – infinite (falls off as 1/r²)
When an EM field (see electromagnetic tensor) is not varying in time, it may be seen as a purely electric field, a purely magnetic field, or a mixture of both. However, the general case of a static EM field with both electric and magnetic components present is the case that appears to most observers. Observers who see only an electric or magnetic field component of a static EM field have the other (electric or magnetic) component suppressed, due to the special case of the immobile state of the charges that produce the EM field in that case. In such cases the other component becomes manifest in other observer frames.
A consequence of this is that any case that seems to consist of a "pure" static electric or magnetic field can be converted to an EM field with both electric and magnetic components present, simply by moving the observer into a frame of reference which is moving with respect to the frame in which only the "pure" electric or magnetic field appears. That is, a pure static electric field will show the familiar magnetic field associated with a current in any frame of reference where the charge moves. Likewise, any new motion of a charge in a region that seemed previously to contain only a magnetic field will show that the space now contains an electric field as well, which will be found to produce an additional Lorentz force upon the moving charge.
Thus, electrostatics, as well as magnetism and magnetostatics, are now seen as studies of the static EM field when a particular frame has been selected to suppress the other type of field; since an EM field with both electric and magnetic components will appear in any other frame, these "simpler" effects are merely artifacts of the observer's frame of reference. The "applications" of all such non-time-varying (static) fields are discussed in the main articles linked in this section.
An EM field that varies in time has two "causes" in Maxwell's equations. One is charges and currents (so-called "sources"), and the other cause for an electric or magnetic field is a change in the other type of field (this last cause also appears in "free space" very far from currents and charges).
An electromagnetic field very far from currents and charges (sources) is called electromagnetic radiation (EMR), since it radiates from the charges and currents in the source, has no "feedback" effect on them, and is also not affected directly by them in the present time (rather, it is indirectly produced by a sequence of changes in fields radiating out from them in the past). EMR consists of the radiations in the electromagnetic spectrum, including radio waves, microwaves, infrared, visible light, ultraviolet light, X-rays, and gamma rays. The many commercial applications of these radiations are discussed in the named and linked articles.
A notable application of visible light is that this type of energy from the Sun powers all life on Earth that either makes or uses oxygen.
A changing electromagnetic field which is physically close to currents and charges (see near and far field for a definition of “close”) will have a dipole characteristic that is dominated by either a changing electric dipole, or a changing magnetic dipole. This type of dipole field near sources is called an electromagnetic "near-field".
Changing "electric" dipole fields, as such, are used commercially as near-fields mainly as a source of dielectric heating. Otherwise, they appear parasitically around conductors which absorb EMR, and around antennas which have the purpose of generating EMR at greater distances.
Changing "magnetic" dipole fields (i.e., magnetic near-fields) are used commercially for many types of magnetic induction devices. These include motors and electrical transformers at low frequencies, and devices such as metal detectors and MRI scanner coils at higher frequencies. Sometimes these high-frequency magnetic fields change at radio frequencies without being far-field waves and thus radio waves; see RFID tags.
See also near-field communication.
Further commercial uses of near-field EM effects may be found in the article on virtual photons, since at the quantum level these fields are represented by those particles. Far-field effects (EMR) in the quantum picture of radiation are represented by ordinary photons.
The potential effects of electromagnetic fields on human health vary widely depending on the frequency and intensity of the fields.
The potential health effects of the very low frequency EMFs surrounding power lines and electrical devices are the subject of on-going research and a significant amount of public debate. The US National Institute for Occupational Safety and Health (NIOSH) and other US government agencies do not consider EMFs a proven health hazard. NIOSH has issued some cautionary advisories but stresses that the data are currently too limited to draw good conclusions.
Employees working with electrical equipment and installations can always be assumed to be exposed to electromagnetic fields. The exposure of office workers to fields generated by computers, monitors, etc. is negligible owing to the low field strengths. However, industrial installations for induction hardening and melting, or welding equipment, may produce considerably higher field strengths and require further examination. If the exposure cannot be determined from manufacturers' information, comparisons with similar systems, or analytical calculations, measurements have to be performed. The results of the evaluation help to assess possible hazards to the safety and health of workers and to define protective measures. Since electromagnetic fields may influence passive or active implants of workers, it is essential to consider the exposure at their workplaces separately in the risk assessment.
On the other hand, radiation from other parts of the electromagnetic spectrum, such as ultraviolet light and gamma rays, is known to cause significant harm in some circumstances. For more information on the health effects due to specific electromagnetic phenomena and parts of the electromagnetic spectrum, see the following articles:
Empire State Building
The Empire State Building is a 102-story Art Deco skyscraper in Midtown Manhattan in New York City. It was designed by Shreve, Lamb & Harmon and built from 1930 to 1931. Its name is derived from "Empire State", the nickname of the state of New York. The building has a roof height of and stands a total of tall, including its antenna. The Empire State Building stood as the world's tallest building until the construction of the World Trade Center in 1970; following the latter's collapse in the September 11, 2001 attacks, the Empire State Building was again the city's tallest skyscraper until 2012. Today, the building is the seventh-tallest building in New York City, the ninth-tallest completed skyscraper in the United States, the 48th-tallest in the world, and the fifth-tallest freestanding structure in the Americas.
The site of the Empire State Building, located in Midtown South on the west side of Fifth Avenue between West 33rd and 34th Streets, was originally part of an early 18th-century farm. It was developed in 1893 as the site of the Waldorf–Astoria Hotel. In 1929, Empire State Inc. acquired the site and devised plans for a skyscraper there. The design for the Empire State Building was changed fifteen times until it was ensured to be the world's tallest building. Construction started on March 17, 1930, and the building opened thirteen and a half months afterward on May 1, 1931. Despite favorable publicity related to the building's construction, because of the Great Depression and World War II, its owners did not make a profit until the early 1950s.
The building's Art Deco architecture, height, and observation decks have made it a popular attraction. Around 4 million tourists from around the world annually visit the building's 86th and 102nd floor observatories; an additional indoor observatory on the 80th floor opened in 2019. The Empire State Building is an American cultural icon: it has been featured in more than 250 TV shows and movies since the film "King Kong" was released in 1933. A symbol of New York City, the tower has been named as one of the Seven Wonders of the Modern World by the American Society of Civil Engineers. It was ranked first on the American Institute of Architects' List of America's Favorite Architecture in 2007. Additionally, the Empire State Building and its ground-floor interior were designated city landmarks by the New York City Landmarks Preservation Commission in 1980, and were added to the National Register of Historic Places as a National Historic Landmark in 1986.
The Empire State Building is located on the west side of Fifth Avenue in Manhattan, between 33rd Street to the south and 34th Street to the north. Tenants enter the building through the Art Deco lobby located at 350 Fifth Avenue. Visitors to the observatories use an entrance at 20 West 34th Street; prior to August 2018, visitors entered through the Fifth Avenue lobby. Although physically located in South Midtown, a mixed residential and commercial area, the building is so large that it was assigned its own ZIP Code, 10118; it is one of 43 buildings in New York City that have their own ZIP codes.
The areas surrounding the Empire State Building are home to other major points of interest, including Macy's at Herald Square on Sixth Avenue and 34th Street, Koreatown on 32nd Street between Fifth and Sixth Avenues, Penn Station and Madison Square Garden on Seventh Avenue between 32nd and 34th Streets, and the Flower District on 28th Street between Sixth and Seventh Avenues. The nearest New York City Subway stations are 34th Street–Penn Station at Seventh Avenue, two blocks west; 34th Street–Herald Square, one block west; and 33rd Street at Park Avenue, two blocks east. There is also a PATH station at 33rd Street and Sixth Avenue.
To the east of the Empire State Building is Murray Hill, a neighborhood with a mix of residential, commercial, and entertainment activity. One block east of the Empire State Building, on Madison Avenue at 34th Street, is the New York Public Library's Science, Industry and Business Library, which is located on the same block as the City University of New York's Graduate Center. Bryant Park and the New York Public Library Main Branch are located six blocks north of the Empire State Building, on the block bounded by Fifth Avenue, Sixth Avenue, 40th Street, and 42nd Street. Grand Central Terminal is located two blocks east of the library's Main Branch, at Park Avenue and 42nd Street.
The tract was originally part of Mary and John Murray's farm on Murray Hill. The earliest recorded major action on the site was during the American Revolutionary War, when General George Washington's troops retreated from the British following the Battle of Kip's Bay. In 1799, John Thompson (or Thomson; accounts vary) bought a tract of land roughly bounded by present-day Madison Avenue, 36th Street, Sixth Avenue, and 33rd Street, immediately north of the Caspar Samler farm. He paid a total of 482 British pounds for the parcel, equivalent to roughly $2,400 at the time, or about £ ($) today. Thompson was said to have sold the farm to Charles Lawton for $10,000 (equal to $ today) on September 24, 1825. The full details of this sale are unclear, as parts of the deed that certified the sale were later lost. In 1826, John Jacob Astor of the prominent Astor family bought the land from Lawton for $20,500. The Astors also purchased a parcel from the Murrays. John Jacob's son William Backhouse Astor Sr. bought a half interest in the properties for $20,500 on July 28, 1827, securing a tract of land on Fifth Avenue from 32nd to 35th streets.
On March 13, 1893, John Jacob Astor Sr.'s grandson William Waldorf Astor opened the Waldorf Hotel on the site with the help of hotelier George Boldt. On November 1, 1897, Waldorf's cousin, John Jacob Astor IV, opened the 16-story Astoria Hotel on an adjacent site. Together, the combined hotels had a total of 1,300 bedrooms, making the combined hotel the largest in the world at the time. After Boldt died, in early 1918, the hotel lease was purchased by Thomas Coleman du Pont. By the 1920s, the hotel was becoming dated, and the elegant social life of New York had moved much farther north than 34th Street. The Astor family decided to build a replacement hotel further uptown, and sold the hotel to Bethlehem Engineering Corporation in 1928 for $14–16 million. The hotel on the site of today's Empire State Building closed on May 3, 1929.
Bethlehem Engineering Corporation originally intended to build a 25-story office building on the Waldorf–Astoria site. The company's president, Floyd De L. Brown, paid $100,000 of the $1 million down payment required to start construction on the tower, with the promise that the difference would be paid later. Brown borrowed $900,000 from a bank, but then defaulted on the loan.
The land was then resold to Empire State Inc., a group of wealthy investors that included Louis G. Kaufman, Ellis P. Earle, John J. Raskob, Coleman du Pont, and Pierre S. du Pont. The name came from the state nickname for New York. Alfred E. Smith, a former Governor of New York and U.S. presidential candidate whose 1928 campaign had been managed by Raskob, was appointed head of the company. The group also purchased nearby land so they would have the needed for the tower's base, with the combined plot measuring wide by long. The Empire State Inc. consortium was announced to the public in August 1929.
Empire State Inc. contracted William F. Lamb, of architectural firm Shreve, Lamb and Harmon, to create the building design. Lamb produced the building drawings in just two weeks using the firm's earlier designs for the Reynolds Building in Winston-Salem, North Carolina as the basis. Concurrently, Lamb's partner Richmond Shreve created "bug diagrams" of the project requirements. The 1916 Zoning Act forced Lamb to design a structure that incorporated setbacks resulting in the lower floors being larger than the upper floors. Consequently, the tower was designed from the top down, giving it a "pencil"-like shape.
The original plan of the building was 50 stories, but was later increased to 60 and then 80 stories. Height restrictions were placed on nearby buildings to ensure that the top fifty floors of the planned 80-story, building would have unobstructed views of the city. "The New York Times" lauded the site's proximity to mass transit, with the Brooklyn–Manhattan Transit's 34th Street station and the Hudson and Manhattan Railroad's 33rd Street terminal one block away, as well as Penn Station two blocks away and the Grand Central Terminal nine blocks away at its closest. It also praised the of proposed floor space near "one of the busiest sections in the world".
While plans for the Empire State Building were being finalized, an intense competition in New York for the title of "world's tallest building" was underway. 40 Wall Street (then the Bank of Manhattan Building) and the Chrysler Building in Manhattan both vied for this distinction and were already under construction when work began on the Empire State Building. The "Race into the Sky", as popular media called it at the time, was representative of the country's optimism in the 1920s, fueled by the building boom in major cities. The 40 Wall Street tower was revised, in April 1929, from to making it the world's tallest. The Chrysler Building added its steel tip to its roof in October 1929, thus bringing it to a height of and greatly exceeding the height of 40 Wall Street. The Chrysler Building's developer, Walter Chrysler, realized that his tower's height would exceed the Empire State Building's as well, having instructed his architect, William Van Alen, to change the Chrysler's original roof from a stubby Romanesque dome to a narrow steel spire. Raskob, wishing to have the Empire State Building be the world's tallest, reviewed the plans and had five floors added as well as a spire; however, the new floors would need to be set back because of projected wind pressure on the extension. On November 18, 1929, Smith acquired a lot at 27–31 West 33rd Street, adding to the width of the proposed office building's site. Two days later, Smith announced the updated plans for the skyscraper. The plans included an observation deck on the 86th-floor roof at a height of , higher than the Chrysler's 71st-floor observation deck.
The 1,050-foot Empire State Building would only be taller than the Chrysler Building, and Raskob was afraid that Chrysler might try to "pull a trick like hiding a rod in the spire and then sticking it up at the last minute." The plans were revised one last time in December 1929, to include a 16-story, metal "crown" and an additional mooring mast intended for dirigibles. The roof height was now , making it the tallest building in the world by far, even without the antenna. The addition of the dirigible station meant that another floor, the now-enclosed 86th floor, would have to be built below the crown; however, unlike the Chrysler's spire, the Empire State's mast would serve a practical purpose. The final plan was announced to the public on January 8, 1930, just before the start of construction. "The New York Times" reported that the spire was facing some "technical problems", but they were "no greater than might be expected under such a novel plan." By this time the blueprints for the building had gone through up to fifteen versions before they were approved. Lamb described the other specifications he was given for the final, approved plan:
The contractors were Starrett Brothers and Eken, Paul and William A. Starrett and Andrew J. Eken, who would later construct other New York City buildings such as Stuyvesant Town, Starrett City and Trump Tower. The project was financed primarily by Raskob and Pierre du Pont, while James Farley's General Builders Supply Corporation supplied the building materials. John W. Bowser was the construction superintendent of the project, and the structural engineer of the building was Homer G. Balcom. The tight completion schedule necessitated the commencement of construction even though the design had yet to be finalized.
Demolition of the old Waldorf–Astoria began on October 1, 1929. Stripping the building down was an arduous process, as the hotel had been constructed using more rigid material than earlier buildings had been. Furthermore, the old hotel's granite, wood chips, and "'precious' metals such as lead, brass, and zinc" were not in high demand resulting in issues with disposal. Most of the wood was deposited into a woodpile on nearby 30th Street or was burned in a swamp elsewhere. Much of the other materials that made up the old hotel, including the granite and bronze, were dumped into the Atlantic Ocean near Sandy Hook, New Jersey.
By the time the hotel's demolition started, Raskob had secured the required funding for the construction of the building. The plan was to start construction later that year but, on October 24, the New York Stock Exchange suffered a sudden crash, marking the beginning of the decade-long Great Depression. Despite the economic downturn, Raskob refused to cancel the project because of the progress that had been made up to that point. Neither Raskob, who had ceased speculation in the stock market the previous year, nor Smith, who had no stock investments, suffered financially in the crash. However, most of the investors were affected and as a result, in December 1929, Empire State Inc. obtained a $27.5 million loan from Metropolitan Life Insurance Company so construction could begin. The stock market crash resulted in no demand for new office space; Raskob and Smith nonetheless started construction, as canceling the project would have resulted in greater losses for the investors.
A structural steel contract was awarded on January 12, 1930, with excavation of the site beginning ten days later on January 22, before the old hotel had been completely demolished. Two twelve-hour shifts, consisting of 300 men each, worked continuously to dig the foundation. Small pier holes were sunk into the ground to house the concrete footings that would support the steelwork. Excavation was nearly complete by early March, and construction on the building itself started on March 17, with the builders placing the first steel columns on the completed footings before the rest of the footings had been finished. Around this time, Lamb held a press conference on the building plans. He described the reflective steel panels parallel to the windows, the large-block Indiana Limestone facade that was slightly more expensive than smaller bricks, and the tower's lines and rise. Four colossal columns, intended for installation in the center of the building site, were delivered; they would support a combined when the building was finished.
The structural steel was pre-ordered and pre-fabricated in anticipation of a revision to the city's building code that would have allowed the Empire State Building's structural steel to carry , up from , thus reducing the amount of steel needed for the building. Although the 18,000-psi regulation had been safely enacted in other cities, Mayor Jimmy Walker did not sign the new codes into law until March 26, 1930, just before construction was due to commence. The first steel framework was installed on April 1, 1930. From there, construction proceeded at a rapid pace; during one stretch of 10 working days, the builders erected fourteen floors. This was made possible through precise coordination of the building's planning, as well as the mass production of common materials such as windows and spandrels. On one occasion, when a supplier could not provide timely delivery of dark Hauteville marble, Starrett switched to using Rose Famosa marble from a German quarry that was purchased specifically to provide the project with sufficient marble.
The scale of the project was massive, with trucks carrying "16,000 partition tiles, 5,000 bags of cement, of sand and 300 bags of lime" arriving at the construction site every day. There were also cafes and concession stands on five of the incomplete floors so workers did not have to descend to the ground level to eat lunch. Temporary water taps were also built so workers did not waste time buying water bottles from the ground level. Additionally, carts running on a small railway system transported materials from the basement storage to elevators that brought the carts to the desired floors where they would then be distributed throughout that level using another set of tracks. The of steel ordered for the project was the largest-ever single order of steel at the time, comprising more steel than was ordered for the Chrysler Building and 40 Wall Street combined. According to historian John Tauranac, building materials were sourced from numerous, and distant, sources with "limestone from Indiana, steel girders from Pittsburgh, cement and mortar from upper New York State, marble from Italy, France, and England, wood from northern and Pacific Coast forests, [and] hardware from New England." The facade, too, used a variety of material, most prominently Indiana limestone but also Swedish black granite, terracotta, and brick.
By June 20, the skyscraper's supporting steel structure had risen to the 26th floor, and by July 27, half of the steel structure had been completed. Starrett Bros. and Eken endeavored to build one floor a day in order to speed up construction, a goal that they almost reached with their pace of stories per week; prior to this, the fastest pace of construction for a building of similar height had been stories per week. While construction progressed, the final designs for the floors were being designed from the ground up (as opposed to the general design, which had been from the roof down). Some of the levels were still undergoing final approval, with several orders placed within an hour of a plan being finalized. On September 10, as steelwork was nearing completion, Smith laid the building's cornerstone during a ceremony attended by thousands. The stone contained a box with contemporary artifacts including the previous day's "New York Times", a U.S. currency set containing all denominations of notes and coins minted in 1930, a history of the site and building, and photographs of the people involved in construction. The steel structure was topped out at on September 19, twelve days ahead of schedule and 23 weeks after the start of construction. Workers raised a flag atop the 86th floor to signify this milestone.
Afterward, work on the building's interior and crowning mast commenced. The mooring mast topped out on November 21, two months after the steelwork had been completed. Meanwhile, work on the walls and interior was progressing at a quick pace, with exterior walls built up to the 75th floor by the time steelwork had been built to the 95th floor. The majority of the facade was already finished by the middle of November. Because of the building's height, it was deemed infeasible to have many elevators or large elevator cabins, so the builders contracted with the Otis Elevator Company to make 66 cars that could speed at , which represented the largest-ever elevator order at the time.
In addition to the time constraint builders had, there were also space limitations because construction materials had to be delivered quickly, and trucks needed to drop off these materials without congesting traffic. This was solved by creating a temporary driveway for the trucks between 33rd and 34th Streets, and then storing the materials in the building's first floor and basements. Concrete mixers, brick hoppers, and stone hoists inside the building ensured that materials would be able to ascend quickly and without endangering or inconveniencing the public. At one point, over 200 trucks made material deliveries at the building site every day. A series of relay and erection derricks, placed on platforms erected near the building, lifted the steel from the trucks below and installed the beams at the appropriate locations. The Empire State Building was structurally completed on April 11, 1931, twelve days ahead of schedule and 410 days after construction commenced. Al Smith shot the final rivet, which was made of solid gold.
The project involved more than 3,500 workers at its peak, including 3,439 on a single day, August 14, 1930. Many of the workers were Irish and Italian immigrants, with a sizable minority of Mohawk ironworkers from the Kahnawake reserve near Montreal. According to official accounts, five workers died during the construction, although the "New York Daily News" gave reports of 14 deaths and a headline in the socialist magazine "The New Masses" spread unfounded rumors of up to 42 deaths. The Empire State Building cost $40,948,900 to build, including demolition of the Waldorf–Astoria (equivalent to $ in ). This was lower than the $60 million budgeted for construction.
Lewis Hine captured many photographs of the construction, documenting not only the work itself but also providing insight into the daily life of workers in that era. Hine's images were used extensively by the media to publish daily press releases. According to the writer Jim Rasenberger, Hine "climbed out onto the steel with the ironworkers and dangled from a derrick cable hundreds of feet above the city to capture, as no one ever had before (or has since), the dizzy work of building skyscrapers". In Rasenberger's words, Hine turned what might have been an assignment of "corporate flak" into "exhilarating art". These images were later organized into their own collection. Onlookers were enraptured by the sheer height at which the steelworkers operated. "New York" magazine wrote of the steelworkers: "Like little spiders they toiled, spinning a fabric of steel against the sky".
The Empire State Building officially opened on May 1, 1931, forty-five days ahead of its projected opening date, and eighteen months from the start of construction. The opening was marked with an event featuring United States President Herbert Hoover, who turned on the building's lights with a ceremonial button push from Washington, D.C. Over 350 guests attended the opening ceremony and the luncheon that followed on the 86th floor, including Jimmy Walker, Governor Franklin D. Roosevelt, and Al Smith. An account from that day stated that the view from the luncheon was obscured by a fog, with other landmarks such as the Statue of Liberty being "lost in the mist" enveloping New York City. The building opened for business the next day. Advertisements for the building's observatories were placed in local newspapers, while nearby hotels also capitalized on the events by releasing advertisements that lauded their proximity to the newly opened tower.
According to "The New York Times", builders and real estate speculators predicted that the Empire State Building would be the world's tallest building "for many years", thus ending the great New York City skyscraper rivalry. At the time, most engineers agreed that it would be difficult to build a building taller than , even with the hardy Manhattan bedrock as a foundation. Technically, it was believed possible to build a tower of up to , but it was deemed uneconomical to do so, especially during the Great Depression. As the tallest building in the world, at that time, and the first one to exceed 100 floors, the Empire State Building became an icon of the city and, ultimately, of the nation.
In 1932, the Fifth Avenue Association gave the tower its 1931 "gold medal" for architectural excellence, signifying that the Empire State had been the best-designed building on Fifth Avenue to open in 1931. A year later, on March 2, 1933, the movie "King Kong" was released. The movie, which depicted a large stop motion ape named Kong climbing the Empire State Building, made the still-new building into a cinematic icon.
The Empire State Building's opening coincided with the Great Depression in the United States, and as a result much of its office space was vacant from its opening. In the first year, only 23% of the available space was rented; by contrast, in the early 1920s a comparable new building would typically have been 52% occupied upon opening and 90% rented within five years. The lack of renters led New Yorkers to deride the building as the "Empty State Building".
Eugenics
Eugenics (; from Greek εὐ- "good" and γενής "come into being, growing") is a set of beliefs and practices that aim to improve the genetic quality of a human population, historically by excluding people and groups judged to be inferior and promoting those judged to be superior.
The concept predates the term; Plato suggested applying the principles of selective breeding to humans around 400 BC. Early advocates of eugenics in the 19th century regarded it as a way of improving groups of people. In contemporary usage, the term "eugenics" is closely associated with scientific racism and white supremacism. Modern bioethicists who advocate new eugenics characterise it as a way of enhancing individual traits, regardless of group membership.
While eugenic principles have been practiced as early as ancient Greece, the contemporary history of eugenics began in the early 20th century, when a popular eugenics movement emerged in the United Kingdom, and then spread to many countries, including the United States, Canada, and most European countries. In this period, people from across the political spectrum espoused eugenic ideas. Consequently, many countries adopted eugenic policies, intended to improve the quality of their populations' genetic stock. Such programs included both "positive" measures, such as encouraging individuals deemed particularly "fit" to reproduce, and "negative" measures, such as marriage prohibitions and forced sterilization of people deemed unfit for reproduction. Those deemed "unfit to reproduce" often included people with mental or physical disabilities, people who scored in the low ranges on different IQ tests, criminals and "deviants," and members of disfavored minority groups.
The eugenics movement became associated with Nazi Germany and the Holocaust when the defense of many of the defendants at the Nuremberg trials of 1945 to 1946 attempted to justify their human-rights abuses by claiming there was little difference between the Nazi eugenics programs and the U.S. eugenics programs. In the decades following World War II, with more emphasis on human rights, many countries began to abandon eugenics policies, although some Western countries (the United States, Canada, and Sweden among them) continued to carry out forced sterilizations.
Since the 1980s and 1990s, with new assisted reproductive technology procedures available, such as gestational surrogacy (available since 1985), preimplantation genetic diagnosis (available since 1989), and cytoplasmic transfer (first performed in 1996), concern has grown about the possible revival of a more potent form of eugenics after decades of promoting human rights.
A criticism of eugenics policies is that, regardless of whether "negative" or "positive" policies are used, they are susceptible to abuse because the genetic selection criteria are determined by whichever group has political power at the time. Furthermore, many criticize "negative eugenics" in particular as a violation of basic human rights, seen since 1968's Proclamation of Tehran as including the right to reproduce. Another criticism is that eugenics policies eventually lead to a loss of genetic diversity and thereby to inbreeding depression. Yet another criticism of contemporary eugenics policies is that they propose to permanently and artificially disrupt millions of years of evolution, and that attempting to create genetic lines "clean" of "disorders" can have far-reaching downstream effects in the genetic ecology, including negative effects on immunity and on species resilience.
The concept of positive eugenics to produce better human beings has existed at least since Plato suggested selective mating to produce a guardian class. In Sparta, every Spartan child was inspected by the council of elders, the Gerousia, which determined if the child was fit to live or not. In the early years of ancient Rome, a Roman father was obliged by law to immediately kill his child if they were "dreadfully deformed". According to Tacitus, a Roman of the Imperial Period, the Germanic tribes of his day killed any member of their community they deemed cowardly, unwarlike or "stained with abominable vices", usually by drowning them in swamps. Modern historians, however, see Tacitus' ethnographic writing as unreliable in such details.
The idea of a modern project of improving the human population through selective breeding was originally developed by Francis Galton, and was initially inspired by Darwinism and its theory of natural selection. Galton had read his half-cousin Charles Darwin's theory of evolution, which sought to explain the development of plant and animal species, and desired to apply it to humans. Based on his biographical studies, Galton believed that desirable human qualities were hereditary traits, although Darwin strongly disagreed with this elaboration of his theory. In 1883, one year after Darwin's death, Galton gave his research a name: "eugenics". With the introduction of genetics, eugenics became associated with genetic determinism, the belief that human character is entirely or mostly determined by genes, unaffected by education or living conditions. Many of the early geneticists were not Darwinians, and evolutionary theory was not needed for eugenics policies based on genetic determinism. Throughout its recent history, eugenics has remained controversial.
Eugenics became an academic discipline at many colleges and universities and received funding from many sources. Organizations were formed to win public support and sway opinion towards responsible eugenic values in parenthood, including the British Eugenics Education Society of 1907 and the American Eugenics Society of 1921. Both sought support from leading clergymen and modified their message to meet religious ideals. In 1909, the Anglican clergymen William Inge and James Peile both wrote for the British Eugenics Education Society. Inge was an invited speaker at the 1921 International Eugenics Conference, which was also endorsed by the Roman Catholic Archbishop of New York, Patrick Joseph Hayes. The book "The Passing of the Great Race" ("Or, The Racial Basis of European History") by American eugenicist, lawyer, and amateur anthropologist Madison Grant was published in 1916. Though it later proved influential, the book was largely ignored when it first appeared, and it went through several revisions and editions. Nevertheless, the book was used by advocates of restricted immigration as justification for what became known as "scientific racism". Three International Eugenics Conferences presented a global venue for eugenists, with meetings in 1912 in London and in 1921 and 1932 in New York City. Eugenic policies were first implemented in the early 1900s in the United States; they also took root in France, Germany, and Great Britain. Later, in the 1920s and 1930s, the eugenic policy of sterilizing certain mental patients was implemented in other countries, including Belgium, Brazil, Canada, Japan, and Sweden. Frederick Osborn's 1937 journal article "Development of a Eugenic Philosophy" framed it as a social philosophy—a philosophy with implications for social order. That definition is not universally accepted. Osborn advocated for higher rates of sexual reproduction among people with desired traits ("positive eugenics") and reduced rates of sexual reproduction or sterilization of people with less-desired or undesired traits ("negative eugenics").
In addition to being practiced in a number of countries, eugenics was internationally organized through the International Federation of Eugenics Organizations. Its scientific aspects were carried on through research bodies such as the Kaiser Wilhelm Institute of Anthropology, Human Heredity, and Eugenics, the Cold Spring Harbor Carnegie Institution for Experimental Evolution, and the Eugenics Record Office. Politically, the movement advocated measures such as sterilization laws. In its moral dimension, eugenics rejected the doctrine that all human beings are born equal and redefined moral worth purely in terms of genetic fitness. Its racist elements included pursuit of a pure "Nordic race" or "Aryan" genetic pool and the eventual elimination of "unfit" races. Many leading British politicians subscribed to the theories of eugenics. Winston Churchill supported the British Eugenics Society and was an honorary vice president for the organization. Churchill believed that eugenics could solve "race deterioration" and reduce crime and poverty.
Early critics of the philosophy of eugenics included the American sociologist Lester Frank Ward, the English writer G. K. Chesterton, the German-American anthropologist Franz Boas, who argued that advocates of eugenics greatly overestimate the influence of biology, and Scottish tuberculosis pioneer and author Halliday Sutherland. Ward's 1913 article "Eugenics, Euthenics, and Eudemics", Chesterton's 1917 book "Eugenics and Other Evils", and Boas' 1916 article "Eugenics" (published in "The Scientific Monthly") were all harshly critical of the rapidly growing movement. Sutherland identified eugenists as a major obstacle to the eradication and cure of tuberculosis in his 1917 address "Consumption: Its Cause and Cure", and his criticism of eugenists and Neo-Malthusians in his 1921 book "Birth Control" led to a writ for libel from the eugenist Marie Stopes. Several biologists were also antagonistic to the eugenics movement, including Lancelot Hogben. Other biologists, such as J. B. S. Haldane and R. A. Fisher, expressed skepticism that sterilization of "defectives" would lead to the disappearance of undesirable genetic traits.
Among institutions, the Catholic Church was an opponent of state-enforced sterilizations. Attempts by the Eugenics Education Society to persuade the British government to legalize voluntary sterilization were opposed by Catholics and by the Labour Party. The American Eugenics Society initially gained some Catholic supporters, but Catholic support declined following the 1930 papal encyclical "Casti connubii". In this, Pope Pius XI explicitly condemned sterilization laws: "Public magistrates have no direct power over the bodies of their subjects; therefore, where no crime has taken place and there is no cause present for grave punishment, they can never directly harm, or tamper with the integrity of the body, either for the reasons of eugenics or for any other reason."
As a social movement, eugenics reached its greatest popularity in the early decades of the 20th century, when it was practiced around the world and promoted by governments, institutions, and influential individuals. Many countries enacted various eugenics policies, including genetic screening, birth control, the promotion of differential birth rates, marriage restrictions, segregation (both racial segregation and sequestering the mentally ill), compulsory sterilization, and forced abortions or forced pregnancies, in some cases culminating in genocide. By 2014, gene selection (rather than "people selection") was made possible through advances in genome editing, leading to what is sometimes called "new eugenics", also known as "neo-eugenics", "consumer eugenics", or "liberal eugenics".
The scientific reputation of eugenics started to decline in the 1930s, a time when Ernst Rüdin used eugenics as a justification for the racial policies of Nazi Germany. Adolf Hitler had praised and incorporated eugenic ideas in "Mein Kampf" in 1925 and emulated eugenic legislation for the sterilization of "defectives" that had been pioneered in the United States once he took power. Some common early 20th century eugenics methods involved identifying and classifying individuals and their families, including the poor, mentally ill, blind, deaf, developmentally disabled, promiscuous women, homosexuals, and racial groups (such as the Roma and Jews in Nazi Germany) as "degenerate" or "unfit", and therefore led to segregation, institutionalization, sterilization, euthanasia, and even mass murder. The Nazi practice of euthanasia was carried out on hospital patients in the Aktion T4 centers such as Hartheim Castle.
By the end of World War II, many eugenics laws were abandoned, having become associated with Nazi Germany. H. G. Wells, who had called for "the sterilization of failures" in 1904, stated in his 1940 book "The Rights of Man: Or What Are We Fighting For?" that among the human rights he believed should be available to all people was "a prohibition on mutilation, sterilization, torture, and any bodily punishment". After World War II, the practice of "imposing measures intended to prevent births within [a national, ethnical, racial or religious] group" fell within the definition of the new international crime of genocide, set out in the Convention on the Prevention and Punishment of the Crime of Genocide. The Charter of Fundamental Rights of the European Union also proclaims "the prohibition of eugenic practices, in particular those aiming at selection of persons". In spite of the decline in discriminatory eugenics laws, some government-mandated sterilizations continued into the 21st century. During the ten years President Alberto Fujimori led Peru from 1990 to 2000, 2,000 persons were allegedly involuntarily sterilized. China maintained its one-child policy until 2015, as well as a suite of other eugenics-based legislation intended to reduce population size and manage the fertility rates of different populations. In 2007, the United Nations reported coercive sterilizations and hysterectomies in Uzbekistan. During the years 2005 to 2013, nearly one-third of the 144 California prison inmates who were sterilized did not give lawful consent to the operation.
Developments in genetic, genomic, and reproductive technologies at the beginning of the 21st century have raised numerous questions regarding the ethical status of eugenics, effectively creating a resurgence of interest in the subject. Some, such as UC Berkeley sociologist Troy Duster, have claimed that modern genetics is a back door to eugenics. This view was shared by then-White House Assistant Director for Forensic Sciences, Tania Simoncelli, who stated in a 2003 publication by the Population and Development Program at Hampshire College that advances in pre-implantation genetic diagnosis (PGD) are moving society to a "new era of eugenics", and that, unlike the Nazi eugenics, modern eugenics is consumer driven and market based, "where children are increasingly regarded as made-to-order consumer products". In a 2006 newspaper article, Richard Dawkins said that discussion regarding eugenics was inhibited by the shadow of Nazi misuse, to the extent that some scientists would not admit that breeding humans for certain abilities is at all possible. He believes that it is not physically different from breeding domestic animals for traits such as speed or herding skill. Dawkins felt that enough time had elapsed to at least ask just what the ethical differences were between breeding for ability versus training athletes or forcing children to take music lessons, though he could think of persuasive reasons to draw the distinction.
Lee Kuan Yew, the so-called "Founding Father" of Singapore, started promoting eugenics as early as 1983.
In October 2015, the United Nations' International Bioethics Committee wrote that the ethical problems of human genetic engineering should not be confused with the ethical problems of the 20th-century eugenics movements. However, the committee held that such engineering remains problematic because it challenges the idea of human equality and opens up new forms of discrimination and stigmatization against those who do not want, or cannot afford, the technology.
Transhumanism is often associated with eugenics, although most transhumanists holding similar views nonetheless distance themselves from the term "eugenics" (preferring "germinal choice" or "reprogenetics") to avoid having their position confused with the discredited theories and practices of early-20th-century eugenic movements.
Prenatal screening can be considered a form of contemporary eugenics because it may lead to abortions of fetuses with undesirable traits. In California, State Senator Skinner proposed a system to compensate victims of the well-documented prison sterilizations that resulted from the state's eugenics programs, but the proposal did not pass by the bill's 2018 deadline in the Legislature.
The term "eugenics" and its modern field of study were first formulated by Francis Galton in 1883, drawing on the recent work of his half-cousin Charles Darwin. Galton published his observations and conclusions in his book "Inquiries into Human Faculty and Its Development".
The origins of the concept began with certain interpretations of Mendelian inheritance and the theories of August Weismann. The word "eugenics" is derived from the Greek word "eu" ("good" or "well") and the suffix "-genēs" ("born"); Galton intended it to replace the word "stirpiculture", which he had used previously but which had come to be mocked due to its perceived sexual overtones. Galton defined eugenics as "the study of all agencies under human control which can improve or impair the racial quality of future generations".
Historically, the term "eugenics" has referred to everything from prenatal care for mothers to forced sterilization and euthanasia. To population geneticists, the term has included the avoidance of inbreeding without altering allele frequencies; for example, J. B. S. Haldane wrote that "the motor bus, by breaking up inbred village communities, was a powerful eugenic agent." Debate as to what exactly counts as eugenics continues today.
Edwin Black, journalist and author of "War Against the Weak", claims eugenics is often deemed a pseudoscience because what is defined as a genetic improvement of a desired trait is often deemed a cultural choice rather than a matter that can be determined through objective scientific inquiry. The most disputed aspect of eugenics has been the definition of "improvement" of the human gene pool, such as what is a beneficial characteristic and what is a defect. Historically, this aspect of eugenics was tainted with scientific racism and pseudoscience.
Early eugenicists were mostly concerned with factors of perceived intelligence that often correlated strongly with social class. These included Karl Pearson and Walter Weldon, who worked on this at University College London. In his lecture "Darwinism, Medical Progress and Eugenics", Pearson claimed that everything concerning eugenics fell into the field of medicine.
Eugenic policies have been conceptually divided into two categories. Positive eugenics aims to encourage reproduction among the genetically advantaged, for example, the intelligent, the healthy, and the successful. Possible approaches include financial and political stimuli, targeted demographic analyses, "in vitro" fertilization, egg transplants, and cloning. Negative eugenics aims to eliminate, through sterilization or segregation, those deemed physically, mentally, or morally "undesirable"; its methods have included abortion, sterilization, and other forms of family planning. Both positive and negative eugenics can be coercive; abortion for "fit" women, for example, was illegal in Nazi Germany.
The first major challenge to conventional eugenics based on genetic inheritance was made in 1915 by Thomas Hunt Morgan. Through the discovery of a white-eyed fruit fly ("Drosophila melanogaster") hatched from a red-eyed lineage, he demonstrated that major genetic changes could occur through mutation, outside of inheritance. Additionally, Morgan criticized the view that traits such as intelligence and criminality were hereditary because these traits were subjective. Despite Morgan's public rejection of eugenics, much of his genetic research was adopted by proponents of eugenics.
The heterozygote test is used for the early detection of recessive hereditary diseases, allowing for couples to determine if they are at risk of passing genetic defects to a future child. The goal of the test is to estimate the likelihood of passing the hereditary disease to future descendants.
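As a worked illustration of the arithmetic involved (a standard Hardy-Weinberg calculation, not tied to any particular screening program), suppose a recessive disease allele has population frequency q = 0.01, with p = 1 - q:

    q^2 = 0.0001 \quad \text{(affected at birth: 1 in 10,000)}
    2pq \approx 2q = 0.02 \quad \text{(carriers: about 1 in 50)}
    P(\text{affected child} \mid \text{both parents are carriers}) = \tfrac{1}{4}

So a couple in which both partners test positive as carriers faces a one-in-four risk per child, even though only one person in ten thousand is affected at birth.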
Recessive traits can be severely reduced but never eliminated unless the complete genetic makeup of all members of the pool were known. As only very few undesirable traits, such as Huntington's disease, are dominant, it could be argued from certain perspectives that the practicality of "eliminating" traits is quite low.
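A standard population-genetics result makes the slowness of such reduction precise: if affected recessive homozygotes are completely prevented from reproducing, the frequency q of the allele declines across generations t only as

    q_t = \frac{q_0}{1 + t\,q_0}

so halving an initial allele frequency of q_0 = 0.01 takes t = 1/q_0 = 100 generations, on the order of two and a half millennia of sustained selection.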
There are examples of eugenic acts that managed to lower the prevalence of recessive diseases, although not influencing the prevalence of heterozygote carriers of those diseases. The elevated prevalence of certain genetically transmitted diseases among the Ashkenazi Jewish population (Tay–Sachs, cystic fibrosis, Canavan's disease, and Gaucher's disease), has been decreased in current populations by the application of genetic screening.
Pleiotropy occurs when one gene influences multiple, seemingly unrelated phenotypic traits; an example is phenylketonuria, a human disease that affects multiple systems but is caused by one gene defect. Andrzej Pękalski, from the University of Wrocław, argues that eugenics can cause harmful loss of genetic diversity if a eugenics program selects against a pleiotropic gene that is also associated with a positive trait. Pękalski uses the example of a coercive government eugenics program that prohibits people with myopia from breeding but has the unintended consequence of also selecting against high intelligence, since the two go together.
Eugenic policies may lead to a loss of genetic diversity. Further, a culturally accepted "improvement" of the gene pool may result in extinction, due to increased vulnerability to disease, reduced ability to adapt to environmental change, and other factors that may not be anticipated in advance. This has been evidenced in numerous instances in isolated island populations. A long-term, species-wide eugenics plan might lead to such a scenario because the elimination of traits deemed undesirable would, by definition, reduce genetic diversity.
Edward M. Miller claims that, in any one generation, any realistic program should make only minor changes in a fraction of the gene pool, giving plenty of time to reverse direction if unintended consequences emerge, reducing the likelihood of the elimination of desirable genes. Miller also argues that any appreciable reduction in diversity is so far in the future that little concern is needed for now.
While the science of genetics has increasingly provided means by which certain characteristics and conditions can be identified and understood, given the complexity of human genetics, culture, and psychology, at this point there is no agreed objective means of determining which traits might be ultimately desirable or undesirable. Carrying a single copy of the recessive allele for some conditions, such as sickle-cell disease and cystic fibrosis, confers resistance to malaria and cholera respectively, so eliminating these genes is undesirable in places where such diseases are common.
Societal and political consequences of eugenics call for a place in the discussion on the ethics behind the eugenics movement. Many of the ethical concerns regarding eugenics arise from its controversial past, prompting a discussion on what place, if any, it should have in the future. Advances in science have changed eugenics. In the past, eugenics had more to do with sterilization and enforced reproduction laws. Now, in the age of a progressively mapped genome, embryos can be tested for susceptibility to disease, gender, and genetic defects, and alternative methods of reproduction such as in vitro fertilization are becoming more common. Therefore, eugenics is no longer "ex post facto" regulation of the living but instead preemptive action on the unborn.
With this change, however, there are ethical concerns which lack adequate attention, and which must be addressed before eugenic policies can be properly implemented in the future. Sterilized individuals, for example, could volunteer for the procedure, albeit under incentive or duress, or at least voice their opinion. The fetus on which these new eugenic procedures are performed cannot consent or express an opinion. Philosophers disagree about the proper framework for reasoning about such actions, which change the very identity and existence of future persons.
Some have described potential "eugenics wars" as the worst-case outcome of eugenics. This scenario would mean the return of coercive state-sponsored genetic discrimination and human rights violations such as compulsory sterilization of persons with genetic defects, the killing of the institutionalized and, specifically, segregation and genocide of races perceived as inferior. Health law professor George Annas and technology law professor Lori Andrews are prominent advocates of the position that the use of these technologies could lead to such human-posthuman caste warfare. According to eugenics advocate Richard Lynn, the criticism of eugenics that "it inevitably leads to measures that are unethical" is a slippery slope argument.
In his 2003 book "Enough: Staying Human in an Engineered Age", environmental ethicist Bill McKibben argued at length against germinal choice technology and other advanced biotechnological strategies for human enhancement. He writes that it would be morally wrong for humans to tamper with fundamental aspects of themselves (or their children) in an attempt to overcome universal human limitations, such as vulnerability to aging, maximum life span and biological constraints on physical and cognitive ability. Attempts to "improve" themselves through such manipulation would remove limitations that provide a necessary context for the experience of meaningful human choice. He claims that human lives would no longer seem meaningful in a world where such limitations could be overcome with technology. Even the goal of using germinal choice technology for clearly therapeutic purposes should be relinquished, since it would inevitably produce temptations to tamper with such things as cognitive capacities. He argues that it is possible for societies to benefit from renouncing particular technologies, using as examples Ming China, Tokugawa Japan and the contemporary Amish.
Some, for example Nathaniel C. Comfort from Johns Hopkins University, claim that the change from state-led reproductive-genetic decision-making to individual choice has moderated the worst abuses of eugenics by transferring the decision-making from the state to the patient and their family. Comfort suggests that "the eugenic impulse drives us to eliminate disease, live longer and healthier, with greater intelligence, and a better adjustment to the conditions of society; and the health benefits, the intellectual thrill and the profits of genetic bio-medicine are too great for us to do otherwise." Others, such as bioethicist Stephen Wilkinson of Keele University and Honorary Research Fellow Eve Garrard at the University of Manchester, claim that some aspects of modern genetics can be classified as eugenics, but that this classification does not inherently make modern genetics immoral.
In their book published in 2000, "From Chance to Choice: Genetics and Justice", bioethicists Allen Buchanan, Dan Brock, Norman Daniels and Daniel Wikler argued that liberal societies have an obligation to encourage as wide an adoption of eugenic enhancement technologies as possible (so long as such policies do not infringe on individuals' reproductive rights or exert undue pressures on prospective parents to use these technologies) in order to maximize public health and minimize the inequalities that may result from both natural genetic endowments and unequal access to genetic enhancements.
In his book "A Theory of Justice" (1971), American philosopher John Rawls argued that "Over time a society is to take steps to preserve the general level of natural abilities and to prevent the diffusion of serious defects". The Original position, a hypothetical situation developed by Rawls, has been used as an argument for "negative eugenics".
The film "Gattaca" (1997) provides a fictional example of a dystopian society that uses eugenics to decide what people are capable of and their place in the world. Although critically acclaimed, "Gattaca" was not a box office success, but it is said to have crystallized the debate over the controversial topic of human genetic engineering. The film's dystopian depiction of "genoism" has been cited by many bioethicists and laypeople in support of their hesitancy about, or opposition to, eugenics and the societal acceptance of the genetic-determinist ideology that may frame it. In a 1997 review of the film for the journal "Nature Genetics", molecular biologist Lee M. Silver stated that ""Gattaca" is a film that all geneticists should see if for no other reason than to understand the perception of our trade held by so many of the public-at-large".
| https://en.wikipedia.org/wiki?curid=9737
Email
Electronic mail (email or e-mail) is a method of exchanging messages ("mail") between people using electronic devices. Email entered limited use in the 1960s, but users could only send to users of the same computer, and some early email systems required the author and the recipient to both be online simultaneously, similar to instant messaging. Ray Tomlinson is credited as the inventor of email; in 1971, he developed the first system able to send mail between users on different hosts across the ARPANET, using the @ sign to link the user name with a destination server. By the mid-1970s, this was the form recognized as email.
Email operates across computer networks, primarily the Internet. Today's email systems are based on a store-and-forward model. Email servers accept, forward, deliver, and store messages. Neither the users nor their computers are required to be online simultaneously; they need connect only briefly, typically to a mail server or a webmail interface, to send or receive messages or to download them.
Originally an ASCII text-only communications medium, Internet email was extended by Multipurpose Internet Mail Extensions (MIME) to carry text in other character sets and multimedia content attachments. International email, with internationalized email addresses using UTF-8, is standardized but not widely adopted.
The history of modern Internet email services reaches back to the early ARPANET, with standards for encoding email messages published as early as 1973 (RFC 561). An email message sent in the early 1970s is similar to a basic email sent today.
Historically, the term "electronic mail" was applied to any electronic document transmission. For example, several writers in the early 1970s used the term to refer to fax document transmission. As a result, it is difficult to identify the first use of the term with the specific meaning it has today.
The term "electronic mail" has been in use with its current meaning since at least 1975, and variations of the shorter "E-mail" have been in use since at least 1979:
In the original protocol, "RFC 524", none of these forms was used. The service is simply referred to as "mail", and a single piece of electronic mail is called a "message".
An Internet e-mail consists of an envelope and content; the content consists of a header and a body.
Computer-based mail and messaging became possible with the advent of time-sharing computers in the early 1960s, and informal methods of using shared files to pass messages were soon expanded into the first mail systems. Most developers of early mainframes and minicomputers developed similar, but generally incompatible, mail applications. Over time, a complex web of gateways and routing systems linked many of them. Many US universities were part of the ARPANET (created in the late 1960s), which aimed at software portability between its systems. In 1971 the first ARPANET network email was sent, introducing the now-familiar address syntax with the '@' symbol designating the user's system address. The Simple Mail Transfer Protocol (SMTP) was introduced in 1981.
For a time in the late 1980s and early 1990s, it seemed likely that either a proprietary commercial system or the X.400 email system, part of the Government Open Systems Interconnection Profile (GOSIP), would predominate. However, once the final restrictions on carrying commercial traffic over the Internet ended in 1995, a combination of factors made the current Internet suite of SMTP, POP3 and IMAP email protocols the standard.
In a typical sequence of events, sender Alice uses a mail user agent (MUA) to compose a message addressed to the email address of the recipient, Bob. Her MUA submits the message to her provider's mail submission agent over SMTP; that server looks up the recipient domain's mail exchanger (MX) record in DNS and relays the message to Bob's mail server, which delivers it to Bob's mailbox, from which Bob's MUA later retrieves it.
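The submission step can be sketched with Python's standard library; the server name, port, credentials, and addresses below are hypothetical placeholders rather than a definitive implementation:

    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "alice@a.example"
    msg["To"] = "bob@b.example"
    msg["Subject"] = "Hello"
    msg.set_content("Hi Bob, this is a test message.")

    # Submit the message to the (hypothetical) mail submission agent over
    # SMTP; the server then relays it toward the recipient's mail exchanger.
    with smtplib.SMTP("mail.a.example", 587) as smtp:
        smtp.starttls()                      # upgrade the connection to TLS
        smtp.login("alice", "app-password")  # authenticate to the server
        smtp.send_message(msg)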
In addition to this typical sequence, alternatives and complications exist in the email system.
Many MTAs used to accept messages for any recipient on the Internet and do their best to deliver them. Such MTAs are called "open mail relays". This was very important in the early days of the Internet when network connections were unreliable. However, this mechanism proved to be exploitable by originators of unsolicited bulk email and as a consequence open mail relays have become rare, and many MTAs do not accept messages from open mail relays.
The basic Internet message format used for email is defined by RFC 5322, with encoding of non-ASCII data and multimedia content attachments defined in RFC 2045 through RFC 2049, collectively called "Multipurpose Internet Mail Extensions" or "MIME". The extensions defined for international email apply only to email. RFC 5322 replaced the earlier RFC 2822 in 2008; RFC 2822 had itself replaced RFC 822, the standard for Internet email for decades, in 2001. Published in 1982, RFC 822 was based on the earlier RFC 733 for the ARPANET.
Internet email messages consist of two sections, the 'header' and the 'body', known collectively as the 'content'.
The header is structured into fields such as From, To, CC, Subject, Date, and other information about the email. In the process of transporting email messages between systems, SMTP communicates delivery parameters and information using message header fields. The body contains the message, as unstructured text, sometimes containing a signature block at the end. The header is separated from the body by a blank line.
RFC 5322 specifies the syntax of the email header. Each email message has a header (the "header section" of the message, according to the specification), comprising a number of fields ("header fields"). Each field has a name ("field name" or "header field name"), followed by the separator character ":", and a value ("field body" or "header field body").
Each field name begins in the first character of a new line in the header section and begins with a non-whitespace printable character. It ends with the separator character ":", which is followed by the field value (the "field body"). The value can continue onto subsequent lines if those lines have a space or tab as their first character. Field names and, without SMTPUTF8, field bodies are restricted to 7-bit ASCII characters. Some non-ASCII values may be represented using MIME encoded words.
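For illustration, a minimal message conforming to this syntax might look as follows (all names, addresses, and identifiers are hypothetical):

    From: Alice Example <alice@a.example>
    To: Bob Example <bob@b.example>
    Subject: Lunch on Friday?
    Date: Fri, 21 Nov 1997 09:55:06 -0600
    Message-ID: <1234.5678@a.example>

    Hi Bob,

    Are you free for lunch on Friday?

    Alice

The blank line after the "Message-ID:" field is the separator between the header section and the body.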
Email header fields can be multi-line, with each line recommended to be no more than 78 characters, although the technical limit is 998 characters. Header fields defined by RFC 5322 contain only US-ASCII characters; for encoding characters in other sets, a syntax specified in RFC 2047 may be used. The IETF EAI working group has defined standards-track extensions, replacing previous experimental extensions, so that UTF-8 encoded Unicode characters may be used within the header. In particular, this allows email addresses to use non-ASCII characters. Such addresses are supported by Google and Microsoft products and promoted by some government agencies.
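The RFC 2047 "encoded word" mechanism can be demonstrated with a minimal sketch using Python's standard library (the sample text is arbitrary):

    from email.header import Header

    # Encode a non-ASCII Subject value as an RFC 2047 encoded word,
    # which is safe to place in a 7-bit ASCII header.
    encoded = Header("Grüße aus Köln", "utf-8").encode()
    print(encoded)  # =?utf-8?b?R3LDvMOfZSBhdXMgS8O2bG4=?=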
The message header must include at least the "From:" and "Date:" fields.
RFC 3864 describes registration procedures for message header fields at the IANA; it provides for permanent and provisional field names, including fields defined for MIME, netnews, and HTTP, and references the relevant RFCs. Common header fields for email include "To:", "Cc:", "Bcc:", "Subject:", "Date:", "From:", "Reply-To:", and "Message-ID:".
The "To:" field may be unrelated to the addresses to which the message is delivered. The delivery list is supplied separately to the transport protocol, SMTP, which may be extracted from the header content. The "To:" field is similar to the addressing at the top of a conventional letter delivered according to the address on the outer envelope. In the same way, the "From:" field may not be the sender. Some mail servers apply email authentication systems to messages relayed. Data pertaining to the server's activity is also part of the header, as defined below.
SMTP defines the "trace information" of a message saved in the header using the following two fields:
Other fields added on top of the header by the receiving server may be called "trace fields".
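Trace fields added by a receiving server might look like the following (hosts, identifiers, and timestamps are hypothetical; 192.0.2.1 is from the address range reserved for documentation):

    Return-Path: <alice@a.example>
    Received: from mail.a.example (mail.a.example [192.0.2.1])
            by mx.b.example with ESMTP id k4F9xQ2
            for <bob@b.example>; Fri, 21 Nov 1997 10:01:10 -0600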
Internet email was designed for 7-bit ASCII. Most email software is 8-bit clean but must assume it will communicate with 7-bit servers and mail readers. The MIME standard introduced character set specifiers and two content transfer encodings to enable transmission of non-ASCII data: quoted-printable, for mostly 7-bit content with a few characters outside that range, and base64, for arbitrary binary data. The 8BITMIME and BINARY extensions were introduced to allow transmission of mail without the need for these encodings, but many mail transport agents may not support them. In some countries, several encoding schemes coexist; as a result, by default, a message in a non-Latin alphabet language appears in unreadable form unless the sender and receiver happen to use the same encoding scheme. Therefore, for international character sets, Unicode is growing in popularity.
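The two content transfer encodings can be compared with a short Python sketch (the sample string is arbitrary):

    import base64
    import quopri

    text = "Héllo, wörld".encode("utf-8")

    # Quoted-printable keeps mostly-ASCII text readable; only the
    # non-ASCII bytes are escaped as =XX hexadecimal pairs.
    print(quopri.encodestring(text).decode("ascii"))  # H=C3=A9llo, w=C3=B6rld
    # Base64 re-encodes all bytes and suits arbitrary binary data.
    print(base64.b64encode(text).decode("ascii"))     # SMOpbGxvLCB3w7ZybGQ=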
Most modern graphic email clients allow the use of either plain text or HTML for the message body, at the option of the user. HTML email messages often include an automatically generated plain text copy for compatibility. Advantages of HTML include the ability to include in-line links and images, set apart previous messages in block quotes, wrap naturally on any display, use emphasis such as underlining and italics, and change font styles. Disadvantages include the increased size of the email, privacy concerns about web bugs, abuse of HTML email as a vector for phishing attacks, and the spread of malicious software.
Some web-based mailing lists recommend all posts be made in plain-text, with 72 or 80 characters per line for all the above reasons, and because they have a significant number of readers using text-based email clients such as Mutt. Some Microsoft email clients may allow rich formatting using their proprietary Rich Text Format (RTF), but this should be avoided unless the recipient is guaranteed to have a compatible email client.
Messages are exchanged between hosts using the Simple Mail Transfer Protocol with software programs called mail transfer agents (MTAs); and delivered to a mail store by programs called mail delivery agents (MDAs, also sometimes called local delivery agents, LDAs). Accepting a message obliges an MTA to deliver it, and when a message cannot be delivered, that MTA must send a bounce message back to the sender, indicating the problem.
Users can retrieve their messages from servers using standard protocols such as POP or IMAP, or, as is more likely in a large corporate environment, with a proprietary protocol specific to Novell GroupWise, Lotus Notes, or Microsoft Exchange servers. Programs used for retrieving, reading, and managing email are called mail user agents (MUAs).
Mail can be stored on the client, on the server side, or in both places. Standard formats for mailboxes include Maildir and mbox. Several prominent email clients use their own proprietary format and require conversion software to transfer email between them. Server-side storage is often in a proprietary format but since access is through a standard protocol such as IMAP, moving email from one server to another can be done with any MUA supporting the protocol.
Many current email users do not run MTA, MDA or MUA programs themselves, but use a web-based email platform, such as Gmail or Yahoo! Mail, that performs the same tasks. Such webmail interfaces allow users to access their mail with any standard web browser, from any computer, rather than relying on an email client.
Upon reception of email messages, email client applications save messages in operating system files in the file system. Some clients save individual messages as separate files, while others use various database formats, often proprietary, for collective storage. A historical standard of storage is the "mbox" format. The specific format used is often indicated by special filename extensions, such as "eml" for individual messages in RFC 5322 format, "emlx" for Apple Mail messages, "msg" for Microsoft Outlook messages, and "mbx" for some mbox variants.
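Stored mail in the mbox format can be read programmatically; the following minimal sketch uses Python's standard "mailbox" module (the file name is hypothetical):

    import mailbox

    # Open a local mailbox stored in the traditional mbox format,
    # in which all messages are concatenated in a single file.
    box = mailbox.mbox("archive.mbox")
    for message in box:
        print(message["From"], "|", message["Subject"])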
Some applications (like Apple Mail) leave attachments encoded in messages for searching while also saving separate copies of the attachments. Others separate attachments from messages and save them in a specific directory.
The "mailto" URI scheme, as registered with the IANA, covers SMTP email addresses. Though its use is not strictly defined, URLs of this form are intended to open the new-message window of the user's mail client when the URL is activated, with the address defined by the URL placed in the "To:" field. Many clients also support query string parameters for the other email fields, such as the subject line or carbon copy recipients.
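For example, a link of the following form (with hypothetical addresses) would open a new message to bob@b.example with a pre-filled subject and carbon copy recipient:

    mailto:bob@b.example?subject=Lunch%20on%20Friday&cc=carol@b.example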
Many email providers have a web-based email client (e.g. AOL Mail, Gmail, Outlook.com, and Yahoo! Mail). This allows users to log into the email account by using any compatible web browser to send and receive their email. Mail is typically not downloaded to the client, so it cannot be read without a current Internet connection.
The Post Office Protocol 3 (POP3) is a mail access protocol used by a client application to read messages from the mail server. Received messages are often deleted from the server. POP supports simple download-and-delete requirements for access to remote mailboxes (termed "maildrop" in the POP RFCs).
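This download-and-delete model can be sketched with Python's standard "poplib" module (the server name and credentials are hypothetical):

    import poplib

    pop = poplib.POP3_SSL("pop.b.example")
    pop.user("bob")
    pop.pass_("app-password")
    count, total_bytes = pop.stat()            # number and size of waiting messages
    for i in range(1, count + 1):
        response, lines, octets = pop.retr(i)  # download message i
        pop.dele(i)                            # mark it for deletion
    pop.quit()                                 # QUIT commits the deletions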
The Internet Message Access Protocol (IMAP) provides features to manage a mailbox from multiple devices. Small portable devices like smartphones are increasingly used to check email while traveling and to make brief replies; larger devices with better keyboard access are used to reply at greater length. IMAP shows the headers of messages, including the sender and the subject, and the device must explicitly request the download of specific messages. Usually, the mail is left in folders on the mail server.
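The header-first access pattern can be sketched with Python's standard "imaplib" module (the server name and credentials are hypothetical):

    import imaplib

    imap = imaplib.IMAP4_SSL("imap.b.example")
    imap.login("bob", "app-password")
    imap.select("INBOX", readonly=True)
    typ, data = imap.search(None, "UNSEEN")
    for num in data[0].split():
        # Fetch only selected header fields, not the full message body.
        typ, hdr = imap.fetch(num, "(BODY.PEEK[HEADER.FIELDS (FROM SUBJECT)])")
        print(hdr[0][1].decode())
    imap.logout()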
Messaging Application Programming Interface (MAPI) is used by Microsoft Outlook to communicate with Microsoft Exchange Server, as well as with a range of other email server products such as Axigen Mail Server, Kerio Connect, Scalix, Zimbra, HP OpenMail, IBM Lotus Notes, Zarafa, and Bynari, whose vendors have added MAPI support to allow their products to be accessed directly via Outlook.
Email has been widely accepted by businesses, governments, and non-governmental organizations in the developed world, and it is one of the key parts of an 'e-revolution' in workplace communication (the other key plank being widespread adoption of high-speed Internet). A sponsored 2010 study on workplace communication found that 83% of U.S. knowledge workers felt email was critical to their success and productivity at work.
It has some key benefits to business and other organizations, including reduced cost, increased speed of communication, and the creation of a written record of exchanges.
Email marketing via "opt-in" is often successfully used to send special sales offerings and new product information. Depending on the recipient's culture, email sent without permission—such as an "opt-in"—is likely to be viewed as unwelcome "email spam".
Many users access their personal emails from friends and family members using a personal computer in their house or apartment.
Email has become used on smartphones and on all types of computers. Mobile "apps" for email increase accessibility to the medium for users who are out of their homes. While in the earliest years of email, users could only access email on desktop computers, in the 2010s it became possible for users to check their email when they are away from home, whether across town or across the world. Alerts can also be sent to the smartphone or other devices to notify users immediately of new messages. This has given email the ability to be used for more frequent communication between users and allowed them to check their email and write messages throughout the day. By one estimate, there were approximately 1.4 billion email users worldwide, and around 50 billion non-spam emails were sent daily.
Individuals often check emails on smartphones for both personal and work-related messages. One study found that US adults check their email more than they browse the web or check their Facebook accounts, making email the most popular activity for users on their smartphones; 78% of the respondents reported checking their email on their phone. It also found that 30% of consumers use only their smartphone to check their email, and 91% were likely to check their email at least once per day on their smartphone. However, the percentage of consumers using email on a smartphone varies dramatically across countries; for example, 75% of consumers in the US used it, compared with only 17% in India.
The number of Americans visiting email web sites fell 6 percent after peaking in November 2009. For persons 12 to 17, the number was down 18 percent. Young people preferred instant messaging, texting, and social media. Technology writer Matt Richtel said in "The New York Times" that email was like the VCR, vinyl records, and film cameras—no longer cool and something older people do.
A 2015 survey of Android users showed that persons 13 to 24 used messaging apps 3.5 times as much as those over 45, and were far less likely to use email.
Email messages may have one or more attachments, which are additional files appended to the email. Typical attachments include Microsoft Word documents, PDF documents, and scanned images of paper documents. In principle, there is no technical restriction on the size or number of attachments, but in practice email clients, servers, and Internet service providers implement various limitations on the size of files or of the complete email, typically 25 MB or less. Furthermore, due to technical reasons, attachment sizes as seen by these transport systems can differ from what the user sees, which can be confusing to senders when trying to assess whether they can safely send a file by email. Where larger files need to be shared, various file hosting services are available and commonly used.
The ubiquity of email for knowledge workers and "white collar" employees has led to concerns that recipients face an "information overload" in dealing with increasing volumes of email. With the growth in mobile devices, by default employees may also receive work-related emails outside of their working day. This can lead to increased stress, decreased satisfaction with work, and some observers even argue it could have a significant negative economic effect, as efforts to read the many emails could reduce productivity.
Email "spam" is unsolicited bulk email. The low cost of sending such email meant that, by 2003, up to 30% of total email traffic was spam, and was threatening the usefulness of email as a practical tool. The US CAN-SPAM Act of 2003 and similar laws elsewhere had some impact, and a number of effective anti-spam techniques now largely mitigate the impact of spam by filtering or rejecting it for most users, but the volume sent is still very high—and increasingly consists not of advertisements for products, but malicious content or links. In September 2017, for example, the proportion of spam to legitimate email rose to 59.56%.
A range of malicious email types exist. These range from various types of email scams, including "social engineering" scams such as advance-fee scam "Nigerian letters", to phishing, email bombardment and email worms.
Email spoofing occurs when the email message header is designed to make the message appear to come from a known or trusted source. Email spam and phishing methods typically use spoofing to mislead the recipient about the true message origin. Email spoofing may be done as a prank, or as part of a criminal effort to defraud an individual or organization. An example of a potentially fraudulent email spoofing is if an individual creates an email that appears to be an invoice from a major company, and then sends it to one or more recipients. In some cases, these fraudulent emails incorporate the logo of the purported organization and even the email address may appear legitimate.
Email bombing is the intentional sending of large volumes of messages to a target address. The overloading of the target email address can render it unusable and can even cause the mail server to crash.
Today it can be important to distinguish between the Internet and internal email systems. Internet email may travel and be stored on networks and computers without the sender's or the recipient's control. During transit it is possible that third parties read or even modify the content. Internal mail systems, in which the information never leaves the organizational network, may be more secure, although information technology personnel and others whose duties involve monitoring or managing the system may have access to the email of other employees.
Email privacy, without some security precautions, can be compromised because messages are often transmitted in clear text, are stored on intermediate and destination servers where they may be read, and may be retained in backups long after deletion.
There are cryptography applications that can serve as a remedy to one or more of the above. For example, Virtual Private Networks or the Tor anonymity network can be used to encrypt traffic from the user machine to a safer network while GPG, PGP, SMEmail, or S/MIME can be used for end-to-end message encryption, and SMTP STARTTLS or SMTP over Transport Layer Security/Secure Sockets Layer can be used to encrypt communications for a single mail hop between the SMTP client and the SMTP server.
Additionally, many mail user agents do not protect logins and passwords, making them easy to intercept by an attacker. Encrypted authentication schemes such as SASL prevent this. Finally, the attached files share many of the same hazards as those found in peer-to-peer filesharing. Attached files may contain trojans or viruses.
Emails can now often be considered as binding contracts as well, so users must be careful about what they send through email correspondence.
Flaming occurs when a person sends a message (or many messages) with angry or antagonistic content. The term is derived from the use of the word "incendiary" to describe particularly heated email discussions. The ease and impersonality of email communications mean that the social norms that encourage civility in person or via telephone do not exist and civility may be forgotten.
Also known as "email fatigue", email bankruptcy is when a user ignores a large number of email messages after falling behind in reading and answering them. The reason for falling behind is often due to information overload and a general sense there is so much information that it is not possible to read it all. As a solution, people occasionally send a "boilerplate" message explaining that their email inbox is full, and that they are in the process of clearing out all the messages. Harvard University law professor Lawrence Lessig is credited with coining this term, but he may only have popularized it.
Originally Internet email was completely ASCII text-based. MIME now allows body content text and some header content text in international character sets, but other headers and email addresses using UTF-8, while standardized, have yet to be widely adopted.
The original SMTP mail service provides limited mechanisms for tracking a transmitted message, and none for verifying that it has been delivered or read. It requires that each mail server must either deliver it onward or return a failure notice (bounce message), but both software bugs and system failures can cause messages to be lost. To remedy this, the IETF introduced Delivery Status Notifications (delivery receipts) and Message Disposition Notifications (return receipts); however, these are not universally deployed in production.
Many ISPs now deliberately disable non-delivery reports (NDRs) and delivery receipts due to the activities of spammers: a delivery report can confirm to a spammer that an address exists, and when a spammer forges the sender address, the innocent owner of that address can be flooded with misdirected bounce messages.
In the absence of standard methods, a range of systems based on the use of web bugs have been developed. However, these are often seen as underhanded or as raising privacy concerns, and they only work with email clients that support the rendering of HTML. Many mail clients now default to not showing "web content". Webmail providers can also disrupt web bugs by pre-caching images. | https://en.wikipedia.org/wiki?curid=9738
Emoticon
An emoticon, short for "emotion icon", also known simply as an emote, is a pictorial representation of a facial expression using characters—usually punctuation marks, numbers, and letters—to express a person's feelings or mood, or as a time-saving method. The first ASCII emoticons, :-) and :-(, were written by Scott Fahlman in 1982, but emoticons actually originated on the PLATO IV computer system in 1972.
In Western countries, emoticons are usually written at a right angle to the direction of the text. Users from Japan popularized a kind of emoticon called kaomoji (lit. 顔 (kao) = face, 文字 (moji) = character(s)), utilizing the Katakana character set, that can be understood without tilting one's head to the left. This style arose on ASCII NET of Japan in 1986.
As SMS and the internet became widespread in the late 1990s, emoticons became increasingly popular and were commonly used on text messages, internet forums and e-mails. Emoticons have played a significant role in communication through technology, and some devices and applications have provided stylized pictures that do not use text punctuation. They offer another range of "tone" and feeling through texting that portrays specific emotions through facial gestures while in the midst of text-based cyber communication.
Emoticons began with the suggestion that combinations of punctuation could be used in typography to replace language. While Scott Fahlman's suggestion in the 1980s was the birth of the emoticon, it was not the first occasion that :) or :-) was used in language.
In 1648, the poet Robert Herrick included the lines "Tumble me down, and I will sit / Upon my ruins, (smiling yet:)" in his poem "To Fortune".
Herrick's work predated any other recorded use of brackets as a smiling face by around 200 years. However, experts have since weighed whether the inclusion of the colon in the poem was deliberate and if it was meant to represent a smiling face. English professor Alan Jacobs argued "punctuation in general was unsettled in the seventeenth century... Herrick was unlikely to have consistent punctuational practices himself, and even if he did he couldn't expect either his printers or his readers to share them."
Many different forms of communication are now seen as precursors to emoticons and, more recently, emojis. The use of emoticons can be traced back to the 17th century: one was drawn by a Slovak notary in 1635 to indicate his satisfaction with the state of his town's municipal financial records, and they were later commonly used in casual and humorous writing. Digital forms of emoticons on the Internet were included in a proposal by Scott Fahlman of Carnegie Mellon University in Pittsburgh, Pennsylvania, in a message on September 19, 1982.
The "National Telegraphic Review and Operators Guide" in April 1857 documented the use of the number 73 in Morse code to express "love and kisses" (later reduced to the more formal "best regards"). "Dodge's Manual" in 1908 documented the reintroduction of "love and kisses" as the number 88. Gajadhar and Green comment that both Morse code abbreviations are more succinct than modern abbreviations such as LOL. Aside from morse code, other communication tools such as generic prosigns were seen by some as an evolution of language. The first time an emoticon appeared in text was in the transcript of one of Abraham Lincoln's speeches written in 1862. It contained the following:
codice_3
According to "The New York Times", there has been some debate whether the emoticon in Abraham Lincoln's speech was a typo, a legitimate punctuation construct, or the first emoticon. In the late 1800s, the first emoticons were created as an art form in the U.S. satirical magazine "Puck". In total, four different emoticon designs were displayed, all using punctuation to create different typographical emoticon faces. The designs were similar to those that formed many years later in Japan, often referred to as "kaomoji", due to their complicated design. Despite the innovation, complex emoticons did not develop in Japan until nearly a century later. In 1912, American author Ambrose Bierce was the first to suggest that a bracket could be used to represent a smiling face. He stated, "an improvement in punctuation – the snigger point, or note of cachinnation: it is written thus ‿ and presents a smiling mouth. It is to be appended, with the full stop, to every jocular or ironical sentence".
Following this statement, other writers and linguistic experts began to put forward theories as to how punctuation could be combined to represent a face. Building on Bierce's suggestion that a horizontal bracket could be used for a smiling face, Alan Gregg was the first recorded person to suggest that by combining punctuation marks, more elaborate emotions could be demonstrated. There is an argument that this was the first real set of emoticons, despite later designs becoming the standard. Gregg published his theory in 1936, in a "Harvard Lampoon" article. He suggested that by turning the bracket sideways, it could be used for the sides of the mouth or cheeks, with other punctuation used between the brackets to display various emotions. Gregg's theory took the step of creating more than one smiling face, with (-) for a normal smile and (--) for a laughing smile, the logic being that more teeth were showing on the wider design. Two other emoticons were proposed in the article: (#) for a frown and (*) for a wink.
Emoticons had already come into use in sci-fi fandom in the 1940s, although there seems to have been a lapse in cultural continuity between the communities.
The September 1962 issue of "MAD" magazine included an article titled "Typewri-toons". The piece, featuring typewriter-generated artwork credited to "Royal Portable", was entirely made up of repurposed typography, including a capital letter P having a bigger bust than a capital I, a lowercase b and d discussing their pregnancies, an asterisk on top of a letter to indicate the letter had just come inside from a snowfall, and a classroom of lowercase n's interrupted by a lowercase h "raising its hand". Two additional "Typewri-toons" articles subsequently appeared in "Mad", in 1965 and 1987.
In a "New York Times" interview in April 1969, Alden Whitman asked writer Vladimir Nabokov: "How do you rank yourself among writers (living) and of the immediate past?" Nabokov answered: "I often think there should exist a special typographical sign for a smile – some sort of concave mark, a supine round bracket, which I would now like to trace in reply to your question."
Up until this point, many of the designs considered to be early emoticons used fairly basic punctuation: a single punctuation mark standing in for a word or expressing a feeling. Only later did individuals start combining two punctuation marks (often a colon and a bracket) to create something that resembled a smiling face.
Scott Fahlman is credited with creating the first true emoticons, having begun to experiment with using multiple punctuation marks to display emotion and replace language. He is the first documented person to use a complex emoticon of three or more punctuation marks, proposing :-) and :-( with the specific suggestion that they be used to express emotion. While Nabokov had suggested something similar, there was little consideration at the time of what could be done with such a design; Fahlman, on the other hand, quickly theorized that his emoticons could replace language on a large scale. The two designs of colon, hyphen and bracket were also adapted very quickly to portray a range of emotions, thereby creating the first true set of emoticons.
The message from Fahlman was sent via the Carnegie Mellon University computer science general board on September 19, 1982. The conversation included many notable computer scientists, among them David Touretzky, Guy Steele, and Jaime Carbonell. The message transcript was thought to have been lost, but was recovered 20 years later by Jeff Baird from old backup tapes.
19-Sep-82 11:44 Scott E Fahlman :-)
From: Scott E Fahlman
I propose that the following character sequence for joke markers:

:-)

Read it sideways. Actually, it is probably more economical to mark
things that are NOT jokes, given current trends. For this, use

:-(
Within a few months, it had spread to the ARPANET and Usenet. Many variations on the theme were immediately suggested by Scott and others.
Inspired by Scott Fahlman's idea of using faces in language, the Loufrani family established The Smiley Company in 1996. Nicolas Loufrani developed hundreds of different emoticons, including 3D versions. His designs were registered at the United States Copyright Office in 1997 and appeared online as .gif files in 1998. These were the first graphical representations of the originally text-based emoticon. He published his icons as well as emoticons created by others, along with their ASCII versions, in an online Smiley Dictionary in the early 2000s. This dictionary included over 3,000 different smileys and was published as a book called "Dico Smileys" in 2002.
Fahlman has stated in numerous interviews that he sees emojis as "the remote descendants of this thing I did."
Usually, emoticons in Western style have the eyes on the left, followed by the nose and the mouth. The two-character version :) which omits the nose is also very popular.
The most basic emoticons are relatively consistent in form, but each of them can be transformed by being rotated (making them tiny ambigrams), with or without a hyphen (nose).
There are also possible variations to emoticons that produce new meanings, such as changing a character to express a new feeling or to slightly change the mood of the emoticon. For example, :( equals sad and :(( equals very sad. Weeping can be written as :'(. A blush can be expressed as :">. Others include wink ;), a grin :D, smug :->, and tongue out :P, such as when blowing a raspberry. An often used combination is also <3 for a heart, and </3 for a broken heart. :O is also sometimes used to depict shock.
A broad grin is sometimes shown with crinkled eyes to express further amusement; XD and the addition of further "D" letters can suggest laughter or extreme amusement, e.g. XDDDD. There are hundreds of other variations, including >:( for anger, or >:D for an evil grin, which can again be used in reverse for an unhappy angry face, in the shape of D:<. codice_23 can be used for vampire teeth and codice_24 for a grimace, while codice_25 can be used to denote a flirting or joking tone, or may imply a second meaning in the sentence preceding it.
As computers offer increasing built-in support for non-Western writing systems, it has become possible to use other glyphs to build emoticons. The 'shrug' emoticon, ¯\_(ツ)_/¯, uses the glyph ツ from the Japanese katakana writing system.
An equal sign is often used for the eyes in place of the colon, seen as =), without changing the meaning of the emoticon. In these instances, the hyphen is almost always either omitted or, occasionally, replaced with an "o" as in =o). In most circles it has become acceptable to omit the hyphen, whether a colon or an equal sign is used for the eyes, but in some areas of usage people still prefer the larger, more traditional emoticon :-) or =-). One linguistic study has indicated that the use of a nose in an emoticon may be related to the user's age, with younger people less likely to use a nose. Similar-looking characters are commonly substituted for one another: for instance, codice_31, codice_32, and codice_33 can all be used interchangeably, sometimes for subtly different effect, or, in some cases, one type of character may look better in a certain font and therefore be preferred over another. It is also common to replace the rounded bracket used for the mouth with another similar bracket, such as ] instead of ).
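The compositional structure described above (eyes, then an optional nose, then a mouth) lends itself to mechanical enumeration. Below is a minimal Python sketch; the part lists are illustrative choices, not a canonical set:

```python
import itertools

# Generate Western-style emoticon variants from eyes/nose/mouth parts.
# The part lists below are illustrative assumptions, not a canonical set.
eyes = [":", "=", ";"]
noses = ["", "-", "o"]
mouths = [")", "(", "D", "P", "]"]

variants = ["".join(parts) for parts in itertools.product(eyes, noses, mouths)]
print(variants[:8])  # [':)', ':(', ':D', ':P', ':]', ':-)', ':-(', ':-D']
```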
Some variants are also more common in certain countries due to keyboard layouts. For example, the smiley =) may occur in Scandinavia, where the keys for = and ) are placed right beside each other. However, the :) variant is without a doubt the dominant one in Scandinavia, making the =) version a rarity. Diacritical marks are sometimes used: the letters Ö and Ü can be seen as emoticons, being the upright versions of :O (meaning that one is surprised) and :D (meaning that one is very happy) respectively.
Some emoticons may be read right to left instead, and in fact can only be written using standard ASCII keyboard characters this way round; for example D:, which refers to being shocked or anxious, opposite to the large grin of :D.
Users from Japan popularized a style of emoticons ("kaomoji", lit. "face characters") that can be understood without tilting one's head to the left. This style arose on ASCII NET, an early Japanese online service, in 1986. Similar-looking emoticons were used on the Byte Information Exchange (BIX) around the same time.
These emoticons are usually found in a format similar to (*_*). The asterisks indicate the eyes; the central character, commonly an underscore, the mouth; and the parentheses, the outline of the face.
Different emotions can be expressed by changing the character representing the eyes: for example, "T" can be used to express crying or sadness, as in (T_T). codice_49 may also be used to mean "unimpressed". The emphasis on the eyes in this style is reflected in the common usage of emoticons that use only the eyes, e.g. codice_50. Looks of stress are represented by the likes of codice_51, while codice_52 is a generic emoticon for nervousness, the semicolon representing an anxiety-induced sweat drop (discussed further below). codice_53 can indicate embarrassment by symbolizing blushing. Characters like hyphens or periods can replace the underscore; the period is often used for a smaller, "cuter" mouth, or to represent a nose, e.g. codice_54. Alternatively, the mouth/nose can be left out entirely, e.g. codice_55.
Parentheses are sometimes replaced with braces or square brackets, e.g. codice_56 or codice_57. Many times, the parentheses are left out completely, e.g. codice_50, codice_59, codice_60, codice_61, codice_62, or codice_63. A quotation mark codice_64, apostrophe codice_65, or semicolon codice_66 can be added to the emoticon to imply apprehension or embarrassment, in the same way that a sweat drop is used in manga and anime.
Microsoft IME 2000 (Japanese) or later supports the input of emoticons like the above by enabling the Microsoft IME Spoken Language/Emotion Dictionary. In IME 2007, this support was moved to the Emoticons dictionary. Such dictionaries allow users to call up emoticons by typing words that represent them.
Communication software allowing the use of Shift JIS encoded Japanese characters rather than just ASCII allowed for the development of new kaomoji using the extended character set, such as codice_67 or codice_68.
Modern communication software generally utilizes Unicode, which allows for the incorporation of characters from other languages (e.g. from the Cyrillic alphabet), and a variety of symbols into the kaomoji, as in codice_69 or codice_70.
Further variations can be produced using Unicode combining characters, as in codice_71 or codice_72.
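As a concrete illustration of the mixed-script construction described above, this short Python sketch assembles a few well-known faces from explicit code points; the particular faces chosen here are illustrative examples, not drawn from the text:

```python
# Assemble kaomoji from explicit Unicode code points. The faces chosen are
# illustrative examples of mixing scripts and combining marks.
shrug = "\u00af\\_(\u30c4)_/\u00af"                  # ¯\_(ツ)_/¯ (katakana TU)
stare = "\u0ca0_\u0ca0"                              # ಠ_ಠ (Kannada letter TTHA)
lenny = "( \u0361\u00b0 \u035c\u0296 \u0361\u00b0)"  # ( ͡° ͜ʖ ͡°) via combining marks U+0361/U+035C
print(shrug, stare, lenny)
```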
English-language anime forums adopted those Japanese-style emoticons that could be used with the standard ASCII characters available on Western keyboards. Because of this, they are often called "anime style" emoticons in English. They have since seen use in more mainstream venues, including online gaming, instant-messaging, and non-anime-related discussion forums. Emoticons such as codice_73, codice_74, codice_75, codice_76, codice_77, or codice_78, which include the parentheses, mouth or nose, and arms (especially those represented by the inequality signs < or >), are also often referred to as "Kirbys" in reference to their likeness to Nintendo's video game character Kirby. The parentheses are sometimes dropped when used in the English language context, and the underscore of the mouth may be extended as an intensifier for the emoticon in question, e.g. codice_79 for very happy. The t(-_-t) emoticon uses the Eastern style, but incorporates a depiction of the Western "middle-finger flick-off" using a "t" as the arm, hand, and finger. Using a lateral click for the nose, as in ( ͡° ͜ʖ ͡°), is believed to originate from the Finnish image-based message board Ylilauta, and is called a "Lenny face". Another apparently Western invention is the use of emoticons like codice_80 or codice_81 to indicate vampires or other mythical beasts with fangs.
Exposure to both Western and Japanese style emoticons or kaomoji through blogs, instant messaging, and forums featuring a blend of Western and Japanese pop culture has given rise to many emoticons that have an upright viewing format. The parentheses are often dropped, and these emoticons typically only use alphanumeric characters and the most commonly used English punctuation marks. Emoticons such as codice_82, codice_83, codice_84, codice_85, codice_86, codice_49, codice_88, and codice_89 are used to convey mixed emotions that are more difficult to convey with traditional emoticons. Characters are sometimes added to emoticons to convey an anime- or manga-styled sweat drop, for example codice_90, codice_91, codice_92, codice_93, and codice_94. The equals sign can also be used for closed, anime-looking eyes, for example codice_95, codice_96, codice_97, codice_98, and codice_99. The uwu face (and its variations UwU and OwO) is an emoticon of Japanese origin which denotes a cute expression or emotion felt by the user.
In Brazil, sometimes combining characters (accents) are added to emoticons to represent eyebrows, as in codice_103, codice_104, codice_105, codice_106, or codice_107.
Users of the Japanese discussion board 2channel, in particular, have developed a wide variety of unique emoticons using characters from various languages, such as Kannada, as in ಠ_ಠ (for a look of disapproval, disbelief, or confusion). These were quickly picked up by 4chan and spread to other Western sites soon after. Some have taken on a life of their own and become characters in their own right, like Monā.
In South Korea, emoticons use Korean Hangul letters, and the Western style is rarely used. The structures of Korean and Japanese emoticons are somewhat similar, but with some differences. The Korean style uses Korean jamo (letters) instead of other characters, and a countless number of emoticons can be formed with combinations of them. Consonant jamos codice_109, codice_110 or codice_111 serve as the mouth/nose component and codice_112, codice_113 or codice_114 as the eyes. For example: codice_115, codice_116, codice_117 and codice_118. Faces such as codice_119, codice_120, codice_121 and codice_122, using quotation marks codice_64 and apostrophes codice_65, are also commonly used combinations. Vowel jamos such as ㅜ and ㅠ depict a crying face, for example codice_125, codice_126 and codice_127 (serving the same function as T in the Western style). Sometimes ㅡ (not an em-dash "—" but a vowel jamo), a comma or an underscore is added, and the two character sets can be mixed together, as in codice_128, codice_129, codice_130, codice_131, codice_132 and codice_133. Also, semicolons and carets are commonly used in Korean emoticons; semicolons mean sweating (embarrassed), and used with ㅡ or – they depict a bad feeling. Examples: codice_134, codice_135, codice_136, codice_137 and codice_138. However, codice_139 means smile (used by almost everyone regardless of sex or age). Others include: codice_140, codice_141, codice_142, codice_143.
The character 囧 (U+56E7), which means "bright", may be combined with the posture emoticon Orz, as in 囧rz. The character existed in Oracle bone script, but its use as an emoticon was documented as early as January 20, 2005.
Other ideographic variants for 囧 include 崮 (king 囧), 莔 (queen 囧), 商 (囧 with hat), 囧興 (turtle), 卣 (Bomberman).
The character 槑 (U+69D1), which sounds like the word for "plum" (梅 (U+FA44)), is used to represent a doubling of 呆 (dull), i.e. a greater magnitude of dullness. In Chinese, full characters (as opposed to the stylistic use of 槑) might normally be duplicated to express emphasis.
On the Russian-speaking internet, the right parenthesis ) is used as a smiley. Multiple parentheses ))) are used to express greater happiness, amusement or laughter. It is commonly placed at the end of a sentence. The colon is omitted due to being in a lesser-known and difficult-to-type position on the ЙЦУКЕН keyboard layout.
Orz (other forms include: ) is an emoticon representing a kneeling or bowing person (the Japanese version of which is called "dogeza"), with the "o" being the head, the "r" the arms and part of the body, and the "z" part of the body and the legs. This stick figure can represent respect or "kowtowing", but commonly appears across a range of responses, including "frustration, despair, sarcasm, or grudging respect".
It was first used in late 2002 at the forum on Techside, a Japanese personal website. At the "Techside FAQ Forum" (TECHSIDE教えて君BBS(教えてBBS)), a poster asked about a cable cover, typing characters to show a cable and its cover. Others commented that it looked like a kneeling person, and the symbol became popular. These comments were soon deleted as they were considered off-topic. By 2005, Orz had spawned a subculture: blogs have been devoted to the emoticon, and URL shortening services have been named after it. In Taiwan, Orz is associated with the phrase "nice guy" – that is, the concept of males being rejected for a date by females with a phrase like "You are a nice guy."
Orz should not be confused with m(_ _)m, which means "Thank you" or an apology.
A portmanteau of "emotion" and "sound", an emotisound is a brief sound transmitted and played back during the viewing of a message, typically an IM message or e-mail message. The sound is intended to communicate an emotional subtext. Many instant messaging clients automatically trigger sound effects in response to specific emoticons.
Some services, such as MuzIcons, combine emoticons and a music player in an Adobe Flash-based widget.
In 2004, the Trillian chat application introduced a feature called "emotiblips", which allows Trillian users to stream files to their instant message recipients "as the voice and video equivalent of an emoticon".
In 2007, MTV and Paramount Home Entertainment promoted the "emoticlip" as a form of viral marketing for the second season of the show "The Hills". The emoticlips were twelve short snippets of dialogue from the show, uploaded to YouTube, which the advertisers hoped would be distributed between web users as a way of expressing feelings in a similar manner to emoticons. The emoticlip concept is credited to the Bradley & Montgomery advertising firm, which hoped they would be widely adopted as "greeting cards that just happen to be selling something".
In 2008, an emotion-sequence animation tool called FunIcons was created. The Adobe Flash and Java-based application allows users to create a short animation. Users can then email or save their own animations to use them on similar social utility applications.
During the first half of the 2010s, various forms of small audiovisual pieces were sent through instant messaging systems to express one's emotion. These videos lack an established name, and there are several ways to designate them: "emoticlips" (named above), "emotivideos" or, more recently, "emoticon videos". These are tiny videos which can be easily transferred from one mobile phone to another. Current video compression codecs such as H.264 allow these pieces of video to be light in terms of file size and very portable. The popular computer and mobile app Skype uses these in a separate keyboard or by typing the code of the "emoticon videos" between parentheses.
In 2000, Despair, Inc. obtained a U.S. trademark registration for the "frowny" emoticon :-( when used on "greeting cards, posters and art prints". In 2001, it issued a satirical press release announcing that it would sue Internet users who typed the frowny; the joke backfired and the company received a storm of protest when its mock release was posted on the technology news website Slashdot.
A number of patent applications have been filed on inventions that assist in communicating with emoticons, and a few of these have been issued as US patents. One patent, for example, discloses a method developed in 2001 to send emoticons over a cell phone using a drop-down menu. The stated advantage over the prior art was that the user saved on the number of keystrokes, though this may not address the criterion of non-obviousness.
The emoticon :-) was also filed in 2006 and registered in 2008 as a European Community Trademark (CTM). In Finland, the Supreme Administrative Court ruled in 2012 that the emoticon cannot be trademarked, thus repealing a 2006 administrative decision trademarking the emoticons :-), =), codice_150, :) and :(.
In 2005, a Russian court rejected a legal claim against Siemens by a man who claimed to hold a trademark on the ;-) emoticon.
In 2008, Russian entrepreneur Oleg Teterin claimed to have been granted the trademark on the ;-) emoticon. A license would not "cost that much – tens of thousands of dollars" for companies, but would be free of charge for individuals.
Some smiley faces were present in Unicode since 1.1, including a white frowning face, a white smiling face, and a black smiling face. ("Black" refers to a glyph which is filled, "white" refers to a glyph which is unfilled).
The Emoticons block was introduced in Unicode Standard version 6.0 (published in October 2010) and extended by version 7.0. It fully covers the Unicode range U+1F600–U+1F64F.
After that block had been filled, Unicode 8.0 (2015), 9.0 (2016) and 10.0 (2017) added additional emoticons in the range from U+1F910 to U+1F9FF. As of Unicode 10.0, U+1F90C–U+1F90F, U+1F93F, U+1F94D–U+1F94F, U+1F96C–U+1F97F, U+1F998–U+1F9CF (excluding U+1F9C0, which contains the 🧀 emoji) and U+1F9E7–U+1F9FF do not contain any emoticons.
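For readers who want to inspect the block directly, a minimal Python sketch using only the standard library prints every code point in the Emoticons block named above:

```python
import unicodedata

# Enumerate the Unicode "Emoticons" block, U+1F600..U+1F64F.
for cp in range(0x1F600, 0x1F650):
    ch = chr(cp)
    name = unicodedata.name(ch, "<unassigned>")
    print(f"U+{cp:04X}  {ch}  {name}")  # e.g. U+1F600  😀  GRINNING FACE
```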
For historic and compatibility reasons, some other heads and figures, which mostly represent different aspects like genders, activities, and professions instead of emotions, are also found in Miscellaneous Symbols and Pictographs (especially U+1F466–U+1F487) and Transport and Map Symbols. Body parts, mostly hands, are also encoded in the Dingbat and Miscellaneous Symbols blocks.
Erdős number
The Erdős number describes the "collaborative distance" between the mathematician Paul Erdős and another person, as measured by authorship of mathematical papers. The same principle has been applied in other fields, where a particular individual has collaborated with a large and broad number of peers.
Paul Erdős (1913–1996) was an influential Hungarian mathematician who in the latter part of his life spent a great deal of time writing papers with a large number of colleagues, working on solutions to outstanding mathematical problems. He published more papers during his lifetime (at least 1,525) than any other mathematician in history. (Leonhard Euler published more total pages of mathematics but fewer separate papers: about 800.) Erdős spent a large portion of his later life living out of a suitcase, visiting his over 500 collaborators around the world.
The idea of the Erdős number was originally created by the mathematician's friends as a tribute to his enormous output. Later it gained prominence as a tool to study how mathematicians cooperate to find answers to unsolved problems. Several projects are devoted to studying connectivity among researchers, using the Erdős number as a proxy. For example, Erdős collaboration graphs can tell us how authors cluster, how the number of co-authors per paper evolves over time, or how new theories propagate.
Several studies have shown that leading mathematicians tend to have particularly low Erdős numbers. The median Erdős number of Fields Medalists is 3. Only 7,097 (about 5% of mathematicians with a collaboration path) have an Erdős number of 2 or lower. As time passes, the smallest Erdős number that can still be achieved will necessarily increase, as mathematicians with low Erdős numbers die and become unavailable for collaboration. Still, historical figures can have low Erdős numbers. For example, renowned Indian mathematician Srinivasa Ramanujan has an Erdős number of only 3 (through G. H. Hardy, Erdős number 2), even though Paul Erdős was only 7 years old when Ramanujan died.
To be assigned an Erdős number, someone must be a coauthor of a research paper with another person who has a finite Erdős number. Paul Erdős has an Erdős number of zero. Anybody else's Erdős number is k + 1, where k is the lowest Erdős number of any of their coauthors. The American Mathematical Society provides a free online tool to determine the Erdős number of every mathematical author listed in the "Mathematical Reviews" catalogue.
Erdős wrote around 1,500 mathematical articles in his lifetime, mostly co-written. He had 511 direct collaborators; these are the people with Erdős number 1. The people who have collaborated with them (but not with Erdős himself) have an Erdős number of 2 (11,009 people as of 2015), those who have collaborated with people who have an Erdős number of 2 (but not with Erdős or anyone with an Erdős number of 1) have an Erdős number of 3, and so forth. A person with no such coauthorship chain connecting to Erdős has an Erdős number of infinity (or an undefined one). Since the death of Paul Erdős, the lowest Erdős number that a new researcher can obtain is 2.
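Formally, an Erdős number is the length of a shortest path from Erdős in the coauthorship graph, which is exactly what breadth-first search computes. The following is a minimal Python sketch over a toy graph; the names and edges are invented for illustration:

```python
from collections import deque

# Breadth-first search over a toy coauthorship graph. Names and edges are
# invented for illustration; real data would come from a catalogue such as
# Mathematical Reviews.
coauthors = {
    "Erdos": {"A", "B"},
    "A": {"Erdos", "C"},
    "B": {"Erdos"},
    "C": {"A"},
    "D": set(),  # no path to Erdos: Erdos number is infinite/undefined
}

def erdos_numbers(graph, root="Erdos"):
    dist = {root: 0}
    queue = deque([root])
    while queue:
        person = queue.popleft()
        for coauthor in graph[person]:
            if coauthor not in dist:
                dist[coauthor] = dist[person] + 1  # the "k + 1" rule
                queue.append(coauthor)
    return dist  # anyone missing from the result has no finite Erdos number

print(erdos_numbers(coauthors))  # e.g. {'Erdos': 0, 'A': 1, 'B': 1, 'C': 2}
```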
There is room for ambiguity over what constitutes a link between two authors. The American Mathematical Society collaboration distance calculator uses data from "Mathematical Reviews", which includes most mathematics journals but covers other subjects only in a limited way, and which also includes some non-research publications. The Erdős Number Project web site counts coauthorship of mathematical research publications, but does not include non-research publications such as elementary textbooks, joint editorships, obituaries, and the like. The "Erdős number of the second kind" restricts assignment of Erdős numbers to papers with only two collaborators.
The Erdős number was most likely first defined in print by Casper Goffman, an analyst whose own Erdős number is 2. Goffman published his observations about Erdős' prolific collaboration in a 1969 article entitled "And what is your Erdős number?" See also some comments in an obituary by Michael Golomb.
The median Erdős number among Fields medalists is as low as 3. Fields medalists with Erdős number 2 include Atle Selberg, Kunihiko Kodaira, Klaus Roth, Alan Baker, Enrico Bombieri, David Mumford, Charles Fefferman, William Thurston, Shing-Tung Yau, Jean Bourgain, Richard Borcherds, Manjul Bhargava, Jean-Pierre Serre and Terence Tao. There are no Fields medalists with Erdős number 1; however, Endre Szemerédi is an Abel Prize Laureate with Erdős number 1.
While Erdős collaborated with hundreds of co-authors, there were some individuals with whom he co-authored dozens of papers; a commonly cited list ranks his ten most frequent co-authors by their number of papers co-authored with Erdős (i.e. their number of collaborations).
All Fields Medalists have a finite Erdős number, with values that range between 2 and 6 and a median of 3. In contrast, the median Erdős number across all mathematicians (with a finite Erdős number) is 5, with an extreme value of 13. The table below summarizes the Erdős number statistics for Nobel prize laureates in Physics, Chemistry, Medicine and Economics. The first column counts the number of laureates. The second column counts the number of winners with a finite Erdős number. The third column is the percentage of winners with a finite Erdős number. The remaining columns report the minimum, maximum, average and median Erdős numbers among those laureates.
Among the Nobel Prize laureates in Physics, Albert Einstein and Sheldon Lee Glashow have an Erdős number of 2. Nobel Laureates with an Erdős number of 3 include Enrico Fermi, Otto Stern, Wolfgang Pauli, Max Born, Willis E. Lamb, Eugene Wigner, Richard P. Feynman, Hans A. Bethe, Murray Gell-Mann, Abdus Salam, Steven Weinberg, Norman F. Ramsey, Frank Wilczek, and David Wineland. Fields Medal-winning physicist Ed Witten has an Erdős number of 3.
Computational biologist Lior Pachter has an Erdős number of 2. Evolutionary biologist Richard Lenski has an Erdős number of 3, having co-authored a publication with Lior Pachter and with mathematician Bernd Sturmfels, each of whom has an Erdős number of 2.
There are at least two winners of the Nobel Prize in Economics with an Erdős number of 2: Harry M. Markowitz (1990) and Leonid Kantorovich (1975). Other financial mathematicians with Erdős number of 2 include David Donoho, Marc Yor, Henry McKean, Daniel Stroock, and Joseph Keller.
Nobel Prize laureates in Economics with an Erdős number of 3 include Kenneth J. Arrow (1972), Milton Friedman (1976), Herbert A. Simon (1978), Gerard Debreu (1983), John Forbes Nash, Jr. (1994), James Mirrlees (1996), Daniel McFadden (2000), Daniel Kahneman (2002), Robert J. Aumann (2005), Leonid Hurwicz (2007), Roger Myerson (2007), Alvin E. Roth (2012), Lloyd S. Shapley (2012), and Jean Tirole (2014).
Some investment firms have been founded by mathematicians with low Erdős numbers, among them James B. Ax of Axcom Technologies, and James H. Simons of Renaissance Technologies, both with an Erdős number of 3.
Since the more formal versions of philosophy share reasoning with the basics of mathematics, these fields overlap considerably, and Erdős numbers are available for many philosophers. Philosopher John P. Burgess has an Erdős number of 2. Jon Barwise and Joel David Hamkins, both with Erdős number 2, have also contributed extensively to philosophy, but are primarily described as mathematicians.
Judge Richard Posner, having coauthored with Alvin E. Roth, has an Erdős number of at most 4. Roberto Mangabeira Unger, a politician, philosopher and legal theorist who teaches at Harvard Law School, has an Erdős number of at most 4, having coauthored with Lee Smolin.
Angela Merkel, Chancellor of Germany from 2005 to the present, has an Erdős number of at most 5.
Some fields of engineering, in particular communication theory and cryptography, make direct use of the discrete mathematics championed by Erdős. It is therefore not surprising that practitioners in these fields have low Erdős numbers. For example, Robert McEliece, a professor of electrical engineering at Caltech, had an Erdős number of 1, having collaborated with Erdős himself. Cryptographers Ron Rivest, Adi Shamir, and Leonard Adleman, inventors of the RSA cryptosystem, all have Erdős number 2.
Anthropologist Douglas R. White has an Erdős number of 2 via graph theorist Frank Harary. Sociologist Barry Wellman has an Erdős number of 3 via social network analyst and statistician Ove Frank, another collaborator of Harary's.
The Romanian mathematician and computational linguist Solomon Marcus had an Erdős number of 1 for a paper in "Acta Mathematica Hungarica" that he co-authored with Erdős in 1957.
Erdős numbers have been a part of the folklore of mathematicians throughout the world for many years. Among all working mathematicians at the turn of the millennium who have a finite Erdős number, the numbers range up to 15, the median is 5, and the mean is 4.65; almost everyone with a finite Erdős number has a number less than 8. Due to the very high frequency of interdisciplinary collaboration in science today, very large numbers of non-mathematicians in many other fields of science also have finite Erdős numbers. For example, political scientist Steven Brams has an Erdős number of 2. In biomedical research, it is common for statisticians to be among the authors of publications, and many statisticians can be linked to Erdős via John Tukey, who has an Erdős number of 2. Similarly, the prominent geneticist Eric Lander and the mathematician Daniel Kleitman have collaborated on papers, and since Kleitman has an Erdős number of 1, a large fraction of the genetics and genomics community can be linked via Lander and his numerous collaborators. Similarly, collaboration with Gustavus Simmons opened the door for
Erdős numbers within the cryptographic research community, and many linguists have finite Erdős numbers, many due to chains of collaboration with such notable scholars as Noam Chomsky (Erdős number 4), William Labov (3), Mark Liberman (3), Geoffrey Pullum (3), or Ivan Sag (4). There are also connections with arts fields.
According to Alex Lopez-Ortiz, all the Fields and Nevanlinna prize winners during the three cycles in 1986 to 1994 have Erdős numbers of at most 9.
Earlier mathematicians published fewer papers than modern ones, and more rarely published jointly written papers. The earliest person known to have a finite Erdős number is either Antoine Lavoisier (born 1743, Erdős number 13), Richard Dedekind (born 1831, Erdős number 7), or Ferdinand Georg Frobenius (born 1849, Erdős number 3), depending on the standard of publication eligibility.
Martin Tompa proposed a directed graph version of the Erdős number problem, by orienting edges of the collaboration graph from the alphabetically earlier author to the alphabetically later author and defining the "monotone Erdős number" of an author to be the length of a longest path from Erdős to the author in this directed graph. He finds a path of this type of length 12.
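Because the alphabetical orientation can never produce a cycle, the monotone Erdős number is a longest-path computation on a directed acyclic graph, which memoized recursion handles directly. A small Python sketch with invented edges:

```python
import functools

# Tompa's monotone Erdos number on a toy DAG: each coauthorship edge is
# oriented from the alphabetically earlier to the later name. Edges are
# invented for illustration.
edges = {
    "Erdos": ["Graham", "Straus"],
    "Graham": ["Spencer"],
    "Spencer": [],
    "Straus": [],
}

@functools.lru_cache(maxsize=None)
def longest_from(author):
    # Alphabetical orientation guarantees acyclicity, so this terminates.
    return max((1 + longest_from(nxt) for nxt in edges[author]), default=0)

print(longest_from("Erdos"))  # 2: Erdos -> Graham -> Spencer
```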
Also, Michael Barr suggests "rational Erdős numbers", generalizing the idea that a person who has written p joint papers with Erdős should be assigned Erdős number 1/p. From the collaboration multigraph of the second kind (although he also has a way to deal with the case of the first kind)—with one edge between two mathematicians for "each" joint paper they have produced—form an electrical network with a one-ohm resistor on each edge. The total resistance between two nodes tells how "close" these two nodes are.
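In this model, p parallel one-ohm resistors between Erdős and a coauthor give a resistance of 1/p, recovering the rational Erdős number, and the effective resistance between any two authors can be read off the Moore–Penrose pseudoinverse of the network's Laplacian matrix. A brief Python sketch with made-up paper counts:

```python
import numpy as np

# Barr's "rational Erdos number" as effective resistance: one 1-ohm edge per
# joint paper, resistance read from the Laplacian pseudoinverse.
# Authors and paper counts below are made up for illustration.
authors = ["Erdos", "X", "Y"]
papers = {("Erdos", "X"): 3, ("Erdos", "Y"): 1, ("X", "Y"): 2}

idx = {a: i for i, a in enumerate(authors)}
L = np.zeros((len(authors), len(authors)))
for (a, b), k in papers.items():
    i, j = idx[a], idx[b]
    L[i, i] += k; L[j, j] += k  # k parallel 1-ohm resistors = conductance k
    L[i, j] -= k; L[j, i] -= k

Lp = np.linalg.pinv(L)  # Moore-Penrose pseudoinverse of the Laplacian

def resistance(a, b):
    i, j = idx[a], idx[b]
    return Lp[i, i] + Lp[j, j] - 2 * Lp[i, j]

print(resistance("Erdos", "X"))  # about 0.27 ohm: parallel paths lower it below 1/3
```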
It has been argued that "for an individual researcher, a measure such as Erdős number captures the structural properties of [the] network whereas the "h"-index captures the citation impact of the publications," and that "One can be easily convinced that ranking in coauthorship networks should take into account both measures to generate a realistic and acceptable ranking."
In 2004 William Tozier, a mathematician with an Erdős number of 4, auctioned off a co-authorship on eBay, hence providing the buyer with an Erdős number of 5. The winning bid of $1,031 was posted by a Spanish mathematician who, however, did not intend to pay; he placed the bid only to stop what he considered a mockery.
A number of variations on the concept have been proposed to apply to other fields.
The best known is the Bacon number (as in the game Six Degrees of Kevin Bacon), connecting actors that appeared in a film together to the actor Kevin Bacon. It was created in 1994, 25 years after Goffman's article on the Erdős number.
A small number of people are connected to both Erdős and Bacon and thus have an Erdős–Bacon number, which combines the two numbers by taking their sum. One example is the actress-mathematician Danica McKellar, best known for playing Winnie Cooper on the TV series "The Wonder Years". Her Erdős number is 4 and her Bacon number is 2, giving her an Erdős–Bacon number of 6.
Further extension is possible. For example, the "Erdős–Bacon–Sabbath number" is the sum of the Erdős–Bacon number and the collaborative distance to the band Black Sabbath in terms of singing in public. Physicist Stephen Hawking had an Erdős–Bacon–Sabbath number of 8, and actress Natalie Portman has one of 11 (her Erdős number is 5). | https://en.wikipedia.org/wiki?curid=9742 |
School voucher
A school voucher, also called an education voucher, in a voucher system, is a certificate of government funding for a student at a school chosen by the student or the student's parents. The funding is usually for a particular year, term or semester. In some countries, states or local jurisdictions, the voucher can be used to cover or reimburse home schooling expenses. In some countries, vouchers only exist for tuition at private schools.
According to a 2017 review of the economics literature on school vouchers, "the evidence to date is not sufficient to warrant recommending that vouchers be adopted on a widespread basis; however, multiple positive findings support continued exploration." A 2006 survey of members of American Economic Association found that over two-thirds of economists support giving parents educational vouchers that can be used at government-operated or privately operated schools, and that support is greater if the vouchers are to be used by parents with low-incomes or parents with children in poorly performing schools.
France lost the Franco-Prussian War of 1870–1871, and many blamed the loss on France's inferior military education system. Following this defeat, the French assembly proposed a religious voucher that, it was hoped, would improve schools by allowing students to seek out the best ones. The proposal never moved forward due to the reluctance of the French to subsidize religious education. Despite its failure, it is one of the earliest examples of a voucher system closely resembling the voucher systems proposed and used today in many countries.
The oldest continuing school voucher programs existing today in the United States are the Town Tuitioning programs in Vermont and Maine, beginning in 1869 and 1873 respectively. Because some towns in these states operate neither local high schools nor elementary schools, students in these towns "are eligible for a voucher to attend [either] public schools in other towns or non-religious private schools. In these cases, the 'sending' towns pay tuition directly to the 'receiving' schools."
A system of educational vouchers was introduced in the Netherlands in 1917. Today, more than 70% of pupils attend privately run but publicly funded schools, mostly split along denominational lines.
Milton Friedman argued for the modern concept of vouchers in the 1950s, stating that competition would improve schools, cost less and yield superior educational outcomes. Friedman's reasoning in favor of vouchers gained additional attention in 1980 with the broadcast of his ten part television series "Free to Choose" and the publication of its companion book of the same name (co-written with his wife Rose Friedman, who was also an economist). Episode 6 of the series and chapter 6 of the book were both entitled, "What's Wrong with Our Schools?" and asserted that permitting parents and students to use vouchers to choose their schools would expand freedom of choice and produce more well-educated students.
In some Southern states during the 1960s, school vouchers were used as a way to perpetuate segregation. In a few instances, public schools were closed outright and vouchers were issued to parents. The vouchers, then known as tuition grants, in many cases, were only good at new, private, segregated schools, known as segregation academies.
Today, all modern voucher programs prohibit racial discrimination.
There are important distinctions between different kinds of schools:
Education as a tool for human capital accumulation is often crucial to the development and progression of societies and thus governments have large incentives to continually intervene and improve public education. Additionally, education is often the tool with which societies instill a common set of values that underlie the basic norms of the society. Furthermore, there are positive externalities to society from education. These positive externalities can be in the form of reduced crime, more informed citizens and economic development, known as the neighborhood effect.
In terms of economic theory, families face a bundle of consumption choices that determine how much they will spend on education and on private consumption. Any consumption bundle is available as long as it fits within the budget constraint, meaning that spending on education plus private consumption must not exceed the family's budget. Indifference curves represent the preferences for one good over another, and determine how much education versus private consumption an individual will want to consume.
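Stated compactly as a constrained-optimization problem (the notation here is assumed for illustration, not taken from the text): a family chooses education spending E and private consumption c to maximize utility U, subject to income I, with p_E the relative price of education.

```latex
% Household problem (notation assumed for illustration): choose education
% spending E and private consumption c to maximize utility U, given income I
% and relative price of education p_E.
\max_{E,\,c}\; U(E, c)
\qquad \text{subject to} \qquad p_E\,E + c \le I
```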
Government intervention in education typically takes two forms. The first approach can be broad, such as instituting charter schools, magnet schools, or for-profit schools and increasing competition. The second approach can be individually focused such as providing subsidies or loans for individuals to attend college or school vouchers for K-12.
Vouchers are typically instituted for two broad economic reasons. The first is consumer choice: a family can choose where their child goes to school and pick the school closest to their preferred kind of education provider.
The second reason vouchers are proposed is to increase market competition among schools. As in free-market theory generally, vouchers are intended to make schools more competitive while lowering costs for schools and increasing educational quality for the consumers, in this case the families.
In many instances where school voucher programs have been instituted, there have been mixed results, with some programs showing increased benefits of school vouchers and some instances showing detrimental effects.
In the United States, vouchers are usually funded with state dollars, and in other countries through a variety of government funding vehicles. Notably, schools in the United States retain their federal and local funding regardless of enrollment; only state funding depends on enrollment size. Part of improving student performance involves improving teacher and school performance. In theory, more school vouchers would prompt the formation of more private schools, giving parents more choice of school. This increased competition would make both the private and the public schools, which compete for the same voucher funds, maintain a high quality of teaching as well as keep costs low.
Indeed, there is evidence that school vouchers result in cost savings for school systems. A fiscal analysis of Indiana's school voucher system showed annual savings, per student, for the state government.
Proponents of voucher schools argue that there is evidence of multiple benefits for students and families because of school vouchers. There is evidence that the use of school vouchers results in increased test scores and higher high school graduation rates for students. A case study in Colombia showed that the presence of voucher programs increased a child's likelihood of finishing the 8th grade by 10 percentage points and raised achievement on standardized tests by 0.2 standard deviations. Furthermore, evidence shows that African Americans experience increased college enrollment rates under voucher programs, although these gains are not observed for other racial and ethnic groups.
Research has also shown spatial benefits of voucher systems. Public schools that are near private schools accepting vouchers often have better test scores than other public schools not near voucher-accepting private schools. Additional research by Caroline Hoxby shows that when voucher systems are available, both the public and private schools in that school system have increased test scores and graduation rates.
While some studies show positive effects of voucher programs, there is also research showing that school vouchers can be ineffective. Some recent case studies have found that in districts with voucher systems, students attending the public schools tend to outperform their peers attending private schools with a voucher.
Besides a general lack of results, critics of school vouchers argue that vouchers will lead to segregation. Empirical studies show some evidence that school vouchers can lead to racial or income segregation. However, research on this topic is inconclusive, as there is also valid research showing that, under certain circumstances, income and racial segregation can be reduced indirectly by increasing school choice.
Additionally, since school vouchers are funded by the government, their implementation could reduce the funds available to public schools. Private-school vouchers affect government budgets through two channels: additional direct voucher expenditures, and public-school cost savings from lower enrollments. Voucher programs are paid for out of the government's education budget, which is subtracted from the public schools' budget, potentially leaving public schools less to spend on their students' education.
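These two channels pull in opposite directions, so the net budget effect depends on how many voucher users would otherwise have attended public schools. A toy Python calculation, with all figures invented for illustration:

```python
# Toy calculation of the two budget channels described above. All figures
# and the simple linear model are assumptions for illustration only.
voucher_cost = 6000      # direct state expenditure per voucher student
public_saving = 9000     # variable cost saved per student leaving public school
switchers = 1000         # voucher users who would otherwise attend public school
already_private = 400    # voucher users who would have gone private anyway

direct_spend = (switchers + already_private) * voucher_cost
savings = switchers * public_saving
print(direct_spend - savings)  # negative result = net savings for the state
```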
A 2018 study by Abdulkadiroğlu et al. found that disadvantaged students who won a lottery (the Louisiana Scholarship Program) to get vouchers to attend private schools had worse education outcomes than disadvantaged students who did not win vouchers: "LSP participation lowers math scores by 0.4 standard deviations and also reduces achievement in reading, science, and social studies. These effects may be due in part to selection of low-quality private schools into the program."
The PACES voucher program was established by the Colombian government in late 1991. It aimed to assist low-income households by distributing school vouchers to students living in neighborhoods situated in the two lowest socioeconomic strata. Between 1991 and 1997, the PACES program awarded 125,000 vouchers to lower-income secondary school students. Those vouchers were worth about US $190 in 1998, and data shows that matriculation fees and other monthly expenses incurred by voucher students attending private schools averaged about US $340 in 1998, so a majority of voucher recipients supplemented the voucher with personal funds.
Students were selected for the program by lottery. The vouchers were renewable annually, conditional on students achieving satisfactory academic success as indicated by scheduled grade promotion. The program also included incentives to study harder as well as widening schooling options. Empirical evidence showed that the program had some success. Joshua Angrist showed that three years into the program, lottery winners were 15 percentage points more likely to attend private school, completed 0.1 more years of schooling, and were about 10 percentage points more likely to have finished the 8th grade. The study also reported larger voucher effects for boys than for girls, especially in mathematics performance, although the program did not have a significant impact on dropout rates. Angrist reports that lottery winners scored 0.2 standard deviations higher on standardized tests. The voucher program also had some social effects: lottery winners worked less on average than non-winners, which Angrist reports was correlated with a decreased likelihood of marrying or cohabiting as teenagers. In general, the school voucher program's benefits outweighed its costs.
In 1981, Chile implemented a universal school voucher system for both elementary and secondary school students. As a result, over 1,000 private schools entered the market, and private enrollment increased by 20–40% by 1998, surpassing 50% in some urban areas. From 1981 to 1988, the private school enrollment rate in urban areas grew 11% more than the private school enrollment rate in rural areas. This change coincided with the transfer of public school administration from the central government to local municipalities. The financial value of a voucher did not depend on the income of the family receiving it, and the program allowed private voucher schools to be selective, while public schools had to accept and enroll every interested student. At the turn of the 21st century, student achievement in Chile was low compared to students in other nations based on international test scores. This disparity led the Chilean government to enact substantial educational reforms in 2008, including major changes to the school voucher system.
The Chilean government passed the Preferential School Subsidy Law (SEP) in January 2008. This piece of legislation made the educational voucher system much more like the regulated compensatory model championed by Christopher Jencks. Under SEP, the voucher system was altered to take family incomes into account. The vouchers provided to "priority students" – students whose family income was in the bottom 40% of Chileans – were worth 50% more than those given to the families of students in the upper 60%. Schools with larger numbers of priority students were eligible to receive per-student bonuses, the size of which was tied to the percentage of priority students in the student body. When SEP started, it covered preschool to fourth grade, and an additional school year of coverage was added each subsequent year. Almost every public school chose to participate in SEP in 2008, as well as almost two-thirds of private subsidized elementary schools.
There were three important requirements attached to the program. The first stipulated that participating schools could not charge fees to priority students, although private schools in the voucher system could still do so for non-priority students. The second ensured that schools could not select students based on their academic ability, nor expel them on academic grounds. The third required schools to enroll in an accountability system that made them responsible for their use of financial resources and for student test scores.
In most European countries, education for all primary and secondary schools is fully subsidized. In some countries (e.g. Belgium or France), parents are free to choose which school their child attends.
Most schools in the Republic of Ireland are state-aided parish schools, established under diocesan patronage but with capital costs, teachers' salaries and a per-head fee paid for by the state. These payments are made to a school regardless of whether or not it requires its students to pay fees. (Although fee-paying schools are in the minority, there has been much criticism of the state aid they receive, with opponents claiming this gives them an unfair advantage.)
There is a recent trend towards multi-denominational schools established by parents, which are organised as limited companies without share capital. Parents and students are free to choose their own school. If a school fails to attract students, it immediately loses its per-head fees and, over time, loses its teaching posts, with teachers moved to other schools that are attracting students. The system is perceived to have achieved very successful outcomes for most Irish children.
The 1995–97 Rainbow Coalition (which contained parties of the centre-right and the left) introduced free third-level education to primary degree level. Critics of this development charge that it has not increased the number of students from economically deprived backgrounds attending university; however, studies have shown that the removal of tuition fees at third level increased both the overall number of students and the number from lower socio-economic backgrounds. This concurs with evidence from the UK of a decrease in attendance numbers after the introduction of fees. Since the economic crisis, however, there has been extensive debate regarding the reintroduction of third-level fees.
In Sweden, a system of school vouchers (called "skolpeng") was introduced in 1992 at primary and secondary school level, enabling free choice among publicly run schools and privately run "friskolor" ("free schools"). The voucher is paid with public funds from the local municipality ("kommun") directly to a school based solely on its number of students. Both public schools and free schools are funded the same way. Free schools can be run by not-for-profit groups as well as by for-profit companies, but may not charge top-up fees or select students other than on a first-come, first-served basis. Over 10% of Swedish pupils were enrolled in free schools in 2008 and the number is growing fast, leading the country to be viewed as a pioneer of the model.
Per Unckel, governor of Stockholm and former Minister of Education, has promoted the system, saying "Education is so important that you can't just leave it to one producer, because we know from monopoly systems that they do not fulfill all wishes." The Swedish system has been recommended to Barack Obama by some commentators, including the Pacific Research Institute, which has released a documentary called "Not As Good As You Think: Myth of the Middle Class Schools", a movie depicting positive benefits for middle class schools resulting from Sweden's voucher programs.
A 2004 study concluded that results in Swedish public schools improved due to the increased competition. However, Per Thulberg, director general of the Swedish National Agency for Education, has said that the system "has not led to better results", and in the 2000s Sweden's ranking in the PISA league tables worsened. Rachel Wolf, director of the New Schools Network, has suggested that Sweden's education standards slipped for reasons other than the free schools.
A 2015 study was able to show that "an increase in the share of independent school students improves average short‐ and long‐run outcomes, explained primarily by external effects (e.g. school competition)".
A voucher system for children three to six years old who attend a non-profit kindergarten was implemented in Hong Kong in 2007. Each child receives HK$13,000 per year, split into two parts: HK$10,000 subsidizes the school fee, and the remaining HK$3,000 funds kindergarten teachers pursuing further education towards a certificate in education. There are restrictions on the voucher system: parents can only choose non-profit kindergartens with a yearly fee of less than HK$24,000. The government hoped that all kindergarten teachers would obtain an education certificate by the 2011–12 school year, at which point the subsidies were to be adjusted to HK$16,000 per student, all of which would go toward the school fee.
Milton Friedman criticised the system, saying, "I do not believe that CE Mr. Tsang's proposal is properly structured." He said that the whole point of a voucher system is to provide a competitive marketplace, so it should not be limited to non-profit kindergartens.
After protests by parents with children enrolled in for-profit kindergartens, the program was extended to children in for-profit kindergartens, but only to those enrolled in or before September 2007. The government also offered a subsidy of up to HK$30,000 to for-profit kindergartens wanting to convert to non-profit status.
In the Pakistani Punjab, the Education Voucher Scheme (EVS) was introduced in 2005 by Dr. Allah Bakhsh Malik, Managing Director and Chief Executive of the Punjab Education Foundation (PEF), targeting urban slums and the poorest of the poor. The initial study was sponsored by the Open Society Institute, New York. Professor Henry M. Levin extended pro bono services for children of poor families from Punjab. To ensure educational justice and integration, the government must ensure that the poorest families have equal access to quality education. The voucher scheme was designed by Teachers College, Columbia University, and the Open Society Institute. It aims to promote freedom of choice, efficiency, equity, and social cohesion.
A pilot project was started in 2006 in the urban slums of Sukhnehar, Lahore, where a survey showed that all households lived below the poverty line. Through the EVS, the foundation would deliver education vouchers to every household with children 5–16 years of age. The vouchers would be redeemable against tuition payments at participating private schools. In the pilot stage, 1,053 households were given an opportunity to send their children to a private school of their choice. The EVS makes its partner schools accountable to the parents rather than to the bureaucrats at the Ministry of Education. In the FAS program, every school principal has the choice to admit a student or not. However, in the EVS, a partner school cannot refuse a student if the student has a voucher and the family has chosen that school. The partner schools are also accountable to the PEF: they are subject to periodic reviews of their student learning outcomes, additional private investments, and improvements in working conditions of the teachers. The EVS provides an incentive to parents to send their children to school, and so it has become a source of competition among private schools seeking to join the program.
When it comes to the selection of schools, the following criteria are applied across the board: (i) The fee paid by the PEF to EVS partner schools is PKR 550 per child per month; schools charging higher fees can also apply to the program, but they will not be paid more than PKR 1,200, and they will not be entitled to charge the difference to students' families. (ii) Total school enrollment should be at least 50 children. (iii) The school should have adequate infrastructure and a good learning environment. (iv) EVS partner schools should be located within a half-kilometer radius of the residences of voucher holders; however, if the parents prefer a particular school farther away, the PEF will not object, provided that the school fulfills the EVS selection criteria. (v) The PEF advertises to stimulate the interest of potential partner schools. It then gives students at short-listed schools preliminary tests in selected subjects, and conducts physical inspections of these schools. PEF offices display a list of all the EVS partner schools so that parents may consult it and choose a school for their children.
More than 500,000 students now benefit from the EVS, and the program is being scaled up with financing from the Government of Punjab.
In the 1980s, the Reagan administration pushed for vouchers, as did the George W. Bush administration in the initial education-reform proposals leading up to the No Child Left Behind Act. As of December 2016, 14 states had traditional school voucher programs: Arkansas, Florida, Georgia, Indiana, Louisiana, Maine, Maryland, Mississippi, North Carolina, Ohio, Oklahoma, Utah, Vermont, and Wisconsin. The capital of the United States, Washington, D.C., also had an operating school voucher program as of December 2016. When including scholarship tax credits and education savings accounts – two alternatives to vouchers – there are 27 states plus the District of Columbia with private school choice programs. Most of these programs were offered to students in low-income families, in low-performing schools, or with disabilities. By 2014, the number of students participating in either vouchers or tax-credit scholarships had increased to 250,000, a 30% increase from 2010, but still a small fraction compared to the 55 million in traditional schools.
In 1990, Milwaukee, Wisconsin's public schools were the first to offer vouchers; nearly 15,000 students used vouchers as of 2011. The program, entitled the Milwaukee Parental Choice Program, originally funded school vouchers only for nonreligious, private institutions. It was, however, eventually expanded to include private, religious institutions after it saw success with nonreligious, private institutions. The 2006/07 school year marked the first time in Milwaukee that more than $100 million was paid in vouchers. Twenty-six percent of Milwaukee students were expected to receive public funding to attend schools outside the traditional Milwaukee Public School system. In fact, if the voucher program alone were considered a school district, it would be the sixth-largest district in Wisconsin. St. Anthony Catholic School, located on Milwaukee's south side, has 966 voucher students, meaning that it very likely receives more public money for general school support of a parochial elementary or high school than any school before it in American history. A 2013 study of Milwaukee's program posited that the use of vouchers increased the probability that a student would graduate from high school, go to college, and stay in college. A 2015 paper published by the National Bureau of Economic Research found that participation in Louisiana's voucher program "substantially reduces academic achievement", although the result may reflect the poor quality of private schools in the program.
Recent analysis of the competitive effects of school vouchers in Florida suggests that more competition improves performance in the regular public schools.
The largest school voucher program in the United States is the Indiana Choice Scholarship program.
Proponents of school voucher and education tax credit systems argue that those systems promote free-market competition among both private and public schools by allowing parents and students to choose the schools at which to use their vouchers. This choice forces schools to improve perpetually in order to maintain enrollment. Thus, proponents argue that a voucher system increases school performance and accountability because it provides consumer sovereignty, allowing individuals to choose what product to buy, as opposed to a bureaucracy.
This argument is supported by studies such as "When Schools Compete: The Effects of Vouchers on Florida Public School Achievement" (Manhattan Institute for Policy Research, 2003), which concluded that public schools located near private schools eligible to accept voucher students made significantly greater improvements than similar schools not located near eligible private schools. Stanford's Caroline Hoxby, who has researched the systemic effects of school choice, determined that areas with greater residential school choice have consistently higher test scores at a lower per-pupil cost than areas with very few school districts. Hoxby studied the effects of vouchers in Milwaukee and of charter schools in Arizona and Michigan on nearby public schools. She found that public schools forced to compete made greater test-score gains than schools not faced with such competition, and that the so-called cream-skimming effect did not exist in any of the voucher districts examined. Her research also found that both private and public schools improved through the use of vouchers. Proponents add that similar competition has helped in the manufacturing, energy, transportation, and parcel delivery (UPS and FedEx vs. USPS) sectors, where services once run by government were later opened up to free-market competition.
Similarly, it is argued that such competition has helped in higher education, with publicly funded universities directly competing with private universities for tuition money provided by the government through programs such as the GI Bill and the Pell Grant in the United States. The Foundation for Educational Choice alleges that a school voucher plan "embodies exactly the same principle as the GI bills that provide for educational benefits to military veterans. The veteran gets a voucher good only for educational expense and he is completely free to choose the school at which he uses it, provided that it satisfies certain standards." The Pell Grant, a need-based aid, can, like a voucher, only be used for authorized school expenses at qualified schools; the money follows the student for use against those authorized expenses (not all expenses are covered).
Proponents are encouraged by private school sector growth, as they believe that private schools are typically more efficient, achieving results at a much lower per-pupil cost than public schools. A CATO Institute study of public and private school per-pupil spending in Phoenix, Los Angeles, D.C., Chicago, New York City, and Houston found that public schools spend 93% more per pupil than the estimated median private school.
Proponents claim that institutions often are forced to operate more efficiently when they are made to compete and that any resulting job losses in the public sector would be offset by the increased demand for jobs in the private sector.
Friedrich von Hayek on the privatizing of education:
Other notable supporters include New Jersey Senator Cory Booker, former Governor of South Carolina Mark Sanford, billionaire and American philanthropist John T. Walton, former Mayor of Baltimore Kurt L. Schmoke, former Massachusetts Governor Mitt Romney, and John McCain. A random survey of 210 Ph.D.-holding members of the American Economic Association found that over two-thirds of economists support giving parents educational vouchers that can be used at government-operated or privately operated schools, and that support is greater if the vouchers are intended for parents with low incomes or with children in poorly performing schools.
Another prominent proponent of the voucher system was Apple co-founder and CEO, Steve Jobs, who said:
As a practical matter, proponents note, most U.S. programs only offer poor families the same choice more affluent families already have, by providing them with the means to leave a failing school and attend one where the child can get an education. Because public schools are funded on a per-pupil basis, the money simply follows the child, but the cost to taxpayers is less because the voucher generally is less than the actual cost.
In addition, they say, comparisons of public and private schools on average are meaningless. Vouchers usually are utilized by children in failing schools, so those children can hardly be worse off even if their parents fail to choose a better school. Also, focusing on the effect on the public school suggests that the institution matters more than the education of the children.
Some proponents of school vouchers, including the Sutherland Institute and many supporters of the Utah voucher effort, see it as a remedy for the negative cultural impact caused by under-performing public schools, which falls disproportionately on demographic minorities. During the run-up to the November referendum election, Sutherland issued a controversial publication, "Vouchers, Vows, & Vexations". Sutherland called the publication an important review of the history of education in Utah, while critics called it revisionist history. Sutherland then released a companion article in a law journal as part of an academic conference about school choice.
EdChoice, founded by Milton and Rose Friedman in 1996, is a non-profit organization that promotes universal school vouchers and other forms of school choice. In defense of vouchers, it cites empirical research showing that students who were randomly assigned to receive vouchers had higher academic outcomes than students who applied for vouchers but lost a random lottery and did not receive them; and that vouchers improve academic outcomes at public schools, reduce racial segregation, deliver better services to special education students, and do not drain money from public schools.
EdChoice also argues that education funding should belong to children, not to a specific school type or building. The point of this argument is that people should prioritize a student's education and opportunity over improving a particular type of school. EdChoice also emphasizes that if a family chooses a public school, the funds go to that school, which would likewise benefit those who value the public education system.
The main critique of school vouchers and education tax credits is that they put public education in competition with private education, threatening to reduce and reallocate public school funding to private schools. Opponents question the belief that private schools are more efficient.
Public school teachers and teachers' unions have also fought against school vouchers. In the United States, public school teacher unions, most notably the National Education Association (the largest labor union in the USA), argue that school vouchers erode educational standards and reduce funding, and that giving money to parents who choose to send their child to a religious or other school is unconstitutional. The latter claim was rejected in the Supreme Court case "Zelman v. Simmons-Harris", which upheld Ohio's voucher plan in a 5–4 ruling. In contrast, the use of public school funding for vouchers to private schools was disallowed by the Louisiana Supreme Court in 2013. The court did not declare vouchers unconstitutional, only the use of money earmarked for public schools via the Louisiana Constitution to fund the state's voucher program. The National Education Association also points out that access to vouchers is like "a chance in a lottery", in which parents must be lucky to get a space in the program. Since almost all students and their families would like to choose the best schools, those schools quickly reach the maximum student capacity that state law permits. Those who do not get vouchers must then compete for other, less preferred schools or give up searching and return to their assigned local schools. Jonathan Kozol, a prominent public school reform thinker and former public school teacher, called vouchers the "single worst, most dangerous idea to have entered education discourse in my adult life".
The National Education Association additionally argues that more money should go toward public education to help struggling schools and improve schools overall, instead of reducing public school funding in favor of school vouchers. Its argument is that increasing the money that goes toward public education would also increase the resources available to public schools, thereby improving education. This argument reflects how the organization values public education. For example, in a May 2017 interview regarding Donald Trump's 2018 budget proposal, the organization's president, Lily Eskelsen García, claimed:
"We should invest in what makes schools great, the things that build curiosity and instill a love of learning. That is what every student deserves and what every parent wants for his or her child. It should not depend on how much their parents make, what language they speak at home, and certainly, not what neighborhood they live in." -National Education Association President Lily Eskelsen García.
Furthermore, there are multiple studies that support the arguments made by opponents of school vouchers. One of these studies, conducted by Tulane University's Education Research Alliance, examined the relationship between voucher programs and students' test scores using standardized test data from 2012 to 2015. It found that students in the Louisiana voucher program initially had lower test scores, but that after three years their scores matched those of students who had stayed in public schools.
People who can benefit from vouchers may not know it. In April 2012, a bill passed in Louisiana that made vouchers available to low-income families whose children attended poorly ranked schools. A student whose household income was low (up to about $44,000 for a family of three) and who attended a school ranked "C", "D", or "F" could apply for vouchers to attend another school. Of the estimated 380,000 eligible students during the school year when the bill was passed (2012/13), only about 5,000 knew about, applied for, and accepted the vouchers.
In 2006, the United States Department of Education released a report concluding that average test scores for reading and mathematics, when adjusted for student and school characteristics, tend to be very similar among public schools and private schools. Private schools performed significantly better than public schools only if results were not adjusted for factors such as race, gender, and free or reduced price lunch program eligibility. Other research questions assumptions that large improvements would result from a more comprehensive voucher system.
Given the limited budget for schools, it is claimed that a voucher system would weaken public schools while not providing enough money for people to attend private schools. 76% of the money given in Arizona's voucher program went to children already in private schools.
Some sources claim that public schools' higher per-pupil spending is due to having a higher proportion of students with behavioral, physical and emotional problems, since in the United States, public schools must by law accept any student regardless of race, gender, religion, disability, educational aptitude, and so forth, while private schools are not so bound. They argue that some, if not all, of the cost difference between public and private schools comes from "cream skimming", whereby the private schools select only those students who belong to a preferred group – whether economic, religious, educational aptitude level, or ethnicity – rather than from differences in administration. The end result, it has been argued, is that a voucher system has led or would lead students who do not belong to the private schools' preferred groupings to become concentrated at public schools. However, of the ten state-run voucher programs in the United States at the beginning of 2011, four targeted low-income students, two targeted students in failing schools, and six targeted students with special needs. (Louisiana ran a single program targeting all three groups.)
It is also argued that voucher programs are often implemented without the necessary safeguards to prevent institutions from discriminating against marginalized communities. In the United States, as of 2016, no state laws required voucher programs not to discriminate against marginalized communities. Further, while some voucher programs may be explicitly aimed at marginalized communities, this is not always the case. A common argument for school vouchers is that they allow marginalized communities of color to be lifted out of poverty. Historically, however, data suggest that voucher programs have been used to further segregate Americans. Moreover, some data have shown that the effects of voucher programs such as the New York City School Choice Scholarship Program are marginal when it comes to increasing student achievement.
Another argument against a school voucher system is its lack of accountability to taxpayers. In many states, members of a community's board of education are elected by voters. Similarly, a school budget faces a referendum. Meetings of the Board of Education must be announced in advance, and members of the public are permitted to voice their concerns directly to board members. By contrast, although vouchers may be used in private and religious schools, taxpayers cannot vote on budget issues, elect members of the board or even attend board meetings. Kevin Welner points out that vouchers funded through a convoluted tax credit system—a policy he calls "neovouchers"—present additional accountability concerns. With neovoucher systems, a taxpayer owing money to the state instead donates that money to a private, nonprofit organization. That organization then bundles donations and gives them to parents as vouchers to be used for private school tuition. The state then steps in and forgives (through a tax credit) some or all of the taxes that the donor has given to the organization. While conventional tax credit systems are structured to treat all private school participants equally, neovoucher systems effectively delegate to individual private taxpayers (those owing money to the state) the power to decide which private schools will benefit.
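A minimal numeric sketch of the neovoucher flow Welner describes, in Python; all dollar figures and the full credit rate are hypothetical illustrations, not figures from any actual program:

tax_owed = 1_000          # taxpayer's liability to the state (hypothetical)
donation = 1_000          # instead donated to a scholarship nonprofit
credit_rate = 1.0         # assumes the state forgives 100% of the donation
tax_after = tax_owed - credit_rate * donation   # liability forgiven via tax credit
voucher = donation        # nonprofit bundles donations into tuition vouchers
print(tax_after, voucher) # 0.0 1000: foregone public revenue funds private tuition

The sketch shows why critics call the arrangement a voucher in effect: the state ends up foregoing the same amount of revenue that reaches the private school, but the individual donor, not an elected body, decides which schools benefit.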
An example of the lack of accountability is the voucher situation in Louisiana. In 2012, Louisiana State Superintendent of Education John White selected private schools to receive vouchers, then tried to fabricate criteria (including site visits) after schools had already received approval letters. One school of note, New Living Word in Ruston, Louisiana, did not have sufficient facilities for the over 300 students White and the state board of education had approved. A 2013 voucher audit found that New Living Word had overcharged the state $395,000. White referred to the incident as a "lone substantive issue". However, most voucher schools did not undergo a complete audit because they did not keep a separate checking account for state voucher money.
According to Susanne Wiborg, an expert on comparative education, Sweden's voucher system introduced in 1992 has "augmented social and ethnic segregation, particularly in relation to schools in deprived areas".
Tax-credit scholarships, which are for the most part disbursed to current private school students or to families that made substantial donations to the scholarship fund, rather than to low-income students attempting to escape failing schools, amount to nothing more than a mechanism for using public funds, in the form of foregone taxes, to support private, often religiously based, schools.
The school voucher question in the United States has also received a considerable amount of judicial review in the early 2000s.
A program launched in the city of Cleveland in 1995 and authorized by the state of Ohio was challenged in court on the grounds that it violated both the federal constitutional principle of separation of church and state and the guarantee of religious liberty in the Ohio Constitution. These claims were rejected by the Ohio Supreme Court, but the federal claims were upheld by the local federal district court and by the Sixth Circuit appeals court. The fact that nearly all of the families using vouchers attended Catholic schools in the Cleveland area was cited in the decisions.
This was later reversed during 2002 in a landmark case before the US Supreme Court, "Zelman v. Simmons-Harris", in which the divided court, in a 5–4 decision, ruled the Ohio school voucher plan constitutional and removed any constitutional barriers to similar voucher plans in the future, with conservative justices Anthony Kennedy, Sandra Day O'Connor, William Rehnquist, Antonin Scalia, and Clarence Thomas in the majority.
Chief Justice William Rehnquist, writing for the majority, stated that "The incidental advancement of a religious mission, or the perceived endorsement of a religious message, is reasonably attributable to the individual aid recipients not the government, whose role ends with the disbursement of benefits." The Supreme Court ruled that the Ohio program did not violate the Establishment Clause, because it passed a five-part test developed by the Court in this case, titled the Private Choice Test.
Dissenting opinions included Justice Stevens's, who wrote "...the voluntary character of the private choice to prefer a parochial education over an education in the public school system seems to me quite irrelevant to the question whether the government's choice to pay for religious indoctrination is constitutionally permissible." and Justice Souter's, whose opinion questioned how the Court could keep "Everson v. Board of Education" on as precedent and decide this case in the way they did, feeling it was contradictory. He also found that religious instruction and secular education could not be separated and this itself violated the Establishment Clause.
In 2006, the Florida Supreme Court struck down legislation known as the Florida Opportunity Scholarship Program (OSP), which would have implemented a system of school vouchers in Florida. The court ruled that the OSP violated article IX, section 1(a) of the Florida Constitution: "Adequate provision shall be made by law for a uniform, efficient, safe, secure, and high quality system of free public schools." This decision was criticized as "educational policymaking" by Clark Neily, Institute for Justice senior attorney and legal counsel to Pensacola families using Florida Opportunity Scholarships.
Political support for school vouchers in the United States is mixed. On the left/right spectrum, conservatives are more likely to support vouchers. Some state legislatures have enacted voucher laws. In New Mexico, then-Republican Gary Johnson made school voucher provision the major issue of his second term as Governor. As of 2006, the federal government operated the largest voucher program, for evacuees from the region affected by Hurricane Katrina. The federal government also provided a voucher program for 7,500 residents of Washington, D.C., the D.C. Opportunity Scholarship Program, until early March 2009, when congressional Democrats moved to close down the program and remove children from their voucher-funded school places at the end of the 2009/10 school year under the $410 billion Omnibus Appropriations Act of 2009, which as of March 7 had passed the House and was pending in the Senate. The Obama administration stated that it preferred to allow children already enrolled in the program to finish their schooling while closing the program to new entrants. However, its preference on this matter did not appear to be strong enough to prevent the President from signing the bill.
Whether or not the public generally supports vouchers is debatable. Majorities seem to favor improving existing schools over providing vouchers, yet as many as 40% of those surveyed admit that they do not know enough to form an opinion or do not understand the system of school vouchers.
In November 2000, a voucher system proposed by Tim Draper was placed on the California ballot as Proposition 38. It was unusual among school voucher proposals in that it required neither accreditation on the part of schools accepting vouchers nor proof of need on the part of families applying for them; neither did it require that schools accept vouchers as payment in full, nor include any other provision to guarantee a reduction in the real cost of private school tuition. The measure was defeated by a final tally of 70.6% to 29.4%.
A statewide universal school voucher system providing a maximum tuition subsidy of $3,000 was passed in Utah in 2007, but 62% of voters repealed it in a statewide referendum before it took effect. On April 27, 2011 Indiana passed a statewide voucher program, the largest in the U.S. It offers up to $4,500 to students with household incomes under $41,000, and lesser benefits to households with higher incomes. The vouchers can be used to fund a variety of education options outside the public school system. In March 2013, the Indiana Supreme Court found that the program does not violate the state constitution.
President Donald Trump proposed a 2018 budget that included $250 million for voucher initiatives, state-funded programs that pay for students to go to private school. The budget's stated purpose was "expanding school choice, ensuring more children have an equal opportunity to receive a great education, maintaining strong support for the Nation's most vulnerable students, simplifying funding for post secondary education, continuing to build evidence around educational innovation, and eliminating or reducing Department programs consistent with the limited Federal role in education." The budget reduces or eliminates more than 30 programs that duplicate other programs, are ineffective, or are more appropriately supported with state, local, or private funds. Another $1 billion is set aside to encourage schools to adopt school-choice-friendly policies.
Betsy DeVos, Trump's education secretary, is also an advocate for voucher programs and has argued that they would lead to better educational outcomes for students. Trump and DeVos proposed cutting the Education Department's budget by about $3.6 billion while spending more than $1 billion on private school vouchers and other school choice plans.
DeVos made a statement regarding the purpose and importance of the budget, claiming:
"This budget makes an historic investment in America's students. President Trump is committed to ensuring the Department focuses on returning decision-making power back to the States, where it belongs, and on giving parents more control over their child's education. By refocusing the Department's funding priorities on supporting students, we can usher in a new era of creativity and ingenuity and lay a new foundation for American greatness." – Betsy DeVos, U.S. Secretary of Education
Some private religious schools in voucher programs teach creationism instead of the theory of evolution, including religious schools that teach religious theology side-by-side with or in place of science. Over 300 schools in the US have been documented as teaching creationism while receiving taxpayer money. Contrary to popular belief, state-funded religious education was narrowly deemed constitutional in "Zelman v. Simmons-Harris" (2002). However, 35 states have passed various Blaine Amendments restricting or prohibiting public funding of religious education.
Elegiac couplet
The elegiac couplet is a poetic form used by Greek lyric poets for a variety of themes, usually of smaller scale than the epic. Roman poets, particularly Catullus, Propertius, Tibullus, and Ovid, adopted the same form in Latin many years later. As with the English heroic couplet, each couplet usually makes sense on its own, while forming part of a larger work.
Each couplet consists of a hexameter verse followed by a pentameter verse. The following is a graphic representation of its scansion:
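In the scheme below, a minimal reconstruction of the standard pattern, "–" marks a long syllable, "u" a short syllable, and "x" a syllable that may be either; in the hexameter any of the first four dactyls, and in the pentameter either of the first two, may be contracted to a spondee (– –):

Hexameter: – uu | – uu | – uu | – uu | – uu | – x
Pentameter: – uu | – uu | – || – uu | – uu | x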
The form was felt by the ancients to contrast the rising action of the first verse with a falling quality in the second. The sentiment is summarized in a line from Ovid's "Amores" I.1.27 "Sex mihi surgat opus numeris, in quinque residat"—"Let my work rise in six steps, fall back in five." The effect is illustrated by Coleridge as: "In the hexameter rises the fountain's silvery column; / In the pentameter aye falling in melody back,"
translating Schiller: "Im Hexameter steigt des Springquells flüssige Säule, / Im Pentameter drauf fällt sie melodisch herab."
The elegiac couplet is presumed to be the oldest Greek form of epodic poetry (a form where a later verse is sung in response or comment to a previous one). Scholars, who even in the past did not know who created it, theorize the form was originally used in Ionian dirges, with the name "elegy" derived from the Greek "ἒ λέγε ἒ λέγε"—"Woe, cry woe, cry!" Hence, the form was used initially for funeral songs, typically accompanied by an aulos, a double-reed instrument. Archilochus expanded use of the form to treat other themes, such as war, travel, or homespun philosophy. Between Archilochus and other imitators, the verse form became a common poetic vehicle for conveying any strong emotion.
At the end of the 7th century BCE, Mimnermus of Colophon struck on the innovation of using the verse for erotic poetry. He composed several elegies celebrating his love for the flute girl Nanno, and though fragmentary today his poetry was clearly influential in the later Roman development of the form. Propertius, to cite one example, notes "Plus in amore valet Mimnermi versus Homero"—"The verse of Mimnermus is stronger in love than Homer".
The form continued to be popular throughout the Greek period and treated a number of different themes. Tyrtaeus composed elegies on a war theme, apparently for a Spartan audience. Theognis of Megara vented himself in couplets as an embittered aristocrat in a time of social change. Popular leaders were writers of elegy—Solon the lawgiver of Athens composed on political and ethical subjects—and even Plato and Aristotle dabbled with the meter.
By the Hellenistic period, the Alexandrian school made elegy its favorite and most highly developed form. They preferred the briefer style associated with elegy in contrast to the lengthier epic forms, and made it the singular medium for short epigrams. The founder of this school was Philitas of Cos. He was eclipsed only by the school's most admired exponent, Callimachus; their learned character and intricate art would have a heavy influence on the Romans.
Like many Greek forms, elegy was adapted by the Romans for their own literature. The fragments of Ennius contain a few couplets, and scattered verses attributed to Roman public figures like Cicero and Julius Caesar also survive.
But it is the elegists of the mid-to-late first century BCE who are most commonly associated with the distinctive Roman form of the elegiac couplet. Catullus, the first of these, is an invaluable link between the Alexandrine school and the subsequent elegies of Tibullus and Propertius a generation later. His collection, for example, shows a familiarity with the usual Alexandrine style of terse epigram and a wealth of mythological learning, while his 66th poem is a direct translation of Callimachus' "Coma Berenices". Arguably the most famous elegiac couplet in Latin is his two-line 85th poem "Odi et Amo": "Odi et amo. Quare id faciam, fortasse requiris. / Nescio, sed fieri sentio et excrucior."—"I hate and I love. Why I do this, perhaps you ask. / I do not know, but I feel it happen and I am tortured."
Many readers, particularly students of Latin, miss the metre when reading this poem aloud because of its high degree of elision.
Cornelius Gallus is another important statesman/writer of this period, one who was generally regarded by the ancients as among the greatest of the elegists. Other than a few scant lines, all of his work has been lost.
The form reached its zenith with the collections of Tibullus and Propertius and several collections of Ovid (the "Amores", "Heroides", "Tristia", and "Epistulae ex Ponto"). The vogue of elegy during this time is seen in the so-called 3rd and 4th books of Tibullus. Many poems in these books were clearly not written by Tibullus but by others, perhaps part of a circle under Tibullus' patron Messalla. Notable in this collection are the poems of Sulpicia, among the few surviving works by Classical Latin female poets.
Through these poets—and in comparison with the earlier Catullus—it is possible to trace specific characteristics and evolutionary patterns in the Roman form of the verse:
Although no classical poet wrote collections of love elegies after Ovid, the verse retained its popularity as a vehicle for popular occasional poetry. Elegiac verses appear, for example, in Petronius' "Satyricon", and Martial's Epigrams uses it for many witty stand-alone couplets and for longer pieces. The trend continues through the remainder of the empire; short elegies appear in Apuleius's story "Psyche and Cupid" and the minor writings of Ausonius.
After the fall of the empire, one writer who produced elegiac verse was Maximianus. Various Christian writers also adopted the form; Venantius Fortunatus wrote some of his hymns in the meter, while later Alcuin and the Venerable Bede dabbled in the verse. The form also remained popular among the educated classes for gravestone epitaphs; many such epitaphs can be found in European cathedrals.
"De tribus puellis" is an example of a Latin "fabliau", a genre of comedy which employed elegiac couplets in imitation of Ovid. The medieval theorist John of Garland wrote that "all comedy is elegy, but the reverse is not true." Medieval Latin had a developed comedic genre known as elegiac comedy. Sometimes narrative, sometimes dramatic, it deviated from ancient practice because, as Ian Thompson writes, "no ancient drama would ever have been written in elegiacs."
With the Renaissance, more skilled writers interested in the revival of Roman culture took on the form in a way which attempted to recapture the spirit of the Augustan writers. The Dutch Latinist Johannes Secundus, for example, included Catullus-inspired love elegies in his "Liber Basiorum", while the English poet John Milton wrote several lengthy elegies throughout his career. This trend continued down through the Recent Latin writers, whose close study of their Augustan counterparts reflects their general attempts to apply the cultural and literary forms of the ancient world to contemporary themes. | https://en.wikipedia.org/wiki?curid=9755 |
Exabyte
The exabyte is a multiple of the unit byte for digital information. In the International System of Units (SI), the prefix "exa" indicates multiplication by the sixth power of 1000 (10^18). Therefore, one exabyte is one quintillion bytes (short scale). The unit symbol for the exabyte is EB.
A related unit, the exbibyte, using a binary prefix, is equal to 2^60 bytes (= 1,152,921,504,606,846,976 bytes), about 15% larger than the exabyte.
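The roughly 15% gap follows directly from the two definitions; a minimal arithmetic sketch in Python, assuming nothing beyond the prefix definitions above:

exabyte = 10**18            # SI prefix "exa": the sixth power of 1000
exbibyte = 2**60            # binary prefix "exbi": the sixth power of 1024
print(exbibyte)             # 1152921504606846976
print(exbibyte / exabyte)   # ~1.1529, i.e. about 15% larger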
Allegedly, "all words ever spoken by human beings" could be stored in approximately 5 exabytes of data. This claim often cites a project at the UC Berkeley School of Information in support (although this project is now outdated and therefore not entirely accurate). The 2003 University of California, Berkeley, report credits the estimate to the website of Caltech researcher Roy Williams, where the statement can be found as early as May 1999. This statement has been criticized. Mark Liberman calculated the storage requirements for all human speech at 42 zettabytes (42,000 exabytes, and 8,400 times the original estimate) if digitized as 16 kHz 16-bit audio, although he did freely confess that "maybe the authors [of the exabyte estimate] were thinking about text".
Earlier studies from the University of California, Berkeley, estimated that by the end of 1999, the sum of human-produced information (including all audio, video recordings, and text/books) was about 12 exabytes of data. The 2003 Berkeley report stated that in 2002 alone, "telephone calls worldwide on both landlines and mobile phones contained 17.3 exabytes of new information if stored in digital form" and that "it would take 9.25 exabytes of storage to hold all U.S. [telephone] calls each year". International Data Corporation estimates that approximately 160 exabytes of digital information were created, captured, and replicated worldwide in 2006. Research from the University of Southern California estimates that the amount of data stored in the world by 2007 was 295 exabytes and the amount of information shared on two-way communications technology, such as cell phones, in 2007 as 65 exabytes.
The content of the Library of Congress is commonly estimated at 10 terabytes of data for all its printed material. Recent estimates that also include audio, video, and digital materials range from 3 petabytes to 20 petabytes. Therefore, one exabyte could hold the printed material a hundred thousand times over, or 50 to 300 times all the content of the Library of Congress.
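Those multiples follow from the figures just cited; a quick check in Python, using only the 10 TB, 20 PB, and 3 PB estimates quoted above:

EB = 10**18
print(EB / 10e12)   # 100,000 times the ~10 TB of printed material
print(EB / 20e15)   # 50 times the high-end 20 PB estimate
print(EB / 3e15)    # ~333 times the low-end 3 PB estimate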
In 2013, Randall Munroe compiled published assertions about Google's data centers, and estimated that the company has about 10 exabytes stored on disk, and additionally approximately 5 exabytes on tape backup. The company has not commented on Munroe's estimate. | https://en.wikipedia.org/wiki?curid=9756 |
Era
An era is a span of time defined for the purposes of chronology or historiography, as in the regnal eras in the history of a given monarchy, a calendar era used for a given calendar, or the geological eras defined for the history of Earth.
Comparable terms are epoch, age, period, saeculum, aeon (Greek "aion") and Sanskrit yuga.
The word has been in use in English since 1615, and is derived from Late Latin "aera" "an era or epoch from which time is reckoned," probably identical to Latin "æra" "counters used for calculation," plural of "æs" "brass, money".
The Latin word's use in chronology seems to have begun in 5th-century Visigothic Spain, where it appears in the "History" of Isidore of Seville, and in later texts. The Spanish era is calculated from 38 BC, perhaps because of a tax (cf. indiction) levied in that year, or due to a miscalculation of the Battle of Actium, which occurred in 31 BC.
Like epoch, "era" in English originally meant "the starting point of an age"; the meaning "system of chronological notation" is c.1646; that of "historical period" is 1741.
In chronology, an era is the highest level for the organization of the measurement of time. A calendar era indicates a span of many years which are numbered beginning at a specific reference date (epoch), which often marks the origin of a political state or cosmology, dynasty, ruler, the birth of a leader, or another significant historical or mythological event; it is generally called after its focus accordingly as in "Victorian era".
In large-scale natural science, there is need for another time perspective, independent from human activity, and indeed spanning a far longer period (mainly prehistoric), where "geologic era" refers to well-defined time spans.
The next-larger division of geologic time is the eon. The Phanerozoic Eon, for example, is subdivided into eras. There are currently three eras defined in the Phanerozoic; the following table lists them from youngest to oldest (BP is an abbreviation for "before present").
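Cenozoic Era: 66 million years BP to present
Mesozoic Era: 252 to 66 million years BP
Paleozoic Era: 541 to 252 million years BP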
The older Proterozoic and Archean eons are also divided into eras.
For periods in the history of the universe, the term "epoch" is typically preferred, but "era" is used e.g. of the "Stelliferous Era".
Calendar eras count the years since a particular date (epoch), often one with religious significance. Anno mundi ("year of the world") refers to a group of calendar eras based on a calculation of the age of the world, assuming it was created as described in the Book of Genesis. In Jewish religious contexts one of the versions is still used, and many Eastern Orthodox religious calendars used another version until 1728. Hebrew year 5772 AM began at sunset on 28 September 2011 and ended on 16 September 2012. In the Western church, Anno Domini ("AD", for which the secular "CE" is often substituted), counting the years since the birth of Jesus on traditional calculations, was always dominant.
The Islamic calendar, which also has variants, counts years from the Hijra, the emigration of the Islamic prophet Muhammad from Mecca to Medina, which occurred in 622 AD. The Islamic year is about eleven days shorter than 365 days; January 2012 fell in 1433 AH ("After Hijra").
From 1872 until the Second World War, the Japanese used the imperial year system ("kōki"), counting from the year when the legendary Emperor Jimmu founded Japan, traditionally dated to 660 BC.
Many Buddhist calendars count from the death of the Buddha, which according to the most commonly used calculations was in 545-543 BCE or 483 BCE. Dates are given as "BE" for "Buddhist Era"; 2000 AD was 2543 BE in the Thai solar calendar.
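Conversions between such calendar eras are fixed-offset arithmetic; a minimal sketch in Python of the Thai solar case quoted above (the function name is illustrative, not a standard API):

def ad_to_buddhist_era(year_ad):
    # Thai solar calendar: Buddhist Era = AD year + 543
    return year_ad + 543

print(ad_to_buddhist_era(2000))   # 2543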
Other calendar eras of the past counted from political events, such as the Seleucid era and the Ancient Roman "ab urbe condita" ("AUC"), counting from the foundation of the city.
The word era also denotes the units used under a different, more arbitrary system in which time is not represented as an endless continuum from a single reference year, but each unit starts counting from one again, as if time began anew. The use of regnal years is a rather impractical system and a challenge for historians if a single piece of the historical chronology is missing; it often reflects the preponderance in public life of an absolute ruler in many ancient cultures. Such traditions sometimes outlive the political power of the throne and may even be based on mythological events or rulers who may not have existed (for example, Rome numbering from the rule of Romulus and Remus). In a manner of speaking, the use of the supposed date of the birth of Christ as a base year is itself a form of era.
In East Asia, each emperor's reign may be subdivided into several reign periods, each being treated as a new era. The name of each was a motto or slogan chosen by the emperor. Different East Asian countries utilized slightly different systems, notably:
A similar practice survived in the United Kingdom until quite recently, but only for formal official writings: in daily life the ordinary year A.D. has been used for a long time, but Acts of Parliament were dated according to the years of the reign of the current Monarch, so that "61 & 62 Vict c. 37" refers to the Local Government (Ireland) Act 1898 passed in the session of Parliament in the 61st/62nd year of the reign of Queen Victoria.
"Era" can be used to refer to well-defined periods in historiography, such as the Roman era, Elizabethan era, Victorian era, etc.
Use of the term for more recent periods or topical history might include the Soviet era, or eras in the history of modern popular music, such as the "Big Band era", "Disco era", etc.
Eschatology
Eschatology is a part of theology concerned with the final events of history, or the ultimate destiny of humanity. This concept is commonly referred to as the "end of the world" or "end times".
The word arises from the Greek "eschatos" meaning "last" and "-logy" meaning "the study of", and first appeared in English around 1844. The "Oxford English Dictionary" defines eschatology as "the part of theology concerned with death, judgment, and the final destiny of the soul and of humankind".
In the context of mysticism, the term refers metaphorically to the end of ordinary reality and to reunion with the Divine. Many religions treat eschatology as a future event prophesied in sacred texts or in folklore.
Most modern eschatology and apocalypticism, both religious and secular, involves the violent disruption or destruction of the world; Christian and Jewish eschatologies view the end times as the consummation or perfection of God's creation of the world, albeit with violent overtures, such as the Great Tribulation. For example, according to some ancient Hebrew worldviews, reality unfolds along a linear path (or rather, a spiral path, with cyclical components that nonetheless have a linear trajectory); the world began with God and is ultimately headed toward God's final goal for creation, the world to come.
Eschatologies vary as to their degree of optimism or pessimism about the future. In some eschatologies, conditions are better for some and worse for others, e.g. "heaven and hell". They also vary as to time frames. Groups claiming "imminent" eschatology are also referred to as doomsday cults.
In Bahá'í belief, creation has neither a beginning nor an end. Instead, the eschatology of other religions is viewed as symbolic. In Bahá'í belief, human time is marked by a series of progressive revelations in which successive messengers or prophets come from God. The coming of each of these messengers is seen as the day of judgment to the adherents of the previous religion, who may choose to accept the new messenger and enter the "heaven" of belief, or denounce the new messenger and enter the "hell" of denial. In this view, the terms "heaven" and "hell" are seen as symbolic terms for the person's spiritual progress and their nearness to or distance from God. In Bahá'í belief, the coming of Bahá'u'lláh, the founder of the Bahá'í Faith, signals the fulfilment of previous eschatological expectations of Islam, Christianity and other major religions.
Christian eschatology is the study concerned with the ultimate destiny of the individual soul and the entire created order, based primarily upon biblical texts within the Old and New Testament.
Christian eschatology looks to study and discuss matters such as the nature of the Divine and the divine nature of Jesus Christ, death and the afterlife, Heaven and Hell, the Second Coming of Jesus, the resurrection of the dead, the Rapture, the Tribulation, Millennialism, the end of the world, the Last Judgment, and the New Heaven and New Earth in the world to come.
Eschatological passages are found in many places in the Bible, both in the Old and the New Testaments. In the Old Testament, apocalyptic eschatology can be found notably in Isaiah 24–27, Isaiah 56–66, Joel, Zechariah 9–14 as well as closing chapters of Daniel, and Ezekiel. In the New Testament, applicable passages include Matthew 24, Mark 13, the parable of "The Sheep and the Goats" and in the Book of Revelation—although Revelation often occupies a central place in Christian eschatology.
The Second Coming of Christ is the central event in Christian eschatology within the broader context of the fullness of the Kingdom of God. Most Christians believe that death and suffering will continue to exist until Christ's return. There are, however, various views concerning the order and significance of other eschatological events.
The Book of Revelation is at the core of Christian eschatology. The study of Revelation is usually divided into four interpretative methodologies or hermeneutics. In the Futurist approach, Revelation is treated mostly as unfulfilled prophecy taking place in some yet undetermined future. In the Preterist approach, Revelation is chiefly interpreted as having prophetic fulfillment in the past, principally the events of the first century CE.
In the Historicist approach, Revelation provides a broad view of history, and passages in Revelation are identified with major historical people and events. This is the view held by Jewish scholars and the early Christian church, and it was prevalent in Wycliffe's writings and those of other Reformers such as Martin Luther, John Calvin, and John Wesley, as well as Sir Isaac Newton and many others.
In the Idealist approach, the events of Revelation are neither past nor future, but are purely symbolic, dealing with the ongoing struggle and ultimate triumph of good over evil.
Contemporary Hindu eschatology is linked in the Vaishnavite tradition to the figure of Kalki, the tenth and last avatar of Vishnu before the age draws to a close who will reincarnate as Shiva and simultaneously dissolve and regenerate the universe.
Most Hindus believe that the current period is the Kali Yuga, the last of four "Yuga" that make up the current age. Each period has seen successive degeneration in the moral order, to the point that in the Kali Yuga quarrel and hypocrisy are the norm. In Hinduism, time is cyclic, consisting of cycles or "kalpas". Each kalpa lasts 4.1 – 8.2 billion years, which is one full day and night for Brahma, who in turn will live for 311 trillion, 40 billion years. The cycle of birth, growth, decay, and renewal at the individual level finds its echo in the cosmic order, yet is affected by vagaries of divine intervention in Vaishnavite belief. Some Shaivites hold the view that Shiva is incessantly destroying and creating the world.
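The lifetime figure can be reproduced with simple arithmetic, assuming the traditional value of 4.32 billion years for a single day of Brahma (one kalpa):

kalpa = 4.32e9                    # one day of Brahma, in years (traditional figure, assumed here)
day_and_night = 2 * kalpa         # 8.64 billion years for a full day and night
brahma_year = 360 * day_and_night # 360 such days make one year of Brahma
lifetime = 100 * brahma_year      # Brahma lives 100 of his own years
print(lifetime)                   # 3.1104e14, i.e. 311 trillion, 40 billion years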
Islamic eschatology is documented in the sayings of the Prophet Muhammad, regarding the Signs of the Day of Judgement.
The Prophet's sayings on the subject have been traditionally divided into Major and Minor Signs. He spoke about several Minor Signs of the approach of the Day of Judgment, including:
Regarding the Major Signs, a Companion of the Prophet narrated: "Once we were sitting together and talking amongst ourselves when the Prophet appeared. He asked us what it was we were discussing. We said it was the Day of Judgment. He said: 'It will not be called until ten signs have appeared: Smoke, Dajjal (the Antichrist), the creature (that will wound the people), the rising of the sun in the West, the Second Coming of Jesus, the emergence of Gog and Magog, and three sinkings (or cavings in of the earth): one in the East, another in the West and a third in the Arabian Peninsula.'" (note: the previous events were not listed in the chronological order of appearance)
Jewish eschatology is concerned with events that will happen in the end of days, according to the Hebrew Bible and Jewish thought. This includes the ingathering of the exiled diaspora, the coming of the Jewish Messiah, afterlife, and the revival of the dead Tzadikim.
In Judaism, the end times are usually called the "end of days" ("aḥarit ha-yamim", אחרית הימים), a phrase that appears several times in the Tanakh. The idea of a messianic age has a prominent place in Jewish thought and is incorporated as part of the end of days.
Judaism addresses the end times in the Book of Daniel and numerous other prophetic passages in the Hebrew scriptures, and also in the Talmud, particularly Tractate Avodah Zarah.
Frashokereti is the Zoroastrian doctrine of a final renovation of the universe when evil will be destroyed, and everything else will then be in perfect unity with God (Ahura Mazda). The doctrinal premises are (1) good will eventually prevail over evil; (2) creation was initially perfectly good, but was subsequently corrupted by evil; (3) the world will ultimately be restored to the perfection it had at the time of creation; (4) the "salvation for the individual depended on the sum of [that person's] thoughts, words and deeds, and there could be no intervention, whether compassionate or capricious, by any divine being to alter this." Thus, each human bears the responsibility for the fate of his own soul, and simultaneously shares in the responsibility for the fate of the world.
Researchers in futures studies and transhumanists investigate how the accelerating rate of scientific progress may lead to a "technological singularity" in the future that would profoundly and unpredictably change the course of human history, and result in "Homo sapiens" no longer being the dominant life form on Earth.
Occasionally the term "physical eschatology" is applied to the long-term predictions of astrophysics. The Sun will turn into a red giant in approximately 6 billion years. Life on Earth will become impossible due to a rise in temperature long before the planet is actually swallowed up by the Sun. Even later, the Sun will become a white dwarf. | https://en.wikipedia.org/wiki?curid=9760 |
Ecumenical council
An ecumenical council (or oecumenical council; also general council) is a conference of ecclesiastical dignitaries and theological experts convened to discuss and settle matters of Church doctrine and practice in which those entitled to vote are convoked from the whole world (oikoumene) and which secures the approbation of the whole Church.
The word "ecumenical" derives from the Late Latin "oecumenicus" "general, universal", from Greek "oikoumenikos" "from the whole world", from "he oikoumene ge" "the inhabited world (as known to the ancient Greeks); the Greeks and their neighbors considered as developed human society (as opposed to barbarian lands)", in later use "the Roman world" and in the Christian sense in ecclesiastical Greek, from "oikoumenos", present passive participle of "oikein" "inhabit", from "oikos" "house, habitation." The first seven ecumenical councils, recognised by both the eastern and western denominations comprising Chalcedonian Christianity, were convoked by Roman Emperors, who also enforced the decisions of those councils within the state church of the Roman Empire.
Starting with the third ecumenical council, noteworthy schisms led to non-participation by some members of what had previously been considered a single Christian Church. Thus, some parts of Christianity did not attend later councils, or attended but did not accept the results. Bishops belonging to what became known as the Eastern Orthodox Church accept only seven ecumenical councils, as described below. Bishops belonging to what became known as the Church of the East only participated in the first two councils. Bishops belonging to what became known as Oriental Orthodoxy participated in the first four councils, but rejected the decisions of the fourth and did not attend any subsequent ecumenical councils.
Acceptance of councils as ecumenical and authoritative varies between different Christian denominations. Disputes over christological and other questions have led certain branches to reject some councils that others accept.
The Church of the East (accused by others of adhering to Nestorianism) accepts as ecumenical only the first two councils. Oriental Orthodox Churches accept the first three. Both the Eastern Orthodox Church and Catholic Church recognise as ecumenical the first seven councils, held from the 4th to the 9th centuries. While the Eastern Orthodox Church accepts no later council or synod as ecumenical, the Catholic Church continues to hold general councils of the bishops in full communion with the Pope, reckoning them as ecumenical. In all, the Catholic Church recognises twenty-one councils as ecumenical. Anglicans and confessional Protestants accept either the first seven or the first four as ecumenical councils.
The doctrine of the "infallibility of ecumenical councils" states that solemn definitions of ecumenical councils, which concern faith or morals, and to which the whole Church must adhere, are infallible. Such decrees are often labeled as 'Canons' and they often have an attached anathema, a penalty of excommunication, against those who refuse to believe the teaching. The doctrine does not claim that every aspect of every ecumenical council is dogmatic, but that every aspect of an ecumenical council is free of errors or is indefectible.
Both the Eastern Orthodox and the Catholic churches uphold versions of this doctrine. However, the Catholic Church holds that solemn definitions of ecumenical councils meet the conditions of infallibility only when approved by the Pope, while the Eastern Orthodox Church holds that an ecumenical council is itself infallible when pronouncing on a specific matter.
Protestant churches would generally view ecumenical councils as fallible human institutions that have no more than a derived authority to the extent that they correctly expound Scripture (as most would generally consider occurred with the first four councils in regard to their dogmatic decisions).
Church councils were, from the beginning, bureaucratic exercises. Written documents were circulated, speeches made and responded to, votes taken, and final documents published and distributed. A large part of what is known about the beliefs of heresies comes from the documents quoted in councils in order to be refuted, or indeed only from the deductions based on the refutations.
Most councils dealt not only with doctrinal but also with disciplinary matters, which were decided in "canons" ("laws"). Study of the canons of church councils is the foundation of the development of canon law, especially the reconciling of seemingly contradictory canons or the determination of priority between them. Canons consist of doctrinal statements and disciplinary measures—most Church councils and local synods dealt with immediate disciplinary concerns as well as major difficulties of doctrine. Eastern Orthodoxy typically views the purely doctrinal canons as dogmatic and applicable to the entire church at all times, while the disciplinary canons apply to a particular time and place and may or may not be applicable in other situations.
Of the seven councils recognised in whole or in part by both the Catholic and the Eastern Orthodox Church as ecumenical, all were called by a Roman emperor. The emperor gave them legal status within the entire Roman Empire. All were held in the eastern part of the Roman Empire. The bishop of Rome (self-styled as "pope" since the end of the fourth century) did not attend, although he sent legates to some of them.
Church councils were traditional and the ecumenical councils were a continuation of earlier councils (also known as synods) held in the Empire before Christianity was made legal. These include the Council of Jerusalem (c. 50), the Council of Rome (155), the Second Council of Rome (193), the Council of Ephesus (193), the Council of Carthage (251), the Council of Iconium (258), the Council of Antioch (264), the Councils of Arabia (246–247), the Council of Elvira (306), the Council of Carthage (311), the Synod of Neo-Caesarea (c. 314), the Council of Ancyra (314) and the Council of Arles (314).
The first seven councils recognised in both East and West as ecumenical and several others to which such recognition is refused were called by the Byzantine emperors. In the first millennium, various theological and political differences such as Nestorianism or Dyophysitism caused parts of the Church to separate after councils such as those of Ephesus and Chalcedon, but councils recognised as ecumenical continued to be held.
The Council of Hieria of 754, held at the imperial palace of that name close to Chalcedon in Anatolia, was summoned by Byzantine Emperor Constantine V and was attended by 338 bishops, who regarded it as the seventh ecumenical council. The Second Council of Nicaea, which annulled that of Hieria, was itself annulled at a synod held in 815 in Constantinople under Emperor Leo V. This synod, presided over by Patriarch Theodotus I of Constantinople, declared the Council of Hieria to be the seventh ecumenical council. Yet, although the Council of Hieria was called by one emperor and confirmed by another, and although it was held in the east, it later ceased to be considered ecumenical.
Similarly, the Second Council of Ephesus of 449, also held in Anatolia, was called by the Byzantine Emperor Theodosius II and, though annulled by the Council of Chalcedon, was confirmed by Emperor Basiliscus, who annulled the Council of Chalcedon. This too ceased to be considered an ecumenical council.
The Catholic Church does not consider the validity of an ecumenical council's teaching to be in any way dependent on where it is held or on the granting or withholding of prior authorization or legal status by any state, in line with the attitude of the 5th-century bishops who "saw the definition of the church's faith and canons as supremely their affair, with or without the leave of the Emperor" and who "needed no one to remind them that Synodical process pre-dated the Christianisation of the royal court by several centuries".
The Catholic Church recognizes as ecumenical various councils held later than the First Council of Ephesus (after which churches out of communion with the Holy See because of the Nestorian Schism did not participate), later than the Council of Chalcedon (after which there was no participation by churches that rejected Dyophysitism), later than the Second Council of Nicaea (after which there was no participation by the Eastern Orthodox Church), and later than the Fifth Council of the Lateran (after which groups that adhered to Protestantism did not participate).
Of the twenty-one ecumenical councils recognised by the Catholic Church, some gained recognition as ecumenical only later. Thus the Eastern First Council of Constantinople became ecumenical only when its decrees were accepted in the West also.
In the history of Christianity, the first seven ecumenical councils, from the First Council of Nicaea (325) to the Second Council of Nicaea (787), represent an attempt to reach an orthodox consensus and to unify Christendom.
All of the original seven ecumenical councils as recognized in whole or in part were called by an emperor of the Eastern Roman Empire, and all were held in the Eastern Roman Empire. This recognition was denied to other councils similarly called by an Eastern Roman emperor and held in his territory, in particular the Council of Serdica (343), the Second Council of Ephesus (449) and the Council of Hieria (754), each of which saw itself as ecumenical or was intended as such.
As late as the 11th century, only seven councils were recognised as ecumenical in the Catholic Church. Then, in the time of Pope Gregory VII (1073–1085), canonists who, during the Investiture Controversy, quoted the prohibition in canon 22 of the Council of Constantinople of 869–870 against laymen influencing the appointment of prelates, elevated this council to the rank of ecumenical council. Only in the 16th century was recognition as ecumenical granted by Catholic scholars to the Councils of the Lateran, of Lyon and those that followed. The following is a list of further councils generally recognised as ecumenical by Catholic theologians:
Eastern Orthodox catechisms teach that there are seven ecumenical councils and there are feast days for seven ecumenical councils. Nonetheless, some Eastern Orthodox consider events like the Council of Constantinople of 879–880, that of Constantinople in 1341–1351 and that of Jerusalem in 1672 to be ecumenical:
It is unlikely that formal ecumenical recognition will be granted to these councils, despite the acknowledged orthodoxy of their decisions, so that only seven are universally recognized among the Eastern Orthodox as ecumenical.
The 2016 Pan-Orthodox Council was sometimes referred to as a potential "Eighth Ecumenical Council" following debates on several issues facing Eastern Orthodoxy; however, not all autocephalous churches were represented.
Although some Protestants reject the concept of an ecumenical council establishing doctrine for the entire Christian faith, Catholics, Lutherans, Anglicans, Eastern Orthodox and Oriental Orthodox all accept the authority of ecumenical councils in principle. Where they differ is in which councils they accept and what the conditions are for a council to be considered "ecumenical". The relationship of the Papacy to the validity of ecumenical councils is a ground of controversy between Catholicism and the Eastern Orthodox Churches. The Catholic Church holds that recognition by the Pope is an essential element in qualifying a council as ecumenical; Eastern Orthodox view approval by the Bishop of Rome (the Pope) as being roughly equivalent to that of other patriarchs.
Some have held that a council is ecumenical only when all five patriarchs of the Pentarchy are represented at it. Others reject this theory in part because there were no patriarchs of Constantinople and Jerusalem at the time of the first ecumenical council.
Both the Catholic and Eastern Orthodox churches recognize seven councils in the early centuries of the church, but Catholics also recognize fourteen councils in later times called or confirmed by the Pope. At the urging of German King Sigismund, who was to become Holy Roman Emperor in 1433, the Council of Constance was convoked in 1414 by Antipope John XXIII, one of three claimants to the papal throne, and was reconvened in 1415 by the Roman Pope Gregory XII. The Council of Florence is an example of a council accepted as ecumenical in spite of being rejected by the East, as the Councils of Ephesus and Chalcedon are accepted in spite of being rejected respectively by the Church of the East and Oriental Orthodoxy.
The Catholic Church teaches that an ecumenical council is a gathering of the College of Bishops (of which the Bishop of Rome is an essential part) to exercise in a solemn manner its supreme and full power over the whole Church. It holds that "there never is an ecumenical council which is not confirmed or at least recognized as such by Peter's successor". Its present canon law requires that an ecumenical council be convoked and presided over, either personally or through a delegate, by the Pope, who is also to decide the agenda. However, the church makes no claim that all past ecumenical councils observed these present rules, declaring only that the Pope's confirmation or at least recognition has always been required. It notes that the version of the Nicene Creed adopted at the First Council of Constantinople (381) was accepted by the Church of Rome only seventy years later, in 451, and one writer has even claimed that this council was summoned without the knowledge of the pope.
The Eastern Orthodox Church accepts seven ecumenical councils, with the disputed Council in Trullo—rejected by Catholics—being incorporated into, and considered as a continuation of, the Third Council of Constantinople.
The Orthodox accept a council as ecumenical on the condition that it was accepted by the whole church; that it was called together legally is also an important factor. A case in point is the Third Ecumenical Council, where two groups met in Ephesus as duly called for by Emperor Theodosius II, each claiming to be the legitimate council. Theodosius did not attend but sent his representative Candidian to preside. However, Cyril managed to open the council over Candidian's insistent demands that the bishops disperse until the delegation from Syria could arrive. Cyril was able to control the proceedings completely, neutralizing Candidian, who favored Cyril's antagonist, Nestorius. When the pro-Nestorius Antiochene delegation finally arrived, it convened its own council, over which Candidian presided. The proceedings of both councils were reported to the emperor, who ultimately decided to depose Cyril, Memnon and Nestorius. Nonetheless, the Orthodox accept Cyril's group as the legitimate council because it maintained the same teaching that the church has always taught.
Paraphrasing a rule by St Vincent of Lérins, Hasler states
Orthodox believe that councils could overrule or even depose popes. At the Sixth Ecumenical Council, Pope Honorius and Patriarch Sergius were declared heretics; the council anathematized them, declared them tools of the devil, and cast them out of the church.
It is their position that, since the Seventh Ecumenical Council, there has been no synod or council of the same scope. Local meetings of hierarchs have been called "pan-Orthodox", but these have invariably been simply meetings of local hierarchs of whatever Eastern Orthodox jurisdictions are party to a specific local matter. From this point of view, there has been no fully "pan-Orthodox" (ecumenical) council since 787. The use of the term "pan-Orthodox" can, however, confuse those outside Eastern Orthodoxy, giving the mistaken impression that these are "ersatz" ecumenical councils rather than purely local councils to which nearby Orthodox hierarchs, regardless of jurisdiction, are invited.
Others, including 20th-century theologians Metropolitan Hierotheos (Vlachos) of Naupactus, Fr. John S. Romanides, and Fr. George Metallinos (all of whom refer repeatedly to the "Eighth and Ninth Ecumenical Councils"), Fr. George Dragas, and the 1848 Encyclical of the Eastern Patriarchs (which refers explicitly to the "Eighth Ecumenical Council" and was signed by the patriarchs of Constantinople, Jerusalem, Antioch, and Alexandria as well as the Holy Synods of the first three), regard other synods beyond the Seventh Ecumenical Council as being ecumenical.
From the Eastern Orthodox perspective, a council is accepted as being ecumenical if it is accepted by the Eastern Orthodox church at large – clergy, monks and assembly of believers. Teachings from councils that purport to be ecumenical, but which lack this acceptance by the church at large, are, therefore, not considered ecumenical.
Oriental Orthodoxy accepts three ecumenical councils: the First Council of Nicaea, the First Council of Constantinople, and the Council of Ephesus. The formulation of the Chalcedonian Creed caused a schism in the Alexandrian and Syriac churches. Reconciliation efforts between the Oriental Orthodox and both the Eastern Orthodox and the Catholic Church in the mid- and late 20th century have led to common Christological declarations. The Oriental and Eastern Churches have also been working toward reconciliation as a consequence of the ecumenical movement.
The Oriental Orthodox hold that the Dyophysite formula of two natures formulated at the Council of Chalcedon is inferior to the Miaphysite formula of "One Incarnate Nature of God the Word" (Byzantine Greek: "Mia physis tou theou logou sesarkōmenē") and that the proceedings of Chalcedon themselves were motivated by imperial politics. The Alexandrian Church, the main Oriental Orthodox body, also felt unfairly underrepresented at the council following the deposition of its pope, Dioscorus of Alexandria.
The Church of the East accepts two ecumenical councils, the First Council of Nicaea and the First Council of Constantinople. It was the formulation of Mary as the Theotokos which caused a schism with the Church of the East, now divided between the Assyrian Church of the East and the Ancient Church of the East, while the Chaldean Catholic Church entered into full communion with Rome in the 16th century. Meetings between Pope John Paul II and the Assyrian Patriarch Mar Dinkha IV led to a common Christological declaration on 11 November 1994 that "the humanity to which the Blessed Virgin Mary gave birth always was that of the Son of God himself". Both sides recognised the legitimacy and rightness, as expressions of the same faith, of the Assyrian Church's liturgical invocation of Mary as "the Mother of Christ our God and Saviour" and of the Catholic Church's use of "the Mother of God" and "the Mother of Christ".
While the Councils are part of the "historic formularies" of Anglican tradition, it is difficult to locate an explicit reference in Anglicanism to the unconditional acceptance of all Seven Ecumenical Councils. There is little evidence of dogmatic or canonical acceptance beyond the statements of individual Anglican theologians and bishops.
Bishop Chandler Holder Jones, SSC, explains:
He quotes William Tighe, Associate Professor of History at Muhlenberg College in Allentown, Pennsylvania (another member of the Anglo-Catholic wing of Anglicanism):
Article XXI teaches: "General Councils ... when they be gathered together, forasmuch as they be an assembly of men, whereof all be not governed with the Spirit and word of God, they may err and sometime have erred, even in things pertaining to God. Wherefore things ordained by them as necessary to salvation have neither strength nor authority, unless it may be declared that they be taken out of Holy Scripture."
The 19th Canon of 1571 asserted the authority of the Councils in this manner: "Let preachers take care that they never teach anything...except what is agreeable to the doctrine of the Old and New Testament, and what the Catholic Fathers and ancient Bishops have collected from the same doctrine." This remains the Church of England's teaching on the subject. A modern version of this appeal to catholic consensus is found in the Canon Law of the Church of England and also in the liturgy published in "Common Worship":
The 1559 Act of Supremacy made a distinction between the decisions of the first four ecumenical councils, which were to be used as sufficient proof that something was heresy, as opposed to those of later councils, which could only be used to that purpose if "the same was declared heresy by the express and plain words of the...canonical Scriptures".
Many Protestants (especially those belonging to the magisterial traditions, such as Lutherans, or those such as Methodists, that broke away from the Anglican Communion) accept the teachings of the first seven councils but do not ascribe to the councils themselves the same authority as Catholics and the Eastern Orthodox do. The Lutheran World Federation, in ecumenical dialogues with the Ecumenical Patriarch of Constantinople, has affirmed all of the first seven councils as ecumenical and authoritative.
Some, including some fundamentalist Christians, condemn the ecumenical councils for other reasons. Independency or congregationalist polity among Protestants may involve the rejection of any governmental structure or binding authority above local congregations; conformity to the decisions of these councils is therefore considered purely voluntary and the councils are to be considered binding only insofar as those doctrines are derived from the Scriptures. Many of these churches reject the idea that anyone other than the authors of Scripture can directly lead other Christians by original divine authority; after the New Testament, they assert, the doors of revelation were closed and councils can only give advice or guidance, but have no authority. They consider new doctrines not derived from the sealed canon of Scripture to be both impossible and unnecessary whether proposed by church councils or by more recent prophets. Catholic and Orthodox objections to this position point to the fact that the Canon of Scripture itself was fixed by these councils. They conclude that this would lead to a logical inconsistency of a non-authoritative body fixing a supposedly authoritative source.
Ecumenical councils are not recognised by nontrinitarian churches such as The Church of Jesus Christ of Latter-day Saints (and other denominations within the Latter Day Saint movement), Jehovah's Witnesses, Church of God (Seventh-Day), their descendants and Unitarians. They view the ecumenical councils as misguided human attempts to establish doctrine, and as attempts to define dogmas by debate rather than by revelation. | https://en.wikipedia.org/wiki?curid=9762 |
Exoplanet
An exoplanet or extrasolar planet is a planet outside the Solar System. The first possible evidence of an exoplanet was noted in 1917, but was not recognized as such. The first confirmation of detection occurred in 1992. This was followed by the confirmation of a different planet, originally detected in 1988.
There are many methods of detecting exoplanets. Transit photometry and Doppler spectroscopy have found the most, but these methods suffer from a clear observational bias favoring the detection of planets near the star; thus, 85% of the exoplanets detected are inside the tidal locking zone. In several cases, multiple planets have been observed around a star. About 1 in 5 Sun-like stars have an "Earth-sized" planet in the habitable zone. Assuming there are 200 billion stars in the Milky Way, it can be hypothesized that there are 11 billion potentially habitable Earth-sized planets in the Milky Way, rising to 40 billion if planets orbiting the numerous red dwarfs are included.
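The headline estimate is simple multiplication, as the sketch below shows (Python; every input is an illustrative assumption, with the fraction of Sun-like stars back-solved from the quoted figures rather than taken from a survey):

    N_STARS = 200e9      # assumed number of stars in the Milky Way
    F_SUNLIKE = 0.275    # assumed fraction of Sun-like stars, chosen so the
                         # result reproduces the 11-billion figure above
    F_EARTH_HZ = 1 / 5   # ~1 in 5 Sun-like stars hosts an Earth-sized
                         # planet in the habitable zone

    n_habitable = N_STARS * F_SUNLIKE * F_EARTH_HZ
    print(f"{n_habitable:.1e} potentially habitable Earth-sized planets")
    # -> 1.1e+10, i.e. about 11 billion; folding in red dwarfs with their
    #    own occurrence rate is what pushes the total toward 40 billion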
The least massive planet known is Draugr (also known as PSR B1257+12 A or PSR B1257+12 b), which is about twice the mass of the Moon. The most massive planet listed on the NASA Exoplanet Archive is HR 2562 b, about 30 times the mass of Jupiter, although according to some definitions of a planet (based on the nuclear fusion of deuterium), it is too massive to be a planet and may be a brown dwarf instead. Known orbital periods for exoplanets vary from a few hours (for those closest to their star) to thousands of years. Some exoplanets are so far away from the star that it is difficult to tell whether they are gravitationally bound to it. Almost all of the planets detected so far are within the Milky Way, though there is evidence that extragalactic planets (exoplanets in galaxies beyond the Milky Way) may exist. The nearest exoplanet is Proxima Centauri b, located 4.2 light-years (1.3 parsecs) from Earth and orbiting Proxima Centauri, the closest star to the Sun.
The discovery of exoplanets has intensified interest in the search for extraterrestrial life. There is special interest in planets that orbit in a star's habitable zone, where it is possible for liquid water, a prerequisite for life on Earth, to exist on the surface. The study of planetary habitability also considers a wide range of other factors in determining the suitability of a planet for hosting life.
Rogue planets do not orbit any star. Such objects are considered a separate category of planet, especially if they are gas giants, which are often counted as sub-brown dwarfs. The rogue planets in the Milky Way possibly number in the billions or more.
The convention for designating exoplanets is an extension of the system used for designating multiple-star systems as adopted by the International Astronomical Union (IAU). For exoplanets orbiting a single star, the IAU designation is formed by taking the designated or proper name of its parent star, and adding a lower case letter. Letters are given in order of each planet's discovery around the parent star, so that the first planet discovered in a system is designated "b" (the parent star is considered to be "a") and later planets are given subsequent letters. If several planets in the same system are discovered at the same time, the closest one to the star gets the next letter, followed by the other planets in order of orbital size. A provisional IAU-sanctioned standard exists to accommodate the designation of circumbinary planets. A limited number of exoplanets have IAU-sanctioned proper names. Other naming systems exist.
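As a hypothetical illustration of the lettering rule only (not an official IAU tool; the star name and orbit values are invented), the sketch below sorts planets by announcement order and breaks ties by orbital size:

    from string import ascii_lowercase

    def designate(star, planets):
        # planets: (announcement_order, orbital_size_au) pairs; ties in
        # announcement order are lettered from the innermost orbit outward,
        # and letters start at "b" because "a" denotes the star itself
        ordered = sorted(planets, key=lambda p: (p[0], p[1]))
        return {p: f"{star} {ascii_lowercase[i + 1]}" for i, p in enumerate(ordered)}

    # one planet announced first, then two announced simultaneously:
    print(designate("Example-1", [(1, 0.05), (2, 1.2), (2, 0.3)]))
    # {(1, 0.05): 'Example-1 b', (2, 0.3): 'Example-1 c', (2, 1.2): 'Example-1 d'}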
For centuries scientists, philosophers, and science fiction writers suspected that extrasolar planets existed, but there was no way of knowing whether they existed, how common they were, or how similar they might be to the planets of the Solar System. Various detection claims made in the nineteenth century were rejected by astronomers.
The first evidence of a possible exoplanet, orbiting Van Maanen 2, was noted in 1917, but was not recognized as such. The astronomer Walter Sydney Adams, who later became director of the Mount Wilson Observatory, produced a spectrum of the star using Mount Wilson's 60-inch telescope. He interpreted the spectrum to be of an F-type main-sequence star, but it is now thought that such a spectrum could be caused by the residue of a nearby exoplanet that had been pulverized into dust by the gravity of the star, the resulting dust then falling onto the star.
The first suspected scientific detection of an exoplanet occurred in 1988. Shortly afterwards, the first confirmation of detection came in 1992, with the discovery of several terrestrial-mass planets orbiting the pulsar PSR B1257+12. The first confirmation of an exoplanet orbiting a main-sequence star was made in 1995, when a giant planet was found in a four-day orbit around the nearby star 51 Pegasi. Some exoplanets have been imaged directly by telescopes, but the vast majority have been detected through indirect methods, such as the transit method and the radial-velocity method. In February 2018, researchers using the Chandra X-ray Observatory, combined with a planet detection technique called microlensing, found evidence of planets in a distant galaxy, stating "Some of these exoplanets are as (relatively) small as the moon, while others are as massive as Jupiter. Unlike Earth, most of the exoplanets are not tightly bound to stars, so they're actually wandering through space or loosely orbiting between stars. We can estimate that the number of planets in this [faraway] galaxy is more than a trillion."
In the sixteenth century, the Italian philosopher Giordano Bruno, an early supporter of the Copernican theory that Earth and other planets orbit the Sun (heliocentrism), put forward the view that the fixed stars are similar to the Sun and are likewise accompanied by planets.
In the eighteenth century, the same possibility was mentioned by Isaac Newton in the "General Scholium" that concludes his "Principia". Making a comparison to the Sun's planets, he wrote "And if the fixed stars are the centres of similar systems, they will all be constructed according to a similar design and subject to the dominion of "One"."
In 1952, more than 40 years before the first hot Jupiter was discovered, Otto Struve wrote that there is no compelling reason why planets could not be much closer to their parent star than is the case in the Solar System, and proposed that Doppler spectroscopy and the transit method could detect super-Jupiters in short orbits.
Claims of exoplanet detections have been made since the nineteenth century. Some of the earliest involve the binary star 70 Ophiuchi. In 1855 William Stephen Jacob at the East India Company's Madras Observatory reported that orbital anomalies made it "highly probable" that there was a "planetary body" in this system. In the 1890s, Thomas J. J. See of the University of Chicago and the United States Naval Observatory stated that the orbital anomalies proved the existence of a dark body in the 70 Ophiuchi system with a 36-year period around one of the stars. However, Forest Ray Moulton published a paper proving that a three-body system with those orbital parameters would be highly unstable. During the 1950s and 1960s, Peter van de Kamp of Swarthmore College made another prominent series of detection claims, this time for planets orbiting Barnard's Star. Astronomers now generally regard all the early reports of detection as erroneous.
In 1991 Andrew Lyne, M. Bailes and S. L. Shemar claimed to have discovered a pulsar planet in orbit around PSR 1829-10, using pulsar timing variations. The claim briefly received intense attention, but Lyne and his team soon retracted it.
The Extrasolar Planets Encyclopedia lists several thousand confirmed exoplanets, including a few that were confirmations of controversial claims from the late 1980s. The first published discovery to receive subsequent confirmation was made in 1988 by the Canadian astronomers Bruce Campbell, G. A. H. Walker, and Stephenson Yang of the University of Victoria and the University of British Columbia. Although they were cautious about claiming a planetary detection, their radial-velocity observations suggested that a planet orbits the star Gamma Cephei. Partly because the observations were at the very limits of instrumental capabilities at the time, astronomers remained skeptical for several years about this and other similar observations. It was thought some of the apparent planets might instead have been brown dwarfs, objects intermediate in mass between planets and stars. In 1990, additional observations were published that supported the existence of the planet orbiting Gamma Cephei, but subsequent work in 1992 again raised serious doubts. Finally, in 2003, improved techniques allowed the planet's existence to be confirmed.
On 9 January 1992, radio astronomers Aleksander Wolszczan and Dale Frail announced the discovery of two planets orbiting the pulsar PSR 1257+12. This discovery was confirmed, and is generally considered to be the first definitive detection of exoplanets. Follow-up observations solidified these results, and confirmation of a third planet in 1994 revived the topic in the popular press. These pulsar planets are thought to have formed from the unusual remnants of the supernova that produced the pulsar, in a second round of planet formation, or else to be the remaining rocky cores of gas giants that somehow survived the supernova and then decayed into their current orbits.
On 6 October 1995, Michel Mayor and Didier Queloz of the University of Geneva announced the first definitive detection of an exoplanet orbiting a main-sequence star, nearby G-type star 51 Pegasi. This discovery, made at the Observatoire de Haute-Provence, ushered in the modern era of exoplanetary discovery, and was recognized by a share of the 2019 Nobel Prize in Physics. Technological advances, most notably in high-resolution spectroscopy, led to the rapid detection of many new exoplanets: astronomers could detect exoplanets indirectly by measuring their gravitational influence on the motion of their host stars. More extrasolar planets were later detected by observing the variation in a star's apparent luminosity as an orbiting planet transited in front of it.
Initially, most known exoplanets were massive planets that orbited very close to their parent stars. Astronomers were surprised by these "hot Jupiters", because theories of planetary formation had indicated that giant planets should only form at large distances from stars. But eventually more planets of other sorts were found, and it is now clear that hot Jupiters make up the minority of exoplanets. In 1999, Upsilon Andromedae became the first main-sequence star known to have multiple planets. Kepler-16 contains the first discovered planet that orbits around a binary main-sequence star system.
On 26 February 2014, NASA announced the discovery of 715 newly verified exoplanets around 305 stars by the "Kepler" Space Telescope. These exoplanets were checked using a statistical technique called "verification by multiplicity". Before these results, most confirmed planets were gas giants comparable in size to Jupiter or larger because they are more easily detected, but the "Kepler" planets are mostly between the size of Neptune and the size of Earth.
On 23 July 2015, NASA announced Kepler-452b, a near-Earth-size planet orbiting in the habitable zone of a G2-type star.
On 6 September 2018, NASA announced the discovery of an exoplanet about 145 light-years from Earth in the constellation Virgo. This exoplanet, Wolf 503b, is twice the size of Earth and orbits a type of star known as an orange dwarf. Wolf 503b completes one orbit in only six days because it is very close to its star. It is the only exoplanet that large found near the so-called Fulton gap: the observation, first made in 2017, that planets within a certain size range are unusually rare. This makes planets like Wolf 503b a new field of study for astronomers, who are still investigating whether planets found in the Fulton gap are gaseous or rocky.
In January 2020, scientists announced the discovery of TOI 700 d, the first Earth-sized planet in the habitable zone detected by TESS.
As of January 2020, NASA's "Kepler" and TESS missions had identified 4374 planetary candidates yet to be confirmed, several of them being nearly Earth-sized and located in the habitable zone, some around Sun-like stars.
About 97% of all the confirmed exoplanets have been discovered by indirect techniques of detection, mainly by radial velocity measurements and transit monitoring techniques. Recently the techniques of singular optics have been applied in the search for exoplanets.
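For the transit technique, the basic observable reduces to a textbook relation: the fractional dip in starlight is the square of the planet-to-star radius ratio. A minimal sketch, assuming a Sun-like host:

    R_SUN, R_JUPITER, R_EARTH = 6.957e8, 7.149e7, 6.371e6  # radii in metres

    def transit_depth(r_planet, r_star=R_SUN):
        # fractional drop in stellar flux while the planet crosses the disk
        return (r_planet / r_star) ** 2

    print(f"Jupiter-size: {transit_depth(R_JUPITER):.4%}")  # ~1.06%
    print(f"Earth-size:   {transit_depth(R_EARTH):.4%}")    # ~0.0084%

The Earth-size signal is more than a hundred times shallower, consistent with the detection bias toward large, close-in planets noted above.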
Planets may form within a few to tens (or more) of millions of years of their star forming.
The planets of the Solar System can only be observed in their current state, but observations of planetary systems of varying ages allow us to observe planets at different stages of evolution. Available observations range from young protoplanetary disks where planets are still forming to planetary systems more than 10 Gyr old. When planets form in a gaseous protoplanetary disk, they accrete hydrogen/helium envelopes. These envelopes cool and contract over time and, depending on the mass of the planet, some or all of the hydrogen/helium is eventually lost to space. This means that even terrestrial planets may start off with large radii if they form early enough. An example is Kepler-51b, which has only about twice the mass of Earth but is almost the size of Saturn, which is a hundred times the mass of Earth. Kepler-51b is quite young, at a few hundred million years old.
There is at least one planet on average per star.
About 1 in 5 Sun-like stars have an "Earth-sized" planet in the habitable zone.
Most known exoplanets orbit stars roughly similar to the Sun, i.e. main-sequence stars of spectral categories F, G, or K. Lower-mass stars (red dwarfs, of spectral category M) are less likely to have planets massive enough to be detected by the radial-velocity method. Despite this, several tens of planets around red dwarfs have been discovered by the "Kepler" spacecraft, which uses the transit method to detect smaller planets.
Using data from "Kepler", a correlation has been found between the metallicity of a star and the probability that the star hosts planets. Stars with higher metallicity are more likely to have planets, especially giant planets, than stars with lower metallicity.
Some planets orbit one member of a binary star system, and several circumbinary planets have been discovered which orbit both members of a binary star. A few planets in triple star systems are known, and one in the quadruple system Kepler-64.
In 2013 the color of an exoplanet was determined for the first time. The best-fit albedo measurements of HD 189733b suggest that it is deep dark blue. Later that same year, the colors of several other exoplanets were determined, including GJ 504 b which visually has a magenta color, and Kappa Andromedae b, which if seen up close would appear reddish in color.
Helium planets are expected to be white or grey in appearance.
The apparent brightness (apparent magnitude) of a planet depends on how far away the observer is, how reflective the planet is (albedo), and how much light the planet receives from its star, which depends on how far the planet is from the star and how bright the star is. So, a planet with a low albedo that is close to its star can appear brighter than a planet with high albedo that is far from the star.
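A crude inverse-square sketch makes that trade-off concrete (toy values throughout; it ignores phase angle and scattering, and the radius, albedos and distances below are invented for illustration):

    import math

    def reflected_flux(l_star, albedo, r_planet, d_star, d_observer):
        # starlight reaching the planet falls off as 1/d_star**2; the
        # intercepted, reflected fraction then spreads over 4*pi*d_observer**2
        flux_at_planet = l_star / (4 * math.pi * d_star ** 2)
        reflected_power = flux_at_planet * albedo * math.pi * r_planet ** 2
        return reflected_power / (4 * math.pi * d_observer ** 2)

    L_SUN, AU, LY = 3.828e26, 1.496e11, 9.461e15  # SI units
    dark_close = reflected_flux(L_SUN, 0.1, 7e7, 0.05 * AU, 10 * LY)
    bright_far = reflected_flux(L_SUN, 0.5, 7e7, 5.0 * AU, 10 * LY)
    print(dark_close / bright_far)  # ~2000: the dark close-in planet appears brighter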
The darkest known planet in terms of geometric albedo is TrES-2b, a hot Jupiter that reflects less than 1% of the light from its star, making it less reflective than coal or black acrylic paint. Hot Jupiters are expected to be quite dark due to sodium and potassium in their atmospheres but it is not known why TrES-2b is so dark—it could be due to an unknown chemical compound.
For gas giants, geometric albedo generally decreases with increasing metallicity or atmospheric temperature unless there are clouds to modify this effect. Increased cloud-column depth increases the albedo at optical wavelengths, but decreases it at some infrared wavelengths. Optical albedo increases with age, because older planets have higher cloud-column depths. Optical albedo decreases with increasing mass, because higher-mass giant planets have higher surface gravities, which produces lower cloud-column depths. Also, elliptical orbits can cause major fluctuations in atmospheric composition, which can have a significant effect.
There is more thermal emission than reflection at some near-infrared wavelengths for massive and/or young gas giants. So, although optical brightness is fully phase-dependent, this is not always the case in the near infrared.
Temperatures of gas giants decrease over time and with distance from their star. Lowering the temperature increases optical albedo even without clouds. At a sufficiently low temperature, water clouds form, which further increase optical albedo. At even lower temperatures, ammonia clouds form, resulting in the highest albedos at most optical and near-infrared wavelengths.
In 2014, a magnetic field around HD 209458 b was inferred from the way hydrogen was evaporating from the planet. It is the first (indirect) detection of a magnetic field on an exoplanet. The magnetic field is estimated to be about one tenth as strong as Jupiter's.
Exoplanets' magnetic fields may be detectable by their auroral radio emissions with sufficiently sensitive radio telescopes such as LOFAR. The radio emissions could enable determination of the rotation rate of the interior of an exoplanet, and may yield a more accurate way to measure exoplanet rotation than by examining the motion of clouds.
Earth's magnetic field results from its flowing liquid metallic core, but in massive super-Earths with high pressure, different compounds may form which do not match those created under terrestrial conditions. Compounds may form with greater viscosities and high melting temperatures which could prevent the interiors from separating into different layers and so result in undifferentiated coreless mantles. Forms of magnesium oxide such as MgSi3O12 could be a liquid metal at the pressures and temperatures found in super-Earths and could generate a magnetic field in the mantles of super-Earths.
Hot Jupiters have been observed to have a larger radius than expected. This could be caused by the interaction between the stellar wind and the planet's magnetosphere creating an electric current through the planet that heats it up, causing it to expand. The more magnetically active a star is, the greater the stellar wind and the larger the electric current, leading to more heating and expansion of the planet. This theory matches the observation that stellar activity is correlated with inflated planetary radii.
In August 2018, scientists announced the transformation of gaseous deuterium into a liquid metallic form. This may help researchers better understand giant gas planets, such as Jupiter, Saturn and related exoplanets, since such planets are thought to contain a lot of liquid metallic hydrogen, which may be responsible for their observed powerful magnetic fields.
Although scientists previously announced that the magnetic fields of close-in exoplanets may cause increased stellar flares and starspots on their host stars, in 2019 this claim was demonstrated to be false in the HD 189733 system. The failure to detect "star-planet interactions" in the well-studied HD 189733 system calls other related claims of the effect into question.
In 2019, the strength of the surface magnetic fields of four hot Jupiters was estimated to range between 20 and 120 gauss, compared to Jupiter's surface magnetic field of 4.3 gauss.
In 2007, two independent teams of researchers came to opposing conclusions about the likelihood of plate tectonics on larger super-Earths, with one team saying that plate tectonics would be episodic or stagnant and the other team saying that plate tectonics is very likely on super-Earths, even if the planet is dry.
If super-Earths have more than 80 times as much water as Earth then they become ocean planets with all land completely submerged. However, if there is less water than this limit, then the deep water cycle will move enough water between the oceans and mantle to allow continents to exist.
Large surface temperature variations on 55 Cancri e have been attributed to possible volcanic activity releasing large clouds of dust which blanket the planet and block thermal emissions.
The star 1SWASP J140747.93-394542.6 is orbited by an object that is circled by a ring system much larger than Saturn's rings. However, the mass of the object is not known; it could be a brown dwarf or low-mass star instead of a planet.
The brightness of optical images of Fomalhaut b could be due to starlight reflecting off a circumplanetary ring system with a radius between 20 and 40 times Jupiter's radius, about the size of the orbits of the Galilean moons.
The rings of the Solar System's gas giants are aligned with their planet's equator. However, for exoplanets that orbit close to their star, tidal forces from the star would lead to the outermost rings of a planet being aligned with the planet's orbital plane around the star. A planet's innermost rings would still be aligned with the planet's equator so that if the planet has a tilted rotational axis, then the different alignments between the inner and outer rings would create a warped ring system.
In December 2013 a candidate exomoon of a rogue planet was announced. On 3 October 2018, evidence suggesting a large exomoon orbiting Kepler-1625b was reported.
Atmospheres have been detected around several exoplanets. The first to be observed was HD 209458 b in 2001.
In May 2017, glints of light from Earth, seen as twinkling from an orbiting satellite a million miles away, were found to be reflected light from ice crystals in the atmosphere. The technology used to determine this may be useful in studying the atmospheres of distant worlds, including those of exoplanets.
KIC 12557548 b is a small rocky planet, very close to its star, that is evaporating and leaving a trailing tail of cloud and dust like a comet. The dust could be ash erupting from volcanoes and escaping due to the small planet's low surface gravity, or it could be metals vaporized by the high temperatures of being so close to the star, with the metal vapor then condensing into dust.
In June 2015, scientists reported that the atmosphere of GJ 436 b was evaporating, resulting in a giant cloud around the planet and, due to radiation from the host star, a long trailing tail.
Tidally locked planets in a 1:1 spin-orbit resonance would have their star always shining directly overhead on one spot which would be hot with the opposite hemisphere receiving no light and being freezing cold. Such a planet could resemble an eyeball with the hotspot being the pupil. Planets with an eccentric orbit could be locked in other resonances. 3:2 and 5:2 resonances would result in a double-eyeball pattern with hotspots in both eastern and western hemispheres. Planets with both an eccentric orbit and a tilted axis of rotation would have more complicated insolation patterns.
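A minimal sketch of the 1:1 "eyeball" case, assuming instantaneous flux proportional to the cosine of the angular distance from the substellar point and zero beyond the terminator:

    import math

    def insolation(deg_from_substellar):
        # relative stellar flux on a 1:1 tidally locked planet (cosine law)
        return max(0.0, math.cos(math.radians(deg_from_substellar)))

    for angle in (0, 45, 90, 135, 180):
        print(angle, round(insolation(angle), 2))
    # 0 -> 1.0 (the "pupil"), 45 -> 0.71, 90 and beyond -> 0.0 (dark far side)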
As more planets are discovered, the field of exoplanetology continues to grow into a deeper study of extrasolar worlds, and will ultimately tackle the prospect of life on planets beyond the Solar System. At cosmic distances, life can only be detected if it is developed at a planetary scale and has strongly modified the planetary environment, in such a way that the modifications cannot be explained by classical physico-chemical processes (out-of-equilibrium processes). For example, molecular oxygen (O2) in the atmosphere of Earth is a result of photosynthesis by living plants and many kinds of microorganisms, so it can be used as an indication of life on exoplanets, although small amounts of oxygen could also be produced by non-biological means. Furthermore, a potentially habitable planet must orbit a stable star at a distance within which planetary-mass objects with sufficient atmospheric pressure can support liquid water at their surfaces. | https://en.wikipedia.org/wiki?curid=9763 |
Emma Goldman
Emma Goldman (June 27, 1869 – May 14, 1940) was an anarchist political activist and writer. She played a pivotal role in the development of anarchist political philosophy in North America and Europe in the first half of the 20th century.
Born in Kaunas, Russian Empire (now Lithuania) to a Jewish family, Goldman emigrated to the United States in 1885. Attracted to anarchism after the Chicago Haymarket affair, Goldman became a writer and a renowned lecturer on anarchist philosophy, women's rights, and social issues, attracting crowds of thousands. She and anarchist writer Alexander Berkman, her lover and lifelong friend, planned to assassinate industrialist and financier Henry Clay Frick as an act of propaganda of the deed. Frick survived the attempt on his life in 1892, and Berkman was sentenced to 22 years in prison. Goldman was imprisoned several times in the years that followed, for "inciting to riot" and illegally distributing information about birth control. In 1906, Goldman founded the anarchist journal "Mother Earth".
In 1917, Goldman and Berkman were sentenced to two years in jail for conspiring to "induce persons not to register" for the newly instated draft. After their release from prison, they were arrested—along with 248 others—and deported to Russia. Initially supportive of that country's October Revolution that brought the Bolsheviks to power, Goldman changed her opinion in the wake of the Kronstadt rebellion; she denounced the Soviet Union for its violent repression of independent voices. She left the Soviet Union and in 1923 published a book about her experiences, "My Disillusionment in Russia". While living in England, Canada, and France, she wrote an autobiography called "Living My Life". It was published in two volumes, in 1931 and 1935. After the outbreak of the Spanish Civil War, Goldman traveled to Spain to support the anarchist revolution there. She died in Toronto, Canada, on May 14, 1940, aged 70.
During her life, Goldman was lionized as a freethinking "rebel woman" by admirers, and denounced by detractors as an advocate of politically motivated murder and violent revolution. Her writing and lectures spanned a wide variety of issues, including prisons, atheism, freedom of speech, militarism, capitalism, marriage, free love, and homosexuality. Although she distanced herself from first-wave feminism and its efforts toward women's suffrage, she developed new ways of incorporating gender politics into anarchism. After decades of obscurity, Goldman gained iconic status in the 1970s by a revival of interest in her life, when feminist and anarchist scholars rekindled popular interest.
Emma Goldman was born into an Orthodox Jewish family in Kovno in the Russian Empire, which is now known as Kaunas in Lithuania. Goldman's mother Taube Bienowitch had been married before to a man with whom she had two daughters—Helena in 1860 and Lena in 1862. When her first husband died of tuberculosis, Taube was devastated. Goldman later wrote: "Whatever love she had had died with the young man to whom she had been married at the age of fifteen."
Taube's second marriage was arranged by her family and, as Goldman puts it, "mismated from the first". Her second husband, Abraham Goldman, invested Taube's inheritance in a business that quickly failed. The ensuing hardship, combined with the emotional distance of husband and wife, made the household a tense place for the children. When Taube became pregnant, Abraham hoped desperately for a son; a daughter, he believed, would be one more sign of failure. They eventually had three sons, but their first child was Emma.
Emma Goldman was born on June 27, 1869. Her father used violence to punish his children, beating them when they disobeyed him. He used a whip on Emma, the most rebellious of them. Her mother provided scarce comfort, rarely calling on Abraham to tone down his beatings. Goldman later speculated that her father's furious temper was at least partly a result of sexual frustration.
Goldman's relationships with her elder half-sisters, Helena and Lena, were a study in contrasts. Helena, the oldest, provided the comfort the children lacked from their mother; she filled Goldman's childhood with "whatever joy it had". Lena, however, was distant and uncharitable. The three sisters were joined by brothers Louis (who died at the age of six), Herman (born in 1872), and Moishe (born in 1879).
When Emma was a young girl, the Goldman family moved to the village of Papilė, where her father ran an inn. While her sisters worked, she became friends with a servant named Petrushka, who excited her "first erotic sensations". Later in Papilė she witnessed a peasant being whipped with a knout in the street. This event traumatized her and contributed to her lifelong distaste for violent authority.
At the age of seven, Goldman moved with her family to the Prussian city of Königsberg (then part of the German Empire), and she was enrolled in a "Realschule". One teacher punished disobedient students—targeting Goldman in particular—by beating their hands with a ruler. Another teacher tried to molest his female students and was fired when Goldman fought back. She found a sympathetic mentor in her German-language teacher, who loaned her books and took her to an opera. A passionate student, Goldman passed the exam for admission into a gymnasium, but her religion teacher refused to provide a certificate of good behavior and she was unable to attend.
The family moved to the Russian capital of Saint Petersburg, where her father opened one unsuccessful store after another. Their poverty forced the children to work, and Goldman took an assortment of jobs, including one in a corset shop. As a teenager Goldman begged her father to allow her to return to school, but instead he threw her French book into the fire and shouted: "Girls do not have to learn much! All a Jewish daughter needs to know is how to prepare gefilte fish, cut noodles fine, and give the man plenty of children."
Goldman nevertheless pursued an education on her own, and soon began to study the political turmoil around her, particularly the Nihilists responsible for assassinating Alexander II of Russia. The ensuing turmoil intrigued Goldman, although she did not fully understand it at the time. When she read Nikolai Chernyshevsky's novel "What Is to Be Done?" (1863), she found a role model in the protagonist Vera, who adopts a Nihilist philosophy, escapes her repressive family, and lives freely while organizing a sewing cooperative. The book enthralled Goldman and remained a source of inspiration throughout her life.
Her father, meanwhile, continued to insist on a domestic future for her, and he tried to arrange for her to be married at the age of fifteen. They fought about the issue constantly; he complained that she was becoming a "loose" woman, and she insisted that she would marry for love alone. At the corset shop, she was forced to fend off unwelcome advances from Russian officers and other men. One persistent suitor took her into a hotel room and committed what Goldman described as "violent contact"; two biographers call it rape. She was stunned by the experience, overcome by "shock at the discovery that the contact between man and woman could be so brutal and painful." Goldman felt that the encounter forever soured her interactions with men.
In 1885, her sister Helena made plans to move to New York in the United States to join her sister Lena and her husband. Goldman wanted to join her sister, but their father refused to allow it. Despite Helena's offer to pay for the trip, Abraham turned a deaf ear to their pleas. Desperate, Goldman threatened to throw herself into the Neva River if she could not go. Their father finally agreed. On December 29, 1885, Helena and Emma arrived at New York City's Castle Garden, the entry for immigrants.
They settled upstate, living in the Rochester home which Lena had made with her husband Samuel. Fleeing the rising antisemitism of Saint Petersburg, their parents and brothers joined them a year later. Goldman began working as a seamstress, sewing overcoats for more than ten hours a day, earning two and a half dollars a week. She asked for a raise and was denied; she quit and took work at a smaller shop nearby.
At her new job, Goldman met a fellow worker named Jacob Kershner, who shared her love for books, dancing, and traveling, as well as her frustration with the monotony of factory work. After four months, they married in February 1887. Once he moved in with Goldman's family, however, their relationship faltered. On their wedding night she discovered that he was impotent; they became emotionally and physically distant. Before long he became jealous and suspicious. She, meanwhile, was becoming more engaged with the political turmoil around her—particularly the aftermath of executions related to the 1886 Haymarket affair in Chicago and the anti-authoritarian political philosophy of anarchism.
Less than a year after the wedding, the couple were divorced; Kershner begged Goldman to return and threatened to poison himself if she did not. They reunited, but after three months she left once again. Her parents considered her behavior "loose" and refused to allow Goldman into their home. Carrying her sewing machine in one hand and a bag with five dollars in the other, she left Rochester and headed southeast to New York City.
On her first day in the city, Goldman met two men who greatly changed her life. At Sachs's Café, a gathering place for radicals, she was introduced to Alexander Berkman, an anarchist who invited her to a public speech that evening. They went to hear Johann Most, editor of a radical publication called "Freiheit" and an advocate of "propaganda of the deed"—the use of violence to instigate change. She was impressed by his fiery oration, and Most took her under his wing, training her in methods of public speaking. He encouraged her vigorously, telling her that she was "to take my place when I am gone." One of her first public talks in support of "the Cause" was in Rochester. After convincing Helena not to tell their parents of her speech, Goldman found her mind a blank once on stage. Suddenly, as she later wrote:
Excited by the experience, Goldman refined her public persona during subsequent engagements. Quickly, however, she found herself arguing with Most over her independence. After a momentous speech in Cleveland, she felt as though she had become "a parrot repeating Most's views" and resolved to express herself on the stage. When she returned to New York, Most became furious and told her: "Who is not with me is against me!" She left "Freiheit" and joined another publication, "Die Autonomie".
Meanwhile, Goldman had begun a friendship with Berkman, whom she affectionately called Sasha. Before long they became lovers and moved into a communal apartment with his cousin Modest "Fedya" Stein and Goldman's friend, Helen Minkin, on 42nd Street. Although their relationship had numerous difficulties, Goldman and Berkman would share a close bond for decades, united by their anarchist principles and commitment to personal equality.
In 1892, Goldman joined with Berkman and Stein in opening an ice cream shop in Worcester, Massachusetts. After a few months of operating the shop, however, Goldman and Berkman were diverted by becoming involved in the Homestead Strike in western Pennsylvania near Pittsburgh.
Berkman and Goldman came together through the Homestead Strike. In June 1892, a steel plant in Homestead, Pennsylvania owned by Andrew Carnegie became the focus of national attention when talks between the Carnegie Steel Company and the Amalgamated Association of Iron and Steel Workers (AA) broke down. The factory's manager was Henry Clay Frick, a fierce opponent of the union. When a final round of talks failed at the end of June, management closed the plant and locked out the workers, who immediately went on strike. Strikebreakers were brought in and the company hired Pinkerton guards to protect them. On July 6, a fight broke out between 300 Pinkerton guards and a crowd of armed union workers. During the twelve-hour gunfight, seven guards and nine strikers were killed.
When a majority of the nation's newspapers expressed support of the strikers, Goldman and Berkman resolved to assassinate Frick, an action they expected would inspire the workers to revolt against the capitalist system. Berkman chose to carry out the assassination, and ordered Goldman to stay behind in order to explain his motives after he went to jail. He would be in charge of "the deed"; she of the associated propaganda. Berkman tried and failed to make a bomb, then set off for Pittsburgh to buy a gun and a suit of decent clothes.
Goldman, meanwhile, decided to help fund the scheme through prostitution. Remembering the character of Sonya in Fyodor Dostoevsky's novel "Crime and Punishment" (1866), she mused: "She had become a prostitute in order to support her little brothers and sisters...Sensitive Sonya could sell her body; why not I?" Once on the street, Goldman caught the eye of a man who took her into a saloon, bought her a beer, gave her ten dollars, informed her she did not have "the knack," and told her to quit the business. She was "too astounded for speech". She wrote to Helena, claiming illness, and asked her for fifteen dollars.
On July 23, Berkman gained access to Frick's office while carrying a concealed handgun; he shot Frick three times, and stabbed him in the leg. A group of workers—far from joining in his "attentat"—beat Berkman unconscious, and he was carried away by the police. Berkman was convicted of attempted murder and sentenced to 22 years in prison. Goldman suffered during his long absence.
Convinced Goldman was involved in the plot, police raided her apartment. Although they found no evidence, they pressured her landlord into evicting her. Worse, the "attentat" had failed to rouse the masses: workers and anarchists alike condemned Berkman's action. Johann Most, their former mentor, lashed out at Berkman and the assassination attempt. Furious at these attacks, Goldman brought a toy horsewhip to a public lecture and demanded, onstage, that Most explain his betrayal. He dismissed her, whereupon she struck him with the whip, broke it on her knee, and hurled the pieces at him. She later regretted her assault, confiding to a friend: "At the age of twenty-three, one does not reason."
When the Panic of 1893 struck in the following year, the United States suffered one of its worst economic crises. By year's end, the unemployment rate was higher than 20%, and "hunger demonstrations" sometimes gave way to riots. Goldman began speaking to crowds of frustrated men and women in New York City. On August 21, she spoke to a crowd of nearly 3,000 people in Union Square, where she encouraged unemployed workers to take immediate action. Her exact words are unclear: undercover agents insist she ordered the crowd to "take everything ... by force". But Goldman later recounted this message: "Well then, demonstrate before the palaces of the rich; demand work. If they do not give you work, demand bread. If they deny you both, take bread." Later in court, Detective-Sergeant Charles Jacobs offered yet another version of her speech.
A week later, Goldman was arrested in Philadelphia and returned to New York City for trial, charged with "inciting to riot". During the train ride, Jacobs offered to drop the charges against her if she would inform on other radicals in the area. She responded by throwing a glass of ice water in his face. As she awaited trial, Goldman was visited by Nellie Bly, a reporter for the "New York World." She spent two hours talking to Goldman and wrote a positive article about the woman she described as a "modern Joan of Arc."
Despite this positive publicity, the jury was persuaded by Jacobs' testimony and frightened by Goldman's politics. The assistant District Attorney questioned Goldman about her anarchism, as well as her atheism; the judge spoke of her as "a dangerous woman". She was sentenced to one year in the Blackwell's Island Penitentiary. Once inside she suffered an attack of rheumatism and was sent to the infirmary; there she befriended a visiting doctor and began studying medicine. She also read dozens of books, including works by the American activist-writers Ralph Waldo Emerson and Henry David Thoreau; novelist Nathaniel Hawthorne; poet Walt Whitman, and philosopher John Stuart Mill. When Goldman was released after ten months, a raucous crowd of nearly 3,000 people greeted her at the Thalia Theater in New York City. She soon became swamped with requests for interviews and lectures.
To make money, Goldman decided to pursue the medical work she had studied in prison. However, her preferred fields of specialization—midwifery and massage—were not available to nursing students in the US. She sailed to Europe, lecturing in London, Glasgow, and Edinburgh. She met with renowned anarchists such as Errico Malatesta, Louise Michel, and Peter Kropotkin. In Vienna, she received two diplomas for midwifery and put them immediately to use back in the US.
Alternating between lectures and midwifery, Goldman conducted the first cross-country tour by an anarchist speaker. In November 1899 she returned to Europe to speak, where she met the Czech anarchist Hippolyte Havel in London. They went together to France and helped organize the 1900 International Anarchist Congress on the outskirts of Paris. Afterward Havel immigrated to the United States, traveling with her to Chicago. They shared a residence there with friends of Goldman.
On September 6, 1901, Leon Czolgosz, an unemployed factory worker and registered Republican with a history of mental illness, shot US President William McKinley twice during a public speaking event in Buffalo, New York. McKinley was hit in the breastbone and stomach, and died eight days later. Czolgosz was arrested and interrogated around the clock. During interrogation he claimed to be an anarchist and said he had been inspired to act after attending a speech by Goldman. The authorities used this as a pretext to charge Goldman with planning McKinley's assassination. They tracked her to the Chicago residence she shared with Hippolyte Havel and with Mary and Abe Isaak, an anarchist couple, and their family. Goldman was arrested, along with Isaak, Havel, and ten other anarchists.
Earlier, Czolgosz had tried but failed to become friends with Goldman and her companions. During a talk in Cleveland, Czolgosz had approached Goldman and asked her advice on which books he should read. In July 1901, he had appeared at the Isaak house, asking a series of unusual questions. The Isaaks assumed he was an infiltrator, like the police agents routinely sent to spy on radical groups, and remained distant from him; Abe Isaak sent a notice to associates warning of "another spy".
Although Czolgosz repeatedly denied Goldman's involvement, the police held her in close custody, subjecting her to what she called the "third degree". She explained her housemates' distrust of Czolgosz, and the police finally recognized that she had not had any significant contact with the attacker. No evidence was found linking Goldman to the attack, and she was released after two weeks of detention. Before McKinley died, Goldman offered to provide nursing care, referring to him as "merely a human being". Czolgosz, despite considerable evidence of mental illness, was convicted of murder and executed.
Throughout her detention and after her release, Goldman steadfastly refused to condemn Czolgosz's actions, standing virtually alone in doing so. Friends and supporters—including Berkman—urged her to quit his cause. But Goldman defended Czolgosz as a "supersensitive being" and chastised other anarchists for abandoning him. She was vilified in the press as the "high priestess of anarchy", while many newspapers declared the anarchist movement responsible for the murder. In the wake of these events, socialism gained support over anarchism among US radicals. McKinley's successor, Theodore Roosevelt, declared his intent to crack down "not only against anarchists, but against all active and passive sympathizers with anarchists".
After Czolgosz was executed, Goldman withdrew from the world. Scorned by her fellow anarchists, vilified by the press, and separated from her love, Berkman, she retreated into anonymity and nursing. "It was bitter and hard to face life anew," she wrote later.
Using the name E. G. Smith, she left public life and took on a series of private nursing jobs. When the US Congress passed the Anarchist Exclusion Act (1903), however, a new wave of activism rose to oppose it, and Goldman was pulled back into the movement. A coalition of people and organizations across the left end of the political spectrum opposed the law on grounds that it violated freedom of speech, and she had the nation's ear once again.
After an English anarchist named John Turner was arrested under the Anarchist Exclusion Act and threatened with deportation, Goldman joined forces with the Free Speech League to champion his cause. The league enlisted the aid of noted attorneys Clarence Darrow and Edgar Lee Masters, who took Turner's case to the US Supreme Court. Although Turner and the League lost, Goldman considered it a victory of propaganda. She had returned to anarchist activism, but it was taking its toll on her. "I never felt so weighed down," she wrote to Berkman. "I fear I am forever doomed to remain public property and to have my life worn out through the care for the lives of others."
In 1906, Goldman decided to start a publication, "a place of expression for the young idealists in arts and letters". "Mother Earth" was staffed by a cadre of radical activists, including Hippolyte Havel, Max Baginski, and Leonard Abbott. In addition to publishing original works by its editors and anarchists around the world, "Mother Earth" reprinted selections from a variety of writers. These included the French philosopher Pierre-Joseph Proudhon, Russian anarchist Peter Kropotkin, German philosopher Friedrich Nietzsche, and British writer Mary Wollstonecraft. Goldman wrote frequently about anarchism, politics, labor issues, atheism, sexuality, and feminism, and was the first editor of the magazine.
On May 18 of the same year, Alexander Berkman was released from prison. Carrying a bouquet of roses, Goldman met him on the train platform and found herself "seized by terror and pity" as she beheld his gaunt, pale form. Neither was able to speak; they returned to her home in silence. For weeks, he struggled to readjust to life on the outside. A speaking tour ended in failure, and in Cleveland he purchased a revolver with the intent of killing himself. He returned to New York, however, and learned that Goldman had been arrested with a group of activists meeting to reflect on Czolgosz. Invigorated anew by this violation of freedom of assembly, he declared, "My resurrection has come!" and set about securing their release.
Berkman took the helm of "Mother Earth" in 1907, while Goldman toured the country to raise funds to keep it operating. Editing the magazine was a revitalizing experience for Berkman. But his relationship with Goldman faltered, and he had an affair with a 15-year-old anarchist named Becky Edelsohn. Goldman was pained by his rejection of her, but considered it a consequence of his prison experience. Later that year she served as a delegate from the US to the International Anarchist Congress of Amsterdam. Anarchists and syndicalists from around the world gathered to sort out the tension between the two ideologies, but no decisive agreement was reached. Goldman returned to the US and continued speaking to large audiences.
For the next ten years, Goldman traveled around the country nonstop, delivering lectures and agitating for anarchism. The coalitions formed in opposition to the Anarchist Exclusion Act had given her an appreciation for reaching out to those of other political positions. When the US Justice Department sent spies to observe, they reported the meetings as "packed". Writers, journalists, artists, judges, and workers from across the spectrum spoke of her "magnetic power", her "convincing presence", her "force, eloquence, and fire".
In the spring of 1908, Goldman met and fell in love with Ben Reitman, the so-called "Hobo doctor." Having grown up in Chicago's Tenderloin District, Reitman spent several years as a drifter before earning a medical degree from the College of Physicians and Surgeons of Chicago. As a doctor, he treated people suffering from poverty and illness, particularly venereal diseases. He and Goldman began an affair. They shared a commitment to free love and Reitman took a variety of lovers, but Goldman did not. She tried to reconcile her feelings of jealousy with a belief in freedom of the heart, but found it difficult.
Two years later, Goldman began feeling frustrated with lecture audiences. She yearned to "reach the few who really want to learn, rather than the many who come to be amused". She collected a series of speeches and items she had written for "Mother Earth" and published a book titled "Anarchism and Other Essays." Covering a wide variety of topics, Goldman tried to represent "the mental and soul struggles of twenty-one years". In addition to a comprehensive look at anarchism and its criticisms, the book includes essays on patriotism, women's suffrage, marriage, and prisons.
When Margaret Sanger, an advocate of access to contraception, coined the term "birth control" and disseminated information about various methods in the June 1914 issue of her magazine "The Woman Rebel," she received vigorous support from Goldman, who had already been active in efforts to increase access to birth control for several years. In 1916, Goldman was arrested for giving lessons in public on how to use contraceptives. Sanger, too, was arrested under the Comstock Law, which prohibited the dissemination of "obscene, lewd, or lascivious articles", a category that authorities interpreted as including information relating to birth control.
Although they later split from Sanger over charges of insufficient support, Goldman and Reitman distributed copies of Sanger's pamphlet "Family Limitation" (along with a similar essay of Reitman's). In 1915 Goldman conducted a nationwide speaking tour, in part to raise awareness about contraception options. Although the nation's attitude toward the topic seemed to be liberalizing, Goldman was arrested on February 11, 1916, as she was about to give another public lecture. Goldman was charged with violating the Comstock Law. Refusing to pay a $100 fine, Goldman spent two weeks in a prison workhouse, which she saw as an "opportunity" to reconnect with those rejected by society.
Although President Woodrow Wilson was re-elected in 1916 under the slogan "He kept us out of the war", at the start of his second term, he announced that Germany's continued deployment of unrestricted submarine warfare was sufficient cause for the US to enter the Great War. Shortly afterward, Congress passed the Selective Service Act of 1917, which required all males aged 21–30 to register for military conscription. Goldman saw the decision as an exercise in militarist aggression, driven by capitalism. She declared in "Mother Earth" her intent to resist conscription, and to oppose US involvement in the war.
To this end, she and Berkman organized the No Conscription League of New York, which proclaimed: "We oppose conscription because we are internationalists, antimilitarists, and opposed to all wars waged by capitalistic governments." The group became a vanguard for anti-draft activism, and chapters began to appear in other cities. When police began raiding the group's public events to find young men who had not registered for the draft, however, Goldman and others focused their efforts on distributing pamphlets and other writings. In the midst of the nation's patriotic fervor, many elements of the political left refused to support the League's efforts. The Women's Peace Party, for example, ceased its opposition to the war once the US entered it. The Socialist Party of America took an official stance against US involvement, but supported Wilson in most of his activities.
On June 15, 1917, Goldman and Berkman were arrested during a raid of their offices, in which authorities seized "a wagon load of anarchist records and propaganda". "The New York Times" reported that Goldman asked to change into a more appropriate outfit, and emerged in a gown of "royal purple". The pair were charged with conspiracy to "induce persons not to register" under the newly enacted Espionage Act, and were held on US$25,000 bail each. Defending herself and Berkman during their trial, Goldman invoked the First Amendment, asking how the government could claim to fight for democracy abroad while suppressing free speech at home:
We say that if America has entered the war to make the world safe for democracy, she must first make democracy safe in America. How else is the world to take America seriously, when democracy at home is daily being outraged, free speech suppressed, peaceable assemblies broken up by overbearing and brutal gangsters in uniform; when free press is curtailed and every independent opinion gagged? Verily, poor as we are in democracy, how can we give of it to the world?
The jury found Goldman and Berkman guilty. Judge Julius Marshuetz Mayer imposed the maximum sentence: two years' imprisonment, a $10,000 fine each, and the possibility of deportation after their release from prison. As she was transported to Missouri State Penitentiary, Goldman wrote to a friend: "Two years imprisonment for having made an uncompromising stand for one's ideal. Why that is a small price."
In prison, she was assigned to work as a seamstress, under the eye of a "miserable gutter-snipe of a 21-year-old boy paid to get results". She met the socialist Kate Richards O'Hare, who had also been imprisoned under the Espionage Act. Although they differed on political strategy—O'Hare believed in voting to achieve state power—the two women came together to agitate for better conditions among prisoners. Goldman also met and became friends with Gabriella Segata Antolini, an anarchist and follower of Luigi Galleani. Antolini had been arrested transporting a satchel filled with dynamite on a Chicago-bound train. She had refused to cooperate with authorities, and was sent to prison for 14 months. Working together to make life better for the other inmates, the three women became known as "The Trinity". Goldman was released on September 27, 1919.
Goldman and Berkman were released from prison during the United States' Red Scare of 1919–20, when public anxiety about wartime pro-German activities had expanded into a pervasive fear of Bolshevism and the prospect of an imminent radical revolution. It was a time of social unrest due to union organizing strikes and actions by activist immigrants. Attorney General Alexander Mitchell Palmer and J. Edgar Hoover, head of the US Department of Justice's General Intelligence Division (now the FBI), were intent on using the Anarchist Exclusion Act and its 1918 expansion to deport any non-citizens they could identify as advocates of anarchy or revolution. "Emma Goldman and Alexander Berkman," Hoover wrote while they were in prison, "are, beyond doubt, two of the most dangerous anarchists in this country and return to the community will result in undue harm."
At her deportation hearing on October 27, Goldman refused to answer questions about her beliefs, on the grounds that her American citizenship invalidated any attempt to deport her under the Anarchist Exclusion Act, which could be enforced only against non-citizens of the US. She presented a written statement instead: "Today so-called aliens are deported. Tomorrow native Americans will be banished. Already some patrioteers are suggesting that native American sons to whom democracy is a sacred ideal should be exiled." Louis Post at the Department of Labor, which had ultimate authority over deportation decisions, determined that the revocation of her husband Kershner's American citizenship in 1908 after his conviction had revoked hers as well. After initially promising a court fight, Goldman decided not to appeal his ruling.
The Labor Department included Goldman and Berkman among 249 aliens it deported "en masse," mostly people with only vague associations with radical groups, who had been swept up in government raids in November. "Buford", a ship the press nicknamed the "Soviet Ark," sailed from the Army's New York Port of Embarkation on December 21. Some 58 enlisted men and four officers provided security on the journey, and pistols were distributed to the crew. Most of the press approved enthusiastically. The Cleveland "Plain Dealer" wrote: "It is hoped and expected that other vessels, larger, more commodious, carrying similar cargoes, will follow in her wake." The ship landed her charges in Hanko, Finland on Saturday, January 17, 1920. Upon arrival in Finland, authorities there conducted the deportees to the Russian frontier under a flag of truce.
Goldman initially viewed the Bolshevik revolution in a positive light. She wrote in "Mother Earth" that despite its dependence on Communist government, it represented "the most fundamental, far-reaching and all-embracing principles of human freedom and of economic well-being". By the time she neared Europe, however, she expressed fears about what was to come. She was worried about the ongoing Russian Civil War and the possibility of being seized by anti-Bolshevik forces. The state, anti-capitalist though it was, also posed a threat. "I could never in my life work within the confines of the State," she wrote to her niece, "Bolshevist or otherwise."
She quickly discovered that her fears were justified. Days after returning to Petrograd (Saint Petersburg), she was shocked to hear a party official refer to free speech as a "bourgeois superstition". As she and Berkman traveled around the country, they found repression, mismanagement, and corruption instead of the equality and worker empowerment they had dreamed of. Those who questioned the government were demonized as counter-revolutionaries, and workers labored under severe conditions. They met with Vladimir Lenin, who assured them that government suppression of press liberties was justified. He told them: "There can be no free speech in a revolutionary period." Berkman was more willing to forgive the government's actions in the name of "historical necessity", but he eventually joined Goldman in opposing the Soviet state's authority.
In March 1921, strikes erupted in Petrograd when workers took to the streets demanding better food rations and more union autonomy. Goldman and Berkman felt a responsibility to support the strikers, stating: "To remain silent now is impossible, even criminal." The unrest spread to the port town of Kronstadt, where the government ordered a military response to suppress striking soldiers and sailors. In the Kronstadt rebellion, approximately 1,000 rebelling sailors and soldiers were killed and 2,000 more were arrested; many were later executed. In the wake of these events, Goldman and Berkman decided there was no future in the country for them. "More and more", she wrote, "we have come to the conclusion that we can do nothing here. And as we can not keep up a life of inactivity much longer we have decided to leave."
In December 1921, they left the country and went to the Latvian capital city of Riga. The US commissioner in that city wired officials in Washington DC, who began requesting information from other governments about the couple's activities. After a short trip to Stockholm, they moved to Berlin for several years; during this time Goldman agreed to write a series of articles about her time in Russia for Joseph Pulitzer's newspaper, the "New York World." These were later collected and published in book form as "My Disillusionment in Russia" (1923) and "My Further Disillusionment in Russia" (1924). The publishers added these titles to attract attention; Goldman protested, albeit in vain.
Goldman found it difficult to acclimate to the German leftist community in Berlin. Communists despised her outspokenness about Soviet repression; liberals derided her radicalism. While Berkman remained in Berlin helping Russian exiles, Goldman moved to London in September 1924. Upon her arrival, the novelist Rebecca West arranged a reception dinner for her, attended by philosopher Bertrand Russell, novelist H. G. Wells, and more than 200 other guests. When she spoke of her dissatisfaction with the Soviet government, the audience was shocked. Some left the gathering; others berated her for prematurely criticizing the Communist experiment. Later, in a letter, Russell declined to support her efforts at systemic change in the Soviet Union and ridiculed her anarchist idealism.
In 1925, the spectre of deportation loomed again, but a Scottish anarchist named James Colton offered to marry her and provide British citizenship. Although they were only distant acquaintances, she accepted and they were married on June 27, 1925. Her new status gave her peace of mind, and allowed her to travel to France and Canada. Life in London was stressful for Goldman; she wrote to Berkman: "I am awfully tired and so lonely and heartsick. It is a dreadful feeling to come back here from lectures and find not a kindred soul, no one who cares whether one is dead or alive." She worked on analytical studies of drama, expanding on the work she had published in 1914. But the audiences were "awful," and she never finished her second book on the subject.
Goldman traveled to Canada in 1927, just in time to receive news of the impending executions of Italian anarchists Nicola Sacco and Bartolomeo Vanzetti in Boston. Angered by the many irregularities of the case, she saw it as another travesty of justice in the US. She longed to join the mass demonstrations in Boston; memories of the Haymarket affair overwhelmed her, compounded by her isolation. "Then," she wrote, "I had my life before me to take up the cause for those killed. Now I have nothing."
In 1928, she began writing her autobiography, with the support of a group of American admirers, including journalist H. L. Mencken, poet Edna St. Vincent Millay, novelist Theodore Dreiser and art collector Peggy Guggenheim, who raised $4,000 for her. She secured a cottage in the French coastal city of Saint-Tropez and spent two years recounting her life. Berkman offered sharply critical feedback, which she eventually incorporated at the price of a strain on their relationship. Goldman intended the book, "Living My Life," as a single volume for a price the working class could afford (she urged no more than $5.00); her publisher Alfred A. Knopf, however, released it as two volumes sold together for $7.50. Goldman was furious, but unable to force a change. Due in large part to the Great Depression, sales were sluggish despite keen interest from libraries around the US. Critical reviews were generally enthusiastic; "The New York Times", "The New Yorker", and "Saturday Review of Literature" all listed it as one of the year's top non-fiction books.
In 1933, Goldman received permission to lecture in the United States under the condition that she speak only about drama and her autobiography—but not current political events. She returned to New York on February 2, 1934 to generally positive press coverage—except from Communist publications. Soon she was surrounded by admirers and friends, besieged with invitations to talks and interviews. Her visa expired in May, and she went to Toronto in order to file another request to visit the US. However, this second attempt was denied. She stayed in Canada, writing articles for US publications.
In February and March 1936, Berkman underwent a pair of prostate gland operations. Recuperating in Nice and cared for by his companion, Emmy Eckstein, he missed Goldman's sixty-seventh birthday in Saint-Tropez in June. She wrote in sadness, but he never read the letter; she received a call in the middle of the night that Berkman was in great distress. She left for Nice immediately but when she arrived that morning, Goldman found that he had shot himself and was in a nearly comatose paralysis. He died later that evening.
In July 1936, the Spanish Civil War started after an attempted "coup d'état" by parts of the Spanish Army against the government of the Second Spanish Republic. At the same time, the Spanish anarchists, fighting against the Nationalist forces, started an anarchist revolution. Goldman was invited to Barcelona and in an instant, as she wrote to her niece, "the crushing weight that was pressing down on my heart since Sasha's death left me as by magic". She was welcomed by the Confederación Nacional del Trabajo (CNT) and Federación Anarquista Ibérica (FAI) organizations, and for the first time in her life lived in a community run by and for anarchists, according to true anarchist principles. "In all my life", she wrote later, "I have not met with such warm hospitality, comradeship and solidarity." After touring a series of collectives in the province of Huesca, she told a group of workers: "Your revolution will destroy forever [the notion] that anarchism stands for chaos." She began editing the weekly "CNT-FAI Information Bulletin" and responded to English-language mail.
Goldman began to worry about the future of Spain's anarchism when the CNT-FAI joined a coalition government in 1937—against the core anarchist principle of abstaining from state structures—and, more distressingly, made repeated concessions to Communist forces in the name of uniting against fascism. In November 1936, she wrote that cooperating with Communists in Spain was "a denial of our comrades in Stalin's concentration camps". Russia, meanwhile, refused to send weapons to anarchist forces, and disinformation campaigns were being waged against the anarchists across Europe and the US. Her faith in the movement unshaken, Goldman returned to London as an official representative of the CNT-FAI.
Delivering lectures and giving interviews, Goldman enthusiastically supported the Spanish anarcho-syndicalists. She wrote regularly for "Spain and the World", a biweekly newspaper focusing on the civil war. In May 1937, however, Communist-led forces attacked anarchist strongholds and broke up agrarian collectives. Newspapers in England and elsewhere accepted the timeline of events offered by the Second Spanish Republic at face value. British journalist George Orwell, present for the crackdown, wrote: "[T]he accounts of the Barcelona riots in May ... beat everything I have ever seen for lying."
Goldman returned to Spain in September, but the CNT-FAI appeared to her like people "in a burning house". Worse, anarchists and other radicals around the world refused to support their cause. The Nationalist forces declared victory in Spain just before she returned to London. Frustrated by England's repressive atmosphere—which she called "more fascist than the fascists"—she returned to Canada in 1939. Her service to the anarchist cause in Spain was not forgotten, however. On her seventieth birthday, the former Secretary-General of the CNT-FAI, Mariano Vázquez, sent a message to her from Paris, praising her for her contributions and naming her as "our spiritual mother". She called it "the most beautiful tribute I have ever received".
As the events preceding World War II began to unfold in Europe, Goldman reiterated her opposition to wars waged by governments. "[M]uch as I loathe Hitler, Mussolini, Stalin and Franco", she wrote to a friend, "I would not support a war against them and for the democracies which, in the last analysis, are only Fascist in disguise." She felt that Britain and France had missed their opportunity to oppose fascism, and that the coming war would only result in "a new form of madness in the world".
On Saturday, February 17, 1940, Goldman suffered a debilitating stroke. She became paralyzed on her right side, and although her hearing was unaffected, she could not speak. As one friend described it: "Just to think that here was Emma, the greatest orator in America, unable to utter one word." For three months she improved slightly, receiving visitors and on one occasion gesturing to her address book to signal that a friend might find friendly contacts during a trip to Mexico. She suffered another stroke on May 8, however, and on May 14 she died in Toronto, aged 70.
The US Immigration and Naturalization Service allowed her body to be brought back to the United States. She was buried in German Waldheim Cemetery (now named Forest Home Cemetery) in Forest Park, Illinois, a western suburb of Chicago, near the graves of those executed after the Haymarket affair. The bas relief on her grave marker was created by sculptor Jo Davidson.
Goldman spoke and wrote extensively on a wide variety of issues. While she rejected orthodoxy and fundamentalist thinking, she was an important contributor to several fields of modern political philosophy.
She was influenced by many diverse thinkers and writers, including Mikhail Bakunin, Henry David Thoreau, Peter Kropotkin, Ralph Waldo Emerson, Nikolai Chernyshevsky, and Mary Wollstonecraft. Another philosopher who influenced Goldman was Friedrich Nietzsche. In her autobiography, she wrote: "Nietzsche was not a social theorist, but a poet, a rebel, and innovator. His aristocracy was neither of birth nor of purse; it was the spirit. In that respect Nietzsche was an anarchist, and all true anarchists were aristocrats."
Anarchism was central to Goldman's view of the world and she is today considered one of the most important figures in the history of anarchism. First drawn to it during the persecution of anarchists after the 1886 Haymarket affair, she wrote and spoke regularly on behalf of anarchism. In the title essay of her book "Anarchism and Other Essays", she wrote:
Anarchism, then, really stands for the liberation of the human mind from the dominion of religion; the liberation of the human body from the dominion of property; liberation from the shackles and restraint of government. Anarchism stands for a social order based on the free grouping of individuals for the purpose of producing real social wealth; an order that will guarantee to every human being free access to the earth and full enjoyment of the necessities of life, according to individual desires, tastes, and inclinations.
Goldman's anarchism was intensely personal. She believed it was necessary for anarchist thinkers to live their beliefs, demonstrating their convictions with every action and word. "I don't care if a man's theory for tomorrow is correct," she once wrote. "I care if his spirit of today is correct." Anarchism and free association were to her logical responses to the confines of government control and capitalism. "It seems to me that "these" are the new forms of life," she wrote, "and that they will take the place of the old, not by preaching or voting, but by living them."
At the same time, she believed that the movement on behalf of human liberty must be staffed by liberated humans. While dancing among fellow anarchists one evening, she was chided by an associate for her carefree demeanor. In her autobiography, Goldman wrote:
I told him to mind his own business, I was tired of having the Cause constantly thrown in my face. I did not believe that a Cause which stood for a beautiful ideal, for anarchism, for release and freedom from conventions and prejudice, should demand denial of life and joy. I insisted that our Cause could not expect me to behave as a nun and that the movement should not be turned into a cloister. If it meant that, I did not want it. "I want freedom, the right to self-expression, everybody's right to beautiful, radiant things."
In her political youth, Goldman held targeted violence to be a legitimate means of revolutionary struggle, believing that violence, while distasteful, could be justified by the social benefits it might bring. She advocated propaganda of the deed—"attentat", or violence carried out to encourage the masses to revolt. She supported her partner Alexander Berkman's attempt to kill industrialist Henry Clay Frick, and even begged him to allow her to participate. She believed that Frick's actions during the Homestead strike were reprehensible and that his murder would produce a positive result for working people. "Yes," she wrote later in her autobiography, "the end in this case justified the means." While she never gave explicit approval of Leon Czolgosz's assassination of US President William McKinley, she defended his ideals and believed actions like his were a natural consequence of repressive institutions. As she wrote in "The Psychology of Political Violence": "the accumulated forces in our social and economic life, culminating in an act of violence, are similar to the terrors of the atmosphere, manifested in storm and lightning."
Her experiences in Russia led her to qualify her earlier belief that revolutionary ends might justify violent means. In the afterword to "My Disillusionment in Russia", she wrote: "There is no greater fallacy than the belief that aims and purposes are one thing, while methods and tactics are another... The means employed become, through individual habit and social practice, part and parcel of the final purpose..." In the same chapter, however, Goldman affirmed that "Revolution is indeed a violent process," and noted that violence was the "tragic inevitability of revolutionary upheavals..." Some misinterpreted her comments on the Bolshevik terror as a rejection of all militant force, but Goldman corrected this in the preface to the first US edition of "My Disillusionment in Russia":
The argument that destruction and terror are part of revolution I do not dispute. I know that in the past every great political and social change necessitated violence ... Black slavery might still be a legalized institution in the United States but for the militant spirit of the John Browns. I have never denied that violence is inevitable, nor do I gainsay it now. Yet it is one thing to employ violence in combat, as a means of defense. It is quite another thing to make a principle of terrorism, to institutionalize it, to assign it the most vital place in the social struggle. Such terrorism begets counter-revolution and in turn itself becomes counter-revolutionary.
Goldman saw the militarization of Soviet society not as a result of armed resistance per se, but of the statist vision of the Bolsheviks, writing that "an insignificant minority bent on creating an absolute State is necessarily driven to oppression and terrorism."
Goldman believed that the economic system of capitalism was incompatible with human liberty. "The only demand that property recognizes," she wrote in "Anarchism and Other Essays", "is its own gluttonous appetite for greater wealth, because wealth means power; the power to subdue, to crush, to exploit, the power to enslave, to outrage, to degrade." She also argued that capitalism dehumanized workers, "turning the producer into a mere particle of a machine, with less will and decision than his master of steel and iron."
Originally opposed to anything less than complete revolution, Goldman was challenged during one talk by an elderly worker in the front row. In her autobiography, she wrote:
He said that he understood my impatience with such small demands as a few hours less a day, or a few dollars more a week... But what were men of his age to do? They were not likely to live to see the ultimate overthrow of the capitalist system. Were they also to forgo the release of perhaps two hours a day from the hated work? That was all they could hope to see realized in their lifetime.
Goldman realized that smaller efforts for improvement such as higher wages and shorter hours could be part of a social revolution.
Goldman viewed the state as essentially and inevitably a tool of control and domination. As a result, Goldman believed that voting was useless at best and dangerous at worst. Voting, she wrote, provided an illusion of participation while masking the true structures of decision-making. Instead, Goldman advocated targeted resistance in the form of strikes, protests, and "direct action against the invasive, meddlesome authority of our moral code". She maintained an anti-voting position even when many anarcho-syndicalists in 1930s Spain voted for the formation of a liberal republic. Goldman wrote that any power anarchists wielded as a voting bloc should instead be used to strike across the country. She disagreed with the movement for women's suffrage, which demanded the right of women to vote. In her essay "Woman Suffrage", she ridiculed the idea that women's involvement would infuse the democratic state with a more just orientation: "As if women have not sold their votes, as if women politicians cannot be bought!" She agreed with the suffragists' assertion that women are equal to men, but disagreed that their participation alone would make the state more just. "To assume, therefore, that she would succeed in purifying something which is not susceptible of purification, is to credit her with supernatural powers."
Goldman was also a passionate critic of the prison system, critiquing both the treatment of prisoners and the social causes of crime. Goldman viewed crime as a natural outgrowth of an unjust economic system, and in her essay "Prisons: A Social Crime and Failure", she quoted liberally from the 19th-century authors Fyodor Dostoevsky and Oscar Wilde on prisons, and wrote: "Year after year the gates of prison hells return to the world an emaciated, deformed, will-less, shipwrecked crew of humanity, with the Cain mark on their foreheads, their hopes crushed, all their natural inclinations thwarted. With nothing but hunger and inhumanity to greet them, these victims soon sink back into crime as the only possibility of existence."
Goldman was a committed war resister, believing that wars were fought by the state on behalf of capitalists. She was particularly opposed to the draft, viewing it as one of the worst of the state's forms of coercion, and was one of the founders of the No Conscription League—for which she was ultimately arrested (1917), imprisoned and deported (1919).
Goldman was routinely surveilled, arrested, and imprisoned for her speech and organizing activities in support of workers and various strikes, access to birth control, and in opposition to World War I. As a result, she became active in the early 20th century free speech movement, seeing freedom of expression as a fundamental necessity for achieving social change. Her outspoken championship of her ideals, in the face of persistent arrests, inspired Roger Baldwin, one of the founders of the American Civil Liberties Union. Goldman's and Reitman's experiences in the San Diego free speech fight (1912) were notorious examples of state and capitalist repression of the Industrial Workers of the World's campaign of free speech fights.
Although she was hostile to the suffragist goals of first-wave feminism, Goldman advocated passionately for the rights of women, and is today heralded as a founder of anarcha-feminism, which challenges patriarchy as a hierarchy to be resisted alongside state power and class divisions. In 1897, she wrote: "I demand the independence of woman, her right to support herself; to live for herself; to love whomever she pleases, or as many as she pleases. I demand freedom for both sexes, freedom of action, freedom in love and freedom in motherhood."
A nurse by training, Goldman was an early advocate for educating women concerning contraception. Like many feminists of her time, she saw abortion as a tragic consequence of social conditions, and birth control as a positive alternative. Goldman was also an advocate of free love, and a strong critic of marriage. She saw early feminists as confined in their scope and bounded by social forces of Puritanism and capitalism. She wrote: "We are in need of unhampered growth out of old traditions and habits. The movement for women's emancipation has so far made but the first step in that direction."
Goldman was also an outspoken critic of prejudice against homosexuals. Her belief that social liberation should extend to gay men and lesbians was virtually unheard of at the time, even among anarchists. As German sexologist Magnus Hirschfeld wrote, "she was the first and only woman, indeed the first and only American, to take up the defense of homosexual love before the general public." In numerous speeches and letters, she defended the right of gay men and lesbians to love as they pleased and condemned the fear and stigma associated with homosexuality. As Goldman wrote in a letter to Hirschfeld, "It is a tragedy, I feel, that people of a different sexual type are caught in a world which shows so little understanding for homosexuals and is so crassly indifferent to the various gradations and variations of gender and their great significance in life."
A committed atheist, Goldman viewed religion as another instrument of control and domination. Her essay "The Philosophy of Atheism" quoted Bakunin at length on the subject and added:
Consciously or unconsciously, most theists see in gods and devils, heaven and hell, reward and punishment, a whip to lash the people into obedience, meekness and contentment... The philosophy of Atheism expresses the expansion and growth of the human mind. The philosophy of theism, if we can call it a philosophy, is static and fixed.
In essays like "The Hypocrisy of Puritanism" and a speech entitled "The Failure of Christianity", Goldman made more than a few enemies among religious communities by attacking their moralistic attitudes and efforts to control human behavior. She blamed Christianity for "the perpetuation of a slave society", arguing that it dictated individuals' actions on Earth and offered poor people a false promise of a plentiful future in heaven. She was also critical of Zionism, which she saw as another failed experiment in state control.
Goldman was well known during her life, described as—among other things—"the most dangerous woman in America". After her death and through the middle part of the 20th century, her fame faded. Scholars and historians of anarchism viewed her as a great speaker and activist, but did not regard her as a philosophical or theoretical thinker on par with, for example, Kropotkin.
In 1970, Dover Press reissued Goldman's autobiography, "Living My Life", and in 1972, feminist writer Alix Kates Shulman issued a collection of Goldman's writing and speeches, "Red Emma Speaks". These works brought Goldman's life and writings to a larger audience, and she was in particular lionized by the women's movement of the late 20th century. In 1973, Shulman was asked by a printer friend for a quotation by Goldman for use on a T-shirt. She sent him the selection from "Living My Life" about "the right to self-expression, everybody's right to beautiful, radiant things", recounting that she had been admonished "that it did not behoove an agitator to dance". The printer created a statement based on these sentiments that has become one of Goldman's most famous quotations, even though she probably never said or wrote it as such: "If I can't dance I don't want to be in your revolution." Variations of this saying have appeared on thousands of T-shirts, buttons, posters, bumper stickers, coffee mugs, hats, and other items.
The women's movement of the 1970s that "rediscovered" Goldman was accompanied by a resurgent anarchist movement, beginning in the late 1960s, which also reinvigorated scholarly attention to earlier anarchists. The growth of feminism also initiated some reevaluation of Goldman's philosophical work, with scholars pointing out the significance of Goldman's contributions to anarchist thought in her time. Goldman's belief in the value of aesthetics, for example, can be seen in the later influences of anarchism and the arts. Similarly, Goldman is now given credit for significantly influencing and broadening the scope of activism on issues of sexual liberty, reproductive rights, and freedom of expression.
Goldman has been depicted in numerous works of fiction over the years, including Warren Beatty's 1981 film "Reds", in which she was portrayed by Maureen Stapleton, who won an Academy Award for her performance. Goldman has also been a character in two Broadway musicals, "Ragtime" and "Assassins". Plays depicting Goldman's life include Howard Zinn's play, "Emma"; Martin Duberman's "Mother Earth"; Jessica Litwak's "Emma Goldman: Love, Anarchy, and Other Affairs" (about Goldman's relationship with Berkman and her arrest in connection with McKinley's assassination); Lynn Rogoff's "Love Ben, Love Emma" (about Goldman's relationship with Reitman); Carol Bolt's "Red Emma"; and Alexis Roblan's "Red Emma and the Mad Monk". Ethel Mannin's 1941 novel "Red Rose" is also based on Goldman's life.
Goldman has been honored by a number of organizations named in her memory. The Emma Goldman Clinic, a women's health center located in Iowa City, Iowa, selected Goldman as a namesake "in recognition of her challenging spirit." Red Emma's Bookstore Coffeehouse, an infoshop in Baltimore, Maryland, adopted her name out of their belief "in the ideas and ideals that she fought for her entire life: free speech, sexual and racial equality and independence, the right to organize in our jobs and in our own lives, ideas and ideals that we continue to fight for, even today".
Paul Gailiunas and his late wife Helen Hill co-wrote the anarchist song "Emma Goldman", which was performed and released by the band Piggy: The Calypso Orchestra of the Maritimes in 1999. The song was later performed by Gailiunas' new band The Troublemakers and released on their 2004 album "Here Come The Troublemakers".
UK punk band Martha's song "Goldman's Detective Agency" reimagines Goldman as a private detective investigating police and political corruption.
Goldman was a prolific writer, penning countless pamphlets and articles on a diverse range of subjects. She authored six books, including an autobiography, "Living My Life", and a biography of fellow anarchist Voltairine de Cleyre. | https://en.wikipedia.org/wiki?curid=9764 |
Equuleus
Equuleus is a constellation. Its name is Latin for "little horse", a foal. It was one of the 48 constellations listed by the 2nd century astronomer Ptolemy, and remains one of the 88 modern constellations. It is the second smallest of the modern constellations (after Crux), spanning only 72 square degrees. It is also very faint, having no stars brighter than the fourth magnitude.
The brightest star in Equuleus is Alpha Equulei, traditionally called Kitalpha, a yellow star of magnitude 3.9, 186 light-years from Earth. Its traditional name means "the section of the horse".
There are few variable stars in Equuleus. Only around 25 are known, most of which are faint. Gamma Equulei is an alpha CVn star, ranging between magnitudes 4.58 and 4.77 over a period of around 12½ minutes. It is a white star 115 light-years from Earth, and has an optical companion of magnitude 6.1, 6 Equulei. The pair is divisible in binoculars. R Equulei is a Mira variable that ranges between magnitudes 8.0 and 15.7 over nearly 261 days.
Equuleus contains some double stars of interest. γ Equ consists of a primary star with a magnitude around 4.7 (slightly variable) and a secondary star of magnitude 11.6, separated by 2 arcseconds. Epsilon Equulei is a triple star also designated 1 Equulei. The system, 197 light-years away, has a primary of magnitude 5.4 that is itself a binary star; its components are of magnitude 6.0 and 6.3 and have an orbital period of 101 years. The secondary is of magnitude 7.4 and is visible in small telescopes. The components of the primary are drawing closer together and, beginning in 2015, will no longer be divisible in amateur telescopes. δ Equ is a binary star with an orbital period of 5.7 years, which at one time was the shortest known orbital period for a visual binary. The two components of the system are never more than 0.35 arcseconds apart.
Due to its small size and its distance from the plane of the Milky Way, Equuleus contains no notable deep sky objects. Some very faint galaxies between magnitudes 13 and 15 include NGC 7015, NGC 7040, NGC 7045 and NGC 7046.
In Greek mythology, one myth associates Equuleus with the foal Celeris (meaning "swiftness" or "speed"), who was the offspring or brother of the winged horse Pegasus. Celeris was given to Castor by Mercury. Other myths say that Equuleus is the horse struck from Poseidon's trident, during the contest between him and Athena when deciding which would be the superior. Because this section of stars rises before Pegasus, it is often called Equus Primus, or the First Horse. Equuleus is also linked to the story of Philyra and Saturn.
Created by Hipparchus and included by Ptolemy, it abuts Pegasus; unlike the larger horse it is depicted as a horse's head alone.
In Chinese astronomy, the stars that correspond to Equuleus are located within the Black Tortoise of the North (北方玄武, "Běi Fāng Xuán Wǔ").
Equuleus is briefly mentioned in the "Martha Speaks" episode "Dogs in Space" as one of Helen Lorraine's favorite constellations. | https://en.wikipedia.org/wiki?curid=9765 |
Eucharist
The Eucharist (; also known as Holy Communion and the Lord's Supper among other names) is a Christian rite that is considered a sacrament in most churches, and as an ordinance in others. According to the New Testament, the rite was instituted by Jesus Christ during the Last Supper; giving his disciples bread and wine during a Passover meal, Jesus commanded his disciples to "do this in memory of me" while referring to the bread as "my body" and the cup of wine as "the new covenant in my blood". Through the eucharistic celebration Christians remember both Christ's sacrifice of himself on the cross and his commission of the apostles at the Last Supper.
The elements of the Eucharist, sacramental bread (leavened or unleavened) and sacramental wine (or grape juice), are consecrated on an altar (or a communion table) and consumed thereafter. Communicants, those who consume the elements, may speak of "receiving the Eucharist" as well as "celebrating the Eucharist". Christians generally recognize a special presence of Christ in this rite, though they differ about exactly how, where, and when Christ is present. While all agree that there is no perceptible change in the appearances of the elements (e.g. color, taste, feel, and smell), Catholics believe that their substances actually become the body and blood of Christ (transubstantiation) while the appearances or "species" of the elements remain. Lutherans believe the true body and blood of Christ are really present "in, with, and under" the forms of the bread and wine (sacramental union). Reformed Christians believe in a real spiritual presence of Christ in the Eucharist. Anglican eucharistic theologies universally affirm the real presence of Christ in the Eucharist, though Evangelical Anglicans believe that this is a spiritual presence, while Anglo-Catholics share the Catholic belief. Others, such as the Plymouth Brethren, take the act to be only a symbolic reenactment of the Last Supper and a memorial.
In spite of differences among Christians about various aspects of the Eucharist, there is, according to the "Encyclopædia Britannica", "more of a consensus among Christians about the meaning of the Eucharist than would appear from the confessional debates over the sacramental presence, the effects of the Eucharist, and the proper auspices under which it may be celebrated".
The Greek noun εὐχαριστία ("eucharistía"), meaning "thanksgiving", appears fifteen times in the New Testament but is not used as an official name for the rite; however, the related verb εὐχαριστέω is found in New Testament accounts of the Last Supper, including the earliest such account (1 Corinthians 11:23–24).
The term εὐχαριστία (thanksgiving) is that by which the rite is referred to in the Didache (a late 1st or early 2nd century document), and by Ignatius of Antioch (who died between 98 and 117) and Justin Martyr (writing between 147 and 167). Today, "the Eucharist" is the name still used by Eastern Orthodox, Oriental Orthodox, Catholics, Anglicans, Presbyterians, and Lutherans. Other Protestant denominations rarely use this term, preferring either "Communion", "the Lord's Supper", "Remembrance", or "the Breaking of Bread". Latter-day Saints call it "Sacrament".
The Lord's Supper, in Greek Κυριακὸν δεῖπνον ("Kyriakon deipnon"), was in use in the early 50s of the 1st century, as witnessed by the First Epistle to the Corinthians (11:20–21):
When you come together, it is not the Lord's Supper you eat, for as you eat, each of you goes ahead without waiting for anybody else. One remains hungry, another gets drunk.
It is the predominant term among Evangelicals, such as Baptists and Pentecostals. They also refer to the observance as an ordinance rather than a sacrament.
Use of the term "Communion" (or "Holy Communion") to refer to the Eucharistic rite began by some groups originating in the Protestant Reformation. Others, such as the Catholic Church, do not formally use this term for the rite, but instead mean by it the act of partaking of the consecrated elements; they speak of receiving Holy Communion even outside of the rite, and of participating in the rite without receiving First Communion. The term "Communion" is derived from Latin "communio" ("sharing in common"), which translates Greek κοινωνία ("koinōnía") in : The cup of blessing which we bless, is it not the "communion" of the blood of Christ? The bread which we break, is it not the "communion" of the body of Christ?
The phrase κλάσις τοῦ ἄρτου ("klasis tou artou", 'breaking of the bread'; in later liturgical Greek also ἀρτοκλασία, "artoklasia") appears in various related forms five times in the New Testament (Luke 24:35; Acts 2:42, 2:46, 20:7, and 27:35) in contexts which, according to some, may refer to the celebration of the Eucharist, in either closer or symbolically more distant reference to the Last Supper. It is the term used by the Plymouth Brethren.
The "Blessed Sacrament", the "Sacrament of the Altar", and other variations, are common terms used by Catholics, Lutherans and some Anglicans (Anglo-Catholics) for the consecrated elements, particularly when reserved in a tabernacle. In The Church of Jesus Christ of Latter-day Saints the term "The Sacrament" is used of the rite.
Mass is used in the Roman Rite of the Catholic Church, the Lutheran churches (especially in the Church of Sweden, the Church of Norway, the Evangelical Lutheran Church of Finland), by many Anglicans (especially those of an Anglo-Catholic churchmanship), and in some other forms of Western Christianity. At least in the Catholic Church, the Mass is a longer rite which always consists of two main parts: the Liturgy of the Word and the Liturgy of the Eucharist, in that order. The Liturgy of the Word consists mainly of readings from scripture (the Bible) and a homily (otherwise called a sermon) preached by a priest or deacon and is essentially distinct and separate from the Sacrament of the Eucharist, which comprises the entirety of the Liturgy of the Eucharist, so the Eucharist itself is only about one half of the Mass. (It is also possible and permissible in the Roman Rite for distribution of the Eucharist to occur outside the ritual structure of the Mass—such an event is often called a communion service—but it is much more common to celebrate a full Mass.) Among the many other terms used in the Catholic Church are "Holy Mass", "the Memorial of the Passion, Death and Resurrection of the Lord", the "Holy Sacrifice of the Mass", and the "Holy Mysteries". The term "mass" derives from post-classical Latin "missa" ("dismissal"), found in the concluding phrase of the liturgy, "Ite, missa est". The term "missa" has come to imply a 'mission', because at the end of the Mass the congregation are sent out to serve Christ.
The term Divine Liturgy (Θεία Λειτουργία, "Theia Leitourgia") is used in Byzantine Rite traditions, whether in the Eastern Orthodox Church or among the Eastern Catholic Churches. These also speak of "the Divine Mysteries", especially in reference to the consecrated elements, which they also call "the Holy Gifts".
The term Divine Service (German: "Gottesdienst") is used in the Lutheran Churches, in addition to the terms "Eucharist", "Mass" and "Holy Communion". The term reflects the Lutheran belief that God is serving the congregants in the liturgy.
Some Eastern rites have yet more names for Eucharist. Holy Qurbana is common in Syriac Christianity and "Badarak" in the Armenian Rite; in the Alexandrian Rite, the term "Prosfora" is common in Coptic Christianity and "Keddase" in Ethiopian and Eritrean Christianity.
The Last Supper appears in all three Synoptic Gospels: Matthew, Mark, and Luke. It also is found in the First Epistle to the Corinthians, which suggests how early Christians celebrated what Paul the Apostle called the Lord's Supper. Although the Gospel of John does not reference the Last Supper explicitly, some argue that it contains theological allusions to the early Christian celebration of the Eucharist, especially in the chapter 6 Bread of Life Discourse but also in other passages.
In his First Epistle to the Corinthians (c. 54–55), Paul the Apostle gives the earliest recorded description of Jesus' Last Supper: "The Lord Jesus on the night when he was betrayed took bread, and when he had given thanks, he broke it and said, 'This is my body, which is for you. Do this in remembrance of me.'" The Greek word here translated "remembrance" is ἀνάμνησις ("anamnesis"), a term with a much richer theological history than the English word "remember".
The synoptic gospels (Matthew, Mark and Luke) depict Jesus as presiding over the Last Supper prior to his crucifixion. The versions in Matthew and Mark are almost identical, but Luke's Gospel presents a textual problem in that a few manuscripts omit the second half of verse 19 and all of v.20 ("given for you … poured out for you"), which are found in the vast majority of ancient witnesses to the text. If the shorter text is the original one, then Luke's account is independent of both that of Paul and that of Matthew/Mark. If the majority longer text comes from the author of the third gospel, then this version is very similar to that of Paul in 1 Corinthians, being somewhat fuller in its description of the early part of the Supper, particularly in making specific mention of a cup being blessed before the bread was broken.
Uniquely, in the one prayer given to posterity by Jesus, the Lord's Prayer, the word epiousios—which does not exist in Classical Greek literature—has been interpreted by some as meaning "super-substantial", a reference to the Bread of Life, the Eucharist.
In the gospel of John, however, the account of the Last Supper does not mention Jesus taking bread and "the cup" and speaking of them as his body and blood; instead, it recounts other events: his humble act of washing the disciples' feet, the prophecy of the betrayal, which set in motion the events that would lead to the cross, and his long discourse in response to some questions posed by his followers, in which he went on to speak of the importance of the unity of the disciples with him, with each other, and with the Father. Some would find in this unity and in the washing of the feet the deeper meaning of the Communion bread in the other three gospels. In chapter 6, the evangelist attributes a long discourse to Jesus that deals with the subject of the living bread, and verses 51–58 contain echoes of Eucharistic language. The interpretation of the whole passage has been extensively debated due to theological and scholarly disagreements. Sir Edwyn Hoskyns notes three main schools of thought: (a) the language is metaphorical, and verse 63: "The Spirit gives life; the flesh counts for nothing. The words I have spoken to you—they are full of the Spirit and life" gives the author's precise meaning; (b) vv 51–58 are a later interpolation that cannot be harmonized with the context; (c) the discourse is homogeneous, sacrificial, and sacramental and can be harmonized, though not all attempts are satisfactory.
The expression "The Lord's Supper", derived from St. Paul's usage in , may have originally referred to the Agape feast (or love feast), the shared communal meal with which the Eucharist was originally associated. The Agape feast is mentioned in but "The Lord's Supper" is now commonly used in reference to a celebration involving no food other than the sacramental bread and wine.
The Didache (Greek: Διδαχή "teaching") is an early Church treatise that includes instructions for Baptism and the Eucharist. Most scholars date it to the late 1st century, and distinguish in it two separate Eucharistic traditions, the earlier tradition in chapter 10 and the later one preceding it in chapter 9. The Eucharist is mentioned again in chapter 14.
Ignatius of Antioch (born c. 35 or 50, died between 98 and 117), one of the Apostolic Fathers, mentions the Eucharist as "the flesh of our Saviour Jesus Christ", and Justin Martyr speaks of it as more than a meal: "the food over which the prayer of thanksgiving, the word received from Christ, has been said ... is the flesh and blood of this Jesus who became flesh ... and the deacons carry some to those who are absent."
Paschasius Radbertus (785–865) was a Carolingian theologian and the abbot of Corbie, whose best-known and most influential work is an exposition on the nature of the Eucharist written around 831, entitled "De Corpore et Sanguine Domini".
He was canonized in 1073 by Pope Gregory VII. His works are edited in "Patrologia Latina" vol. 120 (1852).
Most Christians, even those who deny that there is any real change in the elements used, recognize a special presence of Christ in this rite. But Christians differ about exactly how, where and how long Christ is present in it. Catholicism, Eastern Orthodoxy, Oriental Orthodoxy, and the Church of the East teach that the reality (the "substance") of the elements of bread and wine is wholly changed into the body and blood of Jesus Christ, while the appearances (the "species") remain. Transubstantiation ("change of the reality") is the term used by Catholics to denote "what" is changed, not to explain "how" the change occurs, since the Catholic Church teaches that "the signs of bread and wine become, "in a way surpassing understanding", the Body and Blood of Christ". The Orthodox use various terms such as transelementation, but no explanation is official as they prefer to leave it a mystery.
Lutherans and Reformed Christians believe Christ to be present and may both use the term "sacramental union" to describe this. However, Lutherans believe that the whole Christ, including his body and blood, is truly present in the supper, whereas the Reformed generally describe a "spiritual presence," as Jesus' body (and thereby his blood) is present in heaven and cannot be present on Earth as well. Lutherans specify that Christ is "in, with and under" the forms of bread and wine. Anglicans adhere to a range of views although the teaching in the Articles of Religion holds that the body of Christ is received by the faithful only in a heavenly and spiritual manner. Some Christians do not believe in the concept of the real presence, believing that the Eucharist is only a ceremonial remembrance or memorial of the death of Christ.
The "Baptism, Eucharist and Ministry" document of the World Council of Churches, attempting to present the common understanding of the Eucharist on the part of the generality of Christians, describes it as "essentially the sacrament of the gift which God makes to us in Christ through the power of the Holy Spirit", "Thanksgiving to the Father", "Anamnesis or Memorial of Christ", "the sacrament of the unique sacrifice of Christ, who ever lives to make intercession for us", "the sacrament of the body and blood of Christ, the sacrament of his real presence", "Invocation of the Spirit", "Communion of the Faithful", and "Meal of the Kingdom".
Many Christian denominations classify the Eucharist as a sacrament. Some Protestants (though not all) prefer to instead call it an "ordinance", viewing it not as a specific channel of divine grace but as an expression of faith and of obedience to Christ.
In the Catholic Church the Eucharist is considered a sacrament; according to the Church, the Eucharist is "the source and summit of the Christian life." "The other sacraments, and indeed all ecclesiastical ministries and works of the apostolate, are bound up with the Eucharist and are oriented toward it. For in the blessed Eucharist is contained the whole spiritual good of the Church, namely Christ himself, our Pasch."
In the Eucharist the same sacrifice that Jesus made only once on the cross is made present at every Mass. According to Compendium of the Catechism of the Catholic Church "The Eucharist is the very sacrifice of the Body and Blood of the Lord Jesus which he instituted to perpetuate the sacrifice of the cross throughout the ages until his return in glory. Thus he entrusted to his Church this memorial of his death and Resurrection. It is a sign of unity, a bond of charity, a paschal banquet, in which Christ is consumed, the mind is filled with grace, and a pledge of future glory is given to us."
For the Catholic Church the Eucharist is the memorial of Christ's Passover, the making present and the sacramental offering of his unique sacrifice, in the liturgy of the Church which is his Body... the "memorial" is not merely the recollection of past events but ... they become in a certain way present and real. When the Church celebrates the Eucharist, she commemorates Christ's Passover, and it is made present: the sacrifice Christ offered once for all on the cross remains ever present. The Eucharist is thus a sacrifice because it re-presents (makes present) the same and only sacrifice offered once for all on the cross, because it is its memorial and because it applies its fruit.
The sacrifice of Christ and the sacrifice of the Eucharist are one single sacrifice: "The victim is one and the same: the same now offers through the ministry of priests, who then offered himself on the cross; only the manner of offering is different." "And since in this divine sacrifice which is celebrated in the Mass, the same Christ who offered himself once in a bloody manner on the altar of the cross is contained and is offered in an unbloody manner... this sacrifice is truly propitiatory."
Currently, however, scripture scholars contend that using the word "propitiation" was a mistranslation by Jerome from the Greek "hilastērion" into the Latin Vulgate, and is misleading for describing the sacrifice of Jesus and its Eucharistic remembrance. One expression of the conclusion of theologians is that sacrifice "is not something human beings do to God (that would be propitiation) but something which God does for humankind (which is expiation)."
The only ministers who can officiate at the Eucharist and consecrate the sacrament are ordained priests (either bishops or presbyters) acting in the person of Christ ("in persona Christi"). In other words, the priest celebrant represents Christ, who is the Head of the Church, and acts before God the Father in the name of the Church, always using "we" not "I" during the Eucharistic prayer. The matter used must be wheaten bread and grape wine; this is considered essential for validity.
The Catholic Church teaches that Jesus is present in a true, real and substantial way, with his Body and his Blood, with his Soul and his Divinity under the Eucharistic species of bread and wine, Christ whole and entire, God and Man. During the consecration of bread and wine, both bread and wine become the body and blood of Jesus Christ. The change of the whole substance of bread into the substance of the Body of Christ and of the whole substance of wine into the substance of his Blood is called transubstantiation. This change is brought about in the eucharistic prayer through the efficacy of the word of Christ and by the action of the Holy Spirit. However, the outward characteristics of bread and wine, that is the "eucharistic species", remain unaltered. The presence of Christ continues in the Eucharist as long as the eucharistic species subsist, that is, until the Eucharist is digested, physically destroyed, or decays by some natural process (at which point Aquinas argued that the substance of the bread and wine cannot return). The empirical appearance and physical properties (called the "species" or "accidents") are not changed, but in the view of Catholics, the reality (called the "substance") indeed is; hence the term "transubstantiation" to describe the phenomenon. The Council of Trent declares that by the consecration of the bread (known as the Host) and wine "there takes place a change of the whole substance of the bread into the substance of the body of Christ our Lord and of the whole substance of the wine into the substance of his blood. This change the holy Catholic Church has fittingly and properly called transubstantiation." The Church holds that the body and blood of Jesus can no longer be truly separated. Where one is, the other must be. Therefore, although the priest (or extraordinary minister of Holy Communion) says "The Body of Christ" when administering the Host and "The Blood of Christ" when presenting the chalice, the communicant who receives either one receives Christ, whole and entire. "The Eucharistic presence of Christ begins at the moment of the consecration and endures as long as the Eucharistic species subsist. Christ is present whole and entire in each of the species and whole and entire in each of their parts, in such a way that the breaking of the bread does not divide Christ."
The Catholic Church sees as the main basis for this belief the words of Jesus himself at his Last Supper: the Synoptic Gospels and Saint Paul recount that in that context Jesus said of what to all appearances were bread and wine: "This is my body … this is my blood." The Catholic understanding of these words, from the Patristic authors onward, has emphasized their roots in the covenantal history of the Old Testament. The interpretation of Christ's words against this Old Testament background coheres with and supports belief in the Real presence of Christ in the Eucharist.
In 1551, the Council of Trent definitively declared, "Because Christ our Redeemer said that it was truly his body that he was offering under the species of bread, it has always been the conviction of the Church of God, and this holy Council now declares again that by the consecration of the bread and wine there takes place a change of the whole substance of the bread into the substance of the body of Christ our Lord and of the whole substance of the wine into the substance of his blood. This change the holy Catholic Church has fittingly and properly called transubstantiation." The Fourth Council of the Lateran in 1215 had spoken of "Jesus Christ, whose body and blood are truly contained in the sacrament of the altar under the forms of bread and wine, the bread being changed ("transsubstantiatis") by divine power into the body and the wine into the blood." The attempt by some twentieth-century Catholic theologians to present the Eucharistic change as an alteration of significance (transignification rather than transubstantiation) was rejected by Pope Paul VI in his 1965 encyclical letter "Mysterium fidei". In his 1968 "Credo of the People of God", he reiterated that any theological explanation of the doctrine must hold to the twofold claim that, after the consecration, 1) Christ's body and blood are really present; and 2) bread and wine are really absent; and this presence and absence is "real" and not merely something in the mind of the believer.
On entering a church, Roman Rite Catholics genuflect to the tabernacle that holds the consecrated host in order to respectfully acknowledge the presence of Jesus in the Blessed Sacrament, a presence signalled by a sanctuary lamp or votive candle kept burning close to such a tabernacle. (If there is no such burning light, it indicates that the tabernacle is empty of the special presence of Jesus in the Eucharist.) Catholics will also often kneel or sit before the tabernacle, when the sanctuary light is lit, to pray directly to Jesus, materially present in the form of the Eucharist. Similarly, the consecrated Eucharistic host is sometimes exposed on the altar, usually in an ornamental fixture called a Monstrance, so that Catholics may pray or contemplate in the direct presence and in direct view of Jesus in the Eucharist; this is sometimes called "exposition of the Blessed Sacrament", and the prayer and contemplation in front of the exposed Eucharist are often called "adoration of the Blessed Sacrament" or just "adoration". All of these practices stem from belief in the Real Presence of Jesus Christ in the Eucharist, which is an essential Article of Faith of the Catholic Church.
According to Catholic Church doctrine, receiving the Eucharist in a state of mortal sin is a sacrilege, and only those who are in a state of grace, that is, without any mortal sin, can receive it. Basing itself on 1 Corinthians 11:27–29, it affirms the following: "Anyone who is aware of having committed a mortal sin must not receive Holy Communion, even if he experiences deep contrition, without having first received sacramental absolution, unless he has a grave reason for receiving Communion and there is no possibility of going to confession."
Within Eastern Christianity, the Eucharistic service is called the "Divine Liturgy" (Byzantine Rite) or similar names in other rites. It comprises two main divisions: the first is the "Liturgy of the Catechumens" which consists of introductory litanies, antiphons and scripture readings, culminating in a reading from one of the Gospels and, often, a homily; the second is the "Liturgy of the Faithful" in which the Eucharist is offered, consecrated, and received as Holy Communion. Within the latter, the actual Eucharistic prayer is called the "anaphora", literally "offering" or "carrying up" (Greek ἀναφορά). In the Rite of Constantinople, two different anaphoras are currently used: one is attributed to Saint John Chrysostom, the other to Saint Basil the Great. In the Oriental Orthodox Church, a variety of anaphoras are used, but all are similar in structure to those of the Constantinopolitan Rite, in which the Anaphora of Saint John Chrysostom is used most days of the year; Saint Basil's is offered on the Sundays of Great Lent, the eves of Christmas and Theophany, Holy Thursday, Holy Saturday, and upon his feast day (1 January). At the conclusion of the Anaphora the bread and wine are held to be the Body and Blood of Christ. Unlike the Latin Church, the Byzantine Rite uses leavened bread, with the leaven symbolizing the presence of the Holy Spirit. The Armenian Apostolic Church, like the Latin Church, uses unleavened bread, whereas the Greek Orthodox Church uses leavened bread in its celebration.
Conventionally this change in the elements is understood to be accomplished at the "Epiclesis" ("invocation") by which the Holy Spirit is invoked and the consecration of the bread and wine as the true and genuine Body and Blood of Christ is specifically requested, but since the anaphora as a whole is considered a unitary (albeit lengthy) prayer, no one moment within it can readily be singled out.
Anglican eucharistic theology is not memorialist (the belief that nothing special happens at the Lord's Supper other than devotional reflection on Christ's death). Christ is present in the fullness of his person, but the Church of England has repeatedly refused to make any definition of the Presence official, preferring to leave it a mystery while proclaiming the consecrated bread and wine to be the spiritual food of his Most Precious Body and Blood, or simply his Body and Blood. The bread and wine are an "outward sign of an inner grace" (BCP Catechism, p. 859). The Words of Administration at Communion allow for a Real Presence or for a real but spiritual Presence (Calvinist Receptionism and Virtualism), positions congenial to most Anglicans well into the 19th century. From the 1840s the Tractarians re-introduced the Real Presence to suggest a corporeal presence, which could be done since the language of the BCP Rite referred to the Body and Blood of Christ without details, as well as referring to these as spiritual food at other places in the text. Both are found in the Roman and other Rites, but in the former a definite interpretation is applied. Receptionism and Virtualism assert the Real Presence: the former places emphasis on the recipient, while the latter states that the Presence is confected by the power of the Holy Spirit but not in Christ's natural body. His presence is objective and does not depend for its existence on the faith of the recipient. The liturgy petitions that the elements 'be' rather than 'become' the Body and Blood of Christ, leaving aside any theory of a change in the natural elements: bread and wine are the outer reality and the Presence is the inner, not visible but perceived by faith.
In 1789 the Protestant Episcopal Church of the USA restored explicit language that the Eucharist is an oblation (sacrifice) to God. Subsequent revisions of the Prayer Book by member churches of the Anglican Communion have done likewise (the Church of England did so in the 1928 Prayer Book).
The so-called 'Black Rubric' in the 1552 Prayer Book, which allowed kneeling for communion but denied the real and essential presence of Christ in the elements, was omitted in the 1559 edition at the Queen's insistence. It was re-instated in the 1662 Book, modified to deny any corporal presence of Christ's natural flesh and blood.
In most parishes of the Anglican Communion the Eucharist is celebrated every Sunday, having replaced Morning Prayer as the principal service. The rites for the Eucharist are found in the various prayer books of the Anglican churches. Wine and either leavened or unleavened bread or wafers are used. Daily celebrations are the norm in many cathedrals, and parish churches sometimes offer one or more services of Holy Communion during the week. The nature of the liturgy varies according to the theological tradition of the priests, parishes, dioceses and regional churches.
The bread and "fruit of the vine" indicated in Matthew, Mark and Luke as the elements of the "Lord's Supper" are interpreted by many Baptists as unleavened bread (although leavened bread is often used) and, in line with the historical stance of some Baptist groups (since the mid-19th century) against partaking of alcoholic beverages, grape juice, which they commonly refer to simply as "the Cup". The unleavened bread also underscores the symbolic belief attributed to Christ's breaking the bread and saying that it was his body. A soda cracker is often used.
Most Baptists consider the Communion to be primarily an act of remembrance of Christ's atonement, and a time of renewal of personal commitment.
However, with the rise of confessionalism, some Baptists have denied the Zwinglian doctrine of mere memorialism and have taken up a Reformed view of Communion. Confessional Baptists believe in pneumatic presence, which is expressed in the Second London Baptist Confession, specifically in Chapter 30, Articles 3 and 7. This view is prevalent among Southern Baptists, those in the Founders movement (a Calvinistic movement within the Southern Baptist Convention), Freewill Baptists, and several individuals in other Baptist associations.
Communion practices and frequency vary among congregations. A typical practice is to have small cups of juice and plates of broken bread distributed to the seated congregation. In other congregations, communicants may proceed to the altar to receive the elements, then return to their seats. A widely accepted practice is for all to receive and hold the elements until everyone is served, then consume the bread and cup in unison. Usually, music is performed and Scripture is read during the receiving of the elements.
Some Baptist churches are closed-Communionists (even requiring full membership in the church before partaking), with others being partially or fully open-Communionists. It is rare to find a Baptist church where The Lord's Supper is observed every Sunday; most observe monthly or quarterly, with some holding Communion only during a designated Communion service or following a worship service. Adults and children in attendance, who have not made a profession of faith in Christ, are expected to not participate.
Lutherans believe that the body and blood of Christ are "truly and substantially present in, with, and under the forms" of the consecrated bread and wine (the elements), so that communicants eat and drink the body and blood of Christ himself as well as the bread and wine in this sacrament. The Lutheran doctrine of the Real Presence is more accurately and formally known as the "sacramental union". It has been called "consubstantiation" by non-Lutherans. This term is specifically rejected by Lutheran churches and theologians since it creates confusion about the actual doctrine and subjects the doctrine to the control of a non-biblical philosophical concept in the same manner as, in their view, does the term "transubstantiation".
While an official movement exists in Lutheran congregations to celebrate the Eucharist weekly, using formal rites very similar to the Catholic and "high" Anglican services, it was historically common for congregations to celebrate monthly or even quarterly. Even in congregations where the Eucharist is offered weekly, there is no requirement that every church service be a Eucharistic service, nor that all members of a congregation must receive it weekly.
Traditional Mennonite and German Baptist Brethren churches, such as the Church of the Brethren, include the Agape Meal, footwashing, and the serving of the bread and wine as parts of the Communion service in the Lovefeast. In the more modern groups, Communion is only the serving of the Lord's Supper. In the communion meal, the members of the Mennonite churches renew their covenant with God and with each other.
Among Open assemblies, also termed Plymouth Brethren, the Eucharist is more commonly called the Breaking of Bread or the Lord's Supper. It is seen as a symbolic memorial and is central to the worship of both individual and assembly. In principle, the service is open to all baptized Christians, but an individual's eligibility to participate depends on the views of each particular assembly. The service takes the form of non-liturgical, open worship with all male participants allowed to pray audibly and select hymns or readings. The breaking of bread itself typically consists of one leavened loaf, which is prayed over and broken by a participant in the meeting and then shared around. The wine is poured from a single container into one or several vessels, and these are again shared around.
The Exclusive Brethren follow a similar practice to the Open Brethren. They also call the Eucharist the Breaking of Bread or the Lord's Supper.
In the Reformed Churches the Eucharist is variously administered. The Calvinist view of the Sacrament sees a real presence of Christ in the supper which differs both from the objective ontological presence of the Catholic view, and from the real absence of Christ and the mental recollection of the memorialism of the Zwinglians and their successors.
The bread and wine become the means by which the believer has real communion with Christ in his death, and Christ's body and blood are present to the faith of the believer as really as the bread and wine are present to their senses, but this presence is "spiritual", that is, the work of the Holy Spirit. There is no standard frequency; John Calvin desired weekly communion, but the city council only approved monthly, and monthly celebration has become the most common practice in Reformed churches today.
Many, on the other hand, follow John Knox in celebration of the Lord's supper on a quarterly basis, to give proper time for reflection and inward consideration of one's own state and sin. Recently, Presbyterian and Reformed Churches have been considering whether to restore more frequent communion, including weekly communion in more churches, considering that infrequent communion was derived from a memorialist view of the Lord's Supper, rather than Calvin's view of the sacrament as a means of grace. Some churches use bread without any raising agent (whether leaven or yeast), in view of the use of unleavened bread at Jewish Passover meals, while others use any bread available.
The Presbyterian Church (USA), for instance, prescribes "bread common to the culture". Harking back to the regulative principle of worship, the Reformed tradition had long eschewed coming forward to receive communion, preferring to have the elements distributed throughout the congregation by the presbyters (elders), more in the style of a shared meal. Over the last half-century it has become much more common in Presbyterian churches to have Holy Communion monthly or weekly. It is also becoming common to receive the elements by intinction (receiving a piece of consecrated bread or wafer, dipping it in the blessed wine, and consuming it). Wine and grape juice are both used, depending on the congregation.
Most Reformed churches practice "open communion", i.e., all believers who are united to a church of like faith and practice, and who are not living in sin, would be allowed to join in the Sacrament.
The British "Catechism for the use of the people called Methodists" states that, "[in the Eucharist] Jesus Christ is present with his worshipping people and gives himself to them as their Lord and Saviour". Methodist theology of this sacrament is reflected in one of the fathers of the movement, Charles Wesley, who wrote a Eucharistic hymn with the following stanza:
Reflecting Wesleyan covenant theology, Methodists also believe that the Lord's Supper is a sign and seal of the covenant of grace.
In many Methodist denominations, non-alcoholic wine (grape juice) is used, so as to include those who do not take alcohol for any reason, as well as a commitment to the Church's historical support of temperance. Variations of the Eucharistic Prayer are provided for various occasions, including communion of the sick and brief forms for occasions that call for greater brevity. Though the ritual is standardized, there is great variation amongst Methodist churches, from typically high-church to low-church, in the enactment and style of celebration. Methodist clergy are not required to be vested when celebrating the Eucharist.
John Wesley, a founder of Methodism, said that it was the duty of Christians to receive the sacrament as often as possible. Methodists in the United States are encouraged to celebrate the Eucharist every Sunday, though it is typically celebrated on the first Sunday of each month, while a few congregations celebrate it only quarterly (a tradition dating back to the days of circuit riders who served multiple churches). Communicants may receive standing, kneeling, or while seated. Gaining wider acceptance is the practice of receiving by intinction (receiving a piece of consecrated bread or wafer, dipping it in the blessed wine, and consuming it). The most common alternative to intinction is for the communicants to receive the consecrated juice using small, individual, specially made glass or plastic cups known as communion cups. The United Methodist Church practices open communion, inviting "all who intend a Christian life, together with their children" to receive Communion.
Many non-denominational Christians, including the Churches of Christ, receive communion every Sunday. Others, including Evangelical churches such as the Church of God, Calvary Chapel, and many forms of Baptist, typically receive communion on a monthly or periodic basis. Many non-denominational Christians hold to the Biblical autonomy of local churches and have no universal requirement among congregations.
Some Churches of Christ, among others, use grape juice and unleavened wafers or unleavened bread and practice open communion.
Holy Qurbana or Qurbana Qadisha, the "Holy Offering" or "Holy Sacrifice", refers to the Eucharist as celebrated according to the East Syrian and West Syrian traditions of Syriac Christianity. The main Anaphora of the East Syrian tradition is the Holy Qurbana of Addai and Mari, while that of the West Syrian tradition is the Liturgy of Saint James. Both are extremely old, going back at least to the third century, and are the oldest extant liturgies continually in use.
In the Seventh-day Adventist Church the Holy Communion service customarily is celebrated once per quarter. The service includes the ordinance of footwashing and the Lord's Supper. Unleavened bread and unfermented (non-alcoholic) grape juice are used. Open communion is practised: all who have committed their lives to the Saviour may participate. The communion service must be conducted by an ordained pastor, minister or church elder.
The Christian Congregation of Jehovah's Witnesses commemorates Christ's death as a ransom or propitiatory sacrifice by observing a Memorial annually on the evening that corresponds to the Passover, Nisan 14, according to the ancient Jewish calendar. They refer to this observance generally as "the Lord's Evening Meal" or the "Memorial of Christ's Death", taken from Jesus' words to his Apostles "do this as a memorial of me". (Luke 22:19) They believe that this is the only annual religious observance commanded for Christians in the Bible.
Of those who attend the Memorial, a small minority worldwide partake of the wine and unleavened bread. Jehovah's Witnesses believe that only 144,000 people will receive heavenly salvation and immortal life and thus spend eternity with God and Christ in heaven, with glorified bodies, as under-priests and co-rulers under Christ the King and High Priest, in Jehovah's Kingdom. Paralleling the anointing of kings and priests, they are referred to as the "anointed" class and are the only ones who should partake of the bread and wine. They believe that the baptized "other sheep" of Christ's flock, or the "great crowd", also benefit from the ransom sacrifice memorialized by the Lord's Evening Meal. These attend as respectful observers and viewers of the Lord's Supper remembrance, with the hope of receiving salvation through Christ's atoning sacrifice and of obtaining everlasting life in Paradise restored on a prophesied "New Earth", under Christ as Redeemer and Ruler.
The Memorial, held after sundown, includes a sermon on the meaning and importance of the celebration and gathering, and includes the circulation and viewing among the audience of unadulterated red wine and unleavened bread (matzo). Jehovah's Witnesses believe that the bread symbolizes and represents Jesus Christ's perfect body which he gave on behalf of mankind, and that the wine represents his perfect blood which he shed at Calvary to redeem fallen man from inherited sin and death. The wine and the bread (sometimes referred to as "emblems") are viewed as symbolic and commemorative; the Witnesses do not believe in transubstantiation or consubstantiation, and so do not hold that there is a literal presence of flesh and blood in the emblems. Rather, the emblems are sacred symbols and representations, denoting what was used in the first Lord's Supper and figuratively representing the ransom sacrifice of Jesus and sacred realities.
In The Church of Jesus Christ of Latter-day Saints (LDS Church), the "Holy Sacrament of the Lord's Supper", more simply referred to as the Sacrament, is administered every Sunday (except General Conference or other special Sunday meeting) in each LDS Ward or branch worldwide at the beginning of Sacrament meeting. The Sacrament, which consists of both ordinary bread and water (rather than wine or grape juice), is prepared by priesthood holders prior to the beginning of the meeting. At the beginning of the Sacrament, priests say specific prayers to bless the bread and water. The Sacrament is passed row-by-row to the congregation by priesthood holders (typically deacons).
The prayer recited for the bread and the water is found in the Book of Mormon and Doctrine and Covenants. The prayer contains the essentials given by Jesus: "Always remember him, and keep his commandments …, that they may always have his Spirit to be with them." (Moroni, 4:3.)
While the Salvation Army does not reject the Eucharistic practices of other churches or deny that their members truly receive grace through this sacrament, it does not practice the sacraments of Communion or baptism. This is because they believe that these are unnecessary for the living of a Christian life, and because in the opinion of Salvation Army founders William and Catherine Booth, the sacrament placed too much stress on outward ritual and too little on inward spiritual conversion.
Emphasizing the inward spiritual experience of their adherents over any outward ritual, Quakers (members of the Religious Society of Friends) generally do not baptize or observe Communion.
Christian denominations differ in their understanding of whether they may celebrate the Eucharist with those with whom they are not in full communion. The apologist Saint Justin Martyr (c. 150) wrote of the Eucharist "of which no one is allowed to partake but the man who believes that the things which we teach are true, and who has been washed with the washing that is for the remission of sins and unto regeneration, and who is so living as Christ has enjoined." This was continued in the practice of dismissing the catechumens (those still undergoing instruction and not yet baptized) before the sacramental part of the liturgy, a custom which has left traces in the expression "Mass of the Catechumens" and in the Byzantine Rite exclamation by the deacon or priest, "The doors! The doors!", just before recitation of the Creed.
Churches such as the Catholic and the Eastern Orthodox Churches practice closed communion under normal circumstances. However, the Catholic Church allows administration of the Eucharist, at their spontaneous request, to properly disposed members of the eastern churches (Eastern Orthodox, Oriental Orthodox and Church of the East) not in full communion with it and of other churches that the Holy See judges to be sacramentally in the same position as these churches; and in grave and pressing need, such as danger of death, it allows the Eucharist to be administered also to individuals who do not belong to these churches but who share the Catholic Church's faith in the reality of the Eucharist and have no access to a minister of their own community. Some Protestant communities exclude non-members from Communion.
The Evangelical Lutheran Church in America (ELCA) practices open communion, provided those who receive are baptized, but the Lutheran Church–Missouri Synod and the Wisconsin Evangelical Lutheran Synod (WELS) practice closed communion, excluding non-members and requiring communicants to have been given catechetical instruction. The Evangelical Lutheran Church in Canada, the Evangelical Church in Germany, the Church of Sweden, and many other Lutheran churches outside of the US also practice open communion.
Some use the term "close communion" for restriction to members of the same denomination, and "closed communion" for restriction to members of the local congregation alone.
Most Protestant communities, including Congregational churches, the Church of the Nazarene, the Assemblies of God, Methodists, most Presbyterians and Baptists, Anglicans, and Churches of Christ and other non-denominational churches, practice various forms of open communion. Some churches do not limit it to members of the congregation but open it to any person in attendance (regardless of Christian affiliation) who considers himself or herself to be a Christian. Others require that the communicant be a baptized person, or a member of a church of that denomination or a denomination of "like faith and practice". Some Progressive Christian congregations offer communion to any individual who wishes to commemorate the life and teachings of Christ, regardless of religious affiliation.
In the Episcopal Church (United States), those who do not receive Holy Communion may enter the communion line with their arms crossed over their chest, in order to receive a blessing from the priest, instead of receiving Holy Communion. As a matter of local convention, this practice can also be found in Catholic churches in the United States for Catholics who find themselves, for whatever reason, not in a position to receive the Eucharist itself, as well as for non-Catholics, who are not permitted to receive it.
Most Latter-Day Saint churches practice closed communion; one notable exception is the Community of Christ, the second-largest denomination in this movement. While The Church of Jesus Christ of Latter-day Saints (the largest of the LDS denominations) technically practice a closed communion, their official direction to local Church leaders (in Handbook 2, section 20.4.1, last paragraph) is as follows: "Although the sacrament is for Church members, the bishopric should not announce that it will be passed to members only, and nothing should be done to prevent nonmembers from partaking of it."
The Catholic Church requires its members to receive the sacrament of Penance or Reconciliation before taking Communion if they are aware of having committed a mortal sin and to prepare by fasting, prayer, and other works of piety.
Traditionally, the Eastern Orthodox church has required its members to have observed all church-appointed fasts (most weeks, this will be at least Wednesday and Friday) for the week prior to partaking of communion, and to fast from all food and water from midnight the night before. In addition, Orthodox Christians are to have made a recent confession to their priest (the frequency varying with one's particular priest), and they must be at peace with all others, meaning that they hold no grudges or anger against anyone. In addition, one is expected to attend Vespers or the All-Night Vigil, if offered, on the night before receiving communion. Furthermore, various pre-communion prayers have been composed, which many (but not all) Orthodox churches require or at least strongly encourage members to say privately before coming to the Eucharist.
Many Protestant congregations generally reserve a period of time for self-examination and private, silent confession just before partaking in the Lord's Supper.
Seventh-day Adventists, Mennonites, and some other groups participate in "foot washing" (cf. John 13) as a preparation for partaking in the Lord's Supper. At that time they are to individually examine themselves and confess any sins they may have against one another.
Eucharistic adoration is a practice in the Roman Catholic, Anglo-Catholic and some Lutheran traditions, in which the Blessed Sacrament is exposed to and adored by the faithful. When this exposure and adoration is constant (twenty-four hours a day), it is called "Perpetual Adoration". In a parish, this is usually done by volunteer parishioners; in a monastery or convent, it is done by the resident monks or nuns. In the "Exposition of the Blessed Sacrament", the Eucharist is displayed in a monstrance, typically placed on an altar, at times with a light focused on it, or with candles flanking it.
The gluten in wheat bread is dangerous to people with celiac disease and other gluten-related disorders, such as non-celiac gluten sensitivity and wheat allergy. For the Catholic Church, this issue was addressed in the 24 July 2003 letter of the Congregation for the Doctrine of the Faith, which summarized and clarified earlier declarations. The Catholic Church believes that the matter for the Eucharist must be wheaten bread and fermented wine from grapes: it holds that, if the gluten has been entirely removed, the result is not true wheaten bread. For celiacs, but not generally, it allows low-gluten bread. It also permits Holy Communion to be received under the form of either bread or wine alone, except by a priest who is celebrating Mass without other priests or as principal celebrant. Many Protestant churches offer communicants gluten-free alternatives to wheaten bread, usually in the form of a rice-based cracker or gluten-free bread.
The Catholic Church believes that grape juice that has not begun even minimally to ferment cannot be accepted as wine, which it sees as essential for celebration of the Eucharist. For non-alcoholics, but not generally, it allows the use of mustum (grape juice in which fermentation has begun but has been suspended without altering the nature of the juice), and it holds that "since Christ is sacramentally present under each of the species, communion under the species of bread alone makes it possible to receive all the fruit of Eucharistic grace. For pastoral reasons, this manner of receiving communion has been legitimately established as the most common form in the Latin rite."
As already indicated, the one exception is in the case of a priest celebrating Mass without other priests or as principal celebrant. The water that in the Roman Rite is prescribed to be mixed with the wine must be only a relatively small quantity. The practice of the Coptic Church is that the mixture should be two parts wine to one part water.
Many Protestant churches allow clergy and communicants to take mustum instead of wine. In addition to, or in replacement of wine, some churches offer grape juice which has been pasteurized to stop the fermentation process the juice naturally undergoes; de-alcoholized wine from which most of the alcohol has been removed (between 0.5% and 2% remains), or water. Exclusive use of unfermented grape juice is common in Baptist churches, the United Methodist Church, Seventh-day Adventists, Christian Churches/Churches of Christ, Churches of Christ, Church of God (Anderson, Indiana), some Lutherans, Assemblies of God, Pentecostals, Evangelicals, the Christian Missionary Alliance, and other American independent Protestant churches.
Risk of infectious disease transmission related to use of a common communion cup exists but it is low. No case of transmission of an infectious disease related to a common communion cup has ever been documented. Experimental studies have demonstrated that infectious diseases can be transmitted. The most likely diseases to be transmitted would be common viral illnesses such as the common cold. A study of 681 individuals found that taking communion up to daily from a common cup did not increase the risk of infection beyond that of those who did not attend services at all.
In influenza epidemics, some churches suspend the giving of wine at communion, for fear of spreading the disease. This is in full accord with Catholic Church belief that communion under the form of bread alone makes it possible to receive all the fruit of Eucharistic grace. However, the same measure has also been taken by churches that normally insist on the importance of receiving communion under both forms. This was done in 2009 by the Church of England.
Some fear contagion through the handling involved in distributing the hosts to the communicants, even if they are placed on the hand rather than on the tongue. Accordingly, some churches use mechanical wafer dispensers or "pillow packs" (communion wafers with wine inside them). While these methods of distributing communion are not generally accepted in Catholic parishes, one parish provides a mechanical dispenser that allows those intending to commune to place the hosts to be used in the celebration into a bowl without touching them by hand.
Eclipse
An eclipse is an astronomical event that occurs when an astronomical object or spacecraft is temporarily obscured, by passing into the shadow of another body or by having another body pass between it and the viewer. This alignment of three celestial objects is known as a syzygy. Apart from syzygy, the term eclipse is also used when a spacecraft reaches a position where it can observe two celestial bodies so aligned. An eclipse is the result of either an occultation (completely hidden) or a transit (partially hidden).
The term eclipse is most often used to describe either a solar eclipse, when the Moon's shadow crosses the Earth's surface, or a lunar eclipse, when the Moon moves into the Earth's shadow. However, it can also refer to such events beyond the Earth–Moon system: for example, a planet moving into the shadow cast by one of its moons, a moon passing into the shadow cast by its host planet, or a moon passing into the shadow of another moon. A binary star system can also produce eclipses if the plane of the orbit of its constituent stars intersects the observer's position.
For the special cases of solar and lunar eclipses, these only happen during an "eclipse season", the two times of each year when the plane of the Earth's orbit around the Sun crosses the plane of the Moon's orbit around the Earth. The type of solar eclipse that happens during each season (whether total, annular, hybrid, or partial) depends on the apparent sizes of the Sun and Moon. If the orbit of the Earth around the Sun and the Moon's orbit around the Earth were both in the same plane, then eclipses would happen every month. There would be a lunar eclipse at every full moon, and a solar eclipse at every new moon. And if both orbits were perfectly circular, then each solar eclipse would be the same type every month. It is because of the non-planar and non-circular differences that eclipses are not a common event. Lunar eclipses can be viewed from the entire nightside half of the Earth. But solar eclipses, particularly total eclipses occurring at any one particular point on the Earth's surface, are very rare events that can be many decades apart.
The term is derived from the ancient Greek noun ἔκλειψις ("ékleipsis"), which means "the abandonment", "the downfall", or "the darkening of a heavenly body", which is derived from the verb ἐκλείπω ("ekleípō"), meaning "to abandon", "to darken", or "to cease to exist", a combination of the prefix ἐκ- ("ek-"), from the preposition ἐκ ("ek"), "out", and of the verb λείπω ("leípō"), "to be absent".
For any two objects in space, a line can be extended from the first through the second. The latter object will block some amount of light being emitted by the former, creating a region of shadow around the axis of the line. Typically these objects are moving with respect to each other and their surroundings, so the resulting shadow will sweep through a region of space, only passing through any particular location in the region for a fixed interval of time. As viewed from such a location, this shadowing event is known as an eclipse.
Typically the cross-sections of the objects involved in an astronomical eclipse are roughly disk shaped. The region of an object's shadow during an eclipse is divided into three parts: the umbra, within which the light source is completely blocked; the penumbra, within which the source is only partially blocked; and the antumbra, the region beyond the tip of the umbra, within which the occluding body appears entirely contained within the disc of the light source.
A total eclipse occurs when the observer is within the umbra, an annular eclipse when the observer is within the antumbra, and a partial eclipse when the observer is within the penumbra. During a lunar eclipse only the umbra and penumbra are applicable. This is because Earth's apparent diameter from the viewpoint of the Moon is nearly four times that of the Sun. The same terms may be used analogously in describing other eclipses, e.g., the antumbra of Deimos crossing Mars, or Phobos entering Mars's penumbra.
The "first contact" occurs when the eclipsing object's disc first starts to impinge on the light source; "second contact" is when the disc moves completely within the light source; "third contact" when it starts to move out of the light; and "fourth" or "last contact" when it finally leaves the light source's disc entirely.
For spherical bodies, when the occulting object is smaller than the star, the length ("L") of the umbra's cone-shaped shadow is given by:

L = (r · Ro) / (Rs − Ro)

where "Rs" is the radius of the star, "Ro" is the occulting object's radius, and "r" is the distance from the star to the occulting object. For Earth, on average "L" is equal to 1.384×10⁶ km, which is much larger than the Moon's semimajor axis of 3.844×10⁵ km. Hence the umbral cone of the Earth can completely envelop the Moon during a lunar eclipse. If the occulting object has an atmosphere, however, some of the luminosity of the star can be refracted into the volume of the umbra. This occurs, for example, during an eclipse of the Moon by the Earth—producing a faint, ruddy illumination of the Moon even at totality.
On Earth, the shadow cast during an eclipse moves at very roughly 1 km per second. The exact speed depends on the location of the shadow on the Earth and the angle at which it is moving.
An eclipse cycle takes place when eclipses in a series are separated by a certain interval of time. This happens when the orbital motions of the bodies form repeating harmonic patterns. A particular instance is the saros, which results in a repetition of a solar or lunar eclipse every 6,585.3 days, or a little over 18 years. Because this is not a whole number of days, successive eclipses will be visible from different parts of the world.
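Because the saros is a fixed interval, stepping a known eclipse date forward by 6,585.3 days gives the approximate date of the next eclipse in the same series. The Python sketch below illustrates this; the 1999 and 2017 eclipses are real members of one saros series, but the function is a toy predictor under that single assumption, not an ephemeris:

```python
# A toy saros stepper, assuming only the fixed 6,585.3-day interval quoted
# above; real eclipse prediction needs full ephemerides.
from datetime import datetime, timedelta

SAROS_DAYS = 6585.3  # about 18 years and 11 days

def next_in_saros(eclipse: datetime, steps: int = 1) -> datetime:
    """Approximate date of a later eclipse in the same saros series."""
    return eclipse + timedelta(days=SAROS_DAYS * steps)

# The total solar eclipse of 11 August 1999 was followed, one saros later,
# by the eclipse of 21 August 2017. The extra ~0.3 day shifts visibility
# about 120 degrees of longitude westward.
print(next_in_saros(datetime(1999, 8, 11)))  # 2017-08-21 07:12:00
```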
An eclipse involving the Sun, Earth, and Moon can occur only when they are nearly in a straight line, allowing one to be hidden behind another, viewed from the third. Because the orbital plane of the Moon is tilted with respect to the orbital plane of the Earth (the ecliptic), eclipses can occur only when the Moon is close to the intersection of these two planes (the nodes). The Sun, Earth and nodes are aligned twice a year (during an eclipse season), and eclipses can occur during a period of about two months around these times. There can be from four to seven eclipses in a calendar year, which repeat according to various eclipse cycles, such as a saros.
Between 1901 and 2100 the maximum of seven eclipses in a single calendar year occurs only in a handful of years; excluding penumbral lunar eclipses, a maximum of seven eclipses likewise occurs only in a few years of that period.
As observed from the Earth, a solar eclipse occurs when the Moon passes in front of the Sun. The type of solar eclipse event depends on the distance of the Moon from the Earth during the event. A total solar eclipse occurs when the Earth intersects the umbra portion of the Moon's shadow. When the umbra does not reach the surface of the Earth, the Sun is only partially occulted, resulting in an annular eclipse. Partial solar eclipses occur when the viewer is inside the penumbra.
The eclipse magnitude is the fraction of the Sun's diameter that is covered by the Moon. For a total eclipse, this value is always greater than or equal to one. In both annular and total eclipses, the eclipse magnitude is the ratio of the angular sizes of the Moon to the Sun.
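Since the magnitude is just a ratio of apparent diameters, it can be estimated with the small-angle approximation (angular size ≈ physical diameter / distance). A minimal Python sketch follows; the distances are rounded values for the Moon near perigee and the Sun near perihelion, chosen only to illustrate a magnitude slightly above one:

```python
# A minimal sketch of eclipse magnitude via the small-angle approximation
# (angular size ~ diameter / distance). Rounded illustrative values, not
# ephemeris data.

def angular_size(diameter_km: float, distance_km: float) -> float:
    return diameter_km / distance_km  # radians, small-angle approximation

moon = angular_size(3_474, 363_300)     # Moon near perigee
sun = angular_size(1_392_700, 147.1e6)  # Sun near perihelion

print(f"magnitude = {moon / sun:.3f}")  # ~1.010: > 1, so totality is possible
```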
Solar eclipses are relatively brief events that can only be viewed in totality along a relatively narrow track. Under the most favorable circumstances, a total solar eclipse can last for 7 minutes, 31 seconds, and can be viewed along a track that is up to 250 km wide. However, the region where a partial eclipse can be observed is much larger. The Moon's umbra will advance eastward at a rate of 1,700 km/h, until it no longer intersects the Earth's surface.
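These figures are mutually consistent, as a rough back-of-the-envelope check shows: an umbral spot up to about 250 km across sweeping the ground at about 1,700 km/h can cover a fixed observer for only a few minutes. The sketch below is a crude upper bound, not an exact model:

```python
# A crude consistency check, assuming a ~250 km umbral spot moving at
# ~1,700 km/h; it ignores the spot's exact shape and the observer's own
# motion with the Earth's rotation.

track_width_km = 250
shadow_speed_km_h = 1700

max_minutes = track_width_km / shadow_speed_km_h * 60
print(f"upper bound: ~{max_minutes:.0f} minutes")  # ~9, same order as 7 min 31 s
```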
During a solar eclipse, the Moon can sometimes perfectly cover the Sun because its apparent size is nearly the same as the Sun's when viewed from the Earth. A total solar eclipse is in fact an occultation while an annular solar eclipse is a transit.
When observed at points in space other than from the Earth's surface, the Sun can be eclipsed by bodies other than the Moon. Two examples include the crew of Apollo 12 observing the Earth eclipse the Sun in 1969 and the "Cassini" probe observing Saturn eclipse the Sun in 2006.
Lunar eclipses occur when the Moon passes through the Earth's shadow. This happens only during a full moon, when the Moon is on the far side of the Earth from the Sun. Unlike a solar eclipse, an eclipse of the Moon can be observed from nearly an entire hemisphere. For this reason it is much more common to observe a lunar eclipse from a given location. A lunar eclipse lasts longer, taking several hours to complete, with totality itself usually averaging anywhere from about 30 minutes to over an hour.
There are three types of lunar eclipses: penumbral, when the Moon crosses only the Earth's penumbra; partial, when the Moon crosses partially into the Earth's umbra; and total, when the Moon crosses entirely into the Earth's umbra. Total lunar eclipses pass through all three phases. Even during a total lunar eclipse, however, the Moon is not completely dark. Sunlight refracted through the Earth's atmosphere enters the umbra and provides a faint illumination. Much as in a sunset, the atmosphere tends to more strongly scatter light with shorter wavelengths, so the illumination of the Moon by refracted light has a red hue, thus the phrase 'Blood Moon' is often found in descriptions of such lunar events as far back as eclipses are recorded.
Records of solar eclipses have been kept since ancient times. Eclipse dates can be used for chronological dating of historical records. A Syrian clay tablet, in the Ugaritic language, records a solar eclipse which occurred on March 5, 1223 B.C., while Paul Griffin argues that a stone in Ireland records an eclipse on November 30, 3340 B.C. Positing classical-era astronomers' use of Babylonian eclipse records mostly from the 13th century BC provides a feasible and mathematically consistent explanation for the Greek finding all three lunar mean motions (synodic, anomalistic, draconitic) to a precision of about one part in a million or better. Chinese historical records of solar eclipses date back over 3,000 years and have been used to measure changes in the Earth's rate of spin.
By the 1600s, European astronomers were publishing books with diagrams explaining how lunar and solar eclipses occurred. In order to disseminate this information to a broader audience and decrease fear of the consequences of eclipses, booksellers printed broadsides explaining the event either using the science or via astrology.
The gas giant planets have many moons and thus frequently display eclipses. The most striking involve Jupiter, which has four large moons and a low axial tilt, making eclipses more frequent as these bodies pass through the shadow of the larger planet. Transits occur with equal frequency. It is common to see the larger moons casting circular shadows upon Jupiter's cloudtops.
The eclipses of the Galilean moons by Jupiter became accurately predictable once their orbital elements were known. During the 1670s, it was discovered that these events were occurring about 17 minutes later than expected when Jupiter was on the far side of the Sun. Ole Rømer deduced that the delay was caused by the time needed for light to travel from Jupiter to the Earth. This was used to produce the first estimate of the speed of light.
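The deduction reduces to a single division: if the extra 17 minutes is the time light needs to cross the diameter of the Earth's orbit (about two astronomical units), the speed of light follows. A minimal sketch of the arithmetic, using the modern value of the astronomical unit (which Rømer himself did not have):

    AU_IN_METRES = 1.496e11            # modern value of the astronomical unit
    extra_path = 2 * AU_IN_METRES      # Jupiter on the far side of the Sun
    delay_seconds = 17 * 60            # the observed ~17-minute delay
    print(extra_path / delay_seconds)  # ~2.9e8 m/s, close to the true 3.0e8 m/s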
On the other three gas giants (Saturn, Uranus and Neptune), eclipses occur only at certain periods during the planet's orbit, owing to the larger angle between the orbital planes of their moons and the orbital plane of the planet. The moon Titan, for example, has an orbital plane tilted about 1.6° to Saturn's equatorial plane, while Saturn has an axial tilt of nearly 27°. The orbital plane of Titan therefore crosses the line of sight to the Sun at only two points along Saturn's orbit, and since the orbital period of Saturn is 29.7 years, an eclipse is possible only about every 15 years.
The timing of the Jovian satellite eclipses was also used to calculate an observer's longitude upon the Earth. By knowing the expected time when an eclipse would be observed at a standard longitude (such as Greenwich), the time difference could be computed by accurately observing the local time of the eclipse. The time difference gives the longitude of the observer, because each hour of difference corresponds to 15° of rotation around the Earth's axis. This technique was used, for example, by Giovanni D. Cassini in 1679 to re-map France.
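A minimal sketch of that arithmetic (the times below are hypothetical, chosen only to illustrate the 15-degrees-per-hour conversion):

    def longitude_from_eclipse(local_time_h, greenwich_time_h):
        """Observer longitude in degrees (east positive) from the local and
        predicted Greenwich times, in hours, of the same satellite eclipse."""
        return (local_time_h - greenwich_time_h) * 15.0

    # Eclipse predicted for 22:00 at Greenwich, observed at 21:40 local time:
    print(longitude_from_eclipse(21 + 40 / 60, 22.0))  # -5.0, i.e. 5 deg west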
On Mars, only partial solar eclipses (transits) are possible, because neither of its moons is large enough, at their respective orbital radii, to cover the Sun's disc as seen from the surface of the planet. Eclipses of the moons by Mars are not only possible, but commonplace, with hundreds occurring each Earth year. There are also rare occasions when Deimos is eclipsed by Phobos. Martian eclipses have been photographed from both the surface of Mars and from orbit.
Pluto, with its proportionately largest moon Charon, is also the site of many eclipses. A series of such mutual eclipses occurred between 1985 and 1990. These daily events led to the first accurate measurements of the physical parameters of both objects.
Eclipses are impossible on Mercury and Venus, which have no moons. However, both have been observed to transit across the face of the Sun. There are on average 13 transits of Mercury each century, making them far more common than transits of Venus, which occur in pairs separated by an interval of eight years, with each pair of events happening less than once a century. According to NASA, the next pair of Venus transits will occur on December 10, 2117 and December 8, 2125.
A binary star system consists of two stars that orbit around their common centre of mass. The movements of both stars lie on a common orbital plane in space. When this plane is very closely aligned with the location of an observer, the stars can be seen to pass in front of each other. The result is a type of extrinsic variable star system called an eclipsing binary.
The maximum luminosity of an eclipsing binary system is equal to the sum of the luminosity contributions from the individual stars. When one star passes in front of the other, the luminosity of the system is seen to decrease. The luminosity returns to normal once the two stars are no longer in alignment.
The first eclipsing binary star system to be discovered was Algol, a star system in the constellation Perseus. Normally this star system has a visual magnitude of 2.1. However, every 2.867 days the magnitude decreases to 3.4 for more than nine hours. This is caused by the passage of the dimmer member of the pair in front of the brighter star. The concept that an eclipsing body caused these luminosity variations was introduced by John Goodricke in 1783.
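Using the Algol figures just given, the eclipse depth converts from magnitudes to a relative flux with the standard magnitude relation (a minimal sketch; "flux" here means relative brightness, not a calibrated quantity):

    # Magnitude difference to flux ratio: f_eclipse / f_normal = 10 ** (-0.4 * dm)
    normal_mag, eclipse_mag = 2.1, 3.4
    ratio = 10 ** (-0.4 * (eclipse_mag - normal_mag))
    print(ratio)  # ~0.30: in mid-eclipse the system shows about 30% of normal flux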
Sun - Moon - Earth: Solar eclipse | annular eclipse | hybrid eclipse | partial eclipse
Sun - Earth - Moon: Lunar eclipse | penumbral eclipse | partial lunar eclipse | central lunar eclipse
Sun - Phobos - Mars: Transit of Phobos from Mars | Solar eclipses on Mars
Sun - Deimos - Mars: Transit of Deimos from Mars | Solar eclipses on Mars
Other types: Solar eclipses on Jupiter | Solar eclipses on Saturn | Solar eclipses on Uranus | Solar eclipses on Neptune | Solar eclipses on Pluto | https://en.wikipedia.org/wiki?curid=9770 |
Ed (text editor)
The ed text editor was one of the first three key elements of the Unix operating system—assembler, editor, and shell—developed by Ken Thompson in August 1969 on a PDP-7 at AT&T Bell Labs. Many features of ed came from the qed text editor developed at Thompson's alma mater University of California, Berkeley. Thompson was very familiar with qed, and had reimplemented it on the CTSS and Multics systems. Thompson's versions of qed were notable as the first to implement regular expressions. Regular expressions are also implemented in ed, though their implementation is considerably less general than that in qed.
Dennis M. Ritchie produced what Doug McIlroy later described as the "definitive" ed, and aspects of ed went on to influence ex, which in turn spawned vi. The non-interactive Unix command grep was inspired by a common special use of qed and later ed, where the command g/re/p means globally search for the regular expression re and print the lines containing it. The Unix stream editor, sed implemented many of the scripting features of qed that were not supported by ed on Unix.
Features of ed include its regular expression support and its terse, line-oriented command set.
(In)famous for its terseness, ed gives almost no visual feedback, and has been called (by Peter H. Salus) "the most user-hostile editor ever created", even when compared to the contemporary (and notoriously complex) TECO. For example, the message that ed will produce in case of error, or when it wants to make sure the user wishes to quit without saving, is "?". It does not report the current filename or line number, or even display the results of a change to the text, unless requested. Older versions (c. 1981) did not even ask for confirmation when a quit command was issued without the user saving changes. This terseness was appropriate in the early versions of Unix, when consoles were teletypes, modems were slow, and memory was precious. As computer technology improved and these constraints were loosened, editors with more visual feedback became the norm.
In current practice, ed is rarely used interactively, but does find use in some shell scripts. For interactive use, ed was subsumed by the sam, vi and Emacs editors in the 1980s. ed can be found on virtually every version of Unix and Linux available, and as such is useful for people who have to work with multiple versions of Unix. On Unix-based operating systems, some utilities like SQL*Plus run ed as the editor if the EDITOR and VISUAL environment variables are not defined. If something goes wrong, ed is sometimes the only editor available. This is often the only time when it is used interactively.
In addition, the version of ed provided by GNU has a few switches to enhance the feedback. One of them provides a simple prompt and enables more useful feedback messages; that switch has been defined in POSIX since XPG2 (1987).
The ed commands are often imitated in other line-based editors. For example, EDLIN in early MS-DOS versions and 32-bit versions of Windows NT has a somewhat similar syntax, and text editors in many MUDs (LPMud and descendants, for example) use ed-like syntax. These editors, however, are typically more limited in function.
Here is an example transcript of an ed session. Since typeface distinctions are unavailable here, lines produced by ed are marked below with a trailing "<- output"; everything else is typed by the user.
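The formatted transcript did not survive extraction; the following is a plausible reconstruction, consistent with the command-by-command walkthrough that follows:

    a
    ed is the standard Unix text editor.
    This is line number two.
    .
    2i

    .
    ,l
    ed is the standard Unix text editor.$      <- output
    $                                          <- output
    This is line number two.$                  <- output
    3s/two/three/
    ,l
    ed is the standard Unix text editor.$      <- output
    $                                          <- output
    This is line number three.$                <- output
    w text
    65                                         <- output
    q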
The end result is a simple text file containing the following text:
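As reconstructed above, the file "text" holds 65 characters, matching the count ed reports:

    ed is the standard Unix text editor.

    This is line number three.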
Started with an empty file, the a command appends text (all ed commands are single letters). The command puts ed in "insert mode", inserting the characters that follow; it is terminated by a single dot on a line of its own. The two lines that are entered before the dot end up in the file buffer. The 2i command also goes into insert mode, and will insert the entered text (a single empty line in our case) before line two. All commands may be prefixed by a line number to operate on that line.
In the line ,l, the lowercase L stands for the list command. The command is prefixed by a range, in this case , (a lone comma), which is a shortcut for 1,$. A range is two line numbers separated by a comma ($ means the last line). In return, ed lists all lines, from first to last. These lines are ended with dollar signs, so that white space at the end of lines is clearly visible.
Once the empty line is inserted in line 2, the line which reads "This is line number two." is now actually the third line. This error is corrected with s/two/three/, a substitution command; the prefix 3 applies it to the correct line. Following the command is the text to be replaced, and then the replacement. Listing all lines with ,l shows the line is now correct.
w text writes the buffer to the file "text", making ed respond with 65, the number of characters written to the file. q will end an ed session.
Edlin
Edlin is a line editor, and the only text editor provided with early versions of IBM PC DOS, MS-DOS and OS/2. Although superseded in MS-DOS 5.0 and later by the full-screen MS-DOS Editor, and by Notepad in Microsoft Windows, it continues to be included in the 32-bit versions of current Microsoft operating systems.
Edlin was created by Tim Paterson in two weeks in 1980, for Seattle Computer Products's 86-DOS (QDOS) based on the CP/M line editor "ED" — a distant relative of the UNIX "ed" text editor.
Microsoft acquired 86-DOS and sold it as MS-DOS, so Edlin was included in v1.0–v5.0 of MS-DOS. From MS-DOS 6 onwards, the only editor included was the new full-screen MS-DOS Editor.
Windows 95, 98 and ME ran on top of an embedded version of DOS, which reports itself as MS-DOS 7. As a successor to MS-DOS 6, this did not include Edlin.
However, Edlin is included in the 32-bit versions of Windows NT and its derivatives—up to and including Windows 10—because the NTVDM's DOS support in those operating systems is based on MS-DOS version 5.0. Unlike most other external DOS commands, though, it has not been transformed into a native Win32 program. It also does not support long filenames, which were not added to MS-DOS and MS-Windows until long after Edlin was written.
The FreeDOS version was developed by Gregory Pietsch.
There are only a few commands. The short list can be found by entering a ? at the edlin prompt.
When a file is open, typing L lists the contents (e.g., 1,6L lists lines 1 through 6). Each line is displayed with a line number in front of it.
The currently selected line has a *. To replace the contents of any line, the line number is entered and any text entered replaces the original. While editing a line, pressing Ctrl-C cancels any changes. The * marker remains on that line.
Entering I (optionally preceded with a line number) inserts one or more lines before the * line or the line given. When finished entering lines, Ctrl-C returns to the edlin command prompt.
Edlin may be used as a non-interactive file editor in scripts by redirecting a series of edlin commands.
edlin < script
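For example, a hypothetical script file (here called "script") that deletes the second line of a file and then saves it could contain just two commands, D (delete) and E (end, saving changes), fed to edlin via the redirection shown above:

    2D
    E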
A GPL-licensed clone of Edlin that includes long filename support is available for download as part of the FreeDOS project. This runs on operating systems such as Linux or Unix as well as MS-DOS. | https://en.wikipedia.org/wiki?curid=9772 |
EBCDIC
Extended Binary Coded Decimal Interchange Code (EBCDIC) is an eight-bit character encoding used mainly on IBM mainframe and IBM midrange computer operating systems. It descended from the code used with punched cards and the corresponding six-bit binary-coded decimal code used with most of IBM's computer peripherals of the late 1950s and early 1960s. It is supported by various non-IBM platforms, such as Fujitsu-Siemens' BS2000/OSD, OS-IV, MSP, and MSP-EX, the SDS Sigma series, Unisys VS/9, Burroughs MCP and ICL VME.
EBCDIC was devised in 1963 and 1964 by IBM and was announced with the release of the IBM System/360 line of mainframe computers. It is an eight-bit character encoding, developed separately from the seven-bit ASCII encoding scheme. It was created to extend the existing Binary-Coded Decimal (BCD) Interchange Code, or BCDIC, which itself was devised as an efficient means of encoding the two "zone" and "number" punches on punched cards into six bits. The distinct encoding of 's' and 'S' (using position 2 instead of 1) was maintained from punched cards where it was desirable not to have hole punches too close to each other to ensure the integrity of the physical card.
While IBM was a chief proponent of the ASCII standardization committee, the company did not have time to prepare ASCII peripherals (such as card punch machines) to ship with its System/360 computers, so the company settled on EBCDIC. The System/360 became wildly successful, together with clones such as RCA Spectra 70, ICL System 4, and Fujitsu FACOM, thus so did EBCDIC.
All IBM mainframe and midrange peripherals and operating systems use EBCDIC as their inherent encoding (with toleration for ASCII, for example, ISPF in z/OS can browse and edit both EBCDIC and ASCII encoded files). Software and many hardware peripherals can translate to and from encodings, and modern mainframes (such as IBM Z) include processor instructions, at the hardware level, to accelerate translation between character sets.
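As a small illustration of such translation (a sketch using Python's standard library, which ships a codec for EBCDIC code page 037; this is unrelated to the hardware-accelerated instructions mentioned above):

    # Round-trip text between Unicode/ASCII and EBCDIC code page 037.
    text = "IBM System/360"
    ebcdic_bytes = text.encode("cp037")    # encode into EBCDIC
    print(ebcdic_bytes.hex())              # byte values differ from ASCII
    print(ebcdic_bytes.decode("cp037"))    # decodes back to "IBM System/360"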
There is an EBCDIC-oriented Unicode Transformation Format called UTF-EBCDIC proposed by the Unicode consortium, designed to allow easy updating of EBCDIC software to handle Unicode, but not intended to be used in open interchange environments. Even on systems with extensive EBCDIC support, it has not been popular. For example, z/OS supports Unicode (preferring UTF-16 specifically), but z/OS only has limited support for UTF-EBCDIC.
IBM AIX running on the RS/6000 and its descendants including the IBM Power Systems, Linux running on IBM Z, and operating systems running on the IBM PC and its descendants use ASCII, as did AIX/370 and AIX/390 running on System/370 and System/390 mainframes.
There were numerous difficulties in writing software that would work in both ASCII and EBCDIC.
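One classic pitfall can be demonstrated directly (again using Python's cp037 codec): code that assumes the letters occupy consecutive values, as they do in ASCII, breaks under EBCDIC, where the alphabet is split into three non-adjacent runs.

    import string

    # Map each lowercase letter to its EBCDIC (code page 037) byte value.
    ebcdic = {c: c.encode("cp037")[0] for c in string.ascii_lowercase}
    print(ord("j") - ord("i"))        # 1 in ASCII
    print(ebcdic["j"] - ebcdic["i"])  # 8 in EBCDIC: 0x91 - 0x89
    # Hence a test like ord('a') <= x <= ord('z') matches only letters in
    # ASCII, but the corresponding EBCDIC range includes non-letter codes.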
There are hundreds of EBCDIC code pages based on the original EBCDIC character encoding; there are a variety of EBCDIC code pages intended for use in different parts of the world, including code pages for non-Latin scripts such as Chinese, Japanese (e.g., EBCDIC 930, JEF, and KEIS), Korean, and Greek (EBCDIC 875). There is also a huge number of variations with the letters swapped around for no discernible reason.
The table below shows the "invariant subset" of EBCDIC, which are characters that "should" have the same assignments on all EBCDIC code pages. It also shows (in gray) missing ASCII and EBCDIC punctuation, located where they are in code page 037 (one of the code page variants of EBCDIC). Unassigned codes are typically filled with international or region-specific characters in the various EBCDIC code page variants, but the characters in gray are often moved around or swapped as well. In each cell the first row is an abbreviation for a control code or the character itself; and the second row is the Unicode code (blank for controls that don't exist in Unicode).
Following are the definitions of EBCDIC control characters which either don't map onto the ASCII control characters, or have additional uses. When mapped to Unicode, these are mostly mapped to C1 control character codepoints in a manner specified by IBM's Character Data Representation Architecture (CDRA).
Although the default mapping of New Line (NL) corresponds to the ISO/IEC 6429 Next Line (NEL) character (the behaviour of which is also specified, but not required, in Unicode Annex 14), most of these C1-mapped controls match neither those in the ISO/IEC 6429 C1 set, nor those in other registered C1 control sets such as ISO 6630. Although this effectively makes the non-ASCII EBCDIC controls a unique C1 control set, they are not among the C1 control sets registered in the ISO-IR registry, meaning that they do not have an assigned control set designation sequence (as specified by ISO/IEC 2022, and optionally permitted in ISO/IEC 10646 (Unicode)).
Besides U+0085 (Next Line), the Unicode Standard does not prescribe an interpretation of C1 control characters, leaving their interpretation to higher level protocols (it suggests, but does not require, their ISO/IEC 6429 interpretations in the absence of use for other purposes), so this mapping is permissible in, but not specified by, Unicode.
The following code pages have the full Latin-1 character set (ISO/IEC 8859-1). The first column gives the original code page number. The second column gives the number of the code page updated with the euro sign (€) replacing the universal currency sign (¤) (or in the case of EBCDIC 924, with the set changed to match ISO 8859-15).
Open-source software advocate and software developer Eric S. Raymond writes in his "Jargon File" that EBCDIC was loathed by hackers, by which he meant members of a subculture of enthusiastic programmers. The Jargon File 4.4.7 gives the following definition:
EBCDIC design was also the source of many jokes. One such joke went:
References to the EBCDIC character set are made in the classic Infocom adventure game series "Zork". In the "Machine Room" in "Zork II", EBCDIC is used to imply an incomprehensible language: | https://en.wikipedia.org/wiki?curid=9773 |
Endoplasmic reticulum
The endoplasmic reticulum (ER) is a type of organelle made up of two subunits – rough endoplasmic reticulum (RER) and smooth endoplasmic reticulum (SER). The endoplasmic reticulum is found in most eukaryotic cells and forms an interconnected network of flattened, membrane-enclosed sacs known as cisternae (in the RER), and tubular structures in the SER. The membranes of the ER are continuous with the outer nuclear membrane. The endoplasmic reticulum is not found in red blood cells or spermatozoa.
The two types of ER share many of the same proteins and engage in certain common activities such as the synthesis of certain lipids and cholesterol. Different types of cells contain different ratios of the two types of ER depending on the activities of the cell.
The outer (cytosolic) face of the rough endoplasmic reticulum is studded with ribosomes that are the sites of protein synthesis. The rough endoplasmic reticulum is especially prominent in cells such as hepatocytes. The smooth endoplasmic reticulum lacks ribosomes and functions in lipid synthesis but not metabolism, the production of steroid hormones, and detoxification. The smooth endoplasmic reticulum is especially abundant in mammalian liver and gonad cells.
The ER was observed with the light microscope by Garnier in 1897, who coined the term "ergastoplasm". With electron microscopy, the lacy membranes of the endoplasmic reticulum were first seen in 1945 by Keith R. Porter, Albert Claude, and Ernest F. Fullam. Later, the word "reticulum", which means "network", was applied by Porter in 1953 to describe this fabric of membranes.
The general structure of the endoplasmic reticulum is a network of membranes called cisternae. These sac-like structures are held together by the cytoskeleton. The phospholipid membrane encloses the cisternal space (or lumen), which is continuous with the perinuclear space but separate from the cytosol. The functions of the endoplasmic reticulum can be summarized as the synthesis and export of proteins and membrane lipids, but these functions vary with ER type, cell type, and cell function. The quantity of both rough and smooth endoplasmic reticulum in a cell can slowly interchange from one type to the other, depending on the changing metabolic activities of the cell. Transformation can include embedding of new proteins in membrane as well as structural changes. Changes in protein content may occur without noticeable structural changes.
The surface of the rough endoplasmic reticulum (often abbreviated "RER" or "rough ER"; also called "granular endoplasmic reticulum") is studded with protein-manufacturing ribosomes giving it a "rough" appearance (hence its name). The binding site of the ribosome on the rough endoplasmic reticulum is the translocon. However, the ribosomes are not a stable part of this organelle's structure as they are constantly being bound and released from the membrane. A ribosome only binds to the RER once a specific protein-nucleic acid complex forms in the cytosol. This special complex forms when a free ribosome begins translating the mRNA of a protein destined for the secretory pathway. The first 5–30 amino acids polymerized encode a signal peptide, a molecular message that is recognized and bound by a signal recognition particle (SRP). Translation pauses and the ribosome complex binds to the RER translocon where translation continues with the nascent (new) protein forming into the RER lumen and/or membrane. The protein is processed in the ER lumen by an enzyme (a signal peptidase), which removes the signal peptide. Ribosomes at this point may be released back into the cytosol; however, non-translating ribosomes are also known to stay associated with translocons.
The membrane of the rough endoplasmic reticulum forms large double-membrane sheets that are located near, and continuous with, the outer layer of the nuclear envelope. The double membrane sheets are stacked and connected through several right- or left-handed helical ramps, the "Terasaki ramps", giving rise to a structure resembling a multi-story car park. Although there is no continuous membrane between the endoplasmic reticulum and the Golgi apparatus, membrane-bound transport vesicles shuttle proteins between these two compartments. Vesicles are surrounded by coating proteins called COPI and COPII. COPII targets vesicles to the Golgi apparatus and COPI marks them to be brought back to the rough endoplasmic reticulum. The rough endoplasmic reticulum works in concert with the Golgi complex to target new proteins to their proper destinations. The second method of transport out of the endoplasmic reticulum involves areas called membrane contact sites, where the membranes of the endoplasmic reticulum and other organelles are held closely together, allowing the transfer of lipids and other small molecules.
The rough endoplasmic reticulum is key in multiple functions:
In most cells the smooth endoplasmic reticulum (abbreviated SER) is scarce. Instead there are areas where the ER is partly smooth and partly rough, this area is called the transitional ER. The transitional ER gets its name because it contains ER exit sites. These are areas where the transport vesicles that contain lipids and proteins made in the ER, detach from the ER and start moving to the Golgi apparatus. Specialized cells can have a lot of smooth endoplasmic reticulum and in these cells the smooth ER has many functions. It synthesizes lipids, phospholipids, and steroids. Cells which secrete these products, such as those in the testes, ovaries, and sebaceous glands have an abundance of smooth endoplasmic reticulum. It also carries out the metabolism of carbohydrates, detoxification of natural metabolism products and of alcohol and drugs, attachment of receptors on cell membrane proteins, and steroid metabolism. In muscle cells, it regulates calcium ion concentration. Smooth endoplasmic reticulum is found in a variety of cell types (both animal and plant), and it serves different functions in each. The smooth endoplasmic reticulum also contains the enzyme glucose-6-phosphatase, which converts glucose-6-phosphate to glucose, a step in gluconeogenesis. It is connected to the nuclear envelope and consists of tubules that are located near the cell periphery. These tubes sometimes branch forming a network that is reticular in appearance. In some cells, there are dilated areas like the sacs of rough endoplasmic reticulum. The network of smooth endoplasmic reticulum allows for an increased surface area to be devoted to the action or storage of key enzymes and the products of these enzymes.
The sarcoplasmic reticulum (SR), from the Greek σάρξ "sarx" ("flesh"), is smooth ER found in myocytes. The only structural difference between this organelle and the smooth endoplasmic reticulum is the medley of proteins they have, both bound to their membranes and drifting within the confines of their lumens. This fundamental difference is indicative of their functions: The endoplasmic reticulum synthesizes molecules, while the sarcoplasmic reticulum stores calcium ions and pumps them out into the sarcoplasm when the muscle fiber is stimulated. After their release from the sarcoplasmic reticulum, calcium ions interact with contractile proteins that utilize ATP to shorten the muscle fiber. The sarcoplasmic reticulum plays a major role in excitation-contraction coupling.
The endoplasmic reticulum serves many general functions, including the folding of protein molecules in sacs called cisternae and the transport of synthesized proteins in vesicles to the Golgi apparatus. Correct folding of newly made proteins is made possible by several endoplasmic reticulum chaperone proteins, including protein disulfide isomerase (PDI), ERp29, the Hsp70 family member BiP/Grp78, calnexin, calreticulin, and the peptidylpropyl isomerase family. Only properly folded proteins are transported from the rough ER to the Golgi apparatus – unfolded proteins cause an unfolded protein response as a stress response in the ER. Disturbances in redox regulation, calcium regulation, glucose deprivation, and viral infection or the over-expression of proteins can lead to endoplasmic reticulum stress response (ER stress), a state in which the folding of proteins slows, leading to an increase in unfolded proteins. This stress is emerging as a potential cause of damage in hypoxia/ischemia, insulin resistance, and other disorders.
Secretory proteins, mostly glycoproteins, are moved across the endoplasmic reticulum membrane. Proteins that are transported by the endoplasmic reticulum throughout the cell are marked with an address tag called a signal sequence. The N-terminus (one end) of a polypeptide chain (i.e., a protein) contains a few amino acids that work as an address tag, which are removed when the polypeptide reaches its destination. Nascent peptides reach the ER via the translocon, a membrane-embedded multiprotein complex. Proteins that are destined for places outside the endoplasmic reticulum are packed into transport vesicles and moved along the cytoskeleton toward their destination. In human fibroblasts, the ER is always co-distributed with microtubules and the depolymerisation of the latter cause its co-aggregation with mitochondria, which are also associated with the ER.
The endoplasmic reticulum is also part of a protein sorting pathway. It is, in essence, the transportation system of the eukaryotic cell. The majority of its resident proteins are retained within it through a retention motif. This motif is composed of four amino acids at the end of the protein sequence. The most common retention sequences are KDEL for lumen-located proteins and KKXX for transmembrane proteins. However, variations of KDEL and KKXX do occur, and other sequences can also give rise to endoplasmic reticulum retention. It is not known whether such variation can lead to sub-ER localizations. There are three KDEL receptors (1, 2 and 3) in mammalian cells, and they have a very high degree of sequence identity. The functional differences between these receptors remain to be established.
The endoplasmic reticulum does not harbor an ATP-regeneration machinery, and therefore requires ATP import from mitochondria. The imported ATP is vital for the ER to carry out its housekeeping cellular functions, such as protein folding and trafficking.
The ER ATP transporter, SLC35B1/AXER, was recently cloned and characterized, and the mitochondria supply ATP to the ER through a "Ca2+-antagonized transport into the ER" ("CaATiER") mechanism. The "CaATiER" mechanism shows sensitivity to cytosolic Ca2+ ranging from high nM to low μM range, with the Ca2+-sensing element yet to be identified and validated.
Abnormalities in XBP1 lead to a heightened endoplasmic reticulum stress response and subsequently causes a higher susceptibility for inflammatory processes that may even contribute to Alzheimer's disease. In the colon, XBP1 anomalies have been linked to the inflammatory bowel diseases including Crohn's disease.
The unfolded protein response (UPR) is a cellular stress response related to the endoplasmic reticulum. The UPR is activated in response to an accumulation of unfolded or misfolded proteins in the lumen of the endoplasmic reticulum. The UPR functions to restore normal function of the cell by halting protein translation, degrading misfolded proteins, and activating the signaling pathways that lead to increasing the production of molecular chaperones involved in protein folding. Sustained overactivation of the UPR has been implicated in prion diseases as well as several other neurodegenerative diseases and the inhibition of the UPR could become a treatment for those diseases. | https://en.wikipedia.org/wiki?curid=9775 |
Executive Order 9066
Executive Order 9066 was a United States presidential executive order signed and issued during World War II by United States president Franklin D. Roosevelt on February 19, 1942. This order authorized the secretary of war to prescribe certain areas as military zones, clearing the way for the incarceration of Japanese Americans, German Americans, and Italian Americans in U.S. concentration camps.
The text of Executive Order 9066 was as follows:
On March 21, 1942, Roosevelt signed Public Law 503 (approved after only an hour of discussion in the Senate and thirty minutes in the House) in order to provide for the enforcement of his executive order. Authored by War Department official Karl Bendetsen — who would later be promoted to Director of the Wartime Civilian Control Administration and oversee the incarceration of Japanese Americans — the law made violations of military orders a misdemeanor punishable by up to $5,000 in fines and one year in prison.
Using a broad interpretation of EO 9066, Lieutenant General John L. DeWitt issued orders declaring certain areas of the western United States as zones of exclusion under the Executive Order. As a result, approximately 112,000 men, women, and children of Japanese ancestry were evicted from the West Coast of the United States and held in American concentration camps and other confinement sites across the country. Japanese Americans in Hawaii were not incarcerated in the same way, despite the attack on Pearl Harbor. Although Japanese Americans made up nearly 40% of Hawaii's population, only a few thousand people were detained there, supporting the eventual finding that the mass removal on the West Coast was motivated by reasons other than "military necessity."
Japanese Americans and other Asians in the U.S. had suffered for decades from prejudice and racially-motivated fear. Laws preventing Asian Americans from owning land, voting, testifying against whites in court, and other racially discriminatory laws existed long before World War II. Additionally, the FBI, Office of Naval Intelligence and Military Intelligence Division had been conducting surveillance on Japanese American communities in Hawaii and the continental U.S. from the early 1930s. In early 1941, President Roosevelt secretly commissioned a study to assess the possibility that Japanese Americans would pose a threat to U.S. security. The report, submitted exactly one month before Pearl Harbor was bombed, found that, "There will be no armed uprising of Japanese" in the United States. "For the most part," the Munson Report said, "the local Japanese are loyal to the United States or, at worst, hope that by remaining quiet they can avoid concentration camps or irresponsible mobs." A second investigation started in 1940, written by Naval Intelligence officer Kenneth Ringle and submitted in January 1942, likewise found no evidence of fifth column activity and urged against mass incarceration. Both were ignored.
Over two-thirds of the people of Japanese ethnicity who were incarcerated — almost 70,000 — were American citizens. Many of the rest had lived in the country between 20 and 40 years. Most Japanese Americans, particularly the first generation born in the United States (the "Nisei"), considered themselves loyal to the United States of America. No Japanese American citizen or Japanese national residing in the United States was ever found guilty of sabotage or espionage.
Americans of Italian and German ancestry were also targeted by these restrictions, including internment. 11,000 people of German ancestry were interned, as were 3,000 people of Italian ancestry, along with some Jewish refugees. The interned Jewish refugees came from Germany, as the U.S. government did not differentiate between ethnic Jews and ethnic Germans (the term "Jewish" was defined as a religious practice, not an ethnicity). Some of the internees of European descent were interned only briefly, while others were held for several years beyond the end of the war. Like the Japanese American incarcerees, these smaller groups had American-born citizens in their numbers, especially among the children. A few members of ethnicities of other Axis countries were interned, but exact numbers are unknown.
There were ten of these concentration camps, called "relocation centers", across the country: two in Arkansas, two in Arizona, two in California, one in Idaho, one in Utah, one in Wyoming, and one in Colorado.
Secretary of War Henry L. Stimson was responsible for assisting relocated people with transport, food, shelter, and other accommodations and delegated Colonel Karl Bendetsen to administer the removal of West Coast Japanese. Over the spring of 1942, General John L. DeWitt issued Western Defense Command orders for Japanese Americans to present themselves for removal. The "evacuees" were taken first to temporary assembly centers, requisitioned fairgrounds and horse racing tracks where living quarters were often converted livestock stalls. As construction on the more permanent and isolated WRA camps was completed, the population was transferred by truck or train. These accommodations consisted of tar paper-walled frame buildings in parts of the country with bitter winters and often hot summers. The camps were guarded by armed soldiers and fenced with barbed wire (security measures not shown in published photographs of the camps). Camps held up to 18,000 people, and were small cities, with medical care, food, and education provided by the government. Adults were offered "camp jobs" with wages of $12 to $19 per month, and many camp services such as medical care and education were provided by the camp inmates themselves.
In December 1944, President Roosevelt suspended Executive Order 9066. Incarcerees were released, often to resettlement facilities and temporary housing, and the camps were shut down by 1946.
In the years after the war, the interned Japanese Americans had to rebuild their lives. United States citizens and long-time residents who had been incarcerated lost their personal liberties; many also lost their homes, businesses, property, and savings. Individuals born in Japan were not allowed to become naturalized US citizens until 1952.
On February 19, 1976, President Gerald Ford signed a proclamation formally terminating Executive Order 9066 and apologizing for the internment, stating: "We now know what we should have known then — not only was that evacuation wrong but Japanese-Americans were and are loyal Americans. On the battlefield and at home the names of Japanese-Americans have been and continue to be written in history for the sacrifices and the contributions they have made to the well-being and to the security of this, our common Nation." In 1980, President Jimmy Carter signed legislation to create the Commission on Wartime Relocation and Internment of Civilians (CWRIC). The CWRIC was appointed to conduct an official governmental study of Executive Order 9066, related wartime orders, and their impact on Japanese Americans in the West and Alaska Natives in the Pribilof Islands.
In December 1982, the CWRIC issued its findings in "Personal Justice Denied", concluding that the incarceration of Japanese Americans had not been justified by military necessity. The report determined that the decision to incarcerate was based on "race prejudice, war hysteria, and a failure of political leadership". The Commission recommended legislative remedies consisting of an official Government apology and redress payments of $20,000 to each of the survivors; a public education fund was set up to help ensure that this would not happen again.
On August 10, 1988, the Civil Liberties Act of 1988, based on the CWRIC recommendations, was signed into law by Ronald Reagan. On November 21, 1989, George H. W. Bush signed an appropriation bill authorizing payments to be paid out between 1990 and 1998. In 1990, surviving internees began to receive individual redress payments and a letter of apology. This bill applied to the Japanese Americans and to members of the Aleut people inhabiting the strategic Aleutian islands in Alaska who were also relocated.
February 19th, the anniversary of the signing of Executive Order 9066, is now the Day of Remembrance, an annual commemoration of the unjust incarceration of the Japanese-American community.
In 2017, the Smithsonian launched an exhibit that contextualizes the document with artwork by Roger Shimomura. | https://en.wikipedia.org/wiki?curid=9778 |
Edvard Munch
Edvard Munch (12 December 1863 – 23 January 1944) was a Norwegian painter. His best known work, "The Scream", has become one of the most iconic images of world art.
His childhood was overshadowed by illness, bereavement and the dread of inheriting a mental condition that ran in the family. Studying at the Royal School of Art and Design in Kristiania (today's Oslo), Munch began to live a bohemian life under the influence of nihilist Hans Jæger, who urged him to paint his own emotional and psychological state ('soul painting'). From this emerged his distinctive style.
Travel brought new influences and outlets. In Paris, he learned much from Paul Gauguin, Vincent van Gogh and Henri de Toulouse-Lautrec, especially their use of colour. In Berlin, he met Swedish dramatist August Strindberg, whom he painted, as he embarked on his major canon "The Frieze of Life", depicting a series of deeply-felt themes such as love, anxiety, jealousy and betrayal, steeped in atmosphere.
"The Scream" was conceived in Kristiania. According to Munch, he was out walking at sunset, when he ‘heard the enormous, infinite scream of nature’. The painting's agonised face is widely identified with the "angst" of the modern person. Between 1893 and 1910, he made two painted versions and two in pastels, as well as a number of prints. One of the pastels would eventually command the fourth highest nominal price paid for a painting at auction.
As his fame and wealth grew, his emotional state remained insecure. He briefly considered marriage, but could not commit himself. A breakdown in 1908 forced him to give up heavy drinking, and he was cheered by his increasing acceptance by the people of Kristiania and exposure in the city’s museums. His later years were spent working in peace and privacy. Although his works were banned in Nazi Germany, most of them survived World War II, securing him a legacy.
Edvard Munch was born in a farmhouse in the village of Ådalsbruk in Løten, Norway, to Laura Catherine Bjølstad and Christian Munch, the son of a priest. Christian was a doctor and medical officer who married Laura, a woman half his age, in 1861. Edvard had an elder sister, Johanne Sophie, and three younger siblings: Peter Andreas, Laura Catherine, and Inger Marie. Laura was artistically talented and may have encouraged Edvard and Sophie. Edvard was related to painter Jacob Munch and to historian Peter Andreas Munch.
The family moved to Christiania (renamed Kristiania in 1877, and now Oslo) in 1864 when Christian Munch was appointed medical officer at Akershus Fortress. Edvard's mother died of tuberculosis in 1868, as did Munch's favorite sister Johanne Sophie in 1877. After their mother's death, the Munch siblings were raised by their father and by their aunt Karen. Often ill for much of the winters and kept out of school, Edvard would draw to keep himself occupied. He was tutored by his school mates and his aunt. Christian Munch also instructed his son in history and literature, and entertained the children with vivid ghost-stories and the tales of American writer Edgar Allan Poe.
As Edvard remembered it, Christian's positive behavior toward his children was overshadowed by his morbid pietism. Munch wrote, "My father was temperamentally nervous and obsessively religious—to the point of psychoneurosis. From him I inherited the seeds of madness. The angels of fear, sorrow, and death stood by my side since the day I was born." Christian reprimanded his children by telling them that their mother was looking down from heaven and grieving over their misbehavior. The oppressive religious milieu, Edvard's poor health, and the vivid ghost stories helped inspire his macabre visions and nightmares; the boy felt that death was constantly advancing on him. One of Munch's younger sisters, Laura, was diagnosed with mental illness at an early age. Of the five siblings, only Andreas married, but he died a few months after the wedding. Munch would later write, "I inherited two of mankind's most frightful enemies—the heritage of consumption and insanity."
Christian Munch's military pay was very low, and his attempts to develop a private side practice failed, keeping his family in genteel but perennial poverty. They moved frequently from one cheap flat to another. Munch's early drawings and watercolors depicted these interiors, and the individual objects, such as medicine bottles and drawing implements, plus some landscapes. By his teens, art dominated Munch's interests. At thirteen, Munch had his first exposure to other artists at the newly formed Art Association, where he admired the work of the Norwegian landscape school. He returned to copy the paintings, and soon he began to paint in oils.
In 1879, Munch enrolled in a technical college to study engineering, where he excelled in physics, chemistry and math. He learned scaled and perspective drawing, but frequent illnesses interrupted his studies. The following year, much to his father's disappointment, Munch left the college determined to become a painter. His father viewed art as an "unholy trade", and his neighbors reacted bitterly and sent him anonymous letters. In contrast to his father's rabid pietism, Munch adopted an undogmatic stance toward art. He wrote his goal in his diary: "in my art I attempt to explain life and its meaning to myself."
In 1881, Munch enrolled at the Royal School of Art and Design of Kristiania, one of whose founders was his distant relative Jacob Munch. His teachers were sculptor Julius Middelthun and the naturalistic painter Christian Krohg. That year, Munch demonstrated his quick absorption of his figure training at the Academy in his first portraits, including one of his father and his first self-portrait. In 1883, Munch took part in his first public exhibition and shared a studio with other students. His full-length portrait of Karl Jensen-Hjell, a notorious bohemian-about-town, earned a critic's dismissive response: "It is impressionism carried to the extreme. It is a travesty of art." Munch's nude paintings from this period survive only in sketches, except for "Standing Nude" (1887). They may have been confiscated by his father.
During these early years, Munch experimented with many styles, including Naturalism and Impressionism. Some early works are reminiscent of Manet. Many of these attempts brought him unfavorable criticism from the press and garnered him constant rebukes by his father, who nonetheless provided him with small sums for living expenses. At one point, however, Munch's father, perhaps swayed by the negative opinion of Munch's cousin Edvard Diriks (an established, traditional painter), destroyed at least one painting (likely a nude) and refused to advance any more money for art supplies.
Munch also received his father's ire for his relationship with Hans Jæger, the local nihilist who lived by the code "a passion to destroy is also a creative passion" and who advocated suicide as the ultimate way to freedom. Munch came under his malevolent, anti-establishment spell. "My ideas developed under the influence of the bohemians or rather under Hans Jæger. Many people have mistakenly claimed that my ideas were formed under the influence of Strindberg and the Germans…but that is wrong. They had already been formed by then." At that time, contrary to many of the other bohemians, Munch was still respectful of women, as well as reserved and well-mannered, but he began to give in to the binge drinking and brawling of his circle. He was unsettled by the sexual revolution going on at the time and by the independent women around him. He later turned cynical concerning sexual matters, expressed not only in his behavior and his art, but in his writings as well, an example being a long poem called "The City of Free Love". Still dependent on his family for many of his meals, Munch's relationship with his father remained tense over concerns about his bohemian life.
After numerous experiments, Munch concluded that the Impressionist idiom did not allow sufficient expression. He found it superficial and too akin to scientific experimentation. He felt a need to go deeper and explore situations brimming with emotional content and expressive energy. Under Jæger's commandment that Munch should "write his life", meaning that Munch should explore his own emotional and psychological state, the young artist began a period of reflection and self-examination, recording his thoughts in his "soul's diary". This deeper perspective helped move him to a new view of his art. He wrote that his painting "The Sick Child" (1886), based on his sister's death, was his first "soul painting", his first break from Impressionism. The painting received a negative response from critics and from his family, and caused another "violent outburst of moral indignation" from the community.
Only his friend Christian Krohg defended him:
He paints, or rather regards, things in a way that is different from that of other artists. He sees only the essential, and that, naturally, is all he paints. For this reason Munch's pictures are as a rule "not complete", as people are so delighted to discover for themselves. Oh, yes, they are complete. His complete handiwork. Art is complete once the artist has really said everything that was on his mind, and this is precisely the advantage Munch has over painters of the other generation, that he really knows how to show us what he has felt, and what has gripped him, and to this he subordinates everything else.
Munch continued to employ a variety of brushstroke techniques and color palettes throughout the 1880s and early 1890s, as he struggled to define his style. His idiom continued to veer between naturalistic, as seen in "Portrait of Hans Jæger", and impressionistic, as in "Rue Lafayette". His "Inger On the Beach" (1889), which caused another storm of confusion and controversy, hints at the simplified forms, heavy outlines, sharp contrasts, and emotional content of his mature style to come. He began to carefully calculate his compositions to create tension and emotion. While stylistically influenced by the Post-Impressionists, what evolved was a subject matter which was symbolist in content, depicting a state of mind rather than an external reality. In 1889, Munch presented his first one-man show of nearly all his works to date. The recognition it received led to a two-year state scholarship to study in Paris under French painter Léon Bonnat.
Munch seems to have been an early critic of photography as an art form, and remarked that it "will never compete with the brush and the palette, until such time as photographs can be taken in Heaven or Hell!"
Munch's younger sister Laura was the subject of his 1899 interior "Melancholy: Laura". Amanda O'Neill says of the work, "In this heated claustrophobic scene Munch not only portrays Laura's tragedy, but his own dread of the madness he might have inherited."
Munch arrived in Paris during the festivities of the Exposition Universelle (1889) and roomed with two fellow Norwegian artists. His picture "Morning" (1884) was displayed at the Norwegian pavilion. He spent his mornings at Bonnat's busy studio (which included live female models) and afternoons at the exhibition, galleries, and museums (where students were expected to make copies as a way of learning technique and observation). Munch recorded little enthusiasm for Bonnat's drawing lessons—"It tires and bores me—it's numbing"—but enjoyed the master's commentary during museum trips.
Munch was enthralled by the vast display of modern European art, including the works of three artists who would prove influential: Paul Gauguin, Vincent van Gogh, and Henri de Toulouse-Lautrec—all notable for how they used color to convey emotion. Munch was particularly inspired by Gauguin's "reaction against realism" and his credo that "art was human work and not an imitation of Nature", a belief earlier stated by Whistler. As one of his Berlin friends said later of Munch, "he need not make his way to Tahiti to see and experience the primitive in human nature. He carries his own Tahiti within him." Influenced by Gauguin, as well as the etchings of German artist Max Klinger, Munch experimented with prints as a medium to create graphic versions of his works. In 1896 he created his first woodcuts—a medium that proved ideal to Munch's symbolic imagery. Together with his contemporary Nikolai Astrup, Munch is considered an innovator of the woodcut medium in Norway.
In December 1889 his father died, leaving Munch's family destitute. He returned home and arranged a large loan from a wealthy Norwegian collector when wealthy relatives failed to help, and assumed financial responsibility for his family from then on. Christian's death depressed him and he was plagued by suicidal thoughts: "I live with the dead—my mother, my sister, my grandfather, my father…Kill yourself and then it's over. Why live?" Munch's paintings of the following year included sketchy tavern scenes and a series of bright cityscapes in which he experimented with the pointillist style of Georges Seurat.
By 1892, Munch formulated his characteristic, and original, Synthetist aesthetic, as seen in "Melancholy" (1891), in which color is the symbol-laden element. Considered by the artist and journalist Christian Krohg as the first Symbolist painting by a Norwegian artist, "Melancholy" was exhibited in 1891 at the Autumn Exhibition in Oslo. In 1892, Adelsteen Normann, on behalf of the Union of Berlin Artists, invited Munch to exhibit at its November exhibition, the society's first one-man exhibition. However, his paintings evoked bitter controversy (dubbed "The Munch Affair"), and after one week the exhibition closed. Munch was pleased with the "great commotion", and wrote in a letter: "Never have I had such an amusing time—it's incredible that something as innocent as painting should have created such a stir."
In Berlin, Munch became involved in an international circle of writers, artists and critics, including the Swedish dramatist and leading intellectual August Strindberg, whom he painted in 1892. He also met Danish writer and painter Holger Drachmann, whom he painted in 1898. Drachmann was 17 years Munch's senior and a drinking companion at Zum schwarzen Ferkel in 1893–94. In 1894 Drachmann wrote of Munch: "He struggles hard. Good luck with your struggles, lonely Norwegian."
During his four years in Berlin, Munch sketched out most of the ideas that would comprise his major work, "The Frieze of Life", first designed for book illustration but later expressed in paintings. He sold little, but made some income from charging entrance fees to view his controversial paintings. Already, Munch was showing a reluctance to part with his paintings, which he termed his "children".
His other paintings, including casino scenes, show a simplification of form and detail which marked his early mature style. Munch also began to favor a shallow pictorial space and a minimal backdrop for his frontal figures. Since poses were chosen to produce the most convincing images of states of mind and psychological conditions, as in "Ashes", the figures impart a monumental, static quality. Munch's figures appear to play roles on a theatre stage ("Death in the Sick-Room"), whose pantomime of fixed postures signify various emotions; since each character embodies a single psychological dimension, as in "The Scream", Munch's men and women began to appear more symbolic than realistic. He wrote, "No longer should interiors be painted, people reading and women knitting: there would be living people, breathing and feeling, suffering and loving."
"The Scream" exists in four versions: two pastels (1893 and 1895) and two paintings (1893 and 1910). There are also several lithographs of "The Scream" (1895 and later).
The 1895 pastel sold at auction on 2 May 2012 for US$119,922,500, including commission. It is the most colorful of the versions and is distinctive for the downward-looking stance of one of its background figures. It is also the only version not held by a Norwegian museum.
The 1893 version was stolen from the National Gallery in Oslo in 1994 and recovered. The 1910 painting was stolen in 2004 from The Munch Museum in Oslo, but recovered in 2006 with limited damage.
"The Scream" is Munch's most famous work, and one of the most recognizable paintings in all art. It has been widely interpreted as representing the universal anxiety of modern man. Painted with broad bands of garish color and highly simplified forms, and employing a high viewpoint, it reduces the agonized figure to a garbed skull in the throes of an emotional crisis.
With this painting, Munch met his stated goal of "the study of the soul, that is to say the study of my own self". Munch wrote of how the painting came to be: "I was walking down the road with two friends when the sun set; suddenly, the sky turned as red as blood. I stopped and leaned against the fence, feeling unspeakably tired. Tongues of fire and blood stretched over the bluish black fjord. My friends went on walking, while I lagged behind, shivering with fear. Then I heard the enormous, infinite scream of nature." He later described the personal anguish behind the painting, "for several years I was almost mad… You know my picture, 'The Scream?' I was stretched to the limit—nature was screaming in my blood… After that I gave up hope ever of being able to love again."
In summing up the painting's effects, author Martha Tedeschi has stated:
"Whistler's Mother", Wood's "American Gothic", Leonardo da Vinci's "Mona Lisa" and Edvard Munch's "The Scream" have all achieved something that most paintings—regardless of their art historical importance, beauty, or monetary value—have not: they communicate a specific meaning almost immediately to almost every viewer. These few works have successfully made the transition from the elite realm of the museum visitor to the enormous venue of popular culture.
In December 1893, Unter den Linden in Berlin was the location of an exhibition of Munch's work, showing, among other pieces, six paintings entitled "Study for a Series: Love." This began a cycle he later called the "Frieze of Life—A Poem about Life, Love and Death". "Frieze of Life" motifs, such as "The Storm" and "Moonlight", are steeped in atmosphere. Other motifs illuminate the nocturnal side of love, such as "Rose and Amelie" and "Vampire". In "Death in the Sickroom", the subject is the death of his sister Sophie, which he re-worked in many future variations. The dramatic focus of the painting, portraying his entire family, is dispersed in the separate and disconnected figures of sorrow. In 1894, he enlarged the spectrum of motifs by adding "Anxiety", "Ashes", "Madonna" and "Women in Three Stages" (from innocence to old age).
Around the start of the 20th century, Munch worked to finish the "Frieze". He painted a number of pictures, several of them in a larger format and to some extent featuring the Art Nouveau aesthetics of the time. He made a wooden frame with carved reliefs for the large painting "Metabolism" (1898), initially called "Adam and Eve". This work reveals Munch's preoccupation with the "fall of man" and his pessimistic philosophy of love. Motifs such as "The Empty Cross" and "Golgotha" reflect a metaphysical orientation, and also reflect Munch's pietistic upbringing. The entire "Frieze" was shown for the first time at the secessionist exhibition in Berlin in 1902.
"The Frieze of Life" themes recur throughout Munch's work but he especially focused on them in the mid-1890s. In sketches, paintings, pastels and prints, he tapped the depths of his feelings to examine his major motifs: the stages of life, the femme fatale, the hopelessness of love, anxiety, infidelity, jealousy, sexual humiliation, and separation in life and death. These themes are expressed in paintings such as "The Sick Child" (1885), "Love and Pain" (retitled "Vampire"; 1893–94), "Ashes" (1894), and "The Bridge". The latter shows limp figures with featureless or hidden faces, over which loom the threatening shapes of heavy trees and brooding houses. Munch portrayed women either as frail, innocent sufferers (see "Puberty" and "Love and Pain") or as the cause of great longing, jealousy and despair (see "Separation", "Jealousy", and "Ashes").
Munch often uses shadows and rings of color around his figures to emphasize an aura of fear, menace, anxiety, or sexual intensity. These paintings have been interpreted as reflections of the artist's sexual anxieties, though it could also be argued that they represent his turbulent relationship with love itself and his general pessimism regarding human existence. Many of these sketches and paintings were done in several versions, such as "Madonna", "Hands" and "Puberty", and also transcribed as wood-block prints and lithographs. Munch hated to part with his paintings because he thought of his work as a single body of expression. So to capitalize on his production and make some income, he turned to graphic arts to reproduce many of his paintings, including those in this series. Munch admitted to the personal goals of his work but he also offered his art to a wider purpose, "My art is really a voluntary confession and an attempt to explain to myself my relationship with life—it is, therefore, actually a sort of egoism, but I am constantly hoping that through this I can help others achieve clarity."
While his work still attracted strongly negative reactions, in the 1890s Munch began to receive some understanding of his artistic goals, as one critic wrote, "With ruthless contempt for form, clarity, elegance, wholeness, and realism, he paints with intuitive strength of talent the most subtle visions of the soul." One of his great supporters in Berlin was Walther Rathenau, later the German foreign minister, who strongly contributed to his success.
In 1896, Munch moved to Paris, where he focused on graphic representations of his "Frieze of Life" themes. He further developed his woodcut and lithographic technique. Munch's "Self-Portrait with Skeleton Arm" (1895) is done with an etching needle-and-ink method also used by Paul Klee. Munch also produced multi-colored versions of "The Sick Child", concerning tuberculosis, which sold well, as well as several nudes and multiple versions of "Kiss" (1892). Many of the Parisian critics still considered Munch's work "violent and brutal" but his exhibitions received serious attention and good attendance. His financial situation improved considerably and in 1897, Munch bought himself a summer house facing the fjords of Kristiania, a small fisherman's cabin built in the late 18th century, in the small town of Åsgårdstrand in Norway. He dubbed this home the "Happy House" and returned here almost every summer for the next 20 years. It was this place he missed when he was abroad and when he felt depressed and exhausted. "To walk in Åsgårdstrand is like walking among my paintings—I get so inspired to paint when I am here".
In 1897 Munch returned to Kristiania, where he also received grudging acceptance—one critic wrote, "A fair number of these pictures have been exhibited before. In my opinion these improve on acquaintance." In 1899, Munch began an intimate relationship with Tulla Larsen, a "liberated" upper-class woman. They traveled to Italy together and upon returning, Munch began another fertile period in his art, which included landscapes and his final painting in "The Frieze of Life" series, "The Dance of Life" (1899). Larsen was eager for marriage, and Munch begged off. His drinking and poor health reinforced his fears, as he wrote in the third person: "Ever since he was a child he had hated marriage. His sick and nervous home had given him the feeling that he had no right to get married." Munch almost gave in to Tulla, but fled from her in 1900, also turning away from her considerable fortune, and moved to Berlin. His "Girls on the Jetty", created in eighteen different versions, demonstrated the theme of feminine youth without negative connotations. In 1902, he displayed his works thematically at the hall of the Berlin Secession, producing "a symphonic effect—it made a great stir—a lot of antagonism—and a lot of approval." The Berlin critics were beginning to appreciate Munch's work even though the public still found his work alien and strange.
The good press coverage gained Munch the attention of influential patrons Albert Kollman and Max Linde. He described the turn of events in his diary, "After twenty years of struggle and misery forces of good finally come to my aid in Germany—and a bright door opens up for me." However, despite this positive change, Munch's self-destructive and erratic behavior involved him first in a violent quarrel with another artist, then in an accidental shooting in the presence of Tulla Larsen, who had returned for a brief reconciliation; the shooting injured two of his fingers. As a consequence of the shooting and subsequent events, Munch later sawed in half a self-portrait depicting himself and Larsen. She finally left him and married a younger colleague of Munch's. Munch took this as a betrayal, and he dwelled on the humiliation for some time to come, channeling some of the bitterness into new paintings. His paintings "Still Life (The Murderess)" and "The Death of Marat I", done in 1906–07, clearly reference the shooting incident and its emotional aftermath.
In 1903–04, Munch exhibited in Paris where the coming Fauvists, famous for their boldly false colors, likely saw his works and might have found inspiration in them. When the Fauves held their own exhibit in 1906, Munch was invited and displayed his works with theirs. After studying the sculpture of Rodin, Munch may have experimented with plasticine as an aid to design, but he produced little sculpture. During this time, Munch received many commissions for portraits and prints which improved his usually precarious financial condition. In 1906, he painted the screen for an Ibsen play in the small Kammerspiele Theatre located in Berlin's Deutsches Theater, in which the "Frieze of Life" was hung. The theatre's director Max Reinhardt later sold it; it is now in the Berlin Nationalgalerie. After an earlier period of landscapes, in 1907 he turned his attention again to human figures and situations.
In the autumn of 1908, Munch's anxiety, compounded by excessive drinking and brawling, had become acute. As he later wrote, "My condition was verging on madness—it was touch and go." Subject to hallucinations and feelings of persecution, he entered the clinic of Daniel Jacobson. The therapy Munch received for the next eight months included diet and "electrification" (a treatment then fashionable for nervous conditions, not to be confused with electroconvulsive therapy). Munch's stay in hospital stabilized his personality, and after returning to Norway in 1909, his work became more colorful and less pessimistic. Further brightening his mood, the general public of Kristiania finally warmed to his work, and museums began to purchase his paintings. He was made a Knight of the Royal Order of St. Olav "for services in art". His first American exhibit was in 1912 in New York.
As part of his recovery, Dr. Jacobson advised Munch to socialize only with good friends and avoid drinking in public. Munch followed this advice and in the process produced several high-quality, full-length portraits of friends and patrons—honest portrayals devoid of flattery. He also created landscapes and scenes of people at work and play, using a new optimistic style—broad, loose brushstrokes of vibrant color with frequent use of white space and rare use of black—with only occasional references to his morbid themes. With more income, Munch was able to buy several properties, giving him new vistas for his art, and he was finally able to provide for his family.
The outbreak of World War I found Munch with divided loyalties, as he stated, "All my friends are German but it is France I love." In the 1930s, his German patrons, many Jewish, lost their fortunes and some their lives during the rise of the Nazi movement. Munch found Norwegian printers to substitute for the Germans who had been printing his graphic work. Given his poor health history, during 1918 Munch felt himself lucky to have survived a bout of the Spanish flu, the worldwide pandemic of that year.
Munch spent most of his last two decades in solitude at his nearly self-sufficient estate in Ekely, at Skøyen, Oslo. Many of his late paintings celebrate farm life, including several in which he used his work horse "Rousseau" as a model. Without any effort, Munch attracted a steady stream of female models, whom he painted as the subjects of numerous nude paintings. He likely had sexual relationships with some of them. Munch occasionally left his home to paint murals on commission, including those done for the Freia chocolate factory.
To the end of his life, Munch continued to paint unsparing self-portraits, adding to his self-searching cycle of his life and his unflinching series of takes on his emotional and physical states. In the 1930s and 1940s, the Nazis labeled Munch's work "degenerate art" (along with that of Picasso, Klee, Matisse, Gauguin and many other modern artists) and removed his 82 works from German museums. Adolf Hitler announced in 1937, "For all we care, those prehistoric Stone Age culture barbarians and art-stutterers can return to the caves of their ancestors and there can apply their primitive international scratching."
In 1940, the Germans invaded Norway and the Nazi party took over the government. Munch was 76 years old. With nearly an entire collection of his art on the second floor of his house, Munch lived in fear of a Nazi confiscation. Seventy-one of the paintings previously taken by the Nazis had been returned to Norway through purchase by collectors (the other eleven were never recovered), including "The Scream" and "The Sick Child", and they too were hidden from the Nazis.
Munch died in his house at Ekely near Oslo on 23 January 1944, about a month after his 80th birthday. His Nazi-orchestrated funeral suggested to Norwegians that he was a Nazi sympathizer, a kind of appropriation of the independent artist. The city of Oslo bought the Ekely estate from Munch's heirs in 1946; his house was demolished in May 1960.
When Munch died, his remaining works were bequeathed to the city of Oslo, which built the Munch Museum at Tøyen (it opened in 1963). The museum holds a collection of approximately 1,100 paintings, 4,500 drawings, and 18,000 prints, the broadest collection of his works in the world. The Munch Museum serves as Munch's official estate, and has been active in responding to copyright infringements, as well as clearing copyright for the work, such as the appearance of Munch's "The Scream" in a 2006 M&M's advertising campaign. The U.S. copyright representative for the Munch Museum and the Estate of Edvard Munch is the Artists Rights Society.
Munch's art was highly personalized and he did little teaching. His "private" symbolism was far more personal than that of other Symbolist painters such as Gustave Moreau and James Ensor. Munch was still highly influential, particularly with the German Expressionists, who followed his philosophy, "I do not believe in the art which is not the compulsive result of Man's urge to open his heart." Many of his paintings, including "The Scream", have universal appeal in addition to their highly personal meaning.
Munch's works are now represented in numerous major museums and galleries in Norway and abroad. His cabin, "the Happy House", was given to the municipality of Åsgårdstrand in 1944; it serves as a small Munch Museum. The inventory has been maintained exactly as he left it.
One version of "The Scream" was stolen from the National Gallery in 1994. In 2004, another version of "The Scream", along with one of "Madonna", was stolen from the Munch Museum in a daring daylight robbery. These were all eventually recovered, but the paintings stolen in the 2004 robbery were extensively damaged. They have been meticulously restored and are on display again. Three Munch works were stolen from the Hotel Refsnes Gods in 2005; they were shortly recovered, although one of the works was damaged during the robbery.
In October 2006, the color woodcut "Two people. The lonely" ("To mennesker. De ensomme") set a new record for his prints when it was sold at an auction in Oslo for 8.1 million kroner (US$1.27 million). It also set a record for the highest price paid at auction in Norway. On 3 November 2008, the painting "Vampire" set a new record for his paintings when it was sold for US$38,162,000 at Sotheby's New York.
Munch's image appears on the Norwegian 1,000-kroner note, along with pictures inspired by his artwork.
In February 2012, a major Munch exhibition, "Edvard Munch. The Modern Eye", opened at the Schirn Kunsthalle Frankfurt; the exhibition was opened by Mette-Marit, Crown Princess of Norway.
In May 2012, "The Scream" sold for US$119.9 million, making it the second most expensive artwork ever sold at an open auction. (It was surpassed in November 2013 by "Three Studies of Lucian Freud", which sold for US$142.4 million.)
In 2013, four of Munch's paintings were depicted in a series of stamps by the Norwegian postal service, commemorating the 150th anniversary of his birth.
On 14 November 2016, a version of Munch's "The Girls on the Bridge" sold for US$54.5 million at Sotheby's, New York, the second-highest price achieved for one of his paintings.
In April 2019, the British Museum hosted the exhibition "Edvard Munch: Love and Angst", comprising 83 artworks and including a rare original print of "The Scream".
In 1911 the final competition for the decoration of the large walls of the University of Oslo Aula (assembly hall) was held between Munch and Emanuel Vigeland. The episode is known as the "Aula controversy". In 1914 Munch was finally commissioned to decorate the Aula, and the work was completed in 1916. This major work in Norwegian monumental painting comprises 11 paintings; "The Sun", "History" and "Alma Mater" are the key works in the sequence. Munch declared: "I wanted the decorations to form a complete and independent world of ideas, and I wanted their visual expression to be both distinctively Norwegian and universally human." In 2014 it was suggested that the Aula paintings have a value of at least 500 million kroner.
Extended Industry Standard Architecture
The Extended Industry Standard Architecture (in practice almost always shortened to EISA and frequently pronounced "eee-suh") is a bus standard for IBM PC compatible computers. It was announced in September 1988 by a consortium of PC clone vendors (the Gang of Nine) as a counter to IBM's use of its proprietary Micro Channel architecture (MCA) in its PS/2 series.
In comparison with the AT bus, which the Gang of Nine retroactively renamed to the ISA bus to avoid infringing IBM's trademark on its PC/AT computer, EISA is extended to 32 bits and allows more than one CPU to share the bus. The bus mastering support is also enhanced to provide access to 4 GB of memory. Unlike MCA, EISA can accept older XT and ISA boards — the lines and slots for EISA are a superset of ISA.
EISA was much favoured by manufacturers due to the proprietary nature of MCA, and even IBM produced some machines supporting it. It was somewhat expensive to implement (though not as much as MCA), so it never became particularly popular in desktop PCs. However, it was reasonably successful in the server market, as it was better suited to bandwidth-intensive tasks (such as disk access and networking). Most EISA cards produced were either SCSI or network cards. EISA was also available on some non-IBM-compatible machines such as the AlphaServer, HP 9000-D, SGI Indigo2 and MIPS Magnum.
By the time there was a strong market need for a bus of these speeds and capabilities for desktop computers, the VESA Local Bus and later PCI filled this niche, and EISA vanished into obscurity.
The original IBM PC included five 8-bit slots, running at the system clock speed of 4.77 MHz. The PC/AT, introduced in 1984, had three 8-bit slots and five 16-bit slots, all running at the system clock speed of 6 MHz in the earlier models and 8 MHz in the last version of the computer. The 16-bit slots were a superset of the 8-bit configuration, so "most" 8-bit cards were able to plug into a 16-bit slot (some cards used a "skirt" design that physically interfered with the extended portion of the slot) and continue to run in 8-bit mode. One of the key reasons for the success of the IBM PC (and the PC clones that followed it) was the active ecosystem of third-party expansion cards available for the machines. IBM was restricted from patenting the bus and widely published the bus specifications.
As the PC-clone industry continued to build momentum in the mid- to late-1980s, several problems with the bus began to be apparent. First, because the "AT slot" (as it was known at the time) was not managed by any central standards group, there was nothing to prevent a manufacturer from "pushing" the standard. One of the most common issues was that as PC clones became more common, PC manufacturers began increasing the processor speed to maintain a competitive advantage. Unfortunately, because the ISA bus was originally locked to the processor clock, this meant that some 286 machines had ISA buses that ran at 10, 12, or even 16 MHz. In fact, the first systems to clock the ISA bus at 8 MHz were the turbo 8088 clones that ran their processors at 8 MHz. This caused many incompatibility issues, where a true IBM-compatible third-party card (designed for an 8 MHz or 4.77 MHz bus) might not work in a higher-speed system (or, even worse, would work unreliably). Most PC makers eventually decoupled the slot clock from the system clock, but there was still no standards body to "police" the industry.
As companies like Dell modified the AT bus design, the architecture was so well entrenched that no single clone manufacturer had the leverage to create a standardized alternative, and there was no compelling reason for them to cooperate on a new standard. Because of this, when the first 386-based system (the Compaq Deskpro 386) hit the market in 1986, it still supported 16-bit slots. Other 386 PCs followed suit, and the AT (later ISA) bus remained a part of most systems even into the late 1990s.
Meanwhile, IBM began to worry that it was losing control of the industry it had created. In 1987, IBM released the PS/2 line of computers, which included the MCA bus. MCA included numerous enhancements over the 16-bit AT bus, including bus mastering, burst mode, software-configurable resources, and 32-bit capabilities. However, in an effort to reassert its dominant role, IBM patented the bus and placed stringent licensing and royalty policies on its use. A few manufacturers did produce licensed MCA machines (most notably, NCR), but overall the industry balked at IBM's restrictions.
Steve Gibson proposed that clone makers adopt NuBus. Instead, a group (the "Gang of Nine"), led by Compaq, created a new bus, which was named the Extended (or Enhanced) Industry Standard Architecture, or "EISA" (and the 16-bit bus became known as Industry Standard Architecture, or "ISA"). This provided virtually all of the technical advantages of MCA while remaining compatible with existing 8-bit and 16-bit cards and, most enticing to system and card makers, carrying minimal licensing costs.
The EISA bus slot is a two-level staggered pin system, with the upper part of the slot corresponding to the standard ISA bus pin layout. The additional features of the EISA bus are implemented on the lower part of the slot connector, using thin traces inserted into the insulating gap of the upper (ISA) card edge connector. Additionally, the lower part of the slot has five keying notches, so an ISA card with unusually long traces cannot accidentally extend down into the lower part of the slot.
Intel introduced their first EISA chipset (and also their first chipset in the modern sense of the word) as the 82350 in September 1989. Intel introduced a lower-cost variant as the 82350DT, announced in April 1991; it began shipping in June of that year.
The first EISA computer announced was the HP Vectra 486 in October 1989. The first EISA computers to hit the market were the Compaq Deskpro 486 and the SystemPro. The SystemPro, being one of the first PC-style systems designed as a network server, was built from the ground up to take full advantage of the EISA bus. It included such features as multiprocessing, hardware RAID, and bus-mastering network cards.
One of the benefits to come out of the EISA standard was a final codification of the specification to which ISA slots and cards should be held (in particular, clock speed was fixed at an industry standard of 8.33 MHz). Thus, even systems that did not use the EISA bus gained the advantage of having ISA standardized, which contributed to its longevity.
The "Gang of Nine" was the informal name given to the consortium of personal computer manufacturing companies that together created the EISA bus. Rival members generally acknowledged Compaq's leadership, with one stating in 1989 that within the Gang of Nine "when you have 10 people sit down before a table to write a letter to the president, someone has to write the letter. Compaq is sitting down at the typewriter". The members were:
Although the MCA bus had a slight performance advantage over EISA (bus speed of 10 MHz, compared to 8.33 MHz), EISA contained almost all of the technological benefits that MCA boasted, including bus mastering, burst mode, software-configurable resources, and 32-bit data/address buses. These brought EISA nearly to par with MCA from a performance standpoint, and EISA easily defeated MCA in industry support.
EISA replaced the tedious jumper configuration common with ISA cards with software-based configuration. Every EISA system shipped with an EISA configuration utility; this was usually a slightly customized version of the standard utilities written by the EISA chipset makers. The user would boot into this utility, either from floppy disk or on a dedicated hard-drive partition. The utility software would detect all EISA cards in the system and could configure any hardware resources (interrupts, memory ports, etc.) on any EISA card (each EISA card would include a disk with information that described the available options on the card) or on the EISA system motherboard. The user could also enter information about ISA cards in the system, allowing the utility to automatically reconfigure EISA cards to avoid resource conflicts.
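The conflict-avoidance step lends itself to a small illustration. The following Python sketch is hypothetical: the card names, IRQ options, and the simple backtracking search are invented here to show the idea of choosing one resource per card from its list of allowed options so that no two cards collide, not how any actual EISA configuration utility was implemented.

    def assign_irqs(cards, taken=frozenset()):
        """cards: list of (name, allowed_irqs) pairs; returns {name: irq} or None."""
        if not cards:
            return {}
        (name, allowed), *rest = cards
        for irq in allowed:
            if irq not in taken:
                solution = assign_irqs(rest, taken | {irq})
                if solution is not None:
                    return {name: irq, **solution}
        return None  # no conflict-free assignment exists

    cards = [("SCSI host adapter", [10, 11]),  # options read from the card's config disk
             ("network card", [10]),
             ("ISA sound card", [10, 5])]      # ISA card info entered by the user
    print(assign_irqs(cards))
    # {'SCSI host adapter': 11, 'network card': 10, 'ISA sound card': 5}

A real utility juggled more resource types (DMA channels, I/O ports, memory ranges) and stored the chosen configuration for the system to apply at boot, but the underlying constraint-satisfaction idea is the same.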
Windows 95, with its Plug-and-Play capability, was not able to change the configuration of EISA cards, but it could detect the cards, read their configuration, and reconfigure Plug-and-Play hardware to avoid resource conflicts with them. Windows 95 would also automatically attempt to install appropriate drivers for detected EISA cards.
EISA's success was far from guaranteed. Many manufacturers, including those in the "Gang of Nine", researched the possibility of using MCA. For example, Compaq actually produced prototype DeskPro systems using the bus. However, these were never put into production, and when it was clear that MCA had lost, Compaq allowed its MCA license to expire (the license actually cost relatively little; the primary costs associated with MCA, and at which the industry revolted, were royalties to be paid per system shipped).
On the other hand, when it became clear that Micro Channel was dying, IBM itself licensed EISA for use in a few server systems.
Earthdawn
Earthdawn is a fantasy role-playing game, originally produced by FASA in 1993. In 1999 it was licensed to Living Room Games, which produced the "Second Edition". It was licensed to RedBrick in 2003, who released the Classic Edition in 2005 and the game's Third Edition in 2009 (the latter through Mongoose Publishing's Flaming Cobra imprint). The license is now held by FASA Games, Inc. (from FASA), who have released the Fourth Edition, with updated mechanics and an advanced metaplot timeline.
The game is similar to fantasy games like "Dungeons & Dragons", but draws more inspiration from games like "RuneQuest". The rules of the game are tightly bound to the underlying magical metaphysics, with the goal of creating a rich, logical fantasy world. Like many role-playing games from the nineties, "Earthdawn" focuses much of its detail on its setting, a province called Barsaive. It is also a prequel to Shadowrun.
Starting in 1993, FASA released over 20 gaming supplements describing this universe; however, it closed down production of "Earthdawn" in January 1999. During that time several novels and short-story anthologies set in the "Earthdawn" universe were also released. In late 1999, FASA granted Living Room Games a licensing agreement to produce new material for the game.
The "Second Edition" did not alter the setting, though it did update the timeline to include events that took place in Barsaive. There were a few changes to the rules in the "Second Edition"; some classes were slightly different or altered abilities from the original. The changes were meant to allow for more rounded characters and better balance of play. Living Room Games last published in 2005 and they no longer have a license with FASA to publish Earthdawn material.
In 2003 a second license was granted to RedBrick, who developed their own edition based on the FASA products, in addition to releasing the original FASA books in PDF form. The "Earthdawn Classic Player's Compendium" and "Earthdawn Classic Gamemaster's Compendium" are essentially an alternative Second Edition, but without a version designation (since the material is compatible anyway). Each book has 524 pages and summarizes much of what FASA published—not only the game mechanics, but also the setting, narrations, and stories. For example, each Discipline has its own chapter, describing it from the point of view of different adepts. Likewise, Barsaive gets a complete treatment, and the chapters contain a lot of log entries and stories in addition to the setting descriptions; the same applies to Horrors and Dragons. Errata was incorporated into the text, correcting previous edition errors and providing rules clarifications.
In 2014, FASA Games announced the forthcoming publication of Earthdawn Fourth Edition and launched a successful Kickstarter to support the project. Fourth Edition is described as a reworking of the game mechanics, with redundancies eliminated, and a simpler success level system. The game world is advanced five years, past the end of the Barsaive-Thera War, in order to clear dangling threads in the metaplot and open the game world to new stories. The first Fourth Edition title—the Player's Guide—was released in early 2015. In 2014 FASA Corporation also gave permission for Impact Miniatures to return the original Heartbreaker Hobbies & Games Official Earthdawn Miniatures range to production. In order to fund this, Impact Miniatures launched a successful Kickstarter project.
In Barsaive, magic, like many things in nature, goes through cycles. As the magic level rises, it allows alien creatures called Horrors to cross from their distant, otherworldly dimension into our own. The Horrors come in an almost infinite variety—from simple eating machines that devour all they encounter, to incredibly intelligent and cunning foes that feed off the negative emotions they inspire in their prey.
In the distant past of "Earthdawn"s setting, an elf scholar discovered that the time of the Horrors was approaching, and founded the Eternal Library in order to discover a way to defeat them — or at the very least, survive them. The community that grew up around the library developed wards and protections against the Horrors, which they traded to other lands and eventually became the powerful Theran Empire, an extremely magically advanced civilization and the main antagonist of the "Earthdawn" setting.
The peoples of the world built kaers, underground towns and cities, which they sealed with the Theran wards to wait out the time of the Horrors, which was called the Scourge. Theran wizards and politicians warned many of the outlying nations around Thera of the coming of the Horrors, offering the protection of the kaers to those who would pledge their loyalty to the Empire. Most of these nations agreed at first, though some became unwilling to fulfill their end of the bargain after the end of the Scourge, wanting nothing to do with a bureaucratic nation run on political conflict and powered by slavery. After four hundred years of hiding, the Scourge ended, and the people emerged to a world changed by the Horrors. The player characters explore this new world, discovering lost secrets of the past, and fighting Horrors that remain.
The primary setting of Earthdawn is Barsaive, a former province of the Theran Empire. Barsaive is a region of city-states, independent from the Therans since the dwarven Kingdom of Throal led a rebellion against their former overlords. The Theran presence in Barsaive has been limited to a small part of south-western Barsaive, located around the magical fortress of Sky Point and the city of Vivane.
The setting of Earthdawn is the same world as "Shadowrun" (i.e. a fictionalized version of Earth), but takes place millennia earlier. Indeed, the map of Barsaive and its neighboring regions establishes that most of the game takes place where Ukraine and Russia are in our world. However, the topography other than coastlines and major rivers is quite different, and the only apparent reference to the real world besides the map may be the Blood Wood, known as "Wyrm Wood" before the Scourge and similar in location and extent to the Chernobyl (Ukrainian for "wormwood") zone of alienation. Note that the game-world links between "Earthdawn" and "Shadowrun" were deliberately broken by the publisher when the "Shadowrun" property was licensed out, in order to avoid the necessity of coordination between publishing companies. FASA has since announced that there are no plans to return "Shadowrun" to in-house publication, nor to restore the links between the game worlds.
Two Earthdawn supplements cover territories outside Barsaive. "The Theran Empire" book (First Edition) covers the Theran Empire and its provinces (which roughly correspond to the territories of the Roman Empire, plus colonies in America and India). "Cathay: The Five Kingdoms" (Third Edition) covers the lands of Cathay (Far East).
The setting of "Earthdawn" features several fantasy races for characters and NPCs:
Barsaive was once one of the Theran Empire's many provinces, but a series of post-Scourge wars between Thera and various city-states of Barsaive has seen the former province secure its independence. Barsaive's people and governments represent a varied number of individual powers.
"Earthdawn"'s magic system is highly varied but the essential idea is that all player characters (called Adepts) have access to magic, used to perform abilities attained through their Disciplines.
Each Discipline is given a unique set of "Talents" which are used to access the world's magic. Legend points (the "Earthdawn" equivalent of experience points) can be spent to raise a character's rank in a Talent, increasing the step level for the ability and making the user more proficient at that specific type of magic.
Caster Disciplines use the same Talent system as others, but also have access to "spells". How a player character obtains spells varies depending on the Game Master, but how they are used is universal. Casters all have special Talents called "spell matrixes" into which they can place spells. A spell "attuned" (placed into) a matrix is easily accessible and can be cast at any time. Spells can be switched at the player's will while out of combat. Once engaged in combat, however, casters must use an action to do so (called re-attuning on the fly), which requires meeting a set difficulty number or risking the loss of their turn.
It is generally recommended that Casters use only attuned spells, but this is not required. Casting a spell that is not in a matrix is referred to as "raw casting". Raw casting is perhaps the most dangerous aspect of the Earthdawn magic system. If the spell is successfully cast, it has its normal effects along with added consequences. Raw casting has a very good chance of drawing the attention of a Horror, which can quickly mean death for low-level characters (and in some cases for high-level characters as well).
One of the most innovative ideas in "Earthdawn" is how magical items work. At first, most magical items work exactly like a mundane item of the same type. As a character searches for information about the item's history, performs certain tasks relating to that history, and spends legend points to activate the item, he unlocks some of the magic in the item. As the character learns more about the item and its history, he can unlock more and more power within the item.
Each magical item, therefore, is unique by virtue of its history and the scope of its powers. For example, one magical broadsword may have only 4 magical ranks and only increase the damage of the blade. On the other hand, the legendary sword Purifier has 10 magical ranks and grants its wielder numerous powers.
"Earthdawn" stands out from other tabletop RPGs with a unique approach to skill tests. Players wanting to perform an action determine their level or "step" for the skill, talent, or ability to be used. This step can then be looked up in a list of dice to be thrown; it is the next-highest integer of the average roll of the dice(s) in question. For example, two six-sided dice will on average yield a result of 7, thus the step number 8 means that 2d6 will be rolled. The consequence is that each such dice roll has a 50% chance of yielding a result at least as high as the corresponding step number.
The result of each die is added (a die that rolls its maximum value is thrown again, with each maximum added to the tally along with the final sub-maximum result) and compared to a value decided by the game master/storyteller according to the difficulty of the task. This approach means it is always technically possible to succeed with a low step number, yet leaves room for failure on high step numbers; it can also make combat last longer than in some other games. As per the above, the difficulty value at which the odds of success are even is identical to the step number.
The dice in steps 3 through 13 form the basis of an 11-Step cycle. To form Steps 14-24, add 1d20. To form Steps 25-35, further add 1d10 + 1d8. For higher cycles, continue alternating between the addition of 1d20 and 1d10 + 1d8. Step 2 is rolled as Step 3, but you subtract 1 from the result. This is notated as "1d4 - 1". Step 1 is 1d4 - 2.
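For readers who find the cycle easier to see in code, below is a minimal Python sketch of the first-edition mapping and the re-roll rule. It is an illustrative reconstruction, not official rules text: the dice assigned to base steps 3 through 13 are inferred here from the averaging rule described above, so individual entries should be treated as assumptions rather than the published table.

    import random

    # Base dice for steps 3-13, inferred from the averaging rule above
    # (each step is the next-highest integer of its dice's average roll).
    BASE = {3: [4], 4: [6], 5: [8], 6: [10], 7: [12], 8: [6, 6],
            9: [8, 6], 10: [10, 6], 11: [10, 8], 12: [10, 10], 13: [12, 10]}

    def step_dice(step):
        """Return (dice, modifier) for a first-edition step number."""
        if step == 1:
            return [4], -2                       # "1d4 - 2"
        if step == 2:
            return [4], -1                       # "1d4 - 1"
        cycles, base = divmod(step - 3, 11)      # 11-step cycle starting at step 3
        dice = list(BASE[base + 3])
        for i in range(cycles):                  # alternate adding 1d20 and 1d10 + 1d8
            dice += [20] if i % 2 == 0 else [10, 8]
        return dice, 0

    def roll_step(step):
        """Roll a step; a die showing its maximum is added to the tally and thrown again."""
        dice, modifier = step_dice(step)
        total = modifier
        for sides in dice:
            result = random.randint(1, sides)
            while result == sides:               # maximum value: keep it and re-roll
                total += result
                result = random.randint(1, sides)
            total += result
        return total

    print(step_dice(8))   # ([6, 6], 0): 2d6, average 7, hence step 8
    print(step_dice(25))  # ([4, 20, 10, 8], 0): step 3 dice plus 1d20 plus 1d10 + 1d8
    print(roll_step(8))   # one 2d6 roll with re-rolled sixes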
The 3rd edition changes the progression by removing d4s and d20s from the system. Steps 6 through 12 form the basis of a 7-Step cycle; to add 7 Steps from then on, simply add 1d12.
The 4th edition changes this by making Steps 8 through 18 form the basis of an 11-Step cycle. To form Steps 19-29, add 1d20. To form Steps 30-41, add 2d20, and so on.
Chris W. McCubbin reviewed "Earthdawn" in "Pyramid" #3 (Sept./Oct., 1993), and stated that "Although it never becomes bogged down in cliches and avoids outmoded concepts, "Earthdawn" is, at heart, a very traditional heroic fantasy RPG."
In the February 1994 edition of "Dragon" (Issue 202), Rick Swan liked the high production values "highlighted by striking illustrations and FASA’s usual state-of-the-art graphics", and found that "Thanks to clear writing and sensible organization... it's an easy read." But Swan also found the game setting insubstantial compared to others. "Despite workable rules and a clever setting, "Earthdawn" is more frosting than cake, with little of substance to distinguish it from the competition." Nevertheless, he found himself drawn to the game. "In a greasy pizza, let’s-not-take-this-too-seriously kind of way, "Earthdawn" holds its own."
In a 1996 reader poll conducted by "Arcane" magazine to determine the 50 most popular roleplaying games of all time, "Earthdawn" was ranked 24th. Editor Paul Pettengale commented: "Very good indeed. "Earthdawn" combined traditional fantasy with "Call of Cthulhu"-style horror and a detailed background to create an evocative and interesting setting. Combined with a clear, well-designed rules system and an impressive range of supporting supplements and adventures, this is an excellent fantasy game. It's also of special interest to fans of "Shadowrun", because it describes the past of the same gameworld."
In 1999 "Pyramid" magazine named "Earthdawn" as one of "The Millennium's Most Underrated Games". Editor Scott Haring noted (referring to the FASA edition) that ""Earthdawn" had an original, inventive magic system (no mean trick given the hundreds of fantasy RPGs that came before), and a game world that gave you the classic "monsters and dungeons" sort of RPG experience, but made sense doing it." | https://en.wikipedia.org/wiki?curid=9789 |
Electronic data interchange
Electronic data interchange (EDI) is the concept of businesses electronically communicating information that was traditionally communicated on paper, such as purchase orders and invoices. Technical standards for EDI exist to facilitate parties transacting such instruments without having to make special arrangements.
EDI has existed at least since the early 1970s, and there are many EDI standards (including X12, EDIFACT, ODETTE, etc.), some of which address the needs of specific industries or regions. The term also refers specifically to a family of standards. In 1996, the National Institute of Standards and Technology defined electronic data interchange as "the computer-to-computer interchange of strictly formatted messages that represent documents other than monetary instruments. EDI implies a sequence of messages between two parties, either of whom may serve as originator or recipient. The formatted data representing the documents may be transmitted from originator to recipient via telecommunications or physically transported on electronic storage media." It distinguished EDI from mere electronic communication or data exchange, specifying that "in EDI, the usual processing of received messages is by computer only. Human intervention in the processing of a received message is typically intended only for error conditions, for quality review, and for special situations. For example, the transmission of binary or textual data is not EDI as defined here unless the data are treated as one or more data elements of an EDI message and are not normally intended for human interpretation as part of online data processing." In short, EDI can be defined as the transfer of structured data, by agreed message standards, from one computer system to another without human intervention.
Like many other early information technologies, EDI was inspired by developments in military logistics. The complexity of the 1948 Berlin airlift required the development of concepts and methods to exchange, sometimes over a 300 baud teletype modem, vast quantities of data and information about transported goods. These initial concepts later shaped the first TDCC (Transportation Data Coordinating Committee) standards in the US. Among the first integrated systems using EDI were Freight Control Systems. One such real-time system was the London Airport Cargo EDP Scheme (LACES) at Heathrow Airport, London, UK, in 1971. Implementing the direct trader input (DTI) method, it allowed forwarding agents to enter information directly into the customs processing system, reducing the time for clearance. The increase of maritime traffic and problems at customs similar to those experienced at Heathrow Airport led to the implementation of DTI systems in individual ports or groups of ports in the 1980s.
EDI provides a technical basis for automated commercial "conversations" between two entities, either internal or external. The term EDI encompasses the entire electronic data interchange process, including the transmission, message flow, document format, and software used to interpret the documents. However, EDI standards describe the rigorous format of electronic documents; they were designed, initially in the automotive industry, to be independent of communication and software technologies.
EDI documents generally contain the same information that would normally be found in a paper document used for the same organizational function. For example, an EDI 940 ship-from-warehouse order is used by a manufacturer to tell a warehouse to ship product to a retailer. It typically has a 'ship-to' address, a 'bill-to' address, and a list of product numbers (usually a UPC) and quantities. Another example is the set of messages between sellers and buyers, such as request for quotation (RFQ), bid in response to RFQ, purchase order, purchase order acknowledgement, shipping notice, receiving advice, invoice, and payment advice. However, EDI is not confined to just business data related to trade but encompasses all fields such as medicine (e.g., patient records and laboratory results), transport (e.g., container and modal information), engineering and construction, etc. In some cases, EDI will be used to create a new business information flow (that was not a paper flow before). This is the case in the Advanced Shipment Notification (ASN), which was designed to inform the receiver of a shipment of the goods to be received and how the goods are packaged. This is further complemented by shipping labels containing a GS1-128 barcode referencing the shipment's tracking number.
Some major sets of EDI standards include UN/EDIFACT (the international standard, predominant outside North America), ANSI ASC X12 (predominant in North America), TRADACOMS (used primarily in UK retail), and ODETTE (used in the European automotive industry).
Many of these standards first appeared in the early to mid-1980s. The standards prescribe the formats, character sets, and data elements used in the exchange of business documents and forms. The complete X12 Document List includes all major business documents, including purchase orders and invoices.
The EDI standard prescribes mandatory and optional information for a particular document and gives the rules for the structure of the document. The standards are like building codes. Just as two kitchens can be built "to code" but look completely different, two EDI documents can follow the same standard and contain different sets of information. For example, a food company may indicate a product's expiration date while a clothing manufacturer would choose to send colour and size information.
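To make the segment-and-element structure concrete, the fragment below sketches a toy X12-style document in Python. X12 interchanges really are strings of segments ended by a terminator (commonly "~") whose data elements are separated by a delimiter (commonly "*"), but the particular segments and values here are simplified placeholders rather than a valid, conformant 940.

    # Toy X12-style fragment: "~" ends a segment, "*" separates elements.
    # Segment contents are simplified placeholders, not a conformant 940.
    raw = ("ST*940*0001~"                    # transaction set header
           "N1*ST*ACME RETAIL STORE 42~"     # hypothetical ship-to party
           "W01*12*CA*000100001111*UP~"      # hypothetical quantity/UPC line
           "SE*4*0001~")                     # transaction set trailer

    for segment in raw.rstrip("~").split("~"):
        tag, *elements = segment.split("*")
        print(tag, elements)

Two trading partners can both emit structurally valid documents of this kind while populating different optional elements, which is the sense in which two documents can follow the same standard yet carry different sets of information.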
EDI can be transmitted using any methodology agreed to by the sender and recipient, but as more trading partners began using the Internet for transmission, standardized protocols have emerged.
This includes a variety of technologies: modems (asynchronous and synchronous), FTP, e-mail, HTTP, telnet, the AS1/AS2/AS3/AS4 protocols, and value-added networks, among others.
When some observers compared the synchronous-protocol 2400 bit/s modems, CLEO devices, and value-added networks used to transmit EDI documents with transmission via the Internet, they equated the non-Internet technologies with EDI itself and erroneously predicted that EDI would be replaced along with them. In most cases, these non-Internet transmission methods have simply been replaced by Internet protocols, such as FTP, HTTP, telnet, and e-mail, while the EDI documents themselves remain.
In 2002, the IETF published RFC 3335, offering a standardized, secure method of transferring EDI data via e-mail. On July 12, 2005, an IETF working group ratified RFC 4130 for MIME-based HTTP EDIINT (a.k.a. AS2) transfers, and the IETF has prepared a similar RFC for FTP transfers (a.k.a. AS3). EDI via web services (a.k.a. AS4) has also been standardised by the OASIS standards body. While some EDI transmission has moved to these newer protocols, the providers of value-added networks remain active.
As more organizations connected to the Internet, eventually most or all EDI was pushed onto it. Initially, this was through ad hoc conventions, such as unencrypted FTP of ASCII text files to a certain folder on a certain host, permitted only from certain IP addresses. However, the IETF has published several informational documents (the "Applicability Statements"; see below under Protocols) describing ways to use standard Internet protocols for EDI.
Walmart has pushed AS2 for EDI since 2002. Because of its significant presence in the global supply chain, AS2 has become a commonly adopted approach for EDI.
Organizations that send or receive documents between each other are referred to as "trading partners" in EDI terminology. The trading partners agree on the specific information to be transmitted and how it should be used. This is done in human-readable specifications (also called Message Implementation Guidelines). While the standards are analogous to building codes, the specifications are analogous to blueprints. (The specification may also be called a "mapping," but the term mapping is typically reserved for specific machine-readable instructions given to the translation software.) Larger trading "hubs" have existing Message Implementation Guidelines which mirror their business processes for processing EDI and they are usually unwilling to modify their EDI business practices to meet the needs of their trading partners. Often in a large company these EDI guidelines will be written to be generic enough to be used by different branches or divisions and therefore will contain information not needed for a particular business document exchange. For other large companies, they may create separate EDI guidelines for each branch/division.
Trading partners are free to use any method for the transmission of documents (as described above in the Transmission protocols section). Further, they can either interact directly, or through an intermediary.
Trading partners can connect directly to each other. For example, an automotive manufacturer might maintain a modem-pool that all of its hundreds of suppliers are required to dial into to perform EDI. However, if a supplier does business with several manufacturers, it may need to acquire a different modem (or VPN device, etc.) and different software for each one.
As EDI and web technology have evolved, new EDI software technologies have emerged to facilitate direct (also known as point-to-point) EDI between trading partners. Modern EDI software can facilitate exchanges using any number of different file transmission protocols and EDI document standards, reducing costs and barriers to entry.
To address the limitations in peer-to-peer adoption of EDI, VANs (value-added networks) were established decades ago. A VAN acts as a regional post office. It receives transactions, examines the 'from' and the 'to' information, and routes the transaction to the final recipient. VANs may provide a number of additional services, e.g. retransmitting documents, providing third party audit information, acting as a gateway for different transmission methods, and handling telecommunications support. Because of these and other services VANs provide, businesses frequently use a VAN even when both trading partners are using Internet-based protocols. Healthcare clearinghouses perform many of the same functions as a VAN, but have additional legal restrictions.
VANs may be operated by various entities: telecommunications companies, industry-group consortia, or a large company interacting with its suppliers and vendors.
There are key trade-offs between VANs and direct EDI, and in many instances, organizations exchanging EDI documents can in fact use both in concert, for different aspects of their EDI implementations. For example, in the U.S., the majority of EDI document exchanges use AS2, so a direct EDI setup for AS2 may make sense for a U.S.-based organization. But adding OFTP2 capabilities to communicate with a European partner may be difficult, so a VAN might make sense to handle those specific transactions, while direct EDI is used for the AS2 transactions.
In many ways, a VAN acts as a service provider, simplifying much of the setup for organizations looking to initiate EDI. Because many organizations first adopt EDI to meet a customer or partner requirement, and therefore lack in-house EDI expertise, a VAN can be a valuable asset.
However, VANs may come with high costs. VANs typically charge a per-document or even per-line-item transaction fee to process EDI transactions as a service on behalf of their customers. This is the predominant reason why many organizations also implement an EDI software solution or eventually migrate to one for some or all of their EDI.
On the other hand, implementing EDI software can be a challenging process, depending on the complexity of the use case, the technologies involved, and the availability of EDI expertise. In addition, there are ongoing maintenance requirements and updates to consider. To help address these issues, many organizations with less robust IT teams (or no IT professionals at all) work with an EDI systems integrator or managed services provider for EDI implementation and maintenance.
"EDI translation software" provides the interface between internal systems and the EDI format sent/received. For an "inbound" document, the EDI solution will receive the file (either via a value-added network or directly using protocols such as FTP or AS2), take the received EDI file (commonly referred to as an "envelope"), and validate that the trading partner who is sending the file is a valid trading partner, that the structure of the file meets the EDI standards, and that the individual fields of information conform to the agreed-upon standards. Typically, the translator will either create a file of either fixed length, variable length or XML tagged format or "print" the received EDI document (for non-integrated EDI environments). The next step is to convert/transform the file that the translator creates into a format that can be imported into a company's back-end business systems, applications or ERP. This can be accomplished by using a custom program, an integrated proprietary "mapper" or an integrated standards-based graphical "mapper," using a standard data transformation language such as XSLT. The final step is to import the transformed file (or database) into the company's back-end system.
For an "outbound" document, the process for integrated EDI is to export a file (or read a database) from a company's information systems and transform the file to the appropriate format for the translator. The translation software will then "validate" the EDI file sent to ensure that it meets the standard agreed upon by the trading partners, convert the file into "EDI" format (adding the appropriate identifiers and control structures) and send the file to the trading partner (using the appropriate communications protocol).
Another critical component of any EDI translation software is a complete "audit" of all the steps taken to move business documents between trading partners. The audit ensures that any transaction (which in reality is a business document) can be tracked so that it is not lost. Consider a retailer sending a purchase order to a supplier: if the purchase order is "lost" anywhere in the business process, the effect is devastating to both businesses. The supplier does not fulfil the order, as they have not received it, thereby losing business and damaging the business relationship with their retail client. The retailer has a stock outage, and the effect is lost sales, reduced customer service and, ultimately, lower profits.
In EDI terminology, "inbound" and "outbound" refer to the direction of transmission of an EDI document in relation to a particular system, not the direction of merchandise, money or other things represented by the document. For example, an EDI document that tells a warehouse to perform an outbound shipment is an inbound document in relation to the warehouse computer system. It is an outbound document in relation to the manufacturer or dealer that transmitted the document.
EDI and other similar technologies save a company money by providing an alternative to, or replacing, information flows that require a great deal of human interaction and paper documents. Even when paper documents are maintained in parallel with EDI exchange, e.g. printed shipping manifests, electronic exchange and the use of data from that exchange reduce the handling costs of sorting, distributing, organizing, and searching paper documents. EDI and similar technologies allow a company to take advantage of the benefits of storing and manipulating data electronically without the cost of manual entry. Another advantage of EDI is the opportunity to reduce or eliminate manual data entry errors, such as shipping and billing errors, because EDI eliminates the need to re-key documents on the destination side. One very important advantage of EDI over paper documents is the speed with which the trading partner receives and incorporates the information into their system, which greatly reduces cycle times. For this reason, EDI can be an important component of just-in-time production systems.
According to the 2008 Aberdeen report "A Comparison of Supplier Enablement around the World", only 34% of purchase orders are transmitted electronically in North America. In EMEA, 36% of orders are transmitted electronically and in APAC, 41% of orders are transmitted electronically. They also report that the average paper requisition to order costs a company $37.45 in North America, $42.90 in EMEA and $23.90 in APAC. With an EDI requisition to order, costs are reduced to $23.83 in North America, $34.05 in EMEA and $14.78 in APAC.
There are a few barriers to adopting electronic data interchange. One of the most significant barriers is the accompanying business process change. Existing business processes built around paper handling may not be suited for EDI and would require changes to accommodate automated processing of business documents. For example, a business may receive the bulk of its goods by one- or two-day shipping and all of its invoices by mail. The existing process may, therefore, assume that goods are typically received before the invoice. With EDI, the invoice will typically be sent when the goods ship and will, therefore, require a process that handles large numbers of invoices whose corresponding goods have not yet been received.
Another significant barrier is the cost in time and money in the initial setup. The preliminary expenses and time that arise from the implementation, customization and training can be costly. It is important to select the correct level of integration to match the business requirement. For a business with relatively few transactions with EDI-based partners, it may make sense for businesses to implement inexpensive "rip and read" solutions, where the EDI format is printed out in human-readable form, and people — rather than computers — respond to the transaction. Another alternative is outsourced EDI solutions provided by EDI "Service Bureaus". For other businesses, the implementation of an integrated EDI solution may be necessary as increases in trading volumes brought on by EDI force them to re-implement their order processing business processes.
The key hindrance to a successful implementation of EDI is the perception many businesses have of the nature of EDI. Many view EDI from the technical perspective that EDI is a data format; it would be more accurate to take the business view that EDI is a system for exchanging business documents with external entities, and integrating the data from those documents into the company's internal systems. Successful implementations of EDI take into account the effect externally generated information will have on their internal systems and validate the business information received. For example, allowing a supplier to update a retailer's accounts payable system without appropriate checks and balances would put the company at significant risk. Businesses new to the implementation of EDI must understand the underlying business process and apply proper judgment.
Common EDI acknowledgement formats include the X12 997 Functional Acknowledgement and TA1 Interchange Acknowledgement, the EDIFACT CONTRL message, and the AS2 Message Disposition Notification (MDN). | https://en.wikipedia.org/wiki?curid=9790
Extravehicular activity
Extravehicular activity (EVA) is any activity done by an astronaut or cosmonaut outside a spacecraft beyond the Earth's appreciable atmosphere. The term most commonly applies to a spacewalk made outside a craft orbiting Earth (such as the International Space Station). On March 18, 1965, Alexei Leonov became the first human to perform a spacewalk, exiting the capsule during the Voskhod 2 mission for 12 minutes and 9 seconds. The term also applied to lunar surface exploration (commonly known as moonwalks) performed by six pairs of American astronauts in the Apollo program from 1969 to 1972. On July 21, 1969, Neil Armstrong became the first human to perform a moonwalk, outside his lunar lander on Apollo 11 for 2 hours and 31 minutes. On the last three Moon missions astronauts also performed deep-space EVAs on the return to Earth, to retrieve film canisters from the outside of the spacecraft. Astronauts Pete Conrad, Joseph Kerwin, and Paul Weitz also used EVA in 1973 to repair launch damage to Skylab, the United States' first space station.
A "Stand-up" EVA (SEVA) is when an astronaut does not fully leave a spacecraft, but is completely reliant on the spacesuit for environmental support. Its name derives from the astronaut "standing up" in the open hatch, usually to record or assist a spacewalking astronaut.
EVAs may be either tethered (the astronaut is connected to the spacecraft; oxygen and electrical power can be supplied through an umbilical cable; no propulsion is needed to return to the spacecraft), or untethered. Untethered spacewalks were only performed on three missions in 1984 using the Manned Maneuvering Unit (MMU), and on a flight test in 1994 of the Simplified Aid For EVA Rescue (SAFER), a safety device worn on tethered U.S. EVAs.
The Soviet Union/Russia, the United States, the European Space Agency and China have conducted EVAs.
NASA planners invented the term "extravehicular activity" (abbreviated EVA) in the early 1960s for the Apollo program to land men on the Moon, because the astronauts would leave the spacecraft to collect lunar material samples and deploy scientific experiments. To support this and other Apollo objectives, the Gemini program was spun off to develop the capability for astronauts to work outside a two-man Earth-orbiting spacecraft. However, the Soviet Union was fiercely competitive in holding the early lead it had gained in crewed spaceflight, so the Soviet Communist Party, led by Nikita Khrushchev, ordered the conversion of its single-pilot Vostok capsule into a two- or three-person craft named Voskhod in order to compete with Gemini and Apollo. The Soviets were able to launch two Voskhod capsules before the U.S. was able to launch its first crewed Gemini.
The Voskhod's avionics required cooling by cabin air to prevent overheating, so an airlock was required for the spacewalking cosmonaut to exit and re-enter the cabin while it remained pressurized. By contrast, the Gemini avionics did not require air cooling, allowing the spacewalking astronaut to exit and re-enter the depressurized cabin through an open hatch. Because of this, the American and Soviet space programs developed different definitions of an EVA's duration. The Soviet (now Russian) definition begins when the outer airlock hatch is open and the cosmonaut is in vacuum, while an American EVA began when the astronaut had at least his head outside the spacecraft. The United States has since revised its definition (see below).
The first EVA was performed on March 18, 1965, by Soviet cosmonaut Alexei Leonov, who spent 12 minutes and 9 seconds outside the Voskhod 2 spacecraft. Carrying a white metal backpack containing 45 minutes' worth of breathing and pressurization oxygen, Leonov had no means to control his motion other than pulling on his tether. After the flight, he claimed this was easy, but his space suit ballooned from its internal pressure against the vacuum of space, stiffening so much that he could not activate the shutter on his chest-mounted camera.
At the end of his space walk, the suit stiffening caused a more serious problem: Leonov had to re-enter the capsule through the narrow inflatable cloth airlock. He improperly entered the airlock head-first and got stuck sideways. He could not get back in without reducing the pressure in his suit, risking "the bends". This added another 12 minutes to his time in vacuum, and he was seriously overheated by the exertion. It would be almost four years before the Soviets tried another EVA. They misrepresented to the press how difficult Leonov found it to work in weightlessness and concealed the problems encountered until after the end of the Cold War.
The first American spacewalk was performed on June 3, 1965, by Ed White from the second crewed Gemini flight, Gemini IV, for 21 minutes. White was tethered to the spacecraft, and his oxygen was supplied through an umbilical, which also carried communications and biomedical instrumentation. He was the first to control his motion in space with a Hand-Held Maneuvering Unit, which worked well but carried only enough propellant for 20 seconds. White found his tether useful for limiting his distance from the spacecraft but difficult to use for moving around, contrary to Leonov's claim. However, a defect in the capsule's hatch latching mechanism caused difficulties opening and closing the hatch, which delayed the start of the EVA and put White and his crewmate at risk of not getting back to Earth alive.
No EVAs were planned on the next three Gemini flights. The next EVA was planned to be made by David Scott on Gemini VIII, but that mission had to be aborted due to a critical spacecraft malfunction before the EVA could be conducted. Astronauts on the next three Gemini flights (Eugene Cernan, Michael Collins, and Richard Gordon), performed several EVAs, but none was able to successfully work for long periods outside the spacecraft without tiring and overheating. Cernan attempted but failed to test an Air Force Astronaut Maneuvering Unit which included a self-contained oxygen system.
On November 13, 1966, Edwin "Buzz" Aldrin became the first to successfully work in space without tiring during Gemini XII, the last Gemini mission. Aldrin worked outside the spacecraft for 2 hours and 6 minutes, in addition to two stand-up EVAs in the spacecraft hatch for an additional 3 hours and 24 minutes. Aldrin's interest in scuba diving inspired the use of underwater EVA training to simulate weightlessness, which has been used ever since to allow astronauts to practice techniques of avoiding wasted muscle energy.
On January 16, 1969, Soviet cosmonauts Aleksei Yeliseyev and Yevgeny Khrunov transferred from Soyuz 5 to Soyuz 4, which were docked together. This was the second Soviet EVA, and it would be almost another nine years before the Soviets performed their third.
American astronauts Neil Armstrong and Buzz Aldrin performed the first EVA on the lunar surface on July 21, 1969 (UTC), after landing their Apollo 11 Lunar Module spacecraft. This first Moon walk, using self-contained portable life support systems, lasted 2 hours and 36 minutes. A total of fifteen Moon walks were performed among six Apollo crews, including Charles "Pete" Conrad, Alan Bean, Alan Shepard, Edgar Mitchell, David Scott, James Irwin, John Young, Charles Duke, Eugene Cernan, and Harrison "Jack" Schmitt. Cernan was the last Apollo astronaut to step off the surface of the Moon.
Apollo 15 command module pilot Al Worden made an EVA on August 5, 1971, on the return trip from the Moon, to retrieve a film and data recording canister from the service module. He was assisted by Lunar Module Pilot James Irwin standing up in the Command Module hatch. This procedure was repeated by Ken Mattingly and Charles Duke on Apollo 16, and by Ronald Evans and Harrison Schmitt on Apollo 17.
The first EVA repairs of a spacecraft were made by Charles "Pete" Conrad, Joseph Kerwin, and Paul J. Weitz on May 26, June 7, and June 19, 1973, on the Skylab 2 mission. They restored the functionality of the launch-damaged Skylab space station by freeing a stuck solar panel, deploying a solar heating shield, and freeing a stuck circuit breaker relay. The Skylab 2 crew made three EVAs, and a total of ten EVAs were made by the three Skylab crews. They found that activities in weightlessness took about twice as long as on Earth, partly because many astronauts suffered from space sickness early in their flights.
After Skylab, no more EVAs were made by the United States until the advent of the Space Shuttle program in the early 1980s. In this period, the Soviets resumed EVAs, making four from the Salyut 6 and Salyut 7 space stations between December 20, 1977, and July 30, 1982.
When the United States resumed EVAs on April 7, 1983, astronauts started using an Extravehicular Mobility Unit (EMU) for self-contained life support independent of the spacecraft. STS-6 was the first Space Shuttle mission during which a spacewalk was conducted. Also, for the first time, American astronauts used an airlock to enter and exit the spacecraft like the Soviets. Accordingly, the American definition of EVA start time was redefined to when the astronaut switches the EMU to battery power.
China became the third country to independently carry out an EVA on September 27, 2008 during the Shenzhou 7 mission. Chinese astronaut Zhai Zhigang completed a spacewalk wearing the Chinese-developed Feitian space suit, with astronaut Liu Boming wearing the Russian-derived Orlan space suit to help him. Zhai completely exited the craft, while Liu stood by at the airlock, straddling the portal.
The first spacewalk, made by Soviet cosmonaut Alexei Leonov, was commemorated in 1965 with several Eastern Bloc stamps (see Alexei Leonov#Stamps). Since the Soviet Union did not publish details of the Voskhod spacecraft at the time, the spaceship depiction in the stamps was purely fictional.
The U.S. Post Office issued a postage stamp in 1967 commemorating Ed White's first American spacewalk. The engraved image has an accurate depiction of the Gemini IV spacecraft and White's space suit.
NASA "spacewalkers" during the Space Shuttle program were designated as EV-1, EV-2, EV-3 and EV-4 (assigned to mission specialists for each mission, if applicable).
For EVAs from the International Space Station, NASA employed a "camp-out" procedure to reduce the risk of decompression sickness. This was first tested by the Expedition 12 crew. During a camp-out, astronauts sleep overnight in the airlock prior to an EVA, lowering the air pressure to 10.2 psi, compared to the normal station pressure of 14.7 psi. Spending a night at the lower air pressure helps flush nitrogen from the body, thereby preventing "the bends". More recently, astronauts have been using the In-Suit Light Exercise protocol rather than the camp-out to prevent decompression sickness.
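Why the lower pressure helps can be illustrated with the simplified ratio used in decompression planning: dissolved nitrogen tension in the body divided by the pressure after decompression (here, the suit pressure). The sketch below is only a back-of-the-envelope illustration, not NASA's actual prebreathe model, which uses tissue half-time calculations; it assumes full tissue equilibration with the breathing gas and uses the commonly published figures of 4.3 psi for the U.S. suit's operating pressure and about 26.5% oxygen for the reduced-pressure atmosphere:

# Simplified decompression-risk ratio: tissue nitrogen tension / suit pressure.
# Assumes tissues fully equilibrate with the breathing atmosphere; real
# prebreathe planning uses tissue half-time models, not this shortcut.

def nitrogen_tension(total_pressure_psi: float, o2_fraction: float) -> float:
    """Partial pressure of nitrogen in the breathing gas, in psi."""
    return total_pressure_psi * (1.0 - o2_fraction)

SUIT_PRESSURE_PSI = 4.3  # published operating pressure of the U.S. EMU suit

# Going straight from the normal station atmosphere (14.7 psi, ~21% O2):
r_direct = nitrogen_tension(14.7, 0.21) / SUIT_PRESSURE_PSI
# After a camp-out at 10.2 psi with oxygen enriched to ~26.5%:
r_campout = nitrogen_tension(10.2, 0.265) / SUIT_PRESSURE_PSI

print(f"ratio without camp-out: {r_direct:.2f}")   # about 2.70
print(f"ratio after camp-out:   {r_campout:.2f}")  # about 1.74

A lower ratio means less dissolved nitrogen relative to the suit pressure, and therefore a lower risk of bubble formation when the astronaut decompresses into the suit.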
Erin Brockovich
Erin Brockovich (born Pattee; June 22, 1960) is an American legal clerk, consumer advocate, and environmental activist who, despite her lack of formal legal education, was instrumental in building a case against the Pacific Gas and Electric Company (PG&E) of California in 1993. Her successful lawsuit was the subject of a 2000 film, "Erin Brockovich", which starred Julia Roberts. Since then, Brockovich has become a media personality as well, hosting the TV series "Challenge America with Erin Brockovich" on ABC and "Final Justice" on Zone Reality. She is the president of Brockovich Research & Consulting. She also works as a consultant for Girardi & Keese, for the New York law firm Weitz & Luxenberg, which has a focus on personal injury claims for asbestos exposure, and for Shine Lawyers in Australia.
Brockovich was born Erin Pattee in Lawrence, Kansas, the daughter of Betty Jo (born O'Neal; 1923–2008), a journalist, and Frank Pattee (1924–2011), an industrial engineer and football player. She has two brothers, Frank Jr. and Thomas (1954–1992), and a sister, Jodie. She graduated from Lawrence High School, then attended Kansas State University in Manhattan, Kansas, and graduated with an Associate in Applied Arts degree from Wade College in Dallas, Texas. She worked as a management trainee for Kmart in 1981 but quit after a few months to enter a beauty pageant; she won Miss Pacific Coast in 1981 and left pageantry after the win. She has lived in California since 1982.
The case ("Anderson, et al. v. Pacific Gas and Electric," file BCV 00300) alleged contamination of drinking water with hexavalent chromium (also written as "chromium VI", "Cr-VI" or "Cr-6") in the southern California town of Hinkley. At the center of the case was a facility, the Hinkley compressor station, built in 1952 as a part of a natural-gas pipeline connecting to the San Francisco Bay Area. Between 1952 and 1966, PG&E used hexavalent chromium in a cooling tower system to fight corrosion. The waste water was discharged to unlined ponds at the site, and some percolated into the groundwater, affecting an area near the plant approximately . The Regional Water Quality Control Board (RWQCB) put the PG&E site under its regulations in 1968.
The case was settled in 1996 for US$333 million, the largest settlement ever paid in a direct-action lawsuit in U.S. history. Masry & Vititoe, the law firm for which Brockovich was a legal clerk, received $133.6 million of that settlement, and Brockovich herself was given a bonus of $2 million.
A study released in 2010 by the California Cancer Registry showed that cancer rates in Hinkley "remained unremarkable from 1988 to 2008". An epidemiologist involved in the study said that the 196 cases of cancer reported during the most recent survey period, 1996 through 2008, were fewer than he would have expected based on demographics and the regional rate of cancer. However, in June 2013 "Mother Jones" magazine featured a critique from the Center for Public Integrity of the epidemiologist's work on the later studies.
In later testing, average Cr-6 levels in Hinkley were recorded as 1.19 ppb, with a peak of 3.09 ppb. For comparison, the PG&E Topock Compressor Station on the California–Arizona border averaged 7.8 ppb, with peaks of 31.8 ppb, based on a PG&E Background Study.
Working with Edward L. Masry, a lawyer based in Thousand Oaks, California, Brockovich went on to participate in other anti-pollution lawsuits. One suit accused the Whitman Corporation of chromium contamination in Willits, California. Another, which listed 1,200 plaintiffs, alleged contamination near PG&E's Kettleman Hills compressor station in Kings County, California, along the same pipeline as the Hinkley site. The Kettleman suit was settled for $335 million in 2006.
In 2003, Brockovich received settlements of $430,000 from two parties and an undisclosed amount from a third party to settle her lawsuit alleging toxic mold in her Agoura Hills, California, home. After experiencing problems with mold contamination in her own home in the Conejo Valley, Brockovich became a prominent activist and educator in this area as well.
Brockovich and Masry filed suit against the Beverly Hills Unified School District in 2003, in which the district was accused of harming the health and safety of its students by allowing a contractor to operate a cluster of oil wells on campus. Brockovich and Masry alleged that 300 cancer cases were linked to the oil wells. Subsequent testing and epidemiological investigation failed to corroborate a substantial link, and Los Angeles County Superior Court Judge Wendell Mortimer granted summary judgment against the plaintiffs. In May 2007, the School District announced that it was to be paid $450,000 as reimbursement for legal expenses.
Brockovich assisted in the filing of a lawsuit against Prime Tanning Corp. of St. Joseph, Missouri, in April 2009. The lawsuit claims that waste sludge from the production of leather, containing high levels of hexavalent chromium, was distributed to farmers in northwest Missouri to use as fertilizer on their fields. The sludge is believed to be a potential cause of an abnormally high number of brain tumors (70 since 1996) around the town of Cameron, Missouri; the claim is currently being investigated by the EPA.
In June 2009, Brockovich began investigating a case of contaminated water in Midland, Texas. "Significant amounts" of hexavalent chromium were found in the water of more than 40 homes in the area, some of which have now been fitted with state-monitored filters on their water supply. Brockovich said "The only difference between here and Hinkley is that I saw higher levels here than I saw in Hinkley."
In 2012, Brockovich got involved in the mysterious case of 14 students from LeRoy, New York, who began reporting perplexing medical symptoms including tics and speech difficulty. Brockovich believed environmental pollution from the 1970 Lehigh Valley Railroad derailment was the cause and conducted testing in the area. Brockovich was supposed to return to town to present her findings, but never did; in the meantime the students' doctors determined the cause was mass psychogenic illness and that the media exposure was making it worse. No environmental causes were found after repeat testing and the students improved once the media attention died down.
In early 2016, Brockovich became involved in potential litigation against Southern California Gas for a large methane leak from its underground storage facility near the community of Porter Ranch north of Los Angeles (see Aliso Canyon gas leak).
Brockovich's work in bringing litigation against Pacific Gas and Electric was the focus of the 2000 feature film, "Erin Brockovich", starring Julia Roberts in the title role. The film was nominated for five Academy Awards: Best Actress in a Leading Role, Best Actor in a Supporting Role, Best Director, Best Picture, and Best Writing in a Screenplay Written Directly for the Screen. Roberts won the Academy Award for Best Actress for her portrayal of Erin Brockovich. Erin Brockovich herself had a cameo role as a waitress named Julia R.
Brockovich had a more extensive role in the 2012 documentary "Last Call at the Oasis", which focused on not only water pollution but also the overall state of water scarcity as it relates to water policy in the United States.
Brockovich's book, "Take It From Me: Life's a Struggle But You Can Win", was published in 2001. | https://en.wikipedia.org/wiki?curid=9799
Economy of Finland
The economy of Finland is a highly industrialised, mixed economy with a per capita output similar to that of other western European economies such as France, Germany and the United Kingdom. The largest sector of Finland's economy is services at 72.7 percent, followed by manufacturing and refining at 31.4 percent. Primary production is 2.9 percent.
With respect to foreign trade, the key economic sector is manufacturing. The largest industries are electronics (21.6 percent), machinery, vehicles and other engineered metal products (21.1 percent), forest industry (13.1 percent), and chemicals (10.9 percent). Finland has timber and several mineral and freshwater resources. Forestry, paper factories, and the agricultural sector (on which taxpayers spend around 2 billion euro annually) are politically sensitive to rural residents. The Greater Helsinki area generates around a third of GDP.
In a 2004 OECD comparison, high-technology manufacturing in Finland ranked second largest after Ireland, and knowledge-intensive services also ranked second largest after Ireland, while the slow-growth sectors – especially agriculture and low-technology manufacturing – were among the smallest. Investment was lower than expected. The overall short-term outlook was good, and GDP growth has been above that of many EU peers. Finland has the fourth largest knowledge economy in Europe, behind Sweden, Denmark and the UK. Finland tops the ranking of the World Economic Forum's Global Information Technology 2014 report for coordinated output between the business sector, scholarly production, and government assistance in information and communications technology.
Finland is highly integrated in the global economy, and international trade is a third of GDP. The European Union accounts for 60 percent of the total trade. The largest trade flows are with Germany, Russia, Sweden, the United Kingdom, the United States, the Netherlands and China. Trade policy is managed by the European Union, where Finland has traditionally been among the free trade supporters, except for agriculture. Finland is the only Nordic country to have joined the Eurozone; Denmark and Sweden have retained their traditional currencies, whereas Iceland and Norway are not members of the EU at all.
Being geographically distant from Western and Central Europe in relation to the other Nordic countries, Finland lagged behind in industrialization, apart from the production of paper, which towards the end of the nineteenth century partially replaced the export of timber solely as a raw material. As a relatively poor country, it was vulnerable to shocks to the economy, such as the great famine of 1867–1868, which wiped out 15 percent of the population.
Until the 1930s, the Finnish economy was predominantly agrarian and, as late as the 1950s, more than half the population and 40 percent of output were still in the primary sector.
Property rights were strong. While nationalization committees were set up in France and the United Kingdom, Finland avoided nationalizations. Finnish industry recovered quickly after the Second World War, and by the end of 1946 industrial output had surpassed pre-war levels. In the immediate post-war period of 1946 to 1951, industry continued to grow rapidly. Many factors contributed to this growth: war reparations, which were largely paid in manufactured products; the devaluations of the currency in 1945 and 1949, which made the dollar rise by 70% against the Finnish markka and thus boosted exports to the West; and the rebuilding of the country, which increased demand for industrial products. In 1951, the Korean War boosted exports. Finland practiced an active exchange rate policy, and devaluation was used several times to raise the competitiveness of exporting industries.
Between 1950 and 1975, Finland's industry was at the mercy of international economic trends. The fast industrial growth of 1953–1955 was followed by a period of more moderate growth starting in 1956. The causes of the deceleration were the general strike of 1956, weakened export trends, and the easing of the strict regulation of Finland's foreign trade in 1957, which compelled industry to compete against ever-toughening international challengers. An economic recession brought industrial output down by 3.4% in 1958. Industry, however, recovered quickly during the international economic boom that followed the recession. One reason for this was the devaluation of the Finnish markka, which raised the value of the US dollar by 39% against the markka.
The international economy was stable in the 1960s, and this was reflected in Finland, where industrial output grew steadily throughout the decade.
After failed experiments with protectionism, Finland eased restrictions and concluded a free trade agreement with the European Community in 1973, making its markets more competitive. Finland's industrial output declined in 1975. The decline was caused by that agreement, which subjected Finnish industry to ever-toughening international competition; a strong contraction duly followed in Finland's exports to the West. In 1976 and 1977, growth of industrial output was almost zero, but in 1978 it swung back towards strong growth, and in 1978 and 1979 industrial output grew at an above-average rate. The stimuli for this were three devaluations of the Finnish markka, which lowered the markka's value by a total of 19%. The impact of the oil crisis on Finnish industry was also alleviated by Finland's bilateral trade with the Soviet Union.
Local education markets expanded, and an increasing number of Finns also went abroad to study in the United States or Western Europe, bringing back advanced skills. Credit and investment cooperation between the state and corporations was quite common and pragmatic, though it was regarded with some suspicion. Support for capitalism was widespread; on the other hand, the communists (the Finnish People's Democratic League) received the most votes (23.2%) in the 1958 parliamentary elections. The savings rate hovered among the world's highest, at around 8%, until the 1980s. In the beginning of the 1970s, Finland's GDP per capita reached the level of Japan and the UK. Finland's economic development shared many aspects with export-led Asian countries. The official policy of neutrality enabled Finland to trade with both Western and Comecon markets. Significant bilateral trade was conducted with the Soviet Union, but this did not grow into a dependence.
Like the other Nordic countries, Finland has liberalized its system of economic regulation since the late 1980s. Financial and product market regulations were modified, some state enterprises were privatized, and some tax rates were altered.
In 1991, the Finnish economy fell into a severe recession. This was caused by a combination of economic overheating (largely due to a change in the banking laws in 1986 which made credit much more accessible), depressed markets with key trading partners (particularly the Swedish and Soviet markets) as well as local markets, slow growth with other trading partners, and the disappearance of Soviet bilateral trade. Stock market and housing prices declined by 50%. Much of the growth in the 1980s had been based on debt financing, and when the defaults began rolling in, GDP declined by 13% and unemployment increased from virtually full employment to one fifth of the workforce. The debt defaults led to a savings and loan crisis, and the crisis was amplified by the trade unions' initial opposition to any reforms. Politicians struggled to cut spending, and the public debt doubled to around 60% of GDP. A total of over 10 billion euros was used to bail out failing banks, which led to banking sector consolidation.
After devaluations, the depression bottomed out in 1993.
Finland joined the European Union in 1995. The central bank was given an inflation-targeting mandate until Finland joined the euro zone. The growth rate has since been one of the highest of OECD countries and Finland has topped many indicators of national performance.
Finland was one of the 11 countries joining the third phase of the Economic and Monetary Union of the European Union, adopting the euro as the country's currency, on 1 January 1999. The national currency markka (FIM) was withdrawn from circulation and replaced by the euro (EUR) at the beginning of 2002.
Finland's climate and soils make growing crops a particular challenge. The country lies between 60° and 70° north latitude - as far north as Alaska - and has severe winters and relatively short growing seasons that are sometimes interrupted by frosts. However, because the Gulf Stream and the North Atlantic Drift Current moderate the climate, and because of the relatively low elevation of the land area, Finland contains half of the world's arable land north of 60° north latitude. In response to the climate, farmers have relied on quick-ripening and frost-resistant varieties of crops. Most farmland had originally been either forest or swamp, and the soil had usually required treatment with lime and years of cultivation to neutralise excess acid and to develop fertility. Irrigation was generally not necessary, but drainage systems were often needed to remove excess water.
Until the late nineteenth century, Finland's isolation required that most farmers concentrate on producing grains to meet the country's basic food needs. In the fall, farmers planted rye; in the spring, southern and central farmers started oats, while northern farmers seeded barley. Farms also grew small quantities of potatoes, other root crops, and legumes. Nevertheless, the total area under cultivation was still small. Cattle grazed in the summer and consumed hay in the winter. Essentially self-sufficient, Finland engaged in very limited agricultural trade.
This traditional, almost autarkic, production pattern shifted sharply during the late nineteenth century, when inexpensive imported grain from Russia and the United States competed effectively with local grain. At the same time, rising domestic and foreign demand for dairy products and the availability of low-cost imported cattle feed made dairy and meat production much more profitable. These changes in market conditions induced Finland's farmers to switch from growing staple grains to producing meat and dairy products, setting a pattern that persisted into the late 1980s.
In response to the agricultural depression of the 1930s, the government encouraged domestic production by imposing tariffs on agricultural imports. This policy enjoyed some success: the total area under cultivation increased, and farm incomes fell less sharply in Finland than in most other countries. Barriers to grain imports stimulated a return to mixed farming, and by 1938 Finland's farmers were able to meet roughly 90 percent of the domestic demand for grain.
The disruptions caused by the Winter War and the Continuation War caused further food shortages, especially when Finland ceded territory, including about one-tenth of its farmland, to the Soviet Union. The experiences of the depression and the war years persuaded the Finns to secure independent food supplies to prevent shortages in future conflicts.
After the war, the first challenge was to resettle displaced farmers. Most refugee farmers were given farms that included some buildings and land that had already been in production, but some had to make do with "cold farms," that is, land not in production that usually had to be cleared or drained before crops could be sown. The government sponsored large-scale clearing and draining operations that expanded the area suitable for farming. As a result of the resettlement and land-clearing programs, the area under cultivation expanded by about 450,000 hectares, reaching about 2.4 million hectares by the early 1960s. Finland thus came to farm more land than ever before, an unusual development in a country that was simultaneously experiencing rapid industrial growth.
During this period of expansion, farmers introduced modern production practices. The widespread use of modern inputs—chemical fertilisers and insecticides, agricultural machinery, and improved seed varieties—sharply improved crop yields. Yet the modernisation process again made farm production dependent on supplies from abroad, this time on imports of petroleum and fertilisers. By 1984 domestic sources of energy covered only about 20 percent of farm needs, while in 1950 domestic sources had supplied 70 percent of them. In the aftermath of the oil price increases of the early 1970s, farmers began to return to local energy sources such as firewood. The existence of many farms that were too small to allow efficient use of tractors also limited mechanisation. Another weak point was the existence of many fields with open drainage ditches needing regular maintenance; in the mid-1980s, experts estimated that half of the cropland needed improved drainage works. At that time, about 1 million hectares had underground drainage, and agricultural authorities planned to help install such works on another million hectares. Despite these shortcomings, Finland's agriculture was efficient and productive—at least when compared with farming in other European countries.
Forests play a key role in the country's economy, making it one of the world's leading wood producers and providing raw materials at competitive prices for the crucial wood-processing industries. As in agriculture, the government has long played a leading role in forestry, regulating tree cutting, sponsoring technical improvements, and establishing long-term plans to ensure that the country's forests continue to supply the wood-processing industries.
Finland's wet climate and rocky soils are ideal for forests. Tree stands do well throughout the country, except in some areas north of the Arctic Circle. In 1980 the forested area totaled about 19.8 million hectares, providing 4 hectares of forest per capita—far above the European average of about 0.5 hectares. The proportion of forest land varied considerably from region to region. In the central lake plateau and in the eastern and northern provinces, forests covered up to 80 percent of the land area, but in areas with better conditions for agriculture, especially in the southwest, forests accounted for only 50 to 60 percent of the territory. The main commercial tree species—pine, spruce, and birch—supplied raw material to the sawmill, pulp, and paper industries. The forests also produced sizable aspen and alder crops.
The heavy winter snows and the network of waterways were used to move logs to the mills. Loggers were able to drag cut trees over the winter snow to the roads or water bodies. In the southwest, the sledding season lasted about 100 days per year; the season was even longer to the north and the east. The country's network of lakes and rivers facilitated log floating, a cheap and rapid means of transport. Each spring, crews floated the logs downstream to collection points; tugs towed log bundles down rivers and across lakes to processing centers. The waterway system covered much of the country, and by the 1980s Finland had extended roadways and railroads to areas not served by waterways, effectively opening up all of the country's forest reserves to commercial use.
Forestry and farming were closely linked. During the twentieth century, government land redistribution programmes had made forest ownership widespread, allotting forestland to most farms. In the 1980s, private farmers controlled 35 percent of the country's forests; other persons held 27 percent; the government, 24 percent; private corporations, 9 percent; and municipalities and other public bodies, 5 percent. The forestlands owned by farmers and by other people—some 350,000 plots—were the best, producing 75 to 80 percent of the wood consumed by industry; the state owned much of the poorer land, especially that in the north.
The ties between forestry and farming were mutually beneficial. Farmers supplemented their incomes with earnings from selling their wood, caring for forests, or logging; forestry made many otherwise marginal farms viable. At the same time, farming communities maintained roads and other infrastructure in rural areas, and they provided workers for forest operations. Indeed, without the farming communities in sparsely populated areas, it would have been much more difficult to continue intensive logging operations and reforestation in many prime forest areas.
The Ministry of Agriculture and Forestry has carried out forest inventories and drawn up silvicultural plans. According to surveys, between 1945 and the late 1970s foresters had cut trees faster than the forests could regenerate them. Nevertheless, between the early 1950s and 1981, Finland was able to boost the total area of its forests by some 2.7 million hectares and to increase forest stands under 40 years of age by some 3.2 million hectares. Beginning in 1965, the country instituted plans that called for expanding forest cultivation, draining peatland and waterlogged areas, and replacing slow-growing trees with faster-growing varieties. By the mid-1980s, the Finns had drained 5.5 million hectares, fertilized 2.8 million hectares, and cultivated 3.6 million hectares. Thinning increased the share of trees that would produce suitable lumber, while improved tree varieties increased productivity by as much as 30 percent.
Comprehensive silvicultural programmes had made it possible for the Finns simultaneously to increase forest output and to add to the amount and value of the growing stock. By the mid-1980s, Finland's forests produced nearly 70 million cubic meters of new wood each year, considerably more than was being cut. During the postwar period, the annual cut increased by about 120 percent to about 50 million cubic meters. Wood burning fell to one-fifth the level of the immediate postwar years, freeing up wood supplies for the wood-processing industries, which consumed between 40 million and 45 million cubic meters per year. Indeed, industry demand was so great that Finland needed to import 5 million to 6 million cubic meters of wood each year.
To maintain the country's comparative advantage in forest products, Finnish authorities moved to raise lumber output toward the country's ecological limits. In 1984 the government published the Forest 2000 plan, drawn up by the Ministry of Agriculture and Forestry. The plan aimed at increasing forest harvests by about 3 percent per year, while conserving forestland for recreation and other uses. It also called for enlarging the average size of private forest holdings, increasing the area used for forests, and extending forest cultivation and thinning. If successful, the plan would make it possible to raise wood deliveries by roughly one-third by the end of the twentieth century. Finnish officials believed that such growth was necessary if Finland was to maintain its share in world markets for wood and paper products.
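The plan's two headline figures are roughly consistent with each other, since 3 percent annual growth compounds to about a one-third increase over a decade. A quick check (a minimal sketch; the year horizons are illustrative, not taken from the plan):

# Compound-growth check on the Forest 2000 plan's figures:
# 3% annual growth in harvests versus the projected one-third increase.
growth_rate = 0.03
for years in (10, 15):
    factor = (1 + growth_rate) ** years
    print(f"after {years} years: harvests x{factor:.2f}")
# after 10 years: harvests x1.34  (roughly one-third more)
# after 15 years: harvests x1.56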
Since the 1990s, Finnish industry, which for centuries had relied on the country's vast forests, has become increasingly dominated by electronics and services, as globalization led to a decline of more traditional industries. Outsourcing resulted in more manufacturing being transferred abroad, with Finnish-based industry focusing to a greater extent on R&D and hi-tech electronics.
The Finnish electronics and electrotechnics industry relies on heavy investment in R&D, and has been accelerated by the liberalisation of global markets. Electrical engineering started in the late 19th century with generators and electric motors built by Gottfried Strömberg, whose firm is now part of the ABB Group. Other Finnish companies – such as Instru, Vaisala and Neles (now part of Metso) – have succeeded in areas such as industrial automation and medical and meteorological technology. Nokia was once a world leader in mobile telecommunications.
Finland has an abundance of minerals, but many large mines have closed down, and most raw materials are now imported. For this reason, companies now tend to focus on high added-value processing of metals. The exports include steel, copper, chromium, zinc and nickel, and finished products such as steel roofing and cladding, welded steel pipes, copper pipe and coated sheets. Outokumpu is known for developing the flash smelting process for copper production and stainless steel.
With regard to vehicles, the Finnish motor industry consists mostly of manufacturers of tractors (Valtra, formerly Valmet tractor), forest machines (e.g. Ponsse), military vehicles (Sisu, Patria), trucks (Sisu Auto), buses, and Valmet Automotive, a contract manufacturer whose factory in Uusikaupunki produces Mercedes-Benz cars. Shipbuilding is an important industry: the world's largest cruise ships are built in Finland, and the Finnish company Wärtsilä produces the world's largest diesel engines, with a market share of 47%. Finland also produces railway rolling stock.
The manufacturing industry is a significant employer of about 400,000 people.
The chemical industry is one of Finland's largest industrial sectors, with its roots in tar making in the 17th century. It produces an enormous range of products for the use of other industrial sectors, especially forestry and agriculture. In addition, it produces plastics, chemicals, paints, oil products, pharmaceuticals, environmental products, biotech products and petrochemicals. At the beginning of this millennium, biotechnology was regarded as one of the most promising high-tech sectors in Finland; in 2006 it was still considered promising, even though it had not yet become "the new Nokia".
Forest products have been the major export industry in the past, but diversification and growth of the economy have reduced their share. In the 1970s, the pulp and paper industry accounted for half of Finnish exports. Although this share has shrunk, pulp and paper is still a major industry with 52 sites across the country, and several large international corporations in this business are based in Finland. Stora Enso and UPM were placed No. 1 and No. 3 by output in the world, both producing more than ten million tons. M-real and Myllykoski also appear on the top-100 list.
Finland's energy supply is divided as follows: nuclear power - 26%, net imports - 20%, hydroelectric power - 16%, combined production district heat - 18%, combined production industry - 13%, condensing power - 6%.
One half of all the energy consumed in Finland goes to industry, one fifth to heating buildings and one fifth to transport. Lacking indigenous fossil fuel resources, Finland has been an energy importer. This might change in the future, since Finland is currently building its fifth nuclear reactor and has approved building permits for its sixth and seventh. There are some uranium resources in Finland, but to date no commercially viable deposits have been identified for exclusive mining of uranium. However, permits have been granted to Talvivaara to produce uranium from the tailings of their nickel-cobalt mine.
Notable companies in Finland include Nokia, the former market leader in mobile telephony; Stora Enso, the largest paper manufacturer in the world; Neste Oil, an oil refining and marketing company; UPM-Kymmene, the third largest paper manufacturer in the world; Aker Finnyards, the manufacturer of the world's largest cruise ships (such as Royal Caribbean's "Freedom of the Seas"); Rovio Mobile, a video game developer most notable for creating Angry Birds; KONE, a manufacturer of elevators and escalators; Wärtsilä, a producer of power plants and ship engines; and Finnair, the largest Helsinki-Vantaa based international airline. Additionally, many Nordic design firms are headquartered in Finland, including the Fiskars-owned Iittala Group; Artek, a furniture design firm co-founded by Alvar Aalto; and Marimekko, made famous by Jacqueline Kennedy Onassis. Finland has sophisticated financial markets comparable to the UK in efficiency. Though foreign investment is not as high as in some other European countries, the largest foreign-headquartered companies include names such as ABB, Tellabs, Carlsberg, and Siemens.
Around 70–80% of the equity quoted on the Helsinki Stock Exchange is owned by foreign-registered entities. The larger companies get most of their revenue from abroad, and the majority of their employees work outside the country. Cross-shareholding has been abolished and there is a trend towards an Anglo-Saxon style of corporate governance. However, only around 15% of residents have invested in the stock market, compared to 20% in France and 50% in the US.
Between 2000 and 2003, early-stage venture capital investments relative to GDP were 8.5 percent, against 4 percent in the EU and 11.5 percent in the US. Later-stage investments fell to the EU median. Invest in Finland and other programs attempt to attract investment. In 2000, FDI from Finland to overseas was 20 billion euro, and from overseas to Finland 7 billion euro. Acquisitions and mergers have internationalized business in Finland.
Although some privatization has been carried out gradually, there are still several state-owned companies of importance. The government keeps them as strategic assets or because they are natural monopolies. These include e.g. Neste (oil refining and marketing), VR (rail), Finnair, VTT (research) and Posti Group (mail). Depending on the strategic importance, the government may hold either 100%, 51% or less than 50% of the stock. Most of these have been transformed into regular limited companies, but some are quasi-governmental ("liikelaitos"), with debt backed by the state, as in the case of VTT.
Finland's income is generated by the approximately 1.8 million private sector workers, who earned an average of 25.1 euro per hour (before the median 60% tax wedge) in 2007. According to a 2003 report, residents worked on average around 10 years for the same employer and held around 5 different jobs over a lifetime. 62 percent worked for small and medium-sized enterprises. The female employment rate was high, and gender segregation in career choices was higher than in the US. In 1999, the part-time work rate was one of the lowest in the OECD.
Future liabilities are dominated by the pension deficit. Unlike in Sweden, where pension savers can manage their investments, in Finland employers choose a pension fund for the employee. The pension funding rate is higher than in most Western European countries, but still only a portion of it is funded, and the figures exclude health insurance and other unaccounted-for promises. Directly held public debt was reduced to around 32 percent of GDP by 2007. In 2007, the average household savings rate was -3.8 percent and household debt stood at 101 percent of annual disposable income, a typical level in Europe.
In 2008, the OECD reported that "the gap between rich and poor has widened more in Finland than in any other wealthy industrialised country over the past decade" and that "Finland is also one of the few countries where inequality of incomes has grown between the rich and the middle-class, and not only between rich and poor."
In 2006, there were 2,381,500 households with an average size of 2.1 people; 40 percent of households consisted of a single person, 32 percent of two people and 28 percent of three or more people. There were 1.2 million residential buildings in Finland, and the average residential space was 38 square metres per person. The average residential property (without land) cost 1,187 euro per square metre and residential land 8.6 euro per square metre. Consumer energy prices were 8–12 euro cents per kilowatt hour. 74 percent of households had a car; there were 2.5 million cars and 0.4 million other vehicles.
Around 92 percent of households had mobile phones and 58 percent an Internet connection at home. The average total household consumption was 20,000 euro, of which housing accounted for around 5,500 euro, transport around 3,000 euro, food and non-alcoholic beverages around 2,500 euro, and recreation and culture around 2,000 euro. Upper-level white-collar households (409,653) consumed an average of 27,456 euro, lower-level white-collar households (394,313) 20,935 euro, and blue-collar households (471,370) 19,415 euro.
The unemployment rate was 10.3% in 2015, and the employment rate (persons aged 15–64) was 66.8%. Unemployment security benefits for those seeking employment are at an average OECD level. The labor administration funds labour market training for unemployed job seekers; the training, which is often vocational, can last up to 6 months and aims to improve the participant's chances of finding employment.
The American economist and "The New York Times" columnist Paul Krugman has suggested that the short term costs of euro membership to the Finnish economy outweigh the large gains caused by greater integration with the European economy. Krugman notes that Sweden, which has yet to join the single currency, had similar rates of growth compared to Finland for the period since the introduction of the euro.
Membership of the euro protects Finland from currency fluctuations, which is particularly important for small member states of the European Union like Finland that are highly integrated into the larger European economy. If Finland had retained its own currency, unpredictable exchange rates would prevent the country from selling its products at competitive prices on the European market. In fact, business leaders in Sweden, which is obliged to join the euro when its economy has converged with the eurozone, are almost universal in their support for joining the euro. Although Sweden's currency is not officially pegged to the euro like Denmark's currency, the Swedish government maintains an unofficial peg. This exchange rate policy has in the short term benefited the Swedish economy in two ways: (1) much of Sweden's European trade is already denominated in euros and therefore bypasses any currency fluctuation and exchange rate losses, and (2) it allows Sweden's non-euro-area exports to remain competitive by dampening any pressure from the financial markets to increase the value of the currency.
Maintaining this balance has allowed the Swedish government to borrow on the international financial markets at record low interest rates and allowed the Swedish central bank to quantitatively ease into a fundamentally sound economy. This has led Sweden's economy to prosper at the expense of less sound economies that were impacted by the 2008 financial crisis. Sweden's economic performance has therefore been slightly better than Finland's since the financial crisis of 2008. Much of this disparity has, however, been due to the economic dominance of Nokia, Finland's largest company and its only major multinational. Nokia supported and greatly benefited from the euro and the European single market, particularly from a common European digital mobile phone standard (GSM), but it failed to adapt when the market shifted to mobile computing.
One reason for the popularity of the euro in Finland is the memory of a 'great depression' which began in 1990, with Finland not regaining its competitiveness until approximately a decade later, around the time it joined the single currency. Some American economists like Paul Krugman claim not to understand the benefits of a single currency and allege that poor economic performance is the result of membership of the single currency. These economists do not, however, advocate separate currencies for the states of the United States, many of which have quite disparate economies.
Finnish politicians have often emulated the other Nordic countries and the Nordic model. The Nordics have been free-trading and relatively welcoming to skilled migrants for over a century, though in Finland immigration is a relatively new phenomenon. This is due largely to Finland's less hospitable climate and the fact that the Finnish language shares roots with none of the major world languages, making it more challenging than average for most people to learn. The level of protection in commodity trade has been low, except for agricultural products.
As an economic environment, Finland's judiciary is efficient and effective. Finland is highly open to investment and free trade, and it has top levels of economic freedom in many areas, although there is a heavy tax burden and an inflexible job market. Finland is ranked 16th (ninth in Europe) in the 2008 Index of Economic Freedom. Recently, Finland has topped the patents-per-capita statistics, and overall productivity growth has been strong in areas such as electronics. While the manufacturing sector is thriving, the OECD points out that the service sector would benefit substantially from policy improvements. The IMD World Competitiveness Yearbook 2007 ranked Finland 17th most competitive, next to Germany and lowest of the Nordics, while the World Economic Forum report ranked Finland the most competitive country. Finland is one of the most fiscally responsible EU countries.
Economists attribute much growth to reforms in the product markets. According to the OECD, only four EU-15 countries have less regulated product markets (the UK, Ireland, Denmark and Sweden) and only one has less regulated financial markets (Denmark). The Nordic countries were pioneers in liberalising energy, postal, and other markets in Europe. The legal system is clear, and business bureaucracy is lighter than in most countries. For instance, starting a business takes an average of 14 days, compared to the world average of 43 days and Denmark's average of 6 days. Property rights are well protected and contractual agreements are strictly honored. Finland is rated one of the least corrupt countries in the Corruption Perceptions Index and 13th in the Ease of Doing Business Index, which indicates exceptional ease in trading across borders (5th), enforcing contracts (7th), and closing a business (5th), and exceptional difficulty in employing workers (127th) and paying taxes (83rd).
According to the OECD, Finland's job market is the least flexible of the Nordic countries. Finland increased job market regulation in the 1970s to provide stability to manufacturers. In contrast, during the 1990s, Denmark liberalised its job market, Sweden moved to more decentralised contracts, whereas Finnish trade unions blocked many reforms. Many professions have legally recognized industry-wide contracts that lay down common terms of employment including seniority levels, holiday entitlements, and salary levels, usually as part of a Comprehensive Income Policy Agreement. Those who favor less centralized labor market policies consider these agreements bureaucratic, inflexible, and along with tax rates, a key contributor to unemployment and distorted prices. Centralized agreements may hinder structural change as there are fewer incentives to acquire better skills, although Finland already enjoys one of the highest skill-levels in the world.
Tax is collected mainly from municipal income tax, state income tax, state value added tax, customs fees, corporate taxes and special taxes. There are also property taxes, but municipal income tax pays for most municipal expenses. Taxation is conducted by a state agency, Verohallitus, which collects income taxes from each paycheck and then afterward either pays the difference between tax liability and taxes paid as a tax rebate or collects the shortfall as tax arrears. Municipal income tax is a flat tax of nominally 15–20%, with deductions applied, and directly funds the municipality (a city or rural locality). The state income tax is a progressive tax; low-income individuals do not necessarily pay any. The state transfers some of its income as state support to municipalities, particularly the poorer ones. Additionally, the state churches – the Finnish Evangelical Lutheran Church and the Finnish Orthodox Church – are integrated into the taxation system in order to tax their members.
The middle-income worker's tax wedge is 46%, and effective marginal tax rates are very high. Value-added tax is 24% for most items. Capital gains tax is 30–34% and corporate tax is 20%, about the EU median. Property taxes are low, but there is a transfer tax (1.6% for apartments or 4% for individual houses) for home buyers. There are high excise taxes on alcoholic beverages, tobacco, automobiles and motorcycles, motor fuels, lotteries, sweets and insurances. For instance, McKinsey estimates that a worker has to earn around 1,600 euro for another person to receive 400 euro from a service – restricting service supply and demand – though some taxation is avoided in the black market and through a self-service culture. Another study, by Karlson, Johansson & Johnsson, estimates that the percentage of the buyer's income entering the service vendor's wallet (the inverted tax wedge) is slightly over 15%, compared to 10% in Belgium, 25% in France, 40% in Switzerland and 50% in the United States. Tax cuts have been on every post-depression government's agenda, and the overall tax burden is now around 43% of GDP, compared to 51.1% in Sweden, 34.7% in Germany, 33.5% in Canada, and 30.5% in Ireland.
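The arithmetic behind such estimates is a chain of taxes applied in sequence: the buyer's earnings are taxed, the sale carries VAT, and the vendor's income is taxed again. Below is a minimal sketch using the 46% middle-income tax wedge and 24% VAT quoted above; it treats the vendor as a sole trader whose revenue is taxed like wages and ignores deductions, reduced VAT rates and employer-side details, which is why studies using different assumed rates (such as the 15% Karlson, Johansson & Johnsson figure) arrive at different shares:

TAX_WEDGE = 0.46  # middle-income earner's tax wedge, as quoted above
VAT = 0.24        # standard value-added tax rate

def vendor_take_home(buyer_gross_earnings: float) -> float:
    """Euros reaching the service vendor per amount of the buyer's gross pay."""
    buyer_net = buyer_gross_earnings * (1 - TAX_WEDGE)  # buyer's pay after taxes
    price_excl_vat = buyer_net / (1 + VAT)              # vendor's pre-tax revenue
    return price_excl_vat * (1 - TAX_WEDGE)             # vendor's pay after taxes

gross = 1600.0
net = vendor_take_home(gross)
print(f"Of {gross:.0f} euro earned by the buyer, the vendor keeps {net:.0f} euro "
      f"({100 * net / gross:.0f}%)")
# prints roughly 376 euro (about 24%), close to McKinsey's 1,600-for-400 figure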
State and municipal politicians have struggled to cut public consumption, which is very high at 51.7% of GDP, compared to 56.6% in Sweden, 46.9% in Germany, 39.3% in Canada, and 33.5% in Ireland. Much of the tax revenue is spent on public sector employees, of whom there are 124,000 state employees and 430,000 municipal employees. That is 113 per 1,000 residents (over a quarter of the workforce), compared to 74 in the US, 70 in Germany, and 42 in Japan (8% of the workforce). The Economist Intelligence Unit's ranking for Finland's e-readiness is high, at 13th, compared to 1st for the United States, 3rd for Sweden, 5th for Denmark, and 14th for Germany. Also, early and generous retirement schemes have contributed to high pension costs. Social spending such as health or education is around the OECD median, as are social transfers. In 2001, Finland's outsourced proportion of spending was below Sweden's and above that of most other Western European countries. Finland's health care is more bureaucrat-managed than in most Western European countries, though many people use private insurance or cash to visit private clinics. Some reforms toward a more equal marketplace were made in 2007–2008. In education, child care, and elderly care, private competition is bottom-ranking compared to Sweden and most other Western countries. Some public monopolies, such as Alko, remain, and are sometimes challenged by the European Union. The state has a programme where the number of jobs decreases by attrition: for every two retirees, only one new employee is hired.
Finland's export-dependent economy continuously adapted to the world market; in doing so, it changed Finnish society as well. The prolonged worldwide boom, beginning in the late 1940s and lasting until the first oil crisis in 1973, was a challenge that Finland met and from which it emerged with a highly sophisticated and diversified economy, including a new occupational structure. Some sectors kept a fairly constant share of the work force. Transportation and construction, for example, each accounted for between 7 and 8 percent in both 1950 and 1985, and manufacturing's share rose only from 22 to 24 percent. However, both the commercial and the service sectors more than doubled their share of the work force, accounting, respectively, for 21 and 28 percent in 1985. The greatest change was the decline of the economically active population employed in agriculture and forestry, from approximately 50 percent in 1950 to 10 percent in 1985. The exodus from farms and forests provided the labour power needed for the growth of other sectors.
Studies of Finnish mobility patterns since World War II have confirmed the significance of this exodus. Sociologists have found that people with a farming background were present in other occupations to a considerably greater extent in Finland than in other West European countries. Finnish data for the early 1980s showed that 30 to 40 percent of those in occupations not requiring much education were the children of farmers, as were about 25 percent in upper-level occupations, a rate two to three times that of France and noticeably higher than that even of neighboring Sweden. Finland also differed from the other Nordic countries in that the generational transition from the rural occupations to white-collar positions was more likely to be direct, bypassing manual occupations.
The most important factor determining social mobility in Finland was education. Children who attained a higher level of education than their parents were often able to rise in the hierarchy of occupations. A tripling or quadrupling in any one generation of the numbers receiving schooling beyond the required minimum reflected the needs of a developing economy for skilled employees. Obtaining advanced training or education was easier for some than for others, however, and the children of white-collar employees still were more likely to become white-collar employees themselves than were the children of farmers and blue-collar workers. In addition, children of white-collar professionals were more likely than not to remain in that class.
The economic transformation also altered income structure. A noticeable shift was the reduction in wage differentials. The increased wealth produced by an advanced economy was distributed to wage earners via the system of broad income agreements that evolved in the postwar era. Organized sectors of the economy received wage hikes even greater than the economy's growth rate. As a result, blue-collar workers' income came, in time, to match more closely the pay of lower level white-collar employees, and the income of the upper middle class declined in relation to that of other groups.
The long trend of growth in living standards paired with diminishing differences between social classes was dramatically reversed during the 1990s. For the first time in Finland's history, income differences grew sharply. This change was driven mostly by the growth of capital income accruing to the wealthiest segment of the population. | https://en.wikipedia.org/wiki?curid=10712 |
Telecommunications in Finland
Finland has excellent communications, and is considered one of the most advanced information societies in the world.
Telephones – main lines in use: 2.368 million (2004)
Telephones – mobile cellular: 4.988 million (2004)
Telephone system: General Assessment: Modern system with excellent service.
Domestic: A digital fiber-optic fixed-line network and an extensive cellular network serve domestic needs. There are three major cellular network providers with independent networks (Elisa Oyj, Telia Finland and DNA Oyj), and several smaller providers, which may have independent networks in smaller areas but are generally dependent on rented networks. There is a great variety of cellular providers and contracts, and competition is particularly fierce.
International: Country code – 358; 2 submarine cables (Finland–Estonia and Finland–Sweden); satellite earth stations – access to Intelsat transmission service via a Swedish satellite earth station, 1 Inmarsat (Atlantic and Indian Ocean regions); note – Finland shares the Inmarsat earth station with the other Nordic countries (Denmark, Iceland, Norway, and Sweden).
There is a national public radio and television company, Yleisradio (Yle), which is funded by television license fees, and two major private media companies with national TV channels, Alma Media and Sanoma. Yle maintains four TV channels: YLE1, YLE2, Teema and FST5. There are four commercial national channels: Alma Media has MTV3 and SubTV, and Sanoma has Nelonen and Jim. There are also numerous pay-TV channels. News Corporation entered the market in 2012 with the Fox channel, which was preceded by the Finnish-owned SuomiTV.
Radio broadcast stations: AM 2, FM 186, shortwave 1 (1998)
Television broadcast stations: 120 (plus 431 repeaters) (1999)
Television has been broadcast exclusively in digital form (DVB-T) since August 2007; cable broadcasts became digital-only (DVB-C) in 2008.
Internet country code: .fi
Internet hosts: 1,503,976 (2005)
Internet users: 3.286 million (2005)
In 2011, there were over 3.5 million broadband subscriptions in Finland, and the number of both them and mobile data transmission subscriptions continued to grow. | https://en.wikipedia.org/wiki?curid=10713 |
Transport in Finland
The transport system of Finland is well developed. Factors affecting traffic include the sparse population and long distances between towns and cities, and the cold climate, with waterways freezing over and land covered in snow in winter.
The extensive road system carries most internal cargo and passenger traffic. The public road network comprises main roads and a far larger set of other public roads, along with a motorway network supplemented by additional stretches reserved for motor traffic only. Road network expenditure of around €1 billion is paid for with vehicle and fuel taxes, which raise around €1.5 billion and €1 billion, respectively.
The main international passenger gateway is Helsinki-Vantaa Airport, with over 20 million passengers in 2018. About 25 airports have scheduled passenger services. They are financed by competitive fees, and rural airports may be subsidized. The Helsinki-Vantaa-based Finnair (known for its Asia-focused strategy) and Nordic Regional Airlines provide air services both domestically and internationally. Helsinki has an optimal location for great circle routes between Western Europe and the Far East, so many international travelers visit Helsinki on a stop-over between Asia and Europe.
Despite the low population density, taxpayers spend around €350 million annually on maintaining railway tracks, even to many rural towns. Operations have been opened to competition, but currently the only operator is the state-owned VR. It has a 5 percent passenger market share (of which 80 percent are urban trips in Greater Helsinki) and a 25 percent cargo market share. Helsinki has an urban rail network.
Icebreakers keep the 23 ports open all year round. There is passenger traffic from Helsinki and Turku, which have ferry connections to Tallinn, Mariehamn, Sweden and several other destinations.
Road transport in Finland is the most popular method of transportation, particularly in rural areas that the railway network does not reach. The public road network is extensive, most of it paved, with a core network of main roads.
64% of all traffic on public roads takes place on main roads, which are divided into class I ("valtatiet") and class II ("kantatiet") main roads. Motorways have been constructed in the country since the 1960s, but they remain fairly rare because traffic volumes are not large enough to justify their construction. The longest stretches are Helsinki–Turku (Main road 1/E18), Helsinki–Tampere (Main road 3/E12), Helsinki–Heinola (Main road 4/E75), and Helsinki–Vaalimaa (Main road 7/E18). The world's northernmost motorway is also located in Finland, between Keminmaa and Tornio (Main road 29/E8).
There are no toll roads in Finland.
Speed limits change depending on the time of year; the maximum speed limit on motorways is 120 km/h in the summer and 100 km/h in the winter. The main roads usually have speed limits of either 100 km/h or 80 km/h. Speed limits in urban areas range between 30 km/h and 60 km/h. If no other speed limit is signposted, the general speed limit in Finland is 50 km/h in built-up areas and 80 km/h outside them.
There are 4.95 million registered vehicles, of which 2.58 million are cars. The average age of cars (museum cars excluded) is over ten years, and cars are typically scrapped at around 24 years of age. In 2015, about 123,000 new vehicles were registered in Finland, and some 550,000–600,000 used automobiles are sold each year. During 2011–2014 the best-selling car brand was Volkswagen, with a 12% market share of new cars.
Coaches are mainly operated by private companies and provide services widely across the country. There is a large network of ExpressBus services with connections to all major cities and the most important rural areas as well as a burgeoning OnniBus 'cheap bus' network. Coach stations are operated by Matkahuolto.
Local bus services inside cities and towns have often been tightly regulated by the councils. Many councils also have their own bus operators, such as Tampere City Transit (TKL), which operates some bus lines on a commercial basis in competition with privately owned providers. Regional bus lines have been regulated by the provincial administration to protect old transit companies, leading to cartel situations such as TLO in the Turku region; however, strong regional regulating bodies also exist, such as the Helsinki Regional Transport Authority (HSL/HRT), which puts its routes out to tender, and this model will become the norm after a transitional period during the 2010s.
In 2015, the number of road traffic accidents involving personal injury was 5,164, and 266 people were killed in them. The number of road deaths per million inhabitants is just below the European average. Traffic safety has improved significantly since the early 1970s, when more than a thousand people died in road traffic every year.
Sections 30–31 of the Municipal Act have provided for a right to local referendums since 1990. Citizens of Turku collected 15,000 signatures in one month for a referendum against an underground car park, but politicians, whose election financing from the parking company was undisclosed, disregarded the citizens' opinion. According to the International Association of Public Transport (UITP), parking places are among the most effective ways to promote private car use in a city, and many European cities have therefore cancelled expensive underground car parks since the 1990s. Recommended EU actions include developing guidance on concrete measures for internalising the external costs of car traffic in urban areas as well. In Finland, shops routinely offer free parking for private cars.
The Finnish railway network is built to the 1,524 mm broad gauge, and a substantial portion of the track is electrified. In 2010, passengers made 13.4 million long-distance journeys and 55.5 million trips in local traffic; in the same year, a large volume of freight was also transported.
Passenger trains are operated by the state-owned VR. They serve all the major cities and many rural areas, complemented by bus connections where needed. Most passenger train services originate or terminate at Helsinki Central railway station, and a large proportion of the passenger rail network radiates out of Helsinki. High-speed Pendolino services are operated from Helsinki to other major cities like Jyväskylä, Joensuu, Kuopio, Oulu, Tampere and Turku. Modern InterCity services complement the Pendolino network, and cheaper and older long and short distance trains operate in areas with fewer passengers.
The Helsinki area has three urban rail systems: a tramway, a metro, and a commuter rail system. Light rail systems are currently being planned for Helsinki and also for Turku and Tampere, two of the country's other major urban centres.
There are plans to link Helsinki to Turku and Tampere by high-speed lines resulting in journey times of an hour between the capital and the two cities. A link to Kouvola is also planned. The estimated cost of these lines is €10 billion.
In Finland there have been three cities with trams: Helsinki, Turku and Viipuri, of which only Helsinki has retained its tramway network. The trams in Viipuri, a city lost to the Soviet Union in 1944, ceased operations in 1957, while the Turku tramway network shut down in 1972.
In November 2016, Tampere city council approved the construction of a new light rail system. Construction of phase 1 began in late 2016 and is scheduled to finish in 2021. Turku also has preliminary plans for a new tram system, but no decision to build it has been made.
Helsinki currently operates 13 tram lines in passenger service, carrying 57 million passengers annually.
There are 148 airfields, 74 of which have paved runways. 21 airports are served by scheduled passenger flights. By far the largest airport is Helsinki-Vantaa Airport, and the second largest by passenger volume is Oulu Airport. The larger airports are managed by the state-owned Finavia (formerly the Finnish Civil Aviation Administration). Finnair, Nordic Regional Airlines and Norwegian Air Shuttle are the main carriers for domestic flights.
Helsinki-Vantaa Airport is Finland's global gateway, with scheduled non-stop flights to destinations such as Bangkok, Beijing, Guangzhou, Nagoya, New York, Osaka, Shanghai, Hong Kong and Tokyo. Helsinki has an optimal location for great-circle airline routes between Western Europe and the Far East. The airport is located approximately 19 kilometers north of downtown Helsinki in the city of Vantaa, hence the name Helsinki-Vantaa.
Other airports with regular scheduled international connections are Kokkola-Pietarsaari Airport, Mariehamn Airport, Tampere-Pirkkala Airport, Turku Airport and Vaasa Airport.
The Finnish Maritime Administration is responsible for the maintenance of Finland's waterway network, which includes coastal fairways and inland waterways on rivers, canals, and lakes. The Saimaa Canal connects Lake Saimaa, and thus much of the inland waterway system of Finland, with the Baltic Sea at Vyborg (Viipuri). However, the lower part of the canal is located in Russia; to facilitate through shipping, Finland leases the Russian section of the canal from Russia (the original agreement with the Soviet Union dates to 1963).
The largest general port is Port of Hamina-Kotka. Port of Helsinki is the busiest passenger harbour, and it also has significant cargo traffic. By cargo tons, the five busiest ports are HaminaKotka, Helsinki, Rauma, Kilpilahti and Naantali.
Icebreakers keep 23 ports open for traffic even in winter. The ports in the Gulf of Bothnia need icebreakers on average six months a year, while in the Gulf of Finland icebreakers are needed for three months a year.
Frequent ferry service connects Finland with Estonia and Sweden, and Baltic cruise liners regularly call at the port of Helsinki as well. In domestic service, ferries connect Finland's islands with the mainland. Finland's cargo ports move freight both for Finland's own needs and for transshipment to Russia. | https://en.wikipedia.org/wiki?curid=10714 |
Finnish Defence Forces
The Finnish Defence Forces (Puolustusvoimat) are responsible for the defence of Finland. Universal male conscription is in place, under which all men above 18 years of age serve for 165, 255, or 347 days. Alternative non-military service is available for men, and voluntary military service is open to women.
Finland is the only non-NATO European Union state bordering Russia. Finland's official policy states that a wartime military strength of 280,000 personnel constitutes a sufficient deterrent. The army consists of a highly mobile field army backed up by local defence units. The army defends the national territory and its military strategy employs the use of the heavily forested terrain and numerous lakes to wear down an aggressor, instead of attempting to hold the attacking army on the frontier.
Finland's defence budget equals approximately €3.2 billion, or 1.3% of GDP. Voluntary overseas service is highly popular, and troops serve around the world in UN, NATO and EU missions. Willingness to defend the homeland against a superior enemy stands at 76%, one of the highest rates in Europe.
In wartime, the Finnish Border Guard (which is a separate military unit in peacetime) becomes part of the Finnish Defence Forces.
After Finland's declaration of independence on 6 December 1917, the Civic Guards were proclaimed the troops of the government on 25 January 1918, and C.G.E. Mannerheim was appointed Commander-in-Chief of these forces the next day. Fighting between the White Guards (as the Civic Guards were commonly known) and the Red Guards had already broken out about a week earlier around Viipuri, in what became known as the Finnish Civil War.
In the war, the Whites were victorious, in large part thanks to the leadership of Mannerheim and the lead-by-example offensive mindedness of the 1,800 German-trained Finnish Jägers, who brought with them German tactical doctrine and military culture. The post-war years were characterized by the Volunteer Campaigns, which came to an end in 1920 with the signing of the Treaty of Tartu; the treaty ended the state of war between Finland and Soviet Russia and defined the internationally recognized borders of Finland.
After winning the Civil War, the Finnish peacetime army was organized as three divisions and a brigade by professional German officers. It became the basic structure for the next 20 years. The coast was guarded by former czarist coastal fortifications and ships taken as prizes of war. The Air Force had already been formed in March 1918, but remained a part of the Army and did not become a fully independent fighting force until 1928.
The new government instituted conscription after the Civil War and also introduced a mobilization system and compulsory refresher courses for reservists. An academy providing basic officer training ("Kadettikoulu") was established in 1919, the founding of a General Staff College ("Sotakorkeakoulu") followed in 1924, and in 1927 a tactical training school ("Taistelukoulu") for company-grade and junior officers and NCOs was set up. The requirement of one year of compulsory service was greater than that imposed by any other Scandinavian country in the 1920s and the 1930s, but political opposition to defense spending left the military badly equipped to resist an attack by the Soviet Union, the only security threat in Finnish eyes.
When the Soviets invaded in November 1939, the Finns defeated the Red Army on numerous occasions, including at the crucial Battle of Suomussalmi. These successes were in large part thanks to the application of motti tactics. While the Finns ultimately lost the war and were forced to agree to the Moscow Peace Treaty, the Soviet objective of conquering Finland failed, in part due to the threat of Allied intervention. During the war the Finns lost 25,904 men, while Soviet losses were 167,976 dead.
Finland fought in the Continuation War alongside Germany from 1941 to 1944. Thanks to Nazi-German aid, the army was now much better equipped, and the period of conscription had been increased to two years, making possible the formation of sixteen infantry divisions. Having initially deployed on the defensive, the Finns took advantage of the weakening of the Soviet positions as a consequence of Operation Barbarossa, swiftly recovering their lost territories and invading Soviet territory in Karelia, eventually settling into defensive positions from December 1941 onwards. The Soviet offensive of June 1944 undid these Finnish gains and, while failing in its objective of destroying the Finnish army and forcing Finland's unconditional surrender, forced Finland out of the war. The Finns were nevertheless able to preserve their independence through key defensive victories over the Red Army, the Battle of Tali-Ihantala being especially significant.
These conflicts have had a significant impact on the Finnish defence forces of today: while other European militaries have cut down their forces, Finland has maintained a large conscript-based reserve army. As a Swedish report put it: "The reason why the FDF chose to maintain this model while its Nordic neighbours jumped on the expeditionary bandwagon is not hard to see. Sharing a 1340km border with Russia, the need for large ground forces is self-explanatory. Furthermore, memories of World War II – in which over 2 per cent of the population perished in two brutal wars with the Soviet Union – are very much alive in Finland".
The demobilization and regrouping of the Finnish Defence Forces were carried out in late 1944 under the supervision of the Soviet-dominated Allied Control Commission. Following the Treaty of Paris in 1947, which imposed restrictions on the size and equipment of the armed forces and required the disbandment of the Civic Guard, Finland reorganized its defence forces. The fact that the conditions of the peace treaty did not include prohibitions on reserves or mobilization made it possible to contemplate an adequate defence establishment within the prescribed limits. The reorganization resulted in the adoption of the brigade, in place of the division, as the standard formation.
For the first two decades after the Second World War, the Finnish Defence Forces relied largely on obsolete wartime material. Defence spending remained minimal until the early 1960s. During the peak of the Cold War, the Finnish government made a conscious effort to increase defence capability. This resulted in the commissioning of several new weapons systems and the strengthening of the defence of Finnish Lapland by the establishment of new garrisons in the area. From 1968 onwards, the Finnish government adopted the doctrine of territorial defence, which requires the use of large land areas to delay and wear out a potential aggressor. The doctrine was complemented by the concept of total defence which calls for the use of all resources of society for national defence in case of a crisis. From the mid-1960s onwards the Finnish Defence Forces also began to specifically prepare to defeat a strategic strike, the kind which the Soviet Union employed successfully to topple the government of Czechoslovakia in 1968. In an all-out confrontation between the two major blocs, Finnish objective would have been to prevent any military incursions to Finnish territory and thereby keep Finland outside the war.
The collapse of the Soviet Union in 1991 did not eliminate the military threat perceived by the government, but the nature of the threat has changed. While the concept of total, territorial defence was not dropped, the military planning has moved towards the capability to prevent and frustrate a strategic attack toward the vital regions of the country.
The end of the Cold War also opened up new opportunities that would previously have been seen as breaking Finland's stance of neutrality. This has meant, for example, participation in the War in Afghanistan and in the Nordic Battlegroup.
The Defence Forces are currently undergoing key procurement programmes in all three branches. The Navy is scheduled to receive its largest vessels since the Väinämöinen class with the new 100 m+ Pohjanmaa-class corvettes. The Air Force is in the process of acquiring a replacement for the McDonnell Douglas F/A-18 Hornet fighter for €10 billion. Meanwhile, the Army is planning to replace the Patria Pasi armoured vehicles with the likewise domestically developed Protolab Misu. The standard-issue assault rifle, the RK 62, is also being upgraded to a new variant.
The Finnish Defence Forces are under the command of the Chief of Defence, who is directly subordinate to the President of the Republic in matters related to the military command. Decisions concerning military orders are made by the President of the Republic in consultation with the Prime Minister and the Minister of Defence.
Apart from the Defence Command (Pääesikunta), the military branches are the Finnish Army (Maavoimat), the Finnish Navy (Merivoimat) and the Finnish Air Force (Ilmavoimat). The Border Guard (Rajavartiolaitos), which includes the coast guard, is under the authority of the Ministry of the Interior but can be incorporated fully or in part into the Defence Forces when required by defence readiness. All logistical duties of the Defence Forces are carried out by the Defence Forces Logistics Command, which has three Logistics Regiments, one for each military province.
The Army is divided into eight brigade-level units. Below the brigades, there were 12 military districts, which were responsible for carrying out the draft, training and crisis-time activation of reservists and for planning and executing the territorial defence of their areas. The military districts were disbanded in 2014 as part of the €800 million savings the Finnish Defence Forces had to carry out.
The Navy consists of a headquarters and four brigade-level units: the Coastal Fleet (Rannikkolaivasto), the Coastal Brigade (Rannikkoprikaati), the Nyland Brigade (Uudenmaan prikaati) and the Naval Academy (Merisotakoulu). The Coastal Fleet includes all the surface combatants of the Navy, while the Coastal Brigade and the Nyland Brigade train coastal troops.
The Air Force consists of a headquarters and four brigade-level units: the Satakunta, Lapland and Karelian Air Commands and the Air Force Academy. They are responsible for securing the integrity of Finnish airspace in peacetime and for conducting aerial warfare independently during a crisis.
The military training of reservists is primarily the duty of the Defence Forces, but it is assisted by the National Defence Training Association of Finland (Maanpuolustuskoulutusyhdistys, MPK). The association provides reservists with personal, squad, platoon and company level military training. Most of the association's 2,000 instructors are volunteers certified by the Defence Forces, but when Defence Forces materiel is used, the training always takes place under the supervision of career military personnel. Annually, the Defence Forces requests the association to run specialized exercises for some 8,500 personnel placed in reserve units, and an additional 16,500 reservists participate in military courses whose participants are not directly selected by the Defence Forces. The legislation concerning the association will require that the chairman and the majority of its board members be chosen by the Finnish Government; the other board members are chosen by NGOs active in national defence.
The Finnish Defence Forces are based on universal male conscription. All men above 18 years of age are liable to serve either 6, 9 or 12 months. Some 27,000 conscripts are trained annually, and 80% of Finnish men complete their service. Conscripts first receive basic training, after which they are assigned to various units for special training. Privates who are trained for tasks not requiring special skills serve for 6 months; in technically demanding tasks the time of service is 9 or, in some cases, 12 months. Those selected for NCO (non-commissioned officer) or officer training serve 12 months. At the completion of the service, conscripts receive a reserve military rank of private, lance corporal, corporal, sergeant or second lieutenant, depending on their training and accomplishments. After their military service, conscripts are placed in the reserve until the end of the year in which they turn 50 or 60, depending on their military rank. During their time in the reserve, reservists are liable to participate in military refresher exercises for a total of 40, 75 or 100 days, depending on their military rank. In addition, all reservists are liable for activation in a situation where the military threat against Finland has seriously increased, in full or partial mobilization, or in a large-scale disaster or a virulent epidemic. Males who do not belong to the reserve may be activated only in case of full mobilization, and rank-and-file personnel who have reached 50 years of age only by a specific parliamentary decision.
Military service can be started after turning 18. Service can be delayed due to studies, work or other personal reasons until the 28th birthday, but these reasons do not result in exemptions. In addition to lodging, food, clothes and health care, conscripts receive between €5 and €11.70 per day, depending on the time they have served. The state also pays any rental and electricity bills the conscripts incur during their service, and conscripts with families are entitled to benefits as well. It is illegal to fire an employee due to military service, a refresher exercise or activation. Female volunteers in military service receive a small additional benefit, because they are expected to provide their own underwear and other personal items.
The military service consists of lessons, practical training, various cleaning and maintenance duties, and field exercises. On most weekends conscripts can leave the barracks on Friday and are expected to return by midnight on Sunday. A small force of conscripts is kept in readiness on weekends to aid civil agencies in various types of emergency, to guard the premises and to maintain defence in case of a sudden military emergency. Field exercises can go on regardless of the time of day or week.
The training of conscripts is based on the "joukkotuotanto" principle (literally "troop production"). In this system, 80% of conscripts are trained to fulfill a specific role in a specific wartime military unit. Each brigade-level unit is responsible for producing specified reserve units from the conscripts it has been allocated. As the reservists are discharged, they receive a specific wartime placement in the unit with which they trained during their conscription. As the reservists age, their unit is given new, different tasks and materiel. Typically, reservists are placed for the first five years in first-line units and then moved to military formations with less demanding tasks, while reservists unable to serve in the unit are replaced with reservists from the reserve without a specific placement. In refresher exercises, the unit is then given new training for these duties, if defence funding permits.
The inhabitants of the demilitarized Åland Islands are exempt from military service. Under the Conscription Act of 1950, they are instead required to serve for a time at a local institution, such as the coast guard; however, until such service has been arranged, they are freed from the service obligation. This non-military service on the Åland Islands has not been arranged since the introduction of the act, and there are no plans to institute it. The inhabitants of the Åland Islands can also volunteer for military service on the mainland. Jehovah's Witnesses were exempt until February 2019. It is also possible to serve weapon-free military service of 270 or 362 days or to undergo a 12-month non-military service. Finnish law requires that men who do not want to serve in the defence of the country in any capacity (so-called total objectors) be sentenced to a prison term of 197 days. Since 1995, women have been permitted to serve on a voluntary basis and pursue careers as officers. Female conscripts have a consideration period of six weeks, during which they may discontinue their service without giving any specific reason; after those six weeks, the same laws and regulations apply to them as to men. Unlike in many other countries, women are allowed to serve in all combat arms, including front-line infantry and special forces.
The Finnish military ranks follow the Western usage in the officer ranks. As a Finnish peculiarity, the rank of lieutenant has three grades: 2nd lieutenant, lieutenant and senior lieutenant. The 2nd lieutenant is a reserve officer rank, active commissioned officers beginning their service as lieutenants.
The basic structure of the NCO ranks is a variant of the German rank structure, but the rank system has some peculiarities due to different personnel groups. The duties carried out by NCOs in most Western armed forces are divided among these personnel groups in the Finnish Defence Forces.
In case of war, most NCO duties would be carried out by reserve NCOs who have received their training during conscription.
The rank and file of the Finnish Defence Forces is composed of conscripts serving in the ranks of private, lance corporal and NCO student.
Finland does not have attack helicopters, submarines or long-range ballistic missiles (it has, however, upgraded its M270 multiple launch rocket systems to be capable of firing ATACMS tactical ballistic missiles). Legislation forbids nuclear weapons entirely.
Finland has taken part in peacekeeping operations since 1956 (the number of Finnish peacekeepers who have served since then amounts to 43,000). In 2003, over a thousand Finnish peacekeepers were involved in peacekeeping operations, including UN- and NATO-led missions. According to Finnish law, the maximum simultaneous strength of the peacekeeping forces is limited to 2,000 soldiers.
Since 1956, 39 Finnish soldiers have died while serving in peacekeeping operations.
Since 1996, the Pori Brigade has trained parts of the Finnish Rapid Deployment Force (FRDF), which can take part in international crisis management and peacekeeping operations at short notice. In recent years, the Nyland/Uusimaa Brigade has also started training the Amphibious Task Unit (ATU), a joint Swedish-Finnish international task unit.
Since 2006, Finland has participated in the formation of European Union Battlegroups, and it participated in two EU Battlegroups in 2011.
International operations in which Finland participates by deploying military units (personnel strength in parentheses):
Other international operations in which Finland participates with staff personnel, military observers and similar (personnel strength in parentheses):
The Finnish military doctrine is based on the concept of total defence. The term total means that all sectors of the government and economy are involved in the defence planning. In principle, each ministry has the responsibility for planning its operations during a crisis. There are no special emergency authorities, such as the U.S. Federal Emergency Management Agency (FEMA) or Russian Ministry of Emergency Situations. Instead, each authority regularly trains for crises and has been allocated a combination of normal and emergency powers it needs to keep functioning in any conceivable situation. In a war, all resources of society may be diverted to ensure the survival of the nation. The legal basis for such measures is found in the Readiness Act and in the State of Defence Act, which would come into force through a presidential decision verified by parliament in the case of a crisis.
The main objective of the doctrine is to establish and maintain a military force capable of deterring any potential aggressor from using Finnish territory or applying military pressure against Finland. To accomplish this, the defence is organised on the doctrine of territorial defence. The stated main principles of the territorial defence doctrine are
The defence planning is organised to counteract three threat situations:
In all cases, the national objective is to keep the vital areas, especially the capital area, in Finnish possession. In other areas, the size of the country is used to delay and wear down the invader until the enemy can be defeated in an area of Finland's choosing. The Army carries most of the responsibility for this task.
The key wartime army units in 2015 are:
The total number of territorial and regional units is undisclosed.
The army units are mostly composed of reservists, the career soldiers manning the command and specialty positions.
The role of the Navy is to repel all attacks carried out against Finnish coasts and to safeguard the territorial integrity during peacetime and the "gray" phase of the conflict. The maritime defence relies on combined use of coastal artillery, missile systems and naval mines to wear down the attacker. The Air Force is used to deny the invader the air superiority and to protect most important troops and objects of national importance in conjunction with the ground-based air defence. As the readiness of the Air Force and the Navy is high even during the peacetime, the career personnel have a much more visible role in the wartime duties of these defence branches.
The Border Guard has responsibility for border security in all situations. During a war, it will contribute to the national defence, partially integrated into the army, with a total mobilized strength of some 11,600 troops. One of the projected uses for the Border Guard is guerrilla warfare in areas temporarily occupied by the enemy.
The army is organised into operative forces, consisting of approximately 61,000 men, and territorial forces, consisting of 176,000 men. The following is the wartime organisation of the Finnish army as of 1 January 2008:
Territorial forces: | https://en.wikipedia.org/wiki?curid=10715 |
Foreign relations of Finland
The foreign relations of Finland are the responsibility of the president of Finland, who leads foreign policy in cooperation with the government. Implicitly the government is responsible for internal policy and decision making in the European Union. Within the government, preparative discussions are conducted in the government committee of foreign and security policy ("ulko- ja turvallisuuspoliittinen ministerivaliokunta"), which includes the Prime Minister and at least the Minister of Foreign Affairs and the Minister of Defence, and at most four other ministers as necessary. The committee meets with the President as necessary. Laws concerning foreign relations are discussed in the parliamentary committee of foreign relations ("ulkoasiainvaliokunta, utrikesutskottet"). The Ministry of Foreign Affairs implements the foreign policy.
During the Cold War, Finland's foreign policy was based on official neutrality between the Western powers and the Soviet Union, while simultaneously stressing Nordic cooperation in the framework of the Nordic Council and cautious economic integration with the West as promoted by the Bretton-Woods Agreement and the free trade treaty with the European Economic Community. Finland shares this history with its close neighbour Sweden, of which Finland was a part until the split of the Swedish empire in 1809. Finland did not join the Soviet Union's economic sphere (Comecon) but remained a free-market economy and conducted bilateral trade with the Soviet Union. After the dissolution of the Soviet Union in 1991, Finland unilaterally abrogated the last restrictions imposed on it by the Paris peace treaties of 1947 and the Finno-Soviet Agreement of Friendship, Cooperation, and Mutual Assistance. The government filed an application for membership in the European Union (EU) three months after the dissolution of the Soviet Union and became a member in 1995. Finland did not attempt to join NATO, even though post-Soviet countries on the Baltic Sea and elsewhere joined. Nevertheless, defence policymakers have quietly converted to NATO-compatible equipment and contributed troops.
President Martti Ahtisaari and the coalition governments led Finland closer to the core EU in the late 1990s. Finland was considered a cooperative model state, and Finland did not oppose proposals for a common EU defence policy. This was reversed in the 2000s, when Tarja Halonen and Erkki Tuomioja made Finland's official policy to resist other EU members' plans for common defense. However, Halonen allowed Finland to join European Union Battlegroups in 2006 and the NATO Response Force in 2008.
Relations with Russia are cordial, and common issues include bureaucracy (particularly at the Vaalimaa border crossing), airspace violations, the development aid Finland gives to Russia (especially for environmental problems that affect Finland), and Finland's energy dependency on Russian gas and electricity. Behind the scenes, the administration has witnessed a resurrection of Soviet-era tactics: the Finnish Security Intelligence Service estimates that the known number of Russian agents from the Foreign Intelligence Service (SVR) and the GRU now exceeds Cold War levels, in addition to an unknown number of others.
As of March 2011, Finland maintains diplomatic relations with all UN member states.
After independence from Russia in 1917, the Finnish Civil War, including interventions by Imperial Germany and Soviet Russia, and failure of the Communist revolution, resulted in the official ban on Communism, and strengthening relations with Western countries. Overt alliance with Germany was not possible due to the result of the First World War, but in general the period of 1918 to 1939 was characterised by economic growth and increasing integration to the Western world economy. Relations with Soviet Russia from 1918 to 1939 were icy; voluntary expeditions to Russia called heimosodat ended only in 1922, four years after the conclusion of the Finnish Civil War. However, attempts to establish military alliances were unsuccessful. Thus, when the Winter War broke out, Finland was left alone to resist the Soviet attack. Later, during the Continuation War, Finland declared "co-belligerency" with Nazi Germany, and allowed Northern Finland to be used as a German attack base. The peace settlement in 1944 with the Soviet Union led to the Lapland War in 1945, where Finland fought Germans in northern Finland.
From the end of the Continuation War with the Soviet Union in 1944 until 1991, the policy was to avoid superpower conflicts and to build mutual confidence with the Western powers and the Soviet Union. Although the country was culturally, socially, and politically Western, Finns realised they had to live in peace with the USSR and take no action that might be interpreted as a security threat. The dissolution of the Soviet Union in 1991 opened up dramatic new possibilities for Finland and has resulted in the Finns actively seeking greater participation in Western political and economic structures. The popular support for the strictly self-defensive doctrine remains.
In the 2000 constitution, in which diverse constitutional laws were unified into one statute, the leading role of the President was slightly moderated. However, because the constitution still stipulates only that the President leads foreign policy and the government internal policy, responsibility for European Union affairs is not explicitly resolved; implicitly it belongs to the powers of the government. In a cohabitation situation, as with Matti Vanhanen's right-wing second government and the left-wing President Tarja Halonen, there can be friction between government ministers and the president.
The arrangement has been criticised by Risto E. J. Penttilä for not providing a simple answer to the question of who is in charge.
Finnish foreign policy emphasises participation in multilateral organisations. Finland joined the United Nations in 1955 and the European Union in 1995. As noted, the country is also a member of NATO's Partnership for Peace and an observer in the Euro-Atlantic Partnership Council. The military has been made more compatible with NATO, since co-operation with NATO in peacekeeping requires it, but military alliance does not have popular support.
In the European Union, Finland is a member of the Eurozone and, in addition, of the Schengen area, which abolished passport controls between member states. 60% of foreign trade is with the EU; other large trade partners are Russia and the United States.
Finland is well represented in the UN civil service in proportion to its population and belongs to several of its specialised and related agencies. Finnish troops have participated in United Nations peacekeeping activities since 1956, and the Finns continue to be one of the largest per capita contributors of peacekeepers in the world. Finland is an active participant in the Organization for Security and Cooperation in Europe (OSCE) and in early 1995 assumed the co-chairmanship of the OSCE's Minsk Group on the Nagorno-Karabakh conflict.
Cooperation with the other Scandinavian countries also is important to Finland, and it has been a member of the Nordic Council since 1955. Under the council's auspices, the Nordic countries have created a common labor market and have abolished immigration controls among themselves. The council also serves to coordinate social and cultural policies of the participating countries and has promoted increased cooperation in many fields.
In addition to the organisations already mentioned, Finland is a member of the International Bank for Reconstruction and Development, the International Monetary Fund, the World Trade Organization, the International Finance Corporation, the International Development Association, the Bank for International Settlements, the Asian Development Bank, the Inter-American Development Bank, the Council of Europe, and the Organisation for Economic Co-operation and Development.
Following the dissolution of the Soviet Union, Finland has moved steadily towards integration into Western institutions and abandoned its formal policy of neutrality, which has been recast as a policy of military nonalliance coupled with the maintenance of a credible, independent defence. Finland's 1994 decision to buy 64 F-18 Hornet fighter planes from the United States signalled the abandonment of the country's policy of balanced arms purchases from Communist countries and Western countries.
In 1994, Finland joined NATO's Partnership for Peace; the country is also an observer in the North Atlantic Cooperation Council. Finland became a full member of the EU in January 1995, at the same time acquiring observer status in the EU's defence arm, the Western European Union.
Generally, Finland has abided by the principle of neutrality and has good relations with nearly all countries, as evidenced by the freedom of travel that a Finnish passport gives.
Finland has established diplomatic relations with all United Nations member states, plus the Holy See and Kosovo. | https://en.wikipedia.org/wiki?curid=10716 |
Telecommunications in France
Telecommunications in France is highly developed. France is served by an extensive system of automatic telephone exchanges connected by modern networks of fiber-optic cable, coaxial cable, microwave radio relay, and a domestic satellite system; cellular telephone service is widely available, expanding rapidly, and includes roaming service to foreign countries.
The telephony system employs an extensive system of modern network elements such as digital telephone exchanges, mobile switching centres, media gateways and signalling gateways at the core, interconnected by a wide variety of transmission systems using fibre-optics or Microwave radio relay networks. The access network, which connects the subscriber to the core, is highly diversified with different copper-pair, optic-fibre and wireless technologies. The fixed-line telecommunications market is dominated by the former state-owned monopoly France Telecom.
Telephones - main lines in use: 36.441 million; 35.5 million (metropolitan France) (2009)
Telephones - mobile cellular: 60.95 million; 59.543 million (metropolitan France) (2009)
Satellite earth stations - 2 Intelsat (with total of 5 antennas - 2 for Indian Ocean and 3 for Atlantic Ocean), NA Eutelsat, 1 Inmarsat (Atlantic Ocean region); HF radiotelephone communications with more than 20 countries
Radio broadcast stations: AM 41, FM about 3,500 (this figure is an approximation and includes many repeaters), shortwave 2 (1998)
Radios: 55.3 million (1997)
Television broadcast stations: 584 (plus 9,676 repeaters) (1995)
Televisions: 34.8 million (1997)
Internet country code: .fr
Internet Service Providers (ISPs): 62 (2000)
Internet hosts: 15,182,001; 15.161 million (metropolitan France) (2010)
Internet users: 45.262 million; 44.625 million (metropolitan France) (2009)
France currently has four mobile networks (Orange, SFR, Bouygues Telecom and Free), all of which are licensed for UMTS; all except Free are also licensed for GSM. In Q3 2016, Orange had 28.966 million mobile phone customers, SFR had 14.577 million, Bouygues had 12.660 million, Free Mobile had 12.385 million, and MVNOs had 7.281 million.
Before the launch of Free Mobile in January 2012, the number of physical mobile phone operators was very limited. By comparison, Sweden has four licensed operators with their own networks despite a smaller and sparser population than France's, which makes improved coverage less economically rewarding. France does, however, have a number of MVNOs, which increases competition.
Free Mobile obtained its licence in December 2009 and has operated since January 2012.
In France, the satellite telecommunications system TELECOM 1 (TC1) was designed to provide high-speed, broadband transfer of digital data between different sites of subscribing companies. Conventional telecommunications links between continental France and its overseas departments were also to be supplied. | https://en.wikipedia.org/wiki?curid=10722 |
Transport in France
Transportation in France relies on one of the densest networks in the world, with 146 km of road and 6.2 km of rail lines per 100 km². It is built as a web with Paris at its center. Rail, road, air and water are all widely developed forms of transportation in France.
The first important human improvements were the Roman roads linking major settlements and providing quick passage for marching armies.
Throughout the Middle Ages improvements were few and second-rate, and transport remained slow and awkward. The early modern period saw great improvements: canals connecting rivers were built in quick succession, and oceanic shipping changed greatly as wind-powered ships, faster than expensive galleys and with more room for cargo, became popular for coastal trade. Transatlantic shipping with the New World turned cities such as Nantes, Bordeaux, Cherbourg-Octeville and Le Havre into major ports.
France's railway network, mostly operated by SNCF (Société nationale des chemins de fer français), the French national railway company, is one of the most extensive in Western Europe. Like the road system, the French railways are subsidised by the state, receiving €13.2 billion in 2013. The railway system accounts for a small portion of total travel, less than 10% of passenger travel.
From 1981 onwards, a newly constructed set of high-speed "Lignes à Grande Vitesse" (LGV) lines linked France's most populous areas with the capital, starting with Paris–Lyon. In 1994, the Channel Tunnel opened, connecting France and Great Britain by rail under the English Channel. The TGV has set many world speed records, the most recent on 3 April 2007, when a version of the TGV dubbed the V150, with larger wheels and a stronger engine than the standard TGV, broke the world speed record for conventional rail trains, reaching 574.8 km/h (357.2 mph).
Trains, unlike road traffic, run on the left (except in Alsace-Moselle). Metro and tramway services are not regarded as trains and usually follow road traffic in driving on the right (the Lyon Metro being an exception).
Six cities in France currently have a rapid transit service (frequently known as a 'metro'). Full metro systems are in operation in Paris (16 lines), Lyon (4 lines) and Marseille (2 lines). Light metro (VAL-type) systems are in use in Lille (2 lines), Toulouse (2 lines) and Rennes (1 line).
In spite of the closure of most of France's first-generation tram systems in earlier years, a fast-growing number of France's major cities have modern tram or light rail networks, including Paris, Lyon (the largest), Toulouse, Montpellier, Saint-Étienne, Strasbourg and Nantes. Recently the tram has seen a major revival, with experiments such as ground-level power supply in Bordeaux and tram-like trolleybuses in Nancy.
Trams began disappearing from France at the end of the 1930s; only Lille, Marseille and Saint-Étienne never gave up their tram systems. Since the 1980s, several cities have re-introduced them.
The following French towns and cities run light rail or tram systems:
Tram systems are planned or under construction in Tours, and Fort-de-France.
The revival of tram networks in France has brought about a number of technical developments, both in the traction systems and in the styling of the cars.
Prominent bi-articulated "tram-like" Van Hool vehicles (Mettis) have been used in Metz since 2013. They work like classic trams but without needing rails or catenaries, and can carry up to 155 passengers while remaining environmentally friendly thanks to a diesel-electric hybrid engine.
At start-up, batteries power the bus, which can travel 150 meters before the diesel engine takes over.
France has an extensive road network. The French motorway network, or autoroute system, consists largely of toll roads, except around large cities and in parts of the north, and is operated by private companies such as Sanef (Société des autoroutes du Nord et de l'Est de la France). France has the 8th largest highway network in the world, trailing only the United States, China, Russia, Japan, Canada, Spain and Germany.
France currently counts 30,500 km of major trunk roads or routes nationales and state-owned motorways. By way of comparison, the "routes départementales" cover a total distance of 365,000 km. The main trunk road network reflects the centralising tradition of France: the majority of them leave the gates of Paris. Indeed, trunk roads begin on the parvis of Notre-Dame of Paris at Kilometre Zero. To ensure an effective road network, new roads not serving Paris were created.
France is believed to be the most car-dependent country in Europe. In 2005, 937 billion vehicle kilometres were travelled in France (85% by car).
In order to reduce this dependence, France, like many other countries, has liberalised the long-distance coach market. Since 2015, with the Macron law, the market has grown explosively: increasing demand has led to a greater supply of bus services and coach companies.
In most, if not all, French cities, urban bus services are provided at a flat-rate charge for individual journeys. Many cities have bus services that operate well out into the suburbs or even the country. Fares are normally cheap, but rural services can be limited, especially on weekends.
Trains long held a monopoly on inter-regional travel, but in 2015 the French government introduced reforms allowing coach operators to serve these routes.
The French natural and man-made waterways network is the largest in Europe. Voies navigables de France (VNF), the French navigation authority, manages its navigable sections; navigable rivers include the Loire, Seine and Rhône. The assets managed by VNF comprise a large network of canals and navigable rivers, 494 dams, 1,595 locks, 74 navigable aqueducts, 65 reservoirs, 35 tunnels and substantial land holdings. Two significant waterways not under VNF's control are the navigable sections of the River Somme and the Brittany Canals, which are both under local management.
Approximately 20% of the network is suitable for commercial boats of over 1,000 tonnes, and VNF has an ongoing programme of maintenance and modernisation to increase the depth of waterways, the width of locks and the headroom under bridges, in support of France's strategy of encouraging freight onto water.
France has an extensive merchant marine, including 55 ships of 1,000 gross register tons or more. The country also maintains a captive register for French-owned ships in the Kerguelen Islands (French Southern and Antarctic Lands).
French companies operate over 1,400 ships of which 700 are registered in France. France's 110 shipping firms employ 12,500 personnel at sea and 15,500 on shore. Each year, 305 million tonnes of goods and 15 million passengers are transported by sea. Marine transport is responsible for 72% of France's imports and exports.
France also boasts a number of seaports and harbours, including Bayonne, Bordeaux, Boulogne-sur-Mer, Brest, Calais, Cherbourg-Octeville, Dunkerque, Fos-sur-Mer, La Pallice, Le Havre, Lorient, Marseille, Nantes, Nice, Paris, Port-la-Nouvelle, Port-Vendres, Roscoff, Rouen, Saint-Nazaire, Saint-Malo, Sète, Strasbourg and Toulon.
There are approximately 478 airports in France (1999 est.) and, by a 2005 estimate, three heliports. 288 of the airports have paved runways; the remaining 199 are unpaved.
Among the airport authorities active in France is Aéroports de Paris, which has authority over the Paris region, managing 14 airports including the two busiest in France: Charles de Gaulle Airport and Orly Airport. Charles de Gaulle, located in Roissy near Paris, is the fifth-busiest airport in the world, with 60 million passenger movements in 2008, and France's primary international airport, serving over 100 airlines.
The national carrier of France is Air France, a full-service global airline which flies to 20 domestic destinations and 150 international destinations in 83 countries (including the overseas departments and territories of France) across six continents. | https://en.wikipedia.org/wiki?curid=10723
French Armed Forces
The French Armed Forces ("Forces armées françaises") encompass the Army, the Navy, the Air Force, the National Guard and the Gendarmerie of the French Republic. The President of France heads the armed forces as "chef des armées".
France has the fifth-largest defence budget in the world and the largest in the European Union (EU). It also has the largest armed forces in the European Union. According to Credit Suisse, the French Armed Forces are ranked as the world's sixth-most-powerful military.
The military history of France encompasses an immense panorama of conflicts and struggles extending for more than 2,000 years across areas including modern France, greater Europe, and French territorial possessions overseas. According to the British historian Niall Ferguson, France has participated in 50 of the 125 major European wars fought since 1495, and of 168 battles fought since 387 BC, it has won 109, drawn 10 and lost 49; this makes France the most successful military power in European history in terms of battles fought and won.
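As a quick arithmetic check on Ferguson's figures, the win, draw and loss counts quoted above do sum to the stated battle total and imply a win rate of roughly 65%. A minimal sketch in Python, using only the numbers from this paragraph:

```python
# Battle record attributed to France (figures quoted above, per Niall Ferguson)
fought = 168
won, drawn, lost = 109, 10, 49

assert won + drawn + lost == fought  # 109 + 10 + 49 = 168
print(f"Win rate: {won / fought:.1%}")  # -> Win rate: 64.9%
```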
The Gallo-Roman conflict predominated from 60 BC to 50 BC, with the Romans emerging victorious in the conquest of Gaul by Julius Caesar. After the decline of the Roman Empire, a Germanic tribe known as the Franks took control of Gaul by defeating competing tribes. The "land of Francia," from which France gets its name, had high points of expansion under kings Clovis I and Charlemagne. In the Middle Ages, rivalries with England and the Holy Roman Empire prompted major conflicts such as the Norman Conquest and the Hundred Years' War. With an increasingly centralized monarchy, the first standing army since Roman times, and the use of artillery, France expelled the English from its territory and came out of the Middle Ages as the most powerful nation in Europe, only to lose that status to Spain following defeat in the Italian Wars. The Wars of Religion crippled France in the late 16th century, but a major victory over Spain in the Thirty Years' War made France the most powerful nation on the continent once more. In parallel, France developed its first colonial empire in Asia, Africa, and in the Americas. Under Louis XIV, France achieved military supremacy over its rivals, but escalating conflicts against increasingly powerful enemy coalitions checked French ambitions and left the kingdom bankrupt at the opening of the 18th century.
Resurgent French armies secured victories in dynastic conflicts against the Spanish, Polish, and Austrian crowns. At the same time, France was fending off attacks on its colonies. As the 18th century advanced, global competition with Great Britain led to the Seven Years' War, where France lost its North American holdings. Consolation came in the form of dominance in Europe and the American Revolutionary War, where extensive French aid in the form of money and arms, and the direct participation of its army and navy led to America's independence. Internal political upheaval eventually led to 23 years of nearly continuous conflict in the French Revolutionary Wars and the Napoleonic Wars. France reached the zenith of its power during this period, dominating the European continent in an unprecedented fashion under Napoleon Bonaparte, but by 1815 it had been restored to its pre-Revolutionary borders. The rest of the 19th century witnessed the growth of the Second French colonial empire as well as French interventions in Belgium, Spain, and Mexico. Other major wars were fought against Russia in the Crimea, Austria in Italy, and Prussia within France itself.
Following defeat in the Franco-Prussian War, Franco-German rivalry erupted again in the First World War. France and its allies were victorious this time. Social, political, and economic upheaval in the wake of the conflict led to the Second World War, in which the Allies were defeated in the Battle of France and the French government surrendered and was replaced with an authoritarian regime. The Allies, including the government in exile's Free French Forces and later a liberated French nation, eventually emerged victorious over the Axis powers. As a result, France secured an occupation zone in Germany and a permanent seat on the United Nations Security Council. The imperative of avoiding a third Franco-German conflict on the scale of those of two world wars paved the way for European integration starting in the 1950s. France became a nuclear power and since the 1990s its military action is most often seen in cooperation with NATO and its European partners.
Today, French military doctrine is based on the concepts of national independence, nuclear deterrence ("see Force de frappe"), and military self-sufficiency. France is a charter member of NATO, and has worked actively with its allies to adapt NATO—internally and externally—to the post-Cold War environment. In December 1995, France announced that it would increase its participation in NATO's military wing, including the Military Committee (France withdrew from NATO's military bodies in 1966 whilst remaining full participants in the Organisation's political Councils). France remains a firm supporter of the Organisation for Security and Co-operation in Europe and other cooperative efforts. Paris hosted the May 1997 NATO-Russia Summit which sought the signing of the Founding Act on Mutual Relations, Cooperation and Security. Outside of NATO, France has actively and heavily participated in both coalition and unilateral peacekeeping efforts in Africa, the Middle East, and the Balkans, frequently taking a lead role in these operations. France has undertaken a major restructuring to develop a professional military that will be smaller, more rapidly deployable, and better tailored for operations outside of mainland France. Key elements of the restructuring include: reducing personnel, bases and headquarters, and rationalisation of equipment and the armaments industry.
Since the end of the Cold War, France has placed a high priority on arms control and non-proliferation. French nuclear testing in the Pacific and the sinking of the "Rainbow Warrior" strained France's relations with its allies, South Pacific states (notably New Zealand), and world opinion. France agreed to the Nuclear Non-Proliferation Treaty in 1992 and supported its indefinite extension in 1995. After conducting a controversial final series of six nuclear tests on Mururoa in the South Pacific, the French signed the Comprehensive Test Ban Treaty in 1996. Since then, France has implemented a moratorium on the production, export, and use of anti-personnel landmines and supports negotiations leading toward a universal ban. The French are key players in the adaptation of the Treaty on Conventional Armed Forces in Europe to the new strategic environment. France remains an active participant in the major programs to restrict the transfer of technologies that could lead to the proliferation of weapons of mass destruction: the Nuclear Suppliers Group, the Australia Group (for chemical and biological weapons), and the Missile Technology Control Regime. France has also signed and ratified the Chemical Weapons Convention.
On 31 July 2007, President Nicolas Sarkozy ordered M. Jean-Claude Mallet, a member of the Council of State, to head up a thirty-five-member commission charged with a wide-ranging review of French defence. The commission issued its White Paper in early 2008. Acting upon its recommendations, President Sarkozy began making radical changes in French defence policy and structures starting in the summer of 2008. In keeping with post-Cold War changes in European politics and power structures, the French military's traditional focus on territorial defence will be redirected to meet the challenges of a global threat environment. Under the reorganisation, the identification and destruction of terrorist networks both in metropolitan France and in francophone Africa will be the primary task of the French military. Redundant military bases will be closed and new weapons systems projects put on hold to finance the restructuring and global deployment of intervention forces. In a historic change, Sarkozy furthermore declared that France "will now participate fully in NATO," four decades after former French president General Charles de Gaulle withdrew from the alliance's command structure and ordered American troops off French soil.
There are currently 36,000 French troops deployed in foreign territories; such operations are known as "OPEX" for "Opérations Extérieures" ("External Operations"). Among other countries, France provides troops for the United Nations force stationed in Haiti following the 2004 Haiti rebellion. France has sent troops, especially special forces, into Afghanistan to help the United States and NATO forces fight the remnants of the Taliban and Al Qaeda. In Opération Licorne, a force of a few thousand French soldiers is stationed in Ivory Coast on a UN peacekeeping mission. These troops were initially sent under the terms of a mutual protection pact between France and the Ivory Coast, but the mission has since evolved into the current UN peacekeeping operation. The French Armed Forces have also played a leading role in the ongoing UN peacekeeping mission along the Lebanon-Israel border as part of the ceasefire agreement that brought the 2006 Lebanon War to an end. Currently, France has 2,000 army personnel deployed along the border, including infantry, armour, artillery and air defence. There are also naval and air personnel deployed offshore.
The French Joint Force and Training Headquarters (État-Major Interarmées de Force et d'Entraînement) at Air Base 110 near Creil maintains the ability to command a medium or large-scale international operation and runs exercises. From 19 March 2011, France participated in the enforcement of a no-fly zone over northern Libya during the Libyan Civil War, in order to prevent forces loyal to Muammar Gaddafi from carrying out air attacks on anti-Gaddafi forces. This operation, known as Opération Harmattan, was part of France's involvement in the conflict in the NATO-led coalition enforcing UN Security Council Resolution 1973. On 11 January 2013, France began Operation Serval to fight Islamists in Mali, with African support but without NATO involvement.
In May 2014, high ranking defence chiefs of the French Armed Forces threatened to resign if the defence budget received further cuts on top of those already announced in the 2013 White Paper. They warned that further cuts would leave the armed forces unable to support operations abroad.
The head of the French armed forces is the President of the Republic, in his role as "chef des armées". However, the Constitution puts civil and military government forces at the disposal of the "gouvernement" (the executive cabinet of ministers chaired by the Prime Minister, who are not necessarily of the same political side as the president). The Minister of the Armed Forces (as of 2017, the incumbent Florence Parly) oversees the military's funding, procurement and operations. Historically, France relied a great deal on conscription to provide manpower for its military, in addition to a minority of professional career soldiers. Following the Algerian War, the use of non-volunteer draftees in foreign operations was ended; if their unit was called up for duty in war zones, draftees were offered the choice between requesting a transfer to another unit or volunteering for the active mission. In 1996, President Jacques Chirac's government announced the end of conscription, and in 2001 conscription was formally ended. Young people must still, however, register for possible conscription (should the situation call for it). As of 2017 the French Armed Forces have a total manpower of 426,265, of whom 368,962 are active personnel (including the National Gendarmerie).
It breaks down as follows (2015):
The reserve element of the French Armed Forces consists of two structures: the Operational Reserve and the Citizens Reserve. As of 2015 the strength of the Operational Reserve was 27,785 personnel.
Apart from the three main service branches, the French Armed Forces also includes a fourth paramilitary branch called the National Gendarmerie. It had a reported strength of 103,000 active personnel and 25,000 reserve personnel in 2018. They are used in everyday law enforcement, and also form a coast guard formation under the command of the French Navy. There are however some elements of the Gendarmerie that participate in French external operations, providing specialised law enforcement and supporting roles.
Historically, the National Guard functioned as the Army's reserve national defence and law enforcement militia. In response to the risk of terrorist attacks in the country, the Guard was officially reactivated on 12 October 2016, 145 years after its disbandment, this time as a service branch of the Armed Forces.
Since 2019, young French citizens can fulfil the mandatory national service, the "Service national universel" (SNU), in one of the branches of the military.
The French armed forces are divided into five service branches: the Army, the Navy, the Air Force, the National Guard and the National Gendarmerie.
In addition, the National Gendarmerie forms a coast guard force called the Gendarmerie Maritime, which is commanded by the French Navy.
The National Gendarmerie is primarily a military police force with airborne capability, serving as a rural and general-purpose police force.
Reactivated in 2016, the National Guard serves as the official primary military and police reserve service of the Armed Forces. It also acts as a force multiplier for law enforcement personnel during contingencies and reinforces military personnel deployed within France and abroad. | https://en.wikipedia.org/wiki?curid=10724
Foreign relations of France
In the 19th century France built a new French colonial empire second only to the British Empire. It was humiliated in the Franco-Prussian War of 1870–71, which marked the rise of Germany to dominance in Europe. France was on the winning side of the First World War, but fared poorly in the Second World War. It fought losing wars in Indochina (ending in 1954) and Algeria (ending in 1962). The Fourth Republic collapsed in 1958 and was succeeded by the Fifth Republic, which continues to the present. Under Charles de Gaulle, France tried to block American and British influence on the European community. Since 1945 France has been a founding member of the United Nations, of NATO, and of the European Coal and Steel Community (the European Union's predecessor). As a charter member of the United Nations, France holds one of the permanent seats in the Security Council and is a member of most of its specialized and related agencies.
France is also a founding member of the Union for the Mediterranean and of la Francophonie, and plays a key role in both regional and international affairs.
François Mitterrand, a Socialist, emphasized European unity and the preservation of France's special relationships with its former colonies in the face of "Anglo-Saxon influence." A part of the enacted policies was formulated in the Socialist Party's 110 Propositions for France, the electoral program for the 1981 presidential election. He had a warm and effective relationship with the conservative German Chancellor Helmut Kohl. They promoted French-German bilateralism in Europe and strengthened military cooperation between the two countries.
Shortly after taking office, President Sarkozy began negotiations with Colombian president Álvaro Uribe and the left-wing guerrilla FARC, regarding the release of hostages held by the rebel group, especially Franco-Colombian politician Ingrid Betancourt. According to some sources, Sarkozy himself asked for Uribe to release FARC's "chancellor" Rodrigo Granda.
Furthermore, Sarkozy announced on 24 July 2007 that French and European representatives had obtained the extradition of the Bulgarian nurses detained in Libya to their country. In exchange, he signed with Gaddafi security, health care and immigration pacts, and a $230 million (168 million euro) MILAN anti-tank missile sale. The contract was the first made by Libya since 2004 and was negotiated with MBDA, a subsidiary of EADS. According to Tripoli, another 128 million euro contract was to be signed with EADS for a TETRA radio system. The Socialist Party (PS) and the Communist Party (PCF) denounced a "state affair" and a "barter" with a "rogue state". The leader of the PS, François Hollande, requested the opening of a parliamentary investigation.
On 8 June 2007, during the 33rd G8 summit in Heiligendamm, Sarkozy set a goal of reducing French CO2 emissions by 50% by 2050 in order to prevent global warming. He then put forward the prominent Socialist figure Dominique Strauss-Kahn as the European nominee to the International Monetary Fund (IMF). Critics alleged that Sarkozy proposed Strauss-Kahn as managing director of the IMF to deprive the Socialist Party of one of its more popular figures.
Sarkozy normalised what had been strained relations with NATO. In 2009, France again became a fully integrated NATO member. François Hollande continued the same policy.
Socialist François Hollande won election as president in 2012. He adopted a generally hawkish foreign policy, collaborating closely with Germany in opposing Russian moves against Ukraine and sending the military to fight radical Islamists in Africa. He took a hard line on the Greek debt crisis. Hollande launched two military operations in Africa: Operation Serval in Mali (where French forces stopped an Islamist advance on Bamako, the nation's capital city) and Operation Sangaris in the Central African Republic (to restore peace there after tensions between different religious communities had turned into a violent conflict). France was also the first European nation to join the United States in bombing the Islamic State of Iraq and the Levant. Under President Hollande, France's stances on the civil war in Syria and on Iran's nuclear program were described as "hawkish".
Sophie Meunier, writing in 2017, pondered whether France is still relevant in world affairs.
Polls indicate that American president Barack Obama was highly popular in France, but Donald Trump has been extremely unpopular.
In July 2019, the UN ambassadors of 22 nations, including France, signed a joint letter to the UNHRC condemning China's mistreatment of the Uyghurs and other minority groups, and urging the Chinese government to close the Xinjiang re-education camps.
ACCT, AfDB, AsDB, Australia Group, BDEAC, BIS, CCC, CDB (non-regional), CE, CERN, EAPC, EBRD, ECA (associate), ECE, ECLAC, EIB, EMU, ESA, ESCAP, EU, FAO, FZ, G-5, G-7, G-10, IADB, IAEA, IBRD, ICAO, ICC, ICRM, IDA, IEA, IFAD, IFC, IFRCS, IHO, ILO, IMF, International Maritime Organization, Inmarsat, InOC, Intelsat, Interpol, IOC, IOM, ISO, ITU, ITUC, MINURSO, MIPONUH, MONUC, NAM (guest), NATO, NEA, NSG, OAS (observer), OECD, OPCW, OSCE, PCA, SPC, UN, UN Security Council, UNCTAD, UNESCO, UNHCR, UNIDO, UNIFIL, UNIKOM, UNITAR, UNMIBH, UNMIK, UNOMIG, UNRWA, UNTSO, UNU, UPU, WADB (nonregional), WEU, WFTU, WHO, WIPO, WMO, WToO, WTrO, Zangger Committee
France established relations with the Middle East during the reign of Louis XIV. To keep Austria from interfering with his plans in Western Europe, he lent limited support to the Ottoman Empire, though the victories of Prince Eugene of Savoy destroyed these plans. In the nineteenth century, France together with Great Britain tried to strengthen the Ottoman Empire, by then the "sick man of Europe", to resist Russian expansion, culminating in the Crimean War.
France also pursued close relations with semi-autonomous Egypt. In 1869, Egyptian workers, under French supervision, completed the Suez Canal. A rivalry emerged between France and Britain for control of Egypt; Britain eventually prevailed by buying out the Egyptian shares of the canal company before the French had time to act.
After the unification of Germany in 1871, Germany successfully attempted to co-opt France's relations with the Ottomans. In World War I the Ottoman Empire joined the Central Powers, and was defeated by France and Britain. After the collapse of the Ottoman Empire France and Britain divided the Middle East between themselves. France received Syria and Lebanon.
These colonies were granted independence after 1945, but France still tried to forge cultural and educational bonds between the areas, particularly with Lebanon. Relationships with Syria are more strained, due to the policies of that country. In 2005, France, along with the United States, pressured Syria to evacuate Lebanon.
In the post-World War II era French relations with the Arab Middle East reached a very low point. The war in Algeria between Muslim fighters and French colonists deeply concerned the rest of the Muslim world. The Algerian fighters received much of their supplies and funding from Egypt and other Arab powers, much to France's displeasure.
Most damaging to Franco-Arab relations, however, was the Suez Crisis. It greatly diminished France's reputation in the region. France openly supported the Israeli attack on the Sinai peninsula, and was working against Nasser, then a popular figure in the Middle East. The Suez Crisis also made France and the United Kingdom look again like imperialist powers attempting to impose their will upon weaker nations. Another hindrance to France's relations with the Arab Middle East was its close alliance with Israel during the 1950s.
This all changed with the coming of Charles de Gaulle to power. De Gaulle's foreign policy was centered around an attempt to limit the power and influence of both superpowers, and at the same time increase France's international prestige. De Gaulle hoped to move France from being a follower of the United States to becoming the leading nation of a large group of non-aligned countries. The nations de Gaulle saw as potential participants in this group were those in France's traditional spheres of influence: Africa and the Middle East. The former French colonies in eastern and northern Africa were quite agreeable to these close relations with France. These nations had close economic and cultural ties to France, and they also had few other suitors amongst the major powers. This new orientation of French foreign policy also appealed strongly to the leaders of the Arab nations. None of them wanted to be dominated by either of the superpowers, and they supported France's policy of trying to balance the US and the USSR and to prevent either from becoming dominant in the region. The Middle Eastern leaders wanted to be free to pursue their own goals and objectives, and did not want to be chained to either alliance bloc. De Gaulle hoped to use this common foundation to build strong relations between the nations. He also hoped that good relations would improve France's trade with the region. De Gaulle also imagined that these allies would look up to the more powerful French nation, and would look to it for leadership in matters of foreign policy.
The end of the Algerian conflict in 1962 accomplished much in this regard. France could not portray itself as a leader of the oppressed nations of the world if it was still enforcing colonial rule upon another nation. The battle against the Muslim separatists that France waged in favour of the minority of white settlers was an extremely unpopular one throughout the Muslim world. With the conflict raging, it would have been close to impossible for France to have had positive relations with the nations of the Middle East. The Middle Eastern support for the FLN guerrillas was another strain on relations that the end of the conflict removed. Most of the financial and material support for the FLN had come from the nations of the Middle East and North Africa. This was especially true of Nasser's Egypt, which had long supported the separatists. Egypt is also the most direct example of improved relations after the end of hostilities. The end of the war brought an immediate thaw to Franco-Egyptian relations: Egypt ended the trial of four French officers accused of espionage, and France ended its trade embargo against Egypt.
In 1967 de Gaulle completely overturned France's Israel policy. De Gaulle and his ministers reacted very harshly to Israel's actions in the Six-Day War. The French government and de Gaulle condemned Israel's treatment of refugees, warned that it was a mistake to occupy the West Bank and Gaza Strip, and also refused to recognize the Israeli control of Jerusalem. The French government continued to criticize Israel after the war and de Gaulle spoke out against other Israeli actions, such as the operations against the Palestine Liberation Organization in Lebanon. France began to use its veto power to oppose Israel in the UN, and France sided with the Arab states on almost all issues brought to the international body. Most importantly of all, however, de Gaulle's government imposed an arms embargo on the Israeli state. The embargo was in fact applied to all the combatants, but very soon France began selling weaponry to the Arab states again. As early as 1970 France sold Libya a hundred Dassault Mirage fighter jets. However, after 1967 France continued to support Israel's right to exist, as well as Israel's many preferential agreements with France and the European Economic Community.
In the second half of the 20th century, France greatly increased its expenditure on foreign aid, becoming second only to the United States in total aid among the Western powers and first on a per-capita basis. By 1968 France was paying out $855 million per year in aid, far more than either West Germany or the United Kingdom. The vast majority of French aid was directed towards Africa and the Middle East, usually either as a lever to promote French interests or to help with the sale of French products (e.g. arms sales). France also increased its expenditure on other forms of aid, sending skilled individuals to developing countries to provide technical and cultural expertise.
The combination of aid money, arms sales, and diplomatic alignments helped to erase the memory of the Suez Crisis and the Algerian War in the Arab world and France successfully developed amicable relationships with the governments of many of the Middle Eastern states. Nasser and de Gaulle, who shared many similarities, cooperated on limiting American power in the region. Nasser proclaimed France as the only friend of Egypt in the West. France and Iraq also developed a close relationship with business ties, joint military training exercises, and French assistance in Iraq's nuclear program in the 1970s. France improved relations with its former colony Syria, and eroded cultural links were partially restored.
In terms of trade, France did receive some benefits from the improved relations with the Middle East. French trade with the Middle East increased by over fifty percent after de Gaulle's reforms. The arms industries benefited most, as France soon had lucrative contracts with many of the regimes in the Middle East and North Africa, though these contracts account for a negligible part of France's economy.
De Gaulle had hoped that by taking a moderate path and not strongly supporting either side France could take part in the Middle East peace process between Israel and the Arab nations. Instead it has been excluded from any major role.
France plays a significant role in Africa, especially in its former colonies, through extensive aid programs, commercial activities, military agreements, and cultural impact. In those former colonies where the French presence remains important, France contributes to political, military, and social stability. Many think that French policy in Africa – particularly where British interests are also involved – is susceptible to what is known as 'Fashoda syndrome'. Others have criticized the relationship as neocolonialism under the name "Françafrique", stressing France's support of various dictatorships, among others: Omar Bongo, Idriss Déby, and Denis Sassou Nguesso.
France has extensive political and economical relations with Asian countries, including China, India, Japan, South Korea and Southeast Asia as well as an increasing presence in regional fora. France was instrumental in launching the Asia–Europe Meeting (ASEM) process which could eventually emerge as a competitor to APEC. France is seeking to broaden its commercial presence in China and will pose a competitive challenge to U.S. business, particularly in aerospace, high-tech, and luxury markets. In Southeast Asia, France was an architect of the Paris Peace Accords, which ended the conflict in Cambodia.
France does not have formal diplomatic relations with North Korea. North Korea, however, maintains a "delegation" (neither an embassy nor a consulate) near Paris. Like most countries, France neither recognizes nor has formal diplomatic relations with Taiwan, owing to its recognition of China; however, Taiwan maintains a representation office in Paris which functions similarly to an embassy. Likewise, the French Institute in Taipei has an administrative consular section that issues visas and fulfils other missions normally handled by diplomatic outposts.
France has maintained its status as key power in Western Europe because of its size, location, strong economy, membership in European organizations, strong military posture and energetic diplomacy. France generally has worked to strengthen the global economic and political influence of the EU and its role in common European defense and collective security.
France supports the development of a European Security and Defence Identity (ESDI) as the foundation of efforts to enhance security in the European Union. France cooperates closely with Germany and Spain in this endeavor. | https://en.wikipedia.org/wiki?curid=10725 |
French Polynesia
French Polynesia, officially the Collectivity of French Polynesia, is an overseas collectivity of the French Republic and its sole overseas country. It is composed of 118 geographically dispersed islands and atolls stretching over an expanse of more than in the South Pacific Ocean. Its total land area is .
French Polynesia is divided into five groups of islands: the Society Islands archipelago, composed of the Windward Islands and the Leeward Islands; the Tuamotu Archipelago; the Gambier Islands; the Marquesas Islands; and the Austral Islands. Among its 118 islands and atolls, 67 are inhabited. Tahiti, which is located within the Society Islands, is the most populous island, having close to 69% of the population of French Polynesia. Papeete, located on Tahiti, is the capital. Although not an integral part of its territory, Clipperton Island was administered from French Polynesia until 2007.
Following the Great Polynesian Migration, European explorers visited the islands of French Polynesia on several occasions. Traders and whaling ships also visited. In 1842, the French took over the islands and established a French protectorate they called the Établissements français d'Océanie (French Establishments/Settlements of Oceania).
In 1946, the Établissements français de l'Océanie became an overseas territory under the constitution of the French Fourth Republic, and Polynesians were granted the right to vote through citizenship. In 1957, the Établissements français de l'Océanie were renamed French Polynesia. In 1983 French Polynesia became a member of the Pacific Community, a regional development organization. Since 28 March 2003, French Polynesia has been an overseas collectivity of the French Republic under the constitutional revision of article 74, and later gained, with law 2004-192 of 27 February 2004, administrative autonomy, two symbolic manifestations of which are the title of the President of French Polynesia and its additional designation as an overseas country.
Scientists believe the Great Polynesian Migration commenced around 1500 BC as Austronesian peoples went on a journey using celestial navigation to find islands in the South Pacific Ocean. The first islands of French Polynesia to be settled were the Marquesas Islands in about 200 BC. The Polynesians later ventured southwest and discovered the Society Islands around AD 300.
European encounters began in 1521 when Portuguese explorer Ferdinand Magellan, sailing in the service of the Spanish Crown, sighted Puka-Puka in the Tuāmotu-Gambier Archipelago. In 1606 another Spanish expedition under Pedro Fernandes de Queirós sailed through Polynesia, sighting, on 10 February, an inhabited island which they called Sagitaria (or Sagittaria), probably the island of Rekareka to the southeast of Tahiti. In 1722, Dutchman Jakob Roggeveen, while on an expedition sponsored by the Dutch West India Company, charted the location of six islands in the Tuamotu Archipelago and two islands in the Society Islands, one of which was Bora Bora.
British explorer Samuel Wallis became the first European navigator to visit Tahiti in 1767. French explorer Louis Antoine de Bougainville also visited Tahiti in 1768, while British explorer James Cook arrived in 1769. Cook would stop in Tahiti again in 1773 during his second voyage to the Pacific, and once more in 1777 during his third and last voyage before being killed in Hawaii.
In 1772, the Spanish Viceroy of Peru Don Manuel de Amat ordered a number of expeditions to Tahiti under the command of Domingo de Bonechea who was the first European to explore all of the main islands beyond Tahiti. A short-lived Spanish settlement was created in 1774, and for a time some maps bore the name "Isla de Amat" after Viceroy Amat. Christian missions began with Spanish priests who stayed in Tahiti for a year. Protestants from the London Missionary Society settled permanently in Polynesia in 1797.
King Pōmare II of Tahiti was forced to flee to Mo'orea in 1803; he and his subjects were converted to Protestantism in 1812. French Catholic missionaries arrived on Tahiti in 1834; their expulsion in 1836 caused France to send a gunboat in 1838. In 1842, Tahiti and Tahuata were declared a French protectorate, to allow Catholic missionaries to work undisturbed. The capital of Papeetē was founded in 1843. In 1880, France annexed Tahiti, changing the status from that of a protectorate to that of a colony. The island groups were not officially united until the establishment of the French protectorate in 1889.
After France declared a protectorate over Tahiti in 1842 and fought a war with Tahiti (1844–1847), the British and French signed the Jarnac Convention in 1847, declaring that the kingdoms of Raiatea, Huahine and Bora Bora were to remain independent from either power and that no single chief was to be allowed to reign over the entire archipelago. France eventually broke the agreement, and the islands were annexed and became a colony in 1888 (eight years after the Windward Islands), after years of native resistance and conflict known as the Leewards War, which lasted until 1897.
In the 1880s, France claimed the Tuamotu Archipelago, which formerly belonged to the Pōmare Dynasty, without formally annexing it. Having declared a protectorate over Tahuata in 1842, the French regarded the entire Marquesas Islands as French. In 1885, France appointed a governor and established a general council, thus giving it the proper administration for a colony. The islands of Rimatara and Rūrutu unsuccessfully lobbied for British protection in 1888, so in 1889 they were annexed by France. Postage stamps were first issued in the colony in 1892. The first official name for the colony was "Établissements de l'Océanie" (Establishments in Oceania); in 1903 the general council was changed to an advisory council and the colony's name was changed to "Établissements Français de l'Océanie" (French Establishments in Oceania).
In 1940, the administration of French Polynesia recognised the Free French Forces and many Polynesians served in World War II. Unknown at the time to the French and Polynesians, the Konoe Cabinet in Imperial Japan on 16 September 1940 included French Polynesia among the many territories which were to become Japanese possessions, as part of the "Eastern Pacific Government-General" in the post-war world. However, in the course of the war in the Pacific the Japanese were not able to launch an actual invasion of the French islands.
In 1946, Polynesians were granted French citizenship and the islands' status was changed to an overseas territory; the islands' name was changed in 1957 to "Polynésie Française" (French Polynesia). In 1962, Algeria, France's early nuclear testing ground, became independent, and the Moruroa atoll in the Tuamotu Archipelago was selected as the new testing site; tests were conducted underground after 1974. In 1977, French Polynesia was granted partial internal autonomy; in 1984, the autonomy was extended. French Polynesia became a full overseas collectivity of France in 2003.
In September 1995, France stirred up widespread protests by resuming nuclear testing at Fangataufa atoll after a three-year moratorium. The last test was on 27 January 1996. On 29 January 1996, France announced that it would accede to the Comprehensive Test Ban Treaty, and no longer test nuclear weapons.
French Polynesia was relisted on the UN List of Non-Self-Governing Territories in 2013, making it eligible for a UN-backed independence referendum. The relisting came after indigenous opposition was voiced and supported by the Polynesian Leaders Group, the Pacific Conference of Churches, the Women's International League for Peace and Freedom, the Non-Aligned Movement, the World Council of Churches, and the Melanesian Spearhead Group.
Under the terms of Article 74 of the French constitution and the Organic Law 2014-192 on the statute of autonomy of French Polynesia, politics of French Polynesia takes place in a framework of a parliamentary representative democratic French overseas collectivity, whereby the President of French Polynesia is the head of government, and of a multi-party system. Executive power is exercised by the government. Legislative power is vested in both the government and the Assembly of French Polynesia (the territorial assembly).
Political life in French Polynesia has been marked by great instability since the mid-2000s. On 14 September 2007, the pro-independence leader Oscar Temaru was elected president of French Polynesia for the third time in three years (with 27 of 44 votes cast in the territorial assembly). He replaced former president Gaston Tong Sang, opposed to independence, who lost a no-confidence vote in the Assembly of French Polynesia on 31 August after the longtime former president of French Polynesia, Gaston Flosse, hitherto opposed to independence, sided with his longtime rival Oscar Temaru to topple the government of Gaston Tong Sang. Oscar Temaru, however, had no stable majority in the Assembly of French Polynesia, and new territorial elections were held in February 2008 to resolve the political crisis.
The party of Gaston Tong Sang won the territorial elections, but that did not solve the political crisis: the two minority parties of Oscar Temaru and Gaston Flosse, who together have one more member in the territorial assembly than the political party of Gaston Tong Sang, allied to prevent Gaston Tong Sang from becoming president of French Polynesia. Gaston Flosse was then elected president of French Polynesia by the territorial assembly on 23 February 2008 with the support of the pro-independence party led by Oscar Temaru, while Oscar Temaru was elected speaker of the territorial assembly with the support of the anti-independence party led by Gaston Flosse. Both formed a coalition cabinet. Many observers doubted that the alliance between the anti-independence Gaston Flosse and the pro-independence Oscar Temaru, designed to prevent Gaston Tong Sang from becoming president of French Polynesia, could last very long.
At the French municipal elections held in March 2008, several prominent mayors who were members of the Flosse-Temaru coalition lost their offices in key municipalities of French Polynesia, which was interpreted as disapproval of the way Gaston Tong Sang, whose party French Polynesian voters had placed first in the territorial elections the month before, had been prevented from becoming president of French Polynesia by the last-minute alliance between Flosse and Temaru's parties. Eventually, on 15 April 2008 the government of Gaston Flosse was toppled by a constructive vote of no confidence in the territorial assembly when two members of the Flosse-Temaru coalition left the coalition and sided with Tong Sang's party. Gaston Tong Sang was elected president of French Polynesia as a result of this constructive vote of no confidence, but his majority in the territorial assembly was very narrow. He offered posts in his cabinet to Flosse and Temaru's parties, which they both refused. Gaston Tong Sang has called on all parties to help end the instability in local politics, a prerequisite for attracting the foreign investors needed to develop the local economy.
Between 1946 and 2003, French Polynesia had the status of an overseas territory (French: "territoire d'outre-mer", or TOM). In 2003, it became an overseas collectivity (French: "collectivité d'outre-mer", or COM). Its statutory law of 27 February 2004 gives it the particular designation of "overseas country inside the Republic" ("pays d'outre-mer au sein de la République", or POM), but without legal modification of its status.
Despite a local assembly and government, French Polynesia is not in a free association with France, like the Cook Islands with New Zealand. As a French overseas collectivity, the local government has no competence in justice, university education, security and defence. Services in these areas are directly provided and administered by the Government of France, including the National Gendarmerie (which also polices rural and border areas in metropolitan France) and French military forces. The collectivity government retains control over primary and secondary education, health, town planning, and the environment. The highest representative of the State in the territory is the High Commissioner of the Republic in French Polynesia ("Haut-Commissaire de la République en Polynésie française").
French Polynesia also sends three deputies to the French National Assembly, one representing the Leeward Islands administrative subdivision and the south-western suburbs of Papeete, another one representing Papeete and its north-eastern suburbs, plus the commune (municipality) of Mo'orea-Mai'ao, the Tuāmotu-Gambier administrative division, and the Marquesas Islands administrative division, and the last one representing the rest of Tahiti and the Austral Islands administrative subdivision. French Polynesia also sends two senators to the French Senate.
French Polynesians vote in French presidential elections. In the 2007 French presidential election, in which the pro-independence leader Oscar Temaru openly called for a vote for the Socialist candidate Ségolène Royal while the parties opposed to independence generally supported the centre-right candidate Nicolas Sarkozy, turnout in French Polynesia was 69.12% in the first round of the election and 74.67% in the second round, with Sarkozy coming out ahead in both rounds (as compared to the metropolitan France result in the second round: Nicolas Sarkozy 51.9%; Ségolène Royal 48.1%).
The islands of French Polynesia make up a total land area of , scattered over more than of ocean. There are 118 islands in French Polynesia and many more islets or "motus" around atolls. The highest point is Mount Orohena on Tahiti.
It is made up of six archipelagos. The largest and most populated island is Tahiti, in the Society Islands.
The archipelagos are: the Windward Islands and the Leeward Islands (together forming the Society Islands), the Tuamotu Archipelago, the Gambier Islands, the Marquesas Islands, and the Austral Islands.
Aside from Tahiti, some other important atolls, islands, and island groups in French Polynesia are: Ahē, Bora Bora, Hiva 'Oa, Huahine, Mai'ao, Maupiti, Meheti'a, Mo'orea, Nuku Hiva, Raiatea, Taha'a, Tetiaroa, Tupua'i and Tūpai.
French Polynesia has five administrative subdivisions ("subdivisions administratives"):
Total population at the August 2017 census was 275,918 inhabitants. At the 2017 census, 68.7% of the population of French Polynesia lived on the island of Tahiti alone. The urban area of Papeete, the capital city, has 136,771 inhabitants (2017 census).
At the 2017 census, 89.0% of people living in French Polynesia were born in French Polynesia (up from 87.3% in 2007), 8.1% were born in metropolitan France (down from 9.3% in 2007), 1.2% were born in overseas France outside of French Polynesia (down from 1.4% in 2007), and 1.7% were born in foreign countries (down from 2.0% in 2007). The population of natives of metropolitan France living in French Polynesia has declined in relative terms since the 1980s, but in absolute terms their population peaked at the 2007 census with 24,265 natives of metropolitan France living in French Polynesia that year (not counting their children born in French Polynesia). With the local economic crisis, their population declined to 22,278 at the 2012 census, and 22,387 at the 2017 census.
At the 1988 census, the last census which asked questions regarding ethnicity, 66.5% of people were ethnically unmixed Polynesians, 7.1% were ethnically Polynesians with light European and/or East Asian mixing, 11.9% were Europeans (mostly French), 9.3% were people of mixed European and Polynesian descent, the so-called Demis (literally meaning "Half"), and 4.7% were East Asians (mainly Chinese).
Chinese, Demis, and the white populace are essentially concentrated on the island of Tahiti, particularly in the urban area of Papeete, where their share of the population is thus much greater than in French Polynesia overall. Despite a long history of ethnic mixing, ethnic tensions have been growing in recent years, with politicians using a xenophobic discourse and fanning the flame of nationalism.
French is the only official language of French Polynesia. An organic law of 12 April 1996 states that "French is the official language, Tahitian and other Polynesian languages can be used." At the 2017 census, among the population whose age was 15 and older, 73.9% of people reported that the language they spoke the most at home was French (up from 68.6% at the 2007 census), 20.2% reported that the language they spoke the most at home was Tahitian (down from 24.3% at the 2007 census), 2.6% reported Marquesan and 0.2% the related Mangareva language (same percentages for both at the 2007 census), 1.2% reported any of the Austral languages (down from 1.3% at the 2007 census), 1.0% reported Tuamotuan (down from 1.5% at the 2007 census), 0.6% reported a Chinese dialect (41% of which was Hakka) (down from 1.0% at the 2007 census), and 0.4% another language (more than half of which was English) (down from 0.5% at the 2007 census).
At the same census, 95.2% of people whose age was 15 or older reported that they could speak, read and write French (up from 94.7% at the 2007 census), whereas only 1.3% reported that they had no knowledge of French (down from 2.0% at the 2007 census). 86.5% of people whose age was 15 or older reported that they had some form of knowledge of at least one Polynesian language (up from 86.4% at the 2007 census but down from 87.8% at the 2012 census), whereas 13.5% reported that they had no knowledge of any of the Polynesian languages (down from 13.6% at the 2007 census but up from 12.2% at the 2012 census).
French Polynesia appeared in the world music scene in 1992, recorded by French musicologist Pascal Nabet-Meyer with the release of The Tahitian Choir's recordings of unaccompanied vocal Christian music called himene tārava. This form of singing is common in French Polynesia and the Cook Islands, and is notable for a unique drop in pitch at the end of the phrases, a characteristic formed by several different voices, accompanied by a steady grunting of staccato, nonlexical syllables.
Christianity is the main religion of the islands. A majority of 54% belongs to various Protestant churches, especially the Maohi Protestant Church, which is the largest and accounts for more than 50% of the population. It traces its origins to Pomare II, the king of Tahiti, who converted from traditional beliefs to the Reformed tradition brought to the islands by the London Missionary Society.
Latin Rite Roman Catholics constitute a large minority of 30% of the population, which has its own ecclesiastical province, comprising the Metropolitan Archdiocese of Papeete and its only suffragan, the Diocese of Taiohae. The Church of Jesus Christ of Latter-day Saints had 28,147 members. Community of Christ, another denomination within the Latter-Day Saint tradition, claimed 7,990 total French Polynesian members as of 2015, including Mareva Arnaud Tchong, who serves in the church's governing Council of Twelve Apostles. There were about 3,000 Jehovah's Witnesses in Tahiti.
There are an estimated 500 Muslims in French Polynesia.
The sport of football on the island of Tahiti is run by the Fédération Tahitienne de Football.
The traditional Polynesian sport of va'a (outrigger canoe racing) is practiced on all the islands. French Polynesia hosts the Hawaiki Nui Va'a, an international race between Tahiti, Huahine and Bora Bora.
French Polynesia is famous for its reef-break waves. Teahupo'o is probably the most renowned, regularly ranked among the best waves in the world. The site hosts the annual Billabong Pro Tahiti surf competition, the seventh stop of the World Championship Tour.
There are many spots to practice kitesurfing in French Polynesia, with Tahiti, Moorea, Bora-Bora, Maupiti and Raivavae being among the most iconic.
French Polynesia is internationally known for diving. Each archipelago offers opportunities for divers. Rangiroa and Fakarava in the Tuamotu islands are the most famous spots in the area.
Rugby, specifically rugby union, is also popular in French Polynesia.
The legal tender of French Polynesia is the CFP franc, which has a fixed exchange rate with the euro. The nominal gross domestic product (GDP) of French Polynesia in 2014 was 5.623 billion U.S. dollars at market prices, the sixth-largest economy in Oceania after Australia, New Zealand, Hawaii, New Caledonia, and Papua New Guinea. GDP per capita was $20,098 in 2014 (at market exchange rates, not at PPP), lower than in Hawaii, Australia, New Zealand, and New Caledonia, but higher than in all the independent insular states of Oceania. Both the per-capita and total figures were significantly lower than those recorded before the financial crisis of 2007–08.
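Taken together, the total and per-capita GDP figures imply a 2014 population of roughly 280,000, in line with the 275,918 inhabitants counted at the August 2017 census. A minimal sketch of the division in Python (the rounding is mine, not from the source):

```python
# Population implied by the 2014 GDP figures quoted above
gdp_total = 5.623e9       # nominal GDP, 2014, US dollars at market exchange rates
gdp_per_capita = 20_098   # nominal GDP per capita, 2014, US dollars

implied_population = gdp_total / gdp_per_capita
print(f"Implied 2014 population: {implied_population:,.0f}")  # -> about 279,779
```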
French Polynesia has a moderately developed economy, which is dependent on imported goods, tourism, and financial assistance from mainland France. Tourist facilities are well developed and available on the major islands. The main agricultural products are coconuts (copra), vegetables and fruits. French Polynesia exports noni juice, high-quality vanilla, and the famous black Tahitian pearls, which accounted for 55% of exports (by value) in 2008.
French Polynesia's seafloor contains rich deposits of nickel, cobalt, manganese, and copper that are not exploited.
In 2008, French Polynesia's imports amounted to 2.2 billion U.S. dollars and exports amounted to 0.2 billion U.S. dollars.
There are 53 airports in French Polynesia; 46 are paved. Fa'a'ā International Airport is the only international airport in French Polynesia. Each island has its own airport that serves flights to other islands. Air Tahiti is the main airline that flies around the islands.
In 2017, Alcatel Submarine Networks, a unit of Nokia, launched a major project to connect many of the islands of French Polynesia with underwater fibre-optic cable. The project, called NATITUA, will improve French Polynesian broadband connectivity by linking Tahiti to 10 islands in the Tuamotu and Marquesas archipelagos. In August 2018, a celebration was held to commemorate the arrival of a submarine cable from Papeete to the atoll of Hao, extending the network by about 1,000 kilometres. | https://en.wikipedia.org/wiki?curid=10737
Demographics of French Polynesia
This article is about the demographic features of the population of French Polynesia, including population density, ethnicity, education level, health of the populace, economic status, religious affiliations and other aspects of the population.
The following demographic statistics are from the CIA World Factbook, unless otherwise indicated. | https://en.wikipedia.org/wiki?curid=10740 |
Politics of French Polynesia
Politics of French Polynesia takes place in a framework of a parliamentary representative democratic French overseas collectivity, whereby the President of French Polynesia is the head of government, and of a multi-party system. Executive power is exercised by the government. Legislative power is vested in both the government and the Assembly of French Polynesia.
Between 1946 and 2003, French Polynesia had the status of an overseas territory (French: "territoire d'outre-mer", or "TOM"). In 2003 it became an overseas collectivity (French: "collectivité d'outre-mer", or COM). Its statutory law of 27 February 2004 gives it the particular designation of "overseas country" to underline the territory's broad autonomy.
The President of the French Republic is represented by the High Commissioner of the Republic in French Polynesia ("Haut-Commissaire de la République en Polynésie française"). The government is headed by the President of French Polynesia, who submits a list of members of the territorial assembly, the Assembly of French Polynesia ("Assemblée de la Polynésie française"), for its approval to serve as ministers in the Council of Ministers.
It has been hinted that the president elected on September 14 will serve as an interim president until a new round of parliamentary elections, expected by the end of the year, followed by a new presidential election.
French Polynesia elects the Assembly of French Polynesia ("Assemblée de la Polynésie française"), the unicameral legislature at the territorial level. The Assembly has 57 members, elected for five-year terms by proportional representation in multi-seat constituencies. Since the territorial elections of 6 March 2001, a parity law requires the number of women elected to the Assembly to match the number of men.
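The source does not specify which proportional formula is used to fill the 57 seats, so the following is only an illustrative sketch: the D'Hondt highest-averages method is one common way to allocate seats in multi-seat constituencies, shown here in Python with hypothetical party names and vote counts.

```python
from collections import Counter

def dhondt(votes: dict[str, int], seats: int) -> Counter:
    """Allocate seats with the D'Hondt highest-averages method."""
    won = Counter()
    for _ in range(seats):
        # Each party's current quotient is votes / (seats already won + 1);
        # the next seat goes to the party with the highest quotient.
        leader = max(votes, key=lambda p: votes[p] / (won[p] + 1))
        won[leader] += 1
    return won

# Hypothetical 7-seat constituency contested by three parties
print(dhondt({"A": 42_000, "B": 31_000, "C": 11_000}, seats=7))
# -> Counter({'A': 4, 'B': 2, 'C': 1})
```

Real-world variants layer thresholds or majority bonuses on top of this basic allocation.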
The members of the Assembly of French Polynesia are elected in six different electoral districts or electoral circumscriptions ("circonscriptions électorales"), which differ slightly from the administrative subdivisions ("subdivisions administratives") on the Tuamotus and the Gambier Islands. The six electoral circumscriptions are:
Court of Appeal or Cour d'Appel; Court of the First Instance or Tribunal de Première Instance; Court of Administrative Law or Tribunal Administratif.
French Polynesia has five administrative subdivisions ("subdivisions administratives"):
"note:" Clipperton Island (), just off the coast of Mexico, was administered by France from French Polynesia.
ESCAP (associate), FZ, ITUC, SPC, WMO | https://en.wikipedia.org/wiki?curid=10741 |
Economy of French Polynesia
The economy of French Polynesia is that of a developed country, with a service sector accounting for 75% of GDP. French Polynesia's GDP per capita is around $22,000, one of the highest in the Pacific region.
Before French colonisation, the Polynesian islands that now constitute French Polynesia relied on a subsistence economy. Work was heavily organised and performed collectively under the direction of the Arii ruling class and the priests. Mountains were terraced for agricultural production, river banks were contained by stone walls, artificial soil was created on atolls in large trenches, and large systems of coral stone walls trapped and stocked live fish. Output was divided among the population by the ruling class.
After contact was established with European ships, foreign diseases killed large portions of the population, and Christian beliefs and clergy produced a huge shift in the culture of the islands. With a smaller population to feed, more land per capita was available, and land use shifted toward the limited production a family required to live. Settlements moved toward the seashore as the population relied more on the lagoon and on sea trade. European ships stopped at the islands to purchase water, salted pork, dried fish and fresh fruit.
As French, English and American settlers arrived, part of the agriculture moved toward exports of oranges, copra, coffee, cotton, and vanilla. Tahitian black pearls and sandalwood were also exported. Sandalwood nearly disappeared, cotton production was short-lived as the American South recovered from the Civil War, and coffee and orange trees suffered from imported diseases that ended those exports. Worldwide prices and competition heavily impacted copra and vanilla production in the second half of the 20th century, although both still exist. Guano mining at Makatea started in 1917 and stopped in 1966 when the deposits were depleted.
In 1962, France stationed military personnel in the region and began nuclear testing at Moruroa. French Polynesia's economy switched to services to support the military and the growing tourist industry.
Tourism nowadays accounts for about 13% of GDP and is a primary source of foreign currency earnings. The tourist industry was heavily impacted by the 9/11 terrorist attacks and the 2008 economic crisis, and has never fully recovered. There are around 160,000 tourists per year. The local government mostly focuses on developing a high-end market, with luxury hotels built with foreign investment and French tax-cut incentives, but many of these investments close after a few years. The subsidized airline Air Tahiti Nui brings tourists from France, Los Angeles, Japan and China. Other companies, such as Air France and Air New Zealand, also operate.
The small manufacturing sector primarily processes agricultural products. Vanilla and pearls are its main exports.
Public administration is an important part of GDP and a provider of stable employment. The French Republic finances the functionaries working in education, justice, hospitals, the gendarmerie (military police), and the military. The local government controls its own administration, such as the ministry of agriculture, and oversees the administration and buildings of some sectors like schools. The local government also influences a large part of the economy through subsidies and development programs.
Some parts of the economy involve quasi-monopolistic groups, owing to the small size of the economy, the challenges of a country of small islands spread across a huge oceanic space, and the action of the government through subsidies and public companies. Some sectors show an important trend toward horizontal and vertical integration. The local government has recently tried to maintain healthy competition and regulate the growth of the biggest groups, but faces many challenges. For example, it was unable to prevent a major supermarket group from developing its own vegetable production and ending its supply contracts with local farmers, but it blocked the merger of two local shipping companies to avoid a monopoly on some trade routes. The price of shipping goods between islands is fixed by the government, and subsidies are provided for transporting some items such as farming products or construction materials.
Price margins on some products are controlled by the local government to reduce the disparity of prices between the different archipelagos. Import taxes and VAT are fixed and collected by the local government, which also controls which imports are allowed in order to protect its agriculture and nature from diseases and invasive species.
The majority of the population is of mixed Polynesian and European origin. Around 5% of the population is of Asian origin, descending from farm workers imported in the 19th century to work in the cotton fields. They are present in the administration and trading sector of the economy. The recent metropolitan population is mostly involved in the state administration and in small and medium-sized enterprises.
Most Polynesians in agriculture farm traditional products such as taro, ufi, cassava and sweet potato to feed themselves, selling small surpluses for monetary income alongside small-scale fishing. Farmers of Asian origin tend to produce European and Asian vegetables for the local market.
The island of Moorea developed pineapple production for the local market and to supply the juice factory. Maupiti and Huahine produce watermelons. Tahiti and Tahaa have a small production of sugarcane for rum distilleries.
Tahiti produces a small quantity of fresh milk, mostly for the local yogurt factory, as most of the population is used to drinking UHT and powdered milk from France and New Zealand. French Polynesia has a single slaughterhouse, processing beef, pork, and chicken. Local beef production is very limited and mostly used to supply the local corned beef factory. Most of the meat comes from New Zealand, amounting to around 10% of that country's fresh meat exports. Two charcuteries produce ham, sausages, and pâtés from local and imported pork.
Copra production is heavily subsidized, as the local government treats it as a form of social support for remote islands with a limited range of possible economic activities, such as the Tuamotu atolls. The copra is milled by the Huilerie de Tahiti to produce coconut oil, mostly used for monoï. The coconut cake residue is used as cattle and pig feed, and the surplus used to be exported to New Zealand.
Vanilla production depends heavily on the situation in Madagascar. When a typhoon hit that main supplier of vanilla, the market price increased worldwide and the local Polynesian government started a heavy program of subsidies and loans to develop vanilla farms. As Polynesian production increased and Madagascar recovered, prices dropped and many Polynesian farmers stopped caring for their vanilla plants. The plants are fragile and require the regular care of experienced farmers. Diseases and insects can heavily reduce production, and the cost of the chemical products used hits farmers harder when vanilla prices are low. As vanilla production falls, prices increase and the government starts a new development program, beginning the cycle again. Despite the high price of Tahitian dried vanilla on the international market, it usually still finds buyers in the high-end market because of the specificities of its cultivar and its quality.
In the 1990s, commercial production of noni started because of the supposed health benefits of the fruit's juice. Exports were mostly directed toward the North American market. But this production was short-lived, falling quickly from 7,000 tonnes in 2005 to 2,000 tonnes in 2008, as the plant can easily be farmed in any tropical climate, especially in countries with lower labor costs and more land.
A small vineyard exists on Rangiroa atoll, aimed at the high-end market and capitalizing on the rarity and specificity of a vine grown on coral soil on a tropical island.
French Polynesia's electricity production in 2004 was 477 GWh. In 1998, 59.72% of French Polynesia's electricity came from fossil fuels, with the remainder from hydropower.
French Polynesia uses the Comptoirs Français du Pacifique franc (CFP franc), with 1 CFP franc subdivided into 100 centimes. The CFP franc was formerly linked at the exact official rate of 0.055 French francs to one CFP franc. When France switched its currency to the euro in 1999 this fixed link remained, so that the rate is now about 119.26 CFP francs to one euro (1 euro being exactly 6.55957 French francs). In 2016 the exchange rate was 110.2 CFP francs per US dollar.
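As a quick sanity check, the euro rate quoted above follows directly from the two fixed pegs in this paragraph, since both currencies were tied to the French franc (a minimal derivation from those figures, not an independently sourced parity):

\[
1\ \text{euro} = \frac{6.55957\ \text{FRF}}{0.055\ \text{FRF per CFP franc}} \approx 119.2649\ \text{CFP francs},
\]

which rounds to the "about 119.26" rate given above. | https://en.wikipedia.org/wiki?curid=10742 |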
Telecommunications in French Polynesia
This article is about communications systems in French Polynesia.
The Honotua fiber optic cable connected Tahiti to Hawaii in 2010, increasing Internet speeds to 20 gigabits per second from 500 megabits per second. The cable will also connect to Moorea and the Leeward Islands of Huahine, Raiatea and Bora Bora.
Main lines in use: 32,000 (1995)
Mobile cellular: 4,000 (1995)
Telephone system:
Domestic:
N/A
International:
Satellite Earth station—1 Intelsat (Pacific Ocean)
Radio broadcast stations:
AM 2, FM 14, shortwave 2 (1998)
Radios:
128,000 (1997)
Television broadcast stations:
7 (plus 17 low-power repeaters) (1997)
Televisions:
40,000 (1997)
Internet Service Providers (ISPs):
OPT (national operator)
Country code (Top Level Domain): PF
ITU Prefix: F
Amateur radio prefix (Designated by France): FO | https://en.wikipedia.org/wiki?curid=10743 |
French Southern and Antarctic Lands
The French Southern and Antarctic Lands ("Terres australes et antarctiques françaises", TAAF) is an Overseas Territory ("territoire d'outre-mer", or "TOM") of France. It consists of:
The territory is sometimes referred to as the French Southern Lands or the French Southern Territories, usually to emphasize non-recognition of French sovereignty over "Adélie Land" as part of the Antarctic Treaty System.
Approximately 150 (in the winter) to 310 (in the summer) people live in the French Southern and Antarctic Lands, but they are only military personnel, officials, scientific researchers and support staff. The territory legally has no permanent civilian population.
On July 5, 2019, the French Austral Lands and Seas were inscribed as a UNESCO World Heritage Site.
The French Southern and Antarctic Lands have formed a "territoire d'outre-mer" (an overseas territory) of France since 1955. Formerly, they were administered from Paris by an "administrateur supérieur" assisted by a secretary-general; since December 2004, however, their administrator has been a "préfet", currently Cécile Pozzo di Borgo, with headquarters in Saint Pierre on Réunion Island.
The territory is divided into five districts:
Each district is headed by a district chief, who has powers similar to those of a French mayor (including recording births and deaths and being an officer of judicial police).
Because there is no permanent population, there is no elected assembly, nor does the territory send representatives to the national parliament.
The territory includes Amsterdam Island, Saint Paul Island, the Crozet Islands, and the Kerguelen Islands in the southern Indian Ocean near 43°S, 67°E, along with "Adélie Land", the sector of Antarctica claimed by France, named by the French explorer Jules Dumont d'Urville after his wife.
"Adélie Land" (about ) and the islands, totaling , have no indigenous inhabitants, though in 1997 there were about 100 researchers whose numbers varied from winter (July) to summer (January).
Amsterdam Island and Saint Paul Island are extinct volcanoes and have been delineated as the Amsterdam and Saint-Paul Islands temperate grasslands ecoregion. The highest point in the territory is Mont Ross on Kerguelen Island. There are very few airstrips on the islands, existing only on islands with weather stations, and the coastline has no ports or harbors, only offshore anchorages.
The islands in the Indian Ocean are supplied by the special ship "Marion Dufresne" sailing out of Le Port in Réunion Island. Terre Adélie is supplied by "L'Astrolabe" sailing out of Hobart in Tasmania.
However, the territory has a merchant marine fleet totaling 2,892,911 GRT (1999), including seven bulk carriers, five cargo ships, ten chemical tankers, nine container ships, six liquefied gas carriers, 24 petroleum tankers, one refrigerated cargo ship, and ten roll-on/roll-off (RORO) carriers. This fleet is maintained as a subset of the French register that allows French-owned ships to operate under more liberal taxation and manning regulations than permissible under the main French register. This register, however, is to vanish, replaced by the International French Register ("Registre International Français", RIF).
The territory's natural resources are limited to fish and crustaceans. Economic activity is limited to servicing meteorological and geophysical research stations and French and other fishing fleets.
The main fish resources are Patagonian toothfish and spiny lobster. Both are poached by foreign fleets; because of this, the French Navy, and occasionally other services, patrol the zone and arrest poaching vessels. Such arrests can result in heavy fines and/or the seizure of the ship.
France previously sold licenses to foreign fisheries to fish the Patagonian toothfish; because of overfishing, it is now restricted to a small number of fisheries from Réunion Island.
The territory takes in revenues of about €16 million a year.
The French Southern Territories (i.e. the TAAF excluding Adélie Land) have been given the following country codes: FS (FIPS) and TF (ISO 3166-1 alpha-2). | https://en.wikipedia.org/wiki?curid=10747 |
History of French Guiana
The history of French Guiana spans many centuries. Before the first Europeans arrived, there was no written history in the territory. It was originally inhabited by a number of Native American peoples, among them the Kalina (Caribs), Arawak, Emerillon, Galibi, Palikur, Wayampi (also known as Oyampi), and Wayana. The first Europeans arrived in the expeditions of Christopher Columbus, shortly before 1500.
In 1498 French Guiana was first visited by Europeans when Christopher Columbus sailed to the region on his third voyage and named it the "Land of Pariahs". In 1608 the Grand Duchy of Tuscany sent an expedition to the area in order to create an Italian colony for the commerce of Amazonian products to Renaissance Italy, but the sudden death of Ferdinando I de' Medici, Grand Duke of Tuscany stopped it.
In 1624 the French attempted to settle in the area but were forced to abandon it in the face of hostility from the Portuguese, who viewed it as a violation of the Treaty of Tordesillas. However, French settlers returned in 1630 and in 1643 managed to establish a settlement at Cayenne along with some small-scale plantations. This second attempt was again abandoned following Amerindian attacks. In 1658 the Dutch West India Company seized French territory to establish the Dutch colony of Cayenne. The French returned once more in 1664, and founded a second settlement at Sinnamary (this was attacked by the Dutch in 1665).
In 1667 the English seized the area. Following the Treaty of Breda on 31 July 1667 the area was given back to France. The Dutch briefly occupied it for a period in 1676.
After the Treaty of Paris in 1763, which deprived France of almost all her possessions in the Americas other than Guiana and a few islands, Louis XV sent thousands of settlers to Guiana who were lured there with stories of plentiful gold and easy fortunes to be made. Instead they found a land filled with hostile natives and tropical diseases. One and a half years later only a few hundred survived. These fled to three small islands which could be seen off shore and named them the Iles de Salut (or "Islands of Salvation"). The largest was called Royal Island, another St. Joseph (after the patron saint of the expedition), and the smallest of the islands, surrounded by strong currents, Île du Diable (the infamous "Devil's Island"). When the survivors of this ill-fated expedition returned home, the terrible stories they told of the colony left a lasting impression in France.
In 1776, Pierre-Victor Malouet was appointed to the colony; he brought in Jean Samuel Guisan to establish agriculture there. This relatively good period ended in 1792, during the French Revolution, when the first prison for priests and political enemies opened in Sinnamary, setting a precedent.
During the Revolution, the National Convention voted to abolish slavery in February 1794, months after the rebelling slaves had already announced an abolition of slavery in Saint-Domingue. However, the 1794 decree was only implemented in Saint-Domingue, Guadeloupe and French Guiana, and was a dead letter in Senegal, Mauritius, Réunion and Martinique, the last of which had been conquered by the British, who maintained the institution of slavery on that Caribbean island.
In 1794, after the death of Robespierre, 193 of his followers were sent to French Guiana. In 1797 the republican general Pichegru and many deputies and journalists were also sent to the colony. When they arrived they found that only 54 of the 193 deportees sent out three years earlier were left; 11 had escaped, and the rest had died of tropical fevers and other diseases. Pichegru managed to escape to the United States and then returned to France, where he was eventually executed for plotting against Napoleon.
Later on, slaves were brought out from Africa and plantations were established along the more disease-free rivers. Exports of sugar, hardwood, Cayenne pepper and other spices brought a certain prosperity to the colony for the first time. Cayenne, the capital, was surrounded by plantations, some of which had several thousand slaves.
In 1809 an Anglo-Portuguese naval squadron took French Guiana (ousting governor Victor Hugues) and gave it to the Portuguese in Brazil. However, with the signing of the Treaty of Paris in 1814 the region was handed back to the French, though a Portuguese presence remained until 1817.
In 1848 France abolished slavery and the ex-slaves fled into the rainforest, setting up communities similar to the ones they had come from in Africa. Subsequently called Maroons, they formed a sort of buffer zone between the Europeans (who settled along the coast and main rivers) and the unconquered (and often hostile) Native American tribes of the inland regions. Deprived of slave labour the plantations were soon taken over by the jungle, and the planters ruined.
In 1850 several shiploads of Indians, Malays and Chinese were brought out to work the plantations but, instead, they set up shops in Cayenne and other settlements.
In 1852 the first shiploads of chained convicts arrived from France. In 1885, to get rid of habitual criminals and to increase the number of colonists, the French Parliament passed a law that anyone, male or female, who had more than three sentences for theft of more than three months each, would be sent to French Guiana as a "relégué". These "relégués" were to be kept in prison there for six months but then freed to become settlers in the colony. However, this experiment failed dismally. The ex-prisoners, unable to make a living off the land found themselves forced to revert to crime or to eke out a hand-to-mouth existence until they died. In fact, transportation to French Guiana as a "relégué" amounted to a life sentence, and usually a short life sentence, as most of the "relégués" died very quickly from disease and malnutrition.
The prisoners would arrive at St Laurent du Maroni before being transported to various camps throughout the country. The Iles du Salut were used to house political prisoners and for solitary confinement. The islands became notorious for the brutality of life there, centering on the notorious Devil's Island. Famous figures sent to the islands included Alfred Dreyfus (in 1895) and Henri Charrière (in the 1930s). Charrière managed to escape and later wrote a best-selling book about his experiences called "Papillon".
In 1853, gold was discovered in the interior, precipitating border disputes with Brazil and Suriname (these were later settled in 1891, 1899 and 1915, although a small region of the border with Suriname remains in dispute). The Republic of Independent Guyana, in French "La République de la Guyane indépendante" and commonly referred to by the name of the capital "Counani", was created in the area which was disputed by France (as part of French Guiana) and Brazil in the late nineteenth century.
The territory of Inini, consisting of most of the interior of French Guiana, was created in 1930. It was abolished in 1946.
During World War II the local government declared its allegiance to the Vichy government, despite widespread support for Charles de Gaulle. This government was removed on 22 March 1943.
On January 12, 1943 a Jewish family of seven was deported on the SS "Cap Arcona" headed toward Italy. The family ended up at Auschwitz II concentration camp. One member of the family died in crematorium 2.
French Guiana became an overseas "département" of France on 19 March 1946.
The infamous penal colonies, including Devil's Island, were gradually phased out and then formally closed in 1951. At first, only those freed prisoners who could raise the fare for their return passage to France were able to go home, so French Guiana was haunted after the official closing of the prisons by numerous freed convicts leading an aimless existence in the colony.
Visitors to the site in December 1954 reported being deeply shocked by the conditions and the constant screams from the cell-block still in use for convicts who had gone insane and which had only tiny ventilation slots at the tops of the walls under the roof. Food was pushed in and bodies removed once a day.
In 1964 Kourou was chosen as a launch site for rockets, largely because of its favourable location near the equator. The Guiana Space Centre was built and became operational in 1968. This has provided some local employment, and the mainly foreign technicians, together with the hundreds of troops stationed in the region to prevent sabotage, bring a little income to the local economy.
The 1970s saw the settlement of Hmong refugees from Laos in the department, primarily in the towns of Javouhey and Cacao. The Green Plan ("Le Plan Vert") of 1976 aimed to improve production, though it had only limited success. A movement for increased autonomy from France gained momentum in the 1970s and 1980s, along with the increasing success of the Parti Socialiste Guyanais.
Protests by those calling for more autonomy from France have become increasingly vocal. Protests in 1996, 1997 and 2000 all ended in violence. While many Guianese wish to see more autonomy, support for complete independence is low.
In a 2010 referendum, French Guianans voted against autonomy.
On March 20, 2017, French Guianans began going on strike and demonstrating for more resources and infrastructure. March 28, 2017 saw the largest demonstration ever held in French Guiana. | https://en.wikipedia.org/wiki?curid=10761 |
Economy of French Guiana
The economy of French Guiana is tied closely to that of mainland France through subsidies and imports. Besides the French space center at Kourou, fishing and forestry are the most important economic activities in French Guiana. The large reserves of tropical hardwoods, not fully exploited, support an expanding sawmill industry which provides saw logs for export. Cultivation of crops is limited to the coastal area, where the population is largely concentrated; rice and manioc are the major crops. French Guiana is heavily dependent on imports of food and energy. Unemployment is a serious problem, particularly among younger workers.
Budget:
"revenues:"
$135.5 million
"expenditures:"
$135.5 million, including capital expenditures of $105 million (1996)
Electricity - production:
465.2 GWh (2003)
Electricity - production by source:
"fossil fuel:"
100%
"hydro:"
0%
"nuclear:"
0%
"other:"
0% (1998)
Electricity - consumption:
432.6 GWh (2003)
Electricity - exports:
0 kWh (2003)
Electricity - imports:
0 kWh (2003)
Agriculture - products:
rice, manioc (tapioca), sugar, cocoa, vegetables, bananas; cattle, pigs, poultry
Currency:
Euro
Fiscal year:
calendar year
" The economic accounts of Guyana in 2006: first results " (PDF) . Retrieved 2008-01-14 . | https://en.wikipedia.org/wiki?curid=10765 |
Telecommunications in French Guiana
Telephones - main lines in use:
47,000 (1995)
Telephones - mobile cellular:
NA
Telephone system:
"domestic:"
fair open wire and microwave radio relay system
"international:"
satellite earth station - 1 Intelsat (Atlantic Ocean)
Radio broadcast stations:
AM 2, FM 14 (including 6 repeaters), shortwave 6 (including 5 repeaters) (1998)
Radios:
104,000 (1997)
Television broadcast stations:
3 (plus eight low-power repeaters) (1997)
Televisions:
30,000 (1997)
Internet Service Providers (ISPs):
NA
Country code (Top-level domain): GF | https://en.wikipedia.org/wiki?curid=10766 |
Transport in French Guiana
There are four types of public transport in French Guiana.
Buses of the Joint Association of Public Transport (SMTC) serve only the municipality of Cayenne. There are seven lines; some lines run 3 or 4 buses. The normal ticket price was 1.10 euros in 2011. Note that the SMTC is a public body, and the lines are restricted. SMTC buses now have air conditioning and more comfortable seats. These buses are also used to transport students from the various schools, and during the school year pregnant women and senior citizens must manage to find a seat, because the buses are always full, especially on market days. Buses stop only at bus stops (signs or shelters). The wait time is about 15 to 40 minutes depending on the line.
Line No. 1 serves residential areas such as CHATENAY, HORTH, GRANT, COULEE D'OR, JARDIN DE ZEPHIR, BOURDA, COLIBRI and ZEPHIR, and major traffic generators in Greater Cayenne (the city center, the administrative district and schools), with a bus scheduled every 30 minutes Monday to Saturday and every hour on Saturday afternoon. Service span (2 buses max): starts 5:45, ends 20:14.
Line No. 2 serves residential areas such as RESIDENCE UNIVERSITAIRE, STANILAS, PASTEUR and RESIDENCE DE BADUEL, and major traffic generators in Greater Cayenne (downtown and schools), with a bus scheduled every 35 minutes Monday to Saturday and every hour on Saturday afternoon. Service span (2 buses max): starts 6:15, ends 20:01.
Line No. 3 serves residential areas such as MANGUIER, THEMIRE, MANGO, BONHOMME, NOVA PARC, VENDÔME, CHEMIN TARZAN and MONT-LUCAS, and major traffic generators in Greater Cayenne (downtown, the administrative district and schools), with a bus scheduled every 22 minutes Monday to Saturday and every hour on Saturday afternoon. Service span (4 buses max): starts 5:45, ends 19:56.
Line No. 4 serves residential areas such as RENOVATION URBAINE, EAU LISETTE, URANUS, ROSERAIE, MORTIN and ZENITH (Matoury), and major traffic generators in Greater Cayenne (downtown, the industrial park and schools), with a bus scheduled every 40 minutes Monday to Saturday and every hour on Saturday afternoon. Service span (2 buses max): starts 5:50, ends 20:13.
Line No. 5 serves residential areas such as LES ALIZES, ANATOLE, BRUTUS, CESAIRE, EAU LISETTE, BONHOMME and CABASSOU, and major traffic generators in the town of Cayenne (the city center, the administrative district and schools), with a bus scheduled every 30 minutes Monday to Saturday and every hour on Saturday afternoon. Service span (2 buses max): starts 6:00, ends 20:04.
Line No. 1 PC (a small ring route) serves a developing residential area (PETIT LUCAS, SAINT MARTIN, BOKRIS), downtown, the GALMOT and COLLERY industrial zones, the CAF offices and other activity areas (Match MONTJOLY 2, C.C. KATOURY), and connects with all lines of the network along the main corridors, with a bus scheduled every hour Monday to Saturday. Service span (1 bus): starts 5:55, ends 19:20.
Line No. 2 PC (inner ring 2) serves the same areas as Line No. 1 PC but in the opposite direction, with a bus scheduled every hour Monday to Saturday. Service span (1 bus): starts 6:10, ends 20:10.
Air links, provided by Air Guyane, connect Cayenne daily to Maripasoula and Saül. Prices range from €100 to €120 return.
Shared taxis (called Taxi Co) offer travel in nine-seat minibuses. Their schedules are irregular, as departures happen only when the minibus is full or nearly so. They connect the communes of Saint-Laurent du Maroni, Mana, Organabo, Iracoubo, Sinnamary, Kourou, Macouria, Cayenne, Rémire-Montjoly (Degrad des Cannes), Matoury (Balata and St. Rose of Lima), Cocoa or Stoupan (commune of Roura), Régina and Saint-Georges-de-l'Oyapock. All shared taxis systematically skip Degrad des Cannes and St. Rose of Lima, so it is best to ask the driver before boarding. Montsinéry-Tonnegrande is the only coastal town not served by shared taxis. Some example one-way prices: Cayenne-Kourou €10, St. Laurent-Kourou €25, Cayenne-Saint-Georges-de-l'Oyapock €40.
Since early 2010, an agreement has existed between the General Council, responsible for organizing transport between the towns, and some Taxi Co carriers. The new public service became known as TIG (Long Distance Transport of Guyana). The TIG coexists with Taxi Co: the vehicle types are the same (a "TIG" sticker distinguishes the former from the latter), but the agreement requires TIG services to run on fixed schedules (vehicles depart whether full or not) along defined routes (no pickup or drop-off on demand, no shortcuts). Prices are generally lower than those charged by Taxi Co (for example, Cayenne to Saint-Georges-de-l'Oyapock costs €31 instead of €40).
Canoe-taxis operate on the Maroni (the Surinamese border) between Saint-Laurent du Maroni and Apatou, as well as on the Oyapock (the Brazilian border) between Saint-Georges-de-l'Oyapock and the Brazilian city of Oiapoque. Apatou is about three hours away on the Maroni; departures are daily at 11:00 from the Dégrad des Glaces landing in Saint-Laurent du Maroni, with the return trip at 7:00 am. The price is €11 one way. To cross the Maroni (to Albina), expect to pay €5 each way. Oiapoque is located 15 minutes from Saint-Georges; departures are on demand, usually once enough passengers have gathered. Unlike on the Maroni, there is no curfew on the Oyapock; canoes are available at night, though fewer run and prices are somewhat higher (in 2012, the crossing for one person cost €5 / 10 reais by day and €7 / 15 reais by night).
A short railway is used within the Guiana Space Centre for transporting spacecraft inside the base to the launch pad; it is not for passenger use. The railway is double-tracked and used by unpowered rail cars (tanker cars, flatcars and launch-table transporter platforms fitted with bogies), which are towed by rubber-tyred vehicles fitted with railway wheels or bogies to ride along the tracks.
From the 1880s until sometime after 1926, a narrow-gauge steam railway served the gold mines at Saint-Elie; two other lines were partially built and never used.
Prison railways were built in the 1890s, but the lines were abandoned after the prisons closed and had disappeared by sometime after 1946.
There are no other railways in French Guiana and none have existed for revenue passenger service, and there are no connections to neighbouring countries.
"total:"
1,817 km
"paved:"
727 km
"unpaved:"
1,090 km (1995 est.)
Waterways:
460 km, navigable by small oceangoing vessels and river and coastal steamers; 3,300 km navigable by native craft
Ports and harbors:
Cayenne, Degrad des Cannes, Saint-Laurent du Maroni
Merchant marine:
none (1999 est.)
A 1999 estimate counted 11 airports in the department. The main airport of French Guiana is Cayenne – Félix Eboué Airport.
"total:"
4
"over 3,047 m:"
1
"914 to 1,523 m:"
2
"under 914 m:"
1 (1999 est.)
"total:"
7
"914 to 1,523 m:"
2
"under 914 m:"
5 (1999 est.) | https://en.wikipedia.org/wiki?curid=10767 |
François Truffaut
François Roland Truffaut ( , ; ; 6 February 1932 – 21 October 1984) was a French film director, screenwriter, producer, actor, and film critic. He is widely regarded as one of the founders of the French New Wave. In a film career lasting over a quarter of a century, he remains an icon of the French film industry, having worked on over 25 films. Truffaut's film "The 400 Blows" came to be a defining film of the French New Wave movement, and was followed by four sequels, "Antoine et Colette", "Stolen Kisses", "Bed and Board", and "Love on the Run", between 1958 and 1979.
Truffaut's 1973 film "Day for Night" earned him critical acclaim and several accolades, including the BAFTA Award for Best Film and the Academy Award for Best Foreign Language Film. His other notable films include "Shoot the Piano Player" (1960), "Jules and Jim" (1961), "The Soft Skin" (1964), "The Wild Child" (1970), "Two English Girls" (1971), "The Last Metro" (1980), and "The Woman Next Door" (1981).
Truffaut was born in Paris on 6 February 1932. His mother was Janine de Montferrand. His mother's future husband, Roland Truffaut, accepted him as an adopted son and gave him his surname. He was passed around to live with various nannies and his grandmother for a number of years. It was his grandmother who instilled in him her love of books and music. He lived with his grandmother until her death, when Truffaut was eight years old. It was only after his grandmother's death that he lived with his parents for the first time. The identity of Truffaut's biological father was unknown, though a private detective agency in 1968 revealed that their inquiry into the matter led to a Roland Levy, a Jewish dentist from Bayonne. Truffaut's mother's family disputed the findings but Truffaut himself believed and embraced them.
Truffaut would often stay with friends and try to be out of the house as much as possible. He knew Robert Lachenay from childhood, and they would be lifelong best friends. Lachenay was the inspiration for the character René Bigey in "The 400 Blows" and would work as an assistant on some of Truffaut's films. It was the cinema that offered him the greatest escape from an unsatisfying home life. He was eight years old when he saw his first movie, Abel Gance's "Paradis Perdu" ("Paradise Lost") from 1939. It was there that his obsession began. He frequently played truant from school and would sneak into theaters because he didn't have enough money for admission. After being expelled from several schools, at the age of fourteen he decided to become self-taught. Two of his academic goals were to watch three movies a day and read three books a week.
Truffaut frequented Henri Langlois' Cinémathèque Française where he was exposed to countless foreign films from around the world. It was here that he became familiar with American cinema and directors such as John Ford, Howard Hawks and Nicholas Ray, as well as those of British director Alfred Hitchcock.
After starting his own film club in 1948, Truffaut met André Bazin, who would have great effect on his professional and personal life. Bazin was a critic and the head of another film society at the time. He became a personal friend of Truffaut's and helped him out of various financial and criminal situations during his formative years.
Truffaut joined the French Army in 1950, aged 18, but spent the next two years trying to escape. Truffaut was arrested for attempting to desert the army and incarcerated in military prison. Bazin used his various political contacts to get Truffaut released and set him up with a job at his newly formed film magazine "Cahiers du cinéma".
Over the next few years, Truffaut became a critic (and later editor) at "Cahiers", where he became notorious for his brutal, unforgiving reviews. He was called "The Gravedigger of French Cinema" and was the only French critic not invited to the Cannes Film Festival in 1958. He supported Bazin in the development of one of the most influential theories of cinema itself, the auteur theory.
In 1954, Truffaut wrote an article in "Cahiers du cinéma" called "Une Certaine Tendance du Cinéma Français" ("A Certain Trend of French Cinema"), in which he attacked the current state of French films, lambasting certain screenwriters and producers, and listing eight directors he considered incapable of devising the kinds of "vile" and "grotesque" characters and storylines that he declared were characteristic of the mainstream French film industry: Jean Renoir, Robert Bresson, Jean Cocteau, Jacques Becker, Abel Gance, Max Ophuls, Jacques Tati and Roger Leenhardt. The article caused a storm of controversy, and also landed Truffaut an offer to write for the nationally circulated, more widely read cultural weekly "Arts-Lettres-Spectacles". Truffaut would pen more than 500 film articles for that publication over the next four years.
Truffaut later devised the auteur theory, which stated that the director was the "author" of his work; that great directors such as Renoir or Hitchcock have distinct styles and themes that permeate all of their films. Although his theory was not widely accepted then, it gained some support in the 1960s from American critic Andrew Sarris. In 1967, Truffaut published his book-length interview of Hitchcock, "Hitchcock/Truffaut" (New York: Simon and Schuster).
After having been a critic, Truffaut decided to make films of his own. He started out with the short film "Une Visite" in 1955 and followed that up with "Les Mistons" in 1957.
After seeing Orson Welles' "Touch of Evil" at the Expo 58, he was inspired to make his feature film directorial debut with "The 400 Blows", which was released in 1959 to much critical and commercial acclaim. Truffaut received a Best Director award from the Cannes Film Festival, the same festival that had banned him only one year earlier.
The film follows the character of Antoine Doinel through his perilous misadventures in school, an unhappy home life and later reform school. The film is highly autobiographical. Both Truffaut and Doinel were only children of loveless marriages; they both committed petty crimes of theft and truancy from the military. Truffaut cast Jean-Pierre Léaud as Antoine Doinel. Léaud was seen as an ordinary boy of 14 who auditioned for the role after seeing a flyer, but interviews filmed after the film's release (one is included on the Criterion DVD of the film) reveal Léaud's natural sophistication and an instinctive understanding of acting for the camera. Léaud and Truffaut collaborated on several films over the years. Their most noteworthy collaboration was the continuation of the Antoine Doinel character in a series of films called "The Antoine Doinel Cycle".
The primary focus of "The 400 Blows" is the life of a young character named Antoine Doinel, whom the film follows through his troubled adolescence, caught between an unstable parental relationship and an isolated youth. The film draws on real events from the director's own life. From birth, Truffaut was thrown into an undesired situation: born out of wedlock, his birth had to remain a secret because of the social stigma associated with illegitimacy, and he was registered as "a child born to an unknown father" in the hospital records. He was looked after by a nurse for an extended period. His mother eventually married, and her husband Roland gave his surname, Truffaut, to François.
Although he was legally accepted as a legitimate child, his parents did not accept him. The Truffauts had another child who died shortly after birth; this experience saddened them greatly, and as a result they despised François because of the memory of regret he represented (Knopf 4). He was an outcast from his earliest years, dismissed as an unwanted child, and was sent to live with his grandparents. It was not until his grandmother's death that his parents took him in, much to the dismay of his own mother. His mother treated him harshly, but he found comfort in his father's laughter and overall spirit; the relationship with Roland was more comforting than the one with his mother. François had a very depressing childhood after moving in with his parents. They would leave him alone whenever they went on vacation; he even recalled being alone during Christmas. Being left alone forced him into a sense of independence, and he would often do various tasks around the house to improve it, such as painting or changing electrical outlets. These well-meant gestures often ended in catastrophe and scoldings from his mother, while his father would mostly laugh them off.
"The 400 Blows" marked the beginning of the French New Wave movement, which gave directors such as Jean-Luc Godard, Claude Chabrol and Jacques Rivette a wider audience. The New Wave dealt with a self-conscious rejection of traditional cinema structure. This was a topic on which Truffaut had been writing for years.
Following the success of "The 400 Blows", Truffaut used disjunctive editing and seemingly random voice-overs in his next film, "Shoot the Piano Player" (1960), starring Charles Aznavour. Truffaut stated that in the middle of filming he realized he hated gangsters; since gangsters were a main part of the story, he played up the comical aspects of the characters and shaped the film more to his liking.
Even though "Shoot the Piano Player" was much appreciated by critics, it performed poorly at the box office. While the film focused on two of the French New Wave's favorite elements, American film noir and themselves, Truffaut never again experimented as heavily.
In 1962, Truffaut directed his third movie, "Jules and Jim", a romantic drama starring Jeanne Moreau. The film was very popular and highly influential.
In 1963, Truffaut was approached to direct an American film, "Bonnie and Clyde", with a treatment written by "Esquire" journalists David Newman and Robert Benton intended to introduce the French New Wave to Hollywood. Although he was interested enough to help in script development, Truffaut ultimately declined, but not before interesting Jean-Luc Godard and the American actor and would-be producer Warren Beatty, who proceeded with the film under director Arthur Penn.
His fourth film as director was "The Soft Skin" (1964), which was not well received on its initial release.
Truffaut's first non-French film was a 1966 adaptation of Ray Bradbury's classic science fiction novel "Fahrenheit 451", showcasing Truffaut's love of books. His only English-language film, made on location in England, it was a great challenge for Truffaut, who barely spoke English himself. It was also his first film shot in color, by cinematographer Nicolas Roeg. The larger-scale production was difficult for Truffaut, who had worked only with small crews and budgets. The shoot was further strained by a conflict with lead actor Oskar Werner, who was unhappy with his character and stormed off set, leaving Truffaut to shoot scenes using a body double filmed from behind. The film was a commercial failure, and Truffaut never worked outside France again. The film's cult standing has steadily grown, although some critics remain mixed on it as an adaptation; a 2014 consideration of the film by Charles Silver praises it.
Truffaut worked on projects with varied subjects. "The Bride Wore Black" (1968), a brutal tale of revenge, is a stylish homage to the films of Alfred Hitchcock (once again starring Jeanne Moreau).
"Stolen Kisses" (1968) was a continuation of the Antoine Doinel Cycle starring Claude Jade as Antoine's fiancée and later wife Christine Darbon. It was a big hit on the international art circuit. A short time later Claude Jade made her Hollywood debut in Hitchcock's "Topaz".
"Mississippi Mermaid" (1969), with Catherine Deneuve and Jean-Paul Belmondo is an identity-bending romantic thriller.
"The Wild Child" (1970) included Truffaut's acting debut in the lead role of 18th century physician Jean Marc Gaspard Itard.
"Bed and Board" (1970) was another Antoine Doinel film, also with Claude Jade from "Stolen kisses" who is now Léaud's on-screen-wife.
"Two English Girls" (1971) is the female reflection of the same love story as "Jules et Jim". It is based on a story written by Henri-Pierre Roché, who also wrote "Jules and Jim". It is about a man who falls equally in love with two sisters, and their love affair over a period of years.
"Such a Gorgeous Kid Like Me" (1972) was a screwball comedy that was not well received.
"Day for Night" won Truffaut a Best Foreign Film Oscar in 1973. The film is probably his most reflective work. It is the story of a film crew trying to finish their film while dealing with all of the personal and professional problems that accompany making a movie. Truffaut plays the director of the fictional film being made. This film features scenes shown in his previous films. It is considered to be his best film since his earliest work. "Time" magazine placed it on their list of 100 Best Films of the Century (along with "The 400 Blows").
In 1975, Truffaut gained further acclaim with "The Story of Adèle H."; Isabelle Adjani's performance in the title role earned a nomination for the Best Actress Oscar.
Truffaut's 1976 film "Small Change" gained a Golden Globe Nomination for Best Foreign Film.
"The Man Who Loved Women" (1977), a romantic drama, was a minor hit.
"The Green Room" (1978) starred Truffaut himself in the lead.
It was a box-office flop, so he made "Love on the Run" (1979), starring Jean-Pierre Léaud and Claude Jade, as the final film of the Doinel cycle.
One of Truffaut's final films gave him an international revival. In 1980, his film "The Last Metro" garnered twelve César Award nominations with ten wins, including Best Director.
Truffaut's last film was shot in black and white, giving his career a sense of having bookends. "Confidentially Yours" is Truffaut's tribute to his favorite director, Alfred Hitchcock. It deals with numerous Hitchcockian themes, such as private guilt versus public innocence, a woman investigating a murder, and anonymous locations.
Some of Truffaut's films feature the character Antoine Doinel, played by Jean-Pierre Léaud, who began his career in "The 400 Blows" at the age of fourteen and continued as Truffaut's favorite actor and on-screen "double". The series continued with "Antoine and Colette" (a short film in the anthology "Love at Twenty"), "Stolen Kisses" (in which he falls in love with Christine Darbon, played by Claude Jade), "Bed and Board", about the married couple Antoine and Christine, and finally "Love on the Run", in which the couple go through a divorce.
In the last films, Léaud's girlfriend and later wife, Christine Darbon, was played by Truffaut's favorite actress, Claude Jade. During the filming of "Stolen Kisses", Truffaut himself fell in love with, and was briefly engaged to, Claude Jade.
A keen reader, Truffaut adapted many literary works, including two novels by Henri-Pierre Roché, Ray Bradbury's "Fahrenheit 451", Henry James' "The Altar of the Dead", filmed as "The Green Room", and several American detective novels.
Truffaut's other films were from original screenplays, often co-written by the screenwriters Suzanne Schiffman or Jean Gruault. They featured diverse subjects, the sombre "The Story of Adèle H.", inspired by the life of the daughter of Victor Hugo, with Isabelle Adjani; "Day for Night", shot at the Victorine Studios describing the ups and downs of film-making; and "The Last Metro", set during the German occupation of France, a film rewarded by ten César Awards.
A lifelong cinephile, Truffaut once (according to a 1993 documentary film) threw a hitchhiker he had picked up out of his car after learning that the hitchhiker did not like films.
Truffaut is admired among other filmmakers and several tributes to his work have appeared in other films such as "Almost Famous", "Face" and "The Diving Bell and the Butterfly", as well as novelist Haruki Murakami's "Kafka on the Shore".
He also acted, appearing in Steven Spielberg's 1977 film "Close Encounters of the Third Kind", where he played scientist Claude Lacombe.
Truffaut expressed his admiration for filmmakers such as Luis Buñuel, Ingmar Bergman, Robert Bresson, Roberto Rossellini, and Alfred Hitchcock. Truffaut wrote "Hitchcock/Truffaut", a book about Hitchcock, based on a lengthy series of interviews.
On Jean Renoir, he said: "I think Renoir is the only filmmaker who's practically infallible, who has never made a mistake on film. And I think if he never made mistakes, it's because he always found solutions based on simplicity—human solutions. He's one film director who never pretended. He never tried to have a style, and if you know his work—which is very comprehensive, since he dealt with all sorts of subjects—when you get stuck, especially as a young filmmaker, you can think of how Renoir would have handled the situation, and you generally find a solution".
In 1973, Jean-Luc Godard accused Truffaut of making a movie that was a "lie" ("Day For Night"), and Truffaut replied with a 20-page letter in which he accused Godard of being a radical-chic hypocrite, a man who believed everyone to be "equal" in theory only. "The Ursula Andress of militancy—like Brando—a piece of shit on a pedestal." The two never spoke or saw each other again. However, as noted by Serge Toubiana and Antoine de Baecque in their biography of Truffaut, Godard tried to reconcile their friendship later on, and after Truffaut's death wrote the introduction to a collection of his letters and a lengthy tribute in his video-essay film "Histoire(s) du cinéma".
Truffaut was married to Madeleine Morgenstern from 1957 to 1965, and they had two daughters, Laura (born 1959) and Eva (born 1961). Madeleine was the daughter of Ignace Morgenstern, managing director of one of France's largest film distribution companies, and was largely responsible for securing funding for Truffaut's first films.
Truffaut was an inveterate womanizer and had affairs with many of his leading ladies, including Marie-France Pisier ("Antoine and Colette", "Love on the Run"), Jeanne Moreau ("Jules and Jim", "The Bride Wore Black"), Françoise Dorléac ("The Soft Skin"), Julie Christie ("Fahrenheit 451"), Catherine Deneuve ("Mississippi Mermaid", "The Last Metro"), and Jacqueline Bisset ("Day for Night"). Truffaut also fell for Isabelle Adjani during the filming of "The Story of Adele H." but his advances were rebuffed.
In 1968 Truffaut was engaged to actress Claude Jade ("Stolen Kisses", "Bed and Board", "Love on the Run"); he and Fanny Ardant ("The Woman Next Door", "Confidentially Yours") lived together from 1981 to 1984 and had a daughter, Joséphine Truffaut (born 28 September 1983).
Truffaut was an atheist, although he had great respect for the Catholic Church and even requested a mass for his funeral.
In July 1983, while renting France Gall and Michel Berger's house outside Honfleur, Normandy (working on Philippe Labro's film "Rive droite, rive gauche"), Truffaut had his first stroke and was diagnosed with a brain tumor. He had been expected to attend the premiere of his friend Miloš Forman's "Amadeus" when he died on 21 October 1984, aged 52, at the American Hospital in Neuilly-sur-Seine, France.
At the time of his death, he had numerous films in preparation. He had intended to make 30 films and then retire to write books for the remainder of his life. He was five films short of this personal aim. He is buried in Montmartre Cemetery. | https://en.wikipedia.org/wiki?curid=10770 |
Fair use
Fair use is a doctrine in the law of the United States that permits limited use of copyrighted material without having to first acquire permission from the copyright holder. Fair use is one of the limitations to copyright intended to balance the interests of copyright holders with the public interest in the wider distribution and use of creative works by allowing as a defense to copyright infringement claims certain limited uses that might otherwise be considered infringement. Like "fair dealing" rights that exist in most countries with a British legal history, the fair use right is a general exception that applies to all different kinds of uses with all types of works and turns on a flexible proportionality test that examines the purpose of the use, the amount used, and the impact on the market of the original work. The innovation of the fair use right in US law is that it applies to a list of purposes that is preceded by the opening clause "such as." This has allowed courts to apply it to technologies never envisioned in the original statute including Internet search, the VCR, and the reverse engineering of software.
The 1710 Statute of Anne, an act of the Parliament of Great Britain, created copyright law to replace a system of private ordering enforced by the Stationers' Company. The Statute of Anne did not provide for legal unauthorized use of material protected by copyright. In "Gyles v Wilcox", the Court of Chancery established the doctrine of "fair abridgement", which permitted unauthorized abridgement of copyrighted works under certain circumstances. Over time, this doctrine evolved into the modern concepts of fair use and fair dealing. Fair use was a common-law doctrine in the U.S. until it was incorporated into the Copyright Act of 1976, codified at 17 U.S.C. § 107.
The term "fair use" originated in the United States. Although related, the limitations and exceptions to copyright for teaching and library archiving in the U.S. are located in a different section of the statute. A similar-sounding principle, fair dealing, exists in some other common law jurisdictions but in fact it is more similar in principle to the enumerated exceptions found under civil law systems. Civil law jurisdictions have other limitations and exceptions to copyright.
In response to perceived over-expansion of copyrights, several electronic civil liberties and free expression organizations began in the 1990s to add fair use cases to their dockets and concerns. These include the Electronic Frontier Foundation ("EFF"), the American Civil Liberties Union, the National Coalition Against Censorship, the American Library Association, numerous clinical programs at law schools, and others. The "Chilling Effects" archive was established in 2002 as a coalition of several law school clinics and the EFF to document the use of cease and desist letters. In 2006 Stanford University began an initiative called "The Fair Use Project" (FUP) to help artists, particularly filmmakers, fight lawsuits brought against them by large corporations.
Examples of fair use in United States copyright law include commentary, search engines, criticism, parody, news reporting, research, and scholarship. Fair use provides for the legal, unlicensed citation or incorporation of copyrighted material in another author's work under a four-factor test.
The U.S. Supreme Court has traditionally characterized fair use as an affirmative defense, but in "Lenz v. Universal Music Corp." (2015) (the "dancing baby" case), the U.S. Court of Appeals for the Ninth Circuit concluded that fair use was not merely a defense to an infringement claim, but was an expressly authorized right, and an exception to the exclusive rights granted to the author of a creative work by copyright law: "Fair use is therefore distinct from affirmative defenses where a use infringes a copyright, but there is no liability due to a valid excuse, e.g., misuse of a copyright."
The four statutory factors are: (1) the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes; (2) the nature of the copyrighted work; (3) the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and (4) the effect of the use upon the potential market for or value of the copyrighted work. These factors derive from the opinion of Joseph Story in "Folsom v. Marsh", in which the defendant had copied 353 pages from the plaintiff's 12-volume biography of George Washington in order to produce a separate two-volume work of his own. The court rejected the defendant's fair use defense, explaining that a judge must "look to the nature and objects of the selections made, the quantity and value of the materials used, and the degree in which the use may prejudice the sale, or diminish the profits, or supersede the objects, of the original work".
The statutory fair use factors quoted above come from the Copyright Act of 1976, codified at 17 U.S.C. § 107. They were intended by Congress to restate, but not replace, the prior judge-made law. As Judge Pierre N. Leval has written, the statute does not "define or explain [fair use's] contours or objectives." While it "leav[es] open the possibility that other factors may bear on the question, the statute identifies none." That is, courts are entitled to consider other factors in addition to the four statutory factors.
The first factor is "the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes." To justify the use as fair, one must demonstrate how it either advances knowledge or the progress of the arts through the addition of something new.
In the 1841 copyright case "Folsom v. Marsh", Justice Joseph Story wrote that "a reviewer may fairly cite largely from the original work, if his design be really and truly to use the passages for the purposes of fair and reasonable criticism", whereas copying the most important parts "with a view, not to criticise, but to supersede the use of the original work" would be "deemed in law a piracy".
A key consideration in later fair use cases is the extent to which the use is "transformative". In the 1994 decision "Campbell v. Acuff-Rose Music Inc", the U.S. Supreme Court held that when the purpose of the use is transformative, this makes the first factor more likely to favor fair use. Before the "Campbell" decision, federal Judge Pierre Leval argued that transformativeness is central to the fair use analysis in his 1990 article, Toward a Fair Use Standard. "Blanch v. Koons" is another example of a fair use case that focused on transformativeness. In 2006, Jeff Koons used a photograph taken by commercial photographer Andrea Blanch in a collage painting. Koons appropriated a central portion of an advertisement she had been commissioned to shoot for a magazine. Koons prevailed in part because his use was found transformative under the first fair use factor.
The "Campbell" case also addressed the subfactor mentioned in the quotation above, "whether such use is of a commercial nature or is for nonprofit educational purposes." In an earlier case, "Sony Corp. of America v. Universal City Studios, Inc.", the Supreme Court had stated that "every commercial use of copyrighted material is presumptively . . . unfair." In "Campbell", the court clarified that this is not a "hard evidentiary presumption" and that even the tendency that commercial purpose will "weigh against a finding of fair use . . . will vary with the context." The "Campbell" court held that hip-hop group 2 Live Crew's parody of the song "Oh, Pretty Woman" was fair use, even though the parody was sold for profit. Thus, having a commercial purpose does not preclude a use from being found fair, even though it makes it less likely.
Likewise, the noncommercial purpose of a use makes it more likely to be found a fair use, but it does not make it a fair use automatically. For instance, in "L.A. Times v. Free Republic", the court found that the noncommercial use of "Los Angeles Times" content by the Free Republic website was not fair use, since it allowed the public to obtain material at no cost that they would otherwise pay for. Judge Richard Story similarly ruled in "Code Revision Commission and State of Georgia v. Public.Resource.Org, Inc." that although Public.Resource.Org is a non-profit and did not sell the work, it profited from its unauthorized publication of the Official Code of Georgia Annotated through "the attention, recognition, and contributions" it received in association with the work.
Another factor is whether the use fulfills any of the preamble purposes, also mentioned in the legislation above, as these have been interpreted as "illustrative" of transformative use.
Given the dominance of the rhetoric of the "transformative" in recent fair use determinations, it is arguable that the first factor, and transformativeness in general, have become the most important parts of fair use.
Although the Supreme Court has ruled that the availability of copyright protection should not depend on the artistic quality or merit of a work, fair use analyses consider certain aspects of the work to be relevant, such as whether it is fictional or non-fictional.
To prevent the private ownership of work that rightfully belongs in the public domain, facts and ideas are not protected by copyright—only their particular expression or fixation merits such protection. On the other hand, the social usefulness of freely available information can weigh against the appropriateness of copyright for certain fixations. The Zapruder film of the assassination of President Kennedy, for example, was purchased and copyrighted by "Time" magazine. Yet its copyright was not upheld, in the name of the public interest, when "Time" tried to enjoin the reproduction of stills from the film in a history book on the subject in "Time Inc. v. Bernard Geis Associates".
In the decisions of the Second Circuit in "Salinger v. Random House" and in "New Era Publications Int'l v. Henry Holt & Co", the aspect of whether the copied work has been previously published was considered crucial, assuming the right of the original author to control the circumstances of the publication of his work or preference not to publish at all. However, Judge Pierre N. Leval views this importation of certain aspects of France's "droit moral d'artiste" (moral rights of the artist) into American copyright law as "bizarre and contradictory" because it sometimes grants greater protection to works that were created for private purposes that have little to do with the public goals of copyright law, than to those works that copyright was initially conceived to protect. This is not to claim that unpublished works, or, more specifically, works not intended for publication, do not deserve legal protection, but that any such protection should come from laws about privacy, rather than laws about copyright. The statutory fair use provision was amended in response to these concerns by adding a final sentence: "The fact that a work is unpublished shall not itself bar a finding of fair use if such finding is made upon consideration of all the above factors."
The third factor assesses the amount and substantiality of the copyrighted work that has been used. In general, the less that is used in relation to the whole, the more likely the use will be considered fair.
Using most or all of a work does not bar a finding of fair use. It simply makes the third factor less favorable to the defendant. For instance, in "Sony Corp. of America v. Universal City Studios, Inc." copying entire television programs for private viewing was upheld as fair use, at least when the copying is done for the purposes of time-shifting. In "Kelly v. Arriba Soft Corporation", the Ninth Circuit held that copying an entire photo to use as a thumbnail in online search results did not even weigh against fair use, "if the secondary user only copies as much as is necessary for his or her intended use".
However, even the use of a small percentage of a work can make the third factor unfavorable to the defendant, because the "substantiality" of the portion used is considered in addition to the amount used. For instance, in "Harper & Row v. Nation Enterprises", the U.S. Supreme Court held that a news article's quotation of fewer than 400 words from President Ford's 200,000-word memoir was sufficient to make the third fair use factor weigh against the defendants, because the portion taken was the "heart of the work". This use was ultimately found not to be fair.
The fourth factor measures the effect that the allegedly infringing use has had on the copyright owner's ability to exploit his original work. The court not only investigates whether the defendant's specific use of the work has significantly harmed the copyright owner's market, but also whether such uses in general, if widespread, would harm the potential market of the original. The burden of proof here rests on the copyright owner, who must demonstrate the impact of the infringement on commercial use of the work.
For example, in "Sony Corp. v. Universal City Studios", the copyright owner, Universal, failed to provide any empirical evidence that the use of Betamax had either reduced their viewership or negatively impacted their business. In "Harper & Row," the case regarding President Ford's memoirs, the Supreme Court labeled the fourth factor "the single most important element of fair use" and it has enjoyed some level of primacy in fair use analyses ever since. Yet the Supreme Court's more recent announcement in "Campbell v. Acuff-Rose Music Inc" that "all [four factors] are to be explored, and the results weighed together, in light of the purposes of copyright" has helped modulate this emphasis in interpretation.
In evaluating the fourth factor, courts often consider two kinds of harm to the potential market for the original work: first, whether the use in question acts as a direct market substitute for the original; and second, whether the use would harm potential licensing markets for the work, beyond direct substitution.
Courts recognize that certain kinds of market harm do not negate fair use, such as when a parody or negative review impairs the market of the original work. Copyright considerations may not shield a work against adverse criticism.
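Because "Campbell" requires that the factors be "weighed together" rather than scored mechanically, a fair use analysis amounts to a structured checklist of qualitative judgments. The following minimal Python sketch is offered purely as an illustration; the class and field names are hypothetical, and it records notes rather than computing any legal result:

```python
from dataclasses import dataclass

@dataclass
class FairUseFactors:
    """Illustrative notes on the four statutory factors (17 U.S.C. § 107).

    Hypothetical field names; courts weigh the factors together,
    so this records qualitative notes rather than computing a score.
    """
    purpose_and_character: str      # factor 1: transformative? commercial?
    nature_of_work: str             # factor 2: creative or factual? published?
    amount_and_substantiality: str  # factor 3: how much taken? the "heart"?
    market_effect: str              # factor 4: substitution or licensing harm?

# Example: notes for a short quotation used in a book review.
analysis = FairUseFactors(
    purpose_and_character="criticism and comment; transformative purpose",
    nature_of_work="published, creative work (cuts slightly against)",
    amount_and_substantiality="one paragraph; not the heart of the work",
    market_effect="no market substitution for the original book",
)
for factor, note in vars(analysis).items():
    print(f"{factor}: {note}")
```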
As explained by Judge Leval, courts are permitted to include additional factors in their analysis.
One such factor is acknowledgement of the copyrighted source. Giving the name of the photographer or author may help, but it does not automatically make a use fair. While plagiarism and copyright infringement are related matters, they are not identical. Plagiarism (using someone's words, ideas, images, etc. without acknowledgment) is a matter of professional ethics, while copyright is a matter of law, and protects exact expression, "not" ideas. One can plagiarize even a work that is not protected by copyright, for example by passing off a line from Shakespeare as one's own. Conversely, attribution prevents accusations of plagiarism, but it does not prevent infringement of copyright. For example, reprinting a copyrighted book without permission, while citing the original author, would be copyright infringement but not plagiarism.
The U.S. Supreme Court described fair use as an affirmative defense in "Campbell v. Acuff-Rose Music, Inc." This means that in litigation on copyright infringement, the defendant bears the burden of raising and proving that the use was fair and not an infringement. Thus, fair use need not even be raised as a defense unless the plaintiff first shows (or the defendant concedes) a "prima facie" case of copyright infringement. If the work was not copyrightable, the term had expired, or the defendant's work borrowed only a small amount, for instance, then the plaintiff cannot make out a "prima facie" case of infringement, and the defendant need not even raise the fair use defense. In addition, fair use is only one of many limitations, exceptions, and defenses to copyright infringement. Thus, a "prima facie" case can be defeated without relying on fair use. For instance, the Audio Home Recording Act establishes that it is legal, using certain technologies, to make copies of audio recordings for non-commercial personal use.
Some copyright owners claim infringement even in circumstances where the fair use defense would likely succeed, in hopes that the user will refrain from the use rather than spending resources in their defense. Strategic lawsuit against public participation (SLAPP) cases that allege copyright infringement, patent infringement, defamation, or libel may come into conflict with the defendant's right to freedom of speech, and that possibility has prompted some jurisdictions to pass anti-SLAPP legislation that raises the plaintiff's burdens and risk.
Although fair use ostensibly permits certain uses without liability, many content creators and publishers try to avoid a potential court battle by seeking a legally unnecessary license from copyright owners for "any" use of non-public domain material, even in situations where a fair use defense would likely succeed. The simple reason is that the license terms negotiated with the copyright owner may be much less expensive than defending against a copyright suit, or having the mere possibility of a lawsuit threaten the publication of a work in which a publisher has invested significant resources.
Fair use rights take precedence over the author's interest. Thus the copyright holder cannot use a non-binding disclaimer, or notification, to revoke the right of fair use on works. However, binding agreements such as contracts or license agreements may take precedence over fair use rights.
The practical effect of the fair use doctrine is that a number of conventional uses of copyrighted works are not considered infringing. For instance, quoting from a copyrighted work in order to criticize or comment upon it, or teach students about it, is considered a fair use. Certain well-established uses cause few problems. A teacher who prints a few copies of a poem to illustrate a technique will fare well on all four of the above factors (except possibly on amount and substantiality), but some cases are not so clear. All the factors are considered and balanced in each case: a book reviewer who quotes a paragraph as an example of the author's style will probably fall under fair use even though they may sell their review commercially; but a non-profit educational website that reproduces whole articles from technical magazines will probably be found to infringe if the publisher can demonstrate that the website affects the market for the magazine, even though the website itself is non-commercial.
Fair use is decided on a case-by-case basis, on the entirety of circumstances. The same act done by different means or for a different purpose can gain or lose fair use status. Even repeating an identical act at a different time can make a difference due to changing social, technological, or other surrounding circumstances.
The case "Oracle America, Inc. v. Google, Inc." revolves around the use of application programming interfaces (APIs) used to define functionality of the Java programming language, created by Sun Microsystems and now owned by Oracle Corporation. Google used the APIs' definition and their structure, sequence and organization (SSO) in creating the Android operating system to support the mobile device market. Oracle had sued Google in 2010 over both patent and copyright violations, but after two rounds of litigation, the case was narrowed down to whether Google's use of the definition and SSO of Oracle's Java APIs (determined to be copyrightable) was within fair use. The Federal Circuit Court of Appeals ruled against Google, stating that while Google could defend its use in the nature of the copyrighted work, its use was not transformative, and more significantly, it commercially harmed Oracle, which was also seeking entry to the mobile market. The case, should this ruling hold, could have a significant impact on developing products for interoperability using APIs, such as with many open source projects.
In April 2006, the filmmakers of the "Loose Change" series were served with a lawsuit by Jules and Gédéon Naudet over the film's use of their footage, specifically footage of the firefighters discussing the collapse of the World Trade Center.
With the help of an intellectual property lawyer, the creators of Loose Change successfully argued that a majority of the footage used was for historical purposes and was significantly transformed in the context of the film. They agreed to remove a few shots that were used as B-roll and served no purpose to the greater discussion. The case was settled and a potential multimillion-dollar lawsuit was avoided.
"This Film Is Not Yet Rated" also relied on fair use to feature several clips from copyrighted Hollywood productions. The director had originally planned to license these clips from their studio owners but discovered that studio licensing agreements would have prohibited him from using this material to criticize the entertainment industry. This prompted him to invoke the fair use doctrine, which permits limited use of copyrighted material to provide analysis and criticism of published works.
In 2009, fair use appeared as a defense in lawsuits against filesharing. Charles Nesson argued that file-sharing qualifies as fair use in his defense of alleged filesharer Joel Tenenbaum. Kiwi Camara, defending alleged filesharer Jammie Thomas, announced a similar defense.
However, the court in Tenenbaum's case rejected the idea that file-sharing is fair use.
A U.S. court case from 2003, "Kelly v. Arriba Soft Corp.", established and developed the relationship between thumbnails, inline linking, and fair use. In the lower District Court case on a motion for summary judgment, Arriba Soft's use of thumbnail pictures and inline linking from Kelly's website in Arriba Soft's image search engine was found not to be fair use. That decision was appealed and contested by Internet rights activists such as the Electronic Frontier Foundation, who argued that it was fair use.
On appeal, the Ninth Circuit Court of Appeals found in favor of the defendant, Arriba Soft. In reaching its decision, the court utilized the statutory four-factor analysis. First, it found the purpose of creating the thumbnail images as previews to be sufficiently transformative, noting that they were not meant to be viewed at high resolution as the original artwork was. Second, the photographs had already been published, diminishing the significance of their nature as creative works. Third, although normally making a "full" replication of a copyrighted work may appear to violate copyright, here it was found to be reasonable and necessary in light of the intended use. Lastly, the court found that the market for the original photographs would not be substantially diminished by the creation of the thumbnails. To the contrary, the thumbnail searches could increase the exposure of the originals. In looking at all these factors as a whole, the court found that the thumbnails were fair use and remanded the case to the lower court for trial after issuing a revised opinion on July 7, 2003. The remaining issues were resolved with a default judgment after Arriba Soft had experienced significant financial problems and failed to reach a negotiated settlement.
In August 2008, Judge Jeremy Fogel of the Northern District of California ruled in "Lenz v. Universal Music Corp." that copyright holders cannot order a deletion of an online file without determining whether that posting reflected "fair use" of the copyrighted material. The case involved Stephanie Lenz, a writer and editor from Gallitzin, Pennsylvania, who made a home video of her thirteen-month-old son dancing to Prince's song "Let's Go Crazy" and posted the video on YouTube. Four months later, Universal Music, the owner of the copyright to the song, ordered YouTube to remove the video under the Digital Millennium Copyright Act. Lenz notified YouTube immediately that her video was within the scope of fair use, and she demanded that it be restored. YouTube complied after six weeks, rather than the two weeks required by the Digital Millennium Copyright Act. Lenz then sued Universal Music in California for her legal costs, claiming the music company had acted in bad faith by ordering removal of a video that represented fair use of the song. On appeal, the Court of Appeals for the Ninth Circuit ruled that a copyright owner must affirmatively consider whether the complained-of conduct constituted fair use before sending a takedown notice under the Digital Millennium Copyright Act, rather than waiting for the alleged infringer to assert fair use. 801 F.3d 1126 (9th Cir. 2015). "Even if, as Universal urges, fair use is classified as an 'affirmative defense,' we hold—for the purposes of the DMCA—fair use is uniquely situated in copyright law so as to be treated differently than traditional affirmative defenses. We conclude that because 17 U.S.C. § 107 created a type of non-infringing use, fair use is "authorized by the law" and a copyright holder must consider the existence of fair use before sending a takedown notification under § 512(c)."
In June 2011, Judge Philip Pro of the District of Nevada ruled in "Righthaven v. Hoehn" that the posting of an entire editorial article from the Las Vegas Review Journal in a comment as part of an online discussion was unarguably fair use. Judge Pro noted that "Noncommercial, nonprofit use is presumptively fair. ... Hoehn posted the Work as part of an online discussion. ... This purpose is consistent with comment, for which 17 U.S.C. § 107 provides fair use protection. ... It is undisputed that Hoehn posted the entire work in his comment on the Website. ... wholesale copying does not preclude a finding of fair use. ... there is no genuine issue of material fact that Hoehn's use of the Work was fair and summary judgment is appropriate." On appeal, the Court of Appeals for the Ninth Circuit ruled that Righthaven did not even have the standing needed to sue Hoehn for copyright infringement in the first place.
In addition to considering the four fair use factors, courts deciding fair use cases also look to the standards and practices of the professional community where the case comes from. Among the communities are documentarians, librarians, makers of Open Courseware, visual art educators, and communications professors.
Such codes of best practices have permitted communities of practice to make more informed risk assessments in employing fair use in their daily practice. For instance, broadcasters, cablecasters, and distributors typically require filmmakers to obtain errors and omissions insurance before the distributor will take on the film. Such insurance protects against errors and omissions made during the copyright clearance of material in the film. Before the "Documentary Filmmakers' Statement of Best Practices in Fair Use" was created in 2005, it was nearly impossible to obtain errors and omissions insurance for copyright clearance work that relied in part on fair use. This meant documentarians had either to obtain a license for the material or to cut it from their films. In many cases, it was impossible to license the material because the filmmaker sought to use it in a critical way. Soon after the best practices statement was released, all errors and omissions insurers in the U.S. shifted to begin offering routine fair use coverage.
Before 1991, sampling in certain genres of music was accepted practice and the copyright considerations were viewed as largely irrelevant. The strict decision against rapper Biz Markie's appropriation of a Gilbert O'Sullivan song in the case "Grand Upright Music, Ltd. v. Warner Bros. Records Inc." changed practices and opinions overnight. Samples now had to be licensed, as long as they rose "to a level of legally cognizable appropriation." This left the door open for the "de minimis" doctrine, for short or unrecognizable samples; such uses would not rise to the level of copyright infringement, because under the "de minimis" doctrine, "the law does not care about trifles." However, the Sixth Circuit later effectively eliminated the "de minimis" doctrine in the "Bridgeport Music, Inc. v. Dimension Films" case, holding that artists must "get a license or do not sample". The Court later clarified that its opinion did not apply to fair use, but between "Grand Upright" and "Bridgeport", practice had effectively shifted to eliminate unlicensed sampling.
Producers or creators of parodies of a copyrighted work have been sued for infringement by the targets of their ridicule, even though such use may be protected as fair use. These fair use cases distinguish between parodies, which use a work in order to poke fun at or comment on the work itself, and satire, which uses a work to comment on something else. Courts have been more willing to grant fair use protections to parodies than to satires, but the ultimate outcome in either circumstance will turn on the application of the four fair use factors.
For example, when Tom Forsythe appropriated Barbie dolls for his photography project "Food Chain Barbie" (depicting several copies of the doll naked and disheveled and about to be baked in an oven, blended in a food mixer, and the like), Mattel lost its copyright infringement lawsuit against him because his work effectively parodies Barbie and the values she represents. In "Rogers v. Koons", Jeff Koons tried to justify his appropriation of Art Rogers' photograph "Puppies" in his sculpture "String of Puppies" with the same parody defense. Koons lost because his work was not presented as a parody of Rogers' photograph in particular, but as a satire of society at large. This was insufficient to render the use fair.
In "Campbell v. Acuff-Rose Music Inc" the U.S. Supreme Court recognized parody as a potential fair use, even when done for profit. Roy Orbison's publisher, Acuff-Rose Music, had sued 2 Live Crew in 1989 for their use of Orbison's "Oh, Pretty Woman" in a mocking rap version with altered lyrics. The Supreme Court viewed 2 Live Crew's version as a ridiculing commentary on the earlier work, and ruled that when the parody was itself the product rather than mere advertising, commercial nature did not bar the defense. The "Campbell" court also distinguished parodies from satire, which they described as a broader social critique not intrinsically tied to ridicule of a specific work and so not deserving of the same use exceptions as parody because the satirist's ideas are capable of expression without the use of the other particular work.
A number of appellate decisions have recognized that a parody may be a protected fair use, including the Second ("Leibovitz v. Paramount Pictures Corp."); the Ninth ("Mattel v. Walking Mountain Productions"); and the Eleventh Circuits ("Suntrust Bank v. Houghton Mifflin Co."). In the 2001 "Suntrust Bank" case, Suntrust Bank and the Margaret Mitchell estate unsuccessfully brought suit to halt the publication of "The Wind Done Gone", which reused many of the characters and situations from "Gone with the Wind" but told the events from the point of view of the enslaved people rather than the slaveholders. The Eleventh Circuit, applying "Campbell", found that "The Wind Done Gone" was fair use and vacated the district court's injunction against its publication.
Cases in which a satirical use was found to be fair include "Blanch v. Koons" and "Williams v. Columbia Broadcasting Systems".
The transformative nature of computer-based analytical processes such as text mining, web mining and data mining has led many to form the view that such uses would be protected under fair use. This view was substantiated by the rulings of Judge Denny Chin in "Authors Guild, Inc. v. Google, Inc.", a case involving mass digitization of millions of books from research library collections. As part of the ruling that found the book digitization project was fair use, the judge stated "Google Books is also transformative in the sense that it has transformed book text into data for purposes of substantive research, including data mining and text mining in new areas".
Text and data mining was subject to further review in "Authors Guild v. HathiTrust", a case derived from the same digitization project mentioned above. Judge Harold Baer, in finding that the defendant's uses were transformative, stated that "the search capabilities of the [HathiTrust Digital Library] have already given rise to new methods of academic inquiry such as text mining".
There is a substantial body of fair use law regarding reverse engineering of computer software, hardware, network protocols, encryption and access control systems.
In May 2015, artist Richard Prince released an exhibit of photographs at the Gagosian Gallery in New York, entitled "New Portraits". His exhibit consisted of screenshots of Instagram users' pictures, which were largely unaltered, with Prince's commentary added beneath. Although no Instagram users authorized Prince to use their pictures, Prince argued that the addition of his own commentary to the pictures constituted fair use, such that he did not need permission to use the pictures or to pay royalties for his use. One of the pieces sold for $90,000. With regard to the works presented by Prince, the gallery where the pictures were showcased posted notices that "All images are subject to copyright." Several lawsuits were filed against Prince over the New Portraits exhibit.
While U.S. fair use law has been influential in some countries, some countries have fair use criteria drastically different from those in the U.S., and some countries do not have a fair use framework at all. Some countries have the concept of fair dealing instead of fair use, while others use different systems of limitations and exceptions to copyright. Many countries have some reference to an exemption for educational use, though the extent of this exemption varies widely.
Sources differ on whether fair use is fully recognized by countries other than the United States. American University's "infojustice.org" published a compilation of portions of over 40 nations' laws that explicitly mention fair use or fair dealing, and asserts that some of the fair dealing laws, such as Canada's, have evolved (such as through judicial precedents) to be quite close to those of the United States. This compilation includes fair use provisions from Bangladesh, Israel, South Korea, the Philippines, Sri Lanka, Taiwan, Uganda, and the United States. However, Paul Geller's 2009 "International Copyright Law and Practice" says that while some other countries recognize similar exceptions to copyright, only the United States and Israel fully recognize the concept of fair use.
The International Intellectual Property Alliance (IIPA), a lobby group of U.S. copyright industry bodies, has objected to international adoption of U.S.-style fair use exceptions, alleging that such laws have a dependency on common law and long-term legal precedent that may not exist outside the United States.
In November 2007, the Israeli Knesset passed a new copyright law that included a U.S.-style fair use exception. The law, which took effect in May 2008, permits the fair use of copyrighted works for purposes such as private study, research, criticism, review, news reporting, quotation, or instruction or testing by an educational institution. The law sets up four factors, similar to the U.S. fair use factors (see above), for determining whether a use is fair.
On September 2, 2009, the Tel Aviv District court ruled in "The Football Association Premier League Ltd. v. Ploni" that fair use is a user right. The court also ruled that streaming of live soccer games on the Internet is fair use. In doing so, the court analyzed the four fair use factors adopted in 2007 and cited U.S. case law, including "Kelly v. Arriba Soft Corp." and "Perfect 10, Inc. v. Amazon.com, Inc.".
In Malaysia, an amendment in 2012 to section 13(2)(a) of the Copyright Act 1987 created an exception called 'fair dealing' which is not restricted in its purpose. The four factors for fair use as specified in US law are included.
Fair use exists in Polish law and is covered by the Polish copyright law articles 23 to 35.
Compared to the United States, Polish fair use distinguishes between private and public use. In Poland, when a use is public, it risks fines. The defendant must also prove that his use was private when accused that it was not, or that other mitigating circumstances apply. Finally, Polish law treats all cases in which private material was made public as a potential copyright infringement, where fair use can apply but has to be proven by reasonable circumstances.
Section 35 of the Singaporean Copyright Act 1987 was amended in 2004 to allow a 'fair dealing' exception for any purpose. The four fair use factors similar to US law are included in the new section 35.
The Korean Copyright Act was amended to include a fair use provision, Article 35-3, in 2012. The law outlines a four-factor test similar to that used under U.S. law.
Fair dealing allows specific exceptions to copyright protections. The open-ended concept of fair use is generally not observed in jurisdictions where fair dealing is in place, although this does vary. Fair dealing is established in legislation in Australia, Canada, New Zealand, Singapore, India, South Africa and the United Kingdom, among others.
While Australian copyright exceptions are based on the Fair Dealing system, since 1998 a series of Australian government inquiries have examined, and in most cases recommended, the introduction of a "flexible and open" Fair Use system into Australian copyright law. From 1998 to 2017 there were eight Australian government inquiries which considered the question of whether fair use should be adopted in Australia. Six reviews recommended that Australia adopt a "Fair Use" model of copyright exceptions: two inquiries specifically into the Copyright Act (1998, 2014), and four broader reviews (two in 2004, and one each in 2013 and 2016). One review (2000) recommended against the introduction of fair use and another (2005) issued no final report. Two of the recommendations were specifically in response to the stricter copyright rules introduced as part of the Australia–United States Free Trade Agreement (AUSFTA), while the most recent two, by the Australian Law Reform Commission (ALRC) and the Productivity Commission (PC), were with reference to strengthening Australia's "digital economy".
The "Copyright Act of Canada" establishes fair dealing in Canada, which allows specific exceptions to copyright protection. In 1985, the Sub-Committee on the Revision of Copyright rejected replacing fair dealing with an open-ended system, and in 1986 the Canadian government agreed that "the present fair dealing provisions should not be replaced by the substantially wider 'fair use' concept". Since then, the Canadian fair dealing exception has broadened. It is now similar in effect to U.S. fair use, even though the frameworks are different.
"CCH Canadian Ltd v. Law Society of Upper Canada", [2004] 1 S.C.R. 339, is a landmark Supreme Court of Canada case that establishes the bounds of fair dealing in Canadian copyright law. The Law Society of Upper Canada was sued for copyright infringement for providing photocopy services to researchers. The Court unanimously held that the Law Society's practice fell within the bounds of fair dealing.
Within the United Kingdom, fair dealing is a legal doctrine that provides an exception to the nation's copyright law in cases where the copyright infringement is for the purposes of non-commercial research or study, criticism or review, or for the reporting of current events.
A balanced copyright law provides an economic benefit to many high-tech businesses such as search engines and software developers. Fair use is also crucial to non-technology industries such as insurance, legal services, and newspaper publishers.
On September 12, 2007, the Computer and Communications Industry Association (CCIA), a group representing companies including Google Inc., Microsoft, Oracle Corporation, Sun Microsystems, Yahoo! and other high-tech companies, released a study that found that fair use exceptions to US copyright laws were responsible for more than $4.5 trillion in annual revenue for the United States economy, representing one-sixth of the total US GDP. The study was conducted using a methodology developed by the World Intellectual Property Organization.
The study found that fair use dependent industries are directly responsible for more than eighteen percent of US economic growth and nearly eleven million American jobs. "As the United States economy becomes increasingly knowledge-based, the concept of fair use can no longer be discussed and legislated in the abstract. It is the very foundation of the digital age and a cornerstone of our economy," said Ed Black, President and CEO of CCIA. "Much of the unprecedented economic growth of the past ten years can actually be credited to the doctrine of fair use, as the Internet itself depends on the ability to use content in a limited and unlicensed manner."
Fair Use Week is an international event that celebrates fair use and fair dealing. Fair Use Week was first proposed on a Fair Use Allies listserv, which was an outgrowth of the Library Code of Best Practices Capstone Event, celebrating the development and promulgation of ARL's "Code of Best Practices in Fair Use for Academic and Research Libraries". While the idea was not taken up nationally, Kyle K. Courtney, Copyright Advisor at Harvard University, launched the first-ever Fair Use Week at Harvard University in February 2014, with a full week of activities celebrating fair use. The first Fair Use Week included blog posts from national and international fair use experts, live fair use panels, fair use workshops, and a Fair Use Stories Tumblr blog, where people from the world of art, music, film, and academia shared stories about the importance of fair use to their community. The first Fair Use Week was so successful that in 2015 ARL teamed up with Courtney and helped organize the Second Annual Fair Use Week, with participation from many more institutions. ARL also launched an official Fair Use Week website, which was transferred from Pia Hunter, who attended the Library Code of Best Practices Capstone Event and had originally purchased the domain name fairuseweek.org. | https://en.wikipedia.org/wiki?curid=10772
Flying car
A flying car is a type of personal air vehicle or roadable aircraft that provides door-to-door transportation by both ground and air. The term "flying car" is also sometimes used to include hovercars.
Many prototypes have been built since the early 20th century, using a variety of flight technologies, such as distributed propulsion; some have true VTOL performance. The PAL-V Liberty roadable aircraft targeted 2021 to become the first flying car in full production.
Their appearance is often predicted by futurologists, with their failure ever to reach production leading to the catchphrase, "Where's my flying car?" Flying cars are also a popular theme in fantasy and science fiction stories.
In 1926, Henry Ford displayed an experimental single-seat airplane that he called the "sky flivver". The project was abandoned two years later when a distance-record attempt flight crashed, killing the pilot. The Flivver was not a flying car at all, but it did get press attention at the time, exciting the public that they would have a mass-produced affordable airplane product that would be made, marketed, sold, and maintained just like an automobile. The airplane was to be as commonplace in the future as the Model T of the time.
In 1940, Henry Ford famously predicted: "Mark my word: a combination airplane and motorcar is coming. You may smile, but it will come."
In 1942, the Soviet armed forces experimented with a gliding tank, the Antonov A-40, but it was not capable of flying on its own.
The Aerocar, designed and built by Molt Taylor, made a successful flight in December 1949, and in following years versions underwent a series of road and flying tests. Chuck Berry featured the concept in his 1956 song "You Can't Catch Me", and in December 1956 the Civil Aeronautics Administration approved the design for mass production, but despite wide publicity and an improved version produced in 1989, Taylor did not succeed in getting the flying car into production. In total, six Aerocars were built.
Between 1956 and 1958, Ford's Advanced Design studio built the Volante Tri-Athodyne, a 3/8-scale concept car model. It was designed to have three ducted fans, each with its own motor, that would lift it off the ground and move it through the air. In a public relations release, Ford noted that "the day where there will be an aero-car in every garage is still some time off", but added that "the Volante indicates one direction that the styling of such a vehicle would take".
In 1957, Popular Mechanics reported that Hiller Helicopters was developing a ducted-fan aircraft that would be easier to fly than helicopters, and should cost a lot less. Hiller engineers expected that this type of an aircraft would become the basis for a whole family of special-purpose aircraft.
In 1956, the US Army's Transportation Research Command began an investigation into "flying jeeps", ducted-fan-based aircraft that were envisioned to be smaller and easier to fly than helicopters. In 1957, Chrysler, Curtiss-Wright, and Piasecki were assigned contracts for building and delivery of prototypes. They all delivered their prototypes; however, Piasecki's VZ-8 was the most successful of the three. While it would normally operate close to the ground, it was capable of flying to several thousand feet, proving to be stable in flight. Nonetheless, the Army decided that the "Flying Jeep concept [was] unsuitable for the modern battlefield", and concentrated on the development of conventional helicopters. In addition to the army contract, Piasecki was developing the Sky Car, a modified version of its VZ-8 for civilian use.
In the mid-1980s, former Boeing engineer Fred Barker founded Flight Innovations Inc. and began the development of the Sky Commuter, a small ducted-fan VTOL aircraft. It was a compact two-passenger craft made primarily of composite materials. In 2008, the remaining prototype was sold for £86,000 on eBay.
As of 2017, several companies were developing electric flying cars, or eVTOLs, for production by 2020.
In 2016, AeroMobil was test-flying a prototype that obtained Slovak ultralight certification. Neither the availability date nor the price of a final product has been specified. In 2018, it unveiled a concept that resembled a flying sportscar with VTOL capability.
Urban Aeronautics' X-Hawk is a VTOL turbojet powered aircraft announced in 2006 with a first flight planned for 2009. It was intended to operate much like a tandem rotor helicopter, but with ducted fans rather than exposed rotors. The requisite decrease in rotor size would also decrease fuel efficiency. The X-Hawk was being promoted for rescue and utility functions. As of 2013, no flights had been reported.
Terrafugia has a flying road vehicle, the Terrafugia Transition. On 7 May 2013, Terrafugia announced the TF-X, a plug-in hybrid tilt-rotor vehicle that would be the first fully autonomous flying car. It would have a limited range per flight, with batteries rechargeable by the engine. Development of the TF-X was expected to last 8–12 years, meaning it would not come to market before the 2020s.
The Moller Skycar M400 is a prototype personal VTOL (vertical take-off and landing) aircraft, powered by four pairs of in-tandem Wankel rotary engines, and is approaching the problems of satellite navigation incorporated in the proposed Small Aircraft Transportation System. Moller also advises that, currently, the Skycar would only be allowed to fly from airports and heliports. The Skycar M400 has tiny wheels and no road capability at all. Moller has been developing VTOL craft since the late 1960s, but no Moller vehicle has ever achieved free flight out of ground effect. The proposed Autovolantor model has an all-electric version powered by Altairnano batteries.
The Xplorair PX200 was a French project for a single-seat VTOL aircraft without rotating airfoils, relying on the Coandă effect and using an array of small jet engines called "thermoreactors" embedded within the tiltwing's body. Announced in 2007, the project was funded by the Government of France and supported by various aerospace firms. A full-scale drone was scheduled to fly at the Paris Air Show 2017, followed by the commercialization of a single-seat flying car in the years after.
The SkyRider X2R is a prototype of a flying car developed by MACRO Industries, Inc. It is lighter than the Moller Skycar, which has never successfully flown untethered.
The production-ready single-engine, roadable PAL-V Liberty autogyro, or gyrocopter, debuted at the Geneva Motor Show in March 2018, then became the first flying car in production, and was set to launch in 2020, with full production scheduled for 2021 in Gujarat, India.
Flying cars were planned to enter the Russian market in 2018.
Turkey's top UAV producer Baykar is working on its flying car, named Cezeri. It was first introduced at TEKNOFEST Istanbul in 2019.
A practical flying car must be capable of safe, reliable and environmentally-friendly operation both on public roads and in the air. For widespread adoption it must also be able to fly without a qualified pilot at the controls and come at affordable purchase and running costs.
Many types of aircraft technologies and form factors have been tried. The simplest and earliest approach was to fit a driveable car with bolt-on fixed flying surfaces and a propeller. However, such a design must either tow its removable parts on a separate trailer behind it or return to its last landing point before taking off again. Other conventional takeoff fixed-wing designs include folding wings, which the car carries with it when driven on the road.
Vertical takeoff and landing (VTOL) designs include rotorcraft with folding blades, as well as ducted-fan and tiltrotor vehicles. Most design concepts have inherent problems. Ducted-fan aircraft such as the Moller Skycar tend to easily lose stability and have been unable to travel at greater than 30–40 knots. Tiltrotors, such as the V-22 Osprey convertiplane, are generally noisy. To date, no VTOL vehicle has ever demonstrated adequate road capabilities.
The autogyro has an unpowered lifting rotor, relying on its forward airspeed to generate lift. For road use it requires a folding rotor.
Although commercial flying is, statistically, much safer than driving, personal flying cars, unlike commercial planes, might not be subject to as many safety checks, and their pilots would not be as well trained. Humans already have problems driving in two dimensions (forward and backward, side to side); adding the up-and-down dimension would make "driving", or rather flying, much more difficult. However, this problem might be solved by the sole use of self-flying and self-driving cars. In mid-air collisions and mechanical failures, the aircraft could fall from the sky or go through an emergency landing, resulting in deaths and property damage. In addition, poor weather conditions such as low air density, lightning storms, and heavy rain, snow or fog could be challenging and affect the aircraft's aerodynamics.
A major problem, which increases rapidly with wider adoption, is the risk of mid-air collisions. Another is the unscheduled or emergency landing of a flying car on an unprepared location beneath, including the possibility of accident debris. Regulatory regimes are being developed in anticipation of a large increase in the numbers of roadable aircraft and personal air vehicles in the near future, and compliance with these regimes will be necessary for safe flight.
Mechanically, the demands of flight are so strict that every opportunity must be taken to keep weight to a minimum, and a typical airframe is lightweight and easily damaged. On the other hand, a road vehicle must be able to withstand significant impact loads from casual incidents as well as low-speed and high-speed impacts, and the high strength this demands can add considerable weight. A practical flying car must be both strong enough to pass road safety standards and light enough to fly.
A flying car capable of widespread use must operate safely within a heavily populated urban environment. The lift and propulsion systems must be quiet, have safety shrouds around all moving parts such as rotors, and must not create excessive pollution.
A basic flying car requires the person at the controls to be both a qualified road driver and aircraft pilot. This is impractical for the majority of people and so wider adoption will require computer systems to de-skill piloting. These include aircraft maneuvering, navigation and emergency procedures, all in potentially crowded airspace. Fly-by-wire computers can also make up for many deficiencies in flight dynamics, such as stability. A practical flying car may need to be a fully autonomous vehicle in which people are present only as passengers.
The need for the propulsion system to be both small and powerful can at present only be met using advanced and expensive technologies. The cost of manufacture could therefore be as much as 10 million dollars.
Flying cars would be used for shorter distances, at higher frequency, and at lower speeds and lower altitudes than conventional passenger aircraft. However optimal fuel efficiency for airplanes is obtained at high altitudes and high subsonic speeds, so a flying car's energy efficiency would be low compared to a conventional aircraft. Similarly, the flying car's road performance would be compromised by the requirements of flight, so it would be less economical than a conventional motor car as well.
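The aerodynamic reasoning behind this claim can be illustrated, as a sketch rather than anything drawn from the sources above, with the Breguet range equation. In its jet-aircraft form (a propeller form substitutes propulsive efficiency and piston fuel consumption for V and c_t), range grows with speed, lift-to-drag ratio, and fuel fraction:

```latex
% Breguet range equation (jet form), shown for illustration only.
%   R        : range
%   V        : cruise speed
%   c_t      : thrust-specific fuel consumption
%   L/D      : lift-to-drag ratio
%   W_i, W_f : weights at the start and end of cruise
R = \frac{V}{c_t}\,\frac{L}{D}\,\ln\frac{W_i}{W_f}
```

Since the product V(L/D) is greatest in thin air at high subsonic speed, a flying car confined to low altitudes and modest speeds operates far from this optimum, and the structural compromises needed for road use further erode its lift-to-drag ratio.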
The flying car was and remains a common feature of conceptions of the future, including imagined near futures such as those of the 21st century.
In 1999, the U.S. journalist Gail Collins noted the long history of confident predictions that flying cars were just around the corner.
As a result, flying cars have been referred to jokingly with the question "Where's my flying car?", emblematic of the supposed failure of modern technology to match futuristic visions that were promoted in earlier decades.
Comedian Lewis Black had a similar routine early in the 2000s, in which he says, "This new millennium sucks! It's exactly the same as the old millennium! You know why? No flying cars!"
The flying car has been depicted in many works of fantasy and science fiction. | https://en.wikipedia.org/wiki?curid=10773 |
Film editing
Film editing is both a creative and a technical part of the post-production process of filmmaking. The term is derived from the traditional process of working with film, though film editing today increasingly involves the use of digital technology.
The film editor works with the raw footage, selecting shots and combining them into sequences which create a finished motion picture. Film editing is described as an art or skill, the only art that is unique to cinema, separating filmmaking from other art forms that preceded it, although there are close parallels to the editing process in other art forms such as poetry and novel writing. Film editing is often referred to as the "invisible art" because when it is well-practiced, the viewer can become so engaged that he or she is not aware of the editor's work.
On its most fundamental level, film editing is the art, technique and practice of assembling shots into a coherent sequence. The job of an editor is not simply to mechanically put pieces of a film together, cut off film slates or edit dialogue scenes. A film editor must creatively work with the layers of images, story, dialogue, music, pacing, as well as the actors' performances to effectively "re-imagine" and even rewrite the film to craft a cohesive whole. Editors usually play a dynamic role in the making of a film. Sometimes, auteurist film directors edit their own films, for example, Akira Kurosawa, Bahram Beyzai, Steven Soderbergh, and the Coen brothers.
With the advent of digital editing, film editors and their assistants have become responsible for many areas of filmmaking that used to be the responsibility of others. For instance, in past years, picture editors dealt only with just that—picture. Sound, music, and (more recently) visual effects editors dealt with the practicalities of other aspects of the editing process, usually under the direction of the picture editor and director. However, digital systems have increasingly put these responsibilities on the picture editor. It is common, especially on lower budget films, for the editor to sometimes cut in temporary music, mock up visual effects and add temporary sound effects or other sound replacements. These temporary elements are usually replaced with more refined final elements produced by the sound, music and visual effects teams hired to complete the picture.
Early films were short films that were one long, static, and locked-down shot. Motion in the shot was all that was necessary to amuse an audience, so the first films simply showed activity such as traffic moving along a city street. There was no story and no editing. Each film ran as long as there was film in the camera.
The use of film editing to establish continuity, involving action moving from one sequence into another, is attributed to British film pioneer Robert W. Paul's "Come Along, Do!", made in 1898 and one of the first films to feature more than one shot. In the first shot, an elderly couple is outside an art exhibition having lunch and then follows other people inside through the door. The second shot shows what they do inside. Paul's 'Cinematograph Camera No. 1' of 1896 was the first camera to feature reverse-cranking, which allowed the same film footage to be exposed several times and thereby to create super-positions and multiple exposures. One of the first films to use this technique, Georges Méliès's "The Four Troublesome Heads" from 1898, was produced with Paul's camera.
The further development of action continuity in multi-shot films continued in 1899–1900 at the Brighton School in England, where it was definitively established by George Albert Smith and James Williamson. In 1900, Smith made "As Seen Through a Telescope", in which the main shot shows a street scene with a young man tying the shoelace and then caressing the foot of his girlfriend, while an old man observes this through a telescope. There is then a cut to a close shot of the hands on the girl's foot shown inside a black circular mask, and then a cut back to the continuation of the original scene.
Even more remarkable was James Williamson's "Attack on a China Mission Station", made around the same time in 1900. The first shot shows the gate to the mission station from the outside being attacked and broken open by Chinese Boxer rebels; then there is a cut to the garden of the mission station, where a pitched battle ensues. An armed party of British sailors arrives to defeat the Boxers and rescue the missionary's family. The film used the first "reverse angle" cut in film history.
James Williamson concentrated on making films taking action from one place shown in one shot to the next shown in another shot in films like "Stop Thief!" and "Fire!", made in 1901, and many others. He also experimented with the close-up, and made perhaps the most extreme one of all in "The Big Swallow", when his character approaches the camera and appears to swallow it. These two filmmakers of the Brighton School also pioneered film editing; they tinted their work with color and used trick photography to enhance the narrative. By 1900, their films were extended scenes of up to five minutes long.
Other filmmakers then took up all these ideas, including the American Edwin S. Porter, who started making films for the Edison Company in 1901. Porter worked on a number of minor films before making "Life of an American Fireman" in 1903. The film was the first American film with a plot, featuring action and even a closeup of a hand pulling a fire alarm. The film comprised a continuous narrative over seven scenes, rendered in a total of nine shots. He put a dissolve between every shot, just as Georges Méliès was already doing, and he frequently had the same action repeated across the dissolves. His film "The Great Train Robbery" (1903) had a running time of twelve minutes, with twenty separate shots and ten different indoor and outdoor locations. He used the cross-cutting editing method to show simultaneous action in different places.
These early film directors discovered important aspects of motion picture language: that the screen image does not need to show a complete person from head to toe and that splicing together two shots creates in the viewer's mind a contextual relationship. These were the key discoveries that made all non-live or non-live-on-videotape narrative motion pictures and television possible—that shots (in this case, whole scenes since each shot is a complete scene) can be photographed at widely different locations over a period of time (hours, days or even months) and combined into a narrative whole. That is, "The Great Train Robbery" contains scenes shot on sets of a telegraph station, a railroad car interior, and a dance hall, with outdoor scenes at a railroad water tower, on the train itself, at a point along the track, and in the woods. But when the robbers leave the telegraph station interior (set) and emerge at the water tower, the audience believes they went immediately from one to the other. Or that when they climb on the train in one shot and enter the baggage car (a set) in the next, the audience believes they are on the same train.
Sometime around 1918, Russian director Lev Kuleshov did an experiment that demonstrates this point (see Kuleshov Experiment). He took an old film clip of a headshot of a noted Russian actor and intercut the shot with a shot of a bowl of soup, then with a shot of a child playing with a teddy bear, then with a shot of an elderly woman in a casket. When he showed the film to people, they praised the actor's acting—the hunger in his face when he saw the soup, the delight in the child, and the grief when looking at the dead woman. Of course, the shot of the actor was made years before the other shots and he never "saw" any of the items. The simple act of juxtaposing the shots in a sequence made the relationship.
Before the widespread use of digital non-linear editing systems, the initial editing of all films was done with a positive copy of the film negative called a film workprint (cutting copy in UK) by physically cutting and splicing together pieces of film. Strips of footage would be hand cut and attached together with tape and, later, glue. Editors were very precise; if they made a wrong cut or needed a fresh positive print, it cost the production money and time for the lab to reprint the footage. Additionally, each reprint put the negative at risk of damage. With the invention of the splicer and the practice of threading the machine with a viewer such as a Moviola, or a "flatbed" machine such as a K.-E.-M. or Steenbeck, the editing process sped up a little and cuts came out cleaner and more precise. The Moviola editing practice is non-linear, allowing the editor to make choices faster, a great advantage in editing episodic films for television, which have very short timelines for completing the work. All film studios and production companies who produced films for television provided this tool for their editors. Flatbed editing machines were used for playback and refinement of cuts, particularly in feature films and films made for television, because they were less noisy and cleaner to work with.
They were used extensively for documentary and drama production within the BBC's Film Department. Operated by a team of two, an editor and an assistant editor, this tactile process required significant skill but allowed editors to work extremely efficiently.
Today, most films are edited digitally (on systems such as Media Composer, Final Cut Pro or Premiere Pro) and bypass the film positive workprint altogether. In the past, the use of a film positive (not the original negative) allowed the editor to do as much experimenting as he or she wished, without the risk of damaging the original. With digital editing, editors can experiment just as much as before except with the footage completely transferred to a computer hard drive.
When the film workprint had been cut to a satisfactory state, it was then used to make an edit decision list (EDL). The negative cutter referred to this list while processing the negative, splitting the shots into rolls, which were then contact printed to produce the final film print or answer print. Today, production companies have the option of bypassing negative cutting altogether. With the advent of digital intermediate ("DI"), the negative does not necessarily need to be physically cut and hot-spliced together; rather, the negative is optically scanned into the computer(s) and a cut list is confirmed by a DI editor.
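For illustration, a fragment of a cut list in the widely used CMX3600 EDL style might look like the following (the title, reel names, and timecodes here are invented for the example):

```
TITLE: EXAMPLE_FEATURE_REEL_1

001  REEL01  V  C  01:00:00:00 01:00:04:12 00:00:00:00 00:00:04:12
002  REEL03  V  C  03:12:10:00 03:12:15:00 00:00:04:12 00:00:09:12
```

Each event line gives an event number, the source reel, the track (V for video), the transition (C for cut), and then the source in/out and record in/out timecodes that tell the negative cutter or DI editor exactly which frames go where.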
In the early years of film, editing was considered a technical job; editors were expected to "cut out the bad bits" and string the film together. Indeed, when the Motion Picture Editors Guild was formed, they chose to be "below the line", that is, not a creative guild, but a technical one. Women were not usually able to break into the "creative" positions; directors, cinematographers, producers, and executives were almost always men. Editing afforded creative women a place to assert their mark on the filmmaking process. The history of film has included many women editors such as Dede Allen, Anne Bauchens, Margaret Booth, Barbara McLean, Anne V. Coates, Adrienne Fazan, Verna Fields, Blanche Sewell and Eda Warren.
Post-production editing may be summarized by three distinct phases commonly referred to as the editor's cut, the director's cut, and the final cut.
There are several editing stages and the editor's cut is the first. An editor's cut (sometimes referred to as the "Assembly edit" or "Rough cut") is normally the first pass of what the final film will be when it reaches picture lock. The film editor usually starts working while principal photography starts. Sometimes, prior to cutting, the editor and director will have seen and discussed "dailies" (raw footage shot each day) as shooting progresses. As production schedules have shortened over the years, this co-viewing happens less often. Screening dailies gives the editor a general idea of the director's intentions. Because it is the first pass, the editor's cut might be longer than the final film. The editor continues to refine the cut while shooting continues, and often the entire editing process goes on for many months and sometimes more than a year, depending on the film.
When shooting is finished, the director can then turn his or her full attention to collaborating with the editor and further refining the cut of the film. This is the time set aside for molding the film editor's first cut to fit the director's vision. In the United States, under the rules of the Directors Guild of America, directors receive a minimum of ten weeks after completion of principal photography to prepare their first cut. While collaborating on what is referred to as the "director's cut", the director and the editor go over the entire movie in great detail; scenes and shots are re-ordered, removed, shortened and otherwise tweaked. Often it is discovered that there are plot holes, missing shots or even missing segments, which might require that new scenes be filmed. Because of this time working closely and collaborating – a period that is normally far longer and more intricately detailed than the entire preceding film production – many directors and editors form a unique artistic bond.
Often after the director has had their chance to oversee a cut, the subsequent cuts are supervised by one or more producers, who represent the production company or movie studio. There have been several conflicts in the past between the director and the studio, sometimes leading to the use of the "Alan Smithee" credit signifying when a director no longer wants to be associated with the final release.
In motion picture terminology, a montage (from the French for "putting together" or "assembly") is a film editing technique.
There are at least three senses of the term: in French film practice, "montage" carries its literal meaning of assembly and simply identifies editing; in Soviet filmmaking of the 1920s, it denoted a method of juxtaposing shots to derive meaning that exists in neither shot alone; and in classical Hollywood cinema, a "montage sequence" is a short segment in which narrative information is presented in condensed fashion.
Although film director D.W. Griffith was not part of the montage school, he was one of the early proponents of the power of editing — mastering cross-cutting to show parallel action in different locations, and codifying film grammar in other ways as well. Griffith's work in the teens was highly regarded by Lev Kuleshov and other Soviet filmmakers and greatly influenced their understanding of editing.
Kuleshov was among the very first to theorize about the relatively young medium of the cinema in the 1920s. For him, the unique essence of the cinema — that which could be duplicated in no other medium — is editing. He argues that editing a film is like constructing a building. Brick-by-brick (shot-by-shot) the building (film) is erected. His often-cited Kuleshov Experiment established that montage can lead the viewer to reach certain conclusions about the action in a film. Montage works because viewers infer meaning based on context. Sergei Eisenstein was briefly a student of Kuleshov's, but the two parted ways because they had different ideas of montage. Eisenstein regarded montage as a dialectical means of creating meaning. By contrasting unrelated shots he tried to provoke associations in the viewer, which were induced by shocks. But Eisenstein did not always do his own editing, and some of his most important films were edited by Esfir Tobak.
A montage sequence consists of a series of short shots that are edited into a sequence to condense narrative. It is usually used to advance the story as a whole (often to suggest the passage of time), rather than to create symbolic meaning. In many cases, a song plays in the background to enhance the mood or reinforce the message being conveyed. One famous example of montage was seen in the 1968 film "2001: A Space Odyssey", depicting the start of man's first development from apes to humans. Another example that is employed in many films is the sports montage. The sports montage shows the star athlete training over a period of time, each shot showing more improvement than the last. Classic examples include "Rocky" and "The Karate Kid".
The word's association with Sergei Eisenstein is often condensed—too simply—into the idea of "juxtaposition" or into two words: "collision montage," whereby two adjacent shots that oppose each other on formal parameters or on the content of their images are cut against each other to create a new meaning not contained in the respective shots: Shot a + Shot b = New Meaning c.
The association of collision montage with Eisenstein is not surprising. He consistently maintained that the mind functions dialectically, in the Hegelian sense, that the contradiction between opposing ideas (thesis versus antithesis) is resolved by a higher truth, synthesis. He argued that conflict was the basis of "all" art, and never failed to find montage at work in other cultures. For example, he saw montage as a guiding principle in the construction of "Japanese hieroglyphics in which two independent ideographic characters ('shots') are juxtaposed and "explode" into a concept. Thus:
Eye + Water = Crying
Door + Ear = Eavesdropping
Child + Mouth = Screaming
Mouth + Dog = Barking.
Mouth + Bird = Singing."
He also found montage in Japanese haiku, where short sense perceptions are juxtaposed and synthesized into a new meaning.
As Dudley Andrew notes, "The collision of attractions from line to line produces the unified psychological effect which is the hallmark of haiku and montage."
Continuity is a term for the consistency of on-screen elements over the course of a scene or film, such as whether an actor's costume remains the same from one scene to the next, or whether a glass of milk held by a character is full or empty throughout the scene. Because films are typically shot out of sequence, the script supervisor will keep a record of continuity and provide that to the film editor for reference. The editor may try to maintain continuity of elements, or may intentionally create a discontinuous sequence for stylistic or narrative effect.
The technique of continuity editing, part of the classical Hollywood style, was developed by early European and American directors, in particular D.W. Griffith in his films such as "The Birth of a Nation" and "Intolerance". The classical style embraces temporal and spatial continuity as a way of advancing the narrative, using such techniques as the 180-degree rule, the establishing shot, and the shot reverse shot. Often, continuity editing means finding a balance between literal continuity and perceived continuity. For instance, editors may condense action across cuts in a non-distracting way. A character walking from one place to another may "skip" a section of floor from one side of a cut to the other, but the cut is constructed to appear continuous so as not to distract the viewer.
Early Russian filmmakers such as Lev Kuleshov (already mentioned) further explored and theorized about editing and its ideological nature. Sergei Eisenstein developed a system of editing that was unconcerned with the rules of the continuity system of classical Hollywood that he called Intellectual montage.
Alternatives to traditional editing were also explored by early surrealist and Dada filmmakers such as Luis Buñuel (director of the 1929 "Un Chien Andalou") and René Clair (director of 1924's "Entr'acte" which starred famous Dada artists Marcel Duchamp and Man Ray).
The French New Wave filmmakers such as Jean-Luc Godard and François Truffaut and their American counterparts such as Andy Warhol and John Cassavetes also pushed the limits of editing technique during the late 1950s and throughout the 1960s. French New Wave films and the non-narrative films of the 1960s used a carefree editing style and did not conform to the traditional editing etiquette of Hollywood films. Like its Dada and surrealist predecessors, French New Wave editing often drew attention to itself by its lack of continuity, its demystifying self-reflexive nature (reminding the audience that they were watching a film), and by the overt use of jump cuts or the insertion of material not often related to any narrative. Three of the most influential editors of French New Wave films were the women who (in combination) edited 15 of Godard's films: Françoise Collin, Agnès Guillemot, and Cécile Decugis; another notable editor is Marie-Josèphe Yoyotte, the first black woman editor in French cinema and editor of "The 400 Blows".
Since the late 20th century Post-classical editing has seen faster editing styles with nonlinear, discontinuous action.
Vsevolod Pudovkin noted that the editing process is the one phase of production that is truly unique to motion pictures. Every other aspect of filmmaking originated in a different medium than film (photography, art direction, writing, sound recording), but editing is the one process that is unique to film. Filmmaker Stanley Kubrick was quoted as saying: "I love editing. I think I like it more than any other phase of filmmaking. If I wanted to be frivolous, I might say that everything that precedes editing is merely a way of producing a film to edit."
According to writer-director Preston Sturges: [T]here is a law of natural cutting and that this replicates what an audience in a legitimate theater does for itself. The more nearly the film cutter approaches this law of natural interest, the more invisible will be his cutting. If the camera moves from one person to another at the exact moment that one in the legitimate theatre would have turned his head, one will not be conscious of a cut. If the camera misses by a quarter of a second, one will get a jolt. There is one other requirement: the two shots must be approximately of the same tone value. If one cuts from black to white, it is jarring. At any given moment, the camera must point at the exact spot the audience wishes to look at. To find that spot is absurdly easy: one has only to remember where one was looking at the time the scene was made.
Assistant editors aid the editor and director in collecting and organizing all the elements needed to edit the film. The Motion Picture Editors Guild defines an assistant editor as "a person who is assigned to assist an Editor. His [or her] duties shall be such as are assigned and performed under the immediate direction, supervision, and responsibility of the editor." When editing is finished, they oversee the various lists and instructions necessary to put the film into its final form. Editors of large budget features will usually have a team of assistants working for them. The first assistant editor is in charge of this team and may do a small bit of picture editing as well, if necessary. Often assistant editors will perform temporary sound, music, and visual effects work. The other assistants will have set tasks, usually helping each other when necessary to complete the many time-sensitive tasks at hand. In addition, an apprentice editor may be on hand to help the assistants. An apprentice is usually someone who is learning the ropes of assisting.
Television shows typically have one assistant per editor. This assistant is responsible for every task required to bring the show to the final form. Lower budget features and documentaries will also commonly have only one assistant.
The organizational aspects of the job could best be compared to database management. When a film is shot, every piece of picture or sound is coded with numbers and timecode. It is the assistant's job to keep track of these numbers in a database, which, in non-linear editing, is linked to the computer program. The editor and director cut the film using digital copies of the original film and sound, commonly referred to as an "offline" edit. When the cut is finished, it is the assistant's job to bring the film or television show "online". They create lists and instructions that tell the picture and sound finishers how to put the edit back together with the high-quality original elements. Assistant editing can be seen as a career path to eventually becoming an editor. Many assistants, however, do not choose to pursue advancement to editor, and are very happy at the assistant level, working long and rewarding careers on many films and television shows.
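As a minimal sketch of the kind of timecode bookkeeping such a database automates (not from the source; the function name, frame rate, and timecode values are invented for the example):

```python
# Convert an SMPTE-style timecode string (HH:MM:SS:FF) into an
# absolute frame count, assuming a non-drop-frame rate such as 24 fps.
def timecode_to_frames(tc: str, fps: int = 24) -> int:
    hours, minutes, seconds, frames = (int(x) for x in tc.split(":"))
    return ((hours * 60 + minutes) * 60 + seconds) * fps + frames

# One hour, ten seconds and twelve frames into a reel:
print(timecode_to_frames("01:00:10:12"))  # -> 86652
```

Indexing every shot by a number like this is what lets the offline edit be conformed frame-accurately back to the original elements.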
| https://en.wikipedia.org/wiki?curid=10775 |
Friedrich Wöhler
Friedrich Wöhler () FRS(For) HFRSE (31 July 1800 – 23 September 1882) was a German chemist, known for his work in inorganic chemistry, being the first to isolate the chemical elements beryllium and yttrium in pure metallic form. He was the first to prepare several inorganic compounds including silane and silicon nitride.
Wöhler is known for seminal contributions in organic chemistry, in particular the Wöhler synthesis of urea. His synthesis of the organic compound urea in the laboratory from inorganic precursors refuted the prevailing belief that organic compounds could only be produced by living organisms due to a "life force". Wöhler also introduced the concept of a functional group, which advanced the understanding of organic compounds.
Friedrich Wöhler was born in Eschersheim, Germany, the son of a veterinarian. His secondary education was at the Frankfurt Gymnasium. During his time at the gymnasium, Wöhler began chemical experimentation in a home laboratory provided by his father. He began his higher education at Marburg University in 1820.
On 2 September 1823 Wöhler passed his examinations as a Doctor of Medicine, Surgery, and Obstetrics at Heidelberg University, having studied in the laboratory of chemist Leopold Gmelin. Gmelin encouraged him to focus on chemistry, and arranged for Wöhler to conduct research under the direction of chemist Jöns Jakob Berzelius in Stockholm, Sweden. Wöhler's time in Stockholm with Berzelius marked the beginning of a long professional relationship between the two scientists. Wöhler translated some of Berzelius's scientific writings into the German language for the purpose of international publication.
From 1826 to 1831 Wöhler taught chemistry at the Polytechnic School in Berlin. From 1831 until 1836 he taught at the Polytechnic School at Kassel. In the spring of 1836, he became Friedrich Stromeyer's successor as Ordinary Professor of Chemistry in the University of Göttingen, where he served as chemistry professor for 21 years. He remained affiliated with the University of Göttingen until his death in 1882. During his time at the University of Göttingen, approximately 8000 research students trained in his laboratory. In 1834, he was elected a foreign member of the Royal Swedish Academy of Sciences.
Wöhler investigated more than twenty-five chemical elements during his career.
Hans Christian Ørsted was the first to separate out the element aluminium, in 1825, using a reduction of aluminium chloride with a potassium amalgam. Although Ørsted published his findings on the isolation of aluminium in the form of small particles, no other investigators were able to replicate his findings until 1936. Ørsted is now credited with discovering aluminium. Ørsted's findings on aluminium preparation were developed further by Wöhler, with Ørsted's permission. Wöhler modified Ørsted's methods, substituting potassium metal for potassium amalgam in the reduction of aluminium chloride. Using this improved method, Wöhler isolated aluminium powder in pure form on 22 October 1827. In 1845 he showed that the aluminium powder could be consolidated into solid balls of pure metallic aluminium. For this work, Wöhler is credited with the first isolation of aluminium metal in pure form.
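In modern notation (the balanced equation is standard chemistry, not quoted from the source), the potassium reduction can be written:

$$\mathrm{AlCl_3} + 3\,\mathrm{K} \longrightarrow \mathrm{Al} + 3\,\mathrm{KCl}$$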
In 1828 Wöhler was the first to isolate the element beryllium in pure metallic form (also independently isolated by Antoine Bussy). In the same year, he became the first to isolate the element yttrium in pure metallic form. He achieved these preparations by heating the anhydrous chlorides of beryllium and yttrium with potassium metal.
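The corresponding reductions, again in standard balanced form, are:

$$\mathrm{BeCl_2} + 2\,\mathrm{K} \longrightarrow \mathrm{Be} + 2\,\mathrm{KCl} \qquad\text{and}\qquad \mathrm{YCl_3} + 3\,\mathrm{K} \longrightarrow \mathrm{Y} + 3\,\mathrm{KCl}$$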
In 1850, Wöhler determined that what was believed until then to be metallic titanium was in fact a mixture of titanium, carbon, and nitrogen, from which he derived the purest form isolated to that time. (Elemental titanium was later isolated in completely pure form in 1910, by Matthew A. Hunter.) He also developed a chemical synthesis of calcium carbide and silicon nitride.
Wöhler, working with the French chemist Henri Sainte-Claire Deville, isolated the element boron in a crystalline form. He also isolated the element silicon in a crystalline form. Crystalline forms of these two elements were previously unknown. In 1856, working with Heinrich Buff, Wöhler prepared the inorganic compound silane (SiH4). He prepared the first samples of boron nitride by melting together boric acid and potassium cyanide. He also developed a method for the preparation of calcium carbide.
Wöhler had an interest in the chemical composition of meteorites. He showed that some meteoric stones contain organic matter. He analyzed meteorites, and for many years wrote the digest on the literature of meteorites in the "Jahresberichte über die Fortschritte der Chemie". Wöhler accumulated the best private collection of meteoric stones and irons then existing.
In 1832, lacking his own laboratory facilities at Kassel, Wöhler worked with Justus Liebig in his Giessen laboratory.
In 1834, Wöhler and Liebig published an investigation of the oil of bitter almonds. Through their detailed analysis of the chemical composition of this oil, they proved by their experiments that a group of carbon, hydrogen, and oxygen atoms can behave chemically as if it were the equivalent of a single atom, can take the place of an atom in a chemical compound, and can be exchanged for other atoms in chemical compounds. Specifically, in their research on the oil of bitter almonds, they showed that a group of elements with chemical composition C7H5O can be thought of as a single functional group, which came to be known as a benzoyl radical. In this way, the investigations of Wöhler and Liebig established a new concept in organic chemistry referred to as compound radicals, a concept which had a profound influence on the development of organic chemistry. Many more such functional groups were later identified by subsequent investigators with wide utility in chemistry.
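As a concrete illustration of the compound-radical idea (standard chemistry, not spelled out in the source): the benzoyl group C6H5CO, whose composition is C7H5O, passes unchanged through the derivatives of bitter-almond oil,

$$\mathrm{C_6H_5CO{-}H}\ \text{(benzaldehyde)}, \qquad \mathrm{C_6H_5CO{-}OH}\ \text{(benzoic acid)}, \qquad \mathrm{C_6H_5CO{-}Cl}\ \text{(benzoyl chloride)}$$

with only the attached atom or group exchanged from compound to compound.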
Liebig and Wöhler explored the concept of chemical isomerism, the idea that two chemical compounds with identical chemical compositions could in fact be different substances because of different arrangements of the atoms in the chemical structure. Aspects of chemical isomerism had originated in the research of Berzelius. Liebig and Wöhler investigated silver fulminate and silver cyanate. These two compounds have the same chemical composition, yet are chemically different. Silver fulminate is explosive, while silver cyanate is a stable compound. Liebig and Wöhler recognized these as being examples of structural isomerism, which was a significant advance in the understanding of chemical isomerism.
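In modern notation (a textbook fact, not from the source), both silver salts share the composition AgCNO; the difference lies in how the atoms of the anion are linked:

$$\text{fulminate ion: } [\mathrm{C{\equiv}N{-}O}]^{-} \qquad \text{cyanate ion: } [\mathrm{N{=}C{=}O}]^{-}$$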
Wöhler has also been regarded as a pioneer in organic chemistry as a result of his 1828 demonstration of the laboratory synthesis of urea from ammonium cyanate, in a chemical reaction that came to be known as the "Wöhler synthesis". Urea and ammonium cyanate are further examples of structural isomers of chemical compounds. Heating ammonium cyanate converts it into urea, which is its isomer. In a letter to Swedish chemist Jöns Jacob Berzelius the same year, he wrote, 'In a manner of speaking, I can no longer hold my chemical water. I must tell you that I can make urea without the use of kidneys of any animal, be it man or dog.'
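The reaction itself, as standardly written, is an isomerization of ammonium cyanate on heating (both sides have the composition CH4N2O):

$$\mathrm{NH_4OCN} \;\xrightarrow{\ \Delta\ }\; \mathrm{CO(NH_2)_2}$$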
Wöhler's demonstration of urea synthesis has become regarded as a refutation of vitalism, the hypothesis that living things are alive because of some special "vital force".
It was the beginning of the end for one popular vitalist hypothesis, the idea that "organic" compounds could be made only by living things.
In responding to Wöhler, Jöns Jakob Berzelius clearly acknowledged that Wöhler's results were highly significant for the understanding of organic chemistry, calling the findings a "jewel" for Wöhler's "laurel wreath". Both scientists also recognized the work's importance to the study of isomerism, a new area of research.
Wöhler's role in overturning vitalism has at times been exaggerated. This tendency can be traced back to Hermann Kopp's "History of Chemistry" (in four volumes, 1843–1847), which emphasized the importance of Wöhler's research as a refutation of vitalism but ignored its importance to the understanding of chemical isomerism, setting a tone for subsequent writers.
The notion that Wöhler single-handedly overturned vitalism also gained popularity after it appeared in a popular history of chemistry published in 1931, which, "ignoring all pretense of historical accuracy, turned Wöhler into a crusader".
Wöhler's discoveries had significant influence on the theoretical basis of chemistry. The journals of every year from 1820 to 1881 contain original scientific contributions from him. The "Scientific American" supplement for 1882 stated that "for two or three of his researches he deserves the highest honor a scientific man can obtain, but the sum of his work is absolutely overwhelming. Had he never lived, the aspect of chemistry would be very different from what it is now".
Wöhler's notable research students included chemists Georg Ludwig Carius, Heinrich Limpricht, Rudolph Fittig, Adolph Wilhelm Hermann Kolbe, Albert Niemann, Vojtěch Šafařík, Wilhelm Kühne and Augustus Voelcker.
Wöhler was elected a Fellow of the Royal Society of London in 1854. He was an Honorary Fellow of the Royal Society of Edinburgh.
"The Life and Work of Friedrich Wöhler (1800–1882)" (2005) by Robin Keen is considered to be "the first detailed scientific biography" of Wöhler.
Friedrich Wöhler was first married to his cousin Franziska Maria Wöhler (b. 25 September 1811 in Kassel) in Kassel on 1 June 1830. The couple had two children, a boy (August, b. 22 May 1831 in Berlin) and a girl named Sophie (b. 1 June 1832 in Kassel). After the death of Franziska (11 June 1832 in Kassel) he married Julie Pfeiffer (b. 13 July 1813 in Kassel) on 16 July 1834 in Kassel. The couple had four daughters (Fanny, Helene, Emilie and Pauline).
| https://en.wikipedia.org/wiki?curid=10777 |
Funk
Funk is a music genre that originated in African-American communities in the mid-1960s when musicians created a rhythmic, danceable new form of music through a mixture of soul music, jazz, and rhythm and blues (R&B). Funk de-emphasizes melody and chord progressions and focuses on a strong rhythmic groove of a bassline played by an electric bassist and a drum part played by a drummer, often at slower tempos than other popular music. Like much African-inspired music, funk typically consists of a complex groove with rhythm instruments playing interlocking parts that create a "hypnotic" and "danceable" feel. Funk uses the same richly colored extended chords found in bebop jazz, such as minor chords with added sevenths and elevenths, or dominant seventh chords with altered ninths and thirteenths.
Funk originated in the mid-1960s, with James Brown's development of a signature groove that emphasized the downbeat—with heavy emphasis on the first beat of every measure ("The One"), and the application of swung 16th notes and syncopation on all basslines, drum patterns, and guitar riffs. Other musical groups, including Sly and the Family Stone, The Meters, and Parliament-Funkadelic, soon began to adopt and develop Brown's innovations. Notable funk women include Chaka Khan, Marva Whitney, Lyn Collins, Brides of Funkenstein, Vicki Anderson, Anna King (The JB's singer), and Parlet.
Funk derivatives include the psychedelic funk of Sly Stone and George Clinton; the avant-funk of groups such as Talking Heads and the Pop Group; boogie, a form of post-disco dance music; electro music, a hybrid of electronic music and funk; funk metal (e.g., Living Colour, Faith No More); G-funk, a mix of gangsta rap and funk; Timba, a form of funky Cuban popular dance music; and funk jam. Funk samples and breakbeats have been used extensively in hip hop and various forms of electronic dance music, such as house music, and Detroit techno. It is also the main influence of go-go, a subgenre associated with funk.
The word "funk" initially referred (and still refers) to a strong odor. It is originally derived from Latin "fumigare" (which means "to smoke") via Old French ""fungiere"" and, in this sense, it was first documented in English in 1620. In 1784 "funky" meaning "musty" was first documented, which, in turn, led to a sense of "earthy" that was taken up around 1900 in early jazz slang for something "deeply or strongly felt". Ethnomusicologist Portia Maultsby states that the expression "funk" comes from the Central African word ""lu-funki"" and art historian Robert Farris Thompson says the word comes from the Kikongo term ""lu-fuki""; in both proposed origins, the term refers to body odor. Thompson's proposed Kikongo origin word, "lu-fuki" is used by African musicians to praise people "for the integrity of their art" and for having "worked out" to reach their goals. Even though in white culture, the term "funk" can have negative connotations of odor or being in a bad mood ("in a funk"), in African communities, the term "funk", while still linked to body odor, had the positive sense that a musician's hard-working, honest effort led to sweat, and from their "physical exertion" came an "exquisite" and "superlative" performance.
In early jam sessions, musicians would encourage one another to "get down" by telling one another, "Now, put some "stank" on it!". At least as early as 1907, jazz songs carried titles such as "Funky". The first example is an unrecorded number by Buddy Bolden, remembered as either "Funky Butt" or "Buddy Bolden's Blues" with improvised lyrics that were, according to Donald M. Marquis, either "comical and light" or "crude and downright obscene" but, in one way or another, referring to the sweaty atmosphere at dances where Bolden's band played. As late as the 1950s and early 1960s, when "funk" and "funky" were used increasingly in the context of jazz music, the terms still were considered indelicate and inappropriate for use in polite company. According to one source, New Orleans-born drummer Earl Palmer "was the first to use the word 'funky' to explain to other musicians that their music should be made more syncopated and danceable." The style later evolved into a rather hard-driving, insistent rhythm, implying a more "carnal quality". This early form of the music set the pattern for later musicians. The music was identified as slow, sexy, loose, riff-oriented and danceable.
Like soul, funk is based on dance music, so it has a strong "rhythmic role". The sound of funk is as much based on the "spaces between the notes" as the notes that are played; as such, rests between notes are important. While there are rhythmic similarities between funk and disco, funk has a "central dance beat that's slower, sexier and more syncopated than disco", and funk rhythm section musicians add more "subtextures", complexity and "personality" onto the main beat than a programmed synth-based disco ensemble.
Before funk, most pop music was based on sequences of eighth notes, because the fast tempos made further subdivisions of the beat infeasible. The innovation of funk was that by using slower tempos, funk "created space for further rhythmic subdivision, so a bar of 4/4 could now accommodate 16 possible note placements." Specifically, by having the guitar and drums play in "motoring" sixteenth-note rhythms, it created the opportunity for the other instruments to play in a "more syncopated, broken-up style", which facilitated a move to more "liberated" basslines. Together, these "interlocking parts" created a "hypnotic" and "danceable feel".
A great deal of funk is rhythmically based on a two-celled onbeat/offbeat structure, which originated in sub-Saharan African music traditions. New Orleans appropriated the bifurcated structure from the Afro-Cuban mambo and conga in the late 1940s, and made it its own. New Orleans funk, as it was called, gained international acclaim largely because James Brown's rhythm section used it to great effect.
Funk uses the same richly colored extended chords found in bebop jazz, such as minor chords with added sevenths and elevenths, or dominant seventh chords with altered ninths. Some examples of chords used in funk are minor eleventh chords (e.g., F minor 11th); dominant seventh with added sharp ninth and a suspended fourth (e.g., C7 (#9) sus 4); dominant ninth chords (e.g., F9); and minor sixth chords (e.g., C minor 6). The six-ninth chord is used in funk (e.g., F 6/9); it is a major chord with an added sixth and ninth. In funk, minor seventh chords are more common than minor triads because minor triads were found to be too "thin"-sounding. Some of the best known and most skillful soloists in funk have jazz backgrounds. Trombonist Fred Wesley and saxophonists Pee Wee Ellis and Maceo Parker are among the most notable musicians in the funk music genre; they worked with James Brown, George Clinton and Prince.
However, unlike bebop jazz, with its complex, rapid-fire chord changes, funk virtually abandoned chord changes, creating static single chord vamps (often alternating a minor seventh chord and a related dominant seventh chord, such as A minor to D7) with melodo-harmonic movement and a complex, driving rhythmic feel. Even though some funk songs are mainly one-chord vamps, the rhythm section musicians may embellish this chord by moving it up or down a semitone or a tone to create chromatic passing chords. For example, "Play That Funky Music" (by Wild Cherry) mainly uses an E ninth chord, but it also uses F#9 and F9.
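As a rough sketch of the interval arithmetic behind the chord types mentioned above (not from the source; the note names and enharmonic spellings are simplified for illustration):

```python
# Spell extended funk chords by stacking intervals, measured in
# semitones, on top of a root note.
NOTES = ["C", "C#", "D", "Eb", "E", "F", "F#", "G", "Ab", "A", "Bb", "B"]

CHORD_FORMULAS = {
    "9":   [0, 4, 7, 10, 14],        # dominant ninth, e.g. F9
    "m11": [0, 3, 7, 10, 14, 17],    # minor eleventh, e.g. Fm11
    "7#9": [0, 4, 7, 10, 15],        # dominant seventh sharp nine
    "6/9": [0, 4, 7, 9, 14],         # six-nine chord
    "m6":  [0, 3, 7, 9],             # minor sixth
}

def spell(root: str, quality: str) -> list[str]:
    """Return the pitch names of a chord built on `root`."""
    start = NOTES.index(root)
    return [NOTES[(start + i) % 12] for i in CHORD_FORMULAS[quality]]

print(spell("E", "9"))  # ['E', 'Ab', 'B', 'D', 'F#']  (Ab here = G#)
```

Shifting the whole shape up one semitone, as in the E9/F9/F#9 passing chords of "Play That Funky Music", simply adds one to every index.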
The chords used in funk songs typically imply a Dorian or Mixolydian mode, as opposed to the major or natural minor tonalities of most popular music. Melodic content was derived by mixing these modes with the blues scale. In the 1970s, jazz music drew upon funk to create a new subgenre of jazz-funk, which can be heard in recordings by Miles Davis ("Live-Evil", "On the Corner"), and Herbie Hancock ("Head Hunters").
Funk continues the African musical tradition of improvisation, in that in a funk band, the group would typically "feel" when to change, by "jamming" and "grooving", even in the studio recording stage, which might only be based on the skeleton framework for each song. Funk uses "collective improvisation", in which musicians at rehearsals would have what was metaphorically a musical "conversation", an approach which extended to the onstage performances.
Funk creates an intense groove by using strong guitar riffs and basslines played on electric bass. Like Motown recordings, funk songs use basslines as the centerpiece of songs. Indeed, funk has been called the style in which the bassline is most prominent in the songs, with the bass playing the "hook" of the song. Early funk basslines used syncopation (typically syncopated eighth notes), but with the addition of more of a "driving feel" than in New Orleans funk, and they used blues scale notes along with the major third above the root. Later funk basslines use sixteenth note syncopation, blues scales, and repetitive patterns, often with leaps of an octave or a larger interval.
Funk basslines emphasize repetitive patterns, locked-in grooves, continuous playing, and slap and popping bass. Slapping and popping uses a mixture of thumb-slapped low notes (also called "thumped") and finger "popped" (or plucked) high notes, allowing the bass to have a drum-like rhythmic role, which became a distinctive element of funk. Notable slap and pop players include Bernard Edwards (Chic), Robert "Kool" Bell, Mark Adams (Slave), Johnny Flippin (Fatback) and Bootsy Collins. While slap and pop is important, some influential bassists who play funk, such as Rocco Prestia (from Tower of Power), did not use the approach, and instead used a typical fingerstyle method based on James Jamerson's Motown playing style. Larry Graham from Sly and the Family Stone is another influential bassist.
Funk bass has an "earthy, percussive kind of feel", in part due to the use of muted, rhythmic ghost notes (also called "dead notes"). Some funk bass players use electronic effects units to alter the tone of their instrument, such as "envelope filters" (an auto-wah effect that creates a "gooey, slurpy, quacky, and syrupy" sound) and imitate keyboard synthesizer bass tones (e.g., the Mutron envelope filter) and overdriven fuzz bass effects, which are used to create the "classic fuzz tone that sounds like old school Funk records". Other effects that are used include the flanger and bass chorus. Collins also used a Mutron Octave Divider, an octave pedal that, like the Octavia pedal popularized by Hendrix, can double a note an octave above and below to create a "futuristic and fat low-end sound".
Funk drumming creates a groove by emphasizing the drummer's "feel and emotion", which includes "occasional tempo fluctuations", the use of swing feel in some songs (e.g., "Cissy Strut" by The Meters and "I'll Take You There" by The Staple Singers, which have a half-swung feel), and less use of fills (as they can lessen the groove). Drum fills are "few and economical", to ensure that the drumming stays "in the pocket", with a steady tempo and groove. These playing techniques are supplemented by a set-up for the drum kit that often includes muffled bass drums and toms and tightly tuned snare drums. Double bass drum sounds are often produced by funk drummers with a single pedal, an approach which "accents the second note... [and] deadens the drumhead's resonance", giving a short, muffled bass drum sound.
In Tower Of Power drummer David Garibaldi's playing, there are many "ghost notes" and rim shots. A key part of the funk drumming style is using the hi-hat, with opening and closing the hi-hats during playing (to create "splash" accent effects) being an important approach. Two-handed sixteenth notes on the hi-hats, sometimes with a degree of swing feel, is used in funk.
Jim Payne states that funk drumming uses a "wide-open" approach to improvisation around rhythmic ideas from Latin music, ostinatos, that are repeated "with only slight variations", an approach which he says causes the "mesmerizing" nature of funk. Payne states that funk can be thought of as "rock played in a more syncopated manner", particularly with the bass drum, which plays syncopated eighth note and sixteenth note patterns that were innovated by Clive Williams (with Joe Tex); George Brown (with Kool & the Gang) and James "Diamond" Williams (with The Ohio Players). As with rock, the snare backbeats on beats two and four are still used in most funk (albeit with additional soft ghost notes).
Some funk bands used two drummers in shows, such as James Brown's band, the JBs. By using two drummers, the JB band was able to maintain a "solid syncopated" rhythmic sound, which contributed to the band's distinctive "Funky Drummer" rhythm.
In funk, guitarists often mix playing chords of a short duration (nicknamed "stabs") with faster rhythms and riffs. Guitarists playing rhythmic parts often play sixteenth notes, including with percussive "ghost notes". Chord extensions are favored, such as ninth chords. Typically, funk uses "two interlocking [electric] guitar parts", with a rhythm guitarist and a "tenor guitarist" who plays single notes. The two guitarists trade off their lines to create a "call-and-response, intertwined pocket." If a band only has one guitarist, this effect may be recreated by overdubbing in the studio, or, in a live show, by having a single guitarist play both parts, to the degree that this is possible.
In funk bands, guitarists typically play in a percussive style, using a picking technique called the "chank" or "chicken scratch", in which the guitar strings are pressed lightly against the fingerboard and then quickly released just enough to get a muted "scratching" sound, produced by rapid rhythmic strumming of the opposite hand near the bridge. One of the earliest examples of this technique in rhythm and blues can be heard on Johnny Otis's 1957 song "Willie and the Hand Jive", played by Jimmy Nolen, the future guitarist in James Brown's band. The technique can be broken down into three approaches: the "chika", the "chank" and the "choke". With the "chika" comes a muted sound of strings being hit against the fingerboard; the "chank" is a staccato attack done by releasing the chord with the fretting hand after strumming it; and "choking" generally uses all the strings being strummed and heavily muted.
The result of these factors was a rhythm guitar sound that seemed to float somewhere between the low-end thump of the electric bass and the cutting tone of the snare and hi-hats, with a rhythmically melodic feel that fell deep in the pocket. Guitarist Jimmy Nolen, longtime guitarist for James Brown, developed this technique. On Brown's "Give It Up or Turnit a Loose" (1969), however, Jimmy Nolen's guitar part has a bare bones tonal structure. The pattern of attack-points is the emphasis, not the pattern of pitches. The guitar is used the way that an African drum, or idiophone would be used. Nolen created a "clean, trebly tone" by using "hollow-body jazz guitars with single-coil P-90 pickups" plugged into a Fender Twin Reverb amp with the mid turned down low and the treble turned up high.
Funk guitarists playing rhythm guitar generally avoid distortion effects and amp overdrive to get a clean sound, and given the importance of a crisp, high sound, Fender Stratocasters and Telecasters were widely used for their cutting treble tone. The mids are often cut by guitarists to help the guitar sound different from the horn section, keyboards and other instruments. Given the focus on providing a rhythmic groove, and the lack of emphasis on instrumental guitar melodies and guitar solos, sustain is not sought out by funk rhythm guitarists. Funk rhythm guitarists use compressor volume-control effects to enhance the sound of muted notes, which boosts the "clucking" sound and adds "percussive excitement to funk rhythms" (an approach used by Nile Rodgers).
Guitarist Eddie Hazel from Funkadelic is notable for his solo improvisation (particularly for the solo on "Maggot Brain") and guitar riffs, the tone of which was shaped by a Maestro Fuzz-Tone FZ-1A pedal. Hazel, along with guitarist Ernie Isley of the Isley Brothers, was influenced by Jimi Hendrix's improvised, wah-wah infused solos. Ernie Isley was tutored at an early age by Hendrix, when Hendrix was a part of the Isley Brothers backing band and temporarily lived in the Isleys' household. Funk guitarists use the wah-wah sound effect along with muting the notes to create a percussive sound for their guitar riffs. The phaser effect is often used in funk and R&B guitar playing for its filter sweeping sound effect, an example being the Isley Brothers' song "Who's That Lady". Michael Hampton, another P-Funk guitarist, was able to play Hazel's virtuosic solo on "Maggot Brain", using a solo approach that added in string bends and Hendrix-style feedback.
A range of keyboard instruments are used in funk. The acoustic piano is used in funk, including in "September" by Earth, Wind & Fire and "Will It Go 'Round in Circles" by Billy Preston. The electric piano is used on songs such as Herbie Hancock's "Chameleon" (a Fender Rhodes) and "Mercy, Mercy, Mercy" by Joe Zawinul (a Wurlitzer). The clavinet is used for its percussive tone, and it can be heard in songs such as Stevie Wonder's "Superstition" and "Higher Ground" and Bill Withers' song "Use Me". The Hammond B-3 organ is used in funk, in songs such as "Cissy Strut" by The Meters and "Love the One You're With" (with Aretha Franklin singing and Billy Preston on keyboards).
Bernie Worrell's range of keyboards from his recordings with Parliament-Funkadelic demonstrates the wide range of keyboards used in funk, as they include the Hammond organ ("Funky Woman," "Hit It and Quit It," "Wars of Armageddon"); RMI electric piano ("I Wanna Know If It's Good to You?," "Free Your Mind," "Loose Booty"); acoustic piano ("Funky Dollar Bill," "Jimmy's Got a Little Bit of Bitch in Him"); clavinet ("Joyful Process," "Up for the Down Stroke," "Red Hot Mama"); Minimoog synthesizer ("Atmosphere," "Flash Light," "Aqua Boogie," "Knee Deep," "Let's Take It to the Stage"); and ARP String Ensemble synth ("Chocolate City," "Tear the Roof off the Sucker," "Undisco Kidd").
Synthesizers were used in funk both to add to the deep sound of the electric bass, or even to replace the electric bass altogether in some songs. Funk synthesizer bass, most often a Minimoog, was used because it could create layered sounds and new electronic tones that were not feasible on electric bass.
In the 1970s, funk used many of the same vocal styles that were used in African-American music in the 1960s, including singing influences from blues, gospel, jazz and doo-wop. Like these other African-American styles, funk used "[y]ells, shouts, hollers, moans, humming, and melodic riffs", along with styles such as call and response and narration of stories (like the African oral tradition approach). The call and response in funk can be between the lead singer and the band members who act as backup vocalists.
As funk emerged from soul, the vocals in funk share soul's approach; however, funk vocals tend to be "more punctuated, energetic, rhythmically percussive[,] and less embellished" with ornaments, and the vocal lines tend to resemble horn parts and have "pushed" rhythms. Funk bands such as Earth, Wind & Fire have harmony vocal parts. Songs like "Super Bad" by James Brown included "double-voice" along with "yells, shouts and screams". Funk singers used a "black aesthetic" to perform that made use of "colorful and lively exchange of gestures, facial expressions, body posture, and vocal phrases" to create an engaging performance.
The lyrics in funk music addressed issues faced by the African American community in the United States during the 1970s, which arose due to the move away from an industrial, working-class economy to an information economy, which harmed the Black working class. Funk songs by The Ohio Players, Earth, Wind & Fire, and James Brown raised issues faced by lower-income Blacks in their song lyrics, such as poor "economic conditions and themes of poor inner-city life in the black communities".
The Funkadelic song "One Nation Under A Groove" (1978) is about the challenges that Blacks overcame during the 1960s civil rights movement, and it includes an exhortation for Blacks in the 1970s to capitalize on the new "social and political opportunities" that had become available in the 1970s. The Isley Brothers song "Fight the Power" (1975) has a political message. Parliament's song "Chocolate City" (1975) metaphorically refers to Washington D.C. and other US cities that have a mainly Black population, and it draws attention to the potential power that Black voters wield and suggests that a Black President be considered in the future.
The political themes of funk songs and the aiming of the messages to a Black audience echoed the new image of Blacks that was created in Blaxploitation films, which depicted "African-America men and women standing their ground and fighting for what was right". Both funk and Blaxploitation films addressed issues faced by Blacks and told stories from a Black perspective. Another link between 1970s funk and Blaxploitation films is that many of these films used funk soundtracks (e.g., Curtis Mayfield for "Superfly"; James Brown and Fred Wesley for "Black Caesar" and War for "Youngblood").
Funk songs included metaphorical language that was understood best by listeners who were "familiar with the black aesthetic and [black] vernacular". For example, funk songs included expressions such as "shake your money maker", "funk yourself right out" and "move your boogie body". Another example is the use of "bad" in the song "Super Bad" (1970), which black listeners knew meant "good" or "great".
In the 1970s, to get around radio obscenity restrictions, funk artists would use words that sounded like non-allowed words and double entendres to get around these restrictions. For example, The Ohio Players had a song entitled "Fopp" which referred to "Fopp me right, don't you fopp me wrong/We'll be foppin' all night long...". Some funk songs used made-up words which suggested that they were "writing lyrics in a constant haze of marijuana smoke", such as Parliament's "Aqua Boogie (A Psychoalphadiscobetabioaquadooloop)", which includes words such as "bioaquadooloop". The mainstream white listener base was often not able to understand funk's lyrical messages, which contributed to funk's lack of popular music chart success with white audiences during the 1970s.
Horn section arrangements with groups of brass instruments are used in funk songs. Funk horn sections could include saxophone (often tenor sax), trumpet, trombone, and for larger horn sections, such as quintets and sextets, a baritone sax. Horn sections played "rhythmic and syncopated" parts, often with "offbeat phrases" that emphasize "rhythmic displacement". Funk song introductions are an important place for horn arrangements.
Funk horn sections performed in a "rhythmic percussive style" that mimicked the approach used by funk rhythm guitarists. Horn sections would "punctuate" the lyrics by playing in the spaces between vocals, using "short staccato rhythmic blast[s]". Notable funk horn players included Alfred "PeeWee" Ellis, trombonist Fred Wesley, and alto sax player Maceo Parker. Notable funk horn sections including the "Phoenix Horns" (with Earth, Wind & Fire), the "Horny Horns" (with Parliament), the "Memphis Horns" (with Isaac Hayes), and "MFSB" (with Curtis Mayfield).
The instruments in funk horn sections varied. If there were two brass instruments, it could be trumpet and tenor sax, trumpet and trombone, or two saxes. If there were three brass players, it could be trumpet, sax and trombone or a trumpet and two saxes. A quartet of brass instruments would often be a pair of an instrument type and two other instruments. Quintets would typically take a pair of brass instruments (saxes or trumpets), and add different high and low brass instruments. With six instruments, a brass section would typically be two pairs of brass instruments plus a trombone and baritone sax holding down the bottom end.
In bands or shows where hiring a horn section is not feasible, a keyboardist can play the horn section parts on a synthesizer with "keyboard brass patches", however, choosing an authentic-sounding synthesizer and brass patch is important. In the 2010s, with micro-MIDI synths, it may even be possible to have another instrumentalist play the keyboard brass parts, thus enabling the keyboardist to continue to comp throughout the song.
Funk bands in the 1970s adopted Afro-American fashion and style, including "Bell-bottom pants, platform shoes, hoop earring[s], Afros [hairstyles], leather vests... beaded necklaces", dashiki shirts, jumpsuits and boots. In contrast to earlier bands such as The Temptations, which wore "matching suits" and "neat haircuts" to appeal to white mainstream audiences, funk bands adopted an "African spirit" in their outfits and style. George Clinton and Parliament are known for their imaginative costumes and "freedom of dress", which included bedsheets acting as robes and capes.
The distinctive characteristics of African-American musical expression are rooted in sub-Saharan African music traditions, and find their earliest expression in spirituals, work chants/songs, praise shouts, gospel, blues, and "body rhythms" (hambone, patting juba, and ring shout clapping and stomping patterns). Funk music is an amalgam of soul music, soul jazz, R&B, and Afro-Cuban rhythms absorbed and reconstituted in New Orleans. Like other styles of African-American musical expression including jazz, soul music and R&B, funk music accompanied many protest movements during and after the Civil Rights Movement. Funk allowed everyday experiences to be expressed to challenge daily struggles and hardships fought by lower and working class communities.
Gerhard Kubik notes that with the exception of New Orleans, early blues lacked complex polyrhythms, and there was a "very specific absence of asymmetric time-line patterns (key patterns) in virtually all early twentieth century African-American music ... only in some New Orleans genres does a hint of simple time line patterns occasionally appear in the form of transient so-called 'stomp' patterns or stop-time chorus. These do not function in the same way as African time lines."
In the late 1940s this changed somewhat when the two-celled time line structure was brought into New Orleans blues. New Orleans musicians were especially receptive to Afro-Cuban influences precisely at the time when R&B was first forming. Dave Bartholomew and Professor Longhair (Henry Roeland Byrd) incorporated Afro-Cuban instruments, as well as the clave pattern and related two-celled figures in songs such as "Carnival Day" (Bartholomew 1949) and "Mardi Gras In New Orleans" (Longhair 1949). Robert Palmer reports that, in the 1940s, Professor Longhair listened to and played with musicians from the islands and "fell under the spell of Perez Prado's mambo records." Professor Longhair's particular style was known locally as "rumba-boogie".
One of Longhair's great contributions was his particular approach of adopting two-celled, clave-based patterns into New Orleans rhythm and blues (R&B). Longhair's rhythmic approach became a basic template of funk. According to Dr. John (Malcolm John "Mac" Rebennack, Jr.), the Professor "put funk into music ... Longhair's thing had a direct bearing I'd say on a large portion of the funk music that evolved in New Orleans." In his "Mardi Gras in New Orleans", the pianist employs the 2-3 clave onbeat/offbeat motif in a rumba-boogie "guajeo".
The syncopated, but straight subdivision feel of Cuban music (as opposed to swung subdivisions) took root in New Orleans R&B during this time. Alexander Stewart states: "Eventually, musicians from outside of New Orleans began to learn some of the rhythmic practices [of the Crescent City]. Most important of these were James Brown and the drummers and arrangers he employed. Brown's early repertoire had used mostly shuffle rhythms, and some of his most successful songs were 12/8 ballads (e.g. 'Please, Please, Please' (1956), 'Bewildered' (1961), 'I Don't Mind' (1961)). Brown's change to a funkier brand of soul required 4/4 metre and a different style of drumming." Stewart makes the point: "The singular style of rhythm & blues that emerged from New Orleans in the years after World War II played an important role in the development of funk. In a related development, the underlying rhythms of American popular music underwent a basic, yet generally unacknowledged transition from triplet or shuffle feel to even or straight eighth notes."
James Brown credited Little Richard's 1950s R&B road band, from New Orleans, as "the first to put the funk into the rhythm" of rock and roll. Following his temporary exit from secular music to become an evangelist in 1957, some of Little Richard's band members joined Brown and the Famous Flames, beginning a long string of hits for them in 1958. By the mid-1960s, James Brown had developed his signature groove that emphasized the downbeat—with heavy emphasis on the first beat of every measure to etch his distinctive sound, rather than the backbeat that typified African-American music. Brown often cued his band with the command "On the one!," changing the percussion emphasis/accent from the backbeat on beats two and four of traditional soul music to the downbeat on beats one and three – but with an even-note syncopated guitar rhythm (on quarter notes two and four) featuring a hard-driving, repetitive brassy swing. This one-three beat launched the shift in Brown's signature music style, starting with his 1964 hit single, "Out of Sight" and his 1965 hits, "Papa's Got a Brand New Bag" and "I Got You (I Feel Good)".
Brown's style of funk was based on interlocking, contrapuntal parts: syncopated basslines, 16th beat drum patterns, and syncopated guitar riffs. The main guitar ostinatos for "Ain't it Funky" (c. late 1960s) are an example of Brown's refinement of New Orleans funk— an irresistibly danceable riff, stripped down to its rhythmic essence. On "Ain't it Funky" the tonal structure is barebones. Brown's innovations led to him and his band becoming the seminal funk act; they also pushed the funk music style further to the forefront with releases such as "Cold Sweat" (1967), "Mother Popcorn" (1969) and "Get Up (I Feel Like Being A) Sex Machine" (1970), discarding even the twelve-bar blues featured in his earlier music. Instead, Brown's music was overlaid with "catchy, anthemic vocals" based on "extensive vamps" in which he also used his voice as "a percussive instrument with frequent rhythmic grunts and with rhythm-section patterns ... [resembling] West African polyrhythms" – a tradition evident in African-American work songs and chants. Throughout his career, Brown's frenzied vocals, frequently punctuated with screams and grunts, channeled the "ecstatic ambiance of the black church" in a secular context.
After 1965, Brown's bandleader and arranger was Alfred "Pee Wee" Ellis. Ellis credits Clyde Stubblefield's adoption of New Orleans drumming techniques, as the basis of modern funk: "If, in a studio, you said 'play it funky' that could imply almost anything. But 'give me a New Orleans beat' – you got exactly what you wanted. And Clyde Stubblefield was just the epitome of this funky drumming." Stewart states that the popular feel was passed along from "New Orleans—through James Brown's music, to the popular music of the 1970s." Concerning the various funk motifs, Stewart states that this model "...is different from a time line (such as clave and tresillo) in that it is not an exact pattern, but more of a loose organizing principle."
In a 1990 interview, Brown offered his reason for switching the rhythm of his music: "I changed from the upbeat to the downbeat ... Simple as that, really." According to Maceo Parker, Brown's former saxophonist, playing on the downbeat was at first hard for him and took some getting used to. Reflecting back to his early days with Brown's band, Parker reported that he had difficulty playing "on the one" during solo performances, since he was used to hearing and playing with the accent on the second beat.
Other musical groups picked up on the rhythms and vocal style developed by James Brown and his band, and the funk style began to grow. Dyke and the Blazers, based in Phoenix, Arizona, released "Funky Broadway" in 1967, perhaps the first record of the soul music era to have the word "funky" in the title. In 1969 Jimmy McGriff released "Electric Funk", featuring his distinctive organ over a blazing horn section. Meanwhile, on the West Coast, Charles Wright & the Watts 103rd Street Rhythm Band was releasing funk tracks beginning with its first album in 1967, culminating in the classic single "Express Yourself" in 1971. Also from the West Coast, more specifically Oakland, California, came the band Tower of Power (TOP), which formed in 1968. Their debut album "East Bay Grease", released in 1970, is considered an important milestone in funk. Throughout the 1970s, TOP had many hits, and the band helped to make funk music a successful genre, with a broader audience.
In 1970, Sly & the Family Stone's "Thank You (Falettinme Be Mice Elf Agin)" reached #1 on the charts, as did "Family Affair" in 1971. Notably, these afforded the group and the genre crossover success and greater recognition, a success that escaped comparably talented and moderately popular funk peers. The Meters defined funk in New Orleans, starting with their top ten R&B hits "Sophisticated Cissy" and "Cissy Strut" in 1969. Another group that defined funk around this time were the Isley Brothers, whose funky 1969 #1 R&B hit, "It's Your Thing", signaled a breakthrough in African-American music, bridging the gap between the jazzy sounds of Brown, the psychedelic rock of Jimi Hendrix, and the upbeat soul of Sly & the Family Stone and Mother's Finest. The Temptations, who had previously helped to define the "Motown Sound" – a distinct blend of pop-soul – adopted this new psychedelic sound towards the end of the 1960s as well. Their producer, Norman Whitfield, became an innovator in the field of psychedelic soul, creating hits with a newer, funkier sound for many Motown acts, including "War" by Edwin Starr, "Smiling Faces Sometimes" by the Undisputed Truth and "Papa Was A Rollin' Stone" by the Temptations. Motown producers Frank Wilson ("Keep On Truckin'") and Hal Davis ("Dancing Machine") followed suit. Stevie Wonder and Marvin Gaye also adopted funk beats for some of their biggest hits in the 1970s, such as "Superstition" and "You Haven't Done Nothin'", and "I Want You" and "Got To Give It Up", respectively.
A new group of musicians began to further develop the "funk rock" approach. Innovations were prominently made by George Clinton, with his bands Parliament and Funkadelic. Together, they produced a new kind of funk sound heavily influenced by jazz and psychedelic rock. The two groups shared members and are often referred to collectively as "Parliament-Funkadelic." The breakout popularity of Parliament-Funkadelic gave rise to the term "P-Funk", which referred to the music by George Clinton's bands, and defined a new subgenre. Clinton played a principal role in several other bands, including Parlet, the Horny Horns, and the Brides of Funkenstein, all part of the P-Funk conglomerate. "P-funk" also came to mean something in its quintessence, of superior quality, or "sui generis".
The 1970s were the era of highest mainstream visibility for funk music. In addition to Parliament Funkadelic, artists like Sly and the Family Stone, Rufus & Chaka Khan, Bootsy's Rubber Band, the Isley Brothers, Ohio Players, Con Funk Shun, Kool and the Gang, the Bar-Kays, Commodores, Roy Ayers, and Stevie Wonder, among others, were successful in getting radio play. Disco music owed a great deal to funk. Many early disco songs and performers came directly from funk-oriented backgrounds. Some disco music hits, such as all of Barry White's hits, "Kung Fu Fighting" by Biddu and Carl Douglas, Donna Summer's "Love To Love You Baby", Diana Ross' "Love Hangover", KC and the Sunshine Band's "I'm Your Boogie Man", "I'm Every Woman" by Chaka Khan (also known as the Queen of Funk), and Chic's "Le Freak" conspicuously include riffs and rhythms derived from funk. In 1976, Rose Royce scored a number-one hit with a purely dance-funk record, "Car Wash". Even with the arrival of disco, funk became increasingly popular well into the early 1980s.
Funk music was also exported to Africa, and it melded with African singing and rhythms to form Afrobeat. Nigerian musician Fela Kuti, who was heavily influenced by James Brown's music, is credited with creating the style and terming it "Afrobeat".
Jazz-funk is a subgenre of jazz music characterized by a strong back beat (groove), electrified sounds and an early prevalence of analog synthesizers. The integration of funk, soul, and R&B music and styles into jazz resulted in the creation of a genre whose spectrum is quite wide and ranges from strong jazz improvisation to soul, funk or disco with jazz arrangements, jazz riffs, and jazz solos, and sometimes soul vocals. Jazz-funk is primarily an American genre, where it was popular throughout the 1970s and the early 1980s, but it also achieved noted appeal on the club-circuit in England during the mid-1970s. Similar genres include soul jazz and jazz fusion, but neither entirely overlaps with jazz-funk. Notably, jazz-funk is less vocal, more arranged, and features more improvisation than soul jazz, and it retains a strong feel of groove and R&B compared with some jazz fusion production.
In the 1970s, at the same time that jazz musicians began to explore blending jazz with rock to create jazz fusion, major jazz performers began to experiment with funk. Jazz-funk recordings typically used electric bass and electric piano in the rhythm section, in place of the double bass and acoustic piano that were typically used in jazz up till that point. Pianist and bandleader Herbie Hancock was the first of many big jazz artists who embraced funk during the decade. Hancock's Headhunters band (1973) played the jazz-funk style. The Headhunters' lineup and instrumentation, retaining only wind player Bennie Maupin from Hancock's previous sextet, reflected his new musical direction. He used percussionist Bill Summers in addition to a drummer. Summers blended African, Afro-Cuban, and Afro-Brazilian instruments and rhythms into Hancock's jazzy funk sound.
"On the Corner" (1972) was jazz trumpeter-composer Miles Davis's seminal foray into jazz-funk. Like his previous works though, "On the Corner" was experimental. Davis stated that "On the Corner" was an attempt at reconnecting with the young black audience which had largely forsaken jazz for rock and funk. While there is a discernible funk influence in the timbres of the instruments employed, other tonal and rhythmic textures, such as the Indian tambora and tablas, and Cuban congas and bongos, create a multi-layered soundscape. From a musical standpoint, the album was a culmination of sorts of the recording studio-based "musique concrète" approach that Davis and producer Teo Macero (who had studied with Otto Luening at Columbia University's Computer Music Center) had begun to explore in the late 1960s. Both sides of the record featured heavy funk drum and bass grooves, with the melodic parts snipped from hours of jams and mixed in the studio.
Davis also cited the contemporary composer Karlheinz Stockhausen as a musical influence on the album.
In the 1980s, largely as a reaction against what was seen as the over-indulgence of disco, many of the core elements that formed the foundation of the P-Funk formula began to be usurped by electronic instruments, drum machines and synthesizers. Horn sections of saxophones and trumpets were replaced by synth keyboards; the horns that remained were given simplified lines, with few solos. The classic electric keyboards of funk, like the Hammond B3 organ, the Hohner Clavinet and the Fender Rhodes piano, began to be replaced by the new digital synthesizers such as the Prophet-5, Oberheim OB-X, and Yamaha DX7. Electronic drum machines such as the Roland TR-808, Linn LM-1, and Oberheim DMX began to replace the "funky drummers" of the past, and the slap and pop style of bass playing was often replaced by synth keyboard basslines. Lyrics of funk songs began to change from suggestive double entendres to more graphic and sexually explicit content.
Eric Clapton and Michael Jackson covered Yellow Magic Orchestra's "Behind the Mask". In 1980, YMO was the first band to use the TR-808 programmable drum machine, while Kraftwerk and YMO's sound influenced later electro-funk artists such as Afrika Bambaataa and Mantronix.
Rick James was the first funk musician of the 1980s to assume the funk mantle dominated by P-Funk in the 1970s. His 1981 album "Street Songs", with the singles "Give It to Me Baby" and "Super Freak", resulted in James becoming a star, and paved the way for the future direction of explicitness in funk.
Beginning in the late 1970s, Prince used a stripped-down, dynamic instrumentation similar to James's. However, Prince went on to have as much of an impact on the sound of funk as any one artist since Brown; he combined eroticism, technology, increasing musical complexity, and an outrageous image and stage show to ultimately create music as ambitious and imaginative as P-Funk. Prince formed the Time, originally conceived as an opening act for him and based on his "Minneapolis sound", a hybrid mixture of funk, R&B, rock, pop and new wave. Eventually, the band went on to define their own style of stripped-down funk based on tight musicianship and sexual themes.
Similar to Prince, other bands emerged during the P-Funk era and began to incorporate uninhibited sexuality, dance-oriented themes, synthesizers and other electronic technologies to continue to craft funk hits. These included Cameo, Zapp, the Gap Band, the Bar-Kays, and the Dazz Band, who all found their biggest hits in the early 1980s. By the latter half of the 1980s, pure funk had lost its commercial impact; however, pop artists from Michael Jackson to Duran Duran often used funk beats.
Influenced by Yellow Magic Orchestra and Kraftwerk, the American musician Afrika Bambaataa developed electro-funk, a minimalist machine-driven style of funk with his single "Planet Rock" in 1982. Also known simply as electro, this style of funk was driven by synthesizers and the electronic rhythm of the TR-808 drum machine. The single "Renegades of Funk" followed in 1983.
While funk was all but driven from the radio by slick commercial hip hop, contemporary R&B, and new jack swing, its influence continued to spread. Artists like Steve Arrington and Cameo still received major airplay and had huge global followings. Rock bands began adding elements of funk to their sound, creating new combinations of "funk rock" and "funk metal". Extreme, Red Hot Chili Peppers, Living Colour, Jane's Addiction, Prince, Primus, Urban Dance Squad, Fishbone, Faith No More, Rage Against the Machine, Infectious Grooves, and Incubus spread the approach and styles garnered from funk pioneers to new audiences in the mid-to-late 1980s and the 1990s. These bands later inspired the underground mid-1990s funkcore movement and current funk-inspired artists like Outkast, Malina Moye, Van Hunt, and Gnarls Barkley.
In the 1990s, artists like Me'shell Ndegeocello, Brooklyn Funk Essentials and the (predominantly UK-based) acid jazz movement including artists and bands such as Jamiroquai, Incognito, Galliano, Omar, Los Tetas and the Brand New Heavies carried on with strong elements of funk. However, they never came close to reaching the commercial success of funk in its heyday, with the exception of Jamiroquai whose album "Travelling Without Moving" sold about 11.5 million units worldwide. Meanwhile, in Australia and New Zealand, bands playing the pub circuit, such as Supergroove, Skunkhour and the Truth, preserved a more instrumental form of funk.
Since the late 1980s hip hop artists have regularly sampled old funk tunes. James Brown is said to be the most sampled artist in the history of hip hop, while P-Funk is the second most sampled artist; samples of old Parliament and Funkadelic songs formed the basis of West Coast G-funk.
Original beats that feature funk-styled bass or rhythm guitar riffs are also not uncommon. Dr. Dre (considered the progenitor of the G-funk genre) has freely acknowledged being heavily influenced by George Clinton's psychedelic funk: "Back in the 70s that's all people were doing: getting high, wearing Afros, bell-bottoms and listening to Parliament-Funkadelic. That's why I called my album "The Chronic" and based my music and the concepts like I did: because his shit was a big influence on my music. Very big". Digital Underground was a large contributor to the rebirth of funk in the 1990s, educating its listeners about the history of funk and its artists. George Clinton branded Digital Underground as "Sons of the P", as their second full-length release is also titled. DU's first release, Sex Packets, was full of funk samples, with the most widely known "The Humpty Dance" sampling Parliament's "Let's Play House". DU's 1996 release "Future Rhythm" was also strongly rooted in funk. Much of contemporary club dance music, drum and bass in particular, has heavily sampled funk drum breaks.
Funk is a major element of certain artists identified with the jam band scene of the late 1990s and 2000s. Phish began playing funkier jams in their sets around 1996, and 1998's "The Story of the Ghost" was heavily influenced by funk. Medeski Martin & Wood, Robert Randolph & the Family Band, Galactic, Widespread Panic, Jam Underground, Diazpora, Soulive, and Karl Denson's Tiny Universe all drew heavily from the funk tradition. Lettuce, a band of Berklee College Of Music graduates, was formed in the late 1990s as a pure-funk emergence was being felt through the jam band scene. Many members of the band including keyboardist Neal Evans went on to other projects such as Soulive or the Sam Kininger Band. Dumpstaphunk builds upon the New Orleans tradition of funk, with their gritty, low-ended grooves and soulful four-part vocals. Formed in 2003 to perform at the New Orleans Jazz & Heritage Festival, the band features keyboardist Ivan Neville and guitarist Ian Neville of the famous Neville family, with two bass players and female funk drummer Nikki Glaspie (formerly of Beyoncé Knowles's world touring band, as well as the Sam Kininger Band), who joined the group in 2011.
Since the mid-1990s the nu-funk or funk revivalist scene, centered on the deep funk collectors scene, has been producing new material influenced by the sounds of rare funk 45s. Labels include Desco, Soul Fire, Daptone, Timmion, Neapolitan, Bananarama, Kay-Dee, and Tramp. These labels often release on 45 rpm records. Although specializing in music for rare funk DJs, there has been some crossover into the mainstream music industry, such as Sharon Jones' 2005 appearance on "Late Night with Conan O'Brien". Artists who mix acid jazz, acid house, trip hop, and other genres with funk include Tom Tom Club, Brainticket, and Groove Armada, among others.
Funk has also been incorporated into modern R&B music by many female singers such as Beyoncé with her 2003 hit "Crazy in Love" (which samples the Chi-Lites' "Are You My Woman"), Mariah Carey in 2005 with "Get Your Number" (which samples "Just an Illusion" by British band Imagination), Jennifer Lopez in 2005 with "Get Right" (which samples Maceo Parker's "Soul Power '74" horn sound), Amerie with her song "1 Thing" (which samples the Meters' "Oh, Calcutta!"), and also Tamar Braxton in 2013 with "The One" (which samples "Juicy Fruit" by Mtume).
During the 2000s and early 2010s, some punk funk bands such as Out Hud and Mongolian MonkFish performed in the indie rock scene. Indie band Rilo Kiley, in keeping with their tendency to explore a variety of rockish styles, incorporated funk into their song "The Moneymaker" on the album "Under the Blacklight". Prince, with his later albums, gave a rebirth to the funk sound with songs like "The Everlasting Now", "Musicology", "Ol' Skool Company", and "Black Sweat". Particle, for instance, is part of a scene which combined the elements of digital music made with computers, synthesizers, and samples with analog instruments, sounds, and improvisational and compositional elements of funk.
From the early 1970s onwards, funk has developed various subgenres. While George Clinton and Parliament were making a harder variation of funk, bands such as Kool and the Gang, Ohio Players and Earth, Wind and Fire were making disco-influenced funk music.
Following the work of Jimi Hendrix in the late 1960s, black funk artists such as Sly and the Family Stone pioneered a style known as psychedelic funk by borrowing techniques from psychedelic rock music, including wah pedals, fuzz boxes, echo chambers, and vocal distorters, as well as blues rock and jazz. In the following years, groups such as George Clinton's Parliament-Funkadelic continued this sensibility, employing synthesizers and rock-oriented guitar work.
Funk rock (also written as "funk-rock" or "funk/rock") fuses funk and rock elements. Its earliest incarnation was heard in the late '60s through the mid-'70s by musicians such as Jimi Hendrix, Frank Zappa, Gary Wright, David Bowie, Mother's Finest, and Funkadelic on their earlier albums.
Many instruments may be incorporated into funk rock, but the overall sound is defined by a definitive bass or drum beat and electric guitars. The bass and drum rhythms are influenced by funk music but with more intensity, while the guitar can be funk-or-rock-influenced, usually with distortion. Prince, Jesse Johnson, Red Hot Chili Peppers and Fishbone are major artists in funk rock.
The term "avant-funk" has been used to describe acts who combined funk with art rock's concerns. Simon Frith described the style as an application of progressive rock mentality to rhythm rather than melody and harmony. Simon Reynolds characterized avant-funk as a kind of psychedelia in which "oblivion was to be attained not through rising above the body, rather through immersion in the physical, self loss through animalism."
Acts in the genre include German krautrock band Can, American funk artists Sly Stone and George Clinton, and a wave of early 1980s UK and US post-punk artists (including Public Image Ltd, Talking Heads, the Pop Group, Cabaret Voltaire, D.A.F., A Certain Ratio, and 23 Skidoo) who embraced black dance music styles such as disco and funk. The artists of the late 1970s New York no wave scene also explored avant-funk, influenced by figures such as Ornette Coleman. Reynolds noted these artists' preoccupations with issues such as alienation, repression and technocracy of Western modernity.
Go-go originated in the Washington, D.C. area with which it remains associated, along with other spots in the Mid-Atlantic. Inspired by singers such as Chuck Brown, the "Godfather of Go-go", it is a blend of funk, rhythm and blues, and early hip hop, with a focus on lo-fi percussion instruments and in-person jamming in place of dance tracks. As such, it is primarily a dance music with an emphasis on live audience call and response. Go-go rhythms are also incorporated into street percussion.
Boogie (or electro-funk) is an electronic music mainly influenced by funk and post-disco. The minimalist approach of boogie, consisting of synthesizers and keyboards, helped to establish electro and house music. Boogie, unlike electro, emphasizes the slapping techniques of bass guitar but also bass synthesizers. Artists include Vicky "D", Komiko, Peech Boys, Kashif, and later Evelyn King.
Electro funk is a hybrid of electronic music and funk. It essentially follows the same form as funk, and retains funk's characteristics, but is made entirely (or partially) with a use of electronic instruments such as the TR-808. Vocoders or talkboxes were commonly implemented to transform the vocals. The pioneering electro band Zapp commonly used such instruments in their music. Bootsy Collins also began to incorporate a more electronic sound on later solo albums. Other artists include Herbie Hancock, Afrika Bambaataa, Egyptian Lover, Vaughan Mason & Crew, Midnight Star and Cybotron.
Funk metal (sometimes typeset differently such as "funk-metal") is a fusion genre of music which emerged in the 1980s, as part of the alternative metal movement. It typically incorporates elements of funk and heavy metal (often thrash metal), and in some cases other styles, such as punk and experimental music. It features hard-driving heavy metal guitar riffs, the pounding bass rhythms characteristic of funk, and sometimes hip hop-style rhymes into an alternative rock approach to songwriting. A primary example is the all-African-American rock band Living Colour, who have been said to be "funk-metal pioneers" by "Rolling Stone". During the late 1980s and early 1990s, the style was most prevalent in California – particularly Los Angeles and San Francisco.
G-funk is a fusion genre of music which combines gangsta rap and funk. It is generally considered to have been invented by West Coast rappers and made famous by Dr. Dre. It incorporates multi-layered and melodic synthesizers, slow hypnotic grooves, a deep bass, background female vocals, the extensive sampling of P-Funk tunes, and a high-pitched portamento saw wave synthesizer lead. Unlike other earlier rap acts that also utilized funk samples (such as EPMD and the Bomb Squad), G-funk often used fewer, unaltered samples per song.
Timba is a form of funky Cuban popular dance music. By 1990, several Cuban bands had incorporated elements of funk and hip-hop into their arrangements, and expanded upon the instrumentation of the traditional conjunto with an American drum set, saxophones and a two-keyboard format. Timba bands like La Charanga Habanera or Bamboleo often have horns or other instruments playing short parts of tunes by Earth, Wind and Fire, Kool and the Gang or other U.S. funk bands. While many funk motifs exhibit a clave-based structure, they are created intuitively, without a conscious intent of aligning the various parts to a guide-pattern. Timba incorporates funk motifs into an overt and intentional clave structure.
Funk jam is a fusion genre of music which emerged in the 1990s. It typically incorporates elements of funk and often exploratory guitar, along with extended cross genre improvisations; often including elements of jazz, ambient, electronic, Americana, and hip hop including improvised lyrics. Phish, Soul Rebels Brass Band, Galactic, and Soulive are all examples of funk bands that play funk jam.
Despite funk's popularity in modern music, few people have examined the work of funk women. Notable funk women include Chaka Khan, Labelle, Brides of Funkenstein, Klymaxx, Mother's Finest, Lyn Collins, Betty Davis and Teena Marie. As cultural critic Cheryl Keyes explains in her essay "She Was Too Black for Rock and Too Hard for Soul: (Re)discovering the Musical Career of Betty Mabry Davis," most of the scholarship around funk has focused on the cultural work of men. She states that "Betty Davis is an artist whose name has gone unheralded as a pioneer in the annals of funk and rock. Most writing on these musical genres has traditionally placed male artists like Jimi Hendrix, George Clinton (of Parliament-Funkadelic), and bassist Larry Graham as trendsetters in the shaping of a rock music sensibility."
In "The Feminist Funk Power of Betty Davis and Renée Stout", Nikki A. Greene notes that Davis' provocative and controversial style helped her rise to popularity in the 1970s as she focused on sexually motivated, self-empowered subject matter. Furthermore, this affected the young artist's ability to draw large audiences and commercial success. Greene also notes that Davis was never made an official spokesperson or champion for the civil rights and feminist movements of the time, although more recently her work has become a symbol of sexual liberation for women of color. Davis' song "If I'm In Luck I Just Might Get Picked Up", on her self-titled debut album, sparked controversy, and was banned by the Detroit NAACP. Maureen Mahan, a musicologist and anthropologist, examines Davis' impact on the music industry and the American public in her article "They Say She's Different: Race, Gender, Genre, and the Liberated Black Femininity of Betty Davis.
Laina Dawes, the author of "What Are You Doing Here: A Black Woman's Life and Liberation in Heavy Metal", believes respectability politics is the reason artists like Davis do not get the same recognition as their male counterparts: "I blame what I call respectability politics as part of the reason the funk-rock some of the women from the '70s aren't better known. Despite the importance of their music and presence, many of the funk-rock females represented the aggressive behavior and sexuality that many people were not comfortable with."
According to Francesca T. Royster, in Rickey Vincent's book "Funk: The Music, The People, and The Rhythm of The One", he analyzes the impact of Labelle but only in limited sections. Royster criticizes Vincent's analysis of the group, stating: "It is a shame, then, that Vincent gives such minimal attention to Labelle's performances in his study. This reflects, unfortunately, a still consistent sexism that shapes the evaluation of funk music. In "Funk", Vincent's analysis of Labelle is brief—sharing a single paragraph with the Pointer Sisters in his three-page subchapter, 'Funky Women.' He writes that while 'Lady Marmalade' 'blew the lid off of the standards of sexual innuendo and skyrocketed the group's star status,' the band's 'glittery image slipped into the disco undertow and was ultimately wasted as the trio broke up in search of solo status'" (Vincent, 1996, 192). Many female artists who are considered to be in the genre of funk also share songs in the disco, soul, and R&B genres; Labelle falls into this category of women who are split among genres due to a critical view of music theory and the history of sexism in the United States.
In recent years, artists like Janelle Monáe have opened the doors for more scholarship and analysis on the female impact on the funk music genre. Monáe's style bends concepts of gender, sexuality, and self-expression in a manner similar to the way some male pioneers in funk broke boundaries. Her albums center around Afro-futuristic concepts, centering on elements of female and black empowerment and visions of a dystopian future. In his article, "Janelle Monáe and Afro-sonic Feminist Funk", Matthew Valnes writes that Monae's involvement in the funk genre is juxtaposed with the traditional view of funk as a male-centered genre. Valnes acknowledges that funk is male-dominated, but provides insight to the societal circumstances that led to this situation.
Monáe's influences include her mentor Prince, Funkadelic, Lauryn Hill, and other funk and R&B artists, but according to Emily Lordi, "[Betty] Davis is seldom listed among Janelle Monáe's many influences, and certainly the younger singer's high-tech concepts, virtuosic performances, and meticulously produced songs are far removed from Davis's proto-punk aesthetic. But... like Davis, she also is closely linked with a visionary male mentor (Prince). The title of Monáe's 2013 album, "The Electric Lady", alludes to Hendrix's "Electric Ladyland", but it also implicitly cites the coterie of women that inspired Hendrix himself: that group, called the Cosmic Ladies or Electric Ladies, was together led by Hendrix's lover Devon Wilson and Betty Davis." | https://en.wikipedia.org/wiki?curid=10778 |
Frequency
Frequency is the number of occurrences of a repeating event per unit of time. It is also referred to as temporal frequency, which emphasizes the contrast to spatial frequency and angular frequency. Frequency is measured in units of hertz (Hz) which is equal to one occurrence of a repeating event per second. The period is the duration of time of one cycle in a repeating event, so the period is the reciprocal of the frequency. For example: if a newborn baby's heart beats at a frequency of 120 times a minute (2 hertz), its period, T (the time interval between beats), is half a second (60 seconds divided by 120 beats). Frequency is an important parameter used in science and engineering to specify the rate of oscillatory and vibratory phenomena, such as mechanical vibrations, audio signals (sound), radio waves, and light.
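To make the reciprocal relationship concrete, here is a minimal Python sketch using the newborn-heartbeat figures above (the function names are invented for this example):

```python
# Frequency f (in Hz) and period T (in seconds) are reciprocals: f = 1/T.
def period_from_frequency(f_hz: float) -> float:
    return 1.0 / f_hz

def frequency_from_period(t_s: float) -> float:
    return 1.0 / t_s

beats_per_minute = 120            # the newborn-heart example above
f = beats_per_minute / 60.0       # 120 beats per minute = 2.0 Hz
print(period_from_frequency(f))   # 0.5 s between beats
print(frequency_from_period(0.5)) # 2.0 Hz
```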
For cyclical processes, such as rotation, oscillations, or waves, frequency is defined as a number of cycles per unit time. In physics and engineering disciplines, such as optics, acoustics, and radio, frequency is usually denoted by a Latin letter "f" or by the Greek letter "ν" (nu) (see e.g. Planck's formula).
The relation between the frequency, f, and the period, T, of a repeating event or oscillation is given by f = 1/T.
The SI derived unit of frequency is the hertz (Hz), named after the German physicist Heinrich Hertz. One hertz means that an event repeats once per second. If a TV has a refresh rate of 1 hertz the TV's screen will change (or refresh) its picture once per second. A previous name for this unit was cycles per second (cps). The SI unit for period is the second.
A traditional unit of measure used with rotating mechanical devices is revolutions per minute, abbreviated r/min or rpm. 60 rpm equals one hertz.
As a matter of convenience, longer and slower waves, such as ocean surface waves, tend to be described by wave period rather than frequency. Short and fast waves, like audio and radio, are usually described by their frequency instead of period. These commonly used conversions, which follow directly from T = 1/f, are listed below:

| Frequency | Period |
|---|---|
| 1 mHz (10^-3 Hz) | 1 ks (10^3 s) |
| 1 Hz | 1 s |
| 1 kHz (10^3 Hz) | 1 ms (10^-3 s) |
| 1 MHz (10^6 Hz) | 1 µs (10^-6 s) |
| 1 GHz (10^9 Hz) | 1 ns (10^-9 s) |
| 1 THz (10^12 Hz) | 1 ps (10^-12 s) |
For periodic waves in nondispersive media (that is, media in which the wave speed is independent of frequency), frequency has an inverse relationship to the wavelength, "λ" (lambda). Even in dispersive media, the frequency "f" of a sinusoidal wave is equal to the phase velocity "v" of the wave divided by the wavelength "λ" of the wave: f = v/λ.
In the special case of electromagnetic waves moving through a vacuum, then "v = c", where "c" is the speed of light in a vacuum, and this expression becomes: f = c/λ.
When waves from a monochromatic source travel from one medium to another, their frequency remains the same—only their wavelength and speed change.
Measurement of frequency can be done in the following ways:
Calculating the frequency of a repeating event is accomplished by counting the number of times that event occurs within a specific time period, then dividing the count by the length of the time period. For example, if 71 events occur within 15 seconds the frequency is: f = 71 / 15 s ≈ 4.73 Hz.
If the number of counts is not very large, it is more accurate to measure the time interval for a predetermined number of occurrences, rather than the number of occurrences within a specified time. The latter method introduces a random error into the count of between zero and one count, so on average half a count. This is called "gating error" and causes an average error in the calculated frequency of Δf = 1/(2T), or a fractional error of Δf/f = 1/(2fT), where T is the timing interval and f is the measured frequency. This error decreases with frequency, so it is generally a problem at low frequencies where the number of counts N is small.
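The counting method and its gating-error estimate can be sketched in a few lines of Python, reusing the 71-events-in-15-seconds example above (this is an illustration of the formulas in the text, not a metrology-grade implementation):

```python
# Counting N events over a gate of T seconds gives f = N / T,
# with an average gating error of 1 / (2T).
def frequency_by_counting(n_events: int, gate_s: float) -> tuple[float, float]:
    f = n_events / gate_s        # measured frequency, Hz
    df = 1.0 / (2.0 * gate_s)    # average gating error, Hz
    return f, df

f, df = frequency_by_counting(71, 15.0)    # the example from the text
print(f"f = {f:.2f} +/- {df:.3f} Hz")      # ~4.73 +/- 0.033 Hz
print(f"fractional error = {df / f:.4f}")  # 1/(2fT), largest at low f
```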
An older method of measuring the frequency of rotating or vibrating objects is to use a stroboscope. This is an intense repetitively flashing light (strobe light) whose frequency can be adjusted with a calibrated timing circuit. The strobe light is pointed at the rotating object and the frequency adjusted up and down. When the frequency of the strobe equals the frequency of the rotating or vibrating object, the object completes one cycle of oscillation and returns to its original position between the flashes of light, so when illuminated by the strobe the object appears stationary. Then the frequency can be read from the calibrated readout on the stroboscope. A downside of this method is that an object rotating at an integral multiple of the strobing frequency will also appear stationary.
Higher frequencies are usually measured with a frequency counter. This is an electronic instrument which measures the frequency of an applied repetitive electronic signal and displays the result in hertz on a digital display. It uses digital logic to count the number of cycles during a time interval established by a precision quartz time base. Cyclic processes that are not electrical in nature, such as the rotation rate of a shaft, mechanical vibrations, or sound waves, can be converted to a repetitive electronic signal by transducers and the signal applied to a frequency counter. As of 2018, frequency counters can cover the range up to about 100 GHz. This represents the limit of direct counting methods; frequencies above this must be measured by indirect methods.
Above the range of frequency counters, frequencies of electromagnetic signals are often measured indirectly by means of heterodyning (frequency conversion). A reference signal of a known frequency near the unknown frequency is mixed with the unknown frequency in a nonlinear mixing device such as a diode. This creates a heterodyne or "beat" signal at the difference between the two frequencies. If the two signals are close together in frequency the heterodyne is low enough to be measured by a frequency counter. This process only measures the difference between the unknown frequency and the reference frequency. To reach higher frequencies, several stages of heterodyning can be used. Current research is extending this method to infrared and light frequencies (optical heterodyne detection).
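The arithmetic behind heterodyne measurement is simple enough to sketch; the frequencies below are invented for illustration, and deciding on which side of the reference the unknown lies is assumed to be resolved separately (for example by nudging the reference and watching which way the beat moves):

```python
# A mixer produces a beat at the difference of the two input frequencies,
# so a counter reading the beat constrains the unknown to two candidates.
f_reference = 10_000_000_000.0  # 10 GHz reference, assumed known
f_beat = 37_500_000.0           # 37.5 MHz beat read off a frequency counter

candidates = (f_reference - f_beat, f_reference + f_beat)
print(candidates)  # (9962500000.0, 10037500000.0)
```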
Visible light is an electromagnetic wave, consisting of oscillating electric and magnetic fields traveling through space. The frequency of the wave determines its color: 4×10^14 Hz is red light, 8×10^14 Hz is violet light, and between these (in the range 4-8×10^14 Hz) are all the other colors of the visible spectrum. An electromagnetic wave can have a frequency less than 4×10^14 Hz, but it will be invisible to the human eye; such waves are called infrared (IR) radiation. At even lower frequency, the wave is called a microwave, and at still lower frequencies it is called a radio wave. Likewise, an electromagnetic wave can have a frequency higher than 8×10^14 Hz, but it will be invisible to the human eye; such waves are called ultraviolet (UV) radiation. Even higher-frequency waves are called X-rays, and higher still are gamma rays.
All of these waves, from the lowest-frequency radio waves to the highest-frequency gamma rays, are fundamentally the same, and they are all called electromagnetic radiation. They all travel through a vacuum at the same speed (the speed of light), giving them wavelengths inversely proportional to their frequencies.
where "c" is the speed of light ("c" in a vacuum, or less in other media), "f" is the frequency and λ is the wavelength.
In dispersive media, such as glass, the speed depends somewhat on frequency, so the wavelength is not quite inversely proportional to frequency.
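As a quick cross-check of the visible-spectrum band edges quoted above, the relation λ = c/f puts the red and violet limits near 750 nm and 375 nm respectively:

```python
# Wavelengths of the visible-light band edges via lambda = c / f.
c = 299_792_458.0  # speed of light in vacuum, m/s

for name, f_hz in (("red edge", 4e14), ("violet edge", 8e14)):
    print(f"{name}: {c / f_hz * 1e9:.0f} nm")  # ~750 nm and ~375 nm
```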
Sound propagates as mechanical vibration waves of pressure and displacement, in air or other substances. In general, the frequency components of a sound determine its "color", its timbre. When speaking about the frequency (in singular) of a sound, the term refers to the property that most determines pitch.
The frequencies an ear can hear are limited to a specific range. The audible frequency range for humans is typically given as being between about 20 Hz and 20,000 Hz (20 kHz), though the high-frequency limit usually reduces with age. Other species have different hearing ranges. For example, some dog breeds can perceive vibrations up to 60,000 Hz.
In many media, such as air, the speed of sound is approximately independent of frequency, so the wavelength of the sound waves (distance between repetitions) is approximately inversely proportional to frequency.
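Assuming a speed of sound in air of about 343 m/s (a common room-temperature figure, not stated in the text), the inverse relationship gives sound wavelengths across the audible range mentioned above:

```python
# Wavelength = speed / frequency for sound in air.
v_sound = 343.0  # m/s, assumed room-temperature value

for f_hz in (20.0, 440.0, 20_000.0):  # hearing-range edges plus concert A
    print(f"{f_hz:>7.0f} Hz -> {v_sound / f_hz:.3f} m")
```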
In Europe, Africa, Australia, Southern South America, most of Asia, and Russia, the frequency of the alternating current in household electrical outlets is 50 Hz (close to the tone G), whereas in North America and Northern South America, the frequency of the alternating current in household electrical outlets is 60 Hz (between the tones B♭ and B; that is, a minor third above the European frequency). The frequency of the 'hum' in an audio recording can show where the recording was made, in countries using a European, or an American, grid frequency. | https://en.wikipedia.org/wiki?curid=10779 |
Film festival
A film festival is an organized, extended presentation of films in one or more cinemas or screening venues, usually in a single city or region. Increasingly, film festivals show some films outdoors. Films may be of recent date and, depending upon the festival's focus, can include international and domestic releases. Some festivals focus on a specific film-maker or genre (e.g. film noir) or subject matter (e.g. horror film festivals). A number of film festivals specialise in short films of a defined maximum length. Film festivals are typically annual events. Some film historians, including Jerry Beck, do not consider film festivals official releases of film.
The most prestigious film festivals in the world, known as the "Big Three", are Cannes, Venice, and Berlin. The "Big Five" comprises these three plus Toronto and Sundance, which tend to present unreleased films or films that have been shown only domestically prior to their selection. The Toronto International Film Festival is North America's most popular festival in terms of attendance. The Venice Film Festival is the oldest film festival in the world.
The Venice Film Festival in Italy began in 1932 and is the oldest film festival still running. Raindance Film Festival is the UK's largest celebration of independent filmmaking and takes place in London in October.
Mainland Europe's biggest independent film festival is ÉCU The European Independent Film Festival, which started in 2006 and takes place every spring in Paris, France. Edinburgh International Film Festival is the longest-running festival in Great Britain as well as the longest continually running film festival in the world.
Australia's first and longest-running film festival is the Melbourne International Film Festival (1952), followed by the Sydney Film Festival (1954).
North America's first and longest-running short film festival is the Yorkton Film Festival, established in 1947. The first film festival in the United States was the Columbus International Film & Video Festival, also known as The Chris Awards, held in 1953. According to the Film Arts Foundation in San Francisco, "The Chris Awards [is] one of the most prestigious documentary, educational, business and informational competitions in the U.S.; [it is] the oldest of its kind in North America and celebrating its 54th year". It was followed four years later by the San Francisco International Film Festival, held in March 1957, which emphasized feature-length dramatic films. The festival played a major role in introducing foreign films to American audiences. Films in the first year included Akira Kurosawa's "Throne of Blood" and Satyajit Ray's "Pather Panchali".
Today, thousands of film festivals take place around the world—from high-profile festivals such as Sundance Film Festival and Slamdance Film Festival (Park City, Utah), to horror festivals such as Terror Film Festival (Philadelphia), and the Park City Film Music Festival, the first U.S. film festival dedicated to honoring music in film.
Film funding competitions such as Writers and Filmmakers were introduced as the cost of production dropped significantly and internet technology made collaborative film production possible.
Although there are notable for-profit festivals such as SXSW, most festivals operate on a nonprofit membership-based model, with a combination of ticket sales, membership fees, and corporate sponsorship constituting the majority of revenue. Unlike other arts nonprofits (performing arts, museums, etc.), film festivals typically receive few donations from the general public and are occasionally organized as nonprofit business associations instead of public charities. Film industry members often have significant curatorial input, and corporate sponsors are given opportunities to promote their brand to festival audiences in exchange for cash contributions. Private parties, often to raise investments for film projects, constitute significant "fringe" events. Larger festivals maintain year-round staffs often engaging in community and charitable projects outside the festival season.
While entries from established filmmakers are usually considered pluses by the organizers, most festivals require new or relatively unknown filmmakers to pay an entry fee to have their works considered for screening. This is especially so in larger film festivals, such as the Cannes Film Festival, the Jaipur International Film Festival in Jaipur, India, the Toronto International Film Festival, the Sundance Film Festival, South by Southwest, the Montreal World Film Festival, and even smaller "boutique" festivals such as the Miami International Film Festival, the British Urban Film Festival in London and the Mumbai Women's International Film Festival in India.
On the other hand, some festivals—usually those accepting fewer films, and perhaps not attracting as many "big names" in their audiences as do Sundance and Telluride—require no entry fee. Rotterdam Film Festival, Mumbai Film Festival, and many smaller film festivals in the United States (the Stony Brook Film Festival on Long Island, the Northwest Filmmakers' Festival, and the Sicilian Film Festival in Miami), are examples.
The Portland International Film Festival charges an entry fee, but waives it for filmmakers from the Northwestern United States, and some others with regional focuses have similar approaches.
Several film festival submission portal websites exist to streamline filmmakers' entries into multiple festivals. They provide databases of festival calls for entry and offer filmmakers a convenient "describe once, submit many" service.
The core tradition of film festivals is competition, that is, the consideration of films with the intention of judging which are most deserving of various forms of recognition. In contrast to those films, some festivals may screen (i.e., project onto a movie screen before an audience) some films without treating them as part of the competition; the films are said to be "screened out..." (or "outside...") "of competition".
The three major film festivals in Europe are considered to be Cannes, Venice, and Berlin, and have been celebrated by the Three Colours trilogy (Blue for Venice, White for Berlin, and Red for Cannes). The most prestigious film festivals in North America are Sundance and Toronto.
In North America, the Toronto International Film Festival is North America's most popular festival. "Time" wrote it had "grown from its place as the most influential fall film festival to the most influential film festival, period". The Seattle International Film Festival is credited as being the largest film festival in the US, regularly showing over 400 films in a month across the city. The Tribeca Film Festival, South by Southwest, New York Film Festival, Woodstock Film Festival, Montreal World Film Festival, and Vancouver International Film Festival are also major North American festivals.
The festivals in Berlin, Cairo, Cannes, Goa, Karlovy Vary, Locarno, Mar del Plata, Montreal, Moscow, San Sebastián, Shanghai, Tallinn, Tokyo, Venice, and Warsaw are accredited by the International Federation of Film Producers Associations (FIAPF) in the category of competitive feature films. As a rule, competing films must premiere at the festival and must not have been shown at any other venue beforehand.
Ann Arbor Film Festival started in 1963. It is the oldest continually operated experimental film festival in North America, and has become one of the premier film festivals for independent and, primarily, experimental filmmakers to showcase work.
In the U.S., the Telluride Film Festival, Sundance Film Festival, Austin Film Festival, Austin's South by Southwest, New York City's Tribeca Film Festival, the London Eco-Film Festival and Slamdance Film Festival are all considered significant festivals for independent film. The Zero Film Festival is significant as the first and only festival exclusive to self-financed filmmakers. The biggest independent film festival in the UK is Raindance Film Festival.
The British Urban Film Festival (which specifically caters for Black and minority interests) was officially recognized in the 2020 New Year Honours list.
A few film festivals have focused on highlighting specific issues or subjects. These festivals have included both mainstream and independent films. Some examples include military film festivals, health-related film festivals, and human rights film festivals.
There are festivals, especially in the US, that highlight and promote films made by or about various ethnic groups and nationalities, or that feature the cinema of a specific foreign country. These include festivals focused on Latino-American, Arab, Italian, German, French, and Palestinian cinema, among others. The Deauville American Film Festival in France is devoted to the cinema of the United States.
LGBTQ film festivals, among others, are also popular.
The San Francisco International Film Festival, founded by Irving "Bud" Levin in 1957, is the oldest continuously running annual film festival in the United States. It highlights current trends in international filmmaking and video production with an emphasis on work that has not yet secured U.S. distribution.
The Toronto International Film Festival, founded by Bill Marshall, Henk Van der Kolk and Dusty Cohl, is regarded as North America's most important and prestigious film festival, and is the most widely attended.
The Sundance Film Festival, founded by Sterling Van Wagenen (then head of Wildwood, Robert Redford's company), John Earle, and Cirina Hampton Catania (both serving on the Utah Film Commission at the time), is a major festival for independent film.
The Woodstock Film Festival was launched in 2000 by filmmakers Meira Blaustein and Laurent Rejto with the goal to bring high quality independent film to the Hudson Valley region of New York. Indiewire has named the Woodstock Film Festival among the top 50 independent film festivals worldwide.
The Regina International Film Festival and Awards (RIFFA), founded by John Thimothy, is one of the leading international film festivals in western Canada (Regina, Saskatchewan); it represented 35 countries in its 2018 edition. RIFFA's annual award show and red carpet arrival event has been gaining notice in the contemporary film and fashion industries in Western Canada.
Toronto's Hot Docs, founded by filmmaker Paul Jay, is the leading North American documentary film festival. Toronto has the largest number of film festivals in the world, ranging from cultural and independent festivals to showcases of historic films.
The Seattle International Film Festival, which screens 270 features and approximately 150 short films, is the largest American film festival in terms of the number of feature productions.
The Cartagena Film Festival, founded by Victor Nieto in 1960, is the oldest film festival in Latin America. The Festival de Gramado (or Gramado Film Festival) in Gramado, Brazil, along with the Guadalajara International Film Festival in Guadalajara, the Morelia International Film Festival in Morelia, Michoacan, Mexico, and the Los Cabos International Film Festival, founded by Scott Cross, Sean Cross, and Eduardo Sanchez Navarro in Los Cabos, Baja California Sur, Mexico, are considered the most important film festivals of Latin America. In 2015, Variety called the Los Cabos International Film Festival the "Cannes of Latin America". The Huelva Ibero-American Film Festival has been held since 1975 in that Spanish city.
The Expresión en Corto International Film Festival is the largest competitive film festival in Mexico. It specializes in emerging talent, and is held in the last week of each July in the two colonial cities of San Miguel de Allende and Guanajuato. Another notable Mexican event is the Oaxaca Film Fest. For Spanish-speaking countries, the Dominican International Film Festival takes place annually in Puerto Plata, Dominican Republic. The Valdivia International Film Festival is held annually in the city of Valdivia and is arguably the most important film festival in Chile. There is also Filmambiente, held in Rio de Janeiro, Brazil, an international festival on environmental films and videos.
The Havana Film Festival was founded in 1979 and is the oldest continuous annual film festival in the Caribbean. Its focus is on Latin American cinema.
The Trinidad and Tobago Film Festival, founded in 2006, is dedicated to screening the newest films from the English-, Spanish-, French- and Dutch-speaking Caribbean, as well as the region's diaspora. It also seeks to facilitate the growth of Caribbean cinema by offering a wide-ranging industry programme and networking opportunities.
The Lusca Fantastic Film Fest (formerly Puerto Rico Horror Film Fest) was also founded in 2006 and is the first and only international fantastic film festival in the Caribbean devoted to Sci-Fi, Thriller, Fantasy, Dark Humor, Bizarre, Horror, Anime, Adventure, Virtual Reality and Animation in Short and Feature Films.
Many film festivals are dedicated exclusively to animation.
A variety of regional festivals happen in various countries. Austin Film Festival is accredited by the Academy of Motion Picture Arts & Sciences, which makes all their jury award-winning narrative short and animated short films eligible for an Academy Award.
The International Film Festival of India, organized by the government of India, was founded in 1952. The Kolkata International Film Festival, founded in 1995, is the third oldest international film festival in India. The International Film Festival of Kerala, organised by the Government of Kerala and held annually at Thiruvananthapuram, is acknowledged as one of the leading cultural events in India.
The International Documentary and Short Film Festival of Kerala (IDSFFK), hosted by the Kerala State Chalachitra Academy, is a major documentary and short film festival.
Other notable festivals in India include the Osian's-Cinefan: Festival of Asian Cinema at New Delhi, which recently expanded to include Arab Cinema, Chennai Women's International Film Festival (CWIFF), the Annual Mumbai Film Festival in India, with its US$200,000 cash prize (www.mumbaifilmfest.com), and Mumbai Women's International Film Festival (MWIFF), an annual film festival in Mumbai featuring films made by women directors and women technicians.
Notable festivals include the Hong Kong International Film Festival (HKIFF), Busan International Film Festival (BIFF), and Kathmandu International Mountain Film Festival.
There are several significant film festivals held regularly in Africa. The biennial Panafrican Film and Television Festival of Ouagadougou (FESPACO) in Burkina Faso was established in 1969 and accepts for competition only films by African filmmakers, chiefly produced in Africa. The annual Durban International Film Festival in South Africa and Zanzibar International Film Festival in Tanzania have grown in importance for the film and entertainment industry, as they often screen the African premieres of many international films.
The Sahara International Film Festival, held annually in the Sahrawi refugee camps in western Algeria near the border of Western Sahara, is notable as the only film festival in the world to take place in a refugee camp. The festival has the two-fold aim of providing cultural entertainment and educational opportunities to refugees, and of raising awareness of the plight of the Sahrawi people, who have been exiled from their native Western Sahara for more than three decades.
The most important European film festivals are Venice Film Festival (late summer to early autumn), Cannes Film Festival (late spring to early summer) and Berlin International Film Festival (late winter to early spring), founded in 1932, 1946 and 1951 respectively. | https://en.wikipedia.org/wiki?curid=10781 |
History of film
Although the start of the history of film is not clearly defined, the commercial, public screening of ten of the Lumière brothers' short films in Paris on 28 December 1895 can be regarded as the breakthrough of projected cinematographic motion pictures. There had been earlier cinematographic results and screenings by others, but they lacked either the quality, financial backing, stamina or luck to find the momentum that propelled the cinématographe Lumière into a worldwide success.
Soon film production companies and studios were established all over the world. The first decade of motion pictures saw film moving from a novelty to an established mass entertainment industry. The earliest films were in black and white, under a minute long, without recorded sound, and consisted of a single shot from a steady camera.
Conventions toward a general cinematic language developed over the years with editing, camera movements and other cinematic techniques contributing specific roles in the narrative of films.
Special effects became a feature of movies from the late 1890s onward, popularized by Georges Méliès' fantasy films. Many effects were impossible or impractical to perform in theater plays and thus added more magic to the experience of movies.
Technical improvements added length (reaching 60 minutes for a feature film in 1906), synchronized sound recording (mainstream since the end of the 1920s), color (mainstream since the 1930s) and 3D (mainstream in theaters in the early 1950s and since the 2000s). Sound ended the need for the interruptions of title cards, revolutionized the narrative possibilities for filmmakers, and became an integral part of moviemaking.
Popular new media, including television (mainstream since the 1950s), home video (mainstream since the 1980s) and internet (mainstream since the 1990s) influenced the distribution and consumption of films. Film production usually responded with content to fit the new media, and with technical innovations (including widescreen (mainstream since the 1950s), 3D and 4D film) and more spectacular films to keep theatrical screenings attractive.
Systems that were cheaper and more easily handled (including 8mm film, video and smartphone cameras) allowed for an increasing number of people to create films in varying qualities, for any purpose (including home movies and video art). The technical quality usually differed from professional movies, but became more or less equal with digital video and affordable high quality digital cameras.
Improving over time, digital production methods became more and more popular during the 1990s, resulting in increasingly realistic visual effects and popular feature-length computer animations.
Different film genres emerged and enjoyed variable degrees of success over time, with huge differences between for instance horror films (mainstream since the 1890s), newsreels (prevalent in U.S. cinemas between the 1910s and the late 1960s), musicals (mainstream since the late 1920s) and pornographic films (experiencing a Golden Age during the 1970s).
Film as an art form has drawn on several earlier traditions in fields such as (oral) storytelling, literature, theatre and the visual arts. Forms of art and entertainment that had already featured moving and/or projected images include:
Some ancient sightings of gods and spirits may have been conjured up by means of (concave) mirrors, camera obscura or unknown projectors. By the 16th century necromantic ceremonies and the conjuring of ghostly apparitions by charlatan "magicians" and "witches" seemed commonplace. The very first magic lantern shows seem to have continued this tradition with images of death, monsters and other scary figures. Around 1790 this was developed into multi-media ghost shows known as phantasmagoria that could feature mechanical slides, rear projection, mobile projectors, superimposition, dissolves, live actors, smoke (sometimes to project images upon), odors, sounds and even electric shocks. While the first magic lantern images seem to have been intended to scare audiences, soon all sorts of subjects appeared and the lantern was not only used for storytelling but also for education. In the 19th century several new and popular magic lantern techniques were developed, including dissolving views and several types of mechanical slides that created dazzling abstract effects (chromatrope, etc.) or that showed for instance falling snow, or the planets and their moons revolving.
Early photographic sequences, known as chronophotography, can be regarded as early motion picture recordings that could not yet be presented as moving pictures. Beginning in 1878, Eadweard Muybridge made hundreds of chronophotographic studies of the motion of animals and humans in real time, soon followed by other chronophotographers like Étienne-Jules Marey, Georges Demenÿ and Ottomar Anschütz. Chronophotography was usually regarded as a serious, even scientific, method of studying motion and almost exclusively involved humans or animals performing a simple movement in front of the camera. Soon after Muybridge published his first results as The Horse in Motion cabinet cards, people put the silhouette-like photographic images in zoetropes to watch them in motion. Most sequences could later be animated into very short films with fluent motion (the footage can often be presented as a loop that repeats the motion seamlessly).
It is estimated that 80 to 90 percent of all silent films are lost. According to research by the Library of Congress, only 14 percent of the 10,919 silent feature films released by major American studios between 1912 and 1929 survive in their original format, and another 11 percent survive only as full-length foreign versions or in formats of lesser image quality. Movie catalogs, reviews and other documentation can provide some details on lost films, but much of early movie history will forever remain incomplete. Although larger trends and developments may have been properly perceived and documented, many details only came to be of interest much later and can be hard to trace. Many specific "firsts" and other details may not have seemed important at the time, so evidence can only be found in the 10 to 20 percent of films that have survived (with few titles readily available for study), or, much less reliably, in contemporary written sources and later accounts.
In the 1890s, films were seen mostly in temporary storefront spaces, via traveling exhibitors or as acts in vaudeville programs. A film could be under a minute long and would usually present a single scene, authentic or staged, of everyday life, a public event, a sporting event or slapstick. There was little to no cinematic technique: films were usually black and white and without sound.
The novelty of realistic moving photographs was enough for a motion picture industry to blossom before the end of the century, in countries around the world. "The Cinema" was to offer a relatively cheap and simple way of providing entertainment to the masses. Filmmakers could record actors' performances, which then could be shown to audiences around the world. Travelogues would bring the sights of far-flung places, with movement, directly to spectators' hometowns. Movies would become the most popular visual art form of the late Victorian age.
The Berlin Wintergarten theater hosted an early movie presentation in front of an audience, shown by the Skladanowsky brothers in 1895. The Melbourne Athenaeum started to screen movies in 1896. Movie theaters became popular entertainment venues and social hubs in the early 20th century, much like cabarets and other theaters.
Until 1927, most motion pictures were produced without sound. This era is referred to as the silent era of film. To enhance the viewers' experience, silent films were commonly accompanied by live musicians in an orchestra, a theatre organ, and sometimes sound effects and even commentary spoken by the showman or projectionist. In most countries, intertitles came to be used to provide dialogue and narration for the film, thus dispensing with narrators, but in Japanese cinema human narration remained popular throughout the silent era. The technical problems of synchronizing recorded sound with film were largely resolved by 1923, though it took several more years for sound films to become commercially practical.
Illustrated songs were a notable exception to this trend; they began in 1894 in vaudeville houses and persisted into the late 1930s in film theaters. Live performance or sound recordings were paired with hand-colored glass slides projected through stereopticons and similar devices. In this way, song narrative was illustrated through a series of slides whose changes were simultaneous with the narrative development. The main purpose of illustrated songs was to encourage sheet music sales, and they were highly successful, with sales reaching into the millions for a single song. Later, with the birth of film, illustrated songs were used as filler material preceding films and during reel changes.
The 1914 "The Photo-Drama of Creation" was a non-commercial attempt to combine the motion picture with a combination of slides and synchronize the resulting moving picture with audio. The film included hand-painted slides as well as other previously used techniques. Simultaneously playing the audio while the film was being played with a projector was required. Produced by the Watch Tower Bible and Tract Society of Pennsylvania (Jehovah's Witnesses), this eight–hour bible drama was being shown in 80 cities every day and almost eight million people in the United States and Canada saw the presentation.
Within eleven years of the first motion pictures, film moved from a novelty show to an established large-scale entertainment industry. Films moved from a single shot, completely made by one person with a few assistants, toward films several minutes long consisting of several shots, which were made by large companies in something like industrial conditions.
By 1900, the first motion pictures that can be considered "films" emerged, and film-makers began to introduce basic editing techniques and film narrative.
Early movie cameras were fastened to the head of a tripod with only simple levelling devices provided. These cameras were effectively fixed during the course of a shot, and the first camera movements were the result of mounting a camera on a moving vehicle. The Lumière brothers shot a scene from the back of a train in 1896.
The first rotating camera for taking panning shots was built by Robert W. Paul in 1897, on the occasion of Queen Victoria's Diamond Jubilee. He used his camera to shoot the procession in one shot. His device had the camera mounted on a vertical axis that could be rotated by a worm gear driven by turning a crank handle, and Paul put it on general sale the next year. Shots taken using such a "panning" head were also referred to as 'panoramas' in the film catalogues.
Georges Méliès built one of the first film studios in May 1897. It had a glass roof and three glass walls constructed after the model of large studios for still photography, and it was fitted with thin cotton cloths that could be stretched below the roof to diffuse the direct rays of the sun on sunny days. Beginning in 1896, Méliès would go on to produce, direct, and distribute over 500 short films. The majority of these films were short, one-shot films completed in one take. Méliès drew many comparisons between film and the stage, which was apparent in his work. He realized that film afforded him the ability (via his use of time lapse photography) to "produce visual spectacles not achievable in the theater".
"The Execution of Mary Stuart", produced by the Edison Company for viewing with the Kinetoscope, showed Mary Queen of Scots being executed in full view of the camera. The effect was achieved by replacing the actor with a dummy for the final shot. Georges Méliès also utilized this technique in the making of "Escamotage d'un dame chez Robert-Houdin (The Vanishing Lady)". The woman is seen to vanish through the use of stop motion techniques.
The other basic technique for trick cinematography was the double exposure of the film in the camera. This was pioneered by George Albert Smith in July 1898 in England. The set was draped in black, and after the main shot, the negative was re-exposed to the overlaid scene. His "The Corsican Brothers" was described in the catalogue of the Warwick Trading Company in 1900: "By extremely careful photography the ghost appears quite transparent. After indicating that he has been killed by a sword-thrust, and appealing for vengeance, he disappears. A 'vision' then appears showing the fatal duel in the snow."
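In digital terms, double exposure corresponds to light from two exposures accumulating on the same frame. The following minimal Python sketch (an illustration of the principle, not a description of any historical apparatus; the frame values are hypothetical) shows why a figure shot against black drapes reads as a transparent "ghost":

```python
import numpy as np

def double_expose(first, second):
    """Simulate exposing the same frame twice: light accumulates,
    so the result is the clipped sum of the two exposures
    (brightness values assumed to lie in [0, 1])."""
    return np.clip(first + second, 0.0, 1.0)

# Hypothetical 1x3 frames: a lit set, and a "ghost" actor shot
# against black drapes (black areas contribute nothing to the sum).
scene = np.array([[0.6, 0.6, 0.6]])
ghost = np.array([[0.0, 0.4, 0.0]])
print(double_expose(scene, ghost))  # [[0.6 1.  0.6]] -- the ghost adds light only where it is bright
```

Because the set behind the "ghost" still contributes its own light to every pixel, the superimposed figure never fully covers it, producing the transparency the catalogue describes.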
G.A. Smith also initiated the special effects technique of reverse motion. He did this by repeating the action a second time, while filming it with an inverted camera, and then joining the tail of the second negative to that of the first. The first films made using this device were "Tipsy, Topsy, Turvy" and "The Awkward Sign Painter". The earliest surviving example of this technique is Smith's "The House That Jack Built", made before September 1900.
Cecil Hepworth took this technique further, by printing the negative of the forwards motion backwards frame by frame, so producing a print in which the original action was exactly reversed. To do this he built a special printer in which the negative running through a projector was projected into the gate of a camera through a special lens giving a same-size image. This arrangement came to be called a "projection printer", and eventually an "optical printer".
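The digital analogue of Hepworth's frame-by-frame reverse printing is simply emitting the frames in the opposite order; a trivial sketch (the frame names are hypothetical):

```python
def reverse_motion(frames):
    """Return the frames in reverse order, so the action plays
    backwards -- the same result Hepworth's projection printer
    produced photographically, one frame at a time."""
    return list(reversed(frames))

clip = ["frame_001", "frame_002", "frame_003"]  # hypothetical frame list
print(reverse_motion(clip))  # ['frame_003', 'frame_002', 'frame_001']
```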
The use of different camera speeds also appeared around 1900 in the films of Robert W. Paul and Hepworth. Paul shot scenes from "On a Runaway Motor Car through Piccadilly Circus" (1899) with the camera turning very slowly. When the film was projected at the usual 16 frames per second, the scenery appeared to be passing at great speed. Hepworth used the opposite effect in "The Indian Chief and the Seidlitz Powder" (1901): the Chief's movements are slowed down by cranking the camera much faster than 16 frames per second, giving what we would now call a "slow motion" effect.
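The arithmetic behind both effects is simply the ratio of the two frame rates: if the camera runs at \(f_c\) frames per second and the projector at \(f_p\), then

$$\text{apparent speed factor} = \frac{f_p}{f_c},$$

so cranking at 8 frames per second and projecting at the usual 16 makes motion appear twice as fast, while cranking at 32 frames per second makes it appear at half speed.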
The first films to consist of more than one shot appeared toward the end of the 19th century. A notable example was the French film of the life of Jesus Christ, "La vie du Christ (The Birth, the Life and the Death of Christ)", by Alice Guy. These were not presented as a continuous film; the separate scenes were interspersed with lantern slides, a lecture, and live choral numbers to increase the running time of the spectacle to about 90 minutes. Another example is the reproductions of scenes from the Greco-Turkish war, made by Georges Méliès in 1897. Although each scene was sold separately, they were shown one after the other by the exhibitors. Even Méliès' "Cendrillon (Cinderella)" of 1898 contained no action moving from one shot to the next. To understand what was going on in the film, the audience had to know the stories beforehand, or be told them by a presenter.
Real film continuity, involving action moving from one sequence into another, is attributed to British film pioneer Robert W. Paul's "Come Along, Do!", made in 1898 and one of the first films to feature more than one shot. In the first shot, an elderly couple is having lunch outside an art exhibition and then follows other people inside through the door. The second shot shows what they do inside. Paul's 'Cinematograph Camera No. 1' of 1895 was the first camera to feature reverse-cranking, which allowed the same film footage to be exposed several times and thereby to create super-positions and multiple exposures. This technique was first used in his 1901 film "Scrooge, or, Marley's Ghost".
The further development of action continuity in multi-shot films continued in 1899 at the Brighton School in England. In the latter part of that year, George Albert Smith made "The Kiss in the Tunnel". This started with a shot from a "phantom ride" at the point at which the train goes into a tunnel, and continued with the action on a set representing the interior of a railway carriage, where a man steals a kiss from a woman, and then cuts back to the phantom ride shot when the train comes out of the tunnel. A month later, the Bamforth company in Yorkshire made a restaged version of this film under the same title, and in this case they filmed shots of a train entering and leaving a tunnel from beside the tracks, which they joined before and after their version of the kiss inside the train compartment.
In 1900, continuity of action across successive shots was definitively established by George Albert Smith and James Williamson, who also worked in Brighton. In that year Smith made "As Seen Through a Telescope", in which the main shot shows a street scene with a young man tying the shoelace and then caressing the foot of his girlfriend, while an old man observes this through a telescope. There is then a cut to a close shot of the hands on the girl's foot shown inside a black circular mask, and then a cut back to the continuation of the original scene. Even more remarkable is James Williamson's "Attack on a China Mission Station" (1900). The first shot shows Chinese Boxer rebels at the gate; it then cuts to the missionary family in the garden, where a fight ensues. The wife signals to British sailors from the balcony, who come and rescue them. The film also used the first "reverse angle" cut in film history.
G.A. Smith pioneered the use of the close-up shot in his 1900 films "As Seen Through a Telescope" and "Grandma's Reading Glass". He further developed the idea of breaking a scene shot in one place into a series of shots taken from different camera positions over the next couple of years, starting with "The Little Doctors" of 1901. In a series of films he produced at this time, he also introduced the use of subjective and objective point-of-view shots, the creation of dream-time and the use of reversing. He summed up his work in "Mary Jane's Mishap" of 1903, with repeated cut-ins to a close shot of a housemaid fooling around, along with superimpositions and other devices, before abandoning film-making to invent the Kinemacolor system of colour cinematography. His films were the first to establish the basics of coherent narrative and what became known as film language, or "film grammar".
James Williamson concentrated on making films taking action from one place shown in one shot to the next shown in another shot, in films like "Stop Thief!", made in 1901, and many others. He also experimented with the close-up, and made perhaps the most extreme one of all in "The Big Swallow", in which his character approaches the camera and appears to swallow it. These two filmmakers of the Brighton School also pioneered the editing of film; they tinted their work with color and used trick photography to enhance the narrative. By 1900, their films were extended scenes of up to five minutes long.
Most films of this period were what came to be called "chase films". These were inspired by James Williamson's "Stop Thief!" of 1901, which showed a tramp stealing a leg of mutton from a butcher's boy in the first shot, then being chased through the second shot by the butcher's boy and assorted dogs, and finally being caught by the dogs in the third shot. Several British films made in the first half of 1903 extended the chase method of film construction. These included "An Elopement à la Mode" and "The Pickpocket: A Chase Through London", made by Alf Collins for the British branch of the French Gaumont company, "Daring Daylight Burglary", made by Frank Mottershaw at the Sheffield Photographic Company, and "Desperate Poaching Affray", made by William Haggar. Haggar in particular pioneered the first extant panning shots; the poachers are chased by gamekeepers and police officers and the camera pans along, creating a sense of urgency and speed. His films were also recognised for their intelligent use of depth of staging and screen edges, while film academic Noël Burch praised Haggar's effective use of off-screen space. He was also one of the first film makers to purposefully introduce violence for entertainment; in "Desperate Poaching Affray" the villains are seen firing guns at their pursuers.
Other filmmakers took up all these ideas, including the American Edwin S. Porter, who started making films for the Edison Company in 1901. Porter, a projectionist, was hired by Thomas Edison to develop his new projection model known as the Vitascope. Porter wanted to develop a style of filmmaking that would move away from the one-shot short films into a "story-telling [narrative]" style. When he began making longer films in 1902, he put a dissolve between every shot, just as Georges Méliès was already doing, and he frequently had the same action repeated across the dissolves. His film "The Great Train Robbery" (1903) had a running time of twelve minutes, with twenty separate shots and ten different indoor and outdoor locations. He used the cross-cutting editing method to show simultaneous action in different places. The time continuity in "The Great Train Robbery" was actually more confusing than that in the films it was modeled on, but it was nevertheless a greater success due to its Wild West violence. "The Great Train Robbery" served as one of the vehicles that would launch the film medium into mass popularity.
The Pathé company in France also made imitations and variations of Smith and Williamson's films from 1902 onwards using cuts between the shots, which helped to standardize the basics of film construction. An influential French film of the period was Méliès's 14-minute-long "A Trip to the Moon". It was extremely popular at the time of its release, and is the best-known of the hundreds of films made by Méliès. It was one of the first known science fiction films, and used innovative animation and special effects, including the well-known image of the spaceship landing in the Moon's eye. The sheer volume of Pathé's production led to their filmmakers giving a further precision and polish to the details of film continuity.
When cinematography was introduced, animation was familiar from various optical toys (in stroboscopic form), magic lantern shows (in mechanical form) and from Émile Reynaud's "Pantomimes Lumineuses". It took over a decade before animation started to play a role in cinemas with stop motion short films like Segundo de Chomón's "Le théâtre de Bob" (1906) and J. Stuart Blackton's "The Haunted Hotel" (1907) as well as hand-drawn short animation films like Blackton's 1906 film "Humorous Phases of Funny Faces" (with some cut-out animation) and Émile Cohl's "Fantasmagorie" (1908).
The world's first animated feature film was "El Apóstol" (1917), made by Italian-Argentine cartoonist Quirino Cristiani utilizing cutout animation. Cristiani also directed the first animated feature film with sound, "Peludópolis", released with a vitaphone sound-on-disc synchronization system soundtrack. Unfortunately, a fire that destroyed producer Federico Valle's film studio incinerated the only known copies of the movies, and they are now considered lost films.
Films at the time were no longer than one reel, although some multi-reel films had been made on the life of Christ in the first few years of cinema. The first feature-length multi-reel film in the world was the 1906 Australian production called "The Story of the Kelly Gang".
It traced the life of the legendary infamous outlaw and bushranger Ned Kelly (1855–1880) and ran for more than an hour with a reel length of approximately 4,000 feet (1,200 m). It was first shown at the Athenaeum Hall in Collins Street, Melbourne, Australia on 26 December 1906 and in the UK in January 1908.
The first successful permanent theatre showing only films was "The Nickelodeon", which was opened in Pittsburgh in 1905. By then there were enough films several minutes long available to fill a programme running for at least half an hour, and which could be changed weekly when the local audience became bored with it. Other exhibitors in the United States quickly followed suit, and within a couple of years there were thousands of these nickelodeons in operation. The American experience led to a worldwide boom in the production and exhibition of films from 1906 onwards.
By 1907 purpose-built cinemas for motion pictures were being opened across the United States, Britain and France. The films were often shown with the accompaniment of music provided by a pianist, though there could be more musicians. There were also a very few larger cinemas in some of the biggest cities. Initially, the majority of films in the programmes were Pathé films, but this changed fairly quickly as the American companies cranked up production. The reel of film, of 1,000 feet (about 300 m) maximum length, which usually contained one individual film, became the standard unit of film production and exhibition in this period. The programme was made up of just a few films, and the show lasted around 30 minutes. The programme was changed twice or more a week, but went up to five changes of programme a week after a couple of years. In general, cinemas were set up in the established entertainment districts of the cities. In 1907, Pathé began renting their films to cinemas through film exchanges rather than selling the films outright.
By about 1910, actors began to receive screen credit for their roles, and the way to the creation of film stars was opened. Films were increasingly longer, and began to feature proper plots and development.
The litigation over patents between all the major American film-making companies led to the formation of a trust to control the American film business, with each company in the trust being allocated production quotas (two reels a week for the biggest ones, one reel a week for the smaller). However, although 6,000 exhibitors signed up to the trust, about 2,000 others did not and began to fund new film-producing companies. By 1912 the independents had nearly half of the market, and at the same time the government initiated anti-trust action that defeated the trust.
In the early 20th century, before Hollywood, the motion picture industry was based in Fort Lee, New Jersey, across the Hudson River from New York City. In need of a winter headquarters, moviemakers were attracted to Jacksonville, Florida, due to its warm climate, exotic locations, excellent rail access, and cheaper labor, earning the city the title of "The Winter Film Capital of the World". New York-based Kalem Studios was the first to open a permanent studio in Jacksonville in 1908. Over the course of the next decade, more than 30 silent film companies established studios in town, including Metro Pictures (later MGM), Edison Studios, Majestic Films, King-Bee Film Company, Vim Comedy Company, Norman Studios, Gaumont Studios and the Lubin Manufacturing Company. Comedic actor and Georgia native Oliver "Babe" Hardy began his motion picture career there in 1914, starring in over 36 short silent films during his first year of acting. With the closing of Lubin in early 1915, Hardy moved to New York and then New Jersey to find film jobs. Acquiring a job with the Vim Company in early 1915, he returned to Jacksonville in the spring of 1917 before relocating to Los Angeles in October 1917. The first motion picture made in Technicolor and the first feature-length color movie produced in the United States, "The Gulf Between", was also filmed on location in Jacksonville in 1917.
Jacksonville was especially important to the African American film industry. One notable individual in this regard is the European American producer Richard Norman, who created a string of films starring black actors in the vein of Oscar Micheaux and the Lincoln Motion Picture Company. In contrast to the degrading parts offered in certain white films such as "The Birth of a Nation", Norman and his contemporaries sought to create positive stories featuring African Americans in what he termed "splendidly assuming different roles."
Jacksonville's mostly conservative residents, however, objected to the hallmarks of the early movie industry, such as car chases in the streets, simulated bank robberies and fire alarms in public places, and even the occasional riot. In 1917, conservative Democrat John W. Martin was elected mayor on the platform of taming the city's movie industry. By that time, southern California was emerging as the major movie production center, thanks in large part to the move of film pioneers like William Selig and D.W. Griffith to the area. These factors quickly sealed the demise of Jacksonville as a major film destination.
Another factor in the industry's move west was that, until 1913, most American film production was still carried out around New York; because of Thomas A. Edison, Inc.'s monopoly on film patents and its litigious attempts to preserve it, many filmmakers moved to Southern California, starting with Selig in 1909. The sunshine and scenery were important for the production of Westerns, which came to form a major American film genre with the first cowboy stars, G.M. Anderson ("Broncho Billy") and Tom Mix. Selig pioneered the use of (fairly) wild animals from a zoo for a series of exotic adventures, with the actors being menaced or saved by the animals. The Kalem Company sent film crews to places in America and abroad to film stories in the actual places they were supposed to have happened. Kalem also pioneered the female action heroine from 1912, with Ruth Roland playing starring roles in their Westerns.
In France, Pathé retained its dominant position, followed still by Gaumont, and then other new companies that appeared to cater to the film boom. A film company with a different approach was Film d'Art. This was set up at the beginning of 1908 to make films of a serious artistic nature. Their declared programme was to make films using only the best dramatists, artists and actors. The first of these was "L'Assassinat du Duc de Guise" ("The Assassination of the Duc de Guise"), a historical subject set in the court of Henri III. This film used leading actors from the Comédie-Française, and had a special accompanying score written by Camille Saint-Saëns. The other French majors followed suit, and this wave gave rise to the English-language description of films with artistic pretensions aimed at a sophisticated audience as "art films". By 1910, the French film companies were starting to make films as long as two, or even three reels, though most were still one reel long. This trend was followed in Italy, Denmark, and Sweden.
In Britain, the Cinematograph Act 1909 was the first primary legislation to specifically regulate the film industry. Film exhibitions often took place in temporary venues and the use of highly flammable cellulose nitrate for film, combined with limelight illumination, created a significant fire hazard. The Act specified a strict building code which required, amongst other things, that the projector be enclosed within a fire resisting enclosure.
Regular newsreels were exhibited from 1910 and soon became a popular way of finding out the news; the British Antarctic Expedition to the South Pole was filmed for the newsreels, as were the suffragette demonstrations that were happening at the same time. F. Percy Smith was an early nature documentary pioneer working for Charles Urban, and he pioneered the use of time lapse and micro cinematography in his 1910 documentary on the growth of flowers.
With the worldwide film boom, yet more countries now joined Britain, France, Germany and the United States in serious film production. In Italy, production was spread over several centres, with Turin being the first and biggest. There, Ambrosio was the first company in the field, in 1905, and remained the largest in the country through this period. Its most substantial rival was Cines in Rome, which started producing in 1906. The great strength of the Italian industry was historical epics, with large casts and massive scenery. As early as 1911, Giovanni Pastrone's two-reel "La Caduta di Troia (The Fall of Troy)" made a big impression worldwide, and it was followed by even bigger spectacles like "Quo Vadis?" (1912), which ran for 90 minutes, and Pastrone's "Cabiria" of 1914, which ran for two and a half hours.
Italian companies also had a strong line in slapstick comedy, with actors like André Deed, known locally as "Cretinetti" and elsewhere as "Foolshead" and "Gribouille", achieving worldwide fame with almost surrealistic gags.
The most important film-producing country in Northern Europe up until the First World War was Denmark. The Nordisk company was set up there in 1906 by Ole Olsen, a fairground showman, and after a brief period imitating the successes of French and British filmmakers, in 1907 he produced 67 films, most directed by Viggo Larsen, with sensational subjects like "Den hvide Slavinde (The White Slave)", "Isbjørnenjagt (Polar Bear Hunt)" and "Løvejagten (The Lion Hunt)". By 1910, new smaller Danish companies began joining the business, and besides making more films about the white slave trade, they contributed other new subjects. The most important of these finds was Asta Nielsen in "Afgrunden (The Abyss)", directed by Urban Gad for Kosmorama. This film combined the circus, sex, jealousy and murder, all put over with great conviction, and pushed the other Danish filmmakers further in this direction. By 1912, the Danish film companies were multiplying rapidly.
The Swedish film industry was smaller and slower to get started than the Danish industry. Here, the important man was Charles Magnusson, a newsreel cameraman for the Svenska Biografteatern cinema chain. He started fiction film production for them in 1909, directing a number of the films himself. Production increased in 1912, when the company engaged Victor Sjöström and Mauritz Stiller as directors. They started out by imitating the subjects favoured by the Danish film industry, but by 1913 they were producing their own strikingly original work, which sold very well.
Russia began its film industry in 1908, with Pathé shooting some fiction subjects there, followed by the creation of real Russian film companies by Aleksandr Drankov and Aleksandr Khanzhonkov. The Khanzhonkov company quickly became by far the largest Russian film company, and remained so until 1918.
In Germany, Oskar Messter had been involved in film-making from 1896, but did not make a significant number of films per year until 1910. When the worldwide film boom started, he, and the few other people in the German film business, continued to sell prints of their own films outright, which put them at a disadvantage. It was only when Paul Davidson, the owner of a chain of cinemas, brought Asta Nielsen and Urban Gad to Germany from Denmark in 1911, and set up a production company, Projektions-AG "Union" (PAGU), for them, that a change-over to renting prints began. Messter replied with a series of longer films starring Henny Porten, but although these did well in the German-speaking world, they were not particularly successful internationally, unlike the Asta Nielsen films. Another of the growing German film producers just before World War I was the German branch of the French Éclair company, Deutsche Éclair. This was expropriated by the German government, and turned into DECLA when the war started. But altogether, German producers only had a minor part of the German market in 1914.
Overall, from about 1910, American films had the largest share of the market in all European countries except France, and even in France, the American films had just pushed the local production out of first place on the eve of World War I. So even if the war had not happened, American films may have become dominant worldwide. Although the war made things much worse for European producers, the technical qualities of American films made them increasingly attractive to audiences everywhere.
New film techniques introduced in this period included the use of artificial lighting, fire effects, and low-key lighting (i.e. lighting in which most of the frame is dark) for enhanced atmosphere in sinister scenes.
Continuity of action from shot to shot was also refined, such as in Pathé's "le Cheval emballé (The Runaway Horse)" (1907), where cross-cutting between parallel actions is used. D. W. Griffith also began using cross-cutting in the film "The Fatal Hour", made in July 1908. Another development was the use of the Point of View shot, first used in 1910 in Vitagraph's "Back to Nature". Insert shots were also used for artistic purposes; the Italian film "La mala planta (The Evil Plant)", directed by Mario Caserini, had an insert shot of a snake slithering over the "Evil Plant".
As films grew longer, specialist writers were employed to simplify more complex stories derived from novels or plays into a form that could be contained on one reel. Genres began to be used as categories; the main division was into comedy and drama, but these categories were further subdivided.
Intertitles containing lines of dialogue began to be used consistently from 1908 onwards, such as in Vitagraph's "An Auto Heroine; or, The Race for the Vitagraph Cup and How It Was Won". Dialogue titles were eventually inserted into the middle of scenes and became commonplace by 1912. The introduction of dialogue titles transformed the nature of film narrative: once dialogue titles were always cut into a scene just after a character starts speaking, and followed by a cut back to the character just before they finish speaking, one had something that was effectively the equivalent of a present-day sound film.
The years of the First World War were a complex transitional period for the film industry. The exhibition of films changed from short one-reel programmes to feature films. Exhibition venues became larger and began charging higher prices.
In the United States, these changes brought destruction to many film companies, the Vitagraph company being an exception. Film production began to shift to Los Angeles during World War I. The Universal Film Manufacturing Company was formed in 1912 as an umbrella company. New entrants included the Jesse Lasky Feature Play Company and Famous Players, both formed in 1913 and later amalgamated into Famous Players-Lasky. The biggest success of these years was David Wark Griffith's "The Birth of a Nation" (1915). Griffith followed this up with the even bigger "Intolerance" (1916), and the high quality of films produced in the US kept demand for them strong.
In France, film production shut down due to the general military mobilization of the country at the start of the war. Although film production began again in 1915, it was on a reduced scale, and the biggest companies gradually retired from production. Italian film production held up better, although the so-called "diva films", starring anguished female leads, were a commercial failure. In Denmark, the Nordisk company increased its production so much in 1915 and 1916 that it could not sell all its films, which led to a very sharp decline in Danish production and the end of Denmark's importance on the world film scene.
The German film industry was seriously weakened by the war. The most important of the new film producers at the time was Joe May, who made a series of thrillers and adventure films through the war years, but Ernst Lubitsch also came into prominence with a series of very successful comedies and dramas.
At this time, studios were blacked out so that shooting would be unaffected by changing sunlight, with daylight replaced by floodlights and spotlights. Irising-in and out to begin and end scenes was widely adopted in this period. This is the revelation of a film shot in a circular mask, which gradually gets larger until it expands beyond the frame. Other shaped slits were used, including vertical and diagonal apertures.
A new idea taken over from still photography was "soft focus". This began in 1915, with some shots being intentionally thrown out of focus for expressive effect, as in the Mary Pickford vehicle "Fanchon the Cricket".
It was during this period that camera effects intended to convey the subjective feelings of characters in a film really began to be established. These could now be done as Point of View (POV) shots, as in Sidney Drew's "The Story of the Glove" (1915), where a wobbly hand-held shot of a door and its keyhole represents the POV of a drunken man. The use of anamorphic (in the general sense of distorted shape) images first appears in these years with Abel Gance's "la Folie du Docteur Tube (The Madness of Dr. Tube)". In this film the effect of a drug administered to a group of people was suggested by shooting the scenes reflected in a distorting mirror of the fair-ground type.
Symbolic effects taken over from conventional literary and artistic tradition continued to make some appearances in films during these years. In D. W. Griffith's "The Avenging Conscience" (1914), the title "The birth of the evil thought" precedes a series of three shots of the protagonist looking at a spider, and ants eating an insect. Symbolist art and literature from the turn of the century also had a more general effect on a small number of films made in Italy and Russia. The supine acceptance of death resulting from passion and forbidden longings was a major feature of this art, and states of delirium dwelt on at length were important as well.
The use of insert shots, i.e. close-ups of objects other than faces, had already been established by the Brighton school, but such shots were infrequently used before 1914. It is really only with Griffith's "The Avenging Conscience" that a new phase in the use of the insert shot starts. As well as the symbolic inserts already mentioned, the film also made extensive use of large numbers of Big Close Up shots of clutching hands and tapping feet as a means of emphasizing those parts of the body as indicators of psychological tension.
Atmospheric inserts were developed in Europe in the late 1910s. This kind of shot is one in a scene which neither contains any of the characters in the story, nor is a Point of View shot seen by one of them. An early example is in Maurice Tourneur's "The Pride of the Clan" (1917), in which there is a series of shots of waves beating on a rocky shore to demonstrate the harsh lives of the fishing folk. Maurice Elvey's "Nelson; The Story of England's Immortal Naval Hero" (1919) has a symbolic sequence dissolving from a picture of Kaiser Wilhelm II to a peacock, and then to a battleship.
By 1914, continuity cinema was the established mode of commercial cinema. One of the advanced continuity techniques involved an accurate and smooth transition from one shot to another. Cutting to "different" angles within a scene also became well-established as a technique for dissecting a scene into shots in American films. If the direction of the shot changes by more than ninety degrees, it is called a reverse-angle cut. The leading figure in the full development of reverse-angle cutting was Ralph Ince, in films such as "The Right Girl" and "His Phantom Sweetheart".
The use of flash-back structures continued to develop in this period, with the usual way of entering and leaving a flash-back being through a dissolve. The Vitagraph company's "The Man That Might Have Been" (William J. Humphrey, 1914) is even more complex, with a series of reveries and flash-backs that contrast the protagonist's real passage through life with what might have been, if his son had not died.
After 1914, cross-cutting between parallel actions came to be used more in American films than in European ones. Cross-cutting was often used for new effects of contrast, such as the cross-cut sequence in Cecil B. DeMille's "The Whispering Chorus" (1918), in which a supposedly dead husband is having a liaison with a Chinese prostitute in an opium den, while simultaneously his unknowing wife is being remarried in church.
The general trend in the development of cinema, led from the United States, was towards using the newly developed specifically filmic devices for expression of the narrative content of film stories, and combining this with the standard dramatic structures already in use in commercial theatre. D. W. Griffith had the highest standing amongst American directors in the industry, because of the dramatic excitement he conveyed to the audience through his films. Cecil B. DeMille's "The Cheat" (1915) brought out the moral dilemmas facing its characters in a more subtle way than Griffith's films did. DeMille was also in closer touch with the reality of contemporary American life. Maurice Tourneur was also highly ranked for the pictorial beauty of his films, together with the subtlety of his handling of fantasy, while at the same time he was capable of getting greater naturalism from his actors at appropriate moments, as in "A Girl's Folly" (1917).
Sidney Drew was the leader in developing "polite comedy", while slapstick was refined by Fatty Arbuckle and Charles Chaplin, who both started with Mack Sennett's Keystone company. They reduced the usual frenetic pace of Sennett's films to give the audience a chance to appreciate the subtlety and finesse of their movement, and the cleverness of their gags. By 1917 Chaplin was also introducing more dramatic plot into his films, and mixing the comedy with sentiment.
In Russia, Yevgeni Bauer put a slow intensity of acting combined with Symbolist overtones onto film in a unique way.
In Sweden, Victor Sjöström made a series of films that combined the realities of people's lives with their surroundings in a striking manner, while Mauritz Stiller developed sophisticated comedy to a new level.
In Germany, Ernst Lubitsch got his inspiration from the stage work of Max Reinhardt, both in bourgeois comedy and in spectacle, and applied this to his films, culminating in his "die Puppe" ("The Doll"), "die Austernprinzessin" ("The Oyster Princess") and "Madame DuBarry".
At the start of the First World War, French and Italian cinema had been the most globally popular. The war came as a devastating interruption to European film industries. The American industry, or "Hollywood", as it was becoming known after its new geographical center in California, gained the position it has held, more or less, ever since: film factory for the world and exporting its product to most countries on earth.
By the 1920s, the United States reached what is still its era of greatest-ever output, producing an average of 800 "feature" films annually, or 82% of the global total (Eyman, 1997). The comedies of Charlie Chaplin and Buster Keaton, the swashbuckling adventures of Douglas Fairbanks and the romances of Clara Bow, to cite just a few examples, made these performers' faces well known on every continent. The Western visual norm that would become classical continuity editing was developed and exported – although its adoption was slower in some non-Western countries without strong realist traditions in art and drama, such as Japan.
This development was contemporary with the growth of the studio system and its greatest publicity method, the star system, which characterized American film for decades to come and provided models for other film industries. The studios' efficient, top-down control over all stages of their product enabled a new and ever-growing level of lavish production and technical sophistication. At the same time, the system's commercial regimentation and focus on glamorous escapism discouraged daring and ambition beyond a certain degree, a prime example being the brief but still legendary directing career of the iconoclastic Erich von Stroheim in the late teens and the 1920s.
During late 1927, Warners released "The Jazz Singer", which was mostly silent but contained what is generally regarded as the first synchronized dialogue (and singing) in a feature film; the feat had actually been accomplished earlier, by Charles Taze Russell in 1914, with the lengthy film "The Photo-Drama of Creation". That drama consisted of picture slides and moving pictures synchronized with phonograph records of talks and music. The early sound-on-disc processes such as Vitaphone were soon superseded by sound-on-film methods like Fox Movietone, DeForest Phonofilm, and RCA Photophone. The trend convinced the largely reluctant industrialists that "talking pictures", or "talkies", were the future. Many attempts had been made before the success of "The Jazz Singer", as can be seen in the List of film sound systems.
The change was remarkably swift. By the end of 1929, Hollywood was almost all-talkie, with several competing sound systems (soon to be standardized). Total changeover was slightly slower in the rest of the world, principally for economic reasons. Cultural reasons were also a factor in countries like China and Japan, where silents co-existed successfully with sound well into the 1930s, indeed producing what would be some of the most revered classics in those countries, like Wu Yonggang's "The Goddess" (China, 1934) and Yasujirō Ozu's "I Was Born, But..." (Japan, 1932). But even in Japan, a figure such as the benshi, the live narrator who was a major part of Japanese silent cinema, found his acting career was ending.
Sound further tightened the grip of major studios in numerous countries: the vast expense of the transition overwhelmed smaller competitors, while the novelty of sound lured vastly larger audiences for those producers that remained. In the case of the U.S., some historians credit sound with saving the Hollywood studio system in the face of the Great Depression (Parkinson, 1995). Thus began what is now often called "The Golden Age of Hollywood", which refers roughly to the period beginning with the introduction of sound until the late 1940s. The American cinema reached its peak of efficiently manufactured glamour and global appeal during this period. The top actors of the era are now thought of as the classic film stars, such as Clark Gable, Katharine Hepburn, Humphrey Bogart, Greta Garbo, and the greatest box office draw of the 1930s, child performer Shirley Temple.
Creatively, however, the rapid transition was a difficult one, and in some ways, film briefly reverted to the conditions of its earliest days. The late '20s were full of static, stagey talkies as artists in front of and behind the camera struggled with the stringent limitations of the early sound equipment and their own uncertainty as to how to utilize the new medium. Many stage performers, directors and writers were introduced to cinema as producers sought personnel experienced in dialogue-based storytelling. Many major silent filmmakers and actors were unable to adjust and found their careers severely curtailed or even ended.
This awkward period was fairly short-lived. 1929 was a watershed year: William Wellman with "Chinatown Nights" and "The Man I Love", Rouben Mamoulian with "Applause", Alfred Hitchcock with "Blackmail" (Britain's first sound feature), were among the directors to bring greater fluidity to talkies and experiment with the expressive use of sound (Eyman, 1997). In this, they both benefited from, and pushed further, technical advances in microphones and cameras, and capabilities for editing and post-synchronizing sound (rather than recording all sound directly at the time of filming).
Sound films benefited some genres more than silents had. Most obviously, the musical film was born; the first classic-style Hollywood musical was "The Broadway Melody" (1929), and the form would find its first major creator in choreographer/director Busby Berkeley ("42nd Street", 1933, "Dames", 1934). In France, avant-garde director René Clair made surreal use of song and dance in comedies like "Under the Roofs of Paris" (1930) and "Le Million" (1931). Universal Pictures began releasing gothic horror films like "Dracula" and "Frankenstein" (both 1931). In 1933, RKO Pictures released Merian C. Cooper's classic "giant monster" film "King Kong". The trend thrived best in India, where the influence of the country's traditional song-and-dance drama made the musical the basic form of most sound films (Cook, 1990); virtually unnoticed by the Western world for decades, this Indian popular cinema would nevertheless become the world's most prolific. ("See also Bollywood.")
At this time, American gangster films like "Little Caesar" and Wellman's "The Public Enemy" (both 1931) became popular. Dialogue now took precedence over "slapstick" in Hollywood comedies: the fast-paced, witty banter of "The Front Page" (1931) or "It Happened One Night" (1934), the sexual double entendres of Mae West ("She Done Him Wrong", 1933) or the often subversively anarchic nonsense talk of the Marx Brothers ("Duck Soup", 1933). Walt Disney, who had previously been in the short cartoon business, stepped into feature films with the first English-language animated feature, "Snow White and the Seven Dwarfs", released by RKO Pictures in 1937. 1939, a major year for American cinema, brought such films as "The Wizard of Oz" and "Gone with the Wind".
Previously, it was believed that color films were first projected in 1909 at the Palace Theatre in London (the main limitation being that the technique used, George Albert Smith's Kinemacolor, worked with only two colors, green and red, which were mixed additively). In fact, the first color film in history was created in 1901. This untitled film was directed by photographer Edward Raymond Turner and his patron Frederick Marshall Lee. Their method was to shoot on black-and-white film rolls, but with green, red, and blue filters passing individually in front of the camera as it shot. To show the film, the footage was projected through the corresponding filters on a special projector. However, both the shooting of the film and its projection suffered from major, unrelated issues that eventually sank the idea.
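The principle of Turner's additive process can be illustrated digitally by stacking three grayscale frames, each exposed through a different filter, into the channels of one color image. A minimal Python sketch, assuming the three frames are already perfectly aligned (in practice, registration between successive frames was one of the problems that plagued the system; the frame values here are hypothetical):

```python
import numpy as np

def reconstruct_additive_color(red_frame, green_frame, blue_frame):
    """Combine three grayscale frames, each shot through a red, green,
    or blue filter, into one additive color image (values in [0, 1])."""
    # Each filtered exposure becomes one channel of the RGB result.
    return np.stack([red_frame, green_frame, blue_frame], axis=-1)

# Hypothetical 2x2 frames of brightness values.
r = np.array([[1.0, 0.0], [0.5, 0.2]])
g = np.array([[0.0, 1.0], [0.5, 0.2]])
b = np.array([[0.0, 0.0], [0.5, 0.8]])
print(reconstruct_additive_color(r, g, b).shape)  # (2, 2, 3): one RGB triple per pixel
```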
Subsequently, in 1916, the Technicolor technique arrived: a trichromatic (green, red, blue) procedure whose use required a triple photographic exposure, the incorporation of chromatic filters, and cameras of enormous dimensions. The first audiovisual piece realized entirely in this technique was the Walt Disney short "Flowers and Trees" (1932), directed by Burt Gillett. The first feature film made with the technique was Rouben Mamoulian's "Becky Sharp" (1935), an adaptation of "Vanity Fair". Later, Technicolor spread, above all in musicals such as "The Wizard of Oz" and "Singin' in the Rain", in costume films such as "The Adventures of Robin Hood", and in animated films such as "Snow White and the Seven Dwarfs".
The desire for wartime propaganda created a renaissance in the film industry in Britain, with realistic war dramas like "49th Parallel" (1941), "Went the Day Well?" (1942), "The Way Ahead" (1944) and Noël Coward and David Lean's celebrated naval film "In Which We Serve" (1942), which won a special Academy Award. These existed alongside more flamboyant films like Michael Powell and Emeric Pressburger's "The Life and Death of Colonel Blimp" (1943), "A Canterbury Tale" (1944) and "A Matter of Life and Death" (1946), as well as Laurence Olivier's 1944 film "Henry V", based on the Shakespearean history "Henry V". The success of "Snow White and the Seven Dwarfs" allowed Disney to make more animated features like "Pinocchio" (1940), "Fantasia" (1940), "Dumbo" (1941) and "Bambi" (1942).
The onset of US involvement in World War II also brought a proliferation of films serving as both patriotism and propaganda. American propaganda films included "Desperate Journey" (1942), "Mrs. Miniver" (1942), "Forever and a Day" (1943) and "Objective, Burma!" (1945). Notable American films from the war years include the anti-Nazi "Watch on the Rhine" (1943), scripted by Dashiell Hammett; "Shadow of a Doubt" (1943), Hitchcock's direction of a script by Thornton Wilder; the George M. Cohan biopic "Yankee Doodle Dandy" (1942), starring James Cagney; and the immensely popular "Casablanca", with Humphrey Bogart. Bogart would star in 36 films between 1934 and 1942, including John Huston's "The Maltese Falcon" (1941), one of the first films now considered a classic film noir. In 1941, RKO Pictures released Orson Welles's "Citizen Kane", often considered the greatest film of all time. It would set the stage for the modern motion picture, as it revolutionized film storytelling.
The strictures of wartime also brought an interest in more fantastical subjects. These included Britain's Gainsborough melodramas (including "The Man in Grey" and "The Wicked Lady"), and films like "Here Comes Mr. Jordan", "Heaven Can Wait", "I Married a Witch" and "Blithe Spirit". Val Lewton also produced a series of atmospheric and influential small-budget horror films, some of the more famous examples being "Cat People", "Isle of the Dead" and "The Body Snatcher". The decade probably also saw the so-called "women's pictures", such as "Now, Voyager", "Random Harvest" and "Mildred Pierce" at the peak of their popularity.
In 1946, RKO Radio released "It's a Wonderful Life", directed by Italian-born filmmaker Frank Capra. Soldiers returning from the war would provide the inspiration for films like "The Best Years of Our Lives", and many of those in the film industry had served in some capacity during the war. Samuel Fuller's experiences in World War II would influence his largely autobiographical films of later decades, such as "The Big Red One". The Actors Studio was founded in October 1947 by Elia Kazan, Robert Lewis, and Cheryl Crawford, and the same year Oskar Fischinger filmed "Motion Painting No. 1".
In 1943, "Ossessione" was screened in Italy, marking the beginning of Italian neorealism. Major films of this type during the 1940s included "Bicycle Thieves", "Rome, Open City", and "La Terra Trema". In 1952 "Umberto D" was released, usually considered the last film of this type.
In the late 1940s, in Britain, Ealing Studios embarked on their series of celebrated comedies, including "Whisky Galore!", "Passport to Pimlico", "Kind Hearts and Coronets" and "The Man in the White Suit", and Carol Reed directed his influential thrillers "Odd Man Out", "The Fallen Idol" and "The Third Man". David Lean was also rapidly becoming a force in world cinema with "Brief Encounter" and his Dickens adaptations "Great Expectations" and "Oliver Twist", and Michael Powell and Emeric Pressburger would experience the best of their creative partnership with films like "Black Narcissus" and "The Red Shoes".
The House Un-American Activities Committee investigated Hollywood in the early 1950s. The hearings, protested by the Hollywood Ten before the committee, resulted in the blacklisting of many actors, writers and directors, including Paddy Chayefsky, Charlie Chaplin, and Dalton Trumbo, many of whom fled to Europe, especially the United Kingdom.
The Cold War era zeitgeist translated into a type of near-paranoia manifested in themes such as invading armies of evil aliens ("Invasion of the Body Snatchers", "The War of the Worlds") and communist fifth columnists ("The Manchurian Candidate").
During the immediate post-war years the cinematic industry was also threatened by television, and the increasing popularity of the medium meant that some film theatres would go bankrupt and close. The demise of the "studio system" spurred the self-commentary of films like "Sunset Boulevard" (1950) and "The Bad and the Beautiful" (1952).
In 1950, the Lettrist avant-gardists caused riots at the Cannes Film Festival when Isidore Isou's "Treatise on Slime and Eternity" was screened. After their criticism of Charlie Chaplin and split with the movement, the Ultra-Lettrists continued to cause disruptions when they showed their new hypergraphical techniques.
The most notorious film of this kind is Guy Debord's "Howls for Sade" of 1952.
Distressed by the increasing number of closed theatres, studios and companies would find new and innovative ways to bring audiences back. These included attempts to widen their appeal with new screen formats. CinemaScope, which would remain a 20th Century Fox distinction until 1967, was announced with 1953's "The Robe". VistaVision, Cinerama, and Todd-AO boasted a "bigger is better" approach to marketing films to a dwindling US audience. This resulted in the revival of epic films to take advantage of the new big-screen formats. Some of the most successful examples of these Biblical and historical spectaculars include "The Ten Commandments" (1956), "The Vikings" (1958), "Ben-Hur" (1959), "Spartacus" (1960) and "El Cid" (1961). Also during this period a number of other significant films were produced in Todd-AO, developed by Mike Todd shortly before his death, including "Oklahoma!" (1955), "Around the World in 80 Days" (1956), "South Pacific" (1958) and "Cleopatra" (1963), among others.
Gimmicks also proliferated to lure in audiences. The fad for 3-D film would last for only two years, 1952–1954, and helped sell "House of Wax" and "Creature from the Black Lagoon". Producer William Castle would tout films featuring "Emergo" and "Percepto", the first of a series of gimmicks that would remain popular marketing tools for Castle and others throughout the 1960s.
In the U.S., a post-WW2 tendency toward questioning the establishment and societal norms, together with the early activism of the civil rights movement, was reflected in Hollywood films such as "Blackboard Jungle" (1955), "On the Waterfront" (1954), Paddy Chayefsky's "Marty" and Reginald Rose's "12 Angry Men" (1957). Disney continued making animated films, notably "Cinderella" (1950), "Peter Pan" (1953), "Lady and the Tramp" (1955), and "Sleeping Beauty" (1959). He began, however, getting more involved in live action films, producing classics like "20,000 Leagues Under the Sea" (1954) and "Old Yeller" (1957). Television began competing seriously with films projected in theatres, but surprisingly it promoted more filmgoing rather than curtailing it.
"Limelight" is probably a unique film in at least one interesting respect. Its two leads, Charlie Chaplin and Claire Bloom, were in the industry in no less than three different centuries. In the 19th Century, Chaplin made his theatrical debut at the age of eight, in 1897, in a clog dancing troupe, The Eight Lancaster Lads. In the 21st Century, Bloom is still enjoying a full and productive career, having appeared in dozens of films and television series produced up to and including 2013. She received particular acclaim for her role in "The King's Speech" (2010).
Following the end of World War II, the next decade, the 1950s, marked a 'golden age' for non-English world cinema, especially for Asian cinema. Many of the most critically acclaimed Asian films of all time were produced during this decade, including Yasujirō Ozu's "Tokyo Story" (1953), Satyajit Ray's "The Apu Trilogy" (1955–1959) and "Jalsaghar" (1958), Kenji Mizoguchi's "Ugetsu" (1953) and "Sansho the Bailiff" (1954), Raj Kapoor's "Awaara" (1951), Mikio Naruse's "Floating Clouds" (1955), Guru Dutt's "Pyaasa" (1957) and "Kaagaz Ke Phool" (1959), and the Akira Kurosawa films "Rashomon" (1950), "Ikiru" (1952), "Seven Samurai" (1954) and "Throne of Blood" (1957).
During Japanese cinema's 'Golden Age' of the 1950s, successful films included "Rashomon" (1950), "Seven Samurai" (1954) and "The Hidden Fortress" (1958) by Akira Kurosawa, as well as Yasujirō Ozu's "Tokyo Story" (1953) and Ishirō Honda's "Godzilla" (1954). These films have had a profound influence on world cinema. In particular, Kurosawa's "Seven Samurai" has been remade several times as Western films, such as "The Magnificent Seven" (1960) and "Battle Beyond the Stars" (1980), and has also inspired several Bollywood films, such as "Sholay" (1975) and "China Gate" (1998). "Rashomon" was also remade as "The Outrage" (1964), and inspired films with "Rashomon effect" storytelling methods, such as "Andha Naal" (1954), "The Usual Suspects" (1995) and "Hero" (2002). "The Hidden Fortress" was also the inspiration behind George Lucas' "Star Wars" (1977). Other famous Japanese filmmakers from this period include Kenji Mizoguchi, Mikio Naruse, Hiroshi Inagaki and Nagisa Oshima. Japanese cinema later became one of the main inspirations behind the New Hollywood movement of the 1960s to 1980s.
During Indian cinema's 'Golden Age' of the 1950s, it was producing 200 films annually, while Indian independent films gained greater recognition through international film festivals. One of the most famous was "The Apu Trilogy" (1955–1959) from critically acclaimed Bengali film director Satyajit Ray, whose films had a profound influence on world cinema, with directors such as Akira Kurosawa, Martin Scorsese, James Ivory, Abbas Kiarostami, Elia Kazan, François Truffaut, Steven Spielberg, Carlos Saura, Jean-Luc Godard, Isao Takahata, Gregory Nava, Ira Sachs, Wes Anderson and Danny Boyle being influenced by his cinematic style. According to Michael Sragow of "The Atlantic Monthly", the "youthful coming-of-age dramas that have flooded art houses since the mid-fifties owe a tremendous debt to the Apu trilogy". Subrata Mitra's cinematographic technique of bounce lighting also originates from "The Apu Trilogy". Other famous Indian filmmakers from this period include Guru Dutt, Ritwik Ghatak, Mrinal Sen, Raj Kapoor, Bimal Roy, K. Asif and Mehboob Khan.
The cinema of South Korea also experienced a 'Golden Age' in the 1950s, beginning with director Lee Kyu-hwan's tremendously successful remake of "Chunhyang-jon" (1955). That year also saw the release of "Yangsan Province" by the renowned director, Kim Ki-young, marking the beginning of his productive career. Both the quality and quantity of filmmaking had increased rapidly by the end of the 1950s. South Korean films, such as Lee Byeong-il's 1956 comedy "Sijibganeun nal (The Wedding Day)", had begun winning international awards. In contrast to the beginning of the 1950s, when only 5 films were made per year, 111 films were produced in South Korea in 1959.
The 1950s was also a 'Golden Age' for Philippine cinema, with the emergence of more artistic and mature films, and significant improvement in cinematic techniques among filmmakers. The studio system produced frenetic activity in the local film industry as many films were made annually and several local talents started to earn recognition abroad. The premiere Philippine directors of the era included Gerardo de Leon, Gregorio Fernández, Eddie Romero, Lamberto Avellana, and Cirio Santiago.
During the 1960s, the studio system in Hollywood declined, because many films were now being made on location in other countries, or using studio facilities abroad, such as Pinewood in the UK and Cinecittà in Rome. "Hollywood" films were still largely aimed at family audiences, and it was often the more old-fashioned films that produced the studios' biggest successes. Productions like "Mary Poppins" (1964), "My Fair Lady" (1964) and "The Sound of Music" (1965) were among the biggest money-makers of the decade. The growth in independent producers and production companies, and the increase in the power of individual actors also contributed to the decline of traditional Hollywood studio production.
There was also an increasing awareness of foreign language cinema in America during this period. During the late 1950s and 1960s, French New Wave directors such as François Truffaut and Jean-Luc Godard produced films such as "Les quatre cents coups", "Breathless" and "Jules et Jim" which broke the rules of Hollywood cinema's narrative structure. Audiences were also becoming aware of Italian films like Federico Fellini's "La Dolce Vita" and the stark dramas of Sweden's Ingmar Bergman.
In Britain, the "Free Cinema" of Lindsay Anderson, Tony Richardson and others lead to a group of realistic and innovative dramas including "Saturday Night and Sunday Morning", "A Kind of Loving" and "This Sporting Life". Other British films such as "Repulsion", "Darling", "Alfie", "Blowup" and "Georgy Girl" (all in 1965–1966) helped to reduce prohibitions of sex and nudity on screen, while the casual sex and violence of the James Bond films, beginning with "Dr. No" in 1962 would render the series popular worldwide.
During the 1960s, Ousmane Sembène produced several French- and Wolof-language films and became the "father" of African Cinema. In Latin America, the dominance of the "Hollywood" model was challenged by many film makers. Fernando Solanas and Octavio Getino called for a politically engaged Third Cinema in contrast to Hollywood and the European auteur cinema.
Further, the nuclear paranoia of the age, and the threat of an apocalyptic nuclear exchange (like the 1962 close call with the USSR during the Cuban missile crisis), prompted a reaction within the film community as well. Films like Stanley Kubrick's "Dr. Strangelove" and "Fail Safe" with Henry Fonda were produced in a Hollywood once known for its overt patriotism and wartime propaganda.
In documentary film the sixties saw the blossoming of Direct Cinema, an observational style of film making, as well as the advent of more overtly partisan films like "In the Year of the Pig" about the Vietnam War by Emile de Antonio. By the late 1960s, however, Hollywood filmmakers were beginning to create more innovative and groundbreaking films that reflected the social revolution that had taken over much of the western world, such as "Bonnie and Clyde" (1967), "The Graduate" (1967), "2001: A Space Odyssey" (1968), "Rosemary's Baby" (1968), "Midnight Cowboy" (1969), "Easy Rider" (1969) and "The Wild Bunch" (1969). "Bonnie and Clyde" is often considered the beginning of the so-called New Hollywood.
In Japanese cinema, Academy Award-winning director Akira Kurosawa produced "Yojimbo" (1961), which, like his previous films, had a profound influence around the world. The influence of this film is most apparent in Sergio Leone's "A Fistful of Dollars" (1964) and Walter Hill's "Last Man Standing" (1996). "Yojimbo" was also the origin of the "Man with No Name" trend.
The New Hollywood was the period following the decline of the studio system during the 1950s and 1960s and the end of the production code (which was replaced in 1968 by the MPAA film rating system). During the 1970s, filmmakers increasingly depicted explicit sexual content and showed gunfight and battle scenes that included graphic images of bloody deaths; a good example of this is Wes Craven's "The Last House on the Left" (1972).
Post-classical cinema is a term for the changing methods of storytelling of the New Hollywood producers. The new methods of drama and characterization played upon audience expectations acquired during the classical/Golden Age period: story chronology may be scrambled, storylines may feature unsettling "twist endings", main characters may behave in a morally ambiguous fashion, and the lines between the antagonist and protagonist may be blurred. The beginnings of post-classical storytelling may be seen in the film noir of the 1940s and 1950s, in films such as "Rebel Without a Cause" (1955), and in Hitchcock's "Psycho". 1971 marked the release of controversial films like "Straw Dogs", "A Clockwork Orange", "The French Connection" and "Dirty Harry", which sparked heated controversy over the perceived escalation of violence in cinema.
During the 1970s, a new group of American filmmakers emerged, such as Martin Scorsese, Francis Ford Coppola, George Lucas, Woody Allen, Terrence Malick, and Robert Altman. This coincided with the increasing popularity of the auteur theory in film literature and the media, which posited that a film director's films express their personal vision and creative insights. The development of the auteur style of filmmaking helped to give these directors far greater control over their projects than would have been possible in earlier eras. This led to some great critical and commercial successes, like Scorsese's "Taxi Driver", Coppola's "The Godfather" films, William Friedkin's "The Exorcist", Altman's "Nashville", Allen's "Annie Hall" and "Manhattan", Malick's "Badlands" and "Days of Heaven", and Polish immigrant Roman Polanski's "Chinatown". It also, however, resulted in some failures, including Peter Bogdanovich's "At Long Last Love" and Michael Cimino's hugely expensive Western epic "Heaven's Gate", which helped to bring about the demise of its backer, United Artists.
The financial disaster of "Heaven's Gate" marked the end of the visionary "auteur" directors of the "New Hollywood", who had unrestrained creative and financial freedom to develop films. The phenomenal success in the 1970s of Spielberg's "Jaws" originated the concept of the modern "blockbuster". However, the enormous success of George Lucas' 1977 film "Star Wars" led to much more than just the popularization of blockbuster film-making. The film's revolutionary use of special effects, sound editing and music led it to become widely regarded as one of the single most important films in the medium's history, as well as the most influential film of the 1970s. Hollywood studios increasingly focused on producing a smaller number of very large budget films with massive marketing and promotional campaigns. This trend had already been foreshadowed by the commercial success of disaster films such as "The Poseidon Adventure" and "The Towering Inferno".
During the mid-1970s, more pornographic theatres, euphemistically called "adult cinemas", were established, and the legal production of hardcore pornographic films began. Porn films such as "Deep Throat" and its star Linda Lovelace became something of a popular culture phenomenon and resulted in a spate of similar sex films. The porn cinemas finally died out during the 1980s, when the popularization of the home VCR and pornography videotapes allowed audiences to watch sex films at home. In the early 1970s, English-language audiences became more aware of the new West German cinema, with Werner Herzog, Rainer Werner Fassbinder and Wim Wenders among its leading exponents.
In world cinema, the 1970s saw a dramatic increase in the popularity of martial arts films, largely due to the genre's reinvention by Bruce Lee, who departed from the artistic style of traditional Chinese martial arts films and added a much greater sense of realism to them with his Jeet Kune Do style. This began with "The Big Boss" (1971), which was a major success across Asia. However, he did not gain fame in the Western world until shortly after his death in 1973, when "Enter the Dragon" was released. The film went on to become the most successful martial arts film in cinematic history, popularized the martial arts film genre across the world, and cemented Bruce Lee's status as a cultural icon. Hong Kong action cinema, however, was in decline due to a wave of "Bruceploitation" films. This trend eventually came to an end in 1978 with the martial arts comedy films "Snake in the Eagle's Shadow" and "Drunken Master", directed by Yuen Woo-ping and starring Jackie Chan, laying the foundations for the rise of Hong Kong action cinema in the 1980s.
While the musical film genre had declined in Hollywood by this time, musical films were quickly gaining popularity in the cinema of India, where the term "Bollywood" was coined for the growing Hindi film industry in Bombay (now Mumbai) that ended up dominating South Asian cinema, overtaking the more critically acclaimed Bengali film industry in popularity. Hindi filmmakers combined the Hollywood musical formula with the conventions of ancient Indian theatre to create a new film genre called "Masala", which dominated Indian cinema throughout the late 20th century. These "Masala" films portrayed action, comedy, drama, romance and melodrama all at once, with "filmi" song and dance routines thrown in. This trend began with films directed by Manmohan Desai and starring Amitabh Bachchan, who remains one of the most popular film stars in South Asia. The most popular Indian film of all time was "Sholay" (1975), a "Masala" film inspired by a real-life dacoit as well as Kurosawa's "Seven Samurai" and the Spaghetti Westerns.
The end of the decade saw the first major international marketing of Australian cinema, as Peter Weir's films "Picnic at Hanging Rock" and "The Last Wave" and Fred Schepisi's "The Chant of Jimmie Blacksmith" gained critical acclaim. In 1979, Australian filmmaker George Miller also garnered international attention for his violent, low-budget action film "Mad Max".
During the 1980s, audiences began increasingly watching films on their home VCRs. In the early part of that decade, the film studios tried legal action to ban home ownership of VCRs as a violation of copyright, which proved unsuccessful. Eventually, the sale and rental of films on home video became a significant "second venue" for exhibition of films, and an additional source of revenue for the film industries. Direct-to-video (niche) markets usually offered lower quality, cheap productions that were not deemed very suitable for the general audiences of television and theatrical releases.
The Lucas–Spielberg combine would dominate "Hollywood" cinema for much of the 1980s, and lead to much imitation. Two follow-ups to "Star Wars", three to "Jaws", and three "Indiana Jones" films helped to make sequels of successful films more of an expectation than ever before. Lucas also launched THX Ltd, a division of Lucasfilm, in 1982, while Spielberg enjoyed one of the decade's greatest successes with "E.T. the Extra-Terrestrial" the same year. 1982 also saw the release of Disney's "Tron", one of the first films from a major studio to use computer graphics extensively. American independent cinema struggled more during the decade, although Martin Scorsese's "Raging Bull" (1980), "The King of Comedy" (1983), and "After Hours" (1985) helped to establish him as one of the most critically acclaimed American film makers of the era. Also during 1983 "Scarface" was released; it was very profitable and resulted in even greater fame for its leading actor Al Pacino. Probably the decade's most commercially successful film was Tim Burton's 1989 version of Bob Kane's creation, "Batman", which broke box-office records. Jack Nicholson's portrayal of the demented Joker earned him a total of $60,000,000 after figuring in his percentage of the gross.
British cinema was given a boost during the early 1980s by the arrival of David Puttnam's company Goldcrest Films. The films "Chariots of Fire", "Gandhi", "The Killing Fields" and "A Room with a View" appealed to a "middlebrow" audience which was increasingly being ignored by the major Hollywood studios. While the films of the 1970s had helped to define modern blockbuster motion pictures, the way "Hollywood" released its films would now change. Films, for the most part, would premiere in a wider number of theatres, although, to this day, some films still premiere using the route of the limited/roadshow release system. Against some expectations, the rise of the multiplex cinema did not allow less mainstream films to be shown, but simply allowed the major blockbusters to be given an even greater number of screenings. However, films that had been overlooked in cinemas were increasingly being given a second chance on home video.
During the 1980s, Japanese cinema experienced a revival, largely due to the success of anime films. At the beginning of the 1980s, "Space Battleship Yamato" (1973) and "Mobile Suit Gundam" (1979), both of which were unsuccessful as television series, were remade as films and became hugely successful in Japan. In particular, "Mobile Suit Gundam" sparked the Gundam franchise of Real Robot mecha anime. The success of "Macross: Do You Remember Love?" (1984) also sparked a Macross franchise of mecha anime. This was also the decade when Studio Ghibli was founded. The studio produced Hayao Miyazaki's first fantasy films, "Nausicaä of the Valley of the Wind" (1984) and "Castle in the Sky" (1986), as well as Isao Takahata's "Grave of the Fireflies" (1988), all of which were very successful in Japan and received worldwide critical acclaim. Original video animation (OVA) films also began during this decade; the most influential of these early OVA films was Noboru Ishiguro's cyberpunk film "Megazone 23" (1985). The most famous anime film of this decade was Katsuhiro Otomo's cyberpunk film "Akira" (1988), which although initially unsuccessful at Japanese theaters, went on to become an international success.
Hong Kong action cinema, which was in a state of decline due to endless Bruceploitation films after the death of Bruce Lee, also experienced a revival in the 1980s, largely due to the reinvention of the action film genre by Jackie Chan. He had previously combined the comedy film and martial arts film genres successfully in the 1978 films "Snake in the Eagle's Shadow" and "Drunken Master". The next step he took was in combining this comedy martial arts genre with a new emphasis on elaborate and highly dangerous stunts, reminiscent of the silent film era. The first film in this new style of action cinema was "Project A" (1983), which saw the formation of the Jackie Chan Stunt Team as well as the "Three Brothers" (Chan, Sammo Hung and Yuen Biao). The film added elaborate, dangerous stunts to the fights and slapstick humor, and became a huge success throughout the Far East. As a result, Chan continued this trend with martial arts action films containing even more elaborate and dangerous stunts, including "Wheels on Meals" (1984), "Police Story" (1985), "Armour of God" (1986), "Project A Part II" (1987), "Police Story 2" (1988), and "Dragons Forever" (1988). Other new trends which began in the 1980s were the "girls with guns" subgenre, for which Michelle Yeoh gained fame; and especially the "heroic bloodshed" genre, revolving around Triads, largely pioneered by John Woo and for which Chow Yun-fat became famous. These Hong Kong action trends were later adopted by many Hollywood action films in the 1990s and 2000s.
The early 1990s saw the development of a commercially successful independent cinema in the United States. Although cinema was increasingly dominated by special-effects films such as "Terminator 2: Judgment Day" (1991), "Jurassic Park" (1993) and "Titanic" (1997), the latter of which became the highest-grossing film of all time until "Avatar" (2009), also directed by James Cameron, independent films like Steven Soderbergh's "Sex, Lies, and Videotape" (1989) and Quentin Tarantino's "Reservoir Dogs" (1992) had significant commercial success both at the cinema and on home video. Filmmakers associated with the Danish film movement Dogme 95 introduced a manifesto aimed at purifying filmmaking. Its first few films gained worldwide critical acclaim, after which the movement slowly faded out.
Major American studios began to create their own "independent" production companies to finance and produce non-mainstream fare. One of the most successful independents of the 1990s, Miramax Films, was bought by Disney the year before the release of Tarantino's runaway hit "Pulp Fiction" in 1994. The same year marked the beginning of film and video distribution online. Animated films aimed at family audiences also regained their popularity, with Disney's "Beauty and the Beast" (1991), "Aladdin" (1992), and "The Lion King" (1994). During 1995, the first feature-length computer-animated film, "Toy Story", was produced by Pixar Animation Studios and released by Disney. After the success of "Toy Story", computer animation would grow to become the dominant technique for feature-length animation, allowing competing film companies such as DreamWorks Animation and 20th Century Fox to effectively compete with Disney with successful films of their own. During the late 1990s, another cinematic transition began, from physical film stock to digital cinema technology. Meanwhile, DVDs became the new standard for consumer video, replacing VHS tapes.
Since the late 2000s, streaming media platforms like YouTube have provided the means for anyone with internet access and a camera (a standard feature of smartphones) to publish videos to the world. Also competing with the increasing popularity of video games and other forms of home entertainment, the industry once again started to make theatrical releases more attractive, with new 3D technologies and epic (fantasy and superhero) films becoming a mainstay in cinemas.
The documentary film also rose as a commercial genre for perhaps the first time, with the success of films such as "March of the Penguins" and Michael Moore's "Bowling for Columbine" and "Fahrenheit 9/11". A new genre was created with Martin Kunert and Eric Manes' "Voices of Iraq", when 150 inexpensive DV cameras were distributed across Iraq, transforming ordinary people into collaborative filmmakers. The success of "Gladiator" led to a revival of interest in epic cinema, and "Moulin Rouge!" renewed interest in musical cinema. Home theatre systems became increasingly sophisticated, as did some of the special edition DVDs designed to be shown on them. The "Lord of the Rings" trilogy was released on DVD in both the theatrical version and in a special extended version intended only for home cinema audiences.
In 2001, the "Harry Potter" film series began, and by its end in 2011, it had become the highest-grossing film franchise of all time until the Marvel Cinematic Universe passed it in 2015.
More films were also being released simultaneously to IMAX cinema; the first was the 2002 Disney animation "Treasure Planet", and the first live-action films were 2003's "The Matrix Revolutions" and a re-release of "The Matrix Reloaded". Later in the decade, "The Dark Knight" was the first major feature film to have been at least partially shot using IMAX technology.
There has been an increasing globalization of cinema during this decade, with foreign-language films gaining popularity in English-speaking markets. Examples of such films include "Crouching Tiger, Hidden Dragon" (Mandarin), "Amélie" (French), "Lagaan" (Hindi), "Spirited Away" (Japanese), "City of God" (Brazilian Portuguese), "The Passion of the Christ" (Aramaic), "Apocalypto" (Mayan) and "Inglourious Basterds" (multiple European languages). Italy is the most awarded country at the Academy Award for Best Foreign Language Film, with 14 awards won, 3 Special Awards and 31 nominations.
In 2003, there was a revival in 3D film popularity, the first such film being James Cameron's "Ghosts of the Abyss", which was released as the first full-length 3-D IMAX feature filmed with the Reality Camera System. This camera system used the latest HD video cameras, not film, and was built for Cameron by Emmy-nominated director of photography Vince Pace, to his specifications. The same camera system was used to film "Spy Kids 3-D: Game Over" (2003), "Aliens of the Deep" IMAX (2005), and "The Adventures of Sharkboy and Lavagirl in 3-D" (2005).
After James Cameron's 3D film "Avatar" became the highest-grossing film of all time, 3D films gained brief popularity with many other films being released in 3D, with the best critical and financial successes being in the field of feature film animation such as Universal Pictures/Illumination Entertainment's "Despicable Me" and DreamWorks Animation's "How To Train Your Dragon", "Shrek Forever After" and "Megamind". "Avatar" is also noteworthy for pioneering highly sophisticated use of motion capture technology and influencing several other films such as "Rise of the Planet of the Apes".
In recent years, the largest film industries by number of feature films produced have been those of India, the United States, China, Nigeria and Japan.
In Hollywood, superhero films have greatly increased in popularity and financial success, with films based on Marvel and DC comics regularly being released every year up to the present. In recent years, the superhero genre has been the most dominant genre in terms of American box office receipts. The 2019 superhero film "Avengers: Endgame" was the most successful movie of all time at the box office.
| https://en.wikipedia.org/wiki?curid=10783 |
Cinema of France
French cinema comprises the art of film and the movies made within the nation of France or by French filmmakers abroad.
France is the birthplace of cinema and was responsible for many significant contributions to the art form and the film-making process itself. Several important cinematic movements, including the Nouvelle Vague, began in the country. It is noted for having a particularly strong film industry, due in part to protections afforded by the French government.
Apart from its strong and innovative film tradition, France has also been a gathering spot for artists from across Europe and the world. For this reason, French cinema is sometimes intertwined with the cinema of foreign nations. Directors from nations such as Poland (Roman Polanski, Krzysztof Kieślowski, and Andrzej Żuławski), Argentina (Gaspar Noé and Edgardo Cozarinsky), Russia (Alexandre Alexeieff, Anatole Litvak), Austria (Michael Haneke), and Georgia (Géla Babluani, Otar Iosseliani) are prominent in the ranks of French cinema. Conversely, French directors have had prolific and influential careers in other countries, such as Luc Besson, Jacques Tourneur, or Francis Veber in the United States.
Paris has the highest density of cinemas in the world, measured by the number of movie theaters per inhabitant, and in most "downtown Paris" movie theaters, foreign movies that would be confined to "art house" cinemas elsewhere are shown alongside "mainstream" works. Paris also boasts the Cité du cinéma, a major studio north of the city, and Disney Studio, a theme park devoted to the cinema and the third theme park near the city, behind Disneyland and Parc Asterix.
France is the most successful film industry in Europe in terms of number of films produced per annum, with a record-breaking 300 feature-length films produced in 2015. France is also one of the few countries where non-American productions have the biggest share: American films represented only 44.9% of total admissions in 2014. This is largely due to the commercial strength of domestic productions, which accounted for 44.5% of admissions in 2014 (35.5% in 2015; 35.3% in 2016). Also, the French film industry is closer to being entirely self-sufficient than any other country in Europe, recovering around 80–90% of costs from revenues generated in the domestic market alone.
In 2013, France was the 2nd largest exporter of films in the world after the United States. A study in April 2014 showed the positive image which French cinema maintains around the world, being the most appreciated cinema after American cinema.
The Lumière brothers gave the first projection with the Cinematograph in Paris on 28 December 1895. The French film industry in the late 19th century and early 20th century was the world's most important. Auguste and Louis Lumière invented the cinématographe, and the 1895 Paris screening of their "L'Arrivée d'un train en gare de La Ciotat" is considered by many historians as the official birth of cinematography.
The early days of the industry, from 1896 to 1902, saw the dominance of four firms: Pathé Frères, the Gaumont Film Company, the Georges Méliès company, and the Lumières. Méliès invented many of the techniques of cinematic grammar, and among his fantastic, surreal short subjects is the first science fiction film "A Trip to the Moon" ("Le Voyage dans la Lune") in 1902.
In 1902 the Lumières abandoned everything but the production of film stock, leaving Méliès as the weakest player of the remaining three. (He would retire in 1914.) From 1904 to 1911 the Pathé Frères company led the world in film production and distribution.
At Gaumont, pioneer Alice Guy-Blaché (Léon Gaumont's former secretary) was made head of production and oversaw about 400 films, from her first, "La Fée aux Choux", in 1896, through 1906. She then continued her career in the United States, as did Maurice Tourneur and Léonce Perret after World War I.
In 1907 Gaumont owned and operated the biggest movie studio in the world, and along with the boom in construction of "luxury cinemas" like the Gaumont-Palace and the Pathé-Palace (both 1911), cinema became an economic challenger to legitimate theater by 1914.
After World War I, the French film industry suffered because of a lack of capital, and film production decreased as it did in most other European countries. This allowed the United States film industry to enter the European cinema market, because American films could be sold more cheaply than European productions, since the studios already had recouped their costs in the home market. When film studios in Europe began to fail, many European countries began to set import barriers. France installed an import quota of 1:7, meaning for every seven foreign films imported to France, one French film was to be produced and shown in French cinemas.
During the period between World War I and World War II, Jacques Feyder and Jean Vigo became two of the founders of poetic realism in French cinema. They also dominated French impressionist cinema, along with Abel Gance, Germaine Dulac and Jean Epstein.
In 1931, Marcel Pagnol filmed the first of his great trilogy "Marius", "Fanny", and "César". He followed this with other films including "The Baker's Wife". Other notable films of the 1930s included René Clair's "Under the Roofs of Paris" (1930), Jean Vigo's "L'Atalante" (1934), Jacques Feyder's "Carnival in Flanders" (1935), and Julien Duvivier's "La Belle Équipe" (1936). In 1935, renowned playwright and actor Sacha Guitry directed his first film and went on to make more than 30 films that were precursors to the New Wave era. In 1937, Jean Renoir, the son of painter Pierre-Auguste Renoir, directed "La Grande Illusion" ("The Grand Illusion"). In 1939, Renoir directed "La Règle du Jeu" ("The Rules of the Game"). Several critics have cited this film as one of the greatest of all time, particularly for its innovative camerawork, cinematography and sound editing.
Marcel Carné's "Les Enfants du Paradis" ("Children of Paradise") was filmed during World War II and released in 1945. The three-hour film was extremely difficult to make due to the Nazi occupation. Set in Paris in 1828, it was voted Best French Film of the Century in a poll of 600 French critics and professionals in the late 1990s.
In the magazine "Cahiers du cinéma", founded by André Bazin and two other writers in 1951, film critics raised the level of discussion of the cinema, providing a platform for the birth of modern film theory. Several of the "Cahiers" critics, including Jean-Luc Godard, François Truffaut, Claude Chabrol, Jacques Rivette and Éric Rohmer, went on to make films themselves, creating what was to become known as the French New Wave. Some of the first films of this new movement were Godard's "Breathless" ("À bout de souffle", 1960), starring Jean-Paul Belmondo, Rivette's "Paris Belongs to Us" ("Paris nous appartient", 1958 – distributed in 1961), starring Jean-Claude Brialy and Truffaut's "The 400 Blows" ("Les Quatre Cent Coups", 1959) starring Jean-Pierre Léaud.
Many contemporaries of Godard and Truffaut followed suit, or achieved international critical acclaim with styles of their own, such as the minimalist films of Robert Bresson and Jean-Pierre Melville, the Hitchcockian thrillers of Henri-Georges Clouzot, and other New Wave films by Agnès Varda and Alain Resnais. The movement, while an inspiration to other national cinemas and unmistakably a direct influence on the future New Hollywood directors, slowly faded by the end of the 1960s.
During this period, French commercial film also made a name for itself. Immensely popular French comedies with Louis de Funès topped the French box office. The war comedy "La Grande Vadrouille" (1966), from Gérard Oury with Bourvil and Terry-Thomas, was the most successful film in French theaters for more than 30 years. Another example was "La Folie des grandeurs" with Yves Montand. French cinema also was the birthplace for many subgenres of the crime film, most notably the modern caper film, starting with 1955's "Rififi" by American-born director Jules Dassin and followed by a large number of serious, noirish heist dramas as well as playful caper comedies throughout the sixties, and the "polar," a typical French blend of film noir and detective fiction. In addition, French movie stars began to claim fame abroad as well as at home. Popular actors of the period included Brigitte Bardot, Alain Delon, Romy Schneider, Catherine Deneuve, Jeanne Moreau, Simone Signoret, Yves Montand, Jean-Paul Belmondo and Jean Gabin. From the 1960s and 1970s onward, they were joined and followed by Michel Piccoli, Philippe Noiret, Annie Girardot, Jean-Louis Trintignant, Jean-Pierre Léaud, Claude Jade, Isabelle Huppert, Anny Duperey, Gérard Depardieu, Patrick Dewaere, Jean-Pierre Cassel, Miou-Miou, Brigitte Fossey, Stéphane Audran and Isabelle Adjani.
The 1979 film "La Cage aux Folles" ran for well over a year at the Paris Theatre, an arthouse cinema in New York City, and was a commercial success at theaters throughout the country, in both urban and rural areas. It won the Golden Globe Award for Best Foreign Language Film, and for years it remained the most successful foreign film to be released in the United States.
Jean-Jacques Beineix's "Diva" (1981) sparked the beginning of the 1980s wave of French cinema. Movies which followed in its wake included "Betty Blue" ("37°2 le matin", 1986) by Beineix, "The Big Blue" ("Le Grand bleu", 1988) by Luc Besson, and "The Lovers on the Bridge" ("Les Amants du Pont-Neuf", 1991) by Léos Carax. These films, made with a slick commercial style and emphasizing the alienation of their main characters, were known as Cinéma du look.
"Camille Claudel", directed by newcomer Bruno Nuytten and starring Isabelle Adjani and Gérard Depardieu, was a major commercial success in 1988, earning Adjani, who was also the film's co-producer, a César Award for best actress. The historical drama film "Jean de Florette" (1986) and its sequel "Manon des Sources" (1986) were among the highest grossing French films in history and brought Daniel Auteuil international recognition.
According to Raphaël Bassan, in his article "'The Angel': Un météore dans le ciel de l'animation" ("La Revue du cinéma", no. 393, April 1984), Patrick Bokanowski's "The Angel", shown in 1982 at the Cannes Film Festival, can be considered the beginning of contemporary animation. The masks erase all human personality in the characters, giving Bokanowski total control over the "matter" of the image and its optical composition. This is especially noticeable throughout the film, with images taken through distorting lenses or in the plastic treatment of the sets and costumes, for example in the scene of the designer. Bokanowski creates his own universe and obeys his own aesthetic logic, taking the viewer through a series of distorted spaces, obscure visions, metamorphoses and synthetic objects. Indeed, in the film the human may be viewed as a fetish object (for example, the doll hanging by a thread), with reference to Kafkaesque and Freudian theories on automata and the fear of man when faced with something as complex as himself. The ascent of the stairs can be read as a liberation from the ideas of death, culture, and sex, leading to the emblematic figure of the angel.
Jean-Paul Rappeneau's "Cyrano de Bergerac" was a major box-office success in 1990, earning several César Awards, including best actor for Gérard Depardieu, as well as an Academy Award nomination for best foreign picture.
Luc Besson made "La Femme Nikita" in 1990, a movie that inspired remakes in both the United States and Hong Kong. In 1994, he also made "Léon: The Professional" (starring Jean Reno and a young Natalie Portman), and in 1997 "The Fifth Element", which became a cult favorite and launched the career of Milla Jovovich.
Jean-Pierre Jeunet made "Delicatessen" and "The City of Lost Children" ("La Cité des enfants perdus"), both of which featured a distinctly fantastical style.
In 1992, Claude Sautet co-wrote (with Jacques Fieschi) and directed "Un Coeur en Hiver", considered by many to be a masterpiece. Mathieu Kassovitz's 1995 film "Hate" ("La Haine") received critical praise and made Vincent Cassel a star, and in 1997, Juliette Binoche won the Academy Award for Best Supporting Actress for her role in "The English Patient".
The success of Michel Ocelot's "Kirikou and the Sorceress" in 1998 rejuvenated the production of original feature-length animated films by such filmmakers as Jean-François Laguionie and Sylvain Chomet.
In 2000, Philippe Binant realized the first digital cinema projection in Europe, with the DLP CINEMA technology developed by Texas Instruments, in Paris.
In 2001, after a brief stint in Hollywood, Jean-Pierre Jeunet returned to France with "Amélie" ("Le Fabuleux Destin d'Amélie Poulain") starring Audrey Tautou. It became the highest-grossing French-language film ever released in the United States. The following year, "Brotherhood of the Wolf" became the second-highest-grossing French-language film in the United States since 1980 and went on to gross more than $70 million worldwide.
In 2008, Marion Cotillard won the Academy Award for Best Actress and the BAFTA Award for Best Actress in a Leading Role for her portrayal of legendary French singer Édith Piaf in "La Vie en Rose", the first French-language performance to be so honored. The film won two Oscars and four BAFTAs and became the third-highest-grossing French-language film in the United States since 1980. Cotillard was the first female and second person to win both an Academy Award and César Award for the same performance.
At the 2008 Cannes Film Festival, "Entre les murs" ("The Class") won the Palme d'Or, the 6th French victory at the festival. The 2000s also saw an increase in the number of individual competitive awards won by French artists at the Cannes Festival, for direction (Tony Gatlif, "Exils", 2004), screenplay (Agnès Jaoui and Jean-Pierre Bacri, "Look at Me", 2004), female acting (Isabelle Huppert, "The Piano Teacher", 2001; Charlotte Gainsbourg, "Antichrist", 2009) and male acting (Jamel Debbouze, Samy Naceri, Roschdy Zem, Sami Bouajila and Bernard Blancan, "Days of Glory", 2006).
The 2008 rural comedy "Bienvenue chez les Ch'tis" drew an audience of more than 20 million, the first French film to do so. Its $193 million gross in France puts it just behind "Titanic" as the most successful film of all time in French theaters.
In the 2000s, several French directors made international productions, often in the action genre. These include Gérard Pirès ("Riders", 2002), Pitof ("Catwoman", 2004), Jean-François Richet ("Assault on Precinct 13", 2005), Florent Emilio Siri ("Hostage", 2005), Christophe Gans ("Silent Hill", 2006), Mathieu Kassovitz ("Babylon A.D.", 2008), Louis Leterrier ("The Transporter", 2002; "Transporter 2", 2005; Olivier Megaton directed "Transporter 3", 2008), Alexandre Aja ("Mirrors", 2008), and Pierre Morel ("Taken", 2009).
Surveying the entire range of French filmmaking today, Tim Palmer calls contemporary cinema in France a kind of ecosystem, in which commercial cinema co-exists with artistic radicalism, first-time directors (who make up about 40% of all France's directors each year) mingle with veterans, and there even occasionally emerges a fascinating pop-art hybridity, in which the features of intellectual and mass cinemas are interrelated (as in filmmakers like Valeria Bruni-Tedeschi, Olivier Assayas, Maïwenn, Sophie Fillières, Serge Bozon, and others).
One of the most noticed and best reviewed films of 2010 was the drama "Of Gods and Men" ("Des hommes et des dieux"), about the assassination of seven monks in Tibhirine, Algeria. 2011 saw the release of "The Artist", a silent film shot in black and white by Michel Hazanavicius that reflected on the end of Hollywood's silent era.
French cinema continued its upward trend of earning awards at the Cannes Festival, including the prestigious Grand Prix for "Of Gods and Men" (2010) and the Jury Prize for "Poliss" (2011); the Best Director Award for Mathieu Amalric ("On Tour", 2010); the Best Actress Award for Juliette Binoche ("Certified Copy", 2010); and the Best Actor Award for Jean Dujardin ("The Artist", 2011).
In 2011, the film "Intouchables" became the year's most watched film in France (foreign films included). After ten weeks, nearly 17.5 million people had seen the film in France, making "Intouchables" the second most-seen French movie of all time in France, and the third including foreign movies.
In 2012, with 226 million admissions worldwide (US$1,900 million) for French films (582 films released in 84 countries), including 82 million admissions in France (US$700 million), French cinema had its fourth best year since 1985. With 144 million admissions outside France (US$1,200 million), 2012 was the best year since at least 1994 (when Unifrance began collecting data), and French cinema reached a market share of 2.95% of worldwide admissions and 4.86% of worldwide sales. Three films particularly contributed to this record year: "Taken 2", "The Intouchables" and "The Artist". In 2012, films shot in French ranked 4th in admissions (145 million), behind films shot in English (more than a billion admissions in the US alone), Hindi (no accurate data, but estimated at 3 billion across all Indian languages) and Chinese (275 million in China plus a few million abroad), but above films shot in Korean (115 million admissions in South Korea plus a few million abroad) and Japanese (102 million admissions in Japan plus a few million abroad, the best figure since the 104 million admissions of 1973). French-language movies ranked 2nd in exports (outside of French-speaking countries) after films in English. 2012 was also the year the French animation studio Mac Guff was acquired by an American studio, Universal Pictures, through its Illumination Entertainment subsidiary. Illumination Mac Guff became the animation studio for some of the top English-language animated movies of the 2010s, including "The Lorax" and the "Despicable Me" franchise.
In 2015 French cinema sold 106 million tickets and grossed €600 million outside of the country. The highest-grossing film was "Taken 3" (€261.7 million) and the largest territory in admissions was China (14.7 million).
As the advent of television threatened the success of cinema, countries were faced with the problem of reviving movie-going. The French cinema market, and more generally the French-speaking market, is smaller than the English-speaking market; one reason being that some major markets, most prominently the United States, are reluctant to generally accept foreign films, especially foreign-language and subtitled productions. As a consequence, French movies have to be amortized on a relatively small market and thus generally have budgets far lower than their American counterparts, ruling out expensive settings and special effects.
The French government has implemented various measures aimed at supporting local film production and movie theaters. The Canal+ TV channel has a broadcast license requiring it to support the production of movies. Some taxes are levied on movies and TV channels for use as subsidies for movie production. Some tax breaks are given for investment in movie productions, as is common elsewhere including in the United States. The sale of DVDs is prohibited for four months after the showing in theaters, so as to ensure some revenue for movie theaters. Recently, Messerlin and Parc (2014, 2017) described the effect of subsidies in the French film industry.
The French national and regional governments involve themselves in film production. For example, the award-winning documentary "In the Land of the Deaf" ("Le Pays des sourds") was created by Nicolas Philibert in 1992. The film was co-produced by multinational partners, which reduced the financial risks inherent in the project; and co-production also ensured enhanced distribution opportunities.
In Anglophone distribution, "In the Land of the Deaf" was presented in French Sign Language (FSL) and French, with English subtitles and closed captions.
Notable French film distribution and/or production companies include: | https://en.wikipedia.org/wiki?curid=10784 |
Cinema of the Soviet Union
The cinema of the Soviet Union includes films produced by the constituent republics of the Soviet Union reflecting elements of their pre-Soviet culture, language and history, although they were all regulated by the central government in Moscow. The most prolific republics in film production, after the Russian Soviet Federative Socialist Republic, were Armenia, Azerbaijan, Georgia, Ukraine, and, to a lesser degree, Lithuania, Belarus and Moldavia. At the same time, the nation's film industry, which was fully nationalized throughout most of the country's history, was guided by philosophies and laws propounded by the monopoly Soviet Communist Party, which introduced a new view of the cinema, socialist realism, different from anything before or after the existence of the Soviet Union.
Upon the establishment of the Russian Soviet Federative Socialist Republic (RSFSR) on November 7, 1917 (although the Union of Soviet Socialist Republics did not officially come into existence until December 30, 1922), what had formerly been the Russian Empire quickly came under the domination of a Soviet reorganization of all its institutions. From the outset, the leaders of this new state held that film would be the most ideal propaganda tool for the Soviet Union because of its widespread popularity among the established citizenry of the new land. Vladimir Lenin viewed film as the most important medium for educating the masses in the ways, means and successes of communism. As a consequence, Lenin issued the "Directives on the Film Business" on 17 January 1922, which instructed the People's Commissariat for Education to systematise the film business by registering and numbering all films shown in the Russian Soviet Federative Socialist Republic, extracting rent from all privately owned cinemas and subjecting them to censorship. Joseph Stalin later also regarded cinema as of prime importance.
However, between World War I and the Russian Revolution, the Russian film industry and the infrastructure needed to support it (e.g., electrical power) had deteriorated to the point of unworkability. The majority of cinemas had been in the corridor between Moscow and Saint Petersburg, and most were out of commission. Additionally, many of the performers, producers, directors and other artists of pre-Soviet Russia had fled the country or were moving ahead of Red Army forces as they pushed further and further south into what remained of the Russian Empire. Furthermore, the new government did not have the funds to spare for an extensive reworking of the system of filmmaking. Thus, they initially opted for project approval and censorship guidelines while leaving what remained of the industry in private hands. As this amounted mostly to cinema houses, the first Soviet films consisted of recycled films of the Russian Empire and its imports, to the extent that these were not determined to be offensive to the new Soviet ideology. Ironically, the first new film released in Soviet Russia did not exactly fit this mold: this was "Father Sergius", a religious film completed during the last weeks of the Russian Empire but not yet exhibited. It appeared on Soviet screens in 1918.
Beyond this, the government was principally able to fund only short, educational films, the most famous of which were the agitki – propaganda films intended to "agitate", or energize and enthuse, the masses to participate fully in approved Soviet activities, and deal effectively with those who remained in opposition to the new order. These short (often one small reel) films were often simple visual aids and accompaniments to live lectures and speeches, and were carried from city to city, town to town, village to village (along with the lecturers) to educate the entire countryside, even reaching areas where film had not been previously seen.
Newsreels, as documentaries, were the other major form of earliest Soviet cinema. Dziga Vertov's newsreel series "Kino-Pravda", the best known of these, lasted from 1922 to 1925 and had a propagandistic bent; Vertov used the series to promote socialist realism but also to experiment with cinema.
Still, in 1921 there was not one functioning cinema in Moscow until late in the year. The rapid success of the first one to reopen, showing old Russian and imported feature films, jumpstarted the industry significantly, especially insofar as the government did not heavily or directly regulate what was shown, and by 1923 an additional 89 cinemas had opened. Despite extremely high taxation of ticket sales and film rentals, there was an incentive for individuals to begin making feature film product again – there were places to show the films – albeit they now had to conform their subject matter to a Soviet world view. In this context, the directors and writers who were in support of the objectives of communism assumed quick dominance in the industry, as they were the ones who could most reliably and convincingly turn out films that would satisfy government censors.
New talent joined the experienced remainder, and an artistic community assembled with the goal of defining "Soviet film" as something distinct from, and better than, the output of "decadent capitalism". The leaders of this community viewed it as essential to this goal to be free to experiment with the entire nature of film, a position which would result in several well-known creative efforts but also in an unforeseen counter-reaction by the increasingly solidifying administrators of the government-controlled society.
In 1924 Lebedev wrote a book on the history of film which he described as "the first Soviet attempt at systematization of the meager available sources [on cinema] for the general reader", along with other articles published by "Pravda", "Izvestia" and "Kino". In the book he draws attention to the funding challenges that followed the nationalization of Soviet cinema. In 1925 all film organizations merged to form "Sovkino". Under "Sovkino" the film industry was given a tax-free benefit and held a monopoly on all film-related exports and imports.
Sergei Eisenstein's "Battleship Potemkin" was released to wide acclaim in 1925; the film was heavily fictionalized and also propagandistic, giving the party line about the virtues of the proletariat. The "kinokomitet" or "Film Committee" established that same year published translations of important books about film theory by Béla Balázs, Rudolf Harms and Léon Moussinac.
One of the most popular films released in the 1930s was "Circus". Immediately after the end of World War II, color movies such as "The Stone Flower" (1946), "Ballad of Siberia" (1947), and "Cossacks of the Kuban" (1949) were released. Other notable films from the late 1930s and 1940s include "Alexander Nevsky" (1938) and "Ivan the Terrible".
In the late 1950s and early 1960s Soviet cinema produced "Ballad of a Soldier", which won the 1961 BAFTA Award for Best Film, and "The Cranes Are Flying".
"The Height" is considered to be one of the best films of the 1950s (it also became the foundation of the bard movement).
In the 1980s there was a diversification of subject matter. Touchy issues could now be discussed openly. The results were films like "Repentance", which dealt with repression in Georgia, and the allegorical science fiction movie "Kin-dza-dza!".
After the death of Stalin, Soviet filmmakers were given a freer hand to film what they believed audiences wanted to see in their films' characters and stories. The industry remained a part of the government, and any material found politically offensive or undesirable was removed, edited, reshot, or shelved. The definition of "socialist realism" was liberalized to allow the development of more human characters, but communism still had to remain uncriticized in its fundamentals. Additionally, the degree of relative artistic liberality varied from administration to administration.
Examples of such censorship include:
The first Soviet Russian state film organization, the Film Subdepartment of the People's Commissariat for Education, was established in 1917. The work of the nationalized motion-picture studios was administered by the All-Russian Photography and Motion Picture Department, which was reorganized in 1923 into Goskino, which in 1926 became Sovkino. The world's first state filmmaking school, the First State School of Cinematography, was established in Moscow in 1919.
During the Russian Civil War, agitation trains and ships visited soldiers, workers, and peasants. Lectures, reports, and political meetings were accompanied by newsreels about events at the various fronts.
In the 1920s, the documentary film group headed by Dziga Vertov blazed the trail from the conventional newsreel to the "image-centered publicistic film", which became the basis of the Soviet film documentary. Typical of the 1920s were the topical news serial "Kino-Pravda" and the film "Forward, Soviet!" by Vertov, whose experiments and achievements in documentary films influenced the development of Russian and world cinematography. Other important films of the 1920s were Esfir Shub's historical-revolutionary films such as "The Fall of the Romanov Dynasty". The film "Hydropeat" by Yuri Zhelyabuzhsky marked the beginning of popular science films. Feature-length agitation films in 1918–21 were important in the development of the film industry. Innovation in Russian filmmaking was expressed particularly in the work of Eisenstein. "Battleship Potemkin" was noteworthy for its innovative montage and the metaphorical quality of its film language. It won world acclaim. Eisenstein developed concepts of the revolutionary epic in the film "October: Ten Days That Shook the World". Also noteworthy was Vsevolod Pudovkin's adaptation of Maxim Gorky's "Mother" to the screen in 1926. Pudovkin developed themes of revolutionary history in the film "The End of St. Petersburg" (1927). Other noteworthy silent films were films dealing with contemporary life such as Boris Barnet's "The House on Trubnaya". The films of Yakov Protazanov were devoted to the revolutionary struggle and the shaping of a new way of life, such as "Don Diego and Pelagia" (1928). Ukrainian director Alexander Dovzhenko was noteworthy for the historical-revolutionary epics "Zvenigora" and "Arsenal" and the poetic film "Earth".
In the early 1930s, Russian filmmakers applied socialist realism to their work. Among the most outstanding films was "Chapaev", a film about Russian revolutionaries and society during the Revolution and Civil War. Revolutionary history was developed in films such as "Golden Mountains" by Sergei Yutkevich, "Outskirts" by Boris Barnet, and the Maxim trilogy by Grigori Kozintsev and Leonid Trauberg: "The Youth of Maxim", "The Return of Maxim", and "The Vyborg Side". Also notable were biographical films about Vladimir Lenin such as Mikhail Romm's "Lenin in October" and "Lenin in 1918". The life of Russian society and everyday people were depicted in films such as "Seven Brave Men" and "Komsomolsk" by Sergei Gerasimov. The comedies of Grigori Aleksandrov such as "Circus", "Volga-Volga", and "Tanya" as well as "The Rich Bride" by Ivan Pyryev and "By the Bluest of Seas" by Boris Barnet focus on the psychology of the common person, enthusiasm for work and intolerance for remnants of the past. Many films focused on national heroes, including "Alexander Nevsky" by Sergei Eisenstein, "Minin and Pozharsky" by Vsevolod Pudovkin, and "Bogdan Khmelnitsky" by Igor Savchenko. There were adaptations of literary classics, particularly Mark Donskoy's trilogy of films about Maxim Gorky: "The Childhood of Maxim Gorky", "My Apprenticeship", and "My Universities".
During the late 1920s and early 1930s the Stalin wing of the Communist Party consolidated its authority and set about transforming the Soviet Union on both the economic and cultural fronts. The economy moved from the market-based New Economic Policy (NEP) to a system of central planning. The new leadership declared a "cultural revolution" in which the party would exercise control over cultural affairs, including artistic expression. Cinema existed at the intersection of art and economics; so it was destined to be thoroughly reorganized in this episode of economic and cultural transformation.
To implement central planning in cinema, the new entity Soyuzkino was created in 1930. All the hitherto autonomous studios and distribution networks that had grown up under NEP's market would now be coordinated in their activities by this planning agency. Soyuzkino's authority also extended to the studios of the national republics such as VUFKU, which had enjoyed more independence during the 1920s. Soyuzkino consisted of an extended bureaucracy of economic planners and policy specialists who were charged to formulate annual production plans for the studios and then to monitor the distribution and exhibition of finished films.
With central planning came more centralized authority over creative decision making. Script development became a long, torturous process under this bureaucratic system, with various committees reviewing drafts and calling for cuts or revisions. In the 1930s censorship became more exacting with each passing year. Feature film projects would drag out for months or years and might be terminated at any point along the way because of the capricious decision of one or another censoring committee.
(Image caption: Alexander Dovzhenko drew from Ukrainian folk culture in such films as "Earth" (1930).)
This redundant oversight slowed down production and inhibited creativity. Although central planning was supposed to increase the film industry's productivity, production levels declined steadily through the 1930s. The industry was releasing over one hundred features annually at the end of the NEP period, but that figure fell to seventy by 1932 and to forty-five by 1934. It never again reached triple digits during the remainder of the Stalin era. Veteran directors experienced precipitous career declines under this system of control; whereas Eisenstein was able to make four features between 1924 and 1929, he completed only one film, "Alexander Nevsky" (1938), during the entire decade of the 1930s. His planned adaptation of the Ivan Turgenev story "Bezhin Meadow" (1935–37) was halted during production in 1937 and officially banned, one of many promising film projects that fell victim to an exacting censorship system.
Meanwhile, the USSR cut off its film contacts with the West. It stopped importing films after 1931 out of concern that foreign films exposed audiences to capitalist ideology. The industry also freed itself from dependency on foreign technologies. During its industrialization effort of the early 1930s, the USSR finally built an array of factories to supply the film industry with the nation's own technical resources.
To secure independence from the West, industry leaders mandated that the USSR develop its own sound technologies, rather than taking licenses on Western sound systems. Two Soviet scientists, Alexander Shorin in Leningrad (present-day St. Petersburg) and Pavel Tager in Moscow, conducted research through the late 1920s on complementary sound systems, which were ready for use by 1930. The implementation process, including the cost of refitting movie theaters, proved daunting, and the USSR did not complete the transition to sound until 1935. Nevertheless, several directors made innovative use of sound once the technology became available. In "Enthusiasm: The Symphony of Donbass" (1930), his documentary on coal mining and heavy industry, Dziga Vertov based his soundtrack on an elegantly orchestrated array of industrial noises. In "The Deserter" (1933) Pudovkin experimented with a form of "sound counterpoint" by exploiting tensions and ironic dissonances between sound elements and the image track. And in "Alexander Nevsky", Eisenstein collaborated with the composer Sergei Prokofiev on an "operatic" film style that elegantly coordinated the musical score and the image track.
As Soviet cinema made the transition to sound and central planning in the early 1930s, it was also put under a mandate to adopt a uniform film style, commonly identified as "socialist realism". In 1932 the party leadership ordered the literary community to abandon the avant-garde practices of the 1920s and to embrace socialist realism, a literary style that, in practice, was actually close to 19th-century realism. The other arts, including cinema, were subsequently instructed to develop the aesthetic equivalent. For cinema, this meant adopting a film style that would be legible to a broad audience, thus avoiding a possible split between the avant-garde and mainstream cinema that was evident in the late 1920s. The director of Soyuzkino and chief policy officer for the film industry, Boris Shumyatsky (1886–1938), who served from 1931 to 1938, was a harsh critic of the montage aesthetic. He championed a "cinema for the millions", which would use clear, linear narration. Although American movies were no longer being imported in the 1930s, the Hollywood model of continuity editing was readily available, and it had a successful track record with Soviet movie audiences. Soviet socialist realism was built on this style, which assured tidy storytelling. Various guidelines were then added to the doctrine: positive heroes to act as role models for viewers; lessons in good citizenship for spectators to embrace; and support for reigning policy decisions of the Communist Party.
Such aesthetic policies, enforced by the rigorous censorship apparatus of Soyuzkino, resulted in a number of formulaic films. Apparently, they did succeed in sustaining a true "cinema of the masses". The 1930s witnessed some stellar examples of popular cinema. The single most successful film of the decade, in terms of both official praise and genuine affection from the mass audience, was "Chapaev" (1934), directed by the Vasilyev brothers. Based on the life of a martyred Red Army commander, the film was touted as a model of socialist realism, in that Chapayev and his followers battled heroically for the revolutionary cause. The film also humanized the title character, giving him personal foibles, an ironic sense of humour, and a rough peasant charm. These qualities endeared him to the viewing public: spectators reported seeing the film multiple times during its first run in 1934, and "Chapaev" was periodically re-released for subsequent generations of audiences.
A genre that emerged in the 1930s to consistent popular acclaim was the musical comedy, and a master of that form was Grigori Aleksandrov (1903–1984). He effected a creative partnership with his wife, the brilliant comic actress and chanteuse Lyubov Orlova (1902–1975), in a series of crowd-pleasing musicals. Their pastoral comedy "Volga-Volga" (1938) was surpassed only by "Chapaev" in terms of box-office success. The fantasy element of their films, with lively musical numbers reviving the montage aesthetic, sometimes stretched the boundaries of socialist realism, but the genre could also allude to contemporary affairs. In Aleksandrov's 1940 musical "Tanya", Orlova plays a humble servant girl who rises through the ranks of the Soviet industrial leadership after developing clever labour-saving work methods. Audiences could enjoy the film's comic turn on the "Cinderella" story while also learning about the value of efficiency in the workplace.
Immediately after the end of the Second World War, color movies such as "The Stone Flower" (1946), "Ballad of Siberia" (1947), and "Cossacks of the Kuban" (1949) were released.
Other notable films from the 1940s include the black-and-white films "Alexander Nevsky", "Ivan the Terrible" and "Encounter at the Elbe".
The Soviet film industry suffered during the period after World War II. On top of dealing with the severe physical and monetary losses of the war, Stalin's regime tightened social control and censorship in order to manage the effects recent exposure to the West had on the people. The postwar period was marked by an end of almost all autonomy in the Soviet Union. The "Catalogue of Soviet Films" recorded remarkably low numbers of films being produced from 1945 to 1953, with as few as nine films produced in 1951 and a maximum of twenty-three produced in 1952. These numbers do not, however, include many of the works which are not generally considered to be "film" in an elitist sense, such as filmed versions of theatrical works and operas, feature-length event documentaries and travelogues, short films for children, and experimental stereoscopic films. But compared to the four hundred to five hundred films produced every year by Hollywood, the Soviet film industry was practically dead.
Even as the economy of the Soviet Union strengthened, film production continued to decrease. A resolution passed by the Council of Ministers in 1948 further crippled the film industry. The resolution criticized the work of the industry, saying that an emphasis placed on quantity over quality had ideologically weakened the films. Instead, the council insisted that every film produced must be a masterpiece for promoting communist ideas and the Soviet system. Often, Stalin had the ultimate decision on whether a newly produced film was appropriate for public viewing. After meetings of the Politburo, the Minister of the Film Industry, Ivan Bolshakov, privately screened films for Stalin and top members of the Soviet government. The strict limitations on content and the complex, centralized approval process drove many screenwriters away, and studios had much difficulty producing any of the quality films mandated by the 1948 resolution.
Movie theaters in the postwar period faced the problem of satisfying the growing appetites of Soviet audiences for films while dealing with the shortage of newly produced works from studios. In response, cinemas played the same films for months at a time, many of them the works of the late 1930s. Anything new drew millions of people to the box office, and many theaters screened foreign films to attract larger audiences. Most of these foreign films were "trophy films", two thousand films brought into the country by the Red Army after the occupation of Germany and Eastern Europe in World War II. In the top secret minutes for the CPSU Committee Meeting on August 31, 1948, the committee permitted the Minister of the Film Industry to release fifty of these films in the Soviet Union. Of these fifty, Bolshakov was only allowed to release twenty-four for screening to the general public, mainly films made in Germany, Austria, Italy, and France. The other twenty-six films, consisting almost entirely of American films, were only allowed to be shown in private screenings. The minutes also include a separate list of permitted German musical films, which were mainly German and Italian film adaptations of famous operas. Most of the trophy films were released in 1948–49, but somewhat strangely, compiled lists of the released films include ones not previously mentioned in the official minutes of the Central Committee.
The public release of these trophy films seems contradictory in the context of the 1940s Soviet Union. The Soviet government allowed the exhibition of foreign films which contained far more subversive ideas than any a Soviet director would have ever attempted putting in a film, at a time when Soviet artists found themselves unemployed because of censorship laws. Historians hypothesize many possible reasons why the Soviet government showed such seemingly inexplicable leniency toward the foreign films. The government may have granted cinemas the right to show the films so they could stay in business after the domestic film industry had declined. A second hypothesis speculates that the government saw the films as an easy source of money to help rebuild the nation after the war. The minutes of the CPSU Central Committee meeting seem to support the latter idea, with instructions that the films were to bring in a net income of at least 750 million rubles to the State coffers over the course of a year from public and private screenings, and 250 million rubles of this were supposed to come from rentals to the trade union cinema network.
In addition to releasing the films, the committee also charged Bolshakov and the Agitation and Propaganda Department of the CPSU Central Committee "with making the necessary editorial corrections to the films and with providing an introductory text and carefully edited subtitles for each film." In general, the captured Nazi films were considered apolitical enough to be shown to the general populace. Still the Propaganda and Agitation Section of the Central Committee ran into trouble with the censoring of two films slated for release. The censors found it impossible to remove the "Zionist" ideas from "Jud Suss", an anti-Semitic, Nazi propaganda film. The censors also had trouble with a film adaptation of "Of Mice and Men" because of the representation of the poor as a detriment to society.
There is very little direct evidence of how Soviet audiences received the trophy films. Soviet magazines and newspapers never reviewed the films, there were no audience surveys, and no records exist of how many people viewed them. In order to judge the reception and popularity of these foreign films, historians have mainly relied on anecdotal evidence. The German musical comedy "The Woman of My Dreams" received mixed reviews according to this evidence. "Kultura i Zhizn" published a supposed survey compiled from readers' letters to the editor in March 1947 which criticized the film for being idealess, lowbrow, and even harmful. Bulat Okudzhava offered a contradicting viewpoint in 1986, saying that everyone in the city of Tbilisi was crazy about the film. According to him, everywhere he went people were talking about the film and whistling the songs. Of the two accounts, film historians generally consider Okudzhava's more reliable than the one presented by "Kultura i Zhizn". Films such as "His Butler's Sister", "The Thief of Bagdad", "Waterloo Bridge" and "Sun Valley Serenade", although not technically trophies as they had been purchased legally during the wartime alliance with America, were highly popular with Soviet audiences. In "Vechernyaya Moskva" (October 4, 1946), M. Chistiakov reprimanded theaters and the Soviet film industry for the fact that over a six-month timespan, sixty of the films shown had been tasteless Western films rather than Soviet ones. Even in the criticism of the films and the crusading efforts of the anti-cosmopolitan campaign against them, it is clear that the trophy films had quite an impact on Soviet society.
With the start of the Cold War, writers, still considered the primary auteurs, were all the more reluctant to take up script writing, and the early 1950s saw only a handful of feature films completed during any year. The death of Stalin in 1953 was a relief to some people, and all the more so was the official repudiation of his public image as a benign and competent leader by Nikita Khrushchev three years later. This latter event gave filmmakers the margin of comfort they needed to move away from the narrow stories of socialist realism, expand its boundaries, and begin work on a wider range of entertaining and artistic Soviet films.
The 1960s and 1970s saw the creation of many films, many of which molded Soviet and post-Soviet culture.
Soviet films tend to be rather culture-specific and are difficult for many foreigners to understand without prior exposure to the culture. Various Soviet directors were more concerned with artistic than with commercial success (they were paid by the state, so money was not a critical issue). This contributed to the creation of a large number of more philosophical and poetical films. The best-known examples of such films are those by the directors Andrei Tarkovsky, Sergei Parajanov and Nikita Mikhalkov. In keeping with Russian culture, tragicomedies were very popular. These decades were also prominent in the production of the Eastern, or Red Western.
Animation was a respected genre, with many directors experimenting with technique. "Tale of Tales" (1979) by Yuri Norstein was twice given the title of "Best Animated Film of All Eras and Nations" by animation professionals from around the world, in 1984 and 2002.
In the year of the 60th anniversary of Soviet cinema (1979), on April 25, a decision of the Presidium of the Supreme Soviet of the USSR established a commemorative "Soviet Cinema Day". It was then celebrated in the USSR each year on August 27, the day on which Vladimir Lenin signed a decree to nationalise the country's cinematic and photographic industries.
The policies of perestroika and glasnost saw a loosening of the censorship of earlier eras. A genre known as "chernukha" (from the Russian word for "black"), including films such as "Little Vera", portrayed the harsher side of Soviet life.
| https://en.wikipedia.org/wiki?curid=10786 |
Cinema of Italy
The Cinema of Italy comprises the films made within Italy or by Italian directors. The first Italian director is considered to be Vittorio Calcina, a collaborator of the Lumière Brothers, who filmed Pope Leo XIII in 1896. Since its beginning, Italian cinema has influenced film movements worldwide. As of 2018, Italian films have won 14 Academy Awards for Best Foreign Language Film (the most of any country) as well as 12 Palmes d'Or (the second-most of any country), one Academy Award for Best Picture and many Golden Lions and Golden Bears.
Italy is the birthplace of Art Cinema, and the stylistic aspect of film has been the most important factor in the history of Italian movies. In the early 1900s, artistic and epic films such as "Otello" (1906), "The Last Days of Pompeii" (1908), "L'Inferno" (1911), "Quo Vadis" (1913), and "Cabiria" (1914) were made as adaptations of books or stage plays. Italian filmmakers were utilizing complex set designs, lavish costumes, and record budgets to produce pioneering films. One of the first cinematic avant-garde movements, Italian Futurism, took place in Italy in the late 1910s. After a period of decline in the 1920s, the Italian film industry was revitalized in the 1930s with the arrival of sound film. A popular Italian genre during this period, the "Telefoni Bianchi", consisted of comedies with glamorous backgrounds.
While Italy's Fascist government provided financial support for the nation's film industry, most notably the construction of the Cinecittà studios (the largest film studio in Europe), it also engaged in censorship, and thus many Italian films produced in the late 1930s were propaganda films. Post-World War II Italy saw the rise of the influential Italian neorealist movement, which launched the directorial careers of Luchino Visconti, Roberto Rossellini, and Vittorio De Sica. Neorealism declined in the late 1950s in favor of lighter films, such as those of the "Commedia all'italiana" genre and important directors like Federico Fellini and Michelangelo Antonioni. Actresses such as Sophia Loren, Giulietta Masina and Gina Lollobrigida achieved international stardom during this period.
The Spaghetti Western achieved popularity in the mid-1960s, peaking with Sergio Leone's Dollars Trilogy, which featured enigmatic scores by composer Ennio Morricone that have become popular culture icons of the Western genre. Erotic Italian thrillers, or "giallos", produced by directors such as Mario Bava and Dario Argento in the 1970s, influenced the horror genre worldwide. During the 1980s and 1990s, directors such as Ermanno Olmi, Bernardo Bertolucci, Giuseppe Tornatore, Gabriele Salvatores and Roberto Benigni brought critical acclaim back to Italian cinema.
The country is also famed for its prestigious Venice Film Festival, the oldest film festival in the world, held annually since 1932 and awarding the Golden Lion. In 2008 the Venice Days ("Giornate degli Autori"), a section held in parallel to the Venice Film Festival, produced, in collaboration with Cinecittà studios and the Ministry of Cultural Heritage, a list of 100 films that have changed the collective memory of the country between 1942 and 1978: the "100 Italian films to be saved".
The French Lumière brothers commenced public screenings in Italy in 1896: in March 1896, in Rome and Milan; in April in Naples, Salerno and Bari; in June in Livorno; in August in Bergamo, Bologna and Ravenna; in October in Ancona; and in December in Turin, Pescara and Reggio Calabria. Lumière trainees produced short films documenting everyday life and comic strips in the late 1890s and early 1900s. Pioneering Italian cinematographer Filoteo Alberini patented his "Kinetograph" during this period.
The Italian film industry took shape between 1903 and 1908, led by three major organizations: Cines, based in Rome; and the Turin-based companies Ambrosio Film and Itala Film. Other companies soon followed in Milan and Naples, and these early companies quickly attained a respectable production quality and were able to market their products both within Italy and abroad.
Early Italian films typically consisted of adaptations of books or stage plays, such as Mario Caserini's "Otello" (1906) and Arturo Ambrosio's 1908 adaptation of the novel, "The Last Days of Pompeii". Also popular during this period were films about historical figures, such as Caserini's "Beatrice Cenci" (1909) and Ugo Falena's "Lucrezia Borgia" (1910). "L'Inferno", produced by Milano Films in 1911, was the first full-length Italian feature film ever made. Popular early Italian actors included Emilio Ghione, Alberto Collo, Bartolomeo Pagano, Amleto Novelli, Lyda Borelli, Ida Carloni Talli, Lidia Quaranta and Maria Jacobini.
Enrico Guazzone's 1913 film "Quo Vadis" was one of the earliest "blockbusters" in cinema history, utilizing thousands of extras and a lavish set design. Giovanni Pastrone's 1914 film "Cabiria" was an even larger production, requiring two years and a record budget to produce, and it was the first epic film ever made. Nino Martoglio's "Lost in Darkness", also produced in 1914, documented life in the slums of Naples, and is considered a precursor to the Neorealist movement of the 1940s and 1950s.
Between 1911 and 1919, Italy was home to the first avant-garde movement in cinema, inspired by the country's Futurism movement. The 1916 Manifesto of Futuristic Cinematography was signed by Filippo Tommaso Marinetti, Arnaldo Ginna, Bruno Corra, Giacomo Balla and others. To the Futurists, cinema was an ideal art form, being a fresh medium, and able to be manipulated by speed, special effects and editing. Most of the futuristic-themed films of this period have been lost, but critics cite "Thaïs" (1917) by Anton Giulio Bragaglia as one of the most influential, serving as the main inspiration for German Expressionist cinema in the following decade.
The Italian film industry struggled against rising foreign competition in the years following World War I. Several major studios, among them Cines and Ambrosio, formed the Unione Cinematografica Italiana to coordinate a national strategy for film production. This effort was largely unsuccessful, however, due to a wide disconnect between production and exhibition (some movies were not released until several years after they were produced). Among the notable Italian films of the late silent era were Mario Camerini's "Rotaie" (1929) and Alessandro Blasetti's "Sun" (1929).
In 1930, Gennaro Righelli directed the first Italian talking picture, "The Song of Love". This was followed by Blasetti's "Mother Earth" (1930) and "Resurrection" (1931), and Camerini's "Figaro and His Great Day" (1931). The advent of talkies led to stricter censorship by the Fascist government.
During the 1930s, light comedies known as "telefoni bianchi" ("white telephones") were predominant in Italian cinema. These films, which featured lavish set designs, promoted conservative values and respect for authority, and thus typically avoided the scrutiny of government censors. Important examples of "telefoni bianchi" include Guido Brignone's "Paradiso" (1932), Carlo Bragaglia's "O la borsa o la vita" (1933), and Righelli's "Together in the Dark" (1935). Historical films such as Blasetti's "1860" (1934) and Carmine Gallone's "" (1937) were also popular during this period.
In 1934, the Italian government created the General Directorate for Cinema ("Direzione Generale per le Cinematografia"), and appointed Luigi Freddi its director. With the approval of Benito Mussolini, this directorate called for the establishment of a town southeast of Rome devoted exclusively to cinema, dubbed the "Cinecittà" ("Cinema City"). Completed in 1937, the Cinecittà provided everything necessary for filmmaking: theaters, technical services, and even a cinematography school, the Centro Sperimentale di Cinematografia, for younger apprentices. The Cinecittà studios were Europe's most advanced production facilities, and greatly boosted the technical quality of Italian films. Many films are still shot entirely in Cinecittà.
During this period, Mussolini's son, Vittorio, created a national production company and organized the work of noted authors, directors and actors (including even some political opponents), thereby creating an interesting communication network among them, which produced several noted friendships and stimulated cultural interaction.
By the end of World War II, the Italian "neorealist" movement had begun to take shape. Neorealist films typically dealt with the working class (in contrast to the "Telefoni Bianchi"), and were shot on location. Many neorealist films, but not all, utilized non-professional actors. Though the term "neorealism" was used for the first time to describe Luchino Visconti’s 1943 film, "Ossessione", there were several important precursors to the movement, most notably Camerini's "What Scoundrels Men Are!" (1932), which was the first Italian film shot entirely on location, and Blasetti's 1942 film, "Four Steps in the Clouds".
"Ossessione" angered Fascist officials. Upon viewing the film, Vittorio Mussolini is reported to have shouted, "This is not Italy!" before walking out of the theater. The film was subsequently banned in the Fascist-controlled parts of Italy. While neorealism exploded after the war, and was incredibly influential at the international level, neorealist films made up only a small percentage of Italian films produced during this period, as postwar Italian moviegoers preferred escapist comedies starring actors such as Totò and Alberto Sordi.
Neorealist works such as Roberto Rossellini's trilogy "Rome, Open City" (1945), "Paisà" (1946), and "Germany, Year Zero" (1948), with professional actors such as Anna Magnani and a number of non-professional actors, attempted to describe the difficult economic and moral conditions of postwar Italy and the changes in public mentality in everyday life. Visconti's "The Earth Trembles" (1948) was shot on location in a Sicilian fishing village, and utilized local non-professional actors. Giuseppe De Santis, on the other hand, used actors such as Silvana Mangano and Vittorio Gassman in his 1949 film, "Bitter Rice", which is set in the Po Valley during rice-harvesting season.
Poetry and cruelty of life were harmonically combined in the works that Vittorio De Sica wrote and directed together with screenwriter Cesare Zavattini: among them, "Shoeshine" (1946), "The Bicycle Thief" (1948) and "Miracle in Milan" (1951). The 1952 film "Umberto D." showed a poor old man with his little dog, who must beg for alms against his dignity in the loneliness of the new society. This work is perhaps De Sica's masterpiece and one of the most important works in Italian cinema. It was not a commercial success and since then it has been shown on Italian television only a few times. Yet it is perhaps the most violent attack, in the apparent quietness of the action, against the rules of the new economy, the new mentality, the new values, and it embodies both a conservative and a progressive view.
Although "Umberto D." is considered the end of the neorealist period, later films such as Federico Fellini's "La Strada" (1954) and De Sica's 1960 film "Two Women" (for which Sophia Loren won the Oscar for Best Actress) are grouped with the genre. Director Pier Paolo Pasolini's first film, "Accattone" (1961), shows a strong neorealist influence. Italian neorealist cinema influenced filmmakers around the world, and helped inspire other film movements, such as the French New Wave and the Polish Film School. The Neorealist period is often simply referred to as "The Golden Age" of Italian Cinema by critics, filmmakers, and scholars.
It has been said that after "Umberto D." nothing more could be added to neorealism. Possibly because of this, neorealism effectively ended with that film; subsequent works turned toward lighter atmospheres, perhaps more coherent with the improving conditions of the country, and this genre has been called "pink neorealism". The trend allowed better-"equipped" actresses to become real celebrities, such as Sophia Loren, Gina Lollobrigida, Silvana Pampanini, Lucia Bosé, Barbara Bouchet, Eleonora Rossi Drago, Silvana Mangano, Virna Lisi, Claudia Cardinale and Stefania Sandrelli. Soon pink neorealism, exemplified by "Pane, amore e fantasia" (1953) with Vittorio De Sica and Gina Lollobrigida, was replaced by the "Commedia all'italiana", a unique genre that, though humorous on the surface, addressed serious social themes.
At this time, on the more commercial side of production, the phenomenon of Totò, a Neapolitan actor who is acclaimed as the major Italian comic, exploded. His films (often with Peppino De Filippo and almost always with Mario Castellani) expressed a sort of neorealistic satire, in the guise of a "guitto" (a "hammy" actor) as well as with the art of the great dramatic actor he also was. A "film-machine" who produced dozens of titles per year, his repertoire was frequently repeated. His personal story (he was a prince born in the poorest "rione", or quarter, of Naples), his unique twisted face, his special mimic expressions and his gestures created an inimitable personage and made him one of the most beloved Italians of the 1960s.
Italian Comedy is generally considered to have started with Mario Monicelli's "I soliti Ignoti" ("Big Deal on Madonna Street", 1958) and derives its name from the title of Pietro Germi's "Divorzio all'Italiana" ("Divorce Italian Style", 1961). For a long time this definition was used with derogatory intent.
Vittorio Gassman, Marcello Mastroianni, Ugo Tognazzi, Alberto Sordi, Claudia Cardinale, Monica Vitti and Nino Manfredi were among the stars of these movies, which depicted the years of the economic boom and examined Italian customs, a sort of self-ethnological research.
In 1961 Dino Risi directed "Una vita difficile" ("A Difficult Life"), then "Il sorpasso" ("The Easy Life"), now a cult movie, followed by "I Mostri" ("The Monsters", also known as "15 From Rome"), "In nome del Popolo Italiano" ("In the Name of the Italian People") and "Profumo di donna" ("Scent of a Woman").
Monicelli's works include "La grande guerra" ("The Great War"), "I compagni" ("Comrades", also known as "The Organizer"), "L'Armata Brancaleone", "Vogliamo i colonnelli" ("We Want the Colonels"), "Romanzo popolare" (Popular Novel) and the "Amici miei" series.
A series of black-and-white films based on the Don Camillo character created by the Italian writer and journalist Giovannino Guareschi were made between 1952 and 1965. These were French-Italian coproductions, starring Fernandel as Don Camillo and Gino Cervi as Peppone. The titles are: "The Little World of Don Camillo", "The Return of Don Camillo", "Don Camillo's Last Round", "", and "Don Camillo in Moscow". Mario Camerini began filming "Don Camillo e i giovani d'oggi", but production was halted when Fernandel fell ill and subsequently died. The film was completed in 1972 with Gastone Moschin playing the role of Don Camillo and Lionel Stander as Peppone. A new Don Camillo film, "The World of Don Camillo", was made in 1983, an Italian production with Terence Hill directing and also starring as Don Camillo. Colin Blakely performed Peppone in one of his last film roles.
In the late 1940s, Hollywood studios began to shift production abroad to Europe. Italy was, along with Britain, one of the major destinations for American film companies. Shooting at Cinecittà, large-budget films such as "Quo Vadis" (1951), "Roman Holiday" (1953), "Ben-Hur" (1959), and "Cleopatra" (1963) were made in English with international casts and sometimes, but not always, Italian settings or themes. The heyday of what was dubbed "Hollywood on the Tiber" was between 1950 and 1970, during which time many of the most famous names in world cinema made films in Italy.
With the release of 1958's "Hercules", starring American bodybuilder Steve Reeves, the Italian film industry gained entry to the American film market. These films, many with mythological or Bible themes, were low-budget costume/adventure dramas, and appealed immediately to both European and American audiences. Besides the many films starring a variety of muscle men as Hercules, heroes such as Samson and the Italian fictional hero Maciste were common. Sometimes dismissed as low-quality escapist fare, the Peplums allowed newer directors such as Sergio Leone and Mario Bava a means of breaking into the film industry. Some, such as Mario Bava's "Hercules in the Haunted World" (Italian: Ercole Al Centro Della Terra), are considered seminal works in their own right. As the genre matured, budgets sometimes increased, as evidenced in 1962's "I sette gladiatori" ("The Seven Gladiators" in its 1964 US release), a wide-screen epic with impressive sets and matte-painting work. Most Peplum films were in color, whereas previous Italian efforts had often been black and white.
On the heels of the Peplum craze, a related genre, the Spaghetti Western arose and was popular both in Italy and elsewhere. These films differed from traditional westerns by being filmed in Europe on limited budgets, but featured vivid cinematography.
The most popular Spaghetti Westerns were those of Sergio Leone, whose Dollars Trilogy ("A Fistful of Dollars", an unauthorized remake of the Japanese film "Yojimbo" by Akira Kurosawa; "For a Few Dollars More", an original sequel; and "The Good, the Bad and the Ugly", a world-famous prequel), featuring Clint Eastwood as a character marketed as "the Man with No Name" and celebrated scores by Ennio Morricone, came to define the genre along with "Once Upon a Time in the West".
Another popular Spaghetti Western film is "Django", starring Franco Nero as the titular character, another "Yojimbo" plagiarism, which was followed by both an authorized sequel ("Django Strikes Again") and an overwhelming number of unauthorized uses of the same character in other films.
Also considered Spaghetti Westerns are films which combined the traditional western ambiance with a Commedia all'italiana-type comedy, such as "They Call Me Trinity" and "Trinity Is Still My Name!", which featured Bud Spencer and Terence Hill, the stage names of Carlo Pedersoli and Mario Girotti.
During the 1960s and 70s, Italian filmmakers Mario Bava, Riccardo Freda, Antonio Margheriti and Dario Argento developed "giallo" horror films that became classics and influenced the genre in other countries. Representative films include "Black Sunday", "Castle of Blood", "Twitch of the Death Nerve", "The Bird with the Crystal Plumage", "Deep Red" and "Suspiria".
Due to the success of the James Bond film series, the Italian film industry produced numerous imitations and spoofs in the Eurospy genre from 1964 to 1967.
Following the 1960s boom of shockumentary "Mondo films" such as Gualtiero Jacopetti's "Mondo Cane", during the late 1970s and early 1980s, Italian cinema became internationally synonymous with violent horror films. These films were primarily produced for the video market and were credited with fueling the "video nasty" era in the United Kingdom.
Directors in this genre included Lucio Fulci, Joe D'Amato, Umberto Lenzi and Ruggero Deodato. Some of their films faced legal challenges in the United Kingdom; after the Video Recordings Act of 1984, it became a legal offense to sell a copy of such films as "Cannibal Holocaust" and "SS Experiment Camp". Italian films of this period are usually grouped together as exploitation films.
Several countries charged Italian studios with exceeding the boundaries of acceptability with their late-1970s Nazi exploitation films, inspired by American movies such as "Ilsa, She Wolf of the SS". The Italian works included the notorious but comparatively tame "SS Experiment Camp" and the far more graphic "Last Orgy of the Third Reich" (Italian: L'ultima orgia del III Reich). These films showed, in great detail, sexual crimes against prisoners at concentration camps. These films may still be banned in the United Kingdom and other countries.
Poliziotteschi (plural of poliziottesco) films constitute a subgenre of crime and action film that emerged in Italy in the late 1960s and reached the height of their popularity in the 1970s. They are also known as polizieschi all'italiana, Euro-crime, Italo-crime, spaghetti crime films or simply Italian crime films. Many notable international actors appeared in this genre of films, including Alain Delon, Henry Silva, Fred Williamson, Charles Bronson and Tomas Milian.
Between the late 1970s and mid 1980s, Italian cinema was in crisis; "art films" became increasingly isolated, separating from the mainstream Italian cinema.
Among the major artistic films of this era were "La città delle donne", "E la nave va" and "Ginger and Fred" by Fellini, "L'albero degli zoccoli" by Ermanno Olmi (winner of the Palme d'Or at the Cannes Film Festival), "La notte di San Lorenzo" by Paolo and Vittorio Taviani, Antonioni's "Identificazione di una donna", and "Bianca" and "La messa è finita" by Nanni Moretti. Although not entirely Italian, Bertolucci's "The Last Emperor", winner of 9 Oscars, and Sergio Leone's "Once Upon a Time in America" also came out of this period.
During this time, commedia sexy all'italiana films, described as "trash films", were popular in Italy. These comedy films were of little artistic value and reached their popularity by confronting Italian social taboos, most notably in the sexual sphere. Actors such as Lino Banfi, Diego Abatantuono, Alvaro Vitali, Gloria Guida, Barbara Bouchet and Edwige Fenech owe much of their popularity to these films.
Also considered part of the trash genre are films featuring Ugo Fantozzi, a character invented by Paolo Villaggio for his TV sketches and newspaper short stories. Although Villaggio's movies tend to bridge trash comedy with a more elevated social satire, the character had a great impact on Italian society, to such a degree that the adjective "fantozziano" entered the lexicon. Of the many films telling of Fantozzi's misadventures, the most notable and famous were "Fantozzi" and "Il secondo tragico Fantozzi", but many others were produced.
A new generation of directors has helped return Italian cinema to a healthy level since the end of the 1980s. Probably the most noted film of the period is "Nuovo Cinema Paradiso", for which Giuseppe Tornatore won a 1989 Oscar (awarded in 1990) for Best Foreign Language Film. This was followed when Gabriele Salvatores's "Mediterraneo" won the same prize for 1991. "Il Postino: The Postman" (1994), directed by Michael Radford and starring Massimo Troisi, received five nominations at the Academy Awards and won for Best Original Score. Another success came in 1998, when Roberto Benigni's "Life Is Beautiful" ("La vita è bella") won three Oscars: Best Actor for Benigni, Best Foreign Language Film and Best Original Dramatic Score. In 2001 Nanni Moretti's film "The Son's Room" ("La stanza del figlio") received the Palme d'Or at the Cannes Film Festival.
Other noteworthy recent Italian films include: "Jona che visse nella balena" directed by Roberto Faenza, "Il grande cocomero" by Francesca Archibugi, "The Profession of Arms" ("Il mestiere delle armi") by Olmi, "L'ora di religione" by Marco Bellocchio, "Il ladro di bambini", "Lamerica", "The Keys to the House" ("Le chiavi di casa") by Gianni Amelio, "I'm Not Scared" ("Io non ho paura") by Gabriele Salvatores, "Le fate ignoranti", "Facing Windows" ("La finestra di fronte") by Ferzan Özpetek, "Good Morning, Night" ("Buongiorno, notte") by Marco Bellocchio, "The Best of Youth" ("La meglio gioventù") by Marco Tullio Giordana, "The Beast in the Heart" ("La bestia nel cuore") by Cristina Comencini.
In 2008 Paolo Sorrentino's "Il Divo", a biographical film based on the life of Giulio Andreotti, won the Jury Prize, and "Gomorra", a crime drama film directed by Matteo Garrone, won the Grand Prix at the Cannes Film Festival.
Paolo Sorrentino's "The Great Beauty" ("La Grande Bellezza") won the 2014 Academy Award for Best Foreign Language Film.
The two highest-grossing Italian films in Italy have both been directed by Gennaro Nunziante and starred Checco Zalone: "Sole a catinelle" (2013) with €51.8 million, and "Quo Vado?" (2016) with €65.3 million.
"They Call Me Jeeg", a 2016 critically acclaimed superhero film directed by Gabriele Mainetti and starring Claudio Santamaria, won many awards, such as eight David di Donatello, two Nastro d'Argento, and a Globo d'oro.
Gianfranco Rosi's documentary film "Fire at Sea" (2016) won the Golden Bear at the 66th Berlin International Film Festival. "Fire at Sea" was also selected as the Italian entry for the Best Foreign Language Film at the 89th Academy Awards, but it was not nominated.
Other successful 2010s Italian films include: "Vincere" by Marco Bellocchio, "The First Beautiful Thing" ("La prima cosa bella"), "Human Capital" ("Il capitale umano") and "Like Crazy" ("La pazza gioia") by Paolo Virzì, "We Have a Pope" ("Habemus Papam") and "Mia Madre" by Nanni Moretti, "Caesar Must Die" ("Cesare deve morire") by Paolo and Vittorio Taviani, "Don't Be Bad" ("Non essere cattivo") by Claudio Caligari, "Romanzo Criminale" by Michele Placido (that spawned a TV series, "Romanzo criminale - La serie"), "Youth" ("La giovinezza") by Paolo Sorrentino, "Suburra" by Stefano Sollima, "Perfect Strangers" ("Perfetti sconosciuti") by Paolo Genovese, "Mediterranea" and "A Ciambra" by Jonas Carpignano, "Tale of Tales" ("Il racconto dei racconti") and "Dogman" by Matteo Garrone, and "Italian Race" ("Veloce come il vento") and "" ("Il primo re") by Matteo Rovere.
"Call Me by Your Name" (2017), the final installment in Luca Guadagnino's thematic "Desire" trilogy, following "I Am Love" (2009) and "A Bigger Splash" (2015), received widespread acclaim and numerous accolades, including the Academy Award for Best Adapted Screenplay in 2018.
After the United States and the United Kingdom, Italy has the most Academy Award wins.
Italy is the most awarded country at the Academy Award for Best Foreign Language Film, with 14 awards won, 3 Special Awards and 31 nominations.
In 1961, Sophia Loren won the Academy Award for Best Actress for her role as a woman who is raped during World War II, along with her adolescent daughter, in Vittorio De Sica's "Two Women". She was the first actress to win an Academy Award for a performance in any foreign language, and the second Italian leading lady to win the Oscar, after Anna Magnani for "The Rose Tattoo". In 1998, Roberto Benigni became the first Italian actor to win the Academy Award for Best Actor, for "Life Is Beautiful".
Italian-born filmmaker Frank Capra won three times at the Academy Award for Best Director, for "It Happened One Night", "Mr. Deeds Goes to Town" and "You Can't Take It with You". Bernardo Bertolucci won the award for "The Last Emperor", and also Best Adapted Screenplay for the same movie.
Ennio De Concini, Alfredo Giannetti and Pietro Germi won the award for Best Original Screenplay for "Divorce Italian Style". The Academy Award for Best Film Editing was won by Gabriella Cristiani for "The Last Emperor" and by Pietro Scalia for "JFK" and "Black Hawk Down".
The award for Best Original Score was won by Nino Rota for "The Godfather Part II"; Giorgio Moroder for "Midnight Express"; Nicola Piovani for "Life is Beautiful"; Dario Marianelli for "Atonement"; and Ennio Morricone for "The Hateful Eight". Giorgio Moroder also won the award for Best Original Song for "Flashdance" and "Top Gun".
The Italian winners at the Academy Award for Best Production Design are Dario Simoni for "Lawrence of Arabia" and "Doctor Zhivago"; Elio Altramura and Gianni Quaranta for "A Room with a View"; Bruno Cesari, Osvaldo Desideri and Ferdinando Scarfiotti for "The Last Emperor"; Luciana Arrighi for "Howards End"; and Dante Ferretti and Francesca Lo Schiavo for "The Aviator", "" and "Hugo".
The winners at the Academy Award for Best Cinematography are: Tony Gaudio for "Anthony Adverse"; Pasqualino De Santis for "Romeo and Juliet"; Vittorio Storaro for "Apocalypse Now", "Reds" and "The Last Emperor"; and Mauro Fiore for "Avatar".
The winners at the Academy Award for Best Costume Design are Piero Gherardi for "La dolce vita" and "8½"; Vittorio Nino Novarese for "Cleopatra" and "Cromwell"; Danilo Donati for "The Taming of the Shrew", "Romeo and Juliet", and "Fellini's Casanova"; Franca Squarciapino for "Cyrano de Bergerac"; Gabriella Pescucci for "The Age of Innocence"; and Milena Canonero for "Barry Lyndon", "Chariots of Fire", "Marie Antoinette" and "The Grand Budapest Hotel".
Special effects artist Carlo Rambaldi won three Oscars: one Special Achievement Academy Award for Best Visual Effects for "King Kong" and two Academy Awards for Best Visual Effects for "Alien" (1979) and "E.T. the Extra-Terrestrial". The Academy Award for Best Makeup and Hairstyling was won by Manlio Rocchetti for "Driving Miss Daisy", and Alessandro Bertolazzi and Giorgio Gregorini for "Suicide Squad".
Sophia Loren, Federico Fellini, Michelangelo Antonioni, Dino De Laurentiis, Ennio Morricone, and Piero Tosi also received the Academy Honorary Award.
Italy has produced many important cinematography "auteurs", including Federico Fellini, Michelangelo Antonioni, Roberto Rossellini, Vittorio De Sica, Luchino Visconti, Ettore Scola, Sergio Leone, Luigi Comencini, Pier Paolo Pasolini, Bernardo Bertolucci, Franco Zeffirelli, Ermanno Olmi, Valerio Zurlini, Florestano Vancini, Mario Monicelli, Marco Ferreri, Elio Petri, Dino Risi and Mauro Bolognini. These directors' works often span many decades and genres. Present "auteurs" include Giuseppe Tornatore, Marco Bellocchio, Nanni Moretti, Gabriele Salvatores, Gianni Amelio, Dario Argento and Paolo Sorrentino. | https://en.wikipedia.org/wiki?curid=10787 |
Cinema of Poland
The history of cinema in Poland is almost as long as the history of cinematography, and it has universally recognized achievements, even though Polish films tend to be less commercially available than films from several other European nations.
After World War II, the communist government built an auteur-based national cinema, trained hundreds of new directors and empowered them to make films. Filmmakers like Roman Polański, Krzysztof Kieślowski, Agnieszka Holland, Andrzej Wajda, Andrzej Żuławski, Andrzej Munk and Jerzy Skolimowski shaped the development of Polish film-making. In more recent years the industry has been producer-led, with finance the key to getting a film made; with many independent filmmakers working in all genres, Polish productions tend to be more inspired by American film.
The first cinema in Poland (then occupied by the Russian Empire) was founded in Łódź in 1899, several years after the invention of the Cinematograph. Initially dubbed "Living Pictures Theatre", it gained much popularity and by the end of the next decade there were cinemas in almost every major town of Poland. Arguably the first Polish filmmaker was Kazimierz Prószyński, who filmed various short documentaries in Warsaw. His pleograph film camera had been patented before the Lumière brothers' invention and he is credited as the author of the earliest surviving Polish documentary titled "Ślizgawka w Łazienkach" ("Skating-rink in the Royal Baths"), as well as the first short narrative films "Powrót birbanta" ("Rake's return home") and "Przygoda dorożkarza" ("Cabman's Adventure"), both created in 1902. Another pioneer of cinema was Bolesław Matuszewski, who became one of the first filmmakers working for the Lumière company - and the official "cinematographer" of the Russian tsars in 1897.
The earliest surviving feature film, "Antoś pierwszy raz w Warszawie" ("Antoś for the First Time in Warsaw"), was made in 1908 by Antoni Fertner. The date of its première, October 22, 1908, is considered the founding date of the Polish film industry. Soon Polish artists started experimenting with other genres of cinema: in 1910 Władysław Starewicz made one of the first animated cartoons in the world - and the first to use the stop motion technique - "Piękna Lukanida" ("Beautiful Lukanida"). By the start of World War I, cinema in Poland was already in full swing, with numerous adaptations of major works of Polish literature screened (notably "Dzieje grzechu", "Meir Ezofowicz" and "Nad Niemnem").
During World War I, Polish cinema crossed borders. Films made in Warsaw or Vilna were often rebranded with German-language intertitles and shown in Berlin. That was how the young actress Pola Negri (born Barbara Apolonia Chałupiec) gained fame in Germany and eventually became one of the European super-stars of silent film.
During World War II, Polish filmmakers in Great Britain created the anti-Nazi color film "Calling Mr. Smith" (1943), about ongoing Nazi crimes in occupied Europe and the lies of Nazi propaganda. It was one of the first anti-Nazi films in history, and it was both an avant-garde and a documentary film.
In November 1945 the communist government founded the film production and distribution organization Film Polski, and put the well-known Polish People's Army filmmaker Aleksander Ford in charge. Starting with a few railway carriages full of film equipment taken from the Germans, they proceeded to train filmmakers and build a Polish film industry. Film Polski's output was limited: only thirteen features were released between 1947 and its dissolution in 1952, concentrating on Polish suffering at the hands of the Nazis. In 1947 Ford moved to help establish the new National Film School in Łódź, where he taught for 20 years.
The industry used imported cameras and film stocks: at first ORWO black-and-white stock from East Germany, and later Eastman colour negative stock, with ORWO print stock used for rushes and release prints. Poland made its own lighting equipment. Because of the high cost of film stock, Polish films were shot with very low shooting ratios (the ratio of film stock used in shooting to the length of the finished film). The equipment and film stock were not the best and budgets were modest, but the film makers received probably the best training in the world from the Polish Film School. Another advantage was Film Polski's status as a state organisation, so its film-makers had access to all Polish institutions and their cooperation in making their films. Film cameras were able to enter almost every aspect of Polish life.
The first film produced in Poland following World War II was "Zakazane piosenki" (1946), directed by Leonard Buczkowski, which was seen by 10.8 million people (out of a total population of 23.8 million) in its initial theatrical run. Buczkowski continued to make films regularly until his death in 1967. Other important films of the early post-World War II period were "The Last Stage" (1948), directed by Wanda Jakubowska, who continued to make films until the transition from communism to capitalism in 1989, and "Border Street" (1949), directed by Aleksander Ford.
By the mid-1950s, following the end of Stalinism in Poland, film production was organised into film groups. A film group was a collection of film makers, led by an experienced film director and consisting of writers, film directors and production managers. They would write scripts, create budgets, apply for funding from the Ministry of Culture and produce the picture. They would hire actors and crew, and use studios and laboratories controlled by Film Polski.
The change in political climate gave rise to the Polish Film School movement, a training ground for some of the icons of world cinematography, e.g., Roman Polanski ("Knife in the Water", "Rosemary's Baby", "Frantic", "The Pianist") and Krzysztof Zanussi (a leading director of the so-called "cinema of moral anxiety" of the 1970s). Andrzej Wajda's films offer insightful analyses of the universal element of the Polish experience - the struggle to maintain dignity under the most trying circumstances. His films defined several Polish generations. In 2000, Wajda was awarded an honorary Oscar for his overall contribution to cinema. Four of his films were nominated for the Best Foreign Language Film award at the Academy Awards, with six other Polish directors receiving one nomination each: Roman Polański, Jerzy Kawalerowicz, Jerzy Hoffman, Jerzy Antczak, Agnieszka Holland and Jan Komasa. In 2015, Polish filmmaker Paweł Pawlikowski received this award for his film "Ida". In 2019, he was also nominated for his next film, "Cold War", in two categories: Best Foreign Language Film and Best Director.
It is also important to note that during the 1980s, the People's Republic of Poland instituted martial law to vanquish and censor all forms of opposition against the communist rule of the nation, including outlets such as cinema and radio. A notable film to have emerged during this period was Ryszard Bugajski's 1982 film "Interrogation" ("Przesluchanie"), which depicts the story of an unfortunate woman (played by Krystyna Janda) who is arrested and tortured by the secret police into confessing to a crime she knows nothing about. The anti-communist nature of the film led to it being banned for over seven years. In 1989, the ban was repealed after the overthrow of the Communist government in Poland, and the film was shown in theaters for the first time later that year. The film is still lauded today for its audacity in depicting the cruelty of the Stalinist regime, as many artists feared persecution during that time.
In the 1990s, Krzysztof Kieślowski won universal acclaim with productions such as "Dekalog" (made for television), "The Double Life of Véronique" and the "Three Colors" trilogy. Another of the most famous movies in Poland is Krzysztof Krauze's "The Debt", which became a blockbuster; it showed the brutal reality of Polish capitalism and the growth of poverty. A considerable number of Polish film directors (e.g., Agnieszka Holland and Janusz Kamiński) have worked in American studios. Polish animated films - like those by Jan Lenica and Zbigniew Rybczyński (Oscar, 1983) - drew on a long tradition and continued to derive their inspiration from Poland's graphic arts. Other notable Polish film directors include: Tomasz Bagiński, Małgorzata Szumowska, Jan Jakub Kolski, Jerzy Kawalerowicz, Stanisław Bareja and Janusz Zaorski.
Among prominent annual film festivals taking place in Poland are: Warsaw International Film Festival, Camerimage, International Festival of Independent Cinema Off Camera, New Horizons Film Festival as well as Gdynia Film Festival and Polish Film Awards.
The Communist government invested resources into building a sophisticated cinema audience. All cinemas were state owned and consisted of first-run premiere cinemas, local cinemas and art house cinemas. Tickets were cheap, and students and old people received discounts. In the city of Lodz there were 36 cinemas in the 1970s showing films from all over the world. There were the Italian films of Fellini, French comedies, and American crime movies such as Don Siegel's "Charley Varrick". Films were shown in their original versions with Polish subtitles. Anti-Communist and Cold War films were not shown, but a bigger restriction was the cost of some films. There were popular film magazines like "Film" and "Screen", and critical magazines such as "Kino". This all helped to build a well-informed film audience.
The Polish Film Academy was founded in 2003 in Warsaw and aims to provide native filmmakers with a forum for discussion and a way to promote the reputation of Polish cinema through publications, presentations, discussions and regular promotion of the subject in schools.
Since 2003, the winners of the Polish Film Awards are elected by the members of the Academy.
Several institutions, both government run and private, provide formal education in various aspects of filmmaking.
Cinema of Japan
The cinema of Japan has a history that spans more than 100 years. Japan has one of the oldest and largest film industries in the world; as of 2010, it was the fourth largest by number of feature films produced. In 2011 Japan produced 411 feature films that earned 54.9% of a box office total of US$2.338 billion. Films have been produced in Japan since 1897, when the first foreign cameramen arrived.
In a "Sight & Sound" list of the best films produced in Asia, Japanese works made up eight of the top 12, with "Tokyo Story" (1953) ranked number one. "Tokyo Story" also topped the 2012 "Sight & Sound" directors' poll of The Top 50 Greatest Films of All Time, dethroning "Citizen Kane", while Akira Kurosawa's "Seven Samurai" (1954) was voted the greatest foreign-language film of all time in BBC's 2018 poll of 209 critics in 43 countries. Japan has won the Academy Award for the Best Foreign Language Film four times, more than any other Asian country.
Japan's Big Four film studios are Toho, Toei, Shochiku and Kadokawa, which are the members of the Motion Picture Producers Association of Japan (MPPAJ). The annual Japan Academy Film Prize hosted by the Nippon Academy-shō Association is considered to be the Japanese equivalent of the Academy Awards.
The kinetoscope, first shown commercially by Thomas Edison in the United States in 1894, was first shown in Japan in November 1896. The Vitascope and the Lumière Brothers' Cinematograph were first presented in Japan in early 1897, by businessmen such as Inabata Katsutaro. Lumière cameramen were the first to shoot films in Japan. Moving pictures, however, were not an entirely new experience for the Japanese because of their rich tradition of pre-cinematic devices such as "gentō" ("utsushi-e") or the magic lantern. The first successful Japanese film in late 1897 showed sights in Tokyo.
In 1898 some ghost films were made, the Shirō Asano shorts "Bake Jizo" (Jizo the Spook / 化け地蔵) and "Shinin no sosei" (Resurrection of a Corpse). The first documentary, the short "Geisha no teodori" (芸者の手踊り), was made in June 1899. Tsunekichi Shibata made a number of early films, including "Momijigari", an 1899 record of two famous actors performing a scene from a well-known kabuki play. Early films were influenced by traditional theater – for example, kabuki and bunraku.
At the dawn of the twentieth century, theaters in Japan hired benshi, storytellers who sat next to the screen and narrated silent movies. They were descendants of kabuki jōruri, kōdan storytellers, theater barkers and other forms of oral storytelling. Benshi narration could be accompanied by music, as with silent films in the West. With the advent of sound in the early 1930s, the benshi gradually declined.
In 1908, Shōzō Makino, considered the pioneering director of Japanese film, began his influential career with "Honnōji gassen" (本能寺合戦), produced for Yokota Shōkai. Shōzō recruited Matsunosuke Onoe, a former kabuki actor, to star in his productions. Onoe became Japan's first film star, appearing in over 1,000 films, mostly shorts, between 1909 and 1926. The pair pioneered the "jidaigeki" genre. Tokihiko Okada was a popular romantic lead of the same era.
The first Japanese film production studio was built in 1909 by the Yoshizawa Shōten company in Tokyo.
The first female Japanese performer to appear in a film professionally was the dancer/actress Tokuko Nagai Takagi, who appeared in four shorts for the American-based Thanhouser Company between 1911 and 1914.
Among intellectuals, critiques of Japanese cinema grew in the 1910s and eventually developed into a movement that transformed Japanese film. Film criticism began with early film magazines such as "Katsudō shashinkai" (begun in 1909) and a full-length book written by Yasunosuke Gonda in 1914, but many early film critics often focused on chastising the work of studios like Nikkatsu and Tenkatsu for being too theatrical (using, for instance, elements from kabuki and shinpa such as onnagata) and for not utilizing what were considered more cinematic techniques to tell stories, instead relying on benshi. In what was later named the Pure Film Movement, writers in magazines such as "Kinema Record" called for a broader use of such cinematic techniques. Some of these critics, such as Norimasa Kaeriyama, went on to put their ideas into practice by directing such films as "The Glow of Life" (1918), which was one of the first films to use actresses (in this case, Harumi Hanayagi). There were parallel efforts elsewhere in the film industry. In his 1917 film "The Captain's Daughter", Masao Inoue started using techniques new to the silent film era, such as the close-up and cut back. The Pure Film Movement was central in the development of the gendaigeki and scriptwriting.
New studios established around 1920, such as Shochiku and Taikatsu, aided the cause for reform. At Taikatsu, Thomas Kurihara directed films scripted by the novelist Junichiro Tanizaki, who was a strong advocate of film reform. Even Nikkatsu produced reformist films under the direction of Eizō Tanaka. By the mid-1920s, actresses had replaced onnagata and films used more of the devices pioneered by Inoue. Some of the most discussed silent films from Japan are those of Kenji Mizoguchi, whose later works (including "Ugetsu"/"Ugetsu Monogatari") retain a very high reputation.
Japanese films gained popularity in the mid-1920s against foreign films, in part fueled by the popularity of movie stars and a new style of jidaigeki. Directors such as Daisuke Itō and Masahiro Makino made samurai films like "A Diary of Chuji's Travels" and "Roningai" featuring rebellious antiheroes in fast-cut fight scenes that were both critically acclaimed and commercial successes. Some stars, such as Tsumasaburo Bando, Kanjūrō Arashi, Chiezō Kataoka, Takako Irie and Utaemon Ichikawa, were inspired by Makino Film Productions and formed their own independent production companies where directors such as Hiroshi Inagaki, Mansaku Itami and Sadao Yamanaka honed their skills. Director Teinosuke Kinugasa created a production company to produce the experimental masterpiece "A Page of Madness", starring Masao Inoue, in 1926. Many of these companies, while surviving during the silent era against major studios like Nikkatsu, Shochiku, Teikine, and Toa Studios, could not survive the cost involved in converting to sound.
With the rise of left-wing political movements and labor unions at the end of the 1920s, there arose so-called tendency films with left-leaning themes. Directors Kenji Mizoguchi, Daisuke Itō, Shigeyoshi Suzuki, and Tomu Uchida were prominent examples. In contrast to these commercially produced 35 mm films, the Marxist Proletarian Film League of Japan (Prokino) made works independently in smaller gauges (such as 9.5mm and 16mm), with more radical intentions. Tendency films suffered from severe censorship heading into the 1930s, and Prokino members were arrested and the movement effectively crushed. Such moves by the government had profound effects on the expression of political dissent in 1930s cinema. Films from this period include "Sakanaya Honda", "Jitsuroku Chushingura", "Horaijima", "Orochi", "Maboroshi", "Kurutta Ippeji", "Jujiro", and "Kurama Tengu".
A later version of "The Captain's Daughter" was one of the first talkie films; it used the Mina Talkie System. The Japanese film industry later split into two groups: one retained the Mina Talkie System, while the other used the Iisutofyon Talkie System, with which Tojo Masaki's films were made.
The 1923 earthquake, the bombing of Tokyo during World War II, and the natural effects of time and Japan's humidity on flammable and unstable nitrate film have resulted in a great dearth of surviving films from this period.
Unlike in the West, silent films were still being produced in Japan well into the 1930s; as late as 1938, a third of Japanese films were silent. For instance, Yasujirō Ozu's "An Inn in Tokyo" (1935), considered a precursor to the neorealism genre, was a silent film. A few Japanese sound shorts were made in the 1920s and 1930s, but Japan's first feature-length talkie was "Fujiwara Yoshie no furusato" (1930), which used the "Mina Talkie System". Notable talkies of this period include Mikio Naruse's "Wife, Be Like A Rose!" ("Tsuma Yo Bara No Yoni", 1935), which was one of the first Japanese films to gain a theatrical release in the U.S.; Kenji Mizoguchi's "Sisters of the Gion" ("Gion no shimai", 1936); "Osaka Elegy" (1936); and "The Story of the Last Chrysanthemums" (1939); and Sadao Yamanaka's "Humanity and Paper Balloons" (1937).
Film criticism shared this vitality, with many film journals such as "Kinema Junpo" and newspapers printing detailed discussions of the cinema of the day, both at home and abroad. A cultured "impressionist" criticism pursued by critics such as Tadashi Iijima, Fuyuhiko Kitagawa, and Matsuo Kishi was dominant, but opposed by leftist critics such as Akira Iwasaki and Genjū Sasa who sought an ideological critique of films.
The 1930s also saw increased government involvement in cinema, which was symbolized by the passing of the Film Law, which gave the state more authority over the film industry, in 1939. The government encouraged some forms of cinema, producing propaganda films and promoting documentary films (also called "bunka eiga" or "culture films"), with important documentaries being made by directors such as Fumio Kamei. Realism was in favor; film theorists such as Taihei Imamura and Heiichi Sugiyama advocated for documentary or realist drama, while directors such as Hiroshi Shimizu and Tomotaka Tasaka produced fiction films that were strongly realistic in style.
Because of World War II and the weak economy, unemployment became widespread in Japan, and the cinema industry suffered.
During this period, when Japan was expanding its Empire, the Japanese government saw cinema as a propaganda tool to show the glory and invincibility of the Empire of Japan. Thus, many films from this period depict patriotic and militaristic themes. In 1942 Kajiro Yamamoto's film "Hawai Mare oki kaisen" or "The War at Sea from Hawaii to Malaya" portrayed the attack on Pearl Harbor; the film made use of special effects directed by Eiji Tsuburaya, including a miniature scale model of Pearl Harbor itself.
Yoshiko Yamaguchi was a very popular actress. She rose to international stardom with 22 wartime movies. The Manchukuo Film Association let her use the Chinese name Li Xianglan so she could represent Chinese roles in Japanese propaganda movies. After the war she used her official Japanese name and starred in an additional 29 movies. She was elected as a member of the Japanese parliament in the 1970s and served for 18 years.
Akira Kurosawa made his feature film debut with "Sugata Sanshiro" in 1943.
In 1945, when Japan was defeated in World War II, the rule of Japan by SCAP (Supreme Commander for the Allied Powers) began. Movies produced in Japan were managed by GHQ's subordinate organization, the CIE (Civil Information and Education Section, 民間情報教育局). This management system lasted until 1952, and it was the first time in the Japanese movie world that management and control by a foreign institution was implemented. Scripts were translated into English during the planning and scripting stages, and only the movies approved by the CIE were produced. For example, "Akatsuki no Dassō" (1950), scripted by Akira Kurosawa, was originally a work depicting a Korean military comfort woman starring Yoshiko Yamaguchi, but after dozens of rounds of CIE censorship it became a very different work. The completed film was censored a second time by the CCD (Civil Censorship Detachment). The censorship was also applied retroactively to past movie works. Japan was also exposed to over a decade's worth of American animation that had been banned under the wartime government.
Furthermore, as part of occupation policy, the question of war responsibility extended to the film industry. When calls began to be voiced for banning those who had cooperated in wartime movie production, figures involved in such films, including Nagamasa Kawakita, Kanichi Negishi and Shiro Kido, were purged from the industry in 1947. However, as in other fields, the question of war responsibility was dealt with only vaguely in the film industry, and the above measures were lifted in 1950.
The first movie released after the war was "Soyokaze" (そよかぜ, 1945) by Yasushi Sasaki, and its theme song "Ringo no Uta", sung by Michiko Namiki, was a big hit.
In the production ban list promulgated in 1945 by the CIE's David Conde, nationalism, patriotism, suicide and slaughter, brutal violent movies, and similar subjects became prohibited items, making the production of historical drama virtually impossible. As a result, actors who had built their careers on historical drama appeared in contemporary dramas instead. These included Chiezō Kataoka's "Bannai Tarao" (1946), Tsumasaburō Bandō's "Torn Drum (破れ太鼓)" (1949), Hiroshi Inagaki's "The Child Holding Hands (手をつなぐ子等)", and Daisuke Itō's "King (王将)".
In addition, many pro-democracy propaganda films were produced at SCAP's recommendation. Significant movies among them were Akira Kurosawa's "No Regrets for Our Youth" (1946), Kōzaburō Yoshimura's "A Ball at the Anjo House" (1947) and Tadashi Imai's "Aoi sanmyaku" (1949), all featuring Setsuko Hara, who gained national popularity as a star symbolizing the beginning of a new era. Yasushi Sasaki's "Hatachi no Seishun (はたちの青春)" (1946) contained the first kiss scene in a Japanese movie.
The first collaborations between Akira Kurosawa and actor Toshiro Mifune were "Drunken Angel" in 1948 and "Stray Dog" in 1949. Yasujirō Ozu directed the critically and commercially successful "Late Spring" in 1949.
The Mainichi Film Award was created in 1946.
The 1950s are widely considered the Golden Age of Japanese cinema. Three Japanese films from this decade ("Rashomon", "Seven Samurai" and "Tokyo Story") appeared in the top ten of "Sight & Sound"s critics' and directors' polls for the best films of all time in 2002. They also appeared in the 2012 polls, with "Tokyo Story" (1953) dethroning "Citizen Kane" at the top of the 2012 directors' poll.
War movies, previously restricted by SCAP, began to be produced again. Hideo Sekigawa's "Listen to the Voices of the Sea" (1950), Tadashi Imai's "Himeyuri no Tô - Tower of the Lilies" (1953), Keisuke Kinoshita's "Twenty-Four Eyes" (1954), Kon Ichikawa's "The Burmese Harp" (1956) and other works offering tragic and sentimental retrospectives of the war experience appeared one after another and exerted a broad social influence. Nostalgia films such as "Battleship Yamato" (1953) and "Eagle of the Pacific" (1953) were also mass-produced. Under these circumstances, movies such as "Emperor Meiji and the Russo-Japanese War (明治天皇と日露大戦争)" (1957), in which Kanjūrō Arashi played Emperor Meiji, also appeared. The commercialization of the Emperor, who before the war had been regarded as sacred and inviolable, was a situation that would have been unthinkable earlier.
The period after the American occupation led to a rise in diversity in movie distribution thanks to the increased output and popularity of the film studios of Toho, Daiei, Shochiku, Nikkatsu, and Toei. This period gave rise to the four great artists of Japanese cinema: Masaki Kobayashi, Akira Kurosawa, Kenji Mizoguchi, and Yasujirō Ozu. Each director dealt with the effects of the war and the subsequent American occupation in unique and innovative ways.
The decade started with Akira Kurosawa's "Rashomon" (1950), which won the Golden Lion at the Venice Film Festival in 1951 and the Academy Honorary Award for Best Foreign Language Film in 1952, and marked the entrance of Japanese cinema onto the world stage. It was also the breakout role for legendary star Toshiro Mifune. In 1953 "Entotsu no mieru basho" by Heinosuke Gosho was in competition at the 3rd Berlin International Film Festival.
The first Japanese film in color was "Carmen Comes Home" directed by Keisuke Kinoshita and released in 1951. A black-and-white version of this film was also available. "Tokyo File 212" (1951) was the first American feature film to be shot entirely in Japan. The lead roles were played by Florence Marly and Robert Peyton. It featured the geisha Ichimaru in a short cameo. Suzuki Ikuzo's Tonichi Enterprises Company co-produced the film. "Gate of Hell", a 1953 film by Teinosuke Kinugasa, was the first movie filmed using Eastmancolor; it was both Daiei's first color film and the first Japanese color movie to be released outside Japan, receiving an Academy Honorary Award in 1954 for Best Costume Design by Sanzo Wada and an Honorary Award for Best Foreign Language Film. It also won the Palme d'Or at the Cannes Film Festival, the first Japanese film to achieve that honour.
The year 1954 saw two of Japan's most influential films released. The first was the Kurosawa epic "Seven Samurai", about a band of hired samurai who protect a helpless village from a rapacious gang of thieves. The same year, Ishirō Honda directed the anti-nuclear monster-drama "Godzilla", which was released in America as "Godzilla, King of the Monsters". Though edited for its Western release, Godzilla became an international icon of Japan and spawned an entire subgenre of "kaiju" films, as well as the longest-running film franchise in history. Also in 1954, another Kurosawa film, "Ikiru" was in competition at the 4th Berlin International Film Festival.
In 1955, Hiroshi Inagaki won an Academy Honorary Award for Best Foreign Language Film for the first film of his "Samurai" trilogy and in 1958 won the Golden Lion at the Venice Film Festival for "Rickshaw Man". Kon Ichikawa directed two anti-war dramas: "The Burmese Harp" (1956), which was nominated for Best Foreign Language Film at the Academy Awards, and "Fires On The Plain" (1959), along with "Enjo" (1958), which was adapted from Yukio Mishima's novel "Temple Of The Golden Pavilion". Masaki Kobayashi made three films which would collectively become known as "The Human Condition Trilogy": "No Greater Love" (1959) and "The Road To Eternity" (1959), with the trilogy completed in 1961 by "A Soldier's Prayer".
Kenji Mizoguchi, who died in 1956, ended his career with a series of masterpieces including "The Life of Oharu" (1952), "Ugetsu" (1953) and "Sansho the Bailiff" (1954). He won the Silver Lion at the Venice Film Festival for "Ugetsu". Mizoguchi's films often deal with the tragedies inflicted on women by Japanese society. Mikio Naruse made "Repast" (1950), "Late Chrysanthemums" (1954), "The Sound of the Mountain" (1954) and "Floating Clouds" (1955). Yasujirō Ozu began directing color films with "Equinox Flower" (1958), and later "Good Morning" (1959) and "Floating Weeds" (1959), which was adapted from his earlier silent "A Story of Floating Weeds" (1934), and was shot by "Rashomon" and "Sansho the Bailiff" cinematographer Kazuo Miyagawa.
The Blue Ribbon Awards were established in 1950. The first winner for Best Film was "Until We Meet Again" by Tadashi Imai.
The number of films produced, and the cinema audience reached a peak in the 1960s. Most films were shown in double bills, with one half of the bill being a "program picture" or B-movie. A typical program picture was shot in four weeks. The demand for these program pictures in quantity meant the growth of film series such as "The Hoodlum Soldier" or "Akumyo".
The huge level of activity of 1960s Japanese cinema also resulted in many classics. Akira Kurosawa directed the 1961 classic "Yojimbo". Yasujirō Ozu made his final film, "An Autumn Afternoon", in 1962. Mikio Naruse directed the wide screen melodrama "When a Woman Ascends the Stairs" in 1960; his final film was 1967's "Scattered Clouds".
Kon Ichikawa captured the watershed 1964 Olympics in his three-hour documentary "Tokyo Olympiad" (1965). Seijun Suzuki was fired by Nikkatsu for "making films that don't make any sense and don't make any money" after his surrealist yakuza flick "Branded to Kill" (1967).
The 1960s were the peak years of the "Japanese New Wave" movement, which began in the 1950s and continued through the early 1970s. Nagisa Oshima, Kaneto Shindo, Masahiro Shinoda, Susumu Hani and Shohei Imamura emerged as major filmmakers during the decade. Oshima's "Cruel Story of Youth", "Night and Fog in Japan" and "Death By Hanging", along with Shindo's "Onibaba", Hani's "Kanojo to kare" and Imamura's "The Insect Woman", became some of the better-known examples of Japanese New Wave filmmaking. Documentary played a crucial role in the New Wave, as directors such as Hani, Kazuo Kuroki, Toshio Matsumoto, and Hiroshi Teshigahara moved from documentary into fiction film, while feature filmmakers like Oshima and Imamura also made documentaries. Shinsuke Ogawa and Noriaki Tsuchimoto became the most important documentarists: "two figures [that] tower over the landscape of Japanese documentary."
Teshigahara's "Woman in the Dunes" (1964) won the Special Jury Prize at the Cannes Film Festival, and was nominated for Best Director and Best Foreign Language Film Oscars. Masaki Kobayashi's "Kwaidan" (1965) also picked up the Special Jury Prize at Cannes and received a nomination for Best Foreign Language Film at the Academy Awards. "Bushido, Samurai Saga" by Tadashi Imai won the Golden Bear at the 13th Berlin International Film Festival. "Immortal Love" by Keisuke Kinoshita and "Twin Sisters of Kyoto" and "Portrait of Chieko", both by Noboru Nakamura, also received nominations for Best Foreign Language Film at the Academy Awards. "Lost Spring", also by Nakamura, was in competition for the Golden Bear at the 17th Berlin International Film Festival.
The 1970s saw the cinema audience drop due to the spread of television. Total audience declined from 1.2 billion in 1960 to 0.2 billion in 1980.
Film companies fought back in various ways, such as the bigger budget films of Kadokawa Pictures, or including increasingly sexual or violent content and language which could not be shown on television. The resulting pink film industry became the stepping stone for many young independent filmmakers. The seventies also saw the start of the "idol eiga", films starring young "idols", who would bring in audiences due to their fame and popularity.
Toshiya Fujita made the revenge film "Lady Snowblood" in 1973. In the same year, Yoshishige Yoshida made the film "Coup d'État", a portrait of Ikki Kita, the leader of the Japanese coup of February 1936. Its experimental cinematography and mise-en-scène, as well as its avant-garde score by Ichiyanagi Sei, garnered it wide critical acclaim within Japan.
In 1976 the Hochi Film Award was created. The first winner for Best Film was "The Inugamis" by Kon Ichikawa. Nagisa Oshima directed "In the Realm of the Senses" (1976), a film detailing a crime of passion involving Sada Abe set in the 1930s. Controversial for its explicit sexual content, it has never been seen uncensored in Japan.
Kinji Fukasaku completed the epic "Battles Without Honor and Humanity" series of yakuza films. Yoji Yamada introduced the commercially successful "Tora-San" series, while also directing other films, notably the popular "The Yellow Handkerchief", which won the first Japan Academy Prize for Best Film in 1978. New wave filmmakers Susumu Hani and Shōhei Imamura retreated to documentary work, though Imamura made a dramatic return to feature filmmaking with "Vengeance Is Mine" (1979).
"Dodes'ka-den" by Akira Kurosawa and "Sandakan No. 8" by Kei Kumai were nominated to the Academy Award for Best Foreign Language Film.
The 1980s saw the decline of the major Japanese film studios and their associated chains of cinemas, with major studios Toho and Toei barely staying in business, Shochiku supported almost solely by the "Otoko wa tsurai" films, and Nikkatsu declining even further.
Of the older generation of directors, Akira Kurosawa directed "Kagemusha" (1980), which won the Palme d'Or at the 1980 Cannes Film Festival, and "Ran" (1985). Seijun Suzuki made a comeback beginning with "Zigeunerweisen" in 1980. Shohei Imamura won the Palme d'Or at the Cannes Film Festival for "The Ballad of Narayama" (1983). Yoshishige Yoshida made "A Promise" (1986), his first film since 1973's "Coup d'État".
New directors who appeared in the 1980s include actor Juzo Itami, who directed his first film, "The Funeral", in 1984, and achieved critical and box office success with "Tampopo" in 1985; Shinji Sōmai, an artistically inclined populist director who made films such as the youth-focused "Typhoon Club" and the critically acclaimed Roman porno "Love Hotel"; and Kiyoshi Kurosawa, who would attract international attention beginning in the mid-1990s and made his debut with pink films and genre horror.
During the 1980s, anime rose in popularity, with new animated movies released every summer and winter, often based upon popular anime television series. Mamoru Oshii released his landmark "Angel's Egg" in 1985. Hayao Miyazaki adapted his manga series "Nausicaä of the Valley of Wind" into a feature film of the same name in 1984. Katsuhiro Otomo followed suit by adapting his own manga "Akira" into a feature film of the same name in 1988.
Home video made possible the creation of a direct-to-video film industry.
Mini theaters, a type of independent movie theater characterized by a smaller size and seating capacity in comparison to larger movie theaters, gained popularity during the 1980s. Mini theaters helped bring independent and arthouse films from other countries, as well as films produced in Japan by unknown Japanese filmmakers, to Japanese audiences.
Because of economic recessions, the number of movie theaters in Japan had been steadily decreasing since the 1960s. The 1990s saw the reversal of this trend and the introduction of the multiplex in Japan. At the same time, the popularity of mini theaters continued.
Takeshi Kitano emerged as a significant filmmaker with works such as "Sonatine" (1993), "Kids Return" (1996) and "Hana-bi" (1997), which was given the Golden Lion at the Venice Film Festival. Shōhei Imamura again won the Golden Palm (shared with Iranian director Abbas Kiarostami), this time for "The Eel" (1997). He became the fifth two-time recipient, joining Alf Sjöberg, Francis Ford Coppola, Emir Kusturica and Bille August.
Kiyoshi Kurosawa gained international recognition following the release of "Cure" (1997). Takashi Miike launched a prolific career with titles such as "Audition" (1999), "Dead or Alive" (1999) and "The Bird People in China" (1998). Former documentary filmmaker Hirokazu Koreeda launched an acclaimed feature career with "Maborosi" (1996) and "After Life" (1999).
Hayao Miyazaki directed two mammoth box office and critical successes, "Porco Rosso" (1992) – which beat "E.T. the Extra-Terrestrial" (1982) as the highest-grossing film in Japan – and "Princess Mononoke" (1997), which also claimed the top box office spot until "Titanic" (1997).
Several new anime directors rose to widespread recognition, bringing with them notions of anime as not only entertainment, but modern art. Mamoru Oshii released the internationally acclaimed philosophical science fiction action film "Ghost in the Shell" in 1995. Satoshi Kon directed the award-winning psychological thriller "Perfect Blue". Hideaki Anno also gained considerable recognition with "The End of Evangelion" in 1997.
The number of movies being shown in Japan steadily increased, with about 821 films released in 2006. Movies based on Japanese television series were especially popular during this period. Anime films now accounted for 60 percent of Japanese film production. The 1990s and 2000s are considered to be "Japanese Cinema's Second Golden Age", due to the immense popularity of anime, both within Japan and overseas.
Although not a commercial success, "All About Lily Chou-Chou" directed by Shunji Iwai was honored at the Berlin, Yokohama and Shanghai Film Festivals in 2001. Takeshi Kitano appeared in "Battle Royale" and directed and starred in "Dolls" and "Zatoichi". Several horror films, "Kairo", "Dark Water", "Yogen", the "Grudge" series and "One Missed Call", met with commercial success. In 2004, "Godzilla: Final Wars", directed by Ryuhei Kitamura, was released to celebrate the 50th anniversary of Godzilla. In 2005, director Seijun Suzuki made his 56th film, "Princess Raccoon". Hirokazu Koreeda claimed film festival awards around the world with two of his films, "Distance" and "Nobody Knows". Female film director Naomi Kawase's film "The Mourning Forest" won the Grand Prix at the Cannes Film Festival in 2007. Yoji Yamada, director of the "Otoko wa Tsurai yo" series, made a trilogy of acclaimed revisionist samurai films: 2002's "Twilight Samurai", followed by "The Hidden Blade" in 2004 and "Love and Honor" in 2006.
In anime, Hayao Miyazaki directed "Spirited Away" in 2001, breaking Japanese box office records and winning several awards, including the Academy Award for Best Animated Feature in 2003, followed by "Howl's Moving Castle" and "Ponyo" in 2004 and 2008 respectively. In 2004, Mamoru Oshii released the anime movie "Ghost in the Shell 2: Innocence", which received critical praise around the world. His 2008 film "The Sky Crawlers" was met with similarly positive international reception. Satoshi Kon also released three quieter, but nonetheless highly successful films: "Millennium Actress", "Tokyo Godfathers", and "Paprika". Katsuhiro Otomo released "Steamboy", his first animated project since the 1995 short film compilation "Memories", in 2004. In collaboration with Studio 4°C, American director Michael Arias released "Tekkon Kinkreet" in 2006, to international acclaim. After several years of directing primarily lower-key live-action films, Hideaki Anno formed his own production studio and revisited his still-popular "Evangelion" franchise with the "Rebuild of Evangelion" tetralogy, a new series of films providing an alternate retelling of the original story.
In February 2000, the Japan Film Commission Promotion Council was established. On November 16, 2001, the Japanese Foundation for the Promotion of the Arts laws were presented to the House of Representatives. These laws were intended to promote the production of media arts, including film scenery, and stipulate that the government – on both the national and local levels – must lend aid in order to preserve film media. The laws were passed on November 30 and came into effect on December 7. In 2003, at a gathering for the Agency of Cultural Affairs, twelve policies were proposed in a written report to allow public-made films to be promoted and shown at the Film Center of the National Museum of Modern Art.
Several films have received international recognition by being selected to compete in major film festivals: "Caterpillar" by Kōji Wakamatsu was in competition for the Golden Bear at the 60th Berlin International Film Festival and won the Silver Bear for Best Actress, "Outrage" by Takeshi Kitano was in competition for the Palme d'Or at the 2010 Cannes Film Festival, and "Himizu" by Sion Sono was in competition for the Golden Lion at the 68th Venice International Film Festival.
In 2011, Takashi Miike's "Hara-Kiri: Death of a Samurai" was in competition for the Palme d'Or at the Cannes Film Festival, the first 3D film ever to screen in competition at Cannes. The film was co-produced by British independent producer Jeremy Thomas, who had successfully broken Japanese titles such as Nagisa Oshima's "Merry Christmas, Mr Lawrence" and "Taboo", Takeshi Kitano's "Brother", and Miike's "13 Assassins" onto the international stage as producer.
In 2018, Hirokazu Kore-Eda won the Palme d'Or for his movie "Shoplifters" at the 71st Cannes Film Festival, a festival that also featured Ryūsuke Hamaguchi's "Asako I & II" in competition.
Genres of Japanese film include:
Film scholars who are experts in Japanese cinema include:
Cinema of China
The cinema of mainland China is one of three distinct historical threads of Chinese-language cinema together with the cinema of Hong Kong and the cinema of Taiwan.
Cinema was introduced in China in 1896 and the first Chinese film, "Dingjun Mountain", was made in 1905. In the early decades the film industry was centred on Shanghai. The first sound film, "Sing-Song Girl Red Peony", using the sound-on-disc technology, was made in 1931. The 1930s, considered the first "Golden Period" of Chinese cinema, saw the advent of the Leftist cinematic movement. The dispute between Nationalists and Communists was reflected in the films produced. After the Japanese invasion of China and the occupation of Shanghai, the industry in the city was severely curtailed, with filmmakers moving to Hong Kong, Chongqing and other places. A "Solitary Island" period began in Shanghai, where the filmmakers who remained worked in the foreign concessions. "Princess Iron Fan" (1941), the first Chinese animated feature film, was released at the end of this period. It influenced wartime Japanese animation and later Osamu Tezuka. After being completely engulfed by the occupation in 1941, and until the end of the war in 1945, the film industry in the city was under Japanese control.
After the end of the war, a second golden age took place, with production in Shanghai resuming. "Spring in a Small Town" (1948) was named the best Chinese-language film at the 24th Hong Kong Film Awards. After the communist revolution in 1949, domestic films that were already released and a selection of foreign films were banned in 1951, marking the start of an era of strict film censorship in China. Despite this, movie attendance increased sharply. During the Cultural Revolution, the film industry was severely restricted, coming almost to a standstill from 1967 to 1972. The industry flourished following the end of the Cultural Revolution, including the "scar dramas" of the 1980s, such as "Evening Rain" (1980), "Legend of Tianyun Mountain" (1980) and "Hibiscus Town" (1986), depicting the emotional traumas left by the period. Starting in the mid to late 1980s, with films such as "One and Eight" (1983) and "Yellow Earth" (1984), the rise of the Fifth Generation brought increased popularity to Chinese cinema abroad, especially among Western arthouse audiences. Films like "Red Sorghum" (1987), "The Story of Qiu Ju" (1992) and "Farewell My Concubine" (1993) won major international awards. The movement partially ended after the Tiananmen Square protests of 1989. The post-1990 period saw the rise of the Sixth Generation and post-Sixth Generation, both mostly making films outside the main Chinese film system and playing mostly on the international film festival circuit.
Following the international commercial success of films such as "Crouching Tiger, Hidden Dragon" (2000) and "Hero" (2002), the number of co-productions in Chinese-language cinema has increased and there has been a movement of Chinese-language cinema into a domain of large-scale international influence. After "The Dream Factory" (1997) demonstrated the viability of the commercial model, and with the growth of the Chinese box office in the new millennium, Chinese films have broken box office records and, as of January 2017, 5 of the top 10 highest-grossing films in China are domestic productions. "Lost in Thailand" (2012) was the first Chinese film to reach CN¥1 billion at the Chinese box office. "Monster Hunt" (2015) was the first to reach CN¥2 billion. "The Mermaid" (2016) was the first to reach CN¥3 billion. "Wolf Warrior 2" (2017) beat them out to become the highest-grossing film in China.
China is the home of the largest movie and drama production complexes and film studios in the world, the Oriental Movie Metropolis and Hengdian World Studios, and in 2010 it had the third largest film industry by number of feature films produced annually. In 2012 the country became the second-largest market in the world by box office receipts. In 2016, the gross box office in China was CN¥45.7 billion (US$6.6 billion). The country has had the largest number of movie screens in the world since 2016, and is expected to become the largest theatrical market by 2019. China has also become a major hub of business for Hollywood studios.
In November 2016, China passed a film law banning content deemed harmful to the "dignity, honor and interests" of the People's Republic and encouraging the promotion of "socialist core values", approved by the National People's Congress Standing Committee. Due to industry regulations, films are typically allowed to stay in theaters for one month. However, studios may apply to regulators to have the limit extended.
Motion pictures were introduced to China in 1896. China was one of the earliest countries to be exposed to the medium of film, due to Louis Lumière sending his cameraman to Shanghai a year after inventing cinematography. The first recorded screening of a motion picture in China took place in Shanghai on August 11, 1896, as an "act" on a variety bill. The first Chinese film, a recording of the Peking opera "Dingjun Mountain", was made in November 1905 in Beijing. For the next decade the production companies were mainly foreign-owned, and the domestic film industry was centered on Shanghai, a thriving entrepot and the largest city in the Far East. In 1913, the first independent Chinese screenplay, "The Difficult Couple", was filmed in Shanghai by Zheng Zhengqiu and Zhang Shichuan. Zhang Shichuan then set up the first Chinese-owned film production company in 1916. The first full-length feature film was "Yan Ruisheng", released in 1921, a docudrama about the killing of a Shanghai courtesan, although it was too crude a film to ever be considered commercially successful. During the 1920s film technicians from the United States trained Chinese technicians in Shanghai, and American influence continued to be felt there for the next two decades. Since film was still in its earliest stages of development, most Chinese silent films at this time were only comic skits or operatic shorts, and technical training was minimal, as this was a period of experimental film.
Later, after trial and error, China was able to draw inspiration from its own traditional values and began producing martial arts films, the first being "Burning of Red Lotus Temple" (1928). "Burning of Red Lotus Temple" was so successful at the box office that its producer, Star Motion Pictures (Mingxing), went on to film 18 sequels, marking the beginning of China's esteemed martial arts films. It was during this period that some of the more important production companies first came into being, notably Mingxing and the Shaw brothers' Tianyi ("Unique"). Mingxing, founded by Zheng Zhengqiu and Zhang Shichuan in 1922, initially focused on comic shorts, including the oldest surviving complete Chinese film, "Laborer's Love" (1922). This soon shifted, however, to feature-length films and family dramas including "Orphan Rescues Grandfather" (1923). Meanwhile, Tianyi shifted their model towards folklore dramas, and also pushed into foreign markets; their film "White Snake" (1926) proved a typical example of their success in the Chinese communities of Southeast Asia. In 1931, the first Chinese sound film, "Sing-Song Girl Red Peony", was made, the product of a cooperation between the Mingxing Film Company's image production and Pathé Frères's sound technology. However, the sound was disc-recorded and then played in the theatre in sync with the action on the screen. The first sound-on-film talkie made in China was either "Spring on Stage" (歌場春色) by Tianyi, or "Clear Sky After Storm" by Great China Studio and Jinan Studio.
However, the first truly important Chinese films were produced beginning in the 1930s, with the advent of the "progressive" or "left-wing" movement, like Cheng Bugao's "Spring Silkworms" (1933), Wu Yonggang's "The Goddess" (1934), and Sun Yu's "The Big Road" (1935). These films were noted for their emphasis on class struggle and external threats (i.e. Japanese aggression), as well as on their focus on common people, such as a family of silk farmers in "Spring Silkworms" and a prostitute in "The Goddess". In part due to the success of these kinds of films, this post-1930 era is now often referred to as the first "golden period" of Chinese cinema. The Leftist cinematic movement often revolved around the Western-influenced Shanghai, where filmmakers portrayed the struggling lower class of an overpopulated city.
Three production companies dominated the market in the early to mid-1930s: the newly formed Lianhua ("United China"), the older and larger Mingxing, and Tianyi. Both Mingxing and Lianhua leaned left (Lianhua's management perhaps more so), while Tianyi continued to make less socially conscious fare.
The period also produced the first big Chinese movie stars, such as Hu Die, Ruan Lingyu, Li Lili, Chen Yanyan, Zhou Xuan, Zhao Dan and Jin Yan. Other major films of the period include "Love and Duty" (1931), "Little Toys" (1933), "New Women" (1934), "Song of the Fishermen" (1934), "Plunder of Peach and Plum" (1934), "Crossroads" (1937), and "Street Angel" (1937). Throughout the 1930s, the Nationalists and the Communists struggled for power and control over the major studios; their influence can be seen in the films the studios produced during this period.
The Japanese invasion of China in 1937, in particular the Battle of Shanghai, ended this golden run in Chinese cinema. All production companies except Xinhua Film Company ("New China") closed shop, and many of the filmmakers fled Shanghai, relocating to Hong Kong, the wartime Nationalist capital Chongqing, and elsewhere. The Shanghai film industry, though severely curtailed, did not stop however, thus leading to the "Solitary Island" period (also known as the "Sole Island" or "Orphan Island"), with Shanghai's foreign concessions serving as an "island" of production in the "sea" of Japanese-occupied territory. It was during this period that artists and directors who remained in the city had to walk a fine line between staying true to their leftist and nationalist beliefs and Japanese pressures. Director Bu Wancang's "Mulan Joins the Army" (1939), with its story of a young Chinese peasant fighting against a foreign invasion, was a particularly good example of Shanghai's continued film-production in the midst of war. This period ended when Japan declared war on the Western allies on December 7, 1941; the solitary island was finally engulfed by the sea of the Japanese occupation. With the Shanghai industry firmly in Japanese control, films like the Greater East Asia Co-Prosperity Sphere-promoting "Eternity" (1943) were produced. At the end of World War II, one of the most controversial Japanese-authorized companies, Manchukuo Film Association, would be separated and integrated into Chinese cinema.
The film industry continued to develop after 1945. Production in Shanghai once again resumed as a new crop of studios took the place that Lianhua and Mingxing studios had occupied in the previous decade. In 1945, Cai Chusheng returned to Shanghai to revive the Lianhua name as the "Lianhua Film Society", together with Shi Dongshan, Meng Junmou and Zheng Junli. This in turn became Kunlun Studios, which would go on to become one of the most important studios of the era (in 1949 it merged with seven other studios to form Shanghai Film Studio), putting out the classics "The Spring River Flows East" (1947), "Myriad of Lights" (1948), "Crows and Sparrows" (1949) and "San Mao, The Little Vagabond" (1949).
Many of these films showed disillusionment with the oppressive rule of Chiang Kai-shek's Nationalist Party and the suffering inflicted on the nation by war. "The Spring River Flows East", a three-hour-long two-parter directed by Cai Chusheng and Zheng Junli, was a particularly strong success. Its depiction of the struggles of ordinary Chinese during the Second Sino-Japanese War, replete with biting social and political commentary, struck a chord with audiences of the time.
Meanwhile, companies like the Wenhua Film Company ("Culture Films") moved away from the leftist tradition and explored the evolution and development of other dramatic genres. Wenhua treated postwar problems in universalistic and humanistic ways, avoiding the family narrative and melodramatic formulae. Excellent examples of Wenhua's fare are its first two postwar features, "Unending Emotions" (1947) and "Fake Bride, Phony Bridegroom" (1947). Another memorable Wenhua film is "Long Live the Missus" (1947), like "Unending Emotions" with an original screenplay by writer Eileen Chang. Wenhua's romantic drama "Spring in a Small Town" (1948), a film by director Fei Mu made shortly before the revolution, is often regarded by Chinese film critics as one of the most important films in the history of Chinese cinema; in 2005, the Hong Kong Film Awards named it the greatest Chinese-language film of the past 100 years. Ironically, it was precisely its artistic quality and apparent lack of "political grounding" that led to its labeling by the Communists as rightist or reactionary, and the film was quickly forgotten by those on the mainland following the Communist victory in China in 1949. However, with the China Film Archive's re-opening after the Cultural Revolution, a new print was struck from the original negative, allowing "Spring in a Small Town" to find a new and admiring audience and to influence an entire new generation of filmmakers. Indeed, an acclaimed remake was made in 2002 by Tian Zhuangzhuang. A Chinese Peking opera film, "A Wedding in the Dream" (1948), by the same director (Fei Mu), was the first Chinese color film.
With the communist revolution in China in 1949, the government saw motion pictures as an important mass art form and tool for propaganda. Starting from 1951, pre-1949 Chinese films, Hollywood and Hong Kong productions were banned as the Communist Party of China sought to tighten control over mass media, producing instead movies centering on peasants, soldiers and workers, such as "Bridge" (1949) and "The White Haired Girl" (1950). One of the main production bases during this transition was the Changchun Film Studio.
The private studios in Shanghai, including Kunlun, Wenhua, Guotai and Datong, were encouraged to make new films from 1949 to 1951. They made approximately 47 films during this period, but soon ran into trouble, owing to the furore over the Kunlun-produced drama "The Life of Wu Xun" (1950), directed by Sun Yu and starring veteran Zhao Dan. The feature was accused in an anonymous article in "People's Daily" in May 1951 of spreading feudal ideas. After the article was revealed to have been penned by Mao Zedong, the film was banned, a Film Steering Committee was formed to "re-educate" the film industry, and within two years these private studios were all incorporated into the state-run Shanghai Film Studio.
The Communist regime solved the problem of a lack of film theaters by building mobile projection units which could tour the remote regions of China, ensuring that even the poorest could have access to films. By 1965 there were around 20,393 such units. The number of movie-viewers hence increased sharply, partly bolstered by the fact that film tickets were given out to work units and attendance was compulsory, with admissions rising from 47 million in 1949 to 4.15 billion in 1959. In the 17 years between the founding of the People's Republic of China and the Cultural Revolution, 603 feature films and 8,342 reels of documentaries and newsreels were produced, sponsored mostly as Communist propaganda by the government. For example, in "Guerrilla on the Railroad" (铁道游击队), dated 1956, the Chinese Communist Party was depicted as the primary resistance force against the Japanese in the war against invasion. Chinese filmmakers were sent to Moscow to study the Soviet socialist realism style of filmmaking. The Beijing Film Academy was established in 1950 and officially opened in 1956. One important film of this era is "This Life of Mine" (1950), directed by Shi Hu, which follows an old beggar reflecting on his past life as a policeman working for the various regimes since 1911. The first widescreen Chinese film was produced in 1960. Animated films using a variety of folk arts, such as papercuts, shadow plays, puppetry, and traditional paintings, also were very popular for entertaining and educating children. The most famous of these, the classic "Havoc in Heaven" (two parts, 1961 and 1964), was made by Wan Laiming of the Wan Brothers and won the Outstanding Film award at the London International Film Festival.
The thawing of censorship in 1956–57 (known as the Hundred Flowers Campaign) and the early 1960s led to more indigenous Chinese films being made which were less reliant on their Soviet counterparts. During this campaign the sharpest criticisms came from the satirical comedies of Lü Ban. "Before the New Director Arrives" exposes the hierarchical relationships between cadres, while his next film, "The Unfinished Comedy" (1957), was labelled as a "poisonous weed" during the Anti-Rightist Movement and Lü was banned from directing for life; "The Unfinished Comedy" was only screened after Mao's death. Other noteworthy films produced during this period were adaptations of literary classics, such as Sang Hu's "The New Year's Sacrifice" (1956; adapted from a Lu Xun story) and Shui Hua's "The Lin Family Shop" (1959; adapted from a Mao Dun story). The most prominent filmmaker of this era was Xie Jin, whose three films in particular, "Woman Basketball Player No. 5" (1957), "The Red Detachment of Women" (1961) and "Two Stage Sisters" (1964), exemplify China's increased expertise at filmmaking during this time. Films made during this period are polished and exhibit high production value and elaborate sets. While Beijing and Shanghai remained the main centers of production, between 1957 and 1960 the government built regional studios in Guangzhou, Xi'an and Chengdu to encourage representations of ethnic minorities in films. Chinese cinema began to directly address the issue of such ethnic minorities during the late 1950s and early 1960s, in films like "Five Golden Flowers" (1959), "Third Sister Liu" (1960), "Serfs" (1963) and "Ashima" (1964).
During the Cultural Revolution, the film industry was severely restricted. Almost all previous films were banned, and only a few new ones were produced, the so-called "revolutionary model operas". The most notable of these was a ballet version of the revolutionary opera "The Red Detachment of Women", directed by Pan Wenzhan and Fu Jie in 1970. Feature film production came almost to a standstill in the early years from 1967 to 1972. Movie production revived after 1972 under the strict jurisdiction of the Gang of Four until 1976, when they were overthrown. The few films that were produced during this period, such as 1975's "Breaking with Old Ideas", were highly regulated in terms of plot and characterization.
In the years immediately following the Cultural Revolution, the film industry again flourished as a medium of popular entertainment. Production rose steadily, from 19 features in 1977 to 125 in 1986. Domestically produced films played to large audiences, and tickets for foreign film festivals sold quickly. The industry tried to revive crowds by making more innovative and "exploratory" films like their counterparts in the West.
In the 1980s the film industry fell on hard times, faced with the dual problems of competition from other forms of entertainment and concern on the part of the authorities that many of the popular thriller and martial arts films were socially unacceptable. In January 1986 the film industry was transferred from the Ministry of Culture to the newly formed Ministry of Radio, Cinema, and Television to bring it under "stricter control and management" and to "strengthen supervision over production."
The end of the Cultural Revolution brought the release of "scar dramas", which depicted the emotional traumas left by this period. The best-known of these is probably Xie Jin's "Hibiscus Town" (1986), although they could be seen as late as the 1990s with Tian Zhuangzhuang's "The Blue Kite" (1993). In the 1980s, open criticism of certain past Communist Party policies was encouraged by Deng Xiaoping as a way to reveal the excesses of the Cultural Revolution and the earlier Anti-Rightist Campaign, also helping to legitimize Deng's new policies of "reform and opening up." For instance, the Best Picture prize in the inaugural 1981 Golden Rooster Awards was given to two "scar dramas", "Evening Rain" (Wu Yonggang, Wu Yigong, 1980) and "Legend of Tianyun Mountain" (Xie Jin, 1980).
Many scar dramas were made by members of the Fourth Generation whose own careers or lives had suffered during the events in question, while younger, Fifth Generation directors such as Tian tended to focus on less controversial subjects of the immediate present or the distant past. Official enthusiasm for scar dramas waned by the 1990s when younger filmmakers began to confront negative aspects of the Mao era. "The Blue Kite", though sharing a similar subject with the earlier scar dramas, was more realistic in style, and was made only by obfuscating its real script. Shown abroad, it was banned from release in mainland China, while Tian himself was banned from making any films for nearly a decade afterward. After the 1989 Tiananmen Square protests, few if any scar dramas were released domestically in mainland China.
Beginning in the mid-to-late 1980s, the rise of the so-called Fifth Generation of Chinese filmmakers brought increased popularity to Chinese cinema abroad. Most of the filmmakers who made up the Fifth Generation had graduated from the Beijing Film Academy in 1982, and included Zhang Yimou, Tian Zhuangzhuang, Chen Kaige, Zhang Junzhao, Li Shaohong, Wu Ziniu and others. These graduates constituted the first group of filmmakers to graduate since the Cultural Revolution, and they soon jettisoned traditional methods of storytelling in favor of a freer, more unorthodox symbolic approach. After the so-called scar literature in fiction had paved the way for frank discussion, Zhang Junzhao's "One and Eight" (1983) and Chen Kaige's "Yellow Earth" (1984) in particular were taken to mark the beginnings of the Fifth Generation. The most famous of the Fifth Generation directors, Chen Kaige and Zhang Yimou, went on to produce celebrated works such as "King of the Children" (1987), "Ju Dou" (1989), "Raise the Red Lantern" (1991) and "Farewell My Concubine" (1993), which were acclaimed not only by Chinese cinema-goers but also by the Western arthouse audience. Tian Zhuangzhuang's films, though less well known by Western viewers, were well noted by directors such as Martin Scorsese. It was during this period that Chinese cinema began reaping the rewards of international attention, including the 1988 Golden Bear for "Red Sorghum", the 1992 Golden Lion for "The Story of Qiu Ju", the 1993 Palme d'Or for "Farewell My Concubine", and three Best Foreign Language Film nominations from the Academy Awards. All these award-winning films starred actress Gong Li, who became the Fifth Generation's most recognizable star, especially to international audiences.
Diverse in style and subject, the Fifth Generation directors' films ranged from black comedy (Huang Jianxin's "The Black Cannon Incident", 1985) to the esoteric (Chen Kaige's "Life on a String", 1991), but they share a common rejection of the socialist-realist tradition practiced by earlier Chinese filmmakers in the Communist era. Other notable Fifth Generation directors include Wu Ziniu, Hu Mei, Li Shaohong and Zhou Xiaowen. Fifth Generation filmmakers reacted against the ideological purity of Cultural Revolution cinema. By relocating to regional studios, they began to explore the actuality of local culture in a somewhat documentarian fashion. Instead of stories depicting heroic military struggles, the films were built out of the drama of ordinary people's daily lives. They also retained a political edge, but aimed at exploring issues rather than recycling approved policy. While Cultural Revolution films used character types, the younger directors favored psychological depth along the lines of European cinema. They adopted complex plots, ambiguous symbolism, and evocative imagery. Some of their bolder works with political overtones were banned by Chinese authorities.
These films introduced new genres of stories and a new style of shooting as well: directors utilized extensive color and long shots to present and explore the history and structure of national culture. Because the new films were so intricate, they appealed mostly to educated audiences. The new style was profitable for some and helped filmmakers make strides in the business. It allowed directors to get away from reality and show their artistic sense.
The Fourth Generation also returned to prominence. Given their label after the rise of the Fifth Generation, these were directors whose careers were stalled by the Cultural Revolution and who were professionally trained prior to 1966. Wu Tianming, in particular, made outstanding contributions by helping to finance major Fifth Generation directors under the auspices of the Xi'an Film Studio (which he took over in 1983), while continuing to make films like "Old Well" (1986) and "The King of Masks" (1996).
The Fifth Generation movement ended in part after the 1989 Tiananmen Incident, although its major directors continued to produce notable works. Several of its filmmakers went into self-imposed exile: Wu Tianming moved to the United States (but later returned), Huang Jianxin left for Australia, while many others went into television-related work.
During a period when socialist dramas were beginning to lose viewership, the Chinese government began to involve itself more deeply in the world of popular culture and cinema by creating the official genre of the "main melody" (主旋律), inspired by Hollywood's strides in musical dramas. In 1987, the Ministry of Radio, Film and Television issued a statement encouraging the making of movies that emphasize the main melody to "invigorate national spirit and national pride". The expression "main melody" refers to the musical term leitmotif, which translates to the "theme of our times", which scholars suggest is representative of China's socio-political climate and the cultural context of popular cinema. These main melody films (主旋律电影), still produced regularly in modern times, try to emulate the commercial mainstream by the use of Hollywood-style music and special effects. A significant feature of these films is the incorporation of a "red song", a song written as propaganda to support the People's Republic of China. By revolving the film around the motif of a red song, the film is able to gain traction at the box office, as songs are generally thought to be more accessible than a film. Theoretically, once the red song dominates the charts, it will stir interest in the film it accompanies.
Main melody dramas are often subsidized by the state and have free access to government and military personnel. The Chinese government spends between "one and two million RMBs" annually to support the production of films in the main melody genre. August 1st Film Studio, the film and TV production arm of the People's Liberation Army, is a studio which produces main melody cinema. Main melody films, which often depict past military engagements or are biopics of first-generation CCP leaders, have won several Best Picture prizes at the Golden Rooster Awards. Some of the more famous main melody dramas include the ten-hour epic "Decisive Engagement" (大决战, 1991), directed by Cai Jiawei, Yang Guangyuan and Wei Lian; "The Opium War" (1997), directed by Xie Jin; and "The Founding of a Republic" (2009), directed by Han Sanping and Fifth Generation director Huang Jianxin. "The Founding of an Army" (2017) was commissioned by the government to celebrate the 90th anniversary of the People's Liberation Army and is the third instalment in "The Founding of a Republic" series. The film featured many young Chinese pop singers who were already well established in the industry, including Li Yifeng, Liu Haoran, and Lay Zhang, so as to further the film's reputation as a main melody drama.
The post-1990 era has been labelled the "return of the amateur filmmaker", as state censorship policies after the Tiananmen Square demonstrations produced an edgy underground film movement loosely referred to as the Sixth Generation. Owing to the lack of state funding and backing, these films were shot quickly and cheaply, using materials like 16 mm film and digital video and mostly non-professional actors and actresses, producing a documentary feel, often with long takes, hand-held cameras, and ambient sound; more akin to Italian neorealism and cinéma vérité than the often lush, far more considered productions of the Fifth Generation. Unlike the Fifth Generation, the Sixth Generation brings a more individualistic, anti-romantic view of life and pays far closer attention to contemporary urban life, especially as affected by disorientation, rebellion and dissatisfaction with the tensions of China's contemporary socialist market economy and its broader cultural background. Many of their films were made on extremely low budgets (Jia Zhangke, for example, shoots on digital video, and formerly on 16 mm; Wang Xiaoshuai's "The Days" (1993) was made for US$10,000). The titles and subjects of many of these films reflect the Sixth Generation's concerns. The Sixth Generation takes an interest in marginalized individuals and the less represented fringes of society. For example, Zhang Yuan's hand-held "Beijing Bastards" (1993) focuses on youth punk subculture and features artists like Cui Jian, Dou Wei and He Yong, who were frowned upon by many state authorities, while Jia Zhangke's debut film "Xiao Wu" (1997) concerns a provincial pickpocket.
As the Sixth Generation gained international exposure, many subsequent movies were joint ventures and projects with international backers, but remained quite resolutely low-key and low-budget. Jia's "Platform" (2000) was funded in part by Takeshi Kitano's production house, while his "Still Life" was shot on HD video. "Still Life", a surprise addition, won the Golden Lion at the 2006 Venice International Film Festival. The film, which concerns provincial workers around the Three Gorges region, sharply contrasts with the works of Fifth Generation Chinese directors like Zhang Yimou and Chen Kaige, who were at the time producing "House of Flying Daggers" (2004) and "The Promise" (2005). It featured no star of international renown and was acted mostly by non-professionals.
Many Sixth Generation films have highlighted the negative attributes of China's entry into the modern capitalist market. Li Yang's "Blind Shaft" (2003), for example, is an account of two murderous con-men in the unregulated and notoriously dangerous mining industry of northern China (Li refused the tag of Sixth Generation, although he admitted he was not Fifth Generation), while Jia Zhangke's "The World" (2004) emphasizes the emptiness of globalization against the backdrop of an internationally themed amusement park.
Some of the more prolific Sixth Generation directors to have emerged are Wang Xiaoshuai ("The Days", "Beijing Bicycle", "So Long, My Son"), Zhang Yuan ("Beijing Bastards", "East Palace West Palace"), Jia Zhangke ("Xiao Wu", "Unknown Pleasures", "Platform", "The World", "A Touch of Sin", "Mountains May Depart", "Ash is Purest White"), He Jianjun ("Postman") and Lou Ye ("Suzhou River", "Summer Palace"). One young director who does not share most of the concerns of the Sixth Generation is Lu Chuan ("", 2004; "City of Life and Death", 2010).
At the 2018 Cannes Film Festival, two of China's Sixth Generation filmmakers, Jia Zhangke and Zhang Ming – whose grim works transformed Chinese cinema in the 1990s – showed their films on the French Riviera. While both directors represent Chinese cinema, their profiles are quite different: the 49-year-old Jia set up the Pingyao International Film Festival in 2017, while Zhang, a 56-year-old film school professor, spent years working on government commissions and domestic TV shows after struggling with his own projects. Despite their different profiles, they mark an important cornerstone in Chinese cinema and are both credited with bringing Chinese movies to the international big screen. Jia Zhangke's latest film "Ash Is Purest White" was selected to compete for the Palme d'Or of the 71st Cannes Film Festival, the highest prize awarded at the festival. A gangster revenge drama, it is Jia's fifth film to compete at Cannes and his most expensive and mainstream film to date. Back in 2013, Jia won the Best Screenplay Award for "A Touch of Sin", following nominations for "Unknown Pleasures" in 2002 and "24 City" in 2008. In 2014, he was a member of the official jury, and the following year his film "Mountains May Depart" was nominated. According to the entertainment website Variety, a record number of Chinese films were submitted that year, but only Jia's romantic drama was selected to compete for the Palme d'Or. Meanwhile, Zhang made his debut at Cannes with "The Pluto Moment", a slow-moving relationship drama about a team of filmmakers scouting for locations and musical talent in China's rural hinterland. The film is Zhang's highest-profile production so far, starring actor Wang Xuebing in the leading role. It was partly financed by iQiyi, the company behind one of China's most popular online video streaming sites.
There is a growing number of independent seventh or post-Sixth Generation filmmakers making films with extremely low budgets and using digital equipment. They are the so-called dGeneration (for digital). These films, like those of the Sixth Generation filmmakers, are mostly made outside the Chinese film system and are shown mostly on the international film festival circuit. Ying Liang and Jian Yi are two of this generation's filmmakers. Ying's "Taking Father Home" (2005) and "The Other Half" (2006) are both representative of the generation's feature-film trends. Liu Jiayin made two dGeneration feature films, "Oxhide" (2004) and "Oxhide II" (2010), blurring the line between documentary and narrative film. "Oxhide", made by Liu when she was a film student, frames herself and her parents in their claustrophobic Beijing apartment in a narrative praised by critics. "An Elephant Sitting Still", the debut and final film of the late Hu Bo, was another major work, widely considered one of the greatest directorial debuts.
Two decades of reform and commercialization have brought dramatic social changes in mainland China, reflected not only in fiction film but in a growing documentary movement. Wu Wenguang's 70-minute "" (1990) is now seen as one of the first works of this "New Documentary Movement" (NDM) in China. "Bumming", made between 1988 and 1990, contains interviews with five young artists eking out a living in Beijing, subject to state-authorized tasks. Shot using a camcorder, the documentary ends with four of the artists moving abroad after the 1989 Tiananmen Protests. "Dance with the Farm Workers" (2001) is another documentary by Wu.
Another internationally acclaimed documentary is Wang Bing's nine-hour tale of deindustrialization "" (2003). Wang's subsequent documentaries, "" (2007), "Crude Oil" (2008), "Man with No Name" (2009), "Three Sisters" (2012) and "Feng ai" (2013), cemented his reputation as a leading documentarist of the movement.
Li Hong, the first woman in the NDM, relates in "Out of Phoenix Bridge" (1997) the story of four young women who, like millions of other men and women moving from rural areas to the big cities, have come to Beijing to make a living.
The New Documentary Movement in recent times has overlapped with dGeneration filmmaking, with most documentaries being shot cheaply and independently in the digital format. Xu Xin's "Karamay" (2010), Zhao Liang's "Behemoth", Huang Weikai's "Disorder" (2009), Zhao Dayong's "Ghost Town" (2009), Du Haibin's "1428" (2009), Xu Tong's "Fortune Teller" (2010) and Li Ning's "Tape" (2010) were all shot in digital format. All have made their impact on the international documentary scene, and the use of the digital format allows for works of greater length.
Inspired by the success of Disney animation, the self-taught pioneering Wan brothers, Wan Laiming and Wan Guchan, made the first Chinese animated short in the 1920s, thus inaugurating the history of Chinese animation. (Chen Yuanyuan 175)
In 1937, the Wan brothers decided to produce 《铁扇公主》 "Princess Iron Fan", the first Chinese animated feature film and the fourth in the world, after the American features "Snow White", "Gulliver's Travels", and "The Adventure of Pinocchio". It was at this time that Chinese animation as an art form rose to prominence on the world stage. Completed in 1941, the film was released under China United Pictures and aroused a great response in Asia. Japanese animator Osamu Tezuka once said that he gave up medicine after watching the cartoon and decided to pursue animation.
During this golden era, Chinese animation developed a variety of styles, including ink animation, shadow-play animation, puppet animation, and so on. Some of the most representative works are 《大闹天宫》 "Uproar in Heaven", 《哪吒闹海》 "Nezha's Rebellion in the Sea" and 《天书奇谈》 "Heavenly Book", which have won high praise and numerous awards around the world.
After Deng Xiaoping's Reform Period and the "opening up" of China, the movies 《葫芦兄弟》 "Calabash Brothers", 《黑猫警长》 "Black Cat Sheriff", 《阿凡提》 "Avanti Story" and other impressive animated movies were released. At this time, however, Chinese audiences still favored Japan's more distinctive, American- and European-influenced animated works over the less advanced domestic ones.
In the 1990s, digital production methods replaced manual hand-drawing; however, even with the use of advanced technology, none of the animated works was considered a breakthrough film. Animated films that tried to cater to all age groups, such as "Lotus Lantern" and "Storm Resolution", did not attract much attention. The only animated works that achieved real popularity were those catering to children, such as 《喜羊羊与灰太狼》 "Pleasant Goat and Big Big Wolf".
During this period, the technical level of Chinese domestic animation production was comprehensively established, and 3D animation films became the mainstream. However, as more and more foreign films (from Japan, Europe, and the United States, for example) were imported into China, Chinese animated works were left in the shadow of these foreign animated films.
It was only with the release of 《西游记之大圣归来》 "Journey to the West: The Return of the Monkey King" in 2015, a computer-animated film making extensive use of CGI, that Chinese animated works took back the reins. The movie was a big hit in 2015 and broke the box office record for Chinese domestic animated movies with CN¥956 million at China's box office.
After the success of "Journey to the West", several other high-quality animated films were released, such as 《风雨咒》 "Wind Language Curse" and 《白蛇缘起》 "White Snake". Though none of these films made major headway at the box office or in popularity, they made filmmakers more and more interested in animated works.
This all changed with the breakthrough animated film 《哪吒之魔童降世》 "Nezha". Released in 2019, it became the second highest-grossing film of all time in China. It was with this film that Chinese animated films finally broke the notion in China that domestic animation is only for children. With "Nezha" (2019), Chinese animation has come to be known as a veritable source of entertainment for all ages.
With China's liberalization in the late 1970s and its opening up to foreign markets, commercial considerations made their impact on post-1980s filmmaking. Traditionally, arthouse movies screened in China seldom made enough to break even. An example is Fifth Generation director Tian Zhuangzhuang's "The Horse Thief" (1986), a narrative film with minimal dialogue about a Tibetan horse thief. The film, showcasing exotic landscapes, was well received by Chinese and some Western arthouse audiences, but did poorly at the box office.
Tian's later "The Warrior and the Wolf" (2010) was a similar commercial failure. Prior to these, there were examples of successful commercial films in the post-liberalization period. One was the romance film "Romance on the Lu Mountain" (1980), which was a success with older Chinese audiences. The film entered the Guinness Book of Records as the longest-running film on a first run. Jet Li's cinematic debut "Shaolin Temple" (1982) was an instant hit at home and abroad (in Japan and Southeast Asia, for example). Another successful commercial film was "Murder in 405" (405谋杀案, 1980), a murder thriller.
Feng Xiaogang's "The Dream Factory" (1997) was heralded as a turning point in the Chinese movie industry, a "hesui pian" (Chinese New Year-screened film) which demonstrated the viability of the commercial model in China's socialist market society. Feng has become the most successful commercial director in the post-1997 era. Almost all his films made high returns domestically, while he used ethnic Chinese co-stars like Rosamund Kwan, Jacqueline Wu, Rene Liu and Shu Qi to boost his films' appeal.
In the decade following 2010, owing to the influx of Hollywood films (though the number screened each year is curtailed), Chinese domestic cinema has faced mounting challenges. The industry is growing and domestic films are starting to achieve the box office impact of major Hollywood blockbusters. However, not all domestic films are successful financially. In January 2010 James Cameron's "Avatar" was pulled from non-3D theaters to make room for Hu Mei's biopic "Confucius", but the move led to a backlash against Hu's film. Zhang Yang's 2005 "Sunflower" also made little money, but his earlier, low-budget "Spicy Love Soup" (1997) grossed ten times its budget of ¥3 million. Likewise, the 2006 "Crazy Stone", a sleeper hit, was made for just 3 million HKD/US$400,000. In 2009–11, Feng's "Aftershock" (2010) and Jiang Wen's "Let the Bullets Fly" (2010) became China's highest grossing domestic films, with "Aftershock" earning ¥670 million (US$105 million) and "Let the Bullets Fly" ¥674 million (US$110 million). "Lost in Thailand" (2012) became the first Chinese film to reach ¥1 billion at the Chinese box office and "Monster Hunt" (2015) became the first to reach . As of November 2015, 5 of the top 10 highest-grossing films in China were domestic productions. On February 8, 2016, the Chinese box office set a new single-day gross record, with , beating the previous record of on July 18, 2015. Also in February 2016, "The Mermaid", directed by Stephen Chow, became the highest-grossing film in China, overtaking "Monster Hunt". It is also the first film to reach .
Under the influence of Hollywood science fiction movies like "Prometheus" (released on June 8, 2012), such genres, especially space science fiction films, have risen rapidly in the Chinese film market in recent years. On February 5, 2019, "The Wandering Earth", directed by Frant Gwo, reached $699.8 million worldwide, becoming the third highest-grossing film in the history of Chinese cinema.
He Ping is a director of mostly Western-like films set in Chinese locales. His "Swordsmen in Double Flag Town" (1991) and "Sun Valley" (1995) explore narratives set in the sparse terrain of West China near the Gobi Desert. His historical drama "Red Firecracker, Green Firecracker" (1994) won numerous prizes at home and abroad.
Recent cinema has seen Chinese cinematographers direct some acclaimed films. Other than Zhang Yimou, Lü Yue made "Mr. Zhao" (1998), a black comedy film well received abroad. Gu Changwei's minimalist epic "Peacock" (2005), about a quiet, ordinary Chinese family with three very different siblings in the post-Cultural Revolution era, took home the Silver Bear prize at the 2005 Berlin International Film Festival. Hou Yong is another cinematographer who has made films ("Jasmine Women", 2004) and TV series. There are actors who straddle the dual roles of acting and directing. Xu Jinglei, a popular Chinese actress, has made six movies to date. Her second film, "Letter from an Unknown Woman" (2004), landed her the San Sebastián International Film Festival Best Director award. Another popular actress and director is Zhao Wei, whose directorial debut "So Young" (2013) was a huge box office and critical success.
The most highly regarded Chinese actor-director is undoubtedly Jiang Wen, who has directed several critically acclaimed movies while continuing his acting career. His directorial debut, "In the Heat of the Sun" (1994), was the first PRC film to win Best Picture at the Golden Horse Film Awards held in Taiwan. His other films, like "Devils on the Doorstep" (2000, Cannes Grand Prix) and "Let the Bullets Fly" (2010), were similarly well received. By early 2011, "Let the Bullets Fly" had become the highest-grossing domestic film in China's history.
Since the late 1980s, and progressively through the 2000s, Chinese films have enjoyed considerable box office success abroad. Formerly viewed only by cineastes, Chinese cinema saw its global appeal mount after the international box office and critical success of Ang Lee's period martial arts film "Crouching Tiger, Hidden Dragon", which won the Academy Award for Best Foreign Language Film in 2000. This multi-national production increased its appeal by featuring stars from all parts of the Chinese-speaking world. It provided an introduction to Chinese cinema (and especially the wuxia genre) for many and increased the popularity of many earlier Chinese films. To date "Crouching Tiger" remains the most commercially successful foreign-language film in U.S. history.
Similarly, in 2002, Zhang Yimou's "Hero" was another international box office success. Its cast featured famous actors from mainland China and Hong Kong who were also known to some extent in the West, including Jet Li, Zhang Ziyi, Maggie Cheung and Tony Leung Chiu-Wai. Despite criticisms by some that these two films pander somewhat to Western tastes, "Hero" was a phenomenal success in most of Asia and topped the U.S. box office for two weeks, making enough in the U.S. alone to cover the production costs.
Other films such as "Farewell My Concubine", "2046", "Suzhou River", "The Road Home" and "House of Flying Daggers" were critically acclaimed around the world. The Hengdian World Studios can be seen as the "Chinese Hollywood", with a total area of up to 330 ha. and 13 shooting bases, including a 1:1 copy of the Forbidden City.
The successes of "Crouching Tiger, Hidden Dragon" and "Hero" make it difficult to demarcate the boundary between "Mainland Chinese" cinema and a more international-based "Chinese-language cinema". "Crouching Tiger", for example, was directed by a Taiwan-born American director (Ang Lee) who works often in Hollywood. Its pan-Chinese leads include Mainland Chinese (Zhang Ziyi), Hong Kong (Chow Yun-Fat), Taiwan (Chang Chen) and Malaysian (Michelle Yeoh) actors and actresses; the film was co-produced by an array of Chinese, American, Hong Kong, and Taiwan film companies. Likewise, Lee's Chinese-language "Lust, Caution" (2007) drew a crew and cast from Mainland China, Hong Kong and Taiwan, and includes an orchestral score by French composer Alexandre Desplat. This merging of people, resources and expertise from the three regions and the broader East Asia and the world, marks the movement of Chinese-language cinema into a domain of large scale international influence. Other examples of films in this mold include "The Promise" (2005), "The Banquet" (2006), "Fearless" (2006), "The Warlords" (2007), "Bodyguards and Assassins" (2009) and "Red Cliff" (2008-09). The ease with which ethnic Chinese actresses and actors straddle the mainland and Hong Kong has significantly increased the number of co-productions in Chinese-language cinema. Many of these films also feature South Korean or Japanese actors to appeal to their East Asian neighbours. Some artistes originating from the mainland, like Hu Jun, Zhang Ziyi, Tang Wei and Zhou Xun, obtained Hong Kong residency under the Quality Migrant Admission Scheme and have acted in many Hong Kong productions.
In 2010, Chinese cinema was the third largest film industry by number of feature films produced annually. In 2013, China's gross box office was ¥21.8 billion (US$3.6 billion), the second-largest film market in the world by box office receipts. In January 2013, "Lost in Thailand" (2012) became the first Chinese film to reach ¥1 billion at the box office. As of May 2013, 7 of the top 10 highest-grossing films in China were domestic productions. As of 2014, around half of all tickets were sold online, with the largest ticket-selling sites being Maoyan.com (82 million), Gewara.com (45 million) and Wepiao.com (28 million). In 2014, Chinese films earned ¥1.87 billion outside China. By December 2013 there were 17,000 screens in the country. By January 6, 2014, there were 18,195 screens in the country. Greater China has around 251 IMAX theaters. There were 299 cinema chains (252 rural, 47 urban), 5,813 movie theaters and 24,317 screens in the country in 2014.
The country added about 8,035 screens in 2015 (an average of 22 new screens per day), increasing its total by about 40% to around 31,627 screens, about 7,373 shy of the number of screens in the United States. Chinese films accounted for 61.48% of ticket sales in 2015 (up from 54% the previous year), with more than 60% of ticket sales made online. The average ticket price was down about 2.5% to $5.36 in 2015. The year also witnessed a 51.08% increase in admissions, with 1.26 billion people buying tickets to the cinema in 2015. Chinese films grossed overseas in 2015. During the week of the 2016 Chinese New Year, the country set a new record for the highest box office gross during one week in one territory with , overtaking the previous record of December 26, 2015 to January 1, 2016 in the United States and Canada. Chinese films grossed () in foreign markets in 2016.
As of April 2015, the largest Chinese film company by market value was Alibaba Pictures (US$8.77 billion). Other large companies include Huayi Brothers Media (US$7.9 billion), Enlight Media (US$5.98 billion) and Bona Film Group (US$542 million). The biggest distributors by market share in 2014 were: China Film Group (32.8%), Huaxia Film (22.89%), Enlight Pictures (7.75%), Bona Film Group (5.99%), Wanda Media (5.2%), Le Vision Pictures (4.1%), Huayi Brothers (2.26%), United Exhibitor Partners (2%), Heng Ye Film Distribution (1.77%) and Beijing Anshi Yingna Entertainment (1.52%). The biggest cinema chains in 2014 by box office gross were: Wanda Cinema Line (), China Film Stellar (393.35 million), Dadi Theater Circuit (378.17 million), Shanghai United Circuit (355.07 million), Guangzhou Jinyi Zhujiang (335.39 million), China Film South Cinema Circuit (318.71 million), Zhejiang Time Cinema (190.53 million), China Film Group Digital Cinema Line (177.42 million), Hengdian Cinema Line (170.15 million) and Beijing New Film Association (163.09 million).
Notable independent (non-state-owned) film companies
Huayi Brothers: China's most powerful independent (i.e., non-state-owned) entertainment company, Beijing-based Huayi Brothers is a diversified company engaged in film and TV production, distribution, and theatrical exhibition, as well as talent management. Notable films include 2004's "Kung Fu Hustle" and 2010's "Aftershock", which holds a 91% rating on Rotten Tomatoes, an unusually high score on the aggregator.
Beijing Enlight Media: Under CEO Wang Changtian, Enlight Media rarely misfires in its production and distribution of feature films. Squarely focused on the action and romance genres, Enlight usually places several films in China's top 20 grossers, and currently has in release the country's fourth highest-grossing Chinese-language film, "The Four". Enlight is also a major player in China's TV series production and distribution businesses. The publicly traded, Beijing-based company has achieved a market capitalization of nearly US$1 billion. | https://en.wikipedia.org/wiki?curid=10791 |
Cinema of the United Kingdom
The United Kingdom has had a significant film industry for over a century. While film production reached an all-time high in 1936, the "golden age" of British cinema is usually thought to have occurred in the 1940s, during which the directors David Lean, Michael Powell (with Emeric Pressburger) and Carol Reed produced their most critically acclaimed works. Many British actors have accrued critical success and worldwide recognition, such as Maggie Smith, Roger Moore, Michael Caine, Sean Connery, Daniel Day-Lewis, Judi Dench, Gary Oldman, Emma Thompson, and Kate Winslet. Some of the films with the largest ever box office returns have been made in the United Kingdom, including the third and fourth highest-grossing film franchises ("Harry Potter" and "James Bond").
The identity of the British film industry, particularly as it relates to Hollywood, has often been the subject of debate. Its history has often been affected by attempts to compete with the American industry. The career of the producer Alexander Korda was marked by this objective, the Rank Organisation attempted to do so in the 1940s, and Goldcrest in the 1980s. Numerous British-born directors, including Alfred Hitchcock, Christopher Nolan and Ridley Scott, and performers, such as Charlie Chaplin and Cary Grant, have achieved success primarily through their work in the United States.
In 2009, British films grossed around $2 billion worldwide and achieved a market share of around 7% globally and 17% in the United Kingdom. UK box-office takings totalled £1.1 billion in 2012, with 172.5 million admissions.
The British Film Institute has produced a poll ranking what they consider to be the 100 greatest British films of all time, the BFI Top 100 British films. The annual BAFTA awards hosted by the British Academy of Film and Television Arts are considered to be the British equivalent of the Academy Awards.
The world's first moving picture was shot in Leeds by Louis Le Prince in 1888 and the first moving pictures developed on celluloid film were made in Hyde Park, London in 1889 by British inventor William Friese Greene, who patented the process in 1890.
The first people to build and run a working 35 mm camera in Britain were Robert W. Paul and Birt Acres. They made the first British film "Incident at Clovelly Cottage" in February 1895, shortly before falling out over the camera's patent. Soon several British film companies had opened to meet the demand for new films, such as Mitchell and Kenyon in Blackburn.
Although the earliest British films were of everyday events, the early 20th century saw the appearance of narrative shorts, mainly comedies and melodramas. The early films were often melodramatic in tone, and there was a distinct preference for story lines already known to the audience, in particular, adaptations of Shakespeare plays and Dickens novels.
The Lumière brothers first brought their show to London in 1896. In 1898 American producer Charles Urban expanded the London-based Warwick Trading Company to produce British films, mostly documentary and news.
In 1898 Gaumont-British Picture Corp. was founded as a subsidiary of the French Gaumont Film Company, constructing Lime Grove Studios in West London in 1915 in the first building built in Britain solely for film production. Also in 1898 Hepworth Studios was founded in Lambeth, South London by Cecil Hepworth, the Bamforths began producing films in Yorkshire, and William Haggar began producing films in Wales.
Directed by Walter R. Booth, "Scrooge, or, Marley's Ghost" (1901) is the earliest known film adaptation of Charles Dickens's novella "A Christmas Carol". Booth's "The Hand of the Artist" (1906) has been described as the first British animated film.
In 1902 Ealing Studios was founded by Will Barker, becoming the oldest continuously-operating film studio in the world.
In 1902 the earliest colour film in the world was made; like other films made at the time, it is of everyday events. In 2012 it was found by the National Science and Media Museum in Bradford after lying forgotten in an old tin for 110 years. The previous title for earliest colour film, using Urban's inferior Kinemacolor process, was thought to date from 1909. The re-discovered films were made by pioneer Edward Raymond Turner from London who patented his process on 22 March 1899.
In 1903 Urban formed the Charles Urban Trading Company, which produced early colour films using his patented Kinemacolor process. This was later challenged in court by Greene, causing the company to go out of business in 1915.
In 1903, Cecil Hepworth and Percy Stow directed "Alice in Wonderland", the first film adaptation of Lewis Carroll's children's book "Alice's Adventures in Wonderland".
In 1903 Frank Mottershaw of Sheffield produced the film "A Daring Daylight Robbery", which launched the chase genre.
In 1911 the Ideal Film Company was founded in Soho, London, distributing almost 400 films by 1934, and producing 80.
In 1913 stage director Maurice Elvey began directing British films, becoming Britain's most prolific film director, with almost 200 by 1957.
In 1914 Elstree Studios was founded, and acquired in 1928 by German-born Ludwig Blattner, who invented a magnetic steel tape recording system that was adopted by the BBC in 1930.
In 1920 Gaumont opened Islington Studios, where Alfred Hitchcock got his start, selling out to Gainsborough Pictures in 1927. Also in 1920 Cricklewood Studios was founded by Sir Oswald Stoll, becoming Britain's largest film studio, known for Fu Manchu and Sherlock Holmes film series.
In 1920 the short-lived company Minerva Films was founded in London by the actor Leslie Howard (also producer and director) and his friend and story editor Adrian Brunel. Some of their early films include four written by A. A. Milne including "The Bump", starring C. Aubrey Smith; "Twice Two"; "Five Pound Reward"; and "Bookworms".
By the mid-1920s the British film industry was losing out to heavy competition from the United States, which was helped by its much larger home market – in 1914 25% of films shown in the UK were British, but by 1926 this had fallen to 5%. The Slump of 1924 caused many British film studios to close, resulting in the passage of the Cinematograph Films Act 1927 to boost local production, requiring that cinemas show a certain percentage of British films. The act was technically a success, with audiences for British films becoming larger than the quota required, but it had the effect of creating a market for poor quality, low cost films, made to satisfy the quota. The "quota quickies", as they became known, are often blamed by historians for holding back the development of the industry. However, some British film makers, such as Michael Powell, learnt their craft making such films. The act was modified with the Cinematograph Films Act 1938, which assisted the British film industry by specifying that only films made by and shot in Great Britain would be included in the quota, a change that severely reduced Canadian and Australian film production.
Ironically, the biggest star of the silent era, English comedian Charlie Chaplin, was Hollywood-based.
Scottish solicitor John Maxwell founded British International Pictures (BIP) in 1927. BIP was based at the former British National Studios in Elstree, whose original owners, including producer-director Herbert Wilcox, had run into financial difficulties. One of the company's early films, Alfred Hitchcock's "Blackmail" (1929), is often regarded as the first British sound feature. It was a part-talkie with a synchronized score and sound effects. Earlier in 1929, the first all-talking British feature, "The Clue of the New Pin", was released. It was based on a novel by Edgar Wallace, starring Donald Calthrop, Benita Hume and Fred Raines, and was made by British Lion at their Beaconsfield Studios. John Maxwell's BIP became the Associated British Picture Corporation (ABPC) in 1933. ABPC's studios in Elstree came to be known as the "porridge factory", according to Lou Alexander, "for reasons more likely to do with the quantity of films that the company turned out, than their quality". Elstree (strictly speaking almost all the studios were in neighbouring Borehamwood) became the centre of the British film industry, with six film complexes over the years all in close proximity to each other.
With the advent of sound films, many foreign actors were in less demand, with English received pronunciation commonly used; for example, the voice of Czech actress Anny Ondra in "Blackmail" was substituted by an off-camera Joan Barry during Ondra's scenes.
Starting with John Grierson's "Drifters" (also 1929), the period saw the emergence of the realist Documentary Film Movement, from 1933 associated with the GPO Film Unit. It was Grierson who coined the term "documentary" to describe a non-fiction film, and he produced the movement's most celebrated early films, "Night Mail" (1936), written and directed by Basil Wright and Harry Watt, and incorporating the poem by W. H. Auden towards the end of the short.
Music halls also proved influential in comedy films of this period, and a number of popular personalities emerged, including George Formby, Gracie Fields, Jessie Matthews and Will Hay. These stars often made several films a year, and their productions remained important for morale purposes during World War II.
Many of the British films with larger budgets during the 1930s were produced by London Films, founded by Hungarian émigré Alexander Korda. The success of "The Private Life of Henry VIII" (1933), made at British and Dominion in Elstree, persuaded United Artists and The Prudential to invest in Korda's Denham Film Studios, which opened in May 1936, but both investors suffered losses as a result. Korda's films before the war included "Things to Come", "Rembrandt" (both 1936) and "Knight Without Armour" (1937), as well as the early Technicolor films "The Drum" (1938) and "The Four Feathers" (1939). These had followed closely on from "Wings of the Morning" (1937), the UK's first three-strip Technicolor feature film, made by the local offshoot of 20th Century Fox. Although some of Korda's films indulged in "unrelenting pro-Empire flag waving", those featuring Sabu turned him into "a huge international star"; "for many years" he had the highest profile of any actor of Indian origin. Paul Robeson was also cast in leading roles when "there were hardly any opportunities" for African Americans "to play challenging roles" in their own country's productions.
Rising expenditure and over-optimistic expectations of expansion into the American market caused a financial crisis in 1937, after an all-time high of 192 films were released in 1936. Of the 640 British production companies registered between 1925 and 1936, only 20 were still active in 1937. Moreover, the 1927 Films Act was up for renewal. The replacement Cinematograph Films Act 1938 provided incentives, via a "quality test", for UK companies to make fewer films, but of higher quality, and to eliminate the "quota quickies". Influenced by world politics, it encouraged American investment and imports. One result was the creation of MGM-British, an English subsidiary of the largest American studio, which produced four films before the war, including "Goodbye, Mr. Chips" (1939).
The new venture was initially based at Denham Studios. Korda himself lost control of the facility in 1939 to the Rank Organisation, whose own Pinewood Studios had opened at the end of September 1936. Circumstances forced Korda's "The Thief of Bagdad" (1940), a spectacular fantasy film, to be completed in California, where Korda continued his film career during the war.
By now contracted to Gaumont British, Alfred Hitchcock had settled on the thriller genre by the mid-1930s with "The Man Who Knew Too Much" (1934), "The 39 Steps" (1935) and "The Lady Vanishes" (1938). Lauded in Britain, where he was dubbed "Alfred the Great" by "Picturegoer" magazine, Hitchcock's reputation was beginning to develop overseas, with a "The New York Times" feature writer asserting: "Three unique and valuable institutions the British have that we in America have not. Magna Carta, the Tower Bridge and Alfred Hitchcock, the greatest director of screen melodramas in the world." Hitchcock was then signed up to a seven-year contract by Selznick and moved to Hollywood.
Humphrey Jennings began his career as a documentary film maker just before the war, in some cases working in collaboration with co-directors. "London Can Take It" (with Harry Watt, 1940) detailed the Blitz, while "Listen to Britain" (with Stewart McAllister, 1942) looked at the home front. The Crown Film Unit, part of the Ministry of Information, took over the responsibilities of the GPO Film Unit in 1940. Paul Rotha and Alberto Cavalcanti were colleagues of Jennings. British films began to make use of documentary techniques; Cavalcanti joined Ealing for "Went the Day Well?" (1942).
Many other films helped to shape the popular image of the nation at war. Among the best known of these films are "In Which We Serve" (1942), "We Dive at Dawn" (1943), "Millions Like Us" (1943) and "The Way Ahead" (1944). The war years also saw the emergence of The Archers partnership between director Michael Powell and the Hungarian-born writer-producer Emeric Pressburger with films such as "The Life and Death of Colonel Blimp" (1943) and "A Canterbury Tale" (1944).
Two Cities Films, an independent production company releasing their films through a Rank subsidiary, also made some important films, including the Noël Coward and David Lean collaborations "This Happy Breed" (1944) and "Blithe Spirit" (1945) as well as Laurence Olivier's "Henry V" (1944). By this time, Gainsborough Studios were releasing their series of critically derided but immensely popular period melodramas, including "The Man in Grey" (1943) and "The Wicked Lady" (1945). New stars, such as Margaret Lockwood and James Mason, emerged in the Gainsborough films.
Towards the end of the 1940s, the Rank Organisation, founded in 1937 by J. Arthur Rank, became the dominant force behind British film-making, having acquired a number of British studios and the Gaumont chain (in 1941) to add to its Odeon Cinemas. Rank's serious financial crisis in 1949, with substantial losses and debt, resulted in the contraction of its film production. In practice, Rank maintained an industry duopoly with ABPC (later absorbed by EMI) for many years.
For the moment, the industry hit new heights of creativity in the immediate post-war years. Among the most significant films produced during this period were David Lean's "Brief Encounter" (1945) and his Dickens adaptations "Great Expectations" (1946) and "Oliver Twist" (1948), Carol Reed's thrillers "Odd Man Out" (1947) and "The Third Man" (1949), and Powell and Pressburger's "A Matter of Life and Death" (1946), "Black Narcissus" (1947) and "The Red Shoes" (1948), the most commercially successful film of its year in the United States. Laurence Olivier's "Hamlet" (also 1948) was the first non-American film to win the Academy Award for Best Picture. Ealing Studios (financially backed by Rank) began to produce their most celebrated comedies, with three of the best remembered films, "Whisky Galore!", "Kind Hearts and Coronets" and "Passport to Pimlico" (all 1949), being on release almost simultaneously. Their portmanteau horror film "Dead of Night" (1945) is also particularly highly regarded.
Under the Import Duties Act 1932, HM Treasury levied a 75 per cent tariff on all film imports on 6 August 1947, which became known as the Dalton Duty (after Hugh Dalton, then the Chancellor of the Exchequer). The tax came into effect on 8 August, applying to all imported films, of which the overwhelming majority came from the United States; American film studio revenues from the UK had been in excess of US$68 million in 1946. The following day, 9 August, the Motion Picture Association of America announced that no further films would be supplied to British cinemas until further notice. The Dalton Duty was ended on 3 May 1948, with the American studios again exporting films to the UK, though the Marshall Plan prohibited US film companies from taking foreign exchange out of the nations their films played in.
The Eady Levy, named after Sir Wilfred Eady, was a tax on box office receipts in the United Kingdom intended to support the British film industry. It was established in 1950 and became statutory in 1957. A direct governmental payment to British-based producers would have qualified as a subsidy under the terms of the General Agreement on Tariffs and Trade, and would have led to objections from American film producers. An indirect levy did not qualify as a subsidy, and so was a suitable way of providing additional funding for the UK film industry whilst avoiding criticism from abroad.
During the 1950s, the British industry began to concentrate on popular comedies and World War II dramas aimed more squarely at the domestic audience. The war films were often based on true stories and made in a similar low-key style to their wartime predecessors. They helped to make stars of actors like John Mills, Jack Hawkins and Kenneth More. Some of the most successful included "The Cruel Sea" (1953), "The Dam Busters" (1954), "The Colditz Story" (1955) and "Reach for the Sky" (1956).
The Rank Organisation produced some comedy successes, such as "Genevieve" (1953). The writer/director/producer team of twin brothers John and Roy Boulting also produced a series of successful satires on British life and institutions, beginning with "Private's Progress" (1956), and continuing with (among others) "Brothers in Law" (1957), "Carlton-Browne of the F.O." (1958), and "I'm All Right Jack" (1959).
Popular comedy series included the "Doctor" series, beginning with "Doctor in the House" (1954). The series originally starred Dirk Bogarde, probably the British industry's most popular star of the 1950s, though later films had Michael Craig and Leslie Phillips in leading roles. The Carry On series began in 1958 with regular instalments appearing for the next twenty years. The Italian director-producer Mario Zampi also made a number of successful black comedies, including "Laughter in Paradise" (1951), "The Naked Truth" (1957) and "Too Many Crooks" (1958). Ealing Studios had continued its run of successful comedies, including "The Lavender Hill Mob" (1951) and "The Ladykillers" (1955), but the company ceased production in 1958, after the studios had already been bought by the BBC.
Less restrictive censorship towards the end of the 1950s encouraged film producer Hammer Films to embark on their series of commercially successful horror films. Beginning with adaptations of Nigel Kneale's BBC science fiction serials "The Quatermass Experiment" (1955) and "Quatermass II" (1957), Hammer quickly graduated to "The Curse of Frankenstein" (1957) and "Dracula" (1958), both deceptively lavish and the first gothic horror films in colour. The studio turned out numerous sequels and variants, with English actors Peter Cushing and Christopher Lee being the most regular leads. "Peeping Tom" (1960), a now highly regarded thriller, with horror elements, set in the contemporary period, was badly received by the critics at the time, and effectively finished the career of Michael Powell, its director.
The British New Wave film makers attempted, in commercial feature films released between around 1959 and 1963, to produce social realist films (see also 'kitchen sink realism') conveying narratives about a wider spectrum of people in Britain than the country's earlier films had done. These individuals, principally Karel Reisz, Lindsay Anderson and Tony Richardson, were also involved in the short-lived Oxford film journal "Sequence" and the "Free Cinema" documentary film movement. The 1956 statement of Free Cinema, the name coined by Anderson, asserted: "No film can be too personal. The image speaks. Sound amplifies and comments. Size is irrelevant. Perfection is not an aim. An attitude means a style. A style means an attitude." Anderson, in particular, was dismissive of the commercial film industry. Their documentary films included Anderson's "Every Day Except Christmas", among several sponsored by Ford of Britain, and Richardson's "Momma Don't Allow". Another member of this group, John Schlesinger, made documentaries for the BBC's "Monitor" arts series.
Together with future James Bond co-producer Harry Saltzman, dramatist John Osborne and Tony Richardson established the company Woodfall Films to produce their early feature films. These included adaptations of Richardson's stage productions of Osborne's "Look Back in Anger" (1959), with Richard Burton, and "The Entertainer" (1960) with Laurence Olivier, both from Osborne's own screenplays. Such films as Reisz's "Saturday Night and Sunday Morning" (also 1960), Richardson's "A Taste of Honey" (1961), Schlesinger's "A Kind of Loving" (1962) and "Billy Liar" (1963), and Anderson's "This Sporting Life" (1963) are often associated with a new openness about working-class life or previously taboo issues.
The team of Basil Dearden and Michael Relph, from an earlier generation, "probe[d] into the social issues that now confronted social stability and the establishment of the promised peacetime consensus". "Pool of London" (1950) and "Sapphire" (1959) were early attempts to create narratives about racial tensions and an emerging multi-cultural Britain. Dearden and Relph's "Victim" (1961) was about the blackmail of homosexuals. Influenced by the Wolfenden report of four years earlier, which advocated the decriminalising of homosexual sexual activity, this was "the first British film to deal explicitly with homosexuality". Unlike the New Wave film makers though, critical responses to Dearden's and Relph's work have not generally been positive.
As the 1960s progressed, American studios returned to financially supporting British films, especially those that capitalised on the "swinging London" image propagated by "Time" magazine in 1966. Films like "Darling", "The Knack ...and How to Get It" (both 1965), "Alfie" and "Georgy Girl" (both 1966), all explored this phenomenon. "Blowup" (also 1966), and later "Women in Love" (1969), showed female and then male full-frontal nudity on screen in mainstream British films for the first time.
At the same time, film producers Harry Saltzman and Albert R. Broccoli combined sex with exotic locations, casual violence and self-referential humour in the phenomenally successful James Bond series with Sean Connery in the leading role. The first film "Dr. No" (1962) was a sleeper hit in the UK and the second, "From Russia with Love" (1963), a hit worldwide. By the time of the third film, "Goldfinger" (1964), the series had become a global phenomenon, reaching its commercial peak with "Thunderball" the following year. The series' success led to a spy film boom with many Bond imitations. Bond co-producer Saltzman also instigated a rival series of more realistic spy films based on the novels of Len Deighton. Michael Caine starred as bespectacled spy Harry Palmer in "The Ipcress File" (1965), and two sequels in the next few years. Other more downbeat espionage films were adapted from John le Carré novels, such as "The Spy Who Came in from the Cold" (1965) and "The Deadly Affair" (1966).
American directors were regularly working in London throughout the decade, but several became permanent residents in the UK. Blacklisted in America, Joseph Losey had a significant influence on British cinema in the 1960s, particularly with his collaborations with playwright Harold Pinter and leading man Dirk Bogarde, including "The Servant" (1963) and "Accident" (1967). Voluntary exiles Richard Lester and Stanley Kubrick were also active in the UK. Lester had major hits with The Beatles film "A Hard Day's Night" (1964) and "The Knack ...and How to Get It" (1965), while Kubrick directed "Dr. Strangelove" (1964) and "" (1968). Although Kubrick settled in Hertfordshire in the early 1960s and would remain in England for the rest of his career, these two films retained a strong American influence. Other films of this era involved prominent filmmakers from elsewhere in Europe: "Repulsion" (1965) and "Blowup" (1966) were the first English-language films of the Polish director Roman Polanski and the Italian Michelangelo Antonioni respectively.
Historical films as diverse as "Lawrence of Arabia" (1962), "Tom Jones" (1963), and "A Man for All Seasons" (1966) benefited from the investment of American studios. Major films like "Becket" (1964), "Khartoum" (1966) and "The Charge of the Light Brigade" (1968) were regularly mounted, while smaller-scale films, including "Accident" (1967), were big critical successes. Four of the decade's Academy Award winners for Best Picture were British productions, including the film musical "Oliver!" (1968), based on the Charles Dickens novel "Oliver Twist", which won six Oscars.
After directing several contributions to the BBC's "Wednesday Play" anthology series, Ken Loach began his feature film career with the social realist "Poor Cow" (1967) and "Kes" (1969). Meanwhile, the controversy around Peter Watkins' "The War Game" (1965), which won the Best Documentary Film Oscar in 1967 but had been suppressed by the BBC, who had commissioned it, would ultimately lead Watkins to work exclusively outside Britain.
American studios cut back on British productions, and in many cases withdrew from financing them altogether. Films financed by American interests were still being made, including Billy Wilder's "The Private Life of Sherlock Holmes" (1970), but for a time funds became hard to come by.
More relaxed censorship also brought several controversial films, including Nicolas Roeg and Donald Cammell's "Performance", Ken Russell's "The Devils" (1971), Sam Peckinpah's "Straw Dogs" (1971), and Stanley Kubrick's "A Clockwork Orange" (1971) starring Malcolm McDowell as the leader of a gang of thugs in a dystopian future Britain.
Other films during the early 1970s included the Edwardian drama "The Go-Between" (1971), which won the Palme d'Or at the Cannes Film Festival, Nicolas Roeg's Venice-set supernatural thriller "Don't Look Now" (1973) and Mike Hodges' gangster drama "Get Carter" (1971) starring Michael Caine. Alfred Hitchcock returned to Britain to shoot "Frenzy" (1972). Other productions such as Richard Attenborough's "Young Winston" (1972) and "A Bridge Too Far" (1977) met with mixed commercial success. The British horror film cycle associated with Hammer Film Productions, Amicus and Tigon drew to a close, despite attempts by Hammer to spice up the formula with added nudity and gore. Although some attempts were made to broaden the range of British horror films, such as with "The Wicker Man" (1973), these films made little impact at the box office. In 1976, British Lion, who produced "The Wicker Man", were finally absorbed into the film division of EMI, who had taken over ABPC in 1969. The duopoly in British cinema exhibition, via Rank and now EMI, continued.
Some British producers, including Hammer, turned to television for inspiration, and big screen versions of popular sitcoms like "On the Buses" (1971) and "Steptoe and Son" (1972) proved successful with domestic audiences; the former had greater domestic box office returns in its year than the Bond film "Diamonds Are Forever". In 1973 the established British actor Roger Moore was cast as Bond in "Live and Let Die"; it was a commercial success and Moore would continue in the role for the next 12 years. Low-budget British sex comedies included the "Confessions of ..." series starring Robin Askwith, beginning with "Confessions of a Window Cleaner" (1974). More elevated comedy films came from the Monty Python team, also from television. Their two most successful films were "Monty Python and the Holy Grail" (1975) and "Monty Python's Life of Brian" (1979), the latter a major commercial success, probably at least in part due to the controversy at the time surrounding its subject.
Some American productions did return to the major British studios in 1977–79, including the original "Star Wars" (1977) at Elstree Studios, "Superman" (1978) at Pinewood, and "Alien" (1979) at Shepperton. Successful adaptations were made in the decade of the Agatha Christie novels "Murder on the Orient Express" (1974) and "Death on the Nile" (1978). The entry of Lew Grade's company ITC into film production in the latter half of the decade brought only a few box office successes and an unsustainable number of failures.
In 1980, only 31 British films were made, a 50% decline from the previous year and the lowest number since 1914, and production fell again in 1981 to 24 films. The industry suffered further blows from falling cinema attendances, which reached a record low of 54 million in 1984, and the elimination of the 1957 Eady Levy, a tax concession, in the same year. The concession had made it possible for an overseas based film company to write off a large amount of its production costs by filming in the UK – this was what attracted a succession of big-budget American productions to British studios in the 1970s. These factors led to significant changes in the industry, with the profitability of British films now "increasingly reliant on secondary markets such as video and television, and Channel 4 ... [became] a crucial part of the funding equation."
Following the removal of the levy, multiplex cinemas were introduced to the United Kingdom with the opening of a ten-screen cinema by AMC Cinemas at The Point in Milton Keynes in 1985, and the number of screens in the UK increased by around 500 over the decade, leading to increased attendances of almost 100 million by the end of the decade.
The 1980s soon saw a renewed optimism, led by smaller independent production companies such as Goldcrest, HandMade Films and Merchant Ivory Productions.
HandMade Films, which was partly owned by George Harrison, was originally formed to take over the production of "Monty Python's Life of Brian", after EMI's Bernard Delfont (Lew Grade's brother) had pulled out. HandMade also bought and released the gangster drama "The Long Good Friday" (1980), produced by a Lew Grade subsidiary, after its original backers became cautious. Members of the Python team were involved in other comedies during the decade, including Terry Gilliam's fantasy films "Time Bandits" (1981) and "Brazil" (1985), and John Cleese's hit "A Fish Called Wanda" (1988), while Michael Palin starred in "A Private Function" (1984), from Alan Bennett's first screenplay for the cinema screen.
Goldcrest producer David Puttnam has been described as "the nearest thing to a mogul that British cinema has had in the last quarter of the 20th century." Under Puttnam, a generation of British directors emerged making popular films with international distribution. Some of the talent backed by Puttnam — Hugh Hudson, Ridley Scott, Alan Parker, and Adrian Lyne — had shot commercials; Puttnam himself had begun his career in the advertising industry. When Hudson's "Chariots of Fire" (1981) won 4 Academy Awards in 1982, including Best Picture, its writer Colin Welland declared "the British are coming!". When "Gandhi" (1982), another Goldcrest film, picked up a Best Picture Oscar, it looked as if he was right.
It prompted a cycle of period films – some with a large budget for a British film, such as David Lean's final film "A Passage to India" (1984), alongside the lower-budget Merchant Ivory adaptations of the works of E. M. Forster, such as "A Room with a View" (1986). But further attempts to make 'big' productions for the US market ended in failure, with Goldcrest losing its independence after "Revolution" (1985) and "Absolute Beginners" (1986) were commercial and critical flops. Another Goldcrest film, Roland Joffé's "The Mission" (also 1986), won the 1986 Palme d'Or, but did not go into profit either. Joffé's earlier "The Killing Fields" (1984) had been both a critical and financial success. These were Joffé's first two feature films and were amongst those produced by Puttnam.
Mainly outside the commercial sector, filmmakers from the New Commonwealth countries had begun to emerge during the 1970s. Horace Ové's "Pressure" (1975) had been funded by the British Film Institute, as was "A Private Enterprise" (1974), these being the first Black British and Asian British films, respectively. The 1980s, however, saw a wave of new talent, with films such as Franco Rosso's "Babylon" (1980), Menelik Shabazz's "Burning an Illusion" (1981) and Po-Chih Leong's "Ping Pong" (1986; one of the first films about Britain's Chinese community). Many of these films were assisted by the newly formed Channel 4, which had an official remit to provide for "minority audiences." Commercial success was first achieved with "My Beautiful Laundrette" (1985). Dealing with racial and gay issues, it was developed from Hanif Kureishi's first film script and features Daniel Day-Lewis in a leading role. Day-Lewis and other young British actors who were becoming stars, such as Gary Oldman, Colin Firth, Tim Roth and Rupert Everett, were dubbed the Brit Pack.
With the involvement of Channel 4 in film production, talents from television moved into feature films, including Stephen Frears ("My Beautiful Laundrette") and Mike Newell with "Dance with a Stranger" (1985). John Boorman, who had been working in the US, was encouraged back to the UK to make "Hope and Glory" (1987). Channel 4 also became a major sponsor of the British Film Institute's Production Board, which backed three of Britain's most critically acclaimed filmmakers: Derek Jarman ("The Last of England", 1987), Terence Davies ("Distant Voices, Still Lives", 1988), and Peter Greenaway, the last of whom gained surprising commercial success with "The Draughtsman's Contract" (1982) and "The Cook, the Thief, His Wife & Her Lover" (1989). Stephen Woolley's company Palace Pictures also produced some successful films, including Neil Jordan's "The Company of Wolves" (1984) and "Mona Lisa" (1986), before collapsing amid a series of unsuccessful films. Amongst the other British films of the decade were Bill Forsyth's "Gregory's Girl" (1981) and "Local Hero" (1983), Lewis Gilbert's "Educating Rita" (1983), Peter Yates' "The Dresser" (1983) and Kenneth Branagh's directorial debut, "Henry V" (1989).
Compared to the 1980s, investment in film production rose dramatically. In 1989, annual investment was a meagre £104 million. By 1996, this figure had soared to £741 million. Nevertheless, the dependence on finance from television broadcasters such as the BBC and Channel 4 meant that budgets were often low and indigenous production was very fragmented: the film industry mostly relied on Hollywood inward investment. According to critic Neil Watson, it was hoped that the £90 million apportioned by the new National Lottery into three franchises (The Film Consortium, Pathé Pictures, and DNA) would fill the gap, but "corporate and equity finance for the UK film production industry continues to be thin on the ground and most production companies operating in the sector remain hopelessly under-capitalised."
These problems were mostly compensated for by PolyGram Filmed Entertainment, a film studio whose British subsidiary Working Title Films released the Richard Curtis-scripted comedy "Four Weddings and a Funeral" (1994). It grossed $244 million worldwide, introduced Hugh Grant to global fame, led to renewed interest and investment in British films, and set a pattern for British-set romantic comedies, including "Sliding Doors" (1998) and "Notting Hill" (1999). Other Working Title films included "Bean" (1997), "Elizabeth" (1998) and "Captain Corelli's Mandolin" (2001). PFE was eventually sold and merged with Universal Pictures in 1999; the hopes and expectations of "building a British-based company which could compete with Hollywood in its home market [had] eventually collapsed."
Tax incentives allowed American producers to increasingly invest in UK-based film production throughout the 1990s, including films such as "Interview with the Vampire" (1994), "" (1996), "Saving Private Ryan" (1998), "" (1999) and "The Mummy" (1999). Miramax also distributed Neil Jordan's acclaimed thriller "The Crying Game" (1992), which was generally ignored on its initial release in the UK, but was a considerable success in the United States. The same company also enjoyed some success releasing the BBC period drama "Enchanted April" (1992) and "The Wings of the Dove" (1997).
Among the more successful British films were the Merchant Ivory productions "Howards End" (1992) and "The Remains of the Day" (1993), Richard Attenborough's "Shadowlands" (1993), and Kenneth Branagh's Shakespeare adaptations. "The Madness of King George" (1994) proved there was still a market for British costume dramas, and other period films followed, including "Sense and Sensibility" (1995), "Restoration" (1995), "Emma" (1996), "Mrs. Brown" (1997), "Basil" (1998), "Shakespeare in Love" (1998) and "Topsy-Turvy" (1999).
After a six-year hiatus for legal reasons, the James Bond films returned to production with the 17th Bond film, "GoldenEye" (1995). With their traditional home, Pinewood Studios, fully booked, a new studio was created for the film in a former Rolls-Royce aero-engine factory at Leavesden in Hertfordshire.
Mike Leigh emerged as a significant figure in British cinema in the 1990s, with a series of films financed by Channel 4 about working and middle class life in modern England, including "Life Is Sweet" (1991), "Naked" (1993) and his biggest hit "Secrets & Lies" (1996), which won the Palme d'Or at Cannes.
Other new talents to emerge during the decade included the writer-director-producer team of John Hodge, Danny Boyle and Andrew Macdonald, responsible for "Shallow Grave" (1994) and "Trainspotting" (1996). The latter film generated interest in other "regional" productions, including the Scottish films "Small Faces" (1996), "Ratcatcher" (1999) and "My Name Is Joe" (1998).
The first decade of the 21st century was a relatively successful one for the British film industry. Many British films found a wide international audience due to funding from BBC Films, Film4 and the UK Film Council, and some independent production companies, such as Working Title, secured financing and distribution deals with major American studios. Working Title scored three major international successes, all starring Hugh Grant and Colin Firth, with the romantic comedies "Bridget Jones's Diary" (2001), which grossed $254 million worldwide; the sequel "", which earned $228 million; and Richard Curtis's directorial debut "Love Actually" (2003), which grossed $239 million. Most successful of all was Phyllida Lloyd's "Mamma Mia!" (2008), which grossed $601 million.
The new decade saw a major new film series in the Harry Potter films, beginning with "Harry Potter and the Philosopher's Stone" in 2001. David Heyman's company Heyday Films has produced seven sequels, with the final title released in two parts – "Harry Potter and the Deathly Hallows – Part 1" in 2010 and "Harry Potter and the Deathly Hallows – Part 2" in 2011. All were filmed at Leavesden Studios in England.
Aardman Animations' Nick Park, the creator of Wallace and Gromit and the Creature Comforts series, produced his first feature-length film, "Chicken Run", in 2000. Co-directed with Peter Lord, the film was a major success worldwide and one of the most successful British films of its year. Park's follow-up, "", was another worldwide hit: it grossed $56 million at the US box office and £32 million in the UK. It also won the 2005 Academy Award for Best Animated Feature.
However, it was usually through domestically funded features that British directors and films won awards at the top international film festivals throughout the decade. In 2003, Michael Winterbottom won the Golden Bear at the Berlin Film Festival for "In This World". In 2004, Mike Leigh directed "Vera Drake", an account of a housewife who leads a double life as an abortionist in 1950s London; the film won the Golden Lion at the Venice Film Festival. In 2006, Stephen Frears directed "The Queen", based on the events surrounding the death of Princess Diana, which won the Best Actress prize at the Venice Film Festival and the Academy Awards, as well as the BAFTA for Best Film. The same year, Ken Loach won the Palme d'Or at the Cannes Film Festival with his account of the struggle for Irish independence in "The Wind That Shakes the Barley". Joe Wright's adaptation of the Ian McEwan novel "Atonement" was nominated for seven Academy Awards, including Best Picture, and won the Golden Globe and BAFTA for Best Film. "Slumdog Millionaire" was filmed entirely in Mumbai with a mostly Indian cast, though with a British director (Danny Boyle), producer (Christian Colson), screenwriter (Simon Beaufoy) and star (Dev Patel); the film was all-British financed via Film4 and Celador. It received worldwide critical acclaim, winning four Golden Globes, seven BAFTA Awards and eight Academy Awards, including Best Director and Best Picture. "The King's Speech", which tells the story of King George VI's attempts to overcome his speech impediment, was directed by Tom Hooper and filmed almost entirely in London. It received four Academy Awards (including Best Picture, Best Director, Best Actor and Best Screenplay) in 2011.
The start of the 21st century saw Asian British cinema assert itself at the box office, starting with "East Is East" (1999) and continuing with "Bend It Like Beckham" (2002). Other notable British Asian films from this period include "My Son the Fanatic" (1997), "Ae Fond Kiss..." (2004), "Yasmin" (2004), "Mischief Night" (2006) and "Four Lions" (2010). Some argue this has brought more flexible attitudes towards casting Black and Asian British actors, with Robbie Gee and Naomie Harris taking leading roles in "Underworld" and "28 Days Later" respectively. The year 2005 saw the emergence of the British Urban Film Festival, a timely addition to the film festival calendar, which recognised the influence of "Kidulthood" on UK audiences and consequently began to showcase a growing profile of films in a genre previously not otherwise regularly seen in the capital's cinemas. "Kidulthood", a film centring on inner-city London youth, had a limited release, and was successfully followed up with a sequel, "Adulthood" (2008), written and directed by actor Noel Clarke. Several other films dealing with inner-city issues and Black Britons were released in the 2000s, such as "Bullet Boy" (2004), "Life and Lyrics" (2006) and "Rollin' with the Nines" (2009).
Like the 1960s, this decade saw plenty of British films directed by imported talent. The American Woody Allen shot "Match Point" (2005) and three later films in London. The Mexican director Alfonso Cuarón helmed "Harry Potter and the Prisoner of Azkaban" (2004) and "Children of Men" (2006); New Zealand filmmaker Jane Campion made "Bright Star" (2009), a film set in 19th-century London; Danish director Nicolas Winding Refn made "Bronson" (2008), a biopic about the English criminal Michael Gordon Peterson; the Spanish filmmaker Juan Carlos Fresnadillo directed "28 Weeks Later" (2007), a sequel to a British horror film; and two John le Carré adaptations were also directed by foreigners—"The Constant Gardener" by the Brazilian Fernando Meirelles and "Tinker Tailor Soldier Spy" by the Swede Tomas Alfredson. The decade also saw English actor Daniel Craig become the new James Bond with "Casino Royale" (2006), the 21st entry in the official Eon Productions series.
Despite increasing competition from film studios in Australia and Eastern Europe, British studios such as Pinewood, Shepperton and Leavesden remained successful in hosting major productions, including "Finding Neverland", "Closer", "Batman Begins", "Charlie and the Chocolate Factory", "United 93", "The Phantom of the Opera", "", "Fantastic Mr. Fox", "Robin Hood", "", "Hugo" and "War Horse".
In November 2010, Warner Bros. completed the acquisition of Leavesden Film Studios, becoming the first Hollywood studio since the 1940s to have a permanent base in the UK, and announced plans to invest £100 million in the site.
A study by the British Film Institute published in December 2013 found that of the 613 tracked British films released between 2003 and 2010, only 7% made a profit. Films with low budgets, those that cost below £500,000 to produce, were even less likely to gain a return on outlay: of these films, only 3.1% went into the black. At the top end of budgets for the British industry, under a fifth of films that cost £10 million went into profit.
On 26 July 2010, it was announced that the UK Film Council, the main body responsible for the development and promotion of British cinema during the 2000s, would be abolished, with many of its functions taken over by the British Film Institute. Actors and professionals, including James McAvoy, Emily Blunt, Pete Postlethwaite, Damian Lewis, Timothy Spall, Daniel Barber and Ian Holm, campaigned against the Council's abolition. The move also led the American actor and director Clint Eastwood (who had filmed "Hereafter" in London) to write to the British Chancellor of the Exchequer, George Osborne, in August 2010 to protest the decision to close the Council. Eastwood warned Osborne that the closure could result in fewer foreign production companies choosing to work in the UK. A grass-roots online campaign was launched and a petition established by supporters of the Council.
Countering this, a few professionals, including Michael Winner and Julian Fellowes, supported the Government's decision. A number of other organisations responded positively.
At the closure of the UK Film Council on 31 March 2011, "The Guardian" reported that "The UKFC's entire annual budget was a reported £3m, while the cost of closing it down and restructuring is estimated to have been almost four times that amount." One of the UKFC's last films, "The King's Speech", is estimated to have cost $15m to make and grossed $235m, besides winning several Academy Awards. UKFC invested $1.6m for a 34% share of net profits, a valuable stake that will pass to the British Film Institute.
In April 2011, The Peel Group acquired a controlling 71% interest in The Pinewood Studios Group (the owner of Pinewood Studios and Shepperton Studios) for £96 million. In June 2012, Warner opened the re-developed Leavesden studio for business. The most commercially successful British directors in recent years are Paul Greengrass, Mike Newell, Christopher Nolan, Ridley Scott and David Yates.
In January 2012, at Pinewood Studios to visit film-related businesses, UK Prime Minister David Cameron said that his government had bold ambitions for the film industry: "Our role, and that of the BFI, should be to support the sector in becoming even more dynamic and entrepreneurial, helping UK producers to make commercially successful pictures that rival the quality and impact of the best international productions. Just as the British Film Commission has played a crucial role in attracting the biggest and best international studios to produce their films here, so we must incentivise UK producers to chase new markets both here and overseas."
The film industry remains an important earner for the British economy. According to a UK Film Council press release of 20 January 2011, £1.115 billion was spent on UK film production during 2010. A 2014 survey suggested that British-made films were generally more highly rated than Hollywood productions, especially when considering low-budget UK productions.
Although it had been funding British experimental films as early as 1952, the British Film Institute's foundation of a production board in 1964—and a substantial increase in public funding from 1971 onwards—enabled it to become a dominant force in developing British art cinema in the 1970s and 80s: from "My Childhood" (1972), the first film of Bill Douglas's trilogy, and "Childhood" (1978), the first of Terence Davies' trilogy, via Peter Greenaway's earliest films (including the surprising commercial success of "The Draughtsman's Contract" (1982)) to Derek Jarman's championing of the New Queer Cinema. The first full-length feature produced under the BFI's new scheme was Kevin Brownlow and Andrew Mollo's "Winstanley" (1975), while others included "Moon Over the Alley" (1975), "Requiem for a Village" (1975), the openly avant-garde "Central Bazaar" (1973), "Pressure" (1975) and "A Private Enterprise" (1974) – the last two being, respectively, the first British Black and Asian features.
The release of Derek Jarman's "Jubilee" (1978) marked the beginning of a successful period of UK art cinema, continuing into the 1980s with filmmakers like Sally Potter. Unlike the previous generation of British filmmakers, who had broken into directing and production after careers in the theatre or on television, the art cinema directors were mostly the products of art schools. Many of these filmmakers were championed in their early careers by the London Film Makers Cooperative, and their work was the subject of detailed theoretical analysis in the journal "Screen Education". Peter Greenaway was an early pioneer of the use of computer-generated imagery blended with filmed footage, and was also one of the first directors to film entirely on high-definition video for a cinema release.
With the launch of Channel 4 and its Film on Four commissioning strand, Art Cinema was promoted to a wider audience. However, the Channel had a sharp change in its commissioning policy in the early 1990s and Greenaway and others were forced to seek European co-production financing.
In the 1970s and 1980s, British studios established a reputation for great special effects in films such as "Superman" (1978), "Alien" (1979), and "Batman" (1989). Some of this reputation was founded on the core of talent brought together for the filming of "" (1968) who subsequently worked together on series and feature films for Gerry Anderson. Thanks to the Bristol-based Aardman Animations, the UK is still recognised as a world leader in the use of stop-motion animation.
British special effects technicians and production designers are known for creating visual effects at a far lower cost than their counterparts in the US, as seen in "Time Bandits" (1981) and "Brazil" (1985). This reputation has continued through the 1990s and into the 21st century with films such as the James Bond series, "Gladiator" (2000) and the Harry Potter franchise.
From the 1990s to the present day, there has been a progressive movement from traditional film opticals to an integrated digital film environment, with special effects, cutting, colour grading, and other post-production tasks all sharing the same all-digital infrastructure. The London-based visual effects company Framestore, with Tim Webber as visual effects supervisor, has worked on some of the most technically and artistically challenging projects, including "The Dark Knight" (2008) and "Gravity" (2013); the new techniques that Webber and the Framestore team devised for "Gravity" took three years to complete.
The availability of high-speed internet has made the British film industry capable of working closely with U.S. studios as part of globally distributed productions. As of 2005, this trend is expected to continue with moves towards (currently experimental) digital distribution and projection as mainstream technologies. The British film "This is Not a Love Song" (2003) was the first to be streamed live on the Internet at the same time as its cinema premiere. | https://en.wikipedia.org/wiki?curid=10793 |
Feminist film theory
Feminist film theory is a theoretical film criticism derived from feminist politics and feminist theory, influenced by second-wave feminism and brought about around the 1970s in the United States. As film has advanced over the years, feminist film theory has developed and changed, both to analyse contemporary cinema and to look back at films of the past. Feminists have many approaches to cinema analysis, regarding the film elements analyzed and their theoretical underpinnings.
The development of feminist film theory was influenced by second-wave feminism and women's studies in the 1960s and 1970s. Initially, in the United States in the early 1970s, feminist film theory was generally based on sociological theory and focused on the function of female characters in film narratives or genres. Works of feminist film theory such as Marjorie Rosen's "Popcorn Venus: Women, Movies, and the American Dream" (1973) and Molly Haskell's "From Reverence to Rape: The Treatment of Women in Movies" (1974) analyze the ways in which women are portrayed in film, and how this relates to a broader historical context. Additionally, feminist critiques examine common stereotypes depicted in film, the extent to which women are shown as active or passive, and the amount of screen time given to women.
In contrast, film theoreticians in England concerned themselves with critical theory, psychoanalysis, semiotics, and Marxism. Eventually, these ideas gained hold within the American scholarly community in the 1980s. Analysis generally focused on the meaning within a film's text and the way in which the text constructs a viewing subject. It also examined how the process of cinematic production affects how women are represented and reinforces sexism.
The British feminist film theorist Laura Mulvey, best known for her essay "Visual Pleasure and Narrative Cinema" (written in 1973 and published in 1975 in the influential British film theory journal "Screen"), was influenced by the theories of Sigmund Freud and Jacques Lacan. "Visual Pleasure" is one of the first major essays that helped shift the orientation of film theory towards a psychoanalytic framework. Before Mulvey, film theorists such as Jean-Louis Baudry and Christian Metz had used psychoanalytic ideas in their theoretical accounts of cinema. Mulvey's contribution, however, initiated the intersection of film theory, psychoanalysis and feminism.
In 1976 the journal "Camera Obscura" was founded by the graduate students Janet Bergstrom, Sandy Flitterman, Elisabeth Lyon, and Constance Penley to discuss how women appeared in films while being excluded from the development of those films or erased from the process. "Camera Obscura" is still published to this day by Duke University Press and has expanded its scope from film theory to media studies.
Other key influences come from Metz's essay "Identification, Mirror," from his work "The Imaginary Signifier", where he argues that viewing film is only possible through scopophilia (pleasure from looking, related to voyeurism), which is best exemplified in silent film. Also, according to Cynthia A. Freeland in "Feminist Frameworks for Horror Films," feminist studies of horror films have focused on psychodynamics where the chief interest is "on viewers' motives and interests in watching horror films".
Beginning in the early 1980s, feminist film theory began to look at film through a more intersectional lens. The film journal "Jump Cut" published a special issue titled "Lesbians and Film" in 1981 which examined the lack of lesbian identities in film. Jane Gaines's essay "White Privilege and Looking Relations: Race and Gender in Feminist Film Theory" examined the erasure of black women in cinema by white male filmmakers, while Lola Young argues that filmmakers of all races fail to break away from the use of tired stereotypes when depicting black women. Other theorists who wrote about feminist film theory and race include bell hooks and Michele Wallace.
From 1985 onward, the Matrixial theory of the artist and psychoanalyst Bracha L. Ettinger revolutionized feminist film theory.
Her concept of the matrixial gaze, developed in her book "The Matrixial Gaze", establishes a feminine gaze, articulates its differences from the phallic gaze, and describes its relation to feminine as well as maternal specificities and potentialities of "coemergence", offering a critique of Sigmund Freud's and Jacques Lacan's psychoanalysis. It is extensively used in analyses of films by female directors, like Chantal Akerman, as well as by male directors, like Pedro Almodóvar. The matrixial gaze offers the female the position of a subject, not of an object, of the gaze, while deconstructing the structure of the subject itself, and offers border-time, border-space and a possibility for compassion and witnessing. Ettinger's notions articulate the links between aesthetics, ethics and trauma.
Recently, scholars have expanded their work to include analysis of television and digital media. Additionally, they have begun to explore notions of difference, engaging in dialogue about the differences among women (part of a movement away from essentialism in feminist work more generally), the various methodologies and perspectives contained under the umbrella of feminist film theory, and the multiplicity of methods and intended effects that influence the development of films. Scholars are also taking increasingly global perspectives, responding to postcolonialist criticisms of perceived Anglo- and Eurocentrism in the academy more generally. Increased focus has been given to "disparate feminisms, nationalisms, and media in various locations and across class, racial, and ethnic groups throughout the world". Scholars in recent years have also turned their attention towards women in the silent film industry, their erasure from the history of those films, and the ways women's bodies are portrayed in film. Jane Gaines's Women's Film Pioneer Project (WFPP), a database of women who worked in the silent-era film industry, has been cited by scholars such as Rachel Schaff as a major achievement in recognizing pioneering women in the field of silent and non-silent film.
In recent years, many have considered feminist film theory a fading area of feminism, given the massive amount of attention now devoted to media studies and theory. As these areas have grown, the frameworks created in feminist film theory have been adapted to analyse other forms of media.
Considering the way that films are put together, many feminist film critics have pointed to what they argue is the "male gaze" that predominates classical Hollywood filmmaking. Budd Boetticher summarizes the view:
Laura Mulvey expands on this conception to argue that in cinema, women are typically depicted in a passive role that provides visual pleasure through scopophilia, and identification with the on-screen male actor. She asserts: "In their traditional exhibitionist role women are simultaneously looked at and displayed, with their appearance coded for strong visual and erotic impact so that they can be said to connote "to-be-looked-at-ness"," and as a result contends that in film a woman is the "bearer of meaning, not maker of meaning." Mulvey argues that the psychoanalytic theory of Jacques Lacan is the key to understanding how film creates such a space for female sexual objectification and exploitation through the combination of the patriarchal order of society, and 'looking' in itself as a pleasurable act of scopophilia, as "the cinema satisfies a primordial wish for pleasurable looking."
While Laura Mulvey's paper has a particular place in feminist film theory, her ideas regarding ways of watching the cinema (from the voyeuristic element to the feelings of identification) are important to some feminist film theorists in terms of defining spectatorship from the psychoanalytical viewpoint.
Mulvey identifies three "looks" or perspectives that occur in film which, she argues, serve to sexually objectify women. The first is the perspective of the male character and how he perceives the female character. The second is the perspective of the spectator as they see the female character on screen. The third "look" joins the first two looks together: it is the male audience member's perspective of the male character in the film. This third perspective allows the male audience to take the female character as his own personal sex object because he can relate himself, through looking, to the male character in the film.
In the paper, Mulvey calls for a destruction of modern film structure as the only way to free women from their sexual objectification in film. She argues for a removal of the voyeurism encoded into film by creating distance between the male spectator and the female character. The only way to do so, Mulvey argues, is by destroying the element of voyeurism and "the invisible guest". Mulvey also asserts that the dominance men embody is only so because women exist, as without a woman for comparison, a man and his supremacy as the controller of visual pleasure are insignificant. For Mulvey, it is the presence of the female that defines the patriarchal order of society as well as the male psychology of thought.
Mulvey's argument is likely influenced by the time period in which she was writing. "Visual Pleasure and Narrative Cinema" was composed during the period of second-wave feminism, which was concerned with achieving equality for women in the workplace, and with exploring the psychological implications of sexual stereotypes. Mulvey calls for an eradication of female sexual objectivity, aligning herself with second-wave feminism. She argues that in order for women to be equally represented in the workplace, women must be portrayed as men are: as lacking sexual objectification.
Mulvey proposes in her notes to the Criterion Collection DVD of Michael Powell's controversial film "Peeping Tom" (a film about a homicidal voyeur who films the deaths of his victims) that the cinema spectator's own voyeurism is made shockingly obvious and, even more shockingly, that the spectator identifies with the perverted protagonist. The inference is that she includes female spectators in that identification with the male observer rather than with the female object of the gaze.
The early work of Marjorie Rosen and Molly Haskell on the representation of women in film was part of a movement to depict women more realistically, both in documentaries and narrative cinema. The growing female presence in the film industry was seen as a positive step toward realizing this goal, by drawing attention to feminist issues and putting forth an alternative, true-to-life view of women. However, Rosen and Haskell argue that these images are still mediated by the same factors as traditional film, such as the "moving camera, composition, editing, lighting, and all varieties of sound." While acknowledging the value in inserting positive representations of women in film, some critics asserted that real change would only come about from reconsidering the role of film in society, often from a semiotic point of view.
Claire Johnston put forth the idea that women's cinema can function as "counter cinema". Through consciousness of the means of production and opposition to sexist ideologies, films made by women have the potential to posit an alternative to traditional Hollywood films. Initially the attempt to show "real" women was praised, but critics such as Eileen McGarry eventually claimed that the "real" women being shown on screen were still just contrived depictions. In reaction to this article, many women filmmakers integrated "alternative forms and experimental techniques" to "encourage audiences to critique the seemingly transparent images on the screen and to question the manipulative techniques of filming and editing".
B. Ruby Rich argues that feminist film theory should shift to look at films in a broader sense. Rich's essay "In the Name of Feminist Film Criticism" claims that films by women often receive praise for certain elements while their feminist undertones are ignored. Rich goes on to say that, because of this, feminist theory needs to focus on how films by women are received.
Coming from a black feminist perspective, the American scholar bell hooks put forth the notion of the "oppositional gaze", encouraging black women not to accept stereotypical representations in film, but rather to actively critique them. The "oppositional gaze" is a response to Mulvey's "visual pleasure" and states that just as women do not identify with female characters that are not "real", women of color should respond similarly to the one-dimensional caricatures of black women.
Janet Bergstrom's article "Enunciation and Sexual Difference" (1979) uses Sigmund Freud's ideas of bisexual responses, arguing that women are capable of identifying with male characters and men with female characters, either successively or simultaneously. Miriam Hansen, in "Pleasure, Ambivalence, Identification: Valentino and Female Spectatorship" (1984), put forth the idea that women are also able to view male characters as erotic objects of desire. In "The Master's Dollhouse: Rear Window", Tania Modleski argues that Hitchcock's film "Rear Window" is an example of the power of the male gazer and the position of the female as a prisoner of the "master's dollhouse".
Carol Clover, in her popular and influential book "Men, Women, and Chainsaws: Gender in the Modern Horror Film" (Princeton University Press, 1992), argues that young male viewers of the horror genre (young males being the primary demographic) are quite prepared to identify with the female-in-jeopardy, a key component of the horror narrative, and to identify on an unexpectedly profound level. Clover further argues that the "Final Girl" in the psychosexual subgenre of exploitation horror invariably triumphs through her own resourcefulness, and is not by any means a passive, or inevitable, victim. Laura Mulvey, in response to these and other criticisms, revisited the topic in "Afterthoughts on 'Visual Pleasure and Narrative Cinema' inspired by 'Duel in the Sun'" (1981). In addressing the heterosexual female spectator, she revised her stance to argue that women can take two possible roles in relation to film: a masochistic identification with the female object of desire that is ultimately self-defeating, or a transgender identification with men as the active viewers of the text. A new version of the gaze was offered in the early 1990s by Bracha Ettinger, who proposed the notion of the "matrixial gaze". | https://en.wikipedia.org/wiki?curid=10796 |
Formalist film theory
Formalist film theory is an approach to film theory that is focused on the formal, or technical, elements of a film: i.e., the lighting, scoring, sound and set design, use of color, shot composition, and editing. This approach was proposed by Hugo Münsterberg, Rudolf Arnheim, Sergei Eisenstein, and Béla Balázs. Today, it is a major approach in film studies.
Formalism, at its most general, considers the synthesis (or lack of synthesis) of the multiple elements of film production, and the effects, emotional and intellectual, of that synthesis and of the individual elements. For example, take the single element of editing. A formalist might study how standard Hollywood "continuity editing" creates a more comforting effect, while non-continuity or jump-cut editing can become more disconcerting or volatile.
Or one might consider the synthesis of several elements, such as editing, shot composition, and music. The shoot-out that ends Sergio Leone's Spaghetti Western "Dollars" trilogy is a notable example of how these elements work together to produce an effect: The shot selection goes from very wide to very close and tense; the length of shots decreases as the sequence progresses towards its end; the music builds. All of these elements, in combination rather than individually, create tension.
Formalism is unique in that it embraces both ideological and auteurist branches of criticism. In both these cases, the common denominator for Formalist criticism is style. Ideologues focus on how socio-economic pressures create a particular style, and auteurists on how auteurs put their own stamp on the material. Formalism is primarily concerned with style and how it communicates ideas, emotions, and themes (rather than, as critics of formalism point out, concentrating on the themes of a work itself).
Two examples of ideological interpretations that are related to formalism:
The classical Hollywood cinema has a very distinct style, sometimes called the institutional mode of representation: continuity editing, massive coverage, three-point lighting, "mood" music, dissolves, all designed to make the experience as pleasant as possible. The socio-economic ideological explanation for this is, quite crassly, that Hollywood wants to make as much money and appeal to as many ticket-buyers as possible.
Film noir, which was given its name by Nino Frank, is marked by lower production values, darker images, under-lighting, location shooting, and general nihilism: this is because, we are told, during the war and post-war years filmmakers (as well as filmgoers) were generally more pessimistic. Also, the German Expressionists (including Fritz Lang, who was not technically an expressionist as popularly believed) emigrated to America and brought their stylized lighting effects (and disillusionment due to the war) to American soil.
It can be argued that, by this approach, the style or 'language' of these films is directly affected not by the individuals responsible, but by social, economic, and political pressures, of which the filmmakers themselves may be aware or not. It is this branch of criticism that gives us such categories as the classical Hollywood cinema, the American independent movement, the new queer cinema, and the French, German, and Czech new waves.
If the ideological approach is concerned with broad movements and the effects of the world around the filmmaker, then the auteur theory is diametrically opposite to it, celebrating the individual, usually in the person of the filmmaker, and how his/her personal decisions, thoughts, and style manifest themselves in the material.
This branch of criticism, begun by François Truffaut and the other young film critics writing for "Cahiers du cinéma", was created for two reasons.
First, it was created to redeem the art of film itself. By arguing that films had auteurs, or authors, Truffaut sought to make films (and their directors) at least as important as the more widely accepted art forms, such as literature, music, and painting. Each of these art forms, and the criticism thereof, is primarily concerned with a sole creative force: the author of a novel (not, for example, his editor or type-setter), the composer of a piece of music (though sometimes the performers are given credence, akin to actors in film today), or the painter of a fresco (not his assistants who mix the colours or often do some of the painting themselves). By elevating the director, and not the screenwriter, to the same importance as novelists, composers, or painters, it sought to free the cinema from its popular conception as a bastard art, somewhere between theater and literature.
Secondly, it sought to redeem many filmmakers who were looked down upon by mainstream film critics. It argued that genre filmmakers and low-budget B-movies were just as important as, if not more important than, the prestige pictures commonly given more press and legitimacy in France and the United States. According to Truffaut's theory, auteurs took material that was beneath their talents—a thriller, a pulpy action film, a romance—and, through their style, put their own personal stamp on it.
It is this auteur style that concerns formalism.
A perfect example of formalist criticism of auteur style would be the work of Alfred Hitchcock. Hitchcock primarily made thrillers, which, according to the "Cahiers du cinéma" crowd, were popular with the public but were dismissed by the critics and the award ceremonies, although Hitchcock's "Rebecca" won the Oscar for Best Picture at the 1940 Academy Awards. Though he never won the Oscar for directing, he was nominated five times in the category. Truffaut and his colleagues argued that Hitchcock had a style as distinct as that of Flaubert or Van Gogh: the virtuoso editing, the lyrical camera movements, the droll humour. He also had "Hitchcockian" themes: the wrong man falsely accused, violence erupting at the times it was least expected, the cool blonde. Now, Hitchcock is more or less universally lauded, his films dissected shot-by-shot, his work celebrated as being that of a master. And the study of this style, his variations, and obsessions all fall quite neatly under the umbrella of formalist film theory. | https://en.wikipedia.org/wiki?curid=10798 |
Film noir
Film noir is a cinematic term used primarily to describe stylish Hollywood crime dramas, particularly those that emphasize cynical attitudes and sexual motivations. The 1940s and 1950s are generally regarded as the "classic period" of American "film noir". Film noir of this era is associated with a low-key, black-and-white visual style that has roots in German Expressionist cinematography. Many of the prototypical stories and much of the attitude of classic noir derive from the hardboiled school of crime fiction that emerged in the United States during the Great Depression.
The term "film noir", French for 'black film' (literal) or 'dark film' (closer meaning), was first applied to Hollywood films by French critic Nino Frank in 1946, but was unrecognized by most American film industry professionals of that era. Cinema historians and critics defined the category retrospectively. Before the notion was widely adopted in the 1970s, many of the classic film noir were referred to as "melodramas". Whether film noir qualifies as a distinct genre is a matter of ongoing debate among scholars.
Film noir encompasses a range of plots: the central figure may be a private investigator ("The Big Sleep"), a plainclothes policeman ("The Big Heat"), an aging boxer ("The Set-Up"), a hapless grifter ("Night and the City"), a law-abiding citizen lured into a life of crime ("Gun Crazy"), or simply a victim of circumstance ("D.O.A."). Although film noir was originally associated with American productions, the term has been used to describe films from around the world. Many films released from the 1960s onward share attributes with film noirs of the classical period, and often treat its conventions self-referentially. Some refer to such latter-day works as neo-noir. The clichés of film noir have inspired parody since the mid-1940s.
The questions of what defines film noir, and what sort of category it is, provoke continuing debate. "We'd be oversimplifying things in calling film noir oneiric, strange, erotic, ambivalent, and cruel ..."—this set of attributes constitutes the first of many attempts to define film noir made by the French critics Raymond Borde and Étienne Chaumeton in their 1955 book "Panorama du film noir américain 1941–1953" ("A Panorama of American Film Noir"), the original and seminal extended treatment of the subject. They emphasize that not every film noir embodies all five attributes in equal measure—one might be more dreamlike; another, particularly brutal. The authors' caveats and repeated efforts at alternative definition have been echoed in subsequent scholarship: in the more than five decades since, there have been innumerable further attempts at definition, yet in the words of cinema historian Mark Bould, film noir remains an "elusive phenomenon ... always just out of reach".
Though film noir is often identified with a visual style, unconventional within a Hollywood context, that emphasizes low-key lighting and unbalanced compositions, films commonly identified as noir evidence a variety of visual approaches, including ones that fit comfortably within the Hollywood mainstream. Film noir similarly embraces a variety of genres, from the gangster film to the police procedural to the gothic romance to the social problem picture—any example of which from the 1940s and 1950s, now seen as noir's classical era, was likely to be described as a melodrama at the time.
While many critics refer to film noir as a genre itself, others argue that it can be no such thing. Foster Hirsch defines a genre as determined by "conventions of narrative structure, characterization, theme, and visual design". Hirsch, as one who has taken the position that film noir is a genre, argues that these elements are present "in abundance". Hirsch notes that there are unifying features of tone, visual style and narrative sufficient to classify noir as a distinct genre.
Others argue that film noir is not a genre. Film noir is often associated with an urban setting, but many classic noirs take place in small towns, suburbia, rural areas, or on the open road; setting, therefore, cannot be its genre determinant, as with the Western. Similarly, while the private eye and the femme fatale are stock character types conventionally identified with noir, the majority of film noirs feature neither; so there is no character basis for genre designation as with the gangster film. Nor does film noir rely on anything as evident as the monstrous or supernatural elements of the horror film, the speculative leaps of the science fiction film, or the song-and-dance routines of the musical.
An analogous case is that of the screwball comedy, widely accepted by film historians as constituting a "genre": the screwball is defined not by a fundamental attribute, but by a general disposition and a group of elements, some—but rarely and perhaps never all—of which are found in each of the genre's films. Because of the diversity of noir (much greater than that of the screwball comedy), certain scholars in the field, such as film historian Thomas Schatz, treat it as not a genre but a "style". Alain Silver, the most widely published American critic specializing in film noir studies, refers to film noir as a "cycle" and a "phenomenon", even as he argues that it has—like certain genres—a consistent set of visual and thematic codes. Screenwriter Eric R. Williams labels both film noir and screwball comedy as a "pathway" in his screenwriters taxonomy; explaining that a pathway has two parts: 1) the way the audience connects with the protagonist and 2) the trajectory the audience expects the story to follow. Other critics treat film noir as a "mood", characterize it as a "series", or simply address a chosen set of films they regard as belonging to the noir "canon". There is no consensus on the matter.
The aesthetics of film noir are influenced by German Expressionism, an artistic movement of the 1910s and 1920s that involved theater, photography, painting, sculpture and architecture, as well as cinema. The opportunities offered by the booming Hollywood film industry, and then the threat of Nazism, led to the emigration of many film artists working in Germany who had been involved in the Expressionist movement or studied with its practitioners. "M" (1931), shot only a few years before director Fritz Lang's departure from Germany, is among the first crime films of the sound era to join a characteristically noirish visual style with a noir-type plot, in which the protagonist is a criminal (as are his most successful pursuers). Directors such as Lang, Jacques Tourneur, Robert Siodmak and Michael Curtiz brought a dramatically shadowed lighting style and a psychologically expressive approach to visual composition ("mise-en-scène") with them to Hollywood, where they made some of the most famous classic noirs.
By 1931, Curtiz had already been in Hollywood for half a decade, making as many as six films a year. Movies of his such as "20,000 Years in Sing Sing" (1932) and "Private Detective 62" (1933) are among the early Hollywood sound films arguably classifiable as noir—scholar Marc Vernet offers the latter as evidence that dating the initiation of film noir to 1940 or any other year is "arbitrary". Expressionism-orientated filmmakers had free stylistic rein in Universal horror pictures such as "Dracula" (1931), "The Mummy" (1932)—the former photographed and the latter directed by the Berlin-trained Karl Freund—and "The Black Cat" (1934), directed by Austrian émigré Edgar G. Ulmer. The Universal horror film that comes closest to noir, in story and sensibility, is "The Invisible Man" (1933), directed by Englishman James Whale and photographed by American Arthur Edeson. Edeson later photographed "The Maltese Falcon" (1941), widely regarded as the first major film noir of the classic era.
Josef von Sternberg was directing in Hollywood during the same period. Films of his such as "Shanghai Express" (1932) and "The Devil Is a Woman" (1935), with their hothouse eroticism and baroque visual style, anticipated central elements of classic noir. The commercial and critical success of Sternberg's silent "Underworld" (1927) was largely responsible for spurring a trend of Hollywood gangster films. Successful films in that genre such as "Little Caesar" (1931), "The Public Enemy" (1931) and "Scarface" (1932) demonstrated that there was an audience for crime dramas with morally reprehensible protagonists. An important, possibly influential, cinematic antecedent to classic noir was 1930s French poetic realism, with its romantic, fatalistic attitude and celebration of doomed heroes. The movement's sensibility is mirrored in the Warner Bros. drama "I Am a Fugitive from a Chain Gang" (1932), a forerunner of noir. Among films not considered film noirs, perhaps none had a greater effect on the development of the genre than "Citizen Kane" (1941), directed by Orson Welles. Its visual intricacy and complex, voiceover narrative structure are echoed in dozens of classic film noirs.
Italian neorealism of the 1940s, with its emphasis on quasi-documentary authenticity, was an acknowledged influence on trends that emerged in American noir. "The Lost Weekend" (1945), directed by Billy Wilder, another Vienna-born, Berlin-trained American auteur, tells the story of an alcoholic in a manner evocative of neorealism. It also exemplifies the problem of classification: one of the first American films to be described as a film noir, it has largely disappeared from considerations of the field. Director Jules Dassin of "The Naked City" (1948) pointed to the neorealists as inspiring his use of location photography with non-professional extras. This semidocumentary approach characterized a substantial number of noirs in the late 1940s and early 1950s. Along with neorealism, the style had an American precedent cited by Dassin, in director Henry Hathaway's "The House on 92nd Street" (1945), which demonstrated the parallel influence of the cinematic newsreel.
The primary literary influence on film noir was the hardboiled school of American detective and crime fiction, led in its early years by such writers as Dashiell Hammett (whose first novel, "Red Harvest", was published in 1929) and James M. Cain (whose "The Postman Always Rings Twice" appeared five years later), and popularized in pulp magazines such as "Black Mask". The classic film noirs "The Maltese Falcon" (1941) and "The Glass Key" (1942) were based on novels by Hammett; Cain's novels provided the basis for "Double Indemnity" (1944), "Mildred Pierce" (1945), "The Postman Always Rings Twice" (1946), and "Slightly Scarlet" (1956; adapted from "Love's Lovely Counterfeit"). A decade before the classic era, a story by Hammett was the source for the gangster melodrama "City Streets" (1931), directed by Rouben Mamoulian and photographed by Lee Garmes, who worked regularly with Sternberg. Released the month before Lang's "M", "City Streets" has a claim to being the first major film noir; both its style and story had many noir characteristics.
Raymond Chandler, who debuted as a novelist with "The Big Sleep" in 1939, soon became the most famous author of the hardboiled school. Not only were Chandler's novels turned into major noirs—"Murder, My Sweet" (1944; adapted from "Farewell, My Lovely"), "The Big Sleep" (1946), and "Lady in the Lake" (1947)—he was an important screenwriter in the genre as well, producing the scripts for "Double Indemnity", "The Blue Dahlia" (1946), and "Strangers on a Train" (1951). Where Chandler, like Hammett, centered most of his novels and stories on the character of the private eye, Cain featured less heroic protagonists and focused more on psychological exposition than on crime solving; the Cain approach has come to be identified with a subset of the hardboiled genre dubbed "noir fiction". For much of the 1940s, one of the most prolific and successful authors of this often downbeat brand of suspense tale was Cornell Woolrich (sometimes under the pseudonym George Hopley or William Irish). No writer's published work provided the basis for more film noirs of the classic period than Woolrich's: thirteen in all, including "Black Angel" (1946), "Deadline at Dawn" (1946), and "Fear in the Night" (1947).
Another crucial literary source for film noir was W. R. Burnett, whose first novel to be published was "Little Caesar", in 1929. It was turned into a hit for Warner Bros. in 1931; the following year, Burnett was hired to write dialogue for "Scarface", while "The Beast of the City" (1932) was adapted from one of his stories. At least one important reference work identifies the latter as a film noir despite its early date. Burnett's characteristic narrative approach fell somewhere between that of the quintessential hardboiled writers and their noir fiction compatriots—his protagonists were often heroic in their own way, which happened to be that of the gangster. During the classic era, his work, either as author or screenwriter, was the basis for seven films now widely regarded as film noirs, including three of the most famous: "High Sierra" (1941), "This Gun for Hire" (1942), and "The Asphalt Jungle" (1950).
The 1940s and 1950s are generally regarded as the "classic period" of American "film noir". While "City Streets" and other pre-WWII crime melodramas such as "Fury" (1936) and "You Only Live Once" (1937), both directed by Fritz Lang, are categorized as full-fledged "noir" in Alain Silver and Elizabeth Ward's "film noir" encyclopedia, other critics tend to describe them as "proto-noir" or in similar terms.
The film now most commonly cited as the first "true" "film noir" is "Stranger on the Third Floor" (1940), directed by Latvian-born, Soviet-trained Boris Ingster. Hungarian émigré Peter Lorre—who had starred in Lang's "M"—was top-billed, although he did not play the primary lead. He later played secondary roles in several other formative American noirs. Although modestly budgeted, at the high end of the B movie scale, "Stranger on the Third Floor" still lost its studio, RKO, US$56,000, almost a third of its total cost. "Variety" magazine found Ingster's work: "...too studied and when original, lacks the flare to hold attention. It's a film too arty for average audiences, and too humdrum for others." "Stranger on the Third Floor" was not recognized as the beginning of a trend, let alone a new genre, for many decades.
Most film noirs of the classic period were similarly low- and modestly-budgeted features without major stars—B movies either literally or in spirit. In this production context, writers, directors, cinematographers, and other craftsmen were relatively free from typical big-picture constraints. There was more visual experimentation than in Hollywood filmmaking as a whole: the Expressionism now closely associated with noir and the semi-documentary style that later emerged represent two very different tendencies. Narrative structures sometimes involved convoluted flashbacks uncommon in non-noir commercial productions. In terms of content, enforcement of the Production Code ensured that no film character could literally get away with murder or be seen sharing a bed with anyone but a spouse; within those bounds, however, many films now identified as noir feature plot elements and dialogue that were very risqué for the time.
Thematically, film noirs were most exceptional for the relative frequency with which they centered on portrayals of women of questionable virtue—a focus that had become rare in Hollywood films after the mid-1930s and the end of the pre-Code era. The signal film in this vein was "Double Indemnity", directed by Billy Wilder; setting the mold was Barbara Stanwyck's unforgettable femme fatale, Phyllis Dietrichson—an apparent nod to Marlene Dietrich, who had built her extraordinary career playing such characters for Sternberg. An A-level feature all the way, the film was a commercial success, and its seven Oscar nominations made it probably the most influential of the early noirs. A slew of now-renowned noir "bad girls" followed, such as those played by Rita Hayworth in "Gilda" (1946), Lana Turner in "The Postman Always Rings Twice" (1946), Ava Gardner in "The Killers" (1946), and Jane Greer in "Out of the Past" (1947). The iconic noir counterpart to the femme fatale, the private eye, came to the fore in films such as "The Maltese Falcon" (1941), with Humphrey Bogart as Sam Spade, and "Murder, My Sweet" (1944), with Dick Powell as Philip Marlowe.
The prevalence of the private eye as a lead character declined in film noir of the 1950s, a period during which several critics describe the form as becoming more focused on extreme psychologies and more exaggerated in general. A prime example is "Kiss Me Deadly" (1955); based on a novel by Mickey Spillane, the best-selling of all the hardboiled authors, here the protagonist is a private eye, Mike Hammer. As described by Paul Schrader, "Robert Aldrich's teasing direction carries "noir" to its sleaziest and most perversely erotic. Hammer overturns the underworld in search of the 'great whatsit' [which] turns out to be—joke of jokes—an exploding atomic bomb." Orson Welles's baroquely styled "Touch of Evil" (1958) is frequently cited as the last noir of the classic period. Some scholars believe film noir never really ended, but continued to transform even as the characteristic noir visual style began to seem dated and changing production conditions led Hollywood in different directions—in this view, post-1950s films in the noir tradition are seen as part of a continuity with classic noir. A majority of critics, however, regard comparable films made outside the classic era as something other than genuine film noirs. They regard true film noir as belonging to a temporally and geographically limited cycle or period, treating subsequent films that evoke the classics as fundamentally different due to general shifts in filmmaking style and latter-day awareness of noir as a historical source for allusion.
While the inceptive noir, "Stranger on the Third Floor", was a B picture directed by a virtual unknown, many of the film noirs still remembered were A-list productions by well-known filmmakers. Debuting as a director with "The Maltese Falcon" (1941), John Huston followed with "Key Largo" (1948) and "The Asphalt Jungle" (1950). Opinion is divided on the noir status of several Alfred Hitchcock thrillers from the era; at least four qualify by consensus: "Shadow of a Doubt" (1943), "Notorious" (1946), "Strangers on a Train" (1951) and "The Wrong Man" (1956). Otto Preminger's success with "Laura" (1944) made his name and helped demonstrate noir's adaptability to a high-gloss 20th Century-Fox presentation. Among Hollywood's most celebrated directors of the era, arguably none worked more often in a noir mode than Preminger; his other noirs include "Fallen Angel" (1945), "Whirlpool" (1949), "Where the Sidewalk Ends" (1950) (all for Fox) and "Angel Face" (1952). A half-decade after "Double Indemnity" and "The Lost Weekend", Billy Wilder made "Sunset Boulevard" (1950) and "Ace in the Hole" (1951), noirs that were not so much crime dramas as satires on Hollywood and the news media. "In a Lonely Place" (1950) was Nicholas Ray's breakthrough; his other noirs include his debut, "They Live by Night" (1948) and "On Dangerous Ground" (1952), noted for their unusually sympathetic treatment of characters alienated from the social mainstream.
Orson Welles had notorious problems with financing but his three film noirs were well budgeted: "The Lady from Shanghai" (1947) received top-level, "prestige" backing, while "The Stranger" (1946), his most conventional film, and "Touch of Evil" (1958), an unmistakably personal work, were funded at levels lower but still commensurate with headlining releases. Like "The Stranger", Fritz Lang's "The Woman in the Window" (1945) was a production of the independent International Pictures. Lang's follow-up, "Scarlet Street" (1945), was one of the few classic noirs to be officially censored: filled with erotic innuendo, it was temporarily banned in Milwaukee, Atlanta and New York State. "Scarlet Street" was a semi-independent, cosponsored by Universal and Lang's Diana Productions, of which the film's co-star, Joan Bennett, was the second biggest shareholder. Lang, Bennett and her husband, the Universal veteran and Diana production head Walter Wanger, made "Secret Beyond the Door" (1948) in similar fashion.
Before leaving the United States while subject to the Hollywood blacklist, Jules Dassin made two classic noirs that also straddled the major–independent line: "Brute Force" (1947) and the influential documentary-style "The Naked City" (1948) were developed by producer Mark Hellinger, who had an "inside/outside" contract with Universal similar to Wanger's. Years earlier, working at Warner Bros., Hellinger had produced three films for Raoul Walsh, the proto-noirs "They Drive by Night" (1940), "Manpower" (1941) and "High Sierra" (1941), now regarded as a seminal work in noir's development. Walsh had no great name during his half-century as a director but his noirs "White Heat" (1949) and "The Enforcer" (1951) had A-list stars and are seen as important examples of the cycle. Other directors associated with top-of-the-bill Hollywood film noirs include Edward Dmytryk ("Murder, My Sweet" (1944), "Crossfire" (1947))—the first important noir director to fall prey to the industry blacklist—as well as Henry Hathaway ("The Dark Corner" (1946), "Kiss of Death" (1947)) and John Farrow ("The Big Clock" (1948), "Night Has a Thousand Eyes" (1948)).
Most of the Hollywood films considered to be classic noirs fall into the category of the "B movie". Some were Bs in the most precise sense, produced to run on the bottom of double bills by a low-budget unit of one of the major studios or by one of the smaller Poverty Row outfits, from the relatively well-off Monogram to shakier ventures such as Producers Releasing Corporation (PRC). Jacques Tourneur had made over thirty Hollywood Bs (a few now highly regarded, most forgotten) before directing the A-level "Out of the Past", described by scholar Robert Ottoson as "the "ne plus ultra" of forties film noir". Movies with budgets a step up the ladder, known as "intermediates" by the industry, might be treated as A or B pictures depending on the circumstances. Monogram created Allied Artists in the late 1940s to focus on this sort of production. Robert Wise ("Born to Kill" [1947], "The Set-Up" [1949]) and Anthony Mann ("T-Men" [1947] and "Raw Deal" [1948]) each made a series of impressive intermediates, many of them noirs, before graduating to steady work on big-budget productions. Mann did some of his most celebrated work with cinematographer John Alton, a specialist in what James Naremore called "hypnotic moments of light-in-darkness". "He Walked by Night" (1948), shot by Alton and, though credited solely to Alfred Werker, directed in large part by Mann, demonstrates their technical mastery and exemplifies the late 1940s trend of "police procedural" crime dramas. It was released, like other Mann-Alton noirs, by the small Eagle-Lion company; it was the inspiration for the "Dragnet" series, which debuted on radio in 1949 and television in 1951.
Several directors associated with noir built well-respected oeuvres largely at the B-movie/intermediate level. Samuel Fuller's brutal, visually energetic films such as "Pickup on South Street" (1953) and "Underworld U.S.A." (1961) earned him a unique reputation; his advocates praise him as "primitive" and "barbarous". Joseph H. Lewis directed noirs as diverse as "Gun Crazy" (1950) and "The Big Combo" (1955). The former—whose screenplay was written by the blacklisted Dalton Trumbo, disguised by a front—features an influential bank hold-up sequence shown in an unbroken take of over three minutes. "The Big Combo" was shot by John Alton and took the shadowy noir style to its outer limits. The most distinctive films of Phil Karlson ("The Phenix City Story" [1955] and "The Brothers Rico" [1957]) tell stories of vice organized on a monstrous scale. The work of other directors in this tier of the industry, such as Felix E. Feist ("The Devil Thumbs a Ride" [1947], "Tomorrow Is Another Day" [1951]), has become obscure. Edgar G. Ulmer spent most of his Hollywood career at B studios, working once in a while on projects that achieved intermediate status but for the most part on unmistakable Bs. In 1945, while at PRC, he directed a noir cult classic, "Detour". Ulmer's other noirs include "Strange Illusion" (1945), also for PRC; "Ruthless" (1948), for Eagle-Lion, which had acquired PRC the previous year; and "Murder Is My Beat" (1955), for Allied Artists.
A number of low- and modestly-budgeted noirs were made by independent, often actor-owned, companies contracting with larger studios for distribution. Serving as producer, writer, director and top-billed performer, Hugo Haas made films like "Pickup" (1951) and "The Other Woman" (1954); Jacques Tourneur's "The Fearmakers" (1958) was another such independent production. It was in this way that accomplished noir actress Ida Lupino established herself as the sole female director in Hollywood during the late 1940s and much of the 1950s. She does not appear in the best-known film she directed, "The Hitch-Hiker" (1953), developed by her company, The Filmakers, with support and distribution by RKO. It is one of the seven classic film noirs produced largely outside of the major studios that have been chosen for the United States National Film Registry. Of the others, one was a small-studio release: "Detour". Four were independent productions distributed by United Artists, the "studio without a studio": "Gun Crazy"; "Kiss Me Deadly"; "D.O.A." (1950), directed by Rudolph Maté; and "Sweet Smell of Success" (1957), directed by Alexander Mackendrick. One was an independent distributed by MGM, the industry leader: "Force of Evil" (1948), directed by Abraham Polonsky and starring John Garfield, both of whom were blacklisted in the 1950s. Independent production usually meant restricted circumstances but "Sweet Smell of Success", despite the plans of the production team, was clearly not made on the cheap, though like many other cherished A-budget noirs, it might be said to have a B-movie soul.
Perhaps no director better displayed that spirit than the German-born Robert Siodmak, who had already made a score of films before his 1940 arrival in Hollywood. Working mostly on A features, he made eight films now regarded as classic-era film noirs (a figure matched only by Lang and Mann). In addition to "The Killers", Burt Lancaster's debut and a Hellinger/Universal co-production, Siodmak's other important contributions to the genre include 1944's "Phantom Lady" (a top-of-the-line B and Woolrich adaptation), the ironically titled "Christmas Holiday" (1944), and "Cry of the City" (1948). "Criss Cross" (1949), with Lancaster again the lead, exemplifies how Siodmak brought the virtues of the B-movie to the A noir. In addition to the relatively looser constraints on character and message at lower budgets, the nature of B production lent itself to the noir style for economic reasons: dim lighting saved on electricity and helped cloak cheap sets (mist and smoke also served the cause); night shooting was often compelled by hurried production schedules; plots with obscure motivations and intriguingly elliptical transitions were sometimes the consequence of hastily written scripts, for which there was not always enough time or money to shoot every scene. In "Criss Cross", Siodmak achieved these effects with purpose, wrapping them around Yvonne De Carlo, playing the most understandable of femmes fatales; Dan Duryea, in one of his many charismatic villain roles; and Lancaster as an ordinary laborer turned armed robber, doomed by a romantic obsession.
Some critics regard classic film noir as a cycle exclusive to the United States; Alain Silver and Elizabeth Ward, for example, argue, "With the Western, film noir shares the distinction of being an indigenous American form ... a wholly American film style." However, although the term "film noir" was originally coined to describe Hollywood movies, it was an international phenomenon. Even before the beginning of the generally accepted classic period, there were films made far from Hollywood that can be seen in retrospect as film noirs, for example, the French productions "Pépé le Moko" (1937), directed by Julien Duvivier, and "Le Jour se lève" (1939), directed by Marcel Carné. In addition, Mexico experienced a vibrant film noir period from roughly 1946 to 1952, which was around the same time film noir was blossoming in the United States.
During the classic period, there were many films produced in Europe, particularly in France, that share elements of style, theme, and sensibility with American film noirs and may themselves be included in the genre's canon. In certain cases, the interrelationship with Hollywood noir is obvious: American-born director Jules Dassin moved to France in the early 1950s as a result of the Hollywood blacklist, and made one of the most famous French film noirs, "Rififi" (1955). Other well-known French films often classified as noir include "Quai des Orfèvres" (1947) and "Les Diaboliques" (1955), both directed by Henri-Georges Clouzot; "Casque d'Or" (1952), "Touchez pas au grisbi" (1954), and "Le Trou" (1960), directed by Jacques Becker; and "Ascenseur pour l'échafaud" (1958), directed by Louis Malle. French director Jean-Pierre Melville is widely recognized for his tragic, minimalist film noirs—"Bob le flambeur" (1955), from the classic period, was followed by "Le Doulos" (1962), "Le deuxième souffle" (1966), "Le Samouraï" (1967), and "Le Cercle rouge" (1970).
Scholar Andrew Spicer argues that British film noir evidences a greater debt to French poetic realism than to the expressionistic American mode of noir. Examples of British noir from the classic period include "Brighton Rock" (1947), directed by John Boulting; "They Made Me a Fugitive" (1947), directed by Alberto Cavalcanti; "The Small Back Room" (1948), directed by Michael Powell and Emeric Pressburger; "The October Man" (1947), directed by Roy Ward Baker; and "Cast a Dark Shadow" (1955), directed by Lewis Gilbert. Terence Fisher directed several low-budget thrillers in a noir mode for Hammer Film Productions, including "The Last Page" (a.k.a. "Man Bait"; 1952), "Stolen Face" (1952), and "Murder by Proxy" (a.k.a. "Blackout"; 1954). Before leaving for France, Jules Dassin had been obliged by political pressure to shoot his last English-language film of the classic noir period in Great Britain: "Night and the City" (1950). Though it was conceived in the United States and was not only directed by an American but also stars two American actors—Richard Widmark and Gene Tierney—it is technically a UK production, financed by 20th Century-Fox's British subsidiary. The most famous of classic British noirs is director Carol Reed's "The Third Man" (1949), from a screenplay by Graham Greene. Set in Vienna immediately after World War II, it also stars two American actors, Joseph Cotten and Orson Welles, who had appeared together in "Citizen Kane".
Elsewhere, Italian director Luchino Visconti adapted Cain's "The Postman Always Rings Twice" as "Ossessione" (1943), regarded both as one of the great noirs and a seminal film in the development of neorealism. (This was not even the first screen version of Cain's novel, having been preceded by the French "Le Dernier Tournant" in 1939.) In Japan, the celebrated Akira Kurosawa directed several films recognizable as film noirs, including "Drunken Angel" (1948), "Stray Dog" (1949), "The Bad Sleep Well" (1960), and "High and Low" (1963).
Among the first major neo-noir films—the term often applied to films that consciously refer back to the classic noir tradition—was the French "Tirez sur le pianiste" (1960), directed by François Truffaut from a novel by one of the gloomiest of American noir fiction writers, David Goodis. Noir crime films and melodramas have been produced in many countries in the post-classic era. Some of these are quintessentially self-aware neo-noirs—for example, "Il Conformista" (1969; Italy), "Der Amerikanische Freund" (1977; Germany), "The Element of Crime" (1984; Denmark), and "El Aura" (2005; Argentina). Others simply share narrative elements and a version of the hardboiled sensibility associated with classic noir, such as "Castle of Sand" (1974; Japan), "Insomnia" (1997; Norway), "Croupier" (1998; UK), and "Blind Shaft" (2003; China).
The neo-noir film genre developed midway through the Cold War, and the trend reflected the era's cynicism and its fear of nuclear annihilation. The new genre introduced innovations unavailable to the earlier noir films, and its violence was more potent.
While it is hard to draw a line between some of the noir films of the early 1960s such as "Blast of Silence" (1961) and "Cape Fear" (1962) and the noirs of the late 1950s, new trends emerged in the post-classic era. "The Manchurian Candidate" (1962), directed by John Frankenheimer, "Shock Corridor" (1963), directed by Samuel Fuller, and "Brainstorm" (1965), directed by experienced noir character actor William Conrad, all treat the theme of mental dispossession within stylistic and tonal frameworks derived from classic film noir. "The Manchurian Candidate" examined the situation of American prisoners of war (POWs) during the Korean War. Incidents that occurred during the war, as well as those post-war, functioned as an inspiration for a "Cold War Noir" subgenre. The television series "The Fugitive" (1963–67) brought classic noir themes and mood to the small screen for an extended run.
In a different vein, films began to appear that self-consciously acknowledged the conventions of classic film noir as historical archetypes to be revived, rejected, or reimagined. These efforts typify what came to be known as neo-noir. Though several late classic noirs, "Kiss Me Deadly" in particular, were deeply self-knowing and post-traditional in conception, none tipped its hand so evidently as to be remarked on by American critics at the time. The first major film to overtly work this angle was French director Jean-Luc Godard's "À bout de souffle" ("Breathless"; 1960), which pays its literal respects to Bogart and his crime films while brandishing a bold new style for a new day. In the United States, Arthur Penn (1965's "Mickey One", drawing inspiration from Truffaut's "Tirez sur le pianiste" and other French New Wave films), John Boorman (1967's "Point Blank", similarly caught up, though in the deeper waters of the "Nouvelle vague"), and Alan J. Pakula (1971's "Klute") directed films that knowingly related themselves to the original film noirs, inviting audiences in on the game.
A manifest affiliation with noir traditions—which, by its nature, allows different sorts of commentary on them to be inferred—can also provide the basis for explicit critiques of those traditions. In 1973, director Robert Altman flipped off noir piety with "The Long Goodbye". Based on the novel by Raymond Chandler, it features one of Bogart's most famous characters, but in iconoclastic fashion: Philip Marlowe, the prototypical hardboiled detective, is replayed as a hapless misfit, almost laughably out of touch with contemporary mores and morality. Where Altman's subversion of the film noir mythos was so irreverent as to outrage some contemporary critics, around the same time Woody Allen was paying affectionate, at points idolatrous homage to the classic mode with "Play It Again, Sam" (1972). The "blaxploitation" film "Shaft" (1971), wherein Richard Roundtree plays the titular African-American private eye, John Shaft, takes conventions from classic noir.
The most acclaimed of the neo-noirs of the era was director Roman Polanski's 1974 "Chinatown". Written by Robert Towne, it is set in 1930s Los Angeles, an accustomed noir locale nudged back some few years in a way that makes the pivotal loss of innocence in the story even crueler. Where Polanski and Towne raised noir to a black apogee by turning rearward, director Martin Scorsese and screenwriter Paul Schrader brought the noir attitude crashing into the present day with "Taxi Driver" (1976), a crackling, bloody-minded gloss on bicentennial America. In 1978, Walter Hill wrote and directed "The Driver", a chase film as might have been imagined by Jean-Pierre Melville in an especially abstract mood.
Hill was already a central figure in 1970s noir of a more straightforward manner, having written the script for director Sam Peckinpah's "The Getaway" (1972), adapting a novel by pulp master Jim Thompson, as well as for two tough private eye films: an original screenplay for "Hickey & Boggs" (1972) and an adaptation of a novel by Ross Macdonald, the leading literary descendant of Hammett and Chandler, for "The Drowning Pool" (1975). Some of the strongest 1970s noirs, in fact, were unwinking remakes of the classics, "neo" mostly by default: the heartbreaking "Thieves Like Us" (1974), directed by Altman from the same source as Ray's "They Live by Night", and "Farewell, My Lovely" (1975), the Chandler tale made classically as "Murder, My Sweet", remade here with Robert Mitchum in his last notable noir role. Detective series, prevalent on American television during the period, updated the hardboiled tradition in different ways, but the show conjuring the most noir tone was a horror crossover touched with shaggy, "Long Goodbye"-style humor: "Kolchak: The Night Stalker" (1974–75), featuring a Chicago newspaper reporter investigating strange, usually supernatural occurrences.
The turn of the decade brought Scorsese's black-and-white "Raging Bull" (cowritten by Schrader); an acknowledged masterpiece—the American Film Institute ranks it as the greatest American film of the 1980s and the fourth greatest of all time—it is also a retreat, telling a story of a boxer's moral self-destruction that recalls in both theme and visual ambience noir dramas such as "Body and Soul" (1947) and "Champion" (1949). From 1981, the popular "Body Heat", written and directed by Lawrence Kasdan, invokes a different set of classic noir elements, this time in a humid, erotically charged Florida setting; its success confirmed the commercial viability of neo-noir, at a time when the major Hollywood studios were becoming increasingly risk averse. The mainstreaming of neo-noir is evident in such films as "Black Widow" (1987), "Shattered" (1991), and "Final Analysis" (1992). Few neo-noirs have made more money or more wittily updated the tradition of the noir double-entendre than "Basic Instinct" (1992), directed by Paul Verhoeven and written by Joe Eszterhas. The film also demonstrates how neo-noir's polychrome palette can reproduce many of the expressionistic effects of classic black-and-white noir.
Like "Chinatown", its more complex predecessor, Curtis Hanson's Oscar-winning "L.A. Confidential" (1997), based on the James Ellroy novel, demonstrates an opposite tendency—the deliberately retro film noir; its tale of corrupt cops and femmes fatales is seemingly lifted straight from a film of 1953, the year in which it is set. Director David Fincher followed the immensely successful neo-noir "Seven" (1995) with a film that developed into a cult favorite after its original, disappointing release: "Fight Club" (1999) is a "sui generis" mix of noir aesthetic, perverse comedy, speculative content, and satiric intent.
Working generally with much smaller budgets, brothers Joel and Ethan Coen have created one of the most extensive film oeuvres influenced by classic noir, with films such as "Blood Simple" (1984) and "Fargo" (1996), considered by some a supreme work in the neo-noir mode. The Coens cross noir with other generic lines in the gangster drama "Miller's Crossing" (1990)—loosely based on the Dashiell Hammett novels "Red Harvest" and "The Glass Key"—and the comedy "The Big Lebowski" (1998), a tribute to Chandler and an homage to Altman's version of "The Long Goodbye". The characteristic work of David Lynch combines film noir tropes with scenarios driven by disturbed characters such as the sociopathic criminal played by Dennis Hopper in "Blue Velvet" (1986) and the delusionary protagonist of "Lost Highway" (1997). The "Twin Peaks" cycle, both TV series (1990–91) and film, "Twin Peaks: Fire Walk with Me" (1992), puts a detective plot through a succession of bizarre spasms. David Cronenberg also mixes surrealism and noir in "Naked Lunch" (1991), inspired by William S. Burroughs' novel.
Perhaps no American neo-noirs better reflect the classic noir A-movie-with-a-B-movie-soul than those of director-writer Quentin Tarantino; neo-noirs of his such as "Reservoir Dogs" (1992) and "Pulp Fiction" (1994) display a relentlessly self-reflexive, sometimes tongue-in-cheek sensibility, similar to the work of the New Wave directors and the Coens. Other films from the era readily identifiable as neo-noir (some retro, some more au courant) include director John Dahl's "Kill Me Again" (1989), "Red Rock West" (1992), and "The Last Seduction" (1993); four adaptations of novels by Jim Thompson—"The Kill-Off" (1989), "After Dark, My Sweet" (1990), "The Grifters" (1990), and the remake of "The Getaway" (1994); and many more, including adaptations of the work of other major noir fiction writers: "The Hot Spot" (1990), from "Hell Hath No Fury", by Charles Williams; "Miami Blues" (1990), from the novel by Charles Willeford; and "Out of Sight" (1998), from the novel by Elmore Leonard. Several films by director-writer David Mamet involve noir elements: "House of Games" (1987), "Homicide" (1991), "The Spanish Prisoner" (1997), and "Heist" (2001). On television, "Moonlighting" (1985–89) paid homage to classic noir while demonstrating an unusual appreciation of the sense of humor often found in the original cycle. Between 1983 and 1989, Mickey Spillane's hardboiled private eye Mike Hammer was played with wry gusto by Stacy Keach in a series and several stand-alone television films (an unsuccessful revival followed in 1997–98). The British miniseries "The Singing Detective" (1986), written by Dennis Potter, tells the story of a mystery writer named Philip Marlow; widely considered one of the finest neo-noirs in any medium, some critics rank it among the greatest television productions of all time.
Among big-budget auteurs, Michael Mann has worked frequently in a neo-noir mode, with such films as "Thief" (1981) and "Heat" (1995) and the TV series "Miami Vice" (1984–89) and "Crime Story" (1986–88). Mann's output exemplifies a primary strain of neo-noir, affectionately called "neon noir", in which classic themes and tropes are revisited in a contemporary setting with an up-to-date visual style and rock- or hip hop-based musical soundtrack.
Neo-noir film borrows from and reflects many of the characteristics of film noir: a presence of crime, violence, complex characters and plot-lines, mystery, ambiguity and moral ambivalence all come into play in the neon noir genre. But more so than the superficial traits of the genre, neon noir emphasizes the socio-critique of film noir, recalling the specific socio-cultural dimensions of the interwar years when noirs first became prominent: a time of global existential crisis, depression and the mass movement of rural persons towards the cities. Long shots or montages of cityscapes, often portrayed as dark and menacing, were suggestive of what Dueck referred to as a "bleak societal perspective". | https://en.wikipedia.org/wiki?curid=10802 |
Finno-Ugric languages
Finno-Ugric (Fenno-Ugric) or Finno-Ugrian (Fenno-Ugrian) is a traditional grouping of all languages in the Uralic language family except the Samoyedic languages. Its formerly commonly accepted status as a subfamily of Uralic is based on criteria formulated in the 19th century and is criticized by some contemporary linguists such as Tapani Salminen and Ante Aikio as inaccurate and misleading. The three most-spoken Uralic languages, Hungarian, Finnish, and Estonian, are all included in Finno-Ugric, although linguistic roots common to both branches of the traditional Finno-Ugric language tree (Finno-Permic and Ugric) are distant.
The term "Finno-Ugric", which originally referred to the entire family, is sometimes used as a synonym for the term "Uralic", which includes the Samoyedic languages, as commonly happens when a language family is expanded with further discoveries.
The validity of Finno-Ugric as a genetic grouping is under challenge, with some feeling that the Finno-Permic languages are as distinct from the Ugric languages as they are from the Samoyedic languages spoken in Siberia, or even that none of the Finno-Ugric, Finno-Permic, or Ugric branches has been established. Received opinion has been that the easternmost (and last-discovered) Samoyed had separated first and the branching into Ugric and Finno-Permic took place later, but this reconstruction does not have strong support in the linguistic data.
Attempts at reconstructing a Proto-Finno-Ugric proto-language, a common ancestor of all Uralic languages except for the Samoyedic languages, are largely indistinguishable from Proto-Uralic, suggesting that Finno-Ugric might not be a historical grouping but a geographical one, with Samoyedic being distinct by lexical borrowing rather than actually being historically divergent. It has been proposed that the area in which Proto-Finno-Ugric was spoken reached between the Baltic Sea and the Ural Mountains.
Traditionally, the main set of evidence for the genetic proposal of Proto-Finno-Ugric has come from vocabulary. A large amount of vocabulary (e.g. the numerals "one", "three", "four" and "six"; the body-part terms "hand", "head") is only reconstructed up to the Proto-Finno-Ugric level, and only words with a Samoyedic equivalent have been reconstructed for Proto-Uralic. That methodology has been criticised, as no coherent explanation other than inheritance has been presented for the origin of most of the Finno-Ugric vocabulary (though a small number has been explained as old loanwords from Proto-Indo-European or its immediate successors).
The Samoyedic group has undergone a longer period of independent development, and its divergent vocabulary could be caused by mechanisms of replacement such as language contact. (The Finno-Ugric group is usually dated to approximately 4,000 years ago, the Samoyedic a little over 2,000.) Proponents of the traditional binary division note, however, that the invocation of extensive contact influence on vocabulary is at odds with the grammatical conservatism of Samoyedic.
The consonant "*š" (a voiceless postalveolar fricative) has not been conclusively shown to occur in the traditional Proto-Uralic lexicon, but it is attested in some of the Proto-Finno-Ugric material. Another feature attested in the Finno-Ugric vocabulary is that "*i" now behaves as a neutral vowel with respect to front-back vowel harmony, and thus there are roots such as "*niwa-" "to remove the hair from hides".
Regular sound changes proposed for this stage are few and remain open to interpretation. Sammallahti (1988) proposes five, following Janhunen's (1981) reconstruction of Proto-Finno-Permic.
Sammallahti (1988) further reconstructs sound changes "*oo", "*ee" → "*a", "*ä" (merging with original "*a", "*ä") for the development from Proto-Finno-Ugric to Proto-Ugric. Similar sound laws are required for other languages as well. Thus, the origin and raising of long vowels may actually belong at a later stage, and the development of these words from Proto-Uralic to Proto-Ugric can be summarized as simple loss of "*x" (if it existed in the first place at all; vowel length only surfaces consistently in the Baltic-Finnic languages). The proposed raising of "*o" has alternatively been interpreted as a lowering "*u" → "*o" in Samoyedic (PU *"lumi" → "*lomə" → Proto-Samoyedic "*jom").
Janhunen (2007, 2009) notes a number of derivational innovations in Finno-Ugric, including "*ńoma" "hare" → "*ńoma-la" (vs. Samoyedic "*ńomå"), "*pexli" "side" → "*peel-ka" → "*pelka" "thumb", though involving Proto-Uralic derivational elements.
The Finno-Ugric group is not typologically distinct from Uralic as a whole: the most widespread structural features among the group all extend to the Samoyedic languages as well.
Modern linguistic research has shown that "Volgaic languages" is a geographical classification rather than a linguistic one, because the Mordvinic languages are more closely related to the Finno-Lappic languages than to the Mari languages.
The relation of the Finno-Permic and the Ugric groups is adjudged remote by some scholars. On the other hand, with a projected time depth of only 3,000 to 4,000 years, the traditionally accepted Finno-Ugric grouping would be far younger than many major families such as Indo-European or Semitic, and would be about the same age as, for instance, the Eastern subfamily of Nilotic. But the grouping is far from transparent or securely established. The absence of early records is a major obstacle. As for the Finno-Ugric Urheimat, most of what has been said about it is speculation.
Some linguists criticizing the Finno-Ugric genetic proposal also question the validity of the entire Uralic family, instead proposing a Ural–Altaic hypothesis, within which they believe Finno-Permic may be as distant from Ugric as from Turkic. However, this approach has been rejected by nearly all other specialists in Uralic linguistics.
One argument in favor of the Finno-Ugric grouping has come from loanwords. Several loans from the Indo-European languages are present in most or all of the Finno-Ugric languages, while being absent from Samoyedic; for phonological reasons, many others must also be dated as quite old.
According to Häkkinen (1983), the alleged Proto-Finno-Ugric loanwords are disproportionately well represented in Hungarian and the Permic languages, and disproportionately poorly represented in the Ob-Ugric languages; hence it is possible that such words were acquired by the languages only after the initial dissolution of the Uralic family into individual dialects, and that the scarcity of loanwords in Samoyedic results from its peripheral location.
The number systems among the Finno-Ugric languages are particularly distinct from the Samoyedic languages: only the numerals "2" and "5" have cognates in Samoyedic, while the numerals "1", "3", "4", "6", and "10" are shared by all or most Finno-Ugric languages.
Below are the numbers 1 to 10 in several Finno-Ugric languages. Forms in "italic" do not descend from the reconstructed forms.
The number '2' descends in Ugric from a front-vocalic variant *kektä.
The numbers '9' and '8' in Finnic through Mari are considered to be derived from the numbers '1' and '2' as '10–1' and '10–2'. One reconstruction is *"yk+teksa" and *"kak+teksa", respectively, where *"teksa" (cf. "deka") is an Indo-European loan; notice that the difference between /t/ and /d/ is not phonemic, unlike in Indo-European. Another analysis is *"ykt-e-ksa", *"kakt-e-ksa", with *"e" being the negative verb.
100-word Swadesh lists for certain Finno-Ugric languages can be compared and contrasted at the Rosetta Project website:
Finnish, Estonian, Hungarian, Erzya.
The four largest groups that speak Finno-Ugric languages are Hungarians (14.5 million), Finns (6.5 million), Estonians (1.1 million), and Mordvins (0.85 million). Three (Hungarians, Finns, and Estonians) inhabit independent nation-states, Hungary, Finland, and Estonia, while the Mordvins have an autonomous Mordovian Republic within Russia. The traditional area of the indigenous Sámi people is in Northern Fenno-Scandinavia and the Kola Peninsula in Northwest Russia and is known as Sápmi. Some other Finno-Ugric peoples have autonomous republics in Russia: Karelians (Republic of Karelia), Komi (Komi Republic), Udmurts (Udmurt Republic), Mari (Mari El Republic), and Mordvins (Moksha and Erzya; Republic of Mordovia). Khanty and Mansi peoples live in the Khanty-Mansi Autonomous Okrug of Russia, while Komi-Permyaks live in Komi-Permyak Okrug, which used to be an autonomous okrug of Russia, but today is a territory with special status within Perm Krai.
The linguistic reconstruction of the Finno-Ugric language family has led to the postulation that the ancient Proto-Finno-Ugric people were ethnically related, and that even the modern Finno-Ugric-speaking peoples are ethnically related. Such hypotheses are based on the assumption that heredity can be traced through linguistic relatedness, although it must be kept in mind that language shift and ethnic admixture, relatively frequent and common occurrences both in recorded history and most likely also in prehistory, confuse the picture, and there is no straightforward relationship, if any, between linguistic and genetic affiliation. Still, the premise that the limited community of speakers of a proto-language must have been ethnically homogeneous remains accepted.
Modern genetic studies have shown that the Y-chromosome haplogroup N3, and sometimes N2, is almost specific, though certainly not restricted, to Uralic- or Finno-Ugric-speaking populations, especially as a high-frequency or primary paternal haplogroup. These haplogroups branched from haplogroup N, which probably spread north, then west and east from Northern China about 12,000–14,000 years before present from father haplogroup NO (haplogroup O being the most common Y-chromosome haplogroup in Southeast Asia).
Some of the ethnicities speaking Finno-Ugric languages are:
| https://en.wikipedia.org/wiki?curid=10803 |
Latin freestyle
Latin freestyle (local terms include Miami freestyle) or simply freestyle music is a form of electronic dance music that emerged in the New York metropolitan area in the 1980s. It experienced its greatest popularity from the late 1980s until the early 1990s. It continues to be produced and enjoys some degree of popularity, especially in urban settings. A common theme of freestyle lyricism is heartbreak in the city. The first freestyle hit is largely attributed to "Let the Music Play" by Shannon.
The music was largely made popular on radio stations such as WKTU and "pre-hip hop" Hot 97 in New York City, and it became especially popular among Italian Americans and Puerto Rican Americans in the New York metro area, Philadelphia metro area, and Baltimore metro area; Cuban Americans in the Miami area; Hispanic and Latino Americans and Italian Americans in Detroit, Los Angeles County, New Orleans and the Gulf coast; and Filipino Americans in Los Angeles, New York City, San Diego, and the San Francisco Bay Area. Notable performers in the freestyle genre include Stevie B, Corina, Lil Suzy, Timmy T, George Lamond, TKA, Noel, Company B, Exposé, Debbie Deb, Brenda K. Starr, the Cover Girls, Lisa Lisa and Cult Jam, Stacey Q, Sa-Fire, Shannon, Coro, Lisette Melendez, Judy Torres, Rockell, Paris by Air, Joyce Sims, and many others.
Freestyle music developed in the early 1980s, primarily in the Hispanic (Puerto Rican) communities of Upper Manhattan and The Bronx and the Italian-American communities in Brooklyn, The Bronx, and other boroughs of New York City, later spreading throughout New York's five boroughs and into New Jersey. It initially was a fusion of synthetic instrumentation and syncopated percussion of 1980s electro, as favored by fans of breakdancing. Sampling, as found in synth-pop music and hip-hop, was incorporated. Key influences include Afrika Bambaataa & Soul Sonic Force's "Planet Rock" (1982) and Shannon's "Let the Music Play" (1983), the latter of which was a top-ten "Billboard" Hot 100 hit. In 1984, a Latin presence was established when "Please Don't Go", by newcomer Nayobe (a Brooklyn singer of Afro-Cuban descent), was recorded and released as the first song in the genre by a Latin American artist. The song became a success, reaching No. 23 on the "Billboard" Hot Dance Music/Club Play chart. In 1985, a Spanish version of the song was released with the title "No Te Vayas". By 1987, freestyle began getting more airplay on American pop radio stations. Songs such as "Come Go with Me" by Exposé, "Show Me" by the Cover Girls, "Fascinated" by Company B, "Silent Morning" by Noel, and "Catch Me (I'm Falling)" by Pretty Poison brought freestyle into the mainstream. House music, based partly on disco rhythms, was by 1992 challenging the relatively upbeat, syncopated freestyle sound. Pitchfork considers the Miami Mix of ABC's single "When Smokey Sings" to be proto-freestyle.
Freestyle's Top 40 radio airplay took off by 1987, but the genre began to disappear from the airwaves in the early 1990s as radio stations moved to Top 40-only formats. Artists such as George Lamond, Exposé, Sweet Sensation, and Stevie B were still heard on mainstream radio, but other notable freestyle artists did not fare as well. Carlos Berrios and Platinum producer Frankie Cutlass used a freestyle production on "Temptation" by Corina and "Together Forever" by Lisette Melendez. The songs were released in 1991, almost simultaneously, and caused a resurgence in the style when they were embraced by Top 40 radio. "Temptation" reached the number 6 spot on the "Billboard" Hot 100 Chart. These hits were followed by the success of Lisa Lisa and Cult Jam, who had been one of the earliest freestyle acts. Their records were produced by Full Force, who had also worked with UTFO and James Brown.
Several primarily freestyle artists released ballads during the 1980s and early 1990s that crossed over to the pop charts and charted higher than their previous work. These include "Seasons Change" by Exposé, "Thinking of You" by Sa-Fire, "One More Try" by Timmy T, "Because I Love You (The Postman Song)" by Stevie B, and "If Wishes Came True" by Sweet Sensation. Brenda K. Starr reached the Hot 100 with her ballad "I Still Believe". Freestyle shortly thereafter gave way to mainstream pop artists such as MC Hammer, Paula Abdul, Bobby Brown, New Kids on the Block, and Milli Vanilli (with some artists utilizing elements of freestyle beginning in the 1980s) using hip hop beats and electro samples in a mainstream form with slicker production and MTV-friendly videos. These artists were successful on crossover stations as well as R&B stations, and freestyle was replaced as an underground genre by newer styles such as new jack swing, trance and Eurodance. Despite this, some freestyle acts managed to garner hits well into the 1990s, with acts such as Cynthia and Rockell scoring minor hits on the "Billboard" Hot 100 as late as 1998.
Freestyle remained a largely underground genre with a sizable following in New York, but has recently seen a comeback in the cities where the music originally experienced its greatest success. New York City impresario Steve Sylvester and producer Sal Abbetiello of Fever Records launched Stevie Sly's Freestyle Party show at the Manhattan live music venue Coda on April 1, 2004. The show featured Judy Torres, Cynthia, and the Cover Girls and was attended by several celebrity guests. The Coda show was successful, and was followed by a summer 2006 Madison Square Garden concert that showcased freestyle's most successful performers. New freestyle releases are popular with enthusiasts and newcomers alike. Miami rapper Pitbull collaborated with Miami freestyle artist Stevie B to create an updated version of Stevie B's hit, "Spring Love".
Jordin Sparks' 2009 single "S.O.S. (Let the Music Play)" nods heavily to the freestyle genre with its use of a sample from the song "Let the Music Play" by Shannon.
In the modern day, freestyle music continues to enjoy a thriving fanbase across the United States. In cities like New York, Miami, and Los Angeles, recent concerts by freestyle artists have been extremely successful, with many events selling out.
As Latin freestyle in the late 1980s and early 1990s was gradually superseded by house music, dance-pop, and regular hip hop on one front and by Spanish-language pop music with marginal Latin freestyle influences on another, a "harder strain" of house music originating in New York City came to incorporate elements of Latin freestyle and the old school hip hop sound. Principal architects of the genre were Todd Terry (early instances include "Alright Alright" and "Dum Dum Cry") and Nitro Deluxe. Deluxe's "This Brutal House," fusing Latin percussion and the New York electro sound of Man Parrish with brash house music, proved to have an impact on the United Kingdom's club music scene, presaging the early 1990s British rave scene.
Freestyle features a dance tempo with stress on beats two and four; syncopation with a bassline, and a louder bass drum, lead synth, or percussion, and optional stabs of synthesized brass or orchestral samples; sixteenth-note hi-hats; a chord progression that lasts eight, 16, or 32 beats and is usually in a minor key; relatively complex, upbeat melodies with singing, verses, and a chorus; and themes about a city, broken heart, love, or dancing. Freestyle music in general is heavily influenced by electronic instrumentation upon an upbeat dance tempo. The Latin clave rhythm is often present, as in "Clave Rocks" by Rae Serrano, aka Amoretto. The tempo is almost always between 110 and 130 beats per minute (BPM), and is typically 118 BPM. Keyboard parts are influenced by House music, and often contain many short melodies and countermelodies.
The genre was recognized as a subgenre of hip-hop in the mid-1980s. It was dominated by "hard" electro beats of the type used primarily at the time in hip-hop music. Freestyle was more appreciated in larger cities.
The origin of the name "freestyle" is disputed. One theory is that the term refers to the mixing techniques of DJs who spun this form of music in its pre-house incarnations. Freestyle's syncopated beat structures required that DJs incorporate aspects of both electronic and hip-hop techniques, as they had to mix, or had more freedom to mix, more quickly and responsively to the individual songs. A second explanation is that the music allows for a greater degree of freedom of dance expression than other music of the time, and each dancer is free to create his or her own style. Yet another story holds that the freestyle name evolved in Miami over confusion between two tracks produced by Tony "Pretty Boy" Butler: "Freestyle Express" by Freestyle and Debbie Deb's "When I Hear Music." The sound became synonymous with Butler's production, and the name of the group he was in, Freestyle, became the genre's name.
"Let the Music Play" by Shannon, is often named as the genre's first hit, and its sound, called "The Shannon Sound", as the foundation of the genre. Others like DJ Lex and Triple Beam Records contend that Afrika Bambaataa's "Planet Rock" was the first freestyle song produced. "Let the Music Play" eventually became freestyle's biggest hit, and still receives frequent airplay. Its producers Chris Barbosa and Mark Liggett changed and redefined the electro funk sound with the addition of Latin-American rhythms and a syncopated drum-machine sound.
Many early or popular freestyle artists and DJs, such as Jellybean, Tony Torres, Raul Soto, Roman Ricardo, Lil Suzy, and Nocera, were of Puerto Rican or Italian ancestry, which was one reason for the style's popularity among Puerto Rican Americans and Italian Americans in the New York City area and Philadelphia.
The new sound rejuvenated the funk, soul and hip hop club scenes in New York City. While many neighborhood clubs closed their doors permanently, Manhattan clubs that played freestyle music began to thrive. Records like "Play At Your Own Risk" by Planet Patrol, "One More Shot" by C Bank, "Al-Naafiyish (The Soul)" by Hashim, and "I.O.U." by Freeez became hits.
Producers from around the world began to replicate the sound in productions that were more radio-friendly. Records such as "Let Me Be the One" by Sa-Fire, "I Remember What You Like" by Jenny Burton, "Running" by Information Society, "Give Me Tonight" by Shannon and "It Works For Me" by Pam Russo enjoyed heavy New York radio airplay.
The production team of Tony Moran and Albert Cabrera, known as the Latin Rascals, created original music for radio station WKTU that included freestyle classics like 1984's "Arabian Nights", and later hip-hop oriented projects such as the Cover Girls' "Show Me".
Freestyle continues to have a strong following in New York. Coro performed in WKTU's well-received "Beatstock" concert in 2006, and the 2008 "Freestyle Extravaganza" concert sold out Madison Square Garden.
In March 2013, Radio City Music Hall hosted its first freestyle concert. Top freestyle artists included in the line-up were TKA, Safire, Judy Torres, Cynthia, Cover Girls, Lisa Lisa, Shannon, Noel, and Lisette Melendez. Originally scheduled as a one-night event, the show added a second night after the first sold out in a matter of days.
Radio stations nationwide began to play hits by artists like TKA, Sweet Sensation, Exposé, and Sa-Fire on the same playlists as Michael Jackson and Madonna. "(You Are My) All and All" by Joyce Sims became the first freestyle record to cross over into the R&B market, and was one of the first to reach the European market. Radio station WPOW/Power 96 was noted for exposing freestyle to South Florida in the mid-'80s through the early '90s, as well as mixing in some local Miami bass into its playlist.
'Pretty Tony' Butler produced several hits on Miami's Jam-Packed Records, including Debbie Deb's "When I Hear Music" and "Lookout Weekend", and Trinere's "I'll Be All You'll Ever Need" and "They're Playing Our Song". Company B, Stevie B, Paris By Air, Linear, Will to Power and Exposé's later hits defined Miami freestyle. Tolga Katas is credited as one of the first persons to create a hit record entirely on a computer, and produced Stevie B's "Party Your Body", "In My Eyes" and "Dreamin' of Love". Katas' record label Futura Records was an incubator for artists such as Linear, who achieved international success after a move from Futura to Atlantic Records.
The groundbreaking "Nightime" (1984) by Pretty Poison, featuring red-headed diva Jade Starling, initially put Philadelphia on the freestyle map. Their follow-up "Catch Me I'm Falling" was a worldwide hit and brought freestyle to American Bandstand, Soul Train, Solid Gold and the Arsenio Hall Show. "Catch Me I'm Falling" broke on the street during the summer of 1987 and was the #1 single at WCAU (98 Hot Hits) and #2 at WUSL (Power 99) during the first two weeks of July. Virgin Records was quick to sign Pretty Poison, helping to usher in the avalanche of other major label signings from the expanding freestyle scene.
Several freestyle acts followed on the heels of Pretty Poison, emerging from the metropolitan Philadelphia area in the early 1990s and benefiting from both the clubs and the overnight success of the then dance-friendly rhythmic Top 40 station WIOQ. Artists such as T.P.E. (The Philadelphia Experiment) enjoyed regional success.
Freestyle had a recognizable following in California, particularly in Los Angeles, the San Francisco Bay Area, and San Diego. California's large Latino community enjoyed the sounds of the East Coast Latin club scene, and a number of California artists became popular among freestyle fans on the East Coast. Northern California freestyle, mainly from San Francisco and San Jose, leans towards a high-tempo dance beat similar to Hi-NRG. Most freestyle in California emerged from the Bay Area and Los Angeles regions.
California's large Filipino American community also embraced freestyle music during the late 1980s and early 1990s. Jaya, who immigrated from the Philippines to Los Angeles, was one of the first Filipina-American freestyle singers, and reached number 44 in 1990 with "If You Leave Me Now".
Freestyle's popularity spread outward from the Greater Toronto Area's Italian, Hispanic/Latino and Greek populations in the late 1980s and early 1990s. It was showcased alongside house music in various Toronto nightclubs, but by the mid-1990s was replaced almost entirely by house music.
Lil' Suzy released several 12-inch singles and performed live on the Canadian live dance music television program "Electric Circus". Montreal singer Nancy Martinez's 1986 single "For Tonight" would become the first Canadian freestyle single to reach the Top 40 on the Billboard Hot 100 chart, while a Montreal girl group reached the Canadian chart with "Ole Ole" in 2000.
Performers and producers associated with the style also came from around the world, including Turkish-American Murat Konar (the writer of Information Society's "Running"), Paul Lekakis from Greece, Asian artist Leonard (Leon Youngboy), who released the song "Youngboys", and British musicians including Freeez, Paul Hardcastle, Samantha Fox, and even Robin Gibb of the Bee Gees, who also adopted the freestyle sound in his 1984 album "Secret Agent", having worked with producer Chris Barbosa. Several British new wave and synthpop bands also teamed up with freestyle producers or were influenced by the genre, and released freestyle songs or remixes. These include Duran Duran, whose song "Notorious" was remixed by the Latin Rascals and whose album "Big Thing" contained several freestyle-inspired songs such as "All She Wants Is"; New Order, who teamed up with Arthur Baker, producing and co-writing the track "Confusion"; Erasure and the Der Deutsche mixes of their song "Blue Savannah"; and the Pet Shop Boys, whose song "Domino Dancing" was produced by Miami-based freestyle producer Lewis Martineé. Australian act I'm Talking incorporated freestyle elements into their singles "Trust Me" and "Do You Wanna Be?", both becoming top-ten hits in their native Australia. | https://en.wikipedia.org/wiki?curid=10808 |
Fantasy (psychology)
Fantasy in a psychological sense refers to two different possible aspects of the mind: the conscious and the unconscious.
A fantasy is a situation imagined by an individual that expresses certain desires or aims on the part of its creator. Fantasies sometimes involve situations that are highly unlikely; or they may be quite realistic. Fantasies can also be sexual in nature. Another, more basic meaning of fantasy is something which is not 'real', in the sense of being perceived explicitly by any of the senses, but which exists as an imagined situation of object to subject.
In everyday life, individuals often find their thoughts "pursue a series of fantasies concerning things they wish they could do or wish they had done ... fantasies of control or of sovereign choice ... daydreams."
George Eman Vaillant in his study of defence mechanisms took as a central example of "an immature defence ... "fantasy" — living in a 'Walter Mitty' dream world where you imagine you are successful and popular, instead of making real efforts to make friends and succeed at a job." Fantasy, when pushed to the extreme, is a common trait of narcissistic personality disorder; and Vaillant found that "not one person who used fantasy a lot had any close friends."
Other researchers and theorists find that fantasy has beneficial elements — providing "small regressions and compensatory wish fulfilments which are recuperative in effect." Research by Deirdre Barrett reports that people differ radically in the vividness, as well as frequency of fantasy, and that those who have the most elaborately developed fantasy life are often the people who make productive use of their imaginations in art, literature, or by being especially creative and innovative in more traditional professions.
For Freud, a fantasy is constructed around multiple, often repressed wishes, and employs disguise to mask and mark the very defensive processes by which desire is enacted. The subject's desire to maintain distance from the repressed wish and simultaneously experience it opens up a type of third person syntax allowing for multiple entry into the fantasy. Therefore, in fantasy, vision is multiplied—it becomes possible to see from more than one position at the same time, to see oneself and to see oneself seeing oneself, to divide vision and dislocate subjectivity. This radical omission of the "I" position creates space for all those processes that depend upon such a center, including not only identification but also the field and organization of vision itself.
For Freud, sexuality is linked from the very beginning to an object of fantasy. However, "the object to be rediscovered is not the lost object, but its substitute by displacement; the lost object is the object of self-preservation, of hunger, and the object one seeks to re-find in sexuality is an object displaced in relation to that first object." This initial scene of fantasy is created out of the frustrated infant's deflection away from the instinctual need for milk and nourishment towards a phantasmization of the mother's breast, which is in close proximity to the instinctual need. Now bodily pleasure is derived from the sucking of the mother's breast itself. The mouth that was the original source of nourishment is now the mouth that takes pleasure in its own sucking. This substitution of the breast for milk and the breast for a phantasmic scene represents a further level of mediation which is increasingly psychic. The child cannot experience the pleasure of milk without the psychic re-inscription of the scene in the mind. "The finding of an object is in fact a re-finding of it." It is in the movement and constant restaging away from the instinct that desire is constituted and mobilized.
A similarly positive view of fantasy was taken by Sigmund Freud, who considered fantasy a defence mechanism. He considered that men and women "cannot subsist on the scanty satisfaction which they can extort from reality. 'We simply cannot do without auxiliary constructions', as Theodor Fontane once said ... [without] dwelling on imaginary wish fulfillments." As childhood adaptation to the reality principle developed, so too "one species of thought activity was split off; it was kept free from reality-testing and remained subordinated to the pleasure principle alone. This activity is "fantasying" ... continued as "day-dreaming"." He compared such phantasising to the way a "nature reserve preserves its original state where everything ... including what is useless and even what is noxious, can grow and proliferate there as it pleases."
Daydreams for Freud were thus a valuable resource. "These day-dreams are cathected with a large amount of interest; they are carefully cherished by the subject and usually concealed with a great deal of sensitivity ... such phantasies may be unconscious just as well as conscious." He considered these fantasies to include a great deal of the true constitutional essence of a personality, and that the energetic man "is one who succeeds by his efforts in turning his wishful phantasies into reality," whereas the artist "can transform his phantasies into artistic creations instead of into symptoms ... the doom of neurosis."
Individuals diagnosed with schizophrenia may experience fantasies as part of the disorder (Shneidman, E. S. 1948). Scientific investigation into the activity of the brain's so-called default network (Randy Buckner et al. 2008) has shown that individuals diagnosed with schizophrenia have overactive levels of activity within this network.
In a study of eighty individuals diagnosed with schizophrenia, it was found that one quarter of the men who had committed a contact crime against women were motivated by sexually oriented fantasy (A.D. Smith 2008).
Melanie Klein extended Freud's concept of fantasy to cover the developing child's relationship to a world of internal objects. In her thought, this kind of play activity inside the person "is known as 'unconscious fantasy'. And these phantasies are often very violent and aggressive. They are different from ordinary day-dreams or 'fantasies'."
The term "fantasy" became a central issue with the development of the Kleinian group as a distinctive strand within the British Psycho-Analytical Society, and was at the heart of the so-called controversial discussions of the wartime years. "A paper by Susan Isaacs (1952) on 'The nature and function of Phantasy' ... has been generally accepted by the Klein group in London as a fundamental statement of their position." As a defining feature, "Kleinian psychoanalysts regard the unconscious as made up of phantasies of relations with objects. These are thought of as primary and innate, and as the mental representations of instincts ... the psychological equivalents in the mind of defence mechanisms."
Isaacs considered that "unconscious phantasies exert a continuous influence throughout life, both in normal and neurotic people, the difference lying in the specific character of the dominant phantasies." Most schools of psychoanalytic thought would now accept that both in analysis and life, we perceive reality through a veil of unconscious fantasy. Isaacs however claimed that "Freud's 'hallucinatory wish-fulfilment' and his 'introjection' and 'projection' are the basis of the fantasy life," and how far unconscious fantasy was a genuine development of Freud's ideas, how far it represented the formation of a new psychoanalytic paradigm, is perhaps the key question of the controversial discussions.
Lacan engaged from early on with "the phantasies revealed by Melanie Klein ... the 'imago' of the mother ... this shadow of the 'bad internal objects'" — with the Imaginary. Increasingly, however, it was Freud's idea of fantasy as a kind of "screen-memory, representing something of more importance with which it was in some way connected" that was for him of greater importance. Lacan came to believe that "the phantasy is never anything more than the screen that conceals something quite primary, something determinate in the function of repetition."
Phantasies thus both link to and block off the individual's unconscious, his kernel or real core: "subject and real are to be situated on either side of the split, in the resistance of the phantasy", which thus comes close to the centre of the individual's personality and its splits and conflicts. "The subject situates himself as determined by the phantasy ... whether in the dream or in any of the more or less well-developed forms of day-dreaming"; and as a rule "a subject's fantasies are close variations on a single theme ... the 'fundamental fantasy' ... minimizing the variations in meaning which might otherwise cause a problem for desire."
The goal of therapy thus became "la traversée du fantasme", the crossing over, traversal, or traversing of the fundamental fantasy. For Lacan, "The traversing of fantasy involves the subject's assumption of a new position with respect to the Other as language and the Other as desire ... a utopian moment beyond neurosis." The question he was left with was "What, then, does he who has passed through the experience ... who has traversed the radical phantasy ... become?"
The postmodern intersubjectivity of the 21st century has seen a new interest in fantasy as a form of interpersonal communication. Here, we are told, "We need to go beyond the pleasure principle, the reality principle, and repetition compulsion to ... the "fantasy principle" - not, as Freud did, reduce fantasies to wishes ... [but consider] all other imaginable emotions" and thus envisage emotional fantasies as a possible means of moving beyond stereotypes to more nuanced forms of personal and social relating.
Such a perspective "sees emotions as central to developing fantasies about each other that are not determined by collective 'typifications'."
Two characteristics of someone with narcissistic personality disorder are a grandiose sense of self-importance and a preoccupation with fantasies of unlimited success, power, brilliance, beauty, or ideal love. | https://en.wikipedia.org/wiki?curid=10810 |
Surnames by country
Surname conventions and laws vary around the world. This article gives an overview of surnames around the world.
In Argentina, normally only one family name, the father's paternal family name, is used and registered, as in English-speaking countries. However, it is possible to use both the paternal and maternal name. For example, if "Ana Laura Melachenko" and "Emanuel Darío Guerrero" had a daughter named "Adabel Anahí", her full name could be "Adabel Anahí Guerrero Melachenko". Women do not change their family names upon marriage and continue to use their birth family names instead of their husband's family names. However, women have traditionally followed, and some still choose to follow, the old Spanish custom of adjoining "de" and the husband's surname to their own name. For example, if Paula Segovia marries Felipe Cossia, she might keep her birth name or become "Paula Segovia de Cossia" or "Paula Cossia".
There are some province offices where a married woman can use only her birth name, and some others where she has to use the complete name for legal purposes. The Argentine Civil Code states that both uses are correct, but police records and passports are issued with the complete name. Today most women prefer to maintain their birth name, given that "de" can be interpreted as meaning they belong to their husbands.
When Eva Duarte married Juan Domingo Perón, she could be addressed as Eva Duarte de Perón, but the preferred style was Eva Perón, or the familiar and affectionate "Evita" (little Eva).
Combined names come from old traditional families and are considered one last name, but are rare. Although Argentina is a Spanish-speaking country, it is also composed of other varied European influences, such as Italian, French, Russian, German, etc.
Children typically use their fathers' last names only. Some state offices have started to use both last names, in the traditional father then mother order, to reduce the risk of a person being mistaken for others using the same name combinations, e.g. if Eva Duarte and Juan Perón had a child named Juan, he might be misidentified if he were called "Juan Perón", but not if he was known as Juan Perón Duarte.
As of early 2008, new legislation was under consideration that would place the mother's last name ahead of the father's last name, as is done in Portuguese-speaking countries and only optionally in Spain, despite Argentina being a Spanish-speaking country.
In Chile, marriage has no effect at all on either of the spouses' names, so people keep their birth names for all their life, no matter how many times marital status, theirs or that of their parents, may change. However, in some upper-class circles or in older couples, even though considered to be old-fashioned, it is still customary for a wife to use her husband's name as reference, as in "Doña María Inés de Ramírez" (literally Lady María Inés (wife) of Ramírez).
Children will always bear the surname of the father followed by that of the mother, but if there is no known father and the mother is single, the children can bear either both of the mother's surnames, or the mother's first surname followed by any of the surnames of the mother's parents or grandparents, or the mother's first surname twice in a row.
France
Belgium
Canada
There are about 1,000,000 different family names in German. German family names most often derive from given names, geographical names, occupational designations, bodily attributes or even traits of character. Hyphenations notwithstanding, they mostly consist of a single word; in those rare cases where the family name is linked to the given names by particles such as "von" or "zu", they usually indicate noble ancestry. Not all noble families used these names (see Riedesel), while some farm families, particularly in Westphalia, used the particle "von" or "zu" followed by their farm or former farm's name as a family name (see "Meyer zu Erpen").
Family names in German-speaking countries are usually positioned last, after all given names. There are exceptions, however: in parts of Austria and Bavaria and the Alemannic-speaking areas, the family name is regularly put in front of the first given name. Also in many – especially rural – parts of Germany, to emphasize family affiliation there is often an inversion in colloquial use, in which the family name becomes a possessive: "Rüters Erich", for example, would be Erich of the Rüter family.
In Germany today, upon marriage, both partners can choose to keep their birth name or choose either partner's name as the common name. In the latter case the partner whose name wasn't chosen can keep their birth name hyphenated to the new name (e.g. "Schmidt" and "Meyer" choose to marry under the name "Meyer"; the former "Schmidt" can choose to be called "Meyer", "Schmidt-Meyer" or "Meyer-Schmidt"), but any children will only get the single common name. In the case that both partners keep their birth name they must decide on one of the two family names for all their future children (see German name).
Changing one's family name for reasons other than marriage, divorce or adoption is possible only if the application is approved by the responsible government agency. Permission will usually be granted only when there is an important reason for the change; otherwise, name changes will normally not be granted.
The Netherlands and Belgium (Flanders)
In Scandinavia, family names often, but certainly not always, originate from a patronymic. In Denmark and Norway, the corresponding ending is "-sen", as in "Karlsen". Names ending with "dotter/datter" (daughter), such as "Olofsdotter", are rare but do occur, and apply only to women. Today, patronymic names are passed on similarly to family names in other Western countries, and a person's father does not have to be called Karl if he or she has the surname Karlsson. However, in 2006 Denmark reinstated patronymic and matronymic surnames as an option. Thus, parents Karl Larsen and Anna Hansen can name a son Karlssøn or Annasøn and a daughter Karlsdatter or Annasdatter.
Before the 19th century there was the same system in Scandinavia as in Iceland today. Noble families, however, as a rule adopted a family name, which could refer to a presumed or real forefather (e.g. Earl Birger Magnusson "Folkunge") or to the family's coat of arms (e.g. King Gustav Eriksson "Vasa"). In many surviving noble family names, such as "Silfversparre" ("silver chevron"; in modern spelling, "Silver-") or "Stiernhielm" ("star-helmet"; in modernized spelling, "stjärnhjälm"), the spelling is obsolete, but since it applies to a name, remains unchanged. (Some names from relatively modern times also use archaic or otherwise aberrant spelling as a stylistic trait; e.g. "-quist" instead of standard "-kvist" "twig" or "-grén" instead of standard "-gren", "branch".)
Later on, people from the Scandinavian middle classes, particularly artisans and town dwellers, adopted names in a similar fashion to that of the nobility. Family names joining two elements from nature such as the Swedish "Bergman" ("mountain man"), "Holmberg" ("island mountain"), "Lindgren" ("linden branch"), "Sandström" ("sand stream") and "Åkerlund" ("field meadow") were quite frequent and remain common today. The same is true for similar Norwegian and Danish names.
Another common practice was to adopt one's place of origin as a middle or surname.
Even more important a driver of change was the need, for administrative purposes, to develop a system under which each individual had a "stable" name from birth to death. In the old days, people would be known by their name, patronymic and the farm they lived at. This last element would change if a person got a new job, bought a new farm, or otherwise came to live somewhere else. (This is part of the origin, in this part of the world, of the custom of women changing their names upon marriage. Originally it indicated, basically, a change of address, and from older times, there are numerous examples of men doing the same thing.) The many patronymic names may derive from the fact that people who moved from the country to the cities also gave up the name of the farm they came from. As a worker, you went by your father's name, and this name passed on to the next generation as a family name. Einar Gerhardsen, the Norwegian prime minister, used a true patronym, as his father was named Gerhard Olsen (Gerhard, the son of Ola). Gerhardsen passed his own patronym on to his children as a family name. This has been common in many working-class families. The tradition of keeping the farm name as a family name got stronger during the first half of the 20th century in Norway.
These names often indicated the place of residence of the family. For this reason, Denmark and Norway have a very high incidence of last names derived from those of farms, many signified by suffixes like "-bø", "-rud", "-heim/-um", "-land" or "-set" (these being examples from Norway). In Denmark, the most common suffix is "-gaard" — the modern spelling is "gård" in Danish and can be either "gård" or "gard" in Norwegian, but as in Sweden, archaic spelling persists in surnames. The best-known example of this kind of surname is probably "Kierkegaard" (a combination of the words "kirke/kierke" (church) and "gaard" (farm), meaning "the farm located by the church"; it is, however, a common misunderstanding that the name relates to its direct translation, churchyard/cemetery), but many others could be cited. Since the names in question are derived from the original owners' domiciles, the possession of this kind of name is no longer an indicator of affinity with others who bear it.
In many cases, names were taken from the nature around them. In Norway, for instance, there is an abundance of surnames based on coastal geography, with suffixes like "-strand", "-øy", "-holm", "-vik", "-fjord" or "-nes". Like the names derived from farms, most of these family names reflected the family's place of residence at the time the family name was "fixed". A family name such as the Swedish "Dahlgren" is derived from "dahl" meaning valley and "gren" meaning branch; similarly, "Upvall" means "upper valley". The exact form depends on the Scandinavian country, language, and dialect.
Finland, including Karelia and Estonia, was the eastern part of the Kingdom of Sweden from its unification around 1100–1200 AD until 1809, when Finland was conquered by Russia. During the Russian Revolution of 1917, Finland proclaimed itself a republic, and Sweden and many other European countries rapidly recognized the new nation. Finland has mainly Finnish (increasing) and Swedish (decreasing) surnames and first names. There are two predominant surname traditions among the Finnish in Finland: the West Finnish and the East Finnish. The surname traditions of Swedish-speaking farmers, fishermen and craftsmen resemble the West Finnish tradition, while the smaller populations of Sami and Romani people have traditions of their own. Finland saw very little immigration from Russia, so Russian names are rare.
Until the mid-20th century, Finland was a predominantly agrarian society, and the names of West Finns were based on their association with a particular area, farm, or homestead, e.g. "Jaakko Jussila" ("Jaakko from the farm of Jussi"). On the other hand, the East Finnish surname tradition dates back to the 13th century. There, the Savonians pursued slash-and-burn agriculture, which necessitated moving several times during a person's lifetime. This in turn required the families to have surnames, which were in wide use among the common folk as early as the 13th century. By the mid-16th century, East Finnish surnames had become hereditary. Typically, the oldest East Finnish surnames were formed from the first names of the patriarchs of the families, e.g. "Ikävalko", "Termonen", "Pentikäinen". In the 16th, 17th, and 18th centuries, new names were most often formed by adding the name of the former or current place of living (e.g. "Puumalainen" < Puumala). In the East Finnish tradition, women carried the family name of their fathers in female form (e.g. "Puumalatar" < "Puumalainen"). By the 19th century, this practice fell into disuse due to the influence of the West European surname tradition.
In Western Finland, agrarian names dominated, and the last name of a person was usually given according to the farm or holding they lived on. In 1921, surnames became compulsory for all Finns. At this point, the agrarian names were usually adopted as surnames. A typical feature of such names is the addition of the prefixes "Ala-" (Sub-) or "Ylä-" (Up-), giving the location of the holding along a waterway in relation to the main holding (e.g. "Yli-Ojanperä", "Ala-Verronen"). The Swedish-speaking farmers along the coast of Österbotten usually used two surnames – one which pointed out the father's name (e.g. "Eriksson", "Andersson", "Johansson") and one which related to the farm or the land their family or bigger family owned or had some connection to (e.g. "Holm", "Fant", "Westergård", "Kloo"). So a full name could be "Johan Karlsson Kvist", and for his daughter "Elvira Johansdotter Kvist"; when she married a man of the Ahlskog farm, Elvira kept the first surname Johansdotter but changed the second surname to her husband's (e.g. "Elvira Johansdotter Ahlskog"). During the 20th century they started to drop the "-son" surname while keeping the second. So in Western Finland the Swedish-speaking had names like "Johan Varg", "Karl Viskas", "Sebastian Byskata" and "Elin Loo", while the Swedes in Sweden on the other side of the Baltic Sea kept surnames ending with "-son" (e.g. "Johan Eriksson", "Thor Andersson", "Anna-Karin Johansson").
A third tradition of surnames was introduced in south Finland by the Swedish-speaking upper and middle classes, which used typical German and Swedish surnames. By custom, all Finnish-speaking persons who were able to get a position of some status in urban or learned society, discarded their Finnish name, adopting a Swedish, German or (in the case of clergy) Latin surname. In the case of enlisted soldiers, the new name was given regardless of the wishes of the individual.
In the late 19th and early 20th century, the overall modernization process, and especially the political movement of fennicization, caused a movement for adoption of Finnish surnames. At that time, many persons with a Swedish or otherwise foreign surname changed their family name to a Finnish one. The features of nature with endings "-o/ö", "-nen" ("Meriö" < "Meri" "sea", "Nieminen" < "Niemi" "point") are typical of the names of this era, as well as more or less direct translations of Swedish names ("Paasivirta" < "Hällström").
In 21st-century Finland, the use of surnames follows the German model. Every person is legally obligated to have a first and last name. At most, three first names are allowed. A Finnish married couple may adopt the name of either spouse, or either spouse (or both spouses) may decide to use a double name. The parents may choose either surname or the double surname for their children, but all siblings must share the same surname. Every person has the right to change their surname once without any specific reason. A surname that is un-Finnish, contrary to the usages of the Swedish or Finnish languages, or in use by any person residing in Finland cannot be accepted as the new name, unless valid family reasons or religious or national customs give a reason for waiving this requirement. However, persons may change their surname to any surname that has ever been used by their ancestors if they can prove such a claim. Some immigrants have had difficulty naming their children, as they must choose from an approved list based on the family's household language.
In the Finnish language, both the root of the surname and the first name are regularly modified by consonant gradation when inflected for case.
In Iceland, most people have no family name; a person's last name is most commonly a patronymic, i.e. derived from the father's first name. For example, when a man called "Karl" has a daughter called "Anna" and a son called "Magnús", their full names will typically be "Anna Karlsdóttir" ("Karl's daughter") and "Magnús Karlsson" ("Karl's son"). The name is not changed upon marriage.
Slavic countries are noted for having masculine and feminine versions for many (but not all) of their names. In most countries the use of a feminine form is obligatory in official documents as well as in other communication, except for foreigners. In some countries only the male form figures in official use (Bosnia and Herzegovina, Croatia, Montenegro, Serbia, Slovenia), but in communication (speech, print) a feminine form is often used.
If the name has no suffix, it may or may not have a feminine version. Sometimes it has the ending changed (such as the addition of "-a"). In the Czech Republic and Slovakia, suffixless names, such as those of German origin, are feminized by adding "-ová" (for example, "Schusterová").
Bulgarian names usually consist of three components – given name, patronymic (based on father's name), family name.
Given names have many variations, but the most common names have Christian/Greek (e.g. Maria, Ivan, Christo, Peter, Pavel), Slavic (Ognyan, Miroslav, Tihomir) or Protobulgarian (Krum, Asparukh) (pre-Christian) origin.
Father's names normally consist of the father's first name and the "-ov" (male) or "-ova" (female) or "-ovi" (plural) suffix.
Family names usually also end with the "-ov", "-ev" (male) or "-ova", "-eva" (female) or "-ovi", "-evi" (plural) suffix.
In many cases (depending on the name root) the suffixes can be also "-ski" (male and plural) or "-ska" (female); "-ovski", "-evski" (male and plural) or "-ovska", "-evska" (female); "-in" (male) or "-ina" (female) or "-ini" (plural); etc.
The meaning of the suffixes is similar to the English word "of", expressing membership in/belonging to a family.
For example, the family name Ivanova means a person belonging to the Ivanovi family.
A father's name such as Petrov means son of Peter.
Regarding the different meaning of the suffixes, "-ov", "-ev"/"-ova", "-eva" are used for expressing relationship to the father and "-in"/"-ina" for relationship to the mother (often for orphans whose father is dead).
Names of Czech people consist of given name ("křestní jméno") and surname ("příjmení"). Usage of the second or middle name is not common. Feminine names are usually derived from masculine ones by a suffix "-ová" ("Nováková") or "-á" for names being originally adjectives ("Veselá"), sometimes with a little change of original name's ending ("Sedláčková" from "Sedláček" or "Svobodová" from "Svoboda"). Women usually change their family names when they get married. The family names are usually nouns ("Svoboda", "Král", "Růžička", "Dvořák", "Beneš"), adjectives ("Novotný", "Černý", "Veselý") or past participles of verbs ("Pospíšil"). There are also a couple of names with more complicated origin which are actually complete sentences ("Skočdopole", "Hrejsemnou" or "Vítámvás"). The most common Czech family name is "Novák" / "Nováková".
In addition, many Czechs and some Slovaks have German surnames due to mixing between the ethnic groups over the past thousand years. Deriving women's names from German and other foreign names is often problematic since foreign names do not suit Czech language rules, although most commonly "-ová" is simply added ("Schmidtová"; umlauts are often, but not always, dropped, e.g. "Müllerová"), or the German name is respelled with Czech spelling ("Šmitová"). Hungarian names, which can be found fairly commonly among Slovaks, can also be either left unchanged (Hungarian "Nagy", fem. "Nagyová") or respelled according to Czech/Slovak orthography (masc. "Naď", fem. "Naďová").
In Poland and most of the former Polish–Lithuanian Commonwealth, surnames first appeared during the late Middle Ages. They initially denoted the differences between various people living in the same town or village and bearing the same name. The conventions were similar to those of English surnames, using occupations, patronymic descent, geographic origins, or personal characteristics. Thus, early surnames indicating occupation include "Karczmarz" ("innkeeper"), "Kowal" ("blacksmith"), "Złotnik" ("goldsmith") and "Bednarczyk" ("young cooper"), while those indicating patronymic descent include "Szczepaniak" ("son of Szczepan"), "Józefowicz" ("son of Józef"), and "Kaźmirkiewicz" ("son of Kazimierz"). Similarly, early surnames like "Mazur" ("the one from Mazury") indicated geographic origin, while ones like "Nowak" ("the new one"), "Biały" ("the pale one"), and "Wielgus" ("the big one") indicated personal characteristics.
In the early 16th century (the Polish Renaissance), toponymic names became common, especially among the nobility. Initially, the surnames were in the form of "[first name] "z" ("de", "of") [location]". Later, most surnames were changed to adjective forms, e.g. "Jakub Wiślicki" ("James of Wiślica") and "Zbigniew Oleśnicki" ("Zbigniew of Oleśnica"), with masculine suffixes "-ski", "-cki", "-dzki" and "-icz", or respective feminine suffixes "-ska", "-cka", "-dzka" and "-icz", in the east of the Polish–Lithuanian Commonwealth. Names formed this way are adjectives grammatically, and therefore change their form depending on sex; for example, "Jan Kowalski" and "Maria Kowalska" collectively use the plural "Kowalscy".
Names with masculine suffixes "-ski", "-cki", and "-dzki", and corresponding feminine suffixes "-ska", "-cka", and "-dzka" became associated with noble origin. Many people from lower classes successively changed their surnames to fit this pattern. This produced many "Kowalski"s, "Bednarski"s, "Kaczmarski"s and so on.
A separate class of surnames derive from the names of noble clans. These are used either as separate names or the first part of a double-barrelled name. Thus, persons named "Jan Nieczuja" and "Krzysztof Nieczuja-Machocki" might be related. Similarly, after World War I and World War II, many members of Polish underground organizations adopted their war-time pseudonyms as the first part of their surnames. "Edward Rydz" thus became Marshal of Poland "Edward Śmigły-Rydz" and "Zdzisław Jeziorański" became "Jan Nowak-Jeziorański".
A full Russian name consists of personal (given) name, patronymic, and family name (surname).
Most Russian family names originated from patronymics, that is, from the father's name, usually formed by adding the adjective suffix "-ov(a)" or "-ev(a)". Contemporary patronymics, however, have a substantive suffix "-ich" for masculine and the adjective suffix "-na" for feminine.
For example, the proverbial triad of the most common Russian surnames is Ivanov, Petrov, Sidorov.
Feminine forms of these surnames have the ending "-a": Ivanova, Petrova, Sidorova.
Such a pattern of name formation is not unique to Russia or even to the Eastern and Southern Slavs in general; quite common are also names derived from professions, places of origin, and personal characteristics, with various suffixes (e.g. "-in(a)" and "-sky (-skaya)").
Professions: e.g. "Kuznetsov" (from "kuznets", smith) or "Rybakov" (from "rybak", fisherman).
Places of origin: e.g. "Moskvin" (from Moscow).
Personal characteristics: e.g. "Tolstoy" (from "tolsty", fat).
A considerable number of "artificial" names exists, for example, those given to seminary graduates; such names were based on Great Feasts of the Orthodox Church or Christian virtues.
Great Orthodox Feasts: e.g. "Rozhdestvensky" (from the Nativity), "Uspensky" (from the Dormition) or "Troitsky" (from the Trinity).
Christian virtues: e.g. "Lyubomudrov" ("love of wisdom") or "Blagonravov" ("good morals").
Many freed serfs were given surnames after those of their former owners. For example, a serf of the Demidov family might be named "Demidovsky", which translates roughly as "belonging to Demidov" or "one of Demidov's bunch".
Grammatically, Russian family names follow the same rules as other nouns or adjectives (names ending with "-oy", "-aya" are grammatically adjectives), with exceptions: some names do not change in different cases and have the same form in both genders (for example, "Sedykh", "Lata").
Surnames of some South Slavic groups such as Serbs, Croats, Montenegrins, and Bosniaks traditionally end with the suffixes "-ić" and "-vić" (often transliterated to English and other western languages as "ic", "ich", "vic" or "vich"; the "v" is added when the name to which "-ić" is appended would otherwise end with a vowel, to avoid double vowels with the "i" in "-ić"). These are a diminutive indicating descent, i.e. "son of". In some cases the family name was derived from a profession (e.g. blacksmith – "Kovač" → "Kovačević").
An analogous ending is also common in Slovenia. As the Slovenian language does not have the softer consonant "ć", in Slovene words and names only "č" is used. So that people from the former Yugoslavia need not change their names, in official documents "ć" is also allowed (as well as "Đ / đ"). Thus, one may have two surname variants, e.g.: Božič, Tomšič (Slovenian origin or assimilated) and Božić, Tomšić (roots from the Serbo-Croat language continuum area). Slovene names ending in -ič do not necessarily have a patrimonial origin.
In general family names in all of these countries follow this pattern with some family names being typically Serbian, some typically Croat and yet others being common throughout the whole linguistic region.
Children usually inherit their fathers' family name. In an older naming convention which was common in Serbia up until the mid-19th century, a person's name would consist of three distinct parts: the person's given name, the patronymic derived from the father's personal name, and the family name, as seen, for example, in the name of the language reformer Vuk Stefanović Karadžić.
Official family names do not have distinct male or female forms, except in North Macedonia, though a somewhat archaic unofficial practice of adding suffixes to family names to form a female version persists, with "-eva" implying "daughter of" or "female descendant of", and "-ka" implying "wife of" or "married to". In Slovenia the feminine form of a surname ("-eva" or "-ova") is regularly used in non-official communication (speech, print), but not for official IDs or other legal documents.
Bosniak Muslim names follow the same formation pattern but are usually derived from proper names of Islamic origin, often combining archaic Islamic or feudal Turkish titles i.e. Mulaomerović, Šabanović, Hadžihafizbegović, etc. Also related to Islamic influence is the prefix "Hadži-" found in some family names. Regardless of religion, this prefix was derived from the honorary title which a distinguished ancestor earned by making a pilgrimage to either Christian or Islamic holy places; Hadžibegić, being a Bosniak Muslim example, and Hadžiantić an Orthodox Christian one.
In Croatia, where tribal affiliations persisted longer (in Lika, Herzegovina, etc.), what was originally a family name came to signify practically all people living in one area, on clan land, or on a holding of the nobles. The Šubić family owned land around the Zrin River in the Central Croatian region of Banovina. The surname became Šubić Zrinski, the most famous bearer being Nikola Šubić Zrinski.
In Montenegro and Herzegovina, family names came to signify all people living within one clan or bratstvo. As there exists a strong tradition of inheriting personal names from grandparents to grandchildren, an additional patronymic usually using suffix "-ov" had to be introduced to make distinctions between two persons bearing the same personal name and the same family name and living within same area. A noted example is Marko Miljanov Popović, i.e. Marko, son of Miljan, from Popović family.
Due to discriminatory laws in the Austro-Hungarian Empire, some Serb families of Vojvodina discarded the suffix "-ić" in an attempt to mask their ethnicity and avoid heavy taxation.
The prefix "Pop-" in Serbian names indicates descent from a priest, for example Gordana Pop Lazić, i.e. descendant of Pop Laza.
Some Serbian family names include prefixes of Turkish origin, such as "Uzun-" meaning tall, or "Kara-", black. Such names were derived from nicknames of family ancestors. A famous example is Karađorđević, descendants of Đorđe Petrović, known as Karađorđe or Black Đorđe.
Among the Bulgarians, another South Slavic people, the typical surname suffix is "-ov" (Ivanov, Kovachev), although other popular suffixes also exist.
In North Macedonia, the most popular suffix today is "-ski".
Slovenes have a great variety of surnames, most of them differentiated according to region. Surnames ending in -ič are by far less frequent than among Croats and Serbs. There are typically Slovenian surnames ending in -ič, such as Blažič, Stanič, Marušič. Many Slovenian surnames, especially in the Slovenian Littoral, end in -čič (Gregorčič, Kocijančič, Miklavčič, etc.), which is uncommon for other South Slavic peoples (except the neighboring Croats, e.g. Kovačić, Jelačić, Kranjčić, etc.). On the other hand, surname endings in -ski and -ov are rare; they can denote a noble origin (especially for -ski, if it completes a toponym) or a foreign (mostly Czech) origin. One of the most typical Slovene surname endings is -nik (Rupnik, Pučnik, Plečnik, Pogačnik, Podobnik), and other common surname endings are -lin (Pavlin, Mehlin, Ahlin, Ferlin), -ar (Mlakar, Ravnikar, Smrekar, Tisnikar) and -lj (Rugelj, Pucelj, Bagatelj, Bricelj).
Many Slovenian surnames are linked to medieval rural settlement patterns. Surnames like Novak (literally, "the new one") or Hribar (from "hrib", hill) were given to peasants settled in newly established farms, usually in the high mountains. Peasant families were also named according to the owner of the land which they cultivated: thus, the surname Kralj (King) or Cesar (Emperor) was given to those working on royal estates, Škof (Bishop) or Vidmar to those working on ecclesiastical lands, etc.
Many Slovenian surnames are named after animals (Medved – bear, Volk, Vovk or Vouk – wolf, Golob – pigeon, Strnad – yellowhammer, Orel – eagle, Lisjak – fox, or Zajec – rabbit, etc.) or plants (Pšenica – wheat, Slak – bindweed, Hrast – oak, etc.). Many are named after neighbouring peoples: Horvat, Hrovat, or Hrovatin (Croat), Furlan (Friulian), Nemec (German), Lah (Italian), Vogrin, Vogrič or Vogrinčič (Hungarian), Vošnjak (Bosnian), Čeh (Czech), Turk (Turk), or after different Slovene regions: Kranjc, Kranjec or Krajnc (from Carniola), Kraševec (from the Kras), Korošec (from Carinthia), Kočevar or Hočevar (from the Gottschee county).
In Slovenia the last name of a female is the same as the male form in official use (identification documents, letters). In speech and descriptive writing (literature, newspapers) a female form of the last name is regularly used. Examples: Novak (m.) & Novakova (f.), Kralj (m.) & Kraljeva (f.), Mali (m.) & Malijeva (f.). Usually surnames in -ova are used together with the title or gender marker: gospa Novakova (Mrs. Novakova), gospa Kraljeva (Mrs. Kraljeva), gospodična Malijeva (Miss Malijeva, if unmarried), etc., or with the name. So one finds Maja Novak on the ID card and Novakova Maja (extremely rarely Maja Novakova) in communication; Tjaša Mali and Malijeva Tjaša (rarely Tjaša Malijeva), respectively. Diminutive forms of last names for females are also available: Novakovka, Kraljevka. As for pronunciation, in Slovenian there is some leeway regarding accentuation. Depending on the region or local usage, one may have either Nóvak & Nóvakova or, more frequently, Novák & Novákova. Accent marks are normally not used.
Ukrainian and Belarusian names evolved from the same Old East Slavic and Ruthenian language (western Rus') origins. Ukrainian and Belarusian names share many characteristics with family names from other Slavic cultures. Most prominent are the shared root words and suffixes. For example, the root "koval" (blacksmith) compares to the Polish "kowal", and the root "bab" (woman) is shared with Polish, Slovakian, and Czech. The suffix "-vych" (son of) corresponds to the South Slavic "-vic", the Russian "-vich", and the Polish "-wicz", while "-sky", "-ski", and "-ska" are shared with both Polish and Russian, and "-ak" with Polish.
However some suffixes are more uniquely characteristic to Ukrainian and Belarusian names, especially: "-chuk" (Western Ukraine), "-enko" (all other Ukraine) (both son of), "-ko" (little [masculine]), "-ka" (little [feminine]), "-shyn", and "-uk". See, for example, Mihalko, Ukrainian Presidents Leonid Kravchuk, and Viktor Yushchenko, Belarusian President Alexander Lukashenko, or former Soviet diplomat Andrei Gromyko. Such Ukrainian and Belarusian names can also be found in Russia, Poland, or even other Slavic countries (e.g. Croatian general Zvonimir Červenko), but are due to importation by Ukrainian, Belarusian, or Rusyn ancestors.
The given name is always followed by the father's first name, then the father's family surname.
Some surnames have the prefix "ibn-", meaning "son of" ("ould-" in Mauritania).
The surnames follow similar rules defining a relation to a clan, family, place etc.
Some Arab countries have differences due to historic rule by the Ottoman Empire or due to being a different minority.
A large number of Arabic last names start with "Al-", which means "The".
Arab States of the Persian Gulf.
Names mainly consist of the person's name followed by the father's first name connected by the word "ibn" or "bin" (meaning "son of"). The last name either refers to the name of the tribe the person belongs to, or to the region, city, or town he/she originates from. In exceptional cases, mainly for members of the royal families or ancient tribes, the title (usually H.M./H.E., Prince, or Sheikh) is included at the beginning as a prefix, and the first name can be followed by the names of the father, the grandfather, and the great-grandfather, as a representation of the purity of blood and to show the pride one has in one's ancestry.
In Arabic-speaking Levantine countries (Jordan, Lebanon, Palestine, Syria) it's common to have family names associated with a certain profession or craft, such as "Al-Haddad"/"Haddad" which means "Blacksmith" or "Al-Najjar"/"Najjar" which means "Carpenter".
In India, surnames are placed as last names or before first names, and often denote village of origin, caste, clan, office of authority held by ancestors, or trades of ancestors.
The largest variety of surnames is found in the states of Maharashtra and Goa, which together number more than the rest of India combined. Here surnames are placed last, the order being: the given name, followed by the father's name, followed by the family name. The majority of surnames are derived from the place where the family lived, with the 'ker' (Marathi) or 'kar' (Konkani) suffix, for example, Mumbaiker, Puneker, Aurangabadker or Tendulkar, Parrikar, Mangeshkar, Mahendrakar. Another common variety found in Maharashtra and Goa is names ending in 'e'. These are usually more archaic than the 'kar' names and usually denote medieval clans or professions like Rane, Salunkhe, Gupte, Bhonsle, Ranadive, Rahane, Hazare, Apte, Satpute, Shinde, Sathe, Londhe, Salve, Kale, Gore, Godbole, etc.
In Andhra Pradesh and Telangana, surnames usually denote family names. It is easy to track family history and the caste they belonged to using a surname.
In Odisha and West Bengal, surnames denote the caste to which the bearer belongs. There are also several local surnames like Das, Patnaik, Mohanty, Jena, etc.
It is common in Kerala, Tamil Nadu, and some other parts of South India for the spouse to adopt her husband's first name instead of his family name or surname after marriage.
India is a country with numerous distinct cultural and linguistic groups. Thus, Indian surnames, where formalized, fall into seven general types.
Surnames are based on factors such as place of origin, caste, clan, ancestral office, or ancestral trade.
The convention is to write the first name followed by middle names and surname. It is common to use the father's first name as the middle name or last name even though it is not universal. In some Indian states like Maharashtra, official documents list the family name first, followed by a comma and the given names.
Traditionally, wives take the surname of their husband after marriage. In modern times, in urban areas at least, this practice is not universal and some wives either suffix their husband's surname or do not alter their surnames at all. In some rural areas, particularly in North India, wives may also take a new first name after their nuptials. Children inherit their surnames from their father.
Jains generally use Jain, Shah, Firodia, Singhal or Gupta as their last names.
Sikhs generally use the words "Singh" ("lion") and "Kaur" ("princess") as surnames added to the otherwise unisex first names of men and women, respectively. It is also common to use a different surname after Singh in which case Singh or Kaur are used as middle names (Montek Singh Ahluwalia, Surinder Kaur Badal). The tenth Guru of Sikhism ordered (Hukamnama) that any man who considered himself a Sikh must use "Singh" in his name and any woman who considered herself a Sikh must use "Kaur" in her name. Other middle names or honorifics that are sometimes used as surnames include Kumar, Dev, Lal, and Chand.
The modern-day spellings of names originated when families translated their surnames to English, with no standardization across the country. Variations are regional, based on how the name was translated from the local language to English in the 18th, 19th or 20th centuries during British rule. Therefore, it is understood in the local traditions that Baranwal and Barnwal represent the same name, derived from Uttar Pradesh and Punjab respectively. Similarly, Tagore derives from Bengal while Thakur is from Hindi-speaking areas. The officially recorded spellings tended to become the standard for that family. In modern times, some states have attempted standardization, particularly where the surnames were corrupted because of the early British insistence on shortening them for convenience. Thus Bandopadhyay became Banerji, Mukhopadhyay became Mukherji, Chattopadhyay became Chatterji, etc. This, coupled with various other spelling variations, created several surnames based on the original surnames. The West Bengal Government now insists on re-converting all the variations to their original form when the child is enrolled in school.
Some parts of Sri Lanka, Thailand, Nepal, Myanmar, and Indonesia have similar patronymic customs to those of India.
Nepali surnames are divided into three origins: Indo-Aryan languages, Tibeto-Burman languages and indigenous origins. Surnames of the Khas community contain toponyms such as Ghimire, Dahal, Pokharel and Sapkota, from their respective villages, and occupational names such as Adhikari, Bhandari, Karki and Thapa. Many Khas surnames include the suffixes -wal or -al, as in Katwal, Silwal, Dulal, Khanal, Khulal and Rijal. Kshatriya titles such as Bista, Kunwar, Rana, Rawat, Rawal, Dhami, Shah, Thakuri and Chand were taken as surnames by various Kshetri and Thakuris. Khatri Kshetris share surnames with mainstream Pahari Bahuns. Other popular Chhetri surnames include Basnyat, Bogati, Budhathoki, Khadka, Khandayat, Mahat and Raut. Similarly, Brahmin surnames such as Acharya, Bhatta, Joshi, Pandit, Sharma and Upadhyay were taken by Pahari Bahuns. Jaisi Bahuns bear distinct surnames such as Kattel, Banstola, Jaisi and Padhya, and share surnames with mainstream Bahuns. Other Bahun surnames include Aryal, Bhattarai, Banskota, Chaulagain, Devkota, Dhakal, Gyawali, Koirala, Mainali, Pandey, Panta, Laudari Pandey, Paudel, Regmi, Subedi, Tiwari, Upreti, Lamsal and Dhungel.
Many Indian immigrants into the Pahari zone were assimilated under the Khas peoples, and they carried ancestral clan names such as Marhatta, Rathaur and Chauhan. Khas-Dalit surnames include Kami, Bishwakarma or B.K., Damai, Mijar, Dewal, Pariyar, Ranapaheli and Sarki. Newar groups of multiethnic background bear both Indo-Aryan surnames (like Shrestha, Joshi, Pradhan) and indigenous surnames like Maharjan and Dangol. Magars bear surnames derived from Khas peoples, such as Baral, Budhathoki, Lamichhane and Thapa, and surnames of indigenous origin such as Dura, Gharti, Pun and Pulami. Other Himalayan ethnic groups bear Tibeto-Burman surnames like Gurung, Tamang, Thakali and Sherpa.
Various Kiranti ethnic groups bear many Indo-Aryan surnames of Khas origin which were awarded by the government of the Khas peoples. These surnames are Rai, Subba, Jimmi and Dewan, depending upon the job and position held by them. The Terai community includes surnames of both Indo-Aryan and indigenous origin. Terai Brahmins bear surnames such as Jha, Mishra, Pandit and Tiwari. Terai Rajput and other Kshatriya groups bear the surnames Chauhan, Singh, Rajput, Verma and Pal. Marwari surnames like Agrawal, Baranwal, Jain, Khandelwal, Maheshwari and Tapadia are also common. Nepalese Muslims bear Islamic surnames such as Ali, Ansari, Begum, Khan, Mohammad and Pathan. Other common Terai surnames are Yadav, Mahato, Kamat, Thakur, Dev, Chaudhary and Kayastha.
Pakistani surnames are basically divided into three categories: Arab naming convention, tribal or caste names, and ancestral names.
Family names indicating Arab ancestry, e.g. Shaikh, Siddiqui, Abbasi, Syed, Zaidi, Khawaja, Naqvi, Farooqi, Osmani, Alavi, Hassani, and Husseini.
People claiming Afghan ancestry include those with family names such as ځاځي (dzādzi), Durrani, Gardezi, Suri, Yousafzai, Afridi, Mullagori, Mohmand, Khattak, Wazir, Mehsud, and Niazi.
Family names indicating Turkish heritage include Mughal, Cheema, Baig or Beg, Pasha, Barlas, and Seljuki.
People claiming Indian ancestry include those with family names Barelwi, Lakhnavi, Delhvi, Godharvi, Bilgrami, and Rajput.
People claiming Iranian ancestry include those with family names Agha, Bukhari, Firdausi, Ghazali, Gilani, Hamadani, Isfahani, Kashani, Kermani, Khorasani, Farooqui, Mir, Mirza, Montazeri, Nishapuri, Noorani, Kayani, Qizilbash, Saadi, Sabzvari, Shirazi, Sistani, Suhrawardi, Yazdani, Zahedi, and Zand.
Tribal names include Abro, Afaqi, Afridi, Khogyani (Khakwani), Amini, Ansari, Ashrafkhel, Awan, Bajwa, Baloch, Barakzai, Baranzai, Bhatti, Bhutto, Ranjha, Bijarani, Bizenjo, Brohi, Khetran, Bugti, Butt, Farooqui, Gabol, Ghaznavi, Ghilzai, Gichki, Gujjar, Jamali, Jamote, Janjua, Jatoi, Jutt, Joyo, Junejo, Karmazkhel, Kayani, Khar, Khattak, Khuhro, Lakhani, Leghari, Lodhi, Magsi, Malik, Mandokhel, Mayo, Marwat, Mengal, Mughal, Palijo, Paracha, Panhwar, Phul, Popalzai, Qureshi, Qusmani, Rabbani, Raisani, Rakhshani, Sahi, Swati, Soomro, Sulaimankhel, Talpur, Talwar, Thebo, Yousafzai, and Zamani.
Family names indicating Turkish or Kurdish ancestry include Dogar.
In Pakistan, the official paperwork format regarding personal identity is as follows:
So and so, son of so and so, of such and such tribe or clan and religion, and resident of such and such place. For example: Amir Khan, son of Fakeer Khan, of the tribe Mughal Kayani or Chauhan Rajput, follower of the religion Islam, resident of Village Anywhere, Tehsil Anywhere, District Anywhere.
A large number of Muslim Rajputs have retained their surnames such as Chauhan, Rathore, Parmar, and Janjua.
In modern Chinese, Japanese, Korean, and Vietnamese, the family name is placed before the given names, although this order may not be observed in translation. Generally speaking, Chinese, Korean, and Vietnamese names do not alter their order in English (Mao Zedong, Kim Jong-il, Ho Chi Minh) and Japanese names do (Kenzaburō Ōe). However, numerous exceptions exist, particularly for people born in English-speaking countries such as Yo-Yo Ma. This is sometimes systematized: in all Olympic events, the athletes of the People's Republic of China list their names in the Chinese ordering, while Chinese athletes representing other countries, such as the United States, use the Western ordering. (In Vietnam, the system is further complicated by the cultural tradition of addressing people by their given name, usually with an honorific. For example, Phan Văn Khải is "properly" addressed as Mr. Khải, even though Phan is his family name.)
Chinese family names have many types of origins, some claiming dates as early as the legendary Yellow Emperor (2nd millennium BC).
In history, some changed their surnames due to a naming taboo (from Zhuang 莊 to Yan 嚴 during the era of Liu Zhuang 劉莊) or as an award by the Emperor (Li was often given to senior officers during the Tang dynasty).
In modern times, some Chinese adopt an English name in addition to their native given names: e.g., Lee Chu-ming adopted the English name Martin Lee. Particularly in Hong Kong and Singapore, the convention is to write both names together: Martin Lee Chu-ming. Owing to the confusion this can cause, a further convention is sometimes observed of capitalizing the surname: Martin LEE Chu-ming. Sometimes, however, the Chinese given name is forced into the Western system as a middle name ("Martin Chu-ming Lee"); less often, the English given name is forced into the Chinese system ("Lee Chu-ming Martin").
In Japan, the civil law requires a common surname for every married couple, except in cases of international marriage. In most cases, women surrender their surnames upon marriage and use the surnames of their husbands. However, a convention that a man uses his wife's family name if the wife is an only child is sometimes observed. A similar tradition called "ru zhui" (入贅) is common among the Chinese when the bride's family is wealthy and has no son but wants the heir to pass on their assets under the same family name. The Chinese character "zhui" (贅) carries a money radical (貝), which implies that this tradition was originally based on financial reasons. All their offspring carry the mother's family name. If the groom is the first born with an obligation to carry his own ancestor's name, a compromise may be reached in that the first male child carries the mother's family name while subsequent offspring carry the father's family name. The tradition is still in use in many Chinese communities outside mainland China, but largely disused in China because of social changes brought by communism. Due to the economic reform in the past decade, accumulation and inheritance of personal wealth have made a comeback in Chinese society. It is unknown whether this financially motivated tradition will also return to mainland China.
In Chinese, Korean, and Singaporean cultures, women keep their own surnames, while the family as a whole is referred to by the surnames of the husbands.
In Hong Kong, some women would be known to the public with the surnames of their husbands preceding their own surnames, such as Anson Chan Fang On Sang. Anson is an English given name, On Sang is the given name in Chinese, Chan is the surname of Anson's husband, and Fang is her own surname. A name change on legal documents is not necessary. In Hong Kong's English publications, her family names would have been presented in small cap letters to resolve ambiguity, e.g. Anson CHAN FANG On Sang in full or simply Anson Chan in short form.
In Macau, some people have their names spelt in Portuguese style, such as "Carlos do Rosario Tchiang".
Chinese women in Canada, especially Hongkongers in Toronto, often preserve their maiden names before the surnames of their husbands when written in English, for instance, Rosa Chan Leung, where Chan is the maiden name and Leung is the surname of the husband.
In Chinese, Korean, and Vietnamese, surnames are predominantly monosyllabic (written with one character), though a small number of common disyllabic (or written with two characters) surnames exists (e.g. the Chinese name "Ouyang", the Korean name "Jegal" and the Vietnamese name "Phan-Tran").
Many Chinese, Korean, and Vietnamese surnames are of the same origin, but are simply pronounced differently and even transliterated differently overseas in Western nations. For example, the common Chinese surnames Chen, Chan, Chin, Cheng and Tan, the Korean surname Jin, and the Vietnamese surname Trần are often all the very same character 陳. The common Korean surname Kim is also the common Chinese surname Jin, written 金. The common Mandarin surnames Lin and Lim (林) are also one and the same as the common Cantonese or Vietnamese surname "Lam" and the Korean family name Lim (written/pronounced as Im in South Korea). There are people with the surname of Hayashi (林) in Japan too. The common Chinese surname 李, translated to English as Lee, is the same character transliterated as Li according to pinyin convention. Lee is also a common surname of Koreans, and the character is identical.
40% of all Vietnamese have the surname Nguyen. This may be because when a new dynasty took power in Vietnam, it was customary to adopt that dynasty's surname. The last dynasty in Vietnam was the Nguyen dynasty, so as a result, many people have this surname.
In several Northeast Bantu languages such as Kamba, Taita and Kikuyu in Kenya the word "wa" (meaning "of") is inserted before the surname, for instance, Mugo wa Kibiru (Kikuyu) and Mekatilili wa Menza (Mijikenda).
In Burundi and Rwanda, most, if not all, surnames have God in them, for example, Hakizimana (meaning God cures), Nshimirimana (I thank God) or Havyarimana/Habyarimana (God gives birth). But not all surnames end with the suffix -imana. Irakoze is one of these (technically meaning "Thank God", though it is hard to translate correctly into English or probably any other language). Surnames are often different among immediate family members, as parents frequently choose unique surnames for each child, and women keep their maiden names when married. Surnames are placed before given names and frequently written in capital letters, e.g. HAKIZIMANA Jacques.
The patronymic custom in most of the Horn of Africa gives children the father's first name as their surname. The family then gives the child its first name. Middle names are unknown. So, for example, a person's name might be "Bereket Mekonen". In this case, "Bereket" is the first name and "Mekonen" is the surname, and also the first name of the father.
The paternal grandfather's name is often used if there is a requirement to identify a person further, for example, in school registration. Also, different cultures and tribes use the father's or grandfather's given name as the family's name. For example, some Oromos use Warra Ali to mean families of Ali, where Ali is either the householder, a father or a grandfather.
In Ethiopia, the customs surrounding the bestowal and use of family names are as varied and complex as the cultures to be found there. There are so many cultures, nations and tribes that currently no single formula can demonstrate a clear pattern of Ethiopian family names. In general, however, Ethiopians use their father's name as a surname in most instances where identification is necessary, sometimes employing both father's and grandfather's names together where exigency dictates.
Many people in Eritrea have Italian surnames, but all of these are owned by Eritreans of Italian descent.
A full Albanian name consists of a given name, a patronymic and a family name, for example "Agron Mark Gjoni". The patronymic is simply the given name of the individual's father, with no suffix added. The family name is typically a noun in the definite form or at the very least ends with a vowel or -j (an approximant close to -i). Many traditional last names end with -aj (previously -anj), which is more prevalent in certain regions of Albania and Kosovo. For clarification, the "family name" is typically the father's father's name (grandfather).
Proper names in Albanian are fully declinable like any noun (e.g. "Marinelda", genitive case "i/e Marineldës" "of Marinelda").
Armenian surnames almost always have an ending transliterated into English as -yan or -ian (spelled -ean (եան) in Western Armenian and pre-Soviet Eastern Armenian, of Ancient Armenian or Iranian origin, presumably meaning "son of"), though names with that ending can also be found among Persians and a few other nationalities. Armenian surnames can derive from a geographic location, profession, noble rank, personal characteristic or personal name of an ancestor. Armenians in the diaspora sometimes adapt their surnames to help assimilation. In Russia, many have changed -yan to -ov (or -ova for women). In Turkey, many have changed the ending to -oğlu (also meaning "son of"). In English and French-speaking countries, many have shortened their name by removing the ending (for example Charles Aznavour). In ancient Armenia, many noble names ended with the locative -t'si (example, Khorenatsi) or -uni (Bagratuni). Several modern Armenian names also have a Turkish suffix which appears before -ian/-yan: -lian denotes a placename; -djian denotes a profession. Some Western Armenian names have a particle Der, while their Eastern counterparts have Ter. This particle indicates an ancestor who was a priest (Armenian priests can choose to marry or remain celibate, but married priests cannot become a bishop). Thus someone named Der Bedrosian (Western) or Ter Petrosian (Eastern) is a descendant of an Armenian priest. The convention is still in use today: the children of a priest named Hagop Sarkisian would be called Der Sarkisian. Other examples of Armenian surnames: Adonts, Sakunts, Vardanyants, Rshtuni.
Traditional Azeri surnames usually end with "-lı", "-lu" (Turkic for 'with' or 'belonging to'), "-oğlu", "-qızı" (Turkic for 'son of' and 'daughter of'), or "-zade" (Persian for 'born of'). Azerbaijanis of Iranian descent traditionally use suffixes such as '-pour' or '-zadeh', meaning 'born of', with their father's name. It is, however, more usual for them to use the name of the city in which their ancestors lived (e.g. Tabrizpour for those from Tabriz) or their occupation (e.g. Damirchizadeh for blacksmiths). Also, due to Azerbaijan having been part of the Russian Empire, many last names carry Slavic endings of "-ov" for men and "-ova" for women.
Most eastern Georgian surnames end with the suffix "-shvili" (e.g. Kartveli'shvili), Georgian for "child" or "offspring". Western Georgian surnames most commonly have the suffix "-dze", Georgian for "son". Megrelian surnames usually end in "-ia", "-ua" or "-ava". Other location-specific endings exist: In Svaneti "-iani", meaning "belonging to" or "hailing from", is common. In the eastern Georgian highlands common endings are "uri" and "uli". Some noble family names end in "eli", meaning "of (someplace)".
In Georgian, the surname is not normally used as the polite form of address; instead, the given name is used together with a title. For instance, Nikoloz Kartvelishvili is politely addressed as "bat'ono Nikoloz" ("My Lord Nikoloz").
Greek surnames are most commonly patronymics. Occupation, characteristic, or ethnic background and location/origin-based surnames also occur; they are sometimes supplemented by nicknames.
Commonly, Greek male surnames end in -s, which is the common ending for Greek masculine proper nouns in the nominative case. Exceptionally, some end in -ou, indicating the genitive case of this proper noun for patronymic reasons.
Although surnames are static today, dynamic and changing patronym usage survives in middle names in Greece where the genitive of the father's first name is commonly the middle name.
Because of their codification in the Modern Greek state, surnames have Katharevousa forms even though Katharevousa is no longer the official standard. Thus, the Ancient Greek name Eleutherios forms the Modern Greek proper name Lefteris, and former vernacular practice (prefixing the surname to the proper name) was to call John Eleutherios Leftero-giannis.
Modern practice is to call the same person Giannis Eleftheriou: the proper name is vernacular (and not Ioannis), but the surname is an archaic genitive. However, children are almost always baptised with the archaic form of the name so in official matters, the child will be referred to as Ioannis Eleftheriou and not Giannis Eleftheriou.
Female surnames are most often in the Katharevousa genitive case of a male name. This is an innovation of the Modern Greek state; Byzantine practice was to form a feminine counterpart of the male surname (e.g. masculine Palaiologos, Byzantine feminine Palaiologina, Modern feminine Palaiologou).
In the past, women would change their surname when married to that of their husband (again in the genitive case) signifying the transfer of "dependence" from the father to the husband. In earlier Modern Greek society, women were named with -aina as a feminine suffix on the husband's first name: "Giorgaina", "Mrs George", "Wife of George". Nowadays, a woman's legal surname does not change upon marriage, though she can use the husband's surname socially. Children usually receive the paternal surname, though in rare cases, if the bride and groom have agreed before the marriage, the children can receive the maternal surname.
Some surnames are prefixed with Papa-, indicating ancestry from a priest, e.g. "Papageorgiou", the "son of a priest named George". Others, like Archi- and Mastro- signify "boss" and "tradesman" respectively.
Prefixes such as Konto-, Makro-, and Chondro- describe body characteristics, such as "short", "tall/long" and "fat". "Gero-" and "Palaio-" signify "old" or "wise".
Other prefixes include Hadji- (Χαντζή- or Χαντζι-), an honorific deriving from the Arabic Hadj or pilgrimage, which indicates that the person had made a pilgrimage (in the case of Christians, to Jerusalem), and Kara-, attributed to the Turkish word for "black" and dating from the Ottoman Empire era. The Turkish suffix -oglou (derived from a patronym, "-oğlu" in Turkish) can also be found. Although they are of course more common among Greece's Muslim minority, they still can be found among the Christian majority, often Greeks or Karamanlides who were pressured to leave Turkey after the Turkish Republic was founded (since Turkish surnames only date to the founding of the Republic, when Atatürk made them compulsory).
Arvanitic surnames also exist; an example is "Tzanavaras" or "Tzavaras", from the Arvanitic word "çanavar" or "çavar" meaning "brave" ("pallikari" in Greek).
Most Greek patronymic suffixes are diminutives, which vary by region.
Either the surname or the given name may come first in different contexts; in newspapers and in informal uses, the order is "given name + surname", while in official documents and forums (tax forms, registrations, military service, school forms), the surname is often listed or said first.
In Hungarian, as in Asian languages but unlike most other European ones (see French and German above for exceptions), the family name is placed before the given names. This usage does not apply to non-Hungarian names; for example, "Tony Blair" will remain "Tony Blair" when written in Hungarian texts.
Names of Hungarian individuals, however, appear in Western order in English writing.
Indonesians comprise more than 300 ethnic groups. Not all of these groups traditionally have surnames, and on the populous island of Java surnames are not common at all, regardless of which of the six officially recognized religions the name carrier professes. For instance, a Christian Javanese woman named "Agnes Mega Rosalin" has three forenames and no surname. "Agnes" is her Christian name, but "Mega" can be the first name she uses and the name by which she is addressed. "Rosalin" is only a middle name. Nonetheless, Indonesians are well aware of the custom of family names, which is known as "marga" or "fam", and such names have become a specific kind of identifier. People can tell what a person's heritage is by his or her family or clan name.
Javanese people are the majority in Indonesia, and most do not have any surname. There are some individuals, especially the old generation, who have only one name, such as "Suharto" and "Sukarno". These are not only common with the Javanese but also with other Indonesian ethnic groups who do not have the tradition of surnames. If, however, they are Muslims, they might opt to follow Arabic naming customs, but Indonesian Muslims don't automatically follow Arabic name traditions.
In conjunction with migration to Europe or America, Indonesians without surnames often adopt a surname based on some family name or middle name. The visa application forms of many Western countries have a box for the last name, which the applicant cannot leave blank.
Most Chinese Indonesians substituted their Chinese surnames with Indonesian-sounding surnames due to political pressure from 1965 to 1998 under Suharto's regime.
Persian last names may be formed in several ways, commonly from suffixes, place names, or religious affiliations.
Suffixes include: -an (plural suffix), -i ("of"), -zad/-zadeh ("born of"), -pur ("son of"), -nejad ("from the race of"), -nia ("descendant of"), -mand ("having or pertaining to"), -vand ("succeeding"), -far ("holder of"), -doost ("-phile"), -khah ("seeking of"), -manesh ("having the manner of"), -ian/-yan, -gar and -chi ("whose vocation pertains").
An example is names of geographical locations plus "-i": Irani ("Iranian"), Gilani ("of Gilan province"), Tabrizi ("of the city of Tabriz").
Another example is last names that indicate relation to religious groups such as Zoroastrian (e.g. Goshtaspi, Namiranian, Azargoshasp), Jewish (e.g. Yaghubian [Jacobean], Hayyem [Life], Shaul [Saul]) or Muslim (e.g. Alavi, Islamnia, Montazeri)
Last names are arbitrary; their holder need not have any relation to their meaning.
Traditionally in Iran, the wife does not take her husband's surname, although children take the surname of their father. Individual reactions notwithstanding, it is possible to call a married woman by her husband's surname. This is facilitated by the fact that English words "Mrs.", "Miss", "Woman", "Lady" and "Wife (of)" in a polite context are all translated into "خانم" (Khaanom). Context, however, is important: "خانم گلدوست" (Khaanom Goldust) may, for instance, refer to the daughter of Mr. Goldust instead of his wife.
When most Iranian surnames are used with a name, the name ends with a suffix -e or -ie ("of"), as in Hasan-e Roshan (Hasan is the given name and Roshan the surname), meaning "Hasan of Roshan", or Mosa-ie Saiidi ("Mosa of Saiidi"). The -e does not belong to the surname, and it is difficult to say it is part of the surname.
Italy has around 350,000 surnames. Most of them derive from the following sources: patronym or ilk (e.g. "Francesco di Marco", "Francis, son of Mark" or "Eduardo de Filippo", "Edward belonging to the family of Philip"), occupation (e.g. "Enzo Ferrari", "Heinz (of the) Blacksmiths"), personal characteristic (e.g. nicknames or pet names like "Dario Forte", "Darius the Strong"), geographic origin (e.g. "Elisabetta Romano", "Elisabeth from Rome") and objects (e.g. "Carlo Sacchi", "Charles Bags"). The two most common Italian family names, "Russo" and "Rossi", mean the same thing, "Red", possibly referring to the hair color.
Both Western and Eastern orders are used for full names: the given name usually comes first, but the family name may come first in administrative settings; lists are usually indexed according to the last name.
Since 1975, women have kept their own surname when married, but until recently (2000) they could add the surname of the husband according to the civil code, although it was a very seldom-used practice. In recent years, the husband's surname cannot be used in any official situation. In some unofficial situations, sometimes both surnames are written (the proper one first), sometimes separated by "in" (e.g. "Giuseppina Mauri in Crivelli") or, in the case of widows, "ved." ("vedova").
Latvian male surnames usually end in "-s", "-š" or "-is", whereas the female versions of the same names end in "-a", "-e" or "-s", for both unmarried and married women.
Before the emancipation from serfdom (1817 in Courland, 1819 in Vidzeme, 1861 in Latgale) only noblemen, free craftsmen or people living in towns had surnames. Therefore, the oldest Latvian surnames originate from German or Low German, reflecting the dominance of German as an official language in Latvia until the 19th century. Examples: "Meijers/Meijere" (German: "Meier", farm administrator; akin to Mayor), "Millers/Millere" (German: "Müller", miller), "Šmits/Šmite" (German: "Schmidt", smith), "Šulcs/Šulca" (German: "Schulze", constable), "Ulmanis" (German: "Ullmann", a person from Ulm), "Godmanis" (a God-man), "Pētersons" (son of Peter). Some Latvian surnames, mainly from Latgale, are of Polish or Belarusian origin, formed by changing the final "-ski/-cki" to "-skis/-ckis", "-czyk" to "-čiks" or "-vich/-wicz" to "-vičs", such as "Sokolovskis/Sokolovska", "Baldunčiks/Baldunčika" or "Ratkevičs/Ratkeviča".
Most Latvian peasants received their surnames in 1826 (in Vidzeme), in 1835 (in Courland), and in 1866 (in Latgale). Diminutives were the most common form of family names. Examples: "Kalniņš/Kalniņa" (small hill), "Bērziņš/Bērziņa" (small birch).
Nowadays many Latvians of Slavic descent have surnames of Russian, Belarusian, or Ukrainian origin, for example "Volkovs/Volkova" or "Antoņenko".
Libya's names and surnames have a strong Islamic/Arab nature, with some Turkish influence from Ottoman Empire rule of nearly 400 years.
Amazigh, Touareg and other minorities also have their own name/surname traditions.
Due to its location as a trade route and the different cultures that had their impact on Libya throughout history, one can find names that could have originated in neighboring countries, including clan names from the Arabian Peninsula, and Turkish names derived from military rank or status ("Basha", "Agha").
Lithuanian names follow the Baltic distinction between male and female suffixes of names, although the details differ. Male surnames usually end in "-a", "-as", "-aitis", "-ys", "-ius", or "-us", whereas the female versions change these suffixes to "-aitė", "-ytė", "-iūtė", and "-utė" respectively (if unmarried), "-ienė" (if married), or "-ė" (not indicating the marital status). Some Lithuanians have names of Polish or other Slavic origin, which are made to conform to Lithuanian by changing the final "-ski" to "-skas", such as "Sadauskas", with the female version being "-skaitė" (if unmarried), "-skienė" (if married), or "-skė" (not indicating the marital status).
Different cultures have their impact on the demographics of the Maltese islands, and this is evident in the various surnames Maltese citizens bear nowadays. There are very few "Maltese" surnames per se: the few that originate from Maltese places of origin include "Chircop" (Kirkop), "Lia" (Lija), "Balzan" (Balzan), "Valletta" (Valletta), and "Sciberras" (Xebb ir-Ras Hill, on which Valletta was built). The village of Munxar, Gozo is characterised by the majority of its population having one of two surnames, either "Curmi" or "de Brincat". In Gozo, the surnames "Bajada" and "Farrugia" are also common.
Sicilian and Italian surnames are common due to the close vicinity to Malta. Sicilian Italians were the first to colonise the Maltese islands. Common examples include "Azzopardi", "Bonello", "Cauchi", "Farrugia", "Gauci", "Rizzo", "Schembri", "Tabone", "Vassallo", "Vella".
Common examples include "Depuis", "Montfort", "Monsenuier", "Tafel".
English surnames exist for a number of reasons, but mainly due to migration as well as Malta forming a part of the British Empire in the 19th century and most of the 20th. Common examples include "Bone", "Harding", "Atkins", "Mattocks", "Smith", "Jones", "Woods", "Turner".
Arabic surnames occur in part due to the early presence of the Arabs in Malta. Common examples include "Sammut", "Camilleri", "Zammit", and "Xuereb".
Common surnames of Spanish origin include "Abela", "Galdes", "Herrera", and "Guzman".
Surnames from foreign countries from the Middle Ages include German, such as "von Brockdorff", "Hyzler", and "Schranz".
Many of the earliest Maltese surnames are Sicilian Greek, e.g. "Cilia", "Calleia", "Brincat", "Cauchi". Much less common are recent surnames from Greece; examples include "Dacoutros" and "Trakosopoulos".
The original Jewish community of Malta and Gozo has left no trace of their presence on the islands since they were expelled in January 1493.
In line with the practice in other Christian, European states, women generally assume their husband's surname after legal marriage, and this is passed on to any children the couple may bear. Some women opt to retain their old name, for professional/personal reasons, or combine their surname with that of their husband.
Mongolians do not use surnames in the way that most Westerners, Chinese or Japanese do. Since the socialist period, patronymics – then called "ovog", now called "etsgiin ner" – are used instead of a surname. If the father's name is unknown, a matronymic is used. The patro- or matronymic is written before the given name. Therefore, if a man with given name Tsakhia has a son, and gives the son the name Elbegdorj, the son's full name is Tsakhia Elbegdorj. Very frequently, the patronymic is given in genitive case, i.e. Tsakhiagiin Elbegdorj. However, the patronymic is rather insignificant in everyday use and usually just given as an initial – Ts. Elbegdorj. People are normally just referred to and addressed by their given name (Elbegdorj "guai" – Mr. Elbegdorj), and if two people share a common given name, they are usually just kept apart by their initials, not by the full patronymic.
Since 2000, Mongolians have been officially using clan names – "ovog", the same word that had been used for the patronymics before – on their IDs. Many people chose the names of the ancient clans and tribes such as Borjigin, Besud, Jalair, etc. Also many extended families chose the names of the native places of their ancestors. Some chose the names of their most ancient known ancestor. Some just decided to pass their own given names (or modifications of their given names) to their descendants as clan names. Some chose other attributes of their lives as surnames. Gürragchaa chose Sansar (Cosmos). Clan names precede the patronymics and given names, e.g. Besud Tsakhiagiin Elbegdorj. These clan names have a significance and are included in Mongolian passports.
People from Myanmar (Burma) have no family names; they are, to some, the only known Asian people with no family names at all. Some from Myanmar who are familiar with European or American cultures have begun giving their younger generations a family name, adopted from notable ancestors. For example, Ms. Aung San Suu Kyi is the daughter of the late Father of Independence General Aung San; Hayma Ne Win is the daughter of the famous actor Kawleikgyin Ne Win.
Until the middle of the 19th century, there was no standardization of surnames in the Philippines. There were native Filipinos without surnames, others whose surnames deliberately did not match that of their families, as well as those who took certain surnames simply because they had a certain prestige, usually ones related to the Roman Catholic religion, such as de los Santos ("of the saints") and de la Cruz ("of the cross").
On 21 November 1849, the Spanish Governor-General of the Philippines, Narciso Clavería y Zaldúa, decreed an end to these arbitrary practices, the systematic distribution of surnames to Filipinos without prior surnames and the universal implementation of the Spanish naming system. This produced the "Catálogo alfabético de apellidos" ("Alphabetical Catalogue of Surnames"), which listed permitted surnames with origins in Spanish, Filipino, and Hispanicised Chinese words, names, and numbers. Thus, many Spanish-sounding Filipino surnames are not surnames common to the rest of the Hispanophone world. Surnames with connections to nobility, either Spanish or local, however, were explicitly prohibited, and only allowed to be retained by families with noble status or which had used the surname in three consecutive generations. The book contained many words coming from Spanish and the Philippine languages such as Tagalog, as well as many Basque surnames such as Zuloaga or Aguirre.
The colonial authorities implemented this decree because too many (early) Christianized Filipinos assumed religious names. There soon were too many people surnamed "de los Santos" ("of the saints"), "de la Cruz" ("of the cross"), "del Rosario" ("of the Rosary"), "Bautista" ("Baptist"), et cetera, which made it difficult for the Spanish colonists to control the Filipino people, and most importantly, to collect taxes. These extremely common names were also banned by the decree unless the name had been used by a family for at least four generations. This Spanish naming custom also countered the native custom before the Spanish period, wherein siblings assumed different surnames. Clavería's decree was enforced to different degrees in different parts of the colony.
Because of this implementation of Spanish naming customs, of the arrangement "given_name + paternal_surname + maternal_surname", in the Philippines, a Spanish surname does not necessarily denote Spanish ancestry.
In practice, the application of this decree varied from municipality to municipality. Most municipalities received surnames starting with only one initial letter, but some are assigned surnames starting with two or three initial letters. For example, the majority of residents of the island of Banton in the province of Romblon have surnames starting with F such as "Fabicon", "Fallarme", "Fadrilan", and "Ferran". Other examples are case of Batangas, Batangas (present-day Batangas City), where most residents bear surnames starting with the letters A, B, and C, such as "Abacan", "Albayalde", "Almarez", "Andal", "Arce", "Arceo", "Arguelles", "Arrieta", "Babasa", "Balmes", "Basco", "Baylosis", "Berberabe", "Biscocho", "Blanco", "Borbon", "Calingasan", "Caringal", "Chavez", "Cuenca", and "Custodio" (in addition to some bearing native Tagalog surnames, such as "Dimaano", "Dimacuha", "Macatangay", "Malabanan", and "Marasigan"), and Argao, Cebu, where most residents bear surnames starting with "VI" and "Al", such as "Villaluz", "Villaflor", "Villamor", "Villanueva", "Albo", "Alcain", "Alcarez", "Algones", etc.
Thus, although perhaps a majority of Filipinos have Spanish surnames, such a surname does not indicate Spanish ancestry. In addition, most Filipinos currently do not use Spanish accented letters in their Spanish derived names. The lack of accents in Filipino Spanish has been attributed to the lack of accents on the predominantly American typewriters after the US gained control of the Philippines.
The vast majority of Filipinos follow a naming system in the American order (i.e. given_name + middle_name + surname), which is the reverse of the Spanish naming order (i.e. given_name + paternal_surname + maternal_surname). Children take the mother's surname as their middle name, followed by their father's as their surname; for example, a son of Juan de la Cruz and his wife María Agbayani may be David Agbayani de la Cruz. Women usually take the surnames of their husband upon marriage, and consequently lose their maiden middle names; so upon her marriage to David de la Cruz, the full name of Laura Yuchengco Macaraeg would become Laura Macaraeg de la Cruz. Their maiden last names automatically become their middle names upon marriage.
There are other sources for surnames. Many Filipinos also have Chinese-derived surnames, which in some cases could indicate Chinese ancestry. Many Hispanicised Chinese numerals and other Hispanicised Chinese words, however, were also among the surnames in the "Catálogo alfabético de apellidos". For those whose surname may indicate Chinese ancestry, analysis of the surname may help to pinpoint when those ancestors arrived in the Philippines. A Hispanicised Chinese surname such as Cojuangco suggests an 18th-century arrival while a Chinese surname such as Lim suggests a relatively recent immigration. Some Chinese surnames such as Tiu-Laurel are composed of the immigrant Chinese ancestor's surname as well as the name of that ancestor's godparent on receiving Christian baptism.
In the predominantly Muslim areas of the southern Philippines, adoption of surnames was influenced by Islamic religious terms. As a result, surnames among Filipino Muslims are largely Arabic-based, and include such surnames as Hassan and Haradji.
There are also Filipinos who, to this day, have no surnames at all, particularly if they come from indigenous cultural communities.
Prior to the establishment of the Philippines as a US territory during the earlier part of the 20th century, Filipinos usually followed Iberian naming customs. However, upon the promulgation of the Family Code of 1987, Filipinos formalized adopting the American system of using their surnames.
A common Filipino name consists of the given name (often two given names), the initial letter of the mother's maiden name, and finally the father's surname (i.e. Lucy Anne C. de Guzman). Also, women are allowed to retain their maiden name or to use both their and their husband's surnames as a double-barrelled surname, separated by a dash. This is common in feminist circles or when the woman holds a prominent office (e.g. Gloria Macapagal-Arroyo, Miriam Defensor Santiago). In more traditional circles, especially those who belong to the prominent families in the provinces, the custom of the woman being addressed as "Mrs. Husband's Full Name" is still common.
For widows, who chose to marry again, two norms are in existence. For those who were widowed before the Family Code, the full name of the woman remains while the surname of the deceased husband is attached. That is, Maria Andres, who was widowed by Ignacio Dimaculangan will have the name Maria Andres viuda de Dimaculangan. If she chooses to marry again, this name will still continue to exist while the surname of the new husband is attached. Thus, if Maria marries Rene de los Santos, her new name will be Maria Andres viuda de Dimaculangan de los Santos.
However, a new norm is also in existence. The woman may choose to use her husband's surname to be one of her middle names. Thus, Maria Andres viuda de Dimaculangan de los Santos may also be called Maria A.D. de los Santos.
Children will however automatically inherit their father's surname if they are considered legitimate. If the child is born out of wedlock, the mother will automatically pass her surname to the child, unless the father gives a written acknowledgment of paternity. The father may also choose to give the child both his parents' surnames if he wishes (that is, Gustavo Paredes, whose parents are Eulogio Paredes and Juliana Angeles, while having Maria Solis as a wife, may name his child Kevin S. Angeles-Paredes).
In some Tagalog regions, the norm of giving patronyms, or in some cases matronyms, is also accepted. These names are of course not official, since family names in the Philippines are inherited. It is not uncommon to refer to someone as Juan anak ni Pablo (John, the son of Paul) or Juan apo ni Teofilo (John, the grandson of Theophilus).
In Romania, like in most of Europe, it is customary for a child to take his father's family name, and for a wife to take her husband's last name. However, this is not compulsory; spouses and parents are allowed to choose other options too, as the law is flexible (see Art. 282, Art. 449 and Art. 450 of the Civil Code of Romania).
Until the 19th century, the names were primarily of the form "[given name] [father's name] [grandfather's name]". The few exceptions are usually famous people or the nobility (boyars). The name reform introduced around 1850 had the names changed to a western style, most likely imported from France, consisting of a given name followed by a family name.
As such, the name is called "prenume" (French "prénom"), while the family name is called "nume" or, when otherwise ambiguous, "nume de familie" ("family name"). Although not mandatory, middle names are common.
Historically, when the family name reform was introduced in the mid-19th century, the default was to use a patronym, or a matronym when the father was dead or unknown. A common convention was to append the suffix "-escu" to the father's name, e.g. "Anghelescu" (""Anghel's" child") and "Petrescu" (""Petre's" child"). (The "-escu" seems to come from Latin "-iscum", thus being cognate with Italian "-esco" and French "-esque".) Another common convention was to append the suffix "-eanu" to the name of the place of origin, e.g. "Munteanu" ("from the mountains") and "Moldoveanu" ("from "Moldova""). These uniquely Romanian suffixes strongly identify ancestral nationality.
There are also descriptive family names derived from occupations, nicknames, and events, e.g. "Botezatu" ("baptised"), "Barbu" ("bushy bearded"), "Prodan" ("foster"), "Bălan" ("blond"), "Fieraru" ("smith"), "Croitoru" ("tailor"), "Păcuraru" ("shepherd").
Romanian family names remain the same regardless of the sex of the person.
Although given names appear before family names in most Romanian contexts, official documents invert the order, ostensibly for filing purposes. Correspondingly, Romanians occasionally introduce themselves with their family names first, e.g. a student signing a test paper in school.
Romanians bearing names of non-Romanian origin often adopt Romanianised versions of their ancestral surnames. For example, "Jurovschi" for Polish "Żurowski", or Popovici for Serbian Popović ("son of a priest"), which preserves the original pronunciation of the surname through transliteration. In some cases, these changes were mandated by the state.
In Turkey, following the Surname Law imposed in 1934 in the context of Atatürk's Reforms, every family living in Turkey was given a family name. The surname was generally selected by the elderly people of the family and could be any Turkish word (or a permitted word for families belonging to official minority groups).
Some of the most common family names in Turkey are "Yılmaz" ('undaunted'), "Doğan" ('falcon'), "Şahin" ('hawk'), "Yıldırım" ('thunderbolt'), "Şimşek" ('lightning'), "Öztürk" ('purely Turkish').
Patronymic surnames do not necessarily refer to ancestry, or in most cases cannot be traced back historically. The most usual Turkish patronymic suffix is "–oğlu"; "–ov(a)", "–yev(a)" and "–zade" also occur in the surnames of Azeri or other Turkic descendants.
Official minorities like Armenians, Greeks, and Jews have surnames in their own mother languages.
The Armenian families living in Turkey usually have Armenian surnames and generally have the suffix "–yan", "–ian", or, using Turkish spelling, "-can". Greek descendants usually have Greek surnames which might have Greek suffixes like "–ou", "–aki(s)", "–poulos/poulou", "–idis/idou", "–iadis/iadou" or prefixes like "papa–".
The Sephardic Jews who were expelled from Spain and settled in Turkey in 1492 have both Jewish/Hebrew surnames, and Spanish surnames, usually indicating their native regions, cities or villages back in Spain, like "De Leon" or "Toledano".
However these minorities increasingly tend to "Turkicize" their surnames or replace their original surnames with Turkish surnames altogether to avoid being recognized and discriminated against. | https://en.wikipedia.org/wiki?curid=10814 |
Combination
In mathematics, a combination is a selection of items from a collection, such that (unlike permutations) the order of selection does not matter. For example, given three fruits, say an apple, an orange and a pear, there are three combinations of two that can be drawn from this set: an apple and a pear; an apple and an orange; or a pear and an orange.
More formally, a "k"-combination of a set "S" is a subset of "k" distinct elements of "S". If the set has "n" elements, the number of "k"-combinations is equal to the binomial coefficient
which can be written using factorials as formula_2 whenever formula_3, and which is zero when formula_4. The set of all "k"-combinations of a set "S" is often denoted by formula_5.
Combinations refer to the combination of "n" things taken "k" at a time without repetition. To refer to combinations in which repetition is allowed, the terms "k"-selection, "k"-multiset, or "k"-combination with repetition are often used. If, in the above example, it were possible to have two of any one kind of fruit there would be 3 more 2-selections: one with two apples, one with two oranges, and one with two pears.
Although the set of three fruits was small enough to write a complete list of combinations, with large sets this becomes impractical. For example, a poker hand can be described as a 5-combination ("k" = 5) of cards from a 52 card deck ("n" = 52). The 5 cards of the hand are all distinct, and the order of cards in the hand does not matter. There are 2,598,960 such combinations, and the chance of drawing any one hand at random is 1 / 2,598,960.
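This count is easy to verify computationally; a quick sketch using Python's standard library (math.comb is the binomial coefficient):
```python
import math

# Number of distinct 5-card poker hands from a 52-card deck.
hands = math.comb(52, 5)
print(hands)        # 2598960
print(1 / hands)    # probability of drawing any one particular hand
```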
The number of "k"-combinations from a given set "S" of "n" elements is often denoted in elementary combinatorics texts by formula_6, or by a variation such as formula_7, formula_8, formula_9, formula_10 or even formula_11 (the latter form was standard in French, Romanian, Russian, Chinese and Polish texts). The same number however occurs in many other mathematical contexts, where it is denoted by formula_12 (often read as ""n" choose "k""); notably it occurs as a coefficient in the binomial formula, hence its name binomial coefficient. One can define formula_12 for all natural numbers "k" at once by the relation
from which it is clear that
and further,
To see that these coefficients count "k"-combinations from "S", one can first consider a collection of "n" distinct variables $X_s$ labeled by the elements "s" of "S", and expand the product over all elements of "S":
$$\prod_{s \in S} (1 + X_s);$$
it has $2^n$ distinct terms corresponding to all the subsets of "S", each subset giving the product of the corresponding variables $X_s$. Now setting all of the $X_s$ equal to the unlabeled variable $X$, so that the product becomes $(1+X)^n$, the term for each "k"-combination from "S" becomes $X^k$, so that the coefficient of that power in the result equals the number of such "k"-combinations.
Binomial coefficients can be computed explicitly in various ways. To get all of them for the expansions up to $(1+X)^n$, one can use (in addition to the basic cases already given) the recursion relation
$$\binom{n}{k} = \binom{n-1}{k-1} + \binom{n-1}{k}$$
for 0 < "k" < "n", which follows from $(1+X)^n = (1+X)^{n-1}(1+X)$; this leads to the construction of Pascal's triangle.
For determining an individual binomial coefficient, it is more practical to use the formula
$$\binom{n}{k} = \frac{n(n-1)(n-2)\cdots(n-k+1)}{k!}.$$
The numerator gives the number of "k"-permutations of "n", i.e., of sequences of "k" distinct elements of "S", while the denominator gives the number of such "k"-permutations that give the same "k"-combination when the order is ignored.
When "k" exceeds "n"/2, the above formula contains factors common to the numerator and the denominator, and canceling them out gives the relation
for 0 ≤ "k" ≤ "n". This expresses a symmetry that is evident from the binomial formula, and can also be understood in terms of "k"-combinations by taking the complement of such a combination, which is an -combination.
Finally there is a formula which exhibits this symmetry directly, and has the merit of being easy to remember:
$$\binom{n}{k} = \frac{n!}{k!(n-k)!},$$
where "n"! denotes the factorial of "n". It is obtained from the previous formula by multiplying denominator and numerator by ("n" − "k")!, so it is certainly inferior as a method of computation to that formula.
The last formula can be understood directly, by considering the "n"! permutations of all the elements of "S". Each such permutation gives a "k"-combination by selecting its first "k" elements. There are many duplicate selections: any combined permutation of the first "k" elements among each other, and of the final ("n" − "k") elements among each other produces the same combination; this explains the division in the formula.
From the above formulas follow relations between adjacent numbers in Pascal's triangle in all three directions:
$$\binom{n}{k} = \binom{n}{k-1}\,\frac{n-k+1}{k} \quad \text{for } k > 0,$$
$$\binom{n}{k} = \binom{n-1}{k}\,\frac{n}{n-k} \quad \text{for } k < n,$$
$$\binom{n}{k} = \binom{n-1}{k-1}\,\frac{n}{k} \quad \text{for } n, k > 0.$$
Together with the basic cases $\binom{n}{0} = 1 = \binom{n}{n}$, these allow successive computation of respectively all numbers of combinations from the same set (a row in Pascal's triangle), of "k"-combinations of sets of growing sizes, and of combinations with a complement of fixed size "n" − "k".
As a specific example, one can compute the number of five-card hands possible from a standard fifty-two card deck as:
$$\binom{52}{5} = \frac{52 \times 51 \times 50 \times 49 \times 48}{5 \times 4 \times 3 \times 2 \times 1} = \frac{311{,}875{,}200}{120} = 2{,}598{,}960.$$
Alternatively one may use the formula in terms of factorials and cancel the factors in the numerator against parts of the factors in the denominator, after which only multiplication of the remaining factors is required:
$$\binom{52}{5} = \frac{52!}{5!\,47!} = \frac{52 \times 51 \times 50 \times 49 \times 48}{5!} = 2{,}598{,}960.$$
Another alternative computation, equivalent to the first, is based on writing
$$\binom{n}{k} = \frac{n}{k} \times \frac{n-1}{k-1} \times \cdots \times \frac{n-k+1}{1},$$
which gives
$$\binom{52}{5} = \frac{52}{5} \times \frac{51}{4} \times \frac{50}{3} \times \frac{49}{2} \times \frac{48}{1} = 2{,}598{,}960.$$
When evaluated in the order 52 ÷ 1 × 51 ÷ 2 × 50 ÷ 3 × 49 ÷ 4 × 48 ÷ 5, this can be computed using only integer arithmetic. The reason is that when each division occurs, the intermediate result that is produced is itself a binomial coefficient, so no remainders ever occur. A sketch of this evaluation in code follows below.
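A sketch of this integer-only scheme in Python (the function name binom is ours); each intermediate value of c is itself a binomial coefficient, so the floor division // never discards a remainder:
```python
def binom(n, k):
    """Compute C(n, k) using only integer arithmetic, evaluating
    n / 1 * (n-1) / 2 * ... so that c equals C(n, i) after step i."""
    if k < 0 or k > n:
        return 0
    k = min(k, n - k)               # symmetry: C(n, k) == C(n, n - k)
    c = 1
    for i in range(1, k + 1):
        c = c * (n - i + 1) // i    # c is now exactly C(n, i)
    return c

print(binom(52, 5))   # 2598960
```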
Using the symmetric formula in terms of factorials without performing simplifications gives a rather extensive calculation:
$$\binom{52}{5} = \frac{52!}{5!\,47!} = \frac{52!}{120 \times 47!} = 2{,}598{,}960,$$
since it requires evaluating the 68-digit number $52! \approx 8.0658 \times 10^{67}$ before dividing.
One can enumerate all "k"-combinations of a given set "S" of "n" elements in some fixed order, which establishes a bijection from an interval of $\binom{n}{k}$ integers with the set of those "k"-combinations. Assuming "S" is itself ordered, for instance "S" = { 1, 2, …, "n" }, there are two natural possibilities for ordering its "k"-combinations: by comparing their smallest elements first (as in the illustrations above) or by comparing their largest elements first. The latter option has the advantage that adding a new largest element to "S" will not change the initial part of the enumeration, but just add the new "k"-combinations of the larger set after the previous ones. Repeating this process, the enumeration can be extended indefinitely with "k"-combinations of ever larger sets. If moreover the intervals of the integers are taken to start at 0, then the "k"-combination at a given place "i" in the enumeration can be computed easily from "i", and the bijection so obtained is known as the combinatorial number system. It is also known as "rank"/"ranking" and "unranking" in computational mathematics.
There are many ways to enumerate "k"-combinations. One way is to visit all the binary numbers less than $2^n$ and choose those numbers having "k" nonzero bits, although this is very inefficient even for small "n" (e.g. "n" = 20 would require visiting about one million numbers while the maximum number of allowed "k"-combinations is about 185 thousand for "k" = 10). The positions of these 1 bits in such a number give a specific "k"-combination of the set { 1, …, "n" }. Another simple, faster way is to track "k" index numbers of the elements selected, starting with {0 .. "k"−1} (zero-based) or {1 .. "k"} (one-based) as the first allowed "k"-combination, and then repeatedly moving to the next allowed "k"-combination by incrementing the last index number if it is lower than "n"-1 (zero-based) or "n" (one-based), or otherwise incrementing the last index number "x" that is less than the index number following it minus one, if such an index exists, and resetting the index numbers after "x" to {"x"+1, "x"+2, …}. A sketch of this index-tracking method appears below.
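A sketch of the index-tracking method (zero-based) in Python; it yields the "k"-combinations of range(n) in lexicographic order:
```python
def k_combinations(n, k):
    """Generate all k-combinations of range(n) in lexicographic order by
    repeatedly advancing the rightmost index that can still be incremented."""
    indices = list(range(k))             # first combination: 0, 1, .., k-1
    while True:
        yield tuple(indices)
        # Find the rightmost index that has not reached its maximum value.
        i = k - 1
        while i >= 0 and indices[i] == n - k + i:
            i -= 1
        if i < 0:                        # all indices maxed out: done
            return
        indices[i] += 1
        for j in range(i + 1, k):        # reset the indices that follow it
            indices[j] = indices[j - 1] + 1

print(list(k_combinations(4, 2)))
# [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
```
In practice, Python's itertools.combinations implements essentially this algorithm.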
A "k"-combination with repetitions, or "k"-multicombination, or multisubset of size "k" from a set "S" is given by a sequence of "k" not necessarily distinct elements of "S", where order is not taken into account: two sequences define the same multiset if one can be obtained from the other by permuting the terms. In other words, the number of ways to sample "k" elements from a set of "n" elements allowing for duplicates (i.e., with replacement) but disregarding different orderings (e.g. {2,1,2} = {1,2,2}). Associate an index to each element of "S" and think of the elements of "S" as "types" of objects, then we can let formula_30 denote the number of elements of type "i" in a multisubset. The number of multisubsets of size "k" is then the number of nonnegative integer solutions of the Diophantine equation:
If "S" has "n" elements, the number of such "k"-multisubsets is denoted by,
a notation that is analogous to the binomial coefficient which counts "k"-subsets. This expression, "n" multichoose "k", can also be given in terms of binomial coefficients:
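Rendered computationally (a sketch; the helper name multichoose is ours, not a standard library function):
```python
import math

def multichoose(n, k):
    # Number of k-multisubsets of an n-element set: C(n + k - 1, k).
    return math.comb(n + k - 1, k)

print(multichoose(4, 3))   # 20, matching the donut example below
```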
This relationship can be easily proved using a representation known as stars and bars.
A solution of the above Diophantine equation can be represented by $x_1$ "stars", a separator (a "bar"), then $x_2$ more stars, another separator, and so on. The total number of stars in this representation is "k" and the number of bars is "n" − 1 (since no separator is needed at the very end). Thus, a string of "k" + "n" − 1 symbols (stars and bars) corresponds to a solution if there are "k" stars in the string. Any solution can be represented by choosing "k" out of "k" + "n" − 1 positions to place stars and filling the remaining positions with bars. For example, the solution $x_1 = 3, x_2 = 2, x_3 = 0, x_4 = 5$ of the equation $x_1 + x_2 + x_3 + x_4 = 10$ can be represented by
★★★|★★||★★★★★.
The number of such strings is the number of ways to place 10 stars in 13 positions, $\binom{13}{10} = \binom{13}{3} = 286$, which is the number of 10-multisubsets of a set with 4 elements.
As with binomial coefficients, there are several relationships between these multichoose expressions. For example, for $n, k > 0$,
$$\left(\!\!\binom{n}{k}\!\!\right) = \left(\!\!\binom{k+1}{n-1}\!\!\right).$$
This identity follows from interchanging the stars and bars in the above representation.
For example, if you have four types of donuts ("n" = 4) on a menu to choose from and you want three donuts ("k" = 3), the number of ways to choose the donuts with repetition can be calculated as
$$\left(\!\!\binom{4}{3}\!\!\right) = \binom{4+3-1}{3} = \binom{6}{3} = 20.$$
This result can be verified by listing all the 3-multisubsets of the set "S" = {1,2,3,4}. Each multisubset corresponds to a nonnegative integer solution $(x_1, x_2, x_3, x_4)$ of the equation $x_1 + x_2 + x_3 + x_4 = 3$ and to a stars and bars string, as the sketch below illustrates.
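A short Python sketch that reproduces the listing; itertools.combinations_with_replacement yields exactly the 3-multisubsets:
```python
from itertools import combinations_with_replacement

S = [1, 2, 3, 4]
for multisubset in combinations_with_replacement(S, 3):
    # x[i] counts how many times element i+1 occurs in the multisubset.
    x = [multisubset.count(e) for e in S]
    stars_and_bars = "|".join("*" * count for count in x)
    print(multisubset, x, stars_and_bars)
# 20 lines in total, from (1, 1, 1) [3, 0, 0, 0] ***|||
# to (4, 4, 4) [0, 0, 0, 3] |||***
```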
The number of "k"-combinations for all "k" is the number of subsets of a set of "n" elements. There are several ways to see that this number is 2"n". In terms of combinations, formula_45, which is the sum of the "n"th row (counting from 0) of the binomial coefficients in Pascal's triangle. These combinations (subsets) are enumerated by the 1 digits of the set of base 2 numbers counting from 0 to 2"n" − 1, where each digit position is an item from the set of "n".
Given 3 cards numbered 1 to 3, there are 8 distinct combinations (subsets), including the empty set: {}, {1}, {2}, {1, 2}, {3}, {1, 3}, {2, 3}, {1, 2, 3}.
Representing these subsets (in the same order) as base 2 numerals, with card "i" as digit position "i": 000, 001, 010, 011, 100, 101, 110, 111.
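A sketch of this bit-counting enumeration in Python:
```python
def subsets(items):
    """Enumerate all subsets of items via the base-2 numbers 0 .. 2^n - 1;
    bit i of the counter decides whether items[i] is included."""
    n = len(items)
    for mask in range(2 ** n):
        yield [items[i] for i in range(n) if mask >> i & 1]

for s in subsets([1, 2, 3]):
    print(s)   # [], [1], [2], [1, 2], [3], [1, 3], [2, 3], [1, 2, 3]
```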
There are various algorithms to pick out a random combination from a given set or list. Rejection sampling is extremely slow for large sample sizes. One way to select a "k"-combination efficiently from a population of size "n" is to iterate across each element of the population, and at each step pick that element with a dynamically changing probability of $\frac{k - \text{samples chosen}}{n - \text{samples visited}}$ (see reservoir sampling); a sketch follows below. | https://en.wikipedia.org/wiki?curid=5308
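A minimal sketch of this selection-sampling idea in Python (the function name is illustrative); it scans the population once and returns a uniformly random "k"-combination:
```python
import random

def random_combination(population, k):
    """Select a k-combination uniformly at random in one pass, taking each
    element with probability (slots still to fill) / (items still unseen)."""
    n = len(population)
    chosen = []
    for visited, item in enumerate(population):
        if random.random() < (k - len(chosen)) / (n - visited):
            chosen.append(item)
            if len(chosen) == k:
                break
    return chosen

print(random_combination(range(52), 5))   # e.g. [3, 17, 24, 30, 49]
```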
Software
Computer software, or simply software, is a collection of data or computer instructions that tell the computer how to work. This is in contrast to physical hardware, from which the system is built and actually performs the work. In computer science and software engineering, computer software is all information processed by computer systems, programs and data. Computer software includes computer programs, libraries and related non-executable data, such as online documentation or digital media. Computer hardware and software require each other and neither can be realistically used on its own.
At the lowest programming level, executable code consists of machine language instructions supported by an individual processor—typically a central processing unit (CPU) or a graphics processing unit (GPU). A machine language consists of groups of binary values signifying processor instructions that change the state of the computer from its preceding state. For example, an instruction may change the value stored in a particular storage location in the computer—an effect that is not directly observable to the user. An instruction may also invoke one of many input or output operations, for example displaying some text on a computer screen, causing state changes which should be visible to the user. The processor executes the instructions in the order they are provided, unless it is instructed to "jump" to a different instruction, or is interrupted by the operating system. Today, most personal computers, smartphone devices and servers have processors with multiple execution units or multiple processors performing computation together, and computing has become a much more concurrent activity than in the past.
The majority of software is written in high-level programming languages. They are easier and more efficient for programmers because they are closer to natural languages than machine languages. High-level languages are translated into machine language using a compiler or an interpreter or a combination of the two. Software may also be written in a low-level assembly language, which has strong correspondence to the computer's machine language instructions and is translated into machine language using an assembler.
An outline (algorithm) for what would have been the first piece of software was written by Ada Lovelace in the 19th century, for the planned Analytical Engine. She created proofs to show how the engine would calculate Bernoulli Numbers. Because of the proofs and the algorithm, she is considered the first computer programmer.
The first theory about software—prior to the creation of computers as we know them today—was proposed by Alan Turing in his 1935 essay "On Computable Numbers, with an Application to the Entscheidungsproblem" (decision problem).
This eventually led to the creation of the academic fields of computer science and software engineering; both fields study software and its creation. Computer science is the theoretical study of computers and software (Turing's essay is an example of computer science), whereas software engineering is the application of engineering principles to the development of software.
However, prior to 1946, software was not yet the programs stored in the memory of stored-program digital computers, as we now understand it. The first electronic computing devices were instead rewired in order to "reprogram" them.
In 2000, Fred Shapiro, a librarian at the Yale Law School, published a letter revealing that John Wilder Tukey's 1958 paper "The Teaching of Concrete Mathematics" contained the earliest known usage of the term "software" found in a search of JSTOR's electronic archives, predating the OED's citation by two years. This led many to credit Tukey with coining the term, particularly in obituaries published that same year, although Tukey never claimed credit for any such coinage. In 1995, Paul Niquette claimed he had originally coined the term in October 1953, although he could not find any documents supporting his claim. The earliest known publication of the term "software" in an engineering context was in August 1953 by Richard R. Carhart, in a Rand Corporation Research Memorandum.
On virtually all computer platforms, software can be grouped into a few broad categories.
Based on the goal, computer software can be divided into:
Programming tools are also software, in the form of programs or applications that software developers (also known as "programmers", "coders", "hackers" or "software engineers") use to create, debug, maintain (i.e. improve or fix), or otherwise support software.
Software is written in one or more programming languages; there are many programming languages in existence, and each has at least one implementation, each of which consists of its own set of programming tools. These tools may be relatively self-contained programs such as compilers, debuggers, interpreters, linkers, and text editors, that can be combined together to accomplish a task; or they may form an integrated development environment (IDE), which combines much or all of the functionality of such self-contained tools. IDEs may do this by either invoking the relevant individual tools or by re-implementing their functionality in a new way. An IDE can make it easier to do specific tasks, such as searching in files in a particular project. Many programming language implementations provide the option of using both individual tools or an IDE.
Users often see things differently from programmers. People who use modern general purpose computers (as opposed to embedded systems, analog computers and supercomputers) usually see three layers of software performing a variety of tasks: platform, application, and user software.
Computer software has to be "loaded" into the computer's storage (such as the hard drive or memory). Once the software has loaded, the computer is able to "execute" the software. This involves passing instructions from the application software, through the system software, to the hardware which ultimately receives the instruction as machine code. Each instruction causes the computer to carry out an operation—moving data, carrying out a computation, or altering the control flow of instructions.
Data movement is typically from one place in memory to another. Sometimes it involves moving data between memory and registers which enable high-speed data access in the CPU. Moving data, especially large amounts of it, can be costly. So, this is sometimes avoided by using "pointers" to data instead. Computations include simple operations such as incrementing the value of a variable data element. More complex computations may involve many operations and data elements together.
Software quality is very important, especially for commercial and system software like Microsoft Office, Microsoft Windows and Linux. If software is faulty (buggy), it can delete a person's work, crash the computer and do other unexpected things. Faults and errors are called "bugs", which are often discovered during alpha and beta testing. Software is often also a victim of what is known as software aging, the progressive performance degradation resulting from a combination of unseen bugs.
Many bugs are discovered and eliminated (debugged) through software testing. However, software testing rarely—if ever—eliminates every bug; some programmers say that "every program has at least one more bug" (Lubarsky's Law). In the waterfall method of software development, separate testing teams are typically employed, but in newer approaches, collectively termed agile software development, developers often do all their own testing, and demonstrate the software to users/clients regularly to obtain feedback. Software can be tested through unit testing, regression testing and other methods, which are done manually, or most commonly, automatically, since the amount of code to be tested can be quite large. For instance, NASA has extremely rigorous software testing procedures for many operating systems and communication functions. Many NASA-based operations interact with and identify each other through command programs. This enables many people who work at NASA to check and evaluate functional systems overall. Programs containing command software enable hardware engineering and system operations to function together much more easily.
The software's license gives the user the right to use the software in the licensed environment, and in the case of free software licenses, also grants other rights such as the right to make copies.
Proprietary software can be divided into two types: freeware, which the user may obtain and use free of charge, and software available for a fee, which may only be used legally on purchase of a license.
Open-source software, on the other hand, comes with a free software license, granting the recipient the rights to modify and redistribute the software.
Software patents, like other types of patents, are theoretically supposed to give an inventor an exclusive, time-limited license for a "detailed idea (e.g. an algorithm) on how to implement" a piece of software, or a component of a piece of software. Ideas for useful things that software could "do", and user "requirements", are not supposed to be patentable, and concrete implementations (i.e. the actual software packages implementing the patent) are not supposed to be patentable either—the latter are already covered by copyright, generally automatically. So software patents are supposed to cover the middle area, between requirements and concrete implementation. In some countries, a requirement for the claimed invention to have an effect on the physical world may also be part of the requirements for a software patent to be held valid—although since "all" useful software has effects on the physical world, this requirement may be open to debate. Meanwhile, American copyright law was applied to various aspects of the writing of the software code.
Software patents are controversial in the software industry, with many people holding different views about them. One of the sources of controversy is that the aforementioned split between initial ideas and patent does not seem to be honored in practice by patent lawyers—for example the patent for Aspect-Oriented Programming (AOP), which purported to claim rights over "any" programming tool implementing the idea of AOP, howsoever implemented. Another source of controversy is the effect on innovation, with many distinguished experts and companies arguing that software is such a fast-moving field that software patents merely create vast additional litigation costs and risks, and actually retard innovation. In the case of debates about software patents outside the United States, the argument has been made that large American corporations and patent lawyers are likely to be the primary beneficiaries of allowing or continuing to allow software patents.
Design and implementation of software varies depending on the complexity of the software. For instance, the design and creation of Microsoft Word took much more time than designing and developing Microsoft Notepad because the latter has much more basic functionality.
Software is usually designed and created (also coded/written/programmed) in integrated development environments (IDEs) like Eclipse, IntelliJ and Microsoft Visual Studio that can simplify the process and compile the software (if applicable). As noted in a different section, software is usually created on top of existing software and the application programming interface (API) that the underlying software provides, like GTK+, JavaBeans or Swing. Libraries (APIs) can be categorized by their purpose. For instance, the Spring Framework is used for implementing enterprise applications, the Windows Forms library is used for designing graphical user interface (GUI) applications like Microsoft Word, and Windows Communication Foundation is used for designing web services. When a program is designed, it relies upon the API. For instance, a Microsoft Windows desktop application might call API functions in the .NET Windows Forms library, like "Form1.Close()" and "Form1.Show()", to close or open the application. Without these APIs, the programmer would need to write these functionalities entirely themselves. Companies like Oracle and Microsoft provide their own APIs, so many applications are written using their software libraries, which usually have numerous APIs in them.
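A rough Python analogue of the Windows Forms calls mentioned above, sketched with the standard tkinter GUI library (it assumes a graphical display is available):

```python
import tkinter as tk

# The windowing API supplies ready-made show/close behaviour, so the
# programmer does not have to implement window management from scratch.
window = tk.Tk()
window.title("API demo")

# Clicking the button calls the library's destroy() to close the window,
# much as a Windows Forms application would call Form1.Close().
tk.Button(window, text="Close", command=window.destroy).pack()

window.mainloop()
```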
Data structures such as hash tables, arrays, and binary trees, and algorithms such as quicksort, can be useful for creating software.
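For instance, a brief Python sketch of the quicksort algorithm over a list, together with a hash table (dictionary) lookup:

```python
def quicksort(items):
    """Recursive quicksort: a standard algorithm over a list (array)."""
    if len(items) <= 1:
        return items
    pivot = items[len(items) // 2]
    left = [x for x in items if x < pivot]
    middle = [x for x in items if x == pivot]
    right = [x for x in items if x > pivot]
    return quicksort(left) + middle + quicksort(right)

# A hash table (Python dict) gives constant-time lookup on average.
ages = {"Ada": 36, "Alan": 41}

print(quicksort([33, 10, 55, 71, 29]))  # [10, 29, 33, 55, 71]
print(ages["Ada"])                      # 36
```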
Computer software has special economic characteristics that make its design, creation, and distribution different from most other economic goods.
A person who creates software is called a programmer, software engineer or software developer, terms that all have a similar meaning. More informal terms for programmer also exist, such as "coder" and "hacker", although use of the latter word may cause confusion, because it is more often used to mean someone who illegally breaks into computer systems.
A great variety of software companies and programmers in the world comprise the software industry, which can be quite profitable: Bill Gates, the co-founder of Microsoft, was the richest person in the world in 2009, largely due to his ownership of a significant number of shares in Microsoft, the company responsible for the Microsoft Windows and Microsoft Office software products, both market leaders in their respective product categories.
Non-profit software organizations include the Free Software Foundation, GNU Project and the Mozilla Foundation. Software standards organizations like the W3C and IETF develop recommended software standards such as XML, HTTP and HTML, so that software can interoperate through these standards.
Other well-known large software companies include Google, IBM, TCS, Infosys, Wipro, HCL Technologies, Oracle, Novell, SAP, Symantec, Adobe Systems, Sidetrade and Corel, while small companies often provide innovation. | https://en.wikipedia.org/wiki?curid=5309 |
Computer programming
Computer programming is the process of designing and building an executable computer program to accomplish a specific computing result. Programming involves tasks such as: analysis, generating algorithms, profiling algorithms' accuracy and resource consumption, and the implementation of algorithms in a chosen programming language (commonly referred to as coding). The source code of a program is written in one or more languages that are intelligible to programmers, rather than machine code, which is directly executed by the central processing unit. The purpose of programming is to find a sequence of instructions that will automate the performance of a task (which can be as complex as an operating system) on a computer, often for solving a given problem. Proficient programming thus often requires expertise in several different subjects, including knowledge of the application domain, specialized algorithms, and formal logic.
Tasks accompanying and related to programming include: testing, debugging, source code maintenance, implementation of build systems, and management of derived artifacts, such as the machine code of computer programs. These might be considered part of the programming process, but often the term "software development" is used for this larger process, with the term "programming", "implementation", or "coding" reserved for the actual writing of code. "Software engineering" combines engineering techniques with software development practices. "Reverse engineering" is the opposite process. A "hacker" is any skilled computer expert who uses their technical knowledge to overcome a problem, but in common language it can also mean a "security hacker".
Programmable devices have existed for centuries. As early as the 9th century, a programmable music sequencer was invented by the Persian Banu Musa brothers, who described an automated mechanical flute player in the "Book of Ingenious Devices". In 1206, the Arab engineer Al-Jazari invented a programmable drum machine in which a musical mechanical automaton could be made to play different rhythms and drum patterns via pegs and cams. In 1801, the Jacquard loom could produce entirely different weaves by changing the "program" – a series of pasteboard cards with holes punched in them.
Code-breaking algorithms have also existed for centuries. In the 9th century, the Arab mathematician Al-Kindi described a cryptographic algorithm for deciphering encrypted code, in "A Manuscript On Deciphering Cryptographic Messages". He gave the first description of cryptanalysis by frequency analysis, the earliest code-breaking algorithm.
The first computer program is generally dated to 1843, when mathematician Ada Lovelace published an algorithm to calculate a sequence of Bernoulli numbers, intended to be carried out by Charles Babbage's Analytical Engine.
In the 1880s Herman Hollerith invented the concept of storing "data" in machine-readable form. Later a control panel (plugboard) added to his 1906 Type I Tabulator allowed it to be programmed for different jobs, and by the late 1940s, unit record equipment such as the IBM 602 and IBM 604 were programmed by control panels in a similar way, as were the first electronic computers. However, with the concept of the stored-program computer introduced in 1949, both programs and data were stored and manipulated in the same way in computer memory.
Machine code was the language of early programs, written in the instruction set of the particular machine, often in binary notation. Assembly languages were soon developed that let the programmer specify instructions in a text format (e.g., ADD X, TOTAL), with abbreviations for each operation code and meaningful names for specifying addresses. However, because an assembly language is little more than a different notation for a machine language, any two machines with different instruction sets also have different assembly languages.
High-level languages made the process of developing a program simpler and more understandable, and less bound to the underlying hardware. FORTRAN, the first widely used high-level language to have a functional implementation, came out in 1957 and many other languages were soon developed – in particular, COBOL aimed at commercial data processing, and Lisp for computer research.
Programs were mostly still entered using punched cards or paper tape. See computer programming in the punch card era. By the late 1960s, data storage devices and computer terminals became inexpensive enough that programs could be created by typing directly into the computers. Text editors (programs themselves) were developed that allowed changes and corrections to be made much more easily than with punched cards.
Whatever the approach to development may be, the final program must satisfy some fundamental properties. Among the most important are reliability, robustness, usability, portability, maintainability, and efficiency (performance).
In computer programming, readability refers to the ease with which a human reader can comprehend the purpose, control flow, and operation of source code. It affects the aspects of quality above, including portability, usability and most importantly maintainability.
Readability is important because programmers spend the majority of their time reading, trying to understand and modifying existing source code, rather than writing new source code. Unreadable code often leads to bugs, inefficiencies, and duplicated code. A study found that a few simple readability transformations made code shorter and drastically reduced the time to understand it.
Following a consistent programming style often helps readability. However, readability is more than just programming style. Many factors, having little or nothing to do with the ability of the computer to efficiently compile and execute the code, contribute to readability. Some of these factors include indentation style, comments, decomposition, and naming conventions for objects such as variables, classes, and functions.
The presentation aspects of this (such as indents, line breaks, color highlighting, and so on) are often handled by the source code editor, but the content aspects reflect the programmer's talent and skills.
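A small invented example in Python of how naming and comments alone can change readability, without changing what the code computes:

```python
# Hard to read: cryptic names, no structure, no explanation.
def f(a, b):
    return (a * b) / 2

# Easier to read: descriptive names and a docstring carry the intent.
def triangle_area(base, height):
    """Return the area of a triangle from its base and height."""
    return (base * height) / 2
```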
Various visual programming languages have also been developed with the intent to resolve readability concerns by adopting non-traditional approaches to code structure and display. Integrated development environments (IDEs) aim to integrate all such help. Techniques like code refactoring can also enhance readability.
The academic field and the engineering practice of computer programming are both largely concerned with discovering and implementing the most efficient algorithms for a given class of problem. For this purpose, algorithms are classified into "orders" using so-called Big O notation, which expresses resource use, such as execution time or memory consumption, in terms of the size of an input. Expert programmers are familiar with a variety of well-established algorithms and their respective complexities and use this knowledge to choose algorithms that are best suited to the circumstances.
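As an illustrative Python sketch, the two functions below answer the same membership question at different costs in Big O terms: a linear scan runs in O(n) time, while an average-case O(1) hash lookup trades extra memory for speed:

```python
# Linear search scans every element: O(n) time per lookup.
def contains_linear(items, target):
    for item in items:
        if item == target:
            return True
    return False

# A hash-based set answers the same question in O(1) average time,
# at the cost of O(n) extra memory to build the set.
def contains_hashed(item_set, target):
    return target in item_set

items = list(range(1_000_000))
item_set = set(items)
print(contains_linear(items, 999_999))     # slow: scans the whole list
print(contains_hashed(item_set, 999_999))  # fast: one hash lookup
```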
"Programming a Computer for Playing Chess" was a 1950 paper that evaluated a "minimax" algorithm that is part of the history of algorithmic complexity; a course on IBM's Deep Blue (chess computer) is part of the computer science curriculum at Stanford University.
The first step in most formal software development processes is requirements analysis, followed by modeling, implementation, testing, and failure elimination (debugging). There are many differing approaches to each of these tasks. One approach popular for requirements analysis is use case analysis. Many programmers use forms of agile software development, where the various stages of formal software development are integrated into short cycles that take a few weeks rather than years.
Popular modeling techniques include Object-Oriented Analysis and Design (OOAD) and Model-Driven Architecture (MDA). The Unified Modeling Language (UML) is a notation used for both the OOAD and MDA.
A similar technique used for database design is Entity-Relationship Modeling (ER Modeling).
Implementation techniques include imperative languages (object-oriented or procedural), functional languages, and logic languages.
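A brief Python sketch contrasting the imperative and functional styles on the same task, squaring a list of numbers:

```python
# Imperative (procedural) style: explicit loop and mutable state.
def squares_imperative(numbers):
    result = []
    for n in numbers:
        result.append(n * n)
    return result

# Functional style: express the result as a mapping, with no mutation.
def squares_functional(numbers):
    return list(map(lambda n: n * n, numbers))

assert squares_imperative([1, 2, 3]) == squares_functional([1, 2, 3]) == [1, 4, 9]
```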
It is very difficult to determine which modern programming languages are the most popular. Methods of measuring programming language popularity include: counting the number of job advertisements that mention the language, the number of books sold and courses teaching the language (this overestimates the importance of newer languages), and estimates of the number of existing lines of code written in the language (this underestimates the number of users of business languages such as COBOL).
Some languages are very popular for particular kinds of applications, while some languages are regularly used to write many different kinds of applications. For example, COBOL is still strong in corporate data centers often on large mainframe computers, Fortran in engineering applications, scripting languages in Web development, and C in embedded software. Many applications use a mix of several languages in their construction and use. New languages are generally designed around the syntax of a prior language with new functionality added, (for example C++ adds object-orientation to C, and Java adds memory management and bytecode to C++, but as a result, loses efficiency and the ability for low-level manipulation).
Debugging is a very important task in the software development process since having defects in a program can have significant consequences for its users. Some languages are more prone to some kinds of faults because their specification does not require compilers to perform as much checking as other languages. Use of a static code analysis tool can help detect some possible problems. Normally the first step in debugging is to attempt to reproduce the problem. This can be a non-trivial task, for example as with parallel processes or some unusual software bugs. Also, specific user environment and usage history can make it difficult to reproduce the problem.
After the bug is reproduced, the input of the program may need to be simplified to make it easier to debug. For example, a bug in a compiler can make it crash when parsing some large source file. However, after simplification of the test case, only a few lines from the original source file may be sufficient to reproduce the same crash. Such simplification can be done manually, using a divide-and-conquer approach: the programmer tries to remove some parts of the original test case and checks whether the problem still exists. When debugging a problem in a GUI, the programmer can try to skip some user interaction from the original problem description and check whether the remaining actions are sufficient for the bug to appear.
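A toy Python sketch of this divide-and-conquer simplification; the reproduces_bug oracle is hypothetical and stands in for re-running the program under test on a candidate input:

```python
def reproduces_bug(test_case):
    # Hypothetical oracle: returns True if this input still triggers the
    # failure. Here we pretend the bug needs the value 7 to be present.
    return 7 in test_case

def minimize(test_case):
    """Shrink a failing input by repeatedly discarding halves that are
    not needed to reproduce the bug (divide and conquer)."""
    changed = True
    while changed and len(test_case) > 1:
        changed = False
        half = len(test_case) // 2
        for part in (test_case[half:], test_case[:half]):
            if reproduces_bug(part):
                test_case = part
                changed = True
                break
    return test_case

print(minimize(list(range(100))))  # prints [7]: a far smaller failing input
```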
Debugging is often done with IDEs like Eclipse, Visual Studio, Xcode, KDevelop and NetBeans. Standalone debuggers like GDB are also used, and these often provide less of a visual environment, usually using a command line. Some text editors, such as Emacs, allow GDB to be invoked through them, to provide a visual environment.
Different programming languages support different styles of programming (called "programming paradigms"). The choice of language used is subject to many considerations, such as company policy, suitability to task, availability of third-party packages, or individual preference. Ideally, the programming language best suited for the task at hand will be selected. Trade-offs from this ideal involve finding enough programmers who know the language to build a team, the availability of compilers for that language, and the efficiency with which programs written in a given language execute. Languages form an approximate spectrum from "low-level" to "high-level"; "low-level" languages are typically more machine-oriented and faster to execute, whereas "high-level" languages are more abstract and easier to use but execute less quickly. It is usually easier to code in "high-level" languages than in "low-level" ones.
Allen Downey, in his book "How To Think Like A Computer Scientist", writes that although the details look different in different languages, a few basic kinds of instructions appear in just about every language: input, output, math, conditional execution, and repetition.
Many computer languages provide a mechanism to call functions provided by shared libraries. Provided the functions in a library follow the appropriate run-time conventions (e.g., method of passing arguments), then these functions may be written in any other language.
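For example, a Python sketch using the standard ctypes module to call a C function from a shared library; the library name "libc.so.6" is an assumption that holds on typical Linux systems:

```python
import ctypes

# Load the C standard library (name assumed for a typical Linux system;
# it would differ on Windows or macOS).
libc = ctypes.CDLL("libc.so.6")

# Call the C function strlen() from Python, across the language boundary,
# following the platform's argument-passing (calling) conventions.
length = libc.strlen(b"shared library")
print(length)  # 14
```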
Computer programmers are those who write computer software. Their jobs usually involve coding, debugging, documentation, integration, maintenance, requirements analysis, software architecture, software testing, and specification. | https://en.wikipedia.org/wiki?curid=5311
The Consolation of Philosophy
The Consolation of Philosophy () is a philosophical work by the Roman statesman Boethius, written around the year 524. It has been described as the single most important and influential work in the West on Medieval and early Renaissance Christianity, as well as the last great Western work of the Classical Period.
"The Consolation of Philosophy" was written in AD 523 during a one-year imprisonment Boethius served while awaiting trial—and eventual execution–for the alleged crime of treason under the Ostrogothic King Theodoric the Great. Boethius was at the very heights of power in Rome, holding the prestigious office of "magister officiorum", and was brought down by treachery. This experience inspired the text, which reflects on how evil can exist in a world governed by God (the problem of theodicy), and how happiness is still attainable amidst fickle fortune, while also considering the nature of happiness and God. It was described in 1891 as "by far the most interesting example of prison literature the world has ever seen."
Boethius writes the book as a conversation between himself and Lady Philosophy. Lady Philosophy consoles Boethius by discussing the transitory nature of fame and wealth ("no man can ever truly be secure until he has been forsaken by Fortune"), and the ultimate superiority of things of the mind, which she calls the "one true good". She contends that happiness comes from within, and that virtue is all that one truly has, because it is not imperilled by the vicissitudes of fortune.
Boethius engages questions such as the nature of predestination and free will, why evil men often prosper and good men fall into ruin, human nature, virtue, and justice. He speaks about the nature of free will and determinism when he asks whether God knows and sees all, or whether man has free will. On human nature, Boethius says that humans are essentially good and only when they give in to "wickedness" do they "sink to the level of being an animal". On justice, he says criminals are not to be abused, but rather treated with sympathy and respect, using the analogy of doctor and patient to illustrate the ideal relationship between prosecutor and criminal.
In the "Consolation", Boethius answered religious questions without reference to Christianity, relying solely on natural philosophy and the Classical Greek tradition. He believed in the correspondence between faith and reason. The truths found in Christianity would be no different from the truths found in philosophy. In the words of Henry Chadwick, "If the "Consolation" contains nothing distinctively Christian, it is also relevant that it contains nothing specifically pagan either...[it] is a work written by a Platonist who is also a Christian."
Boethius repeats the Macrobius model of the Earth in the center of a spherical cosmos.
From the Carolingian epoch to the end of the Middle Ages and beyond, it was one of the most popular and influential philosophical works, read by statesmen, poets, and historians, as well as by philosophers and theologians. It is through Boethius that much of the thought of the Classical period was made available to the Western Medieval world. It has often been said that Boethius was the "last of the Romans and the first of the Scholastics".
The philosophical message of the book fits well with the religious piety of the Middle Ages. Readers were encouraged not to seek worldly goods such as money and power, but to seek internalized virtues. Evil had a purpose: to provide a lesson to help one change for the good, while suffering from evil was seen as virtuous. Because God ruled the universe through Love, prayer to God and the application of Love would lead to true happiness. The Middle Ages, with their vivid sense of an overruling fate, found in Boethius an interpretation of life closely akin to the spirit of Christianity. "The Consolation of Philosophy" stands, by its note of fatalism and its affinities with the Christian doctrine of humility, midway between the pagan philosophy of Seneca the Younger and the later Christian philosophy of consolation represented by Thomas à Kempis.
The book is heavily influenced by Plato and his dialogues (as was Boethius himself). Its popularity can in part be explained by its Neoplatonic and Christian ethical messages, although current scholarly research is still far from clear exactly why and how the work became so vastly popular in the Middle Ages.
Translations into the vernacular were done by famous notables, including King Alfred (Old English), Jean de Meun (Old French), Geoffrey Chaucer (Middle English), Queen Elizabeth I (Early Modern English), and Notker Labeo (Old High German).
Found within the "Consolation" are themes that have echoed throughout the Western canon: the female figure of wisdom that informs Dante, the ascent through the layered universe that is shared with Milton, the reconciliation of opposing forces that find their way into Chaucer in The Knight's Tale, and the Wheel of Fortune so popular throughout the Middle Ages.
Citations from it occur frequently in Dante's "Divina Commedia". Of Boethius, Dante remarked: "The blessed soul who exposes the deceptive world to anyone who gives ear to him."
Boethian influence can be found nearly everywhere in Geoffrey Chaucer's poetry, e.g. in "Troilus and Criseyde", "The Knight's Tale", "The Clerk's Tale", "The Franklin's Tale", "The Parson's Tale" and "The Tale of Melibee", in the character of Lady Nature in "The Parliament of Fowls" and some of the shorter poems, such as "Truth", "The Former Age" and "Lak of Stedfastnesse". Chaucer translated the work in his "Boece".
The Italian composer Luigi Dallapiccola used some of the text in his choral work "Canti di prigionia" (1938). The Australian composer Peter Sculthorpe quoted parts of it in his opera or music theatre work "Rites of Passage" (1972–73), which was commissioned for the opening of the Sydney Opera House but was not ready in time.
Tom Shippey in "The Road to Middle-earth" says how “Boethian” much of the treatment of evil is in Tolkien's "The Lord of the Rings". Shippey says that Tolkien knew well the translation of Boethius that was made by King Alfred and he quotes some “Boethian” remarks from Frodo, Treebeard and Elrond.
Boethius and "Consolatio Philosophiae" are cited frequently by the main character Ignatius J. Reilly in the Pulitzer Prize-winning "A Confederacy of Dunces" (1980).
It is a prosimetrical text, meaning that it is written in alternating sections of prose and metered verse. In the course of the text, Boethius displays a virtuosic command of the forms of Latin poetry. It is classified as a Menippean satire, a fusion of allegorical tale, platonic dialogue, and lyrical poetry.
In the 20th century there were close to four hundred manuscripts still surviving, a testament to its popularity.
Hundreds of Latin songs were recorded in neumes from the ninth century through to the thirteenth century, including settings of the poetic passages from Boethius's "The Consolation of Philosophy". The music of this song repertory had long been considered irretrievably lost because the notational signs indicated only melodic outlines, relying on now-lapsed oral traditions to fill in the missing details. However, research conducted by Dr Sam Barrett at the University of Cambridge, extended in collaboration with medieval music ensemble Sequentia, has shown that principles of musical setting for this period can be identified, providing crucial information to enable modern realisations. Sequentia performed the world premiere of the reconstructed songs from Boethius's "The Consolation of Philosophy" at Pembroke College, Cambridge, in April 2016, bringing to life music not heard in over 1,000 years; a number of the songs were subsequently recorded on the CD "Boethius: Songs of Consolation. Metra from 11th-Century Canterbury" (Glossa, 2018). A website launched by the University of Cambridge in 2018 provides further details of the reconstruction process, bringing together manuscripts, reconstructions, and video resources. | https://en.wikipedia.org/wiki?curid=5312 |
Crouching Tiger, Hidden Dragon
Crouching Tiger, Hidden Dragon () is a 2000 "wuxia" film directed by Ang Lee and written by Wang Hui-ling, James Schamus and Kuo Jung Tsai, based on the Chinese novel by Wang Dulu. The film features an international cast of actors of Chinese ethnicity, including Chow Yun-fat, Michelle Yeoh, Zhang Ziyi and Chang Chen.
A multinational venture, the film was made on a US$17 million budget, and was produced by Asian Union Film & Entertainment, China Film Co-Productions Corporation, Columbia Pictures Film Production Asia, Edko Films, Good Machine International, and Zoom Hunt Productions. With dialogue in Mandarin, subtitled for various markets, "Crouching Tiger, Hidden Dragon" became a surprise international success, grossing $213.5 million worldwide. It grossed US$128 million in the United States, becoming the highest-grossing foreign-language film produced overseas in American history.
The film premiered at the Cannes Film Festival on May 18, 2000, and was theatrically released in the United States on December 8. An overwhelming critical and commercial success, "Crouching Tiger, Hidden Dragon" won over 40 awards and was nominated for 10 Academy Awards in 2001, including Best Picture, and won Best Foreign Language Film, Best Art Direction, Best Original Score and Best Cinematography, receiving the most nominations ever for a non-English language film at the time, until 2018's "Roma" tied this record. The film also won four BAFTAs and two Golden Globe Awards, one for Best Foreign Film. Along with its awards success, "Crouching Tiger" continues to be hailed as one of the greatest and most influential films. The film has been praised for its story, direction, and cinematography, and for its martial arts sequences.
In 18th-century Qing dynasty China, Li Mu Bai is an accomplished Wudang swordsman, and Yu Shu Lien heads a private security company. Yu Shu Lien and Li Mu Bai have feelings for each other, but because Shu Lien had been engaged to Mu Bai's close friend, Meng Sizhao, before his death, Shu Lien and Mu Bai feel bound by loyalty to Meng Sizhao and have not acted on their feelings for one another. Mu Bai, choosing to retire, asks Shu Lien to give his sword "Green Destiny" to their benefactor Sir Te in Beijing. Long ago, Mu Bai's teacher was killed by Jade Fox, a woman who sought to learn Wudang skills. While at Sir Te's place, Shu Lien makes the acquaintance of Jen Yu, who is the daughter of rich and powerful Governor Yu and is about to get married.
One evening, a masked thief sneaks into Sir Te's estate and steals the Green Destiny. Sir Te's servant Master Bo and Shu Lien trace the theft to Governor Yu's compound, where Jade Fox had been posing as Jen's governess for many years. Soon after, Mu Bai arrives in Beijing and discusses the theft with Shu Lien. Master Bo makes the acquaintance of Inspector Tsai, a police investigator from the provinces, and his daughter May, who have come to Beijing in pursuit of Fox. Fox challenges the pair and Master Bo to a showdown that night. Following a protracted battle, the group is on the verge of defeat when Mu Bai arrives and outmaneuvers Fox. Before Mu Bai can kill Fox, the masked thief reappears and helps Fox. Fox kills Tsai before fleeing with the thief (who is revealed to be Jen). After seeing Jen fight Mu Bai, Fox realizes Jen had been secretly studying the Wudang manual and had surpassed her in combat skills.
At night, a desert bandit named Lo breaks into Jen's bedroom and asks her to leave with him. A flashback reveals that in the past, when Governor Yu and his family were traveling in the western deserts, Lo and his bandits had raided Jen's caravan and Lo had stolen her comb. She pursued him to his desert cave to get her comb back. However, the pair soon fell passionately in love. Lo eventually convinced Jen to return to her family, though not before telling her a legend of a man who jumped off a cliff to make his wishes come true. Because the man's heart was pure, he did not die. Lo came to Beijing to persuade Jen not to go through with her arranged marriage. However, Jen refuses to leave with him. Later, Lo interrupts Jen's wedding procession, begging her to leave with him. Nearby, Shu Lien and Mu Bai convince Lo to wait for Jen at Mount Wudang, where he will be safe from Jen's family, who are furious with him. Jen runs away from her husband on their wedding night before the marriage could be consummated. Disguised in male clothing, she is accosted at an inn by a large group of warriors; armed with the Green Destiny and her own superior combat skills, she emerges victorious.
Jen visits Shu Lien, who tells her that Lo is waiting for her at Mount Wudang. After an angry exchange, the two women engage in a duel. Shu Lien is the superior fighter, but Jen wields the Green Destiny: the sword destroys each weapon that Shu Lien wields, until Shu Lien finally manages to defeat Jen with a broken sword. When Shu Lien shows mercy, Jen wounds Shu Lien in the arm. Mu Bai arrives and pursues Jen into a bamboo forest. Mu Bai confronts Jen and offers to take her as his student. She arrogantly promises to accept him as her teacher if he can take Green Destiny from her in three moves. Mu Bai is able to take the sword in only one move, but Jen goes back on her word to accept him as teacher. Mu Bai throws the sword over a waterfall, Jen dives after it, and is then rescued by Fox. Fox puts Jen into a drugged sleep and places her in a cavern; Mu Bai and Shu Lien discover her there. Fox suddenly reappears and attacks the others with poisoned darts. Mu Bai blocks the needles with his sword and avenges his master's death by mortally wounding Fox, only to realize that one of the darts hit him in the neck. Fox dies, confessing that her goal had been to kill Jen because Jen had hidden the secrets of Wudang's best fighting techniques from her.
As Jen leaves to prepare an antidote for the poisoned dart, Mu Bai prepares to die. With his last breaths, he finally confesses his love for Shu Lien. He dies in her arms as Jen returns, too late to save him. The Green Destiny is returned to Sir Te. Jen later goes to Mount Wudang and spends one last night with Lo. The next morning, Lo finds Jen standing on a bridge overlooking the edge of the mountain. In an echo of the legend that they spoke about in the desert, she asks him to make a wish. He wishes for them to be together again, back in the desert, and Jen jumps off the bridge.
The name "Crouching Tiger Hidden Dragon" is a literal translation of the Chinese idiom "臥虎藏龙" which describes a place or situation that is full of unnoticed masters. It is from a poem of the ancient Chinese poet Yu Xin's (513–581) that reads "暗石疑藏虎,盤根似臥龍", which means "behind the rock in the dark probably hides a tiger, and the coiling giant root resembles a crouching dragon." Besides, The title Crouching Tiger, Hidden Dragon has several layers of meanings. On the most obvious level, the Chinese characters in the title connect to the narrative that the last character in Xiaohu and Jiaolong's names mean "Tiger" and "Dragon", respectively. On another level, the Chinese idiomatic phrase “卧虎藏龙 ( Wo Hu Cang Long)” (Crouching tiger hidden dragon) is an expression referring to the undercurrents of emotion, passion, and secret desires that lie beneath the surface of polite society and civil behavior, which alludes to the film's storyline.
Films in the "wuxia" (martial arts chivalry) genre, and subsequent kung fu spin-offs, can be considered masculinist. The success of the Disney animated feature "Mulan" (1998) popularized the image of the Chinese woman warrior. The storyline of this film is mostly driven by its three female characters. In particular, Yu Jiaolong is driven by her desire to be free from the gender role imposed on her, while Yu Shu Lien, herself oppressed by that gender role, tries to lead Jiaolong back into the role deemed appropriate for her. Some prominent martial arts styles were traditionally held to have been originated by women, e.g. Wing Chun. The film's title, referring to masters one does not notice, necessarily includes mostly women, and suggests the advantage of a female bodyguard.
A teacher's desire to have a worthy student, the obligations between a student and a master, and tensions in these relationships are central to the characters' motives, conflicts between the characters, and the unfolding of the film's plot. Li Mu Bai is burdened with the responsibility for avenging his master's death, and turns his back on retirement to live up to this obligation. His fascination with the prospect of having Jen as a disciple also motivates his behavior, and that of Jade Fox.
Regarding conflicts in the student-teacher relationship, the potential for exploitation created by the subordinate position of the student and the tensions that exist when a student surpasses or resists a teacher are explored. Jen hides her mastery of martial arts from her teacher, Jade Fox, which leads both to their parting of ways and to Jade Fox's attempt on Jen's life. At the same time, Jade Fox tried to learn Wudang martial arts from Li Mu Bai's master but was refused, even though she tried convincing him by sleeping with him.
Poison is also a significant theme in the film. The Chinese word "毒" means not only physical poison, but also cruelty and sinfulness. In the world of martial arts, poison is considered the act of one who is too cowardly and dishonorable to fight; and indeed, the only character who explicitly fits these characteristics is Jade Fox. The poison is a weapon of her bitterness, and quest for vengeance: she poisons the master of Wudang, attempts to poison Jen, and succeeds in killing Mu Bai using a poisoned needle. In further play on this theme by the director, Jade Fox, as she dies, refers to the poison from a young child, "the deceit of an eight-year-old girl," obviously referring to what she considers her own spiritual poisoning by her young apprentice Jen. Li Mu Bai himself warns that without guidance, Jen could become a "poison dragon".
The story is set in the Qing dynasty (1644–1912), but no exact time is specified. Lee sought to present a "China of the imagination" rather than an accurate vision of Chinese history. At the same time, he also wanted to make a film that Western audiences would want to see. Thus, the film strikes a balance between Eastern and Western aesthetics, and includes scenes of uncommon artistry for the average martial arts film, such as an airborne battle among wispy bamboo plants.
The film was originally written as five-part novel series by Wang Dulu starting in the late 1930s. The story presented in the film is adapted and condensed from the storyline of the fourth book in the series, "Crouching Tiger, Hidden Dragon".
Although its Academy Award was presented to Taiwan, "Crouching Tiger, Hidden Dragon" was in fact an international co-production between companies in four regions: the Chinese company China Film Co-Production Corporation; the American companies Columbia Pictures Film Production Asia, Sony Pictures Classics, and Good Machine; the Hong Kong company EDKO Film; and the Taiwanese Zoom Hunt International Productions Company, Ltd; as well as the unspecified United China Vision, and Asia Union Film and Entertainment Ltd., created solely for this film.
The film was made in Beijing, with location shooting in the Anhui, Hebei, Jiangsu, and Xinjiang provinces of China. The first phase of shooting was in the Gobi Desert, where it consistently rained. Director Ang Lee noted, "I didn't take one break in eight months, not even for half a day. I was miserable; I just didn't have the extra energy to be happy. Near the end, I could hardly breathe. I thought I was about to have a stroke." The stunt work was mostly performed by the actors themselves, and Ang Lee stated in an interview that computers were used "only to remove the safety wires that held the actors." "Most of the time you can see their faces," he added. "That's really them in the trees."
Another compounding issue was the difference between the accents of the four lead actors: Chow Yun-fat is from Hong Kong and speaks Cantonese natively; Michelle Yeoh is from Malaysia and grew up speaking English and Malay, so she learned the Mandarin lines phonetically; Chang Chen is from Taiwan and speaks Mandarin with a Taiwanese accent. Only Zhang Ziyi spoke with the native Mandarin accent that Ang Lee wanted. Chow Yun-fat said that on "the first day [of shooting], I had to do 28 takes just because of the language. That's never happened before in my life."
Because the film specifically targeted Western audiences rather than the domestic audiences who were already used to Wuxia films, English subtitles were needed. Ang Lee, who was educated in the West, personally edited the subtitles to ensure they were satisfactory for Western audiences.
The score was composed by Tan Dun, and originally performed by the Shanghai Symphony Orchestra, Shanghai National Orchestra, and Shanghai Percussion Ensemble. It also features many solo passages for cello, played by Yo-Yo Ma. The last track, "A Love Before Time", features Coco Lee, who later performed it at the Academy Awards. The music for the entire film was produced in two weeks.
The film was adapted into a video game, a comics series, and a 34-episode Taiwanese television series based on the original novel; the latter was released in 2004 as "New Crouching Tiger, Hidden Dragon" in the US and Canada.
The film was released on VHS and DVD on June 5, 2001 by Columbia TriStar Home Entertainment.
"Crouching Tiger, Hidden Dragon" was very well received in the Western world, receiving numerous awards. The review aggregator Rotten Tomatoes reported that 97% of critics gave the film positive reviews, based on 153 reviews with an average rating of 8.6/10. The website's critical consensus states: "The movie that catapulted Ang Lee into the ranks of upper echelon Hollywood filmmakers, "Crouching Tiger, Hidden Dragon" features a deft mix of amazing martial arts battles, beautiful scenery, and tasteful drama." Metacritic reported the film had an average score of 93 out of 100, based on 31 reviews, indicating "universal acclaim".
Some Chinese-speaking viewers were bothered by the accents of the leading actors. Neither Chow (a native Cantonese speaker) nor Yeoh (who was born and raised in Malaysia) spoke Mandarin as a mother tongue. All four main actors spoke with different accents: Chow with a Cantonese accent, Yeoh with a Malaysian accent, Chang Chen with a Taiwanese accent, and Zhang Ziyi with a Beijing accent. Yeoh responded to this complaint in a December 28, 2000, interview with "Cinescape". She argued, "My character lived outside of Beijing, and so I didn't have to do the Beijing accent." When the interviewer, Craig Reid, remarked, "My mother-in-law has this strange Sichuan-Mandarin accent that's hard for me to understand", Yeoh responded: "Yes, provinces all have their very own strong accents. When we first started the movie, Cheng Pei Pei was going to have her accent, and Chang Zhen was going to have his accent, and this person would have that accent. And in the end nobody could understand what they were saying. Forget about us, even the crew from Beijing thought this was all weird."
The film led to a boost in the popularity of Chinese wuxia films in the Western world, where they were previously little known, and led to films such as "House of Flying Daggers" and "Hero" being marketed towards Western audiences. The film also provided Zhang Ziyi with her breakthrough role; she noted:
The character of Lo, or "Dark Cloud" the desert bandit, influenced the development of the protagonist of the "Prince of Persia" series of video games.
The film is ranked at number 497 on "Empire"'s 2008 list of the 500 greatest movies of all time and at number 66 in the magazine's 100 Best Films of World Cinema, published in 2010.
In 2010, the Independent Film & Television Alliance selected the film as one of the 30 Most Significant Independent Films of the last 30 years.
In 2016, it was voted the 35th-best film of the 21st century as picked by 177 film critics from around the world.
"Film Journal" noted that "Crouching Tiger, Hidden Dragon" "pulled off the rare trifecta of critical acclaim, boffo box-office and gestalt shift", in reference to its ground-breaking success for a subtitled film in the American market.
In 2019, "The Guardian" ranked the film 51st in its 100 best films of the 21st century list.
Wu and Chan (2007) look at "Crouching Tiger, Hidden Dragon" as an example of "counter-flow", a film that has challenged Hollywood's grip on the film market. They argue that, as a product of globalization, the movie did not demonstrate a one-way flow based on Western ideology, but was multidirectional, with local resources able to influence the West and gain capital. Despite its international success and perceived ability to change the flow from East to West, however, there were still instances of Western adaptation in the movie, such as putting more emphasis on female characters to better balance gender roles between East and West. The script of the film was written between Taiwan and Hollywood, and in translating the film to English many cultural references were lost, which made maintaining the cultural authenticity of the film while still reaching out to the West very difficult. The thematic conflict throughout the movie between societal roles and personal desires contributes to the international reception of the film, as it resonates with both Eastern and Western audiences. Additionally, international networks were used in the production and promotion of the film, which were needed to achieve its global distribution, and additional marketing strategies were needed for the film to attract Western audiences, who were unfamiliar with the cultural products of the East.
The film premiered in cinemas on December 8, 2000, in limited release within the US. During its opening weekend, the film opened in 15th place, grossing $663,205 in business, showing at 16 locations. On January 12, 2001, "Crouching Tiger, Hidden Dragon" premiered in cinemas in wide release throughout the US grossing $8,647,295 in business, ranking in sixth place. The film "Save the Last Dance" came in first place during that weekend, grossing $23,444,930. The film's revenue dropped by almost 30% in its second week of release, earning $6,080,357. For that particular weekend, the film fell to eighth place screening in 837 theaters. "Save the Last Dance" remained unchanged in first place, grossing $15,366,047 in box-office revenue. During its final week in release, "Crouching Tiger, Hidden Dragon" opened in a distant 50th place with $37,233 in revenue. The film went on to top out domestically at $128,078,872 in total ticket sales through a 31-week theatrical run. Internationally, the film took in an additional $85,446,864 in box-office business for a combined worldwide total of $213,525,736. For 2000 as a whole, the film cumulatively ranked at a worldwide box-office performance position of 19.
Gathering widespread critical acclaim at the Toronto and New York film festivals, the film also became a favorite when Academy Awards nominations were announced in 2001. The film was, however, screened out of competition at the 2000 Cannes Film Festival. The film received ten Academy Award nominations, which was the highest ever for a non-English language film, up until it was tied by "Roma" (2018).
A direct-to-video sequel, "Crouching Tiger, Hidden Dragon: Sword of Destiny", was released in 2016. It was directed by Yuen Woo-ping, who was the action choreographer for the first film. It is a co-production between Pegasus Media, China Film Group Corporation, and the Weinstein Company. Unlike the original film, the sequel was filmed in English for international release and dubbed into Mandarin for Chinese releases.
"Sword of Destiny" is based on the book "Iron Knight, Silver Vase", the next (and last) novel in the Crane-Iron Pentalogy. It features a mostly new cast, headed by Donnie Yen. Michelle Yeoh reprised her role from the original. Zhang Ziyi was also approached to appear in "Sword of Destiny" but refused, stating that she would only appear in a sequel if Ang Lee were directing it.
In the United States, the sequel was for the most part not shown in theaters, instead being distributed via the video streaming service Netflix.
The theme of Janet Jackson's song "China Love" was related to the film by MTV News, in which Jackson sings of the daughter of an emperor in love with a warrior, unable to sustain relations when forced to marry into royalty.
The names of the pterosaur genus "Kryptodrakon" and the ceratopsian genus "Yinlong" (both meaning "hidden dragon" in Greek and Mandarin respectively) allude to the film.
Under the contract between Columbia Pictures and Ang Lee and Hsu Li-kong, Columbia agreed to invest US$6 million in filming, with the stipulation that revenues had to exceed six times that amount before the two parties would begin to receive dividends.
At first, Zhang Ziyi was not scheduled to play Jen (Jade Dragon); Shu Qi was widely regarded in the Hong Kong and Taiwanese media at the time as the leading candidate, but she gave up the role, reportedly because of family financial factors and commitments to multiple other films. Zhang Ziyi was later invited by the credit card company VISA to shoot a television commercial modeled on the film's martial arts sequences, and the advertisement was well received. | https://en.wikipedia.org/wiki?curid=5313
Charlemagne
Charlemagne (; ) or Charles the Great (2 April 748 – 28 January 814), numbered Charles I, was the King of the Franks from 768, the King of the Lombards from 774, and the Emperor of the Romans from 800. During the Early Middle Ages, he united the majority of western and central Europe. He was the first recognised emperor to rule from western Europe since the fall of the Western Roman Empire three centuries earlier. The expanded Frankish state that Charlemagne founded is called the Carolingian Empire. He was later canonised by Antipope Paschal III.
Charlemagne was the eldest son of Pepin the Short and Bertrada of Laon, born before their canonical marriage. He became king in 768 following his father's death, initially as co-ruler with his brother Carloman I. Carloman's sudden death in December 771 under unexplained circumstances left Charlemagne the sole ruler of the Frankish Kingdom. He continued his father's policy towards the papacy and became its protector, removing the Lombards from power in northern Italy and leading an incursion into Muslim Spain. He campaigned against the Saxons to his east, Christianising them upon penalty of death and leading to events such as the Massacre of Verden. He reached the height of his power in 800 when he was crowned "Emperor of the Romans" by Pope Leo III on Christmas Day at Old St. Peter's Basilica in Rome.
Charlemagne has been called the "Father of Europe" ("Pater Europae"), as he united most of Western Europe for the first time since the classical era of the Roman Empire and united parts of Europe that had never been under Frankish or Roman rule. His rule spurred the Carolingian Renaissance, a period of energetic cultural and intellectual activity within the Western Church. The Eastern Orthodox Church viewed Charlemagne less favourably due to his support of the filioque and the Pope's having preferred him as Emperor over the Byzantine Empire's first female Empress Irene of Athens. These and other disputes led to the eventual later split of Rome and Constantinople in the Great Schism of 1054.
Charlemagne died in 814 and was laid to rest in Aachen Cathedral in his imperial capital city of Aachen. He married at least four times and had three legitimate sons who lived to adulthood, but only the youngest of them, Louis the Pious, survived to succeed him. He also had numerous illegitimate children with his concubines.
He was named "Charles" in French and English, "Carolus" in Latin, after his grandfather, Charles Martel. Later Old French historians dubbed him "Charles le Magne" (Charles the Great), becoming Charlemagne in English after the Norman conquest of England. The epithet Carolus Magnus was widely used, leading to numerous translations into many languages of Europe.
Charles' achievements gave a new meaning to his name. In many European languages, particularly the Slavic languages and Hungarian, the very word for "king" derives from his name (e.g. Polish "król", Czech "král", Hungarian "király", Russian "korol'"). This development parallels that of the name of the Caesars in the original Roman Empire, which became "kaiser" and "tsar" (or "czar"), among others.
By the 6th century, the western Germanic tribe of the Franks had been Christianised, due in considerable measure to the Catholic conversion of Clovis I. Francia, ruled by the Merovingians, was the most powerful of the kingdoms that succeeded the Western Roman Empire. Following the Battle of Tertry, the Merovingians declined into powerlessness, for which they have been dubbed the "rois fainéants" ("do-nothing kings"). Almost all government powers were exercised by their chief officer, the mayor of the palace.
In 687, Pepin of Herstal, mayor of the palace of Austrasia, ended the strife between various kings and their mayors with his victory at Tertry. He became the sole governor of the entire Frankish kingdom. Pepin was the grandson of two important figures of the Austrasian Kingdom: Saint Arnulf of Metz and Pepin of Landen. Pepin of Herstal was eventually succeeded by his son Charles, later known as Charles Martel (Charles the Hammer).
After 737, Charles governed the Franks in lieu of a king and declined to call himself "king". Charles was succeeded in 741 by his sons Carloman and Pepin the Short, the father of Charlemagne. In 743, the brothers placed Childeric III on the throne to curb separatism in the periphery. He was the last Merovingian king. Carloman resigned office in 746, preferring to enter the church as a monk. Pepin brought the question of the kingship before Pope Zachary, asking whether it was logical for a king to have no royal power. The pope handed down his decision in 749, decreeing that it was better for Pepin to be called king, as he already held the powers of high office as Mayor, so as not to confuse the hierarchy. He therefore ordered Pepin to become the "true king".
In 750, Pepin was elected by an assembly of the Franks, anointed by the archbishop, and then raised to the office of king. The Pope branded Childeric III as "the false king" and ordered him into a monastery. The Merovingian dynasty was thereby replaced by the Carolingian dynasty, named after Charles Martel. In 753, Pope Stephen II fled from Italy to Francia, appealing to Pepin for assistance for the rights of St. Peter. He was supported in this appeal by Carloman, Charles' brother. In return, the pope could provide only legitimacy. He did this by again anointing and confirming Pepin, this time adding his young sons Carolus (Charlemagne) and Carloman to the royal patrimony. They thereby became heirs to the realm that already covered most of western Europe. In 754, Pepin accepted the Pope's invitation to visit Italy on behalf of St. Peter's rights, dealing successfully with the Lombards.
Under the Carolingians, the Frankish kingdom spread to encompass an area including most of Western Europe; the east-west division of the kingdom formed the basis for modern France and Germany. Orman portrays the Treaty of Verdun (843) between the warring grandsons of Charlemagne as the foundation event of an independent France under its first king Charles the Bald; an independent Germany under its first king Louis the German; and an independent intermediate state stretching from the Low Countries along the borderlands to south of Rome under Lothair I, who retained the title of emperor and the capitals Aachen and Rome without the jurisdiction. The middle kingdom had broken up by 890; it was partly absorbed into the Western kingdom (later France) and the Eastern kingdom (Germany), with the rest developing into smaller "buffer" states that exist between France and Germany to this day, namely the Benelux countries and Switzerland.
The most likely date of Charlemagne's birth is reconstructed from several sources. The date of 742—calculated from Einhard's date of death of January 814 at age 72—predates the marriage of his parents in 744. The year given in the "Annales Petaviani", 747, would be more likely, except that it contradicts Einhard and a few other sources in making Charlemagne sixty-seven years old at his death. The month and day of 2 April are based on a calendar from Lorsch Abbey.
In 747, Easter fell on 2 April, a coincidence that likely would have been remarked upon by chroniclers but was not. If Easter was being used as the beginning of the calendar year, then 2 April 747 could have been, by modern reckoning, April 748 (not on Easter). The date favoured by the preponderance of evidence is 2 April 742, based on Charlemagne's age at the time of his death. This date supports the concept that Charlemagne was technically an illegitimate child, since he was born out of wedlock, although that is not mentioned by Einhard; Pepin and Bertrada were bound by a private contract or "Friedelehe" at the time of his birth, but did not marry until 744.
Charlemagne's exact birthplace is unknown, although historians have suggested Aachen in modern-day Germany, and Liège (Herstal) in present-day Belgium as possible locations. Aachen and Liège are close to the region whence the Merovingian and Carolingian families originated. Other cities have been suggested, including Düren, Gauting, Mürlenbach, Quierzy, and Prüm. No definitive evidence resolves the question.
Charlemagne was the eldest child of Pepin the Short (714 – 24 September 768, reigned from 751) and his wife Bertrada of Laon (720 – 12 July 783), daughter of Caribert of Laon. Many historians consider Charlemagne (Charles) to have been illegitimate, although some state that this is arguable, because Pepin did not marry Bertrada until 744, which was after Charles' birth; this status did not exclude him from the succession.
Records name only Carloman, Gisela, and three short-lived children named Pepin, Chrothais and Adelais as his younger siblings.
The most powerful officers of the Frankish people, the Mayor of the Palace ("Maior Domus") and one or more kings ("rex", "reges"), were appointed by the election of the people. Elections were not periodic, but were held as required to elect officers "ad quos summa imperii pertinebat", "to whom the highest matters of state pertained". Evidently, interim decisions could be made by the Pope, which ultimately needed to be ratified by an assembly of the people that met annually.
Before he was elected king in 751, Pepin was initially a mayor, a high office he held "as though hereditary" ("velut hereditario fungebatur"). Einhard explains that "the honour" was usually "given by the people" to the distinguished, but Pepin the Great and his brother Carloman the Wise received it as though hereditary, as had their father, Charles Martel. There was, however, a certain ambiguity about quasi-inheritance. The office was treated as joint property: one Mayorship held by two brothers jointly. Each, however, had his own geographic jurisdiction. When Carloman decided to resign, becoming ultimately a Benedictine at Monte Cassino, the question of the disposition of his quasi-share was settled by the pope. He converted the mayorship into a kingship and awarded the joint property to Pepin, who gained the right to pass it on by inheritance.
This decision was not accepted by all family members. Carloman had consented to the temporary tenancy of his own share, which he intended to pass on to his son, Drogo, when the inheritance should be settled at someone's death. By the Pope's decision, in which Pepin had a hand, Drogo was to be disqualified as an heir in favour of his cousin Charles. He took up arms in opposition to the decision and was joined by Grifo, a half-brother of Pepin and Carloman, who had been given a share by Charles Martel, but was stripped of it and held under loose arrest by his half-brothers after an attempt to seize their shares by military action. Grifo perished in combat in the Battle of Saint-Jean-de-Maurienne while Drogo was hunted down and taken into custody.
On the death of Pepin, 24 September 768, the kingship passed jointly to his sons, "with divine assent" ("divino nutu"). According to the "Life", Pepin died in Paris. The Franks "in general assembly" ("generali conventu") gave them both the rank of a king ("reges") but "partitioned the whole body of the kingdom equally" ("totum regni corpus ex aequo partirentur"). The "annals" tell a slightly different version, with the king dying at St-Denis, near Paris. The two "lords" ("domni") were "elevated to kingship" ("elevati sunt in regnum"), Charles on 9 October in Noyon, Carloman on an unspecified date in Soissons. If born in 742, Charles was 26 years old, but he had been campaigning at his father's right hand for several years, which may help to account for his military skill. Carloman was 17.
The language, in either case, suggests that there were not two inheritances, which would have created distinct kings ruling over distinct kingdoms, but a single joint inheritance and a joint kingship tenanted by two equal kings, Charles and his brother Carloman. As before, distinct jurisdictions were awarded. Charles received Pepin's original share as Mayor: the outer parts of the kingdom bordering on the sea, namely Neustria, western Aquitaine, and the northern parts of Austrasia; while Carloman was awarded his uncle's former share, the inner parts: southern Austrasia, Septimania, eastern Aquitaine, Burgundy, Provence, and Swabia, lands bordering Italy. Whether these jurisdictions were joint shares that reverted to the surviving brother on the other's death, or heritable property that passed to the dead brother's descendants, was never definitively settled. It came up repeatedly over the succeeding decades until the grandsons of Charlemagne created distinct sovereign kingdoms.
Aquitaine under Rome had been in southern Gaul, Romanised and speaking a Romance language. Similarly, Hispania had been populated by peoples who spoke various languages, including Celtic, but the area was now populated primarily by Romance language speakers. Between Aquitaine and Hispania were the Euskaldunak, Latinised to Vascones, or Basques, living in the Basque country, Vasconia, which extended, according to the distribution of place names attributable to the Basques, most densely in the western Pyrenees but also as far south as the upper Ebro River in Spain and as far north as the Garonne River in France. The French name Gascony derives from Vasconia. The Romans were never able to subjugate Vasconia entirely; the parts they did control, in which they placed the region's first cities, were sources of legions for the Roman army, valued for their fighting abilities. The border with Aquitaine was at Toulouse.
In about 660, the Duchy of Vasconia united with the Duchy of Aquitaine to form a single realm under Felix of Aquitaine, governing from Toulouse. This was a joint kingship with a Basque Duke, Lupus I. "Lupus" is the Latin translation of the Basque Otsoa, "wolf". At Felix's death in 670 the joint property of the kingship reverted entirely to Lupus. As the Basques had no law of joint inheritance but practised primogeniture, Lupus in effect founded a hereditary dynasty of Basque rulers of an expanded Aquitaine.
The Latin chronicles of the end of Visigothic Hispania omit many details, leaving the identification of characters, the filling of gaps and the reconciliation of numerous contradictions to later historians. Muslim sources, however, present a more coherent view, such as in the "Ta'rikh iftitah al-Andalus" ("History of the Conquest of al-Andalus") by Ibn al-Qūṭiyya ("the son of the Gothic woman", referring to the granddaughter of Wittiza, the last Visigothic king of a united Hispania, who married a Moor). Ibn al-Qūṭiyya, who had another, much longer name, must have been relying to some degree on family oral tradition.
According to Ibn al-Qūṭiyya, Wittiza, the last Visigothic king of a united Hispania, died before his three sons, Almund, Romulo and Ardabast, reached maturity. Their mother was queen regent at Toledo, but Roderic, the army's chief of staff, staged a rebellion, capturing Córdoba. He chose to impose a joint rule over distinct jurisdictions on the true heirs. Evidence of a division of some sort can be found in the distribution of coins imprinted with the name of each king and in the king lists. Wittiza was succeeded by Roderic, who reigned for seven and a half years, followed by Achila (Aquila), who reigned three and a half years. If the reigns of both terminated with the incursion of the Saracens, then Roderic appears to have reigned a few years before the majority of Achila. The latter's kingdom is securely placed to the northeast, while Roderic seems to have taken the rest, notably modern Portugal.
The Saracens crossed the mountains to claim Ardo's Septimania, only to encounter the Basque dynasty of Aquitaine, always the allies of the Goths. Odo the Great of Aquitaine was at first victorious at the Battle of Toulouse in 721. Saracen troops gradually massed in Septimania and in 732 an army under Emir Abdul Rahman Al Ghafiqi advanced into Vasconia, defeating Odo at the Battle of the River Garonne. The Saracens took Bordeaux and were advancing towards Tours when Odo, powerless to stop them, appealed to his arch-enemy, Charles Martel, mayor of the Franks. In one of the first of the lightning marches for which the Carolingian kings became famous, Charles and his army appeared in the path of the Saracens between Tours and Poitiers, and in the Battle of Tours decisively defeated and killed al-Ghafiqi. The Moors returned twice more, each time suffering defeat at Charles' hands—at the River Berre near Narbonne in 737 and in the Dauphiné in 740. Odo's price for salvation from the Saracens was incorporation into the Frankish kingdom, a decision that was repugnant to him and also to his heirs.
Odo had ambiguously left the kingdom jointly to his two sons, Hunald and Hatto. After his father's death, Hunald I allied himself with free Lombardy, while Hatto, loyal to Francia, went to war with his brother over full possession. Victorious, Hunald blinded and imprisoned his brother, only to be so stricken by conscience that he resigned and entered the church as a monk to do penance. The story is told in the Annales Mettenses priores. His son Waifer took an early inheritance, becoming duke of Aquitaine and ratifying the alliance with Lombardy; he justified repeating his father's decision by arguing that any agreements with Charles Martel became invalid on Martel's death. Since Aquitaine was now, in the view of some, Pepin's inheritance because of the assistance Charles Martel had earlier given, Pepin and his young son Charles hunted down Waifer, who could only conduct a guerrilla war, and executed him.
Among the contingents of the Frankish army were Bavarians under Tassilo III, Duke of Bavaria, an Agilolfing, of the hereditary Bavarian ducal family. Grifo had installed himself as Duke of Bavaria, but Pepin replaced him with a member of the ducal family who was still a child, Tassilo, whose protector Pepin had become after the death of the boy's father. The loyalty of the Agilolfings was perpetually in question, but Pepin exacted numerous oaths of loyalty from Tassilo. However, Tassilo had married Liutperga, a daughter of Desiderius, king of Lombardy. At a critical point in the campaign, Tassilo left the field with all his Bavarians. Out of reach of Pepin, he repudiated all loyalty to Francia. Pepin had no chance to respond, as he grew ill and died within a few weeks of Waifer's execution.
The first event of the brothers' reign was the uprising of the Aquitainians and Gascons, in 769, in that territory split between the two kings. One year earlier, Pepin had finally defeated Waifer, Duke of Aquitaine, after waging a destructive, ten-year war against Aquitaine. Now, Hunald II led the Aquitainians as far north as Angoulême. Charles met Carloman, but Carloman refused to participate and returned to Burgundy. Charles went to war, leading an army to Bordeaux, where he set up a fort at Fronsac. Hunald was forced to flee to the court of Duke Lupus II of Gascony. Lupus, fearing Charles, turned Hunald over in exchange for peace, and Hunald was put in a monastery. Gascon lords also surrendered, and Aquitaine and Gascony were finally fully subdued by the Franks.
The brothers maintained lukewarm relations with the assistance of their mother Bertrada, but in 770 Charles signed a treaty with Duke Tassilo III of Bavaria and married a Lombard Princess (commonly known today as Desiderata), the daughter of King Desiderius, to surround Carloman with his own allies. Though Pope Stephen III first opposed the marriage with the Lombard princess, he found little to fear from a Frankish-Lombard alliance.
Less than a year after his marriage, Charlemagne repudiated Desiderata and married a 13-year-old Swabian named Hildegard. The repudiated Desiderata returned to her father's court at Pavia. Her father's wrath was now aroused, and he would have gladly allied with Carloman to defeat Charles. Before any open hostilities could be declared, however, Carloman died on 5 December 771, apparently of natural causes. Carloman's widow Gerberga fled to Desiderius' court with her sons for protection.
Charlemagne had eighteen children with eight of his ten known wives or concubines. Nonetheless, he had only four legitimate grandsons, the four sons of his fourth son, Louis. In addition, he had a grandson (Bernard of Italy, the only son of his third son, Pepin of Italy) who was illegitimate but included in the line of inheritance. Among his descendants are several royal dynasties, including the Habsburg, Capetian and Plantagenet dynasties. As a consequence, most if not all established European noble families can genealogically trace some of their background to Charlemagne.
During the first peace of any substantial length (780–782), Charles began to appoint his sons to positions of authority. In 781, during a visit to Rome, he made his two youngest sons kings, crowned by the Pope. The elder of these two, Carloman, was made the king of Italy, taking the Iron Crown that his father had first worn in 774, and in the same ceremony was renamed "Pepin" (not to be confused with Charlemagne's eldest, possibly illegitimate son, Pepin the Hunchback). The younger of the two, Louis, became King of Aquitaine. Charlemagne ordered Pepin and Louis to be raised in the customs of their kingdoms, and he gave their regents some control of their subkingdoms, but kept the real power, though he intended his sons to inherit their realms. He did not tolerate insubordination in his sons: in 792, he banished Pepin the Hunchback to Prüm Abbey because the young man had joined a rebellion against him.
Charles was determined to have his children educated, including his daughters, as his parents had instilled the importance of learning in him at an early age. His children were also taught skills in accord with their aristocratic status, which included training in riding and weaponry for his sons, and embroidery, spinning and weaving for his daughters.
The sons fought many wars on behalf of their father. Charles the Younger was mostly preoccupied with the Bretons, whose border he shared and who rose in revolt on at least two occasions, being easily put down. He also fought the Saxons on multiple occasions. In 805 and 806, he was sent into the Böhmerwald (modern Bohemia) to deal with the Slavs living there (Bohemian tribes, ancestors of the modern Czechs). He subjected them to Frankish authority and devastated the valley of the Elbe, forcing tribute from them. Pippin had to hold the Avar and Beneventan borders and fought the Slavs to his north. He was uniquely poised to fight the Byzantine Empire when that conflict arose after Charlemagne's imperial coronation and a Venetian rebellion. Finally, Louis was in charge of the Spanish March and fought the Duke of Benevento in southern Italy on at least one occasion. He took Barcelona in a great siege in 797.
Charlemagne kept his daughters at home with him and refused to allow them to contract sacramental marriages (though he originally condoned an engagement between his eldest daughter Rotrude and Constantine VI of Byzantium, this engagement was annulled when Rotrude was 11). Charlemagne's opposition to his daughters' marriages may have been intended to prevent the creation of cadet branches of the family to challenge the main line, as had been the case with Tassilo of Bavaria. However, he tolerated their extramarital relationships, even rewarding their common-law husbands and treasuring the illegitimate grandchildren they produced for him. He also, apparently, refused to believe stories of their wild behaviour. After his death the surviving daughters were banished from the court by their brother, the pious Louis, to take up residence in the convents they had been bequeathed by their father. At least one of them, Bertha, had a recognised relationship, if not a marriage, with Angilbert, a member of Charlemagne's court circle.
At his succession in 772, Pope Adrian I demanded the return of certain cities in the former exarchate of Ravenna in accordance with a promise at the succession of Desiderius. Instead, Desiderius took over certain papal cities and invaded the Pentapolis, heading for Rome. Adrian sent ambassadors to Charlemagne in autumn requesting he enforce the policies of his father, Pepin. Desiderius sent his own ambassadors denying the pope's charges. The ambassadors met at Thionville, and Charlemagne upheld the pope's side. Charlemagne demanded what the pope had requested, but Desiderius swore never to comply. Charlemagne and his uncle Bernard crossed the Alps in 773 and chased the Lombards back to Pavia, which they then besieged. Charlemagne temporarily left the siege to deal with Adelchis, son of Desiderius, who was raising an army at Verona. The young prince was chased to the Adriatic littoral and fled to Constantinople to plead for assistance from Constantine V, who was waging war with Bulgaria.
The siege lasted until the spring of 774 when Charlemagne visited the pope in Rome. There he confirmed his father's grants of land, with some later chronicles falsely claiming that he also expanded them, granting Tuscany, Emilia, Venice and Corsica. The pope granted him the title "patrician". He then returned to Pavia, where the Lombards were on the verge of surrendering. In return for their lives, the Lombards surrendered and opened the gates in early summer. Desiderius was sent to the abbey of Corbie, and his son Adelchis died in Constantinople, a patrician. Charles, unusually, had himself crowned with the Iron Crown and made the magnates of Lombardy pay homage to him at Pavia. Only Duke Arechis II of Benevento refused to submit and proclaimed independence. Charlemagne was then master of Italy as king of the Lombards. He left Italy with a garrison in Pavia and a few Frankish counts in place the same year.
Instability continued in Italy. In 776, Dukes Hrodgaud of Friuli and Hildeprand of Spoleto rebelled. Charlemagne rushed back from Saxony and defeated the Duke of Friuli in battle; the Duke was slain. The Duke of Spoleto signed a treaty. Their co-conspirator, Arechis, was not subdued, and Adelchis, their candidate in Byzantium, never left that city. Northern Italy was now faithfully his.
In 787, Charlemagne directed his attention towards the Duchy of Benevento, where Arechis II was reigning independently with the self-given title of Princeps. Charlemagne's siege of Salerno forced Arechis into submission. However, after Arechis II's death in 787, his son Grimoald III proclaimed the Duchy of Benevento newly independent. Grimoald was attacked many times by Charles' or his sons' armies, without achieving a definitive victory. Charlemagne lost interest and never again returned to Southern Italy where Grimoald was able to keep the Duchy free from Frankish suzerainty.
The destructive war led by Pepin in Aquitaine, although brought to a satisfactory conclusion for the Franks, proved the Frankish power structure south of the Loire was feeble and unreliable. After the defeat and death of Waifer in 768, while Aquitaine submitted again to the Carolingian dynasty, a new rebellion broke out in 769 led by Hunald II, a possible son of Waifer. He took refuge with his ally Duke Lupus II of Gascony, but, probably out of fear of Charlemagne's reprisal, Lupus handed him over to the new King of the Franks, to whom he pledged loyalty; this seemed to confirm the peace in the Basque area south of the Garonne. In the campaign of 769, Charlemagne seems to have followed a policy of "overwhelming force" and avoided a major pitched battle.
Wary of new Basque uprisings, Charlemagne seems to have tried to contain Duke Lupus's power by appointing Seguin as Count of Bordeaux (778) and other counts of Frankish background in bordering areas (County of Fézensac). The Basque Duke, in turn, seems to have contributed decisively to, or even schemed, the Battle of Roncevaux Pass (referred to as "Basque treachery"). The defeat of Charlemagne's army at Roncevaux (778) confirmed his determination to rule directly by establishing the Kingdom of Aquitaine (ruled by Louis the Pious), based on a power base of Frankish officials, distributing lands among colonisers and allocating lands to the Church, which he took as an ally. A Christianisation programme was put in place across the high Pyrenees (778).
The new political arrangement for Vasconia did not sit well with local lords. By 788 Adalric was fighting and had captured Chorson, the Carolingian Count of Toulouse. Chorson was eventually released, but Charlemagne, enraged at the compromise, deposed him and appointed the trusted William of Gellone in his place. William, in turn, fought the Basques and defeated them after banishing Adalric (790).
From 781 (Pallars, Ribagorça) to 806 (Pamplona under Frankish influence), taking the County of Toulouse for a power base, Charlemagne asserted Frankish authority over the Pyrenees by subduing the south-western marches of Toulouse (790) and establishing vassal counties on the southern Pyrenees that were to make up the Marca Hispanica. As of 794, a Frankish vassal, the Basque lord Belasko ("al-Galashki", 'the Gaul'), ruled Álava, but Pamplona remained under Cordovan and local control up to 806. Belasko and the counties in the Marca Hispánica provided the necessary base to attack the Andalusians (an expedition led by William, Count of Toulouse, and Louis the Pious to capture Barcelona in 801). Events in the Duchy of Vasconia (rebellion in Pamplona, a count overthrown in Aragon, Duke Seguin of Bordeaux deposed, an uprising of the Basque lords, etc.) were to prove this authority ephemeral upon Charlemagne's death.
According to the Muslim historian Ibn al-Athir, the Diet of Paderborn had received the representatives of the Muslim rulers of Zaragoza, Girona, Barcelona and Huesca. Their masters had been cornered in the Iberian peninsula by Abd ar-Rahman I, the Umayyad emir of Cordova. These "Saracen" (Moorish and Muladi) rulers offered their homage to the king of the Franks in return for military support. Seeing an opportunity to extend Christendom and his own power and believing the Saxons to be a fully conquered nation, Charlemagne agreed to go to Spain.
In 778, he led the Neustrian army across the Western Pyrenees, while the Austrasians, Lombards and Burgundians passed over the Eastern Pyrenees. The armies met at Saragossa and Charlemagne received the homage of the Muslim rulers, Sulayman al-Arabi and Kasmin ibn Yusuf, but the city did not fall to him. Indeed, Charlemagne faced the toughest battle of his career, and the Muslims forced him to retreat. He decided to go home, since he could not trust the Basques, whom he had subdued by conquering Pamplona. He turned to leave Iberia, but as he was passing through the Pass of Roncesvalles one of the most famous events of his reign occurred: the Basques attacked and destroyed his rearguard and baggage train. The Battle of Roncevaux Pass, though less a battle than a skirmish, left many famous dead, including the seneschal Eggihard, the count of the palace Anselm, and the warden of the Breton March, Roland, inspiring the subsequent creation of the Song of Roland ("La Chanson de Roland").
The conquest of Italy brought Charlemagne in contact with the Saracens who, at the time, controlled the Mediterranean. Charlemagne's eldest son, Pepin the Hunchback, was much occupied with Saracens in Italy. Charlemagne conquered Corsica and Sardinia at an unknown date and in 799 the Balearic Islands. The islands were often attacked by Saracen pirates, but the counts of Genoa and Tuscany (Boniface) controlled them with large fleets until the end of Charlemagne's reign. Charlemagne even had contact with the caliphal court in Baghdad. In 797 (or possibly 801), the caliph of Baghdad, Harun al-Rashid, presented Charlemagne with an Asian elephant named Abul-Abbas and a clock.
In Hispania, the struggle against the Moors continued unabated throughout the latter half of his reign. Louis was in charge of the Spanish border. In 785, his men captured Girona permanently and extended Frankish control into the Catalan littoral for the duration of Charlemagne's reign (the area remained nominally Frankish until the Treaty of Corbeil in 1258). The Muslim chiefs in the northeast of Islamic Spain were constantly rebelling against Cordovan authority, and they often turned to the Franks for help. The Frankish border was slowly extended until 795, when Girona, Cardona, Ausona and Urgell were united into the new Spanish March, within the old duchy of Septimania.
In 797, Barcelona, the greatest city of the region, fell to the Franks when Zeid, its governor, rebelled against Cordova and, failing, handed it to them. The Umayyad authority recaptured it in 799. However, Louis of Aquitaine marched the entire army of his kingdom over the Pyrenees and besieged it for two years, wintering there from 800 to 801, when it capitulated. The Franks continued to press forward against the emir. They took Tarragona in 809 and Tortosa in 811. The last conquest brought them to the mouth of the Ebro and gave them raiding access to Valencia, prompting the Emir al-Hakam I to recognise their conquests in 813.
Charlemagne was engaged in almost constant warfare throughout his reign, often at the head of his elite "scara" bodyguard squadrons. In the Saxon Wars, spanning thirty years and eighteen battles, he conquered Saxonia and proceeded to convert it to Christianity.
The Germanic Saxons were divided into four subgroups in four regions. Nearest to Austrasia was Westphalia and furthest away was Eastphalia. Between them was Engria and north of these three, at the base of the Jutland peninsula, was Nordalbingia.
In his first campaign, in 773, Charlemagne forced the Engrians to submit and cut down an Irminsul pillar near Paderborn. The campaign was cut short by his first expedition to Italy. He returned in 775, marching through Westphalia and conquering the Saxon fort at Sigiburg. He then crossed Engria, where he defeated the Saxons again. Finally, in Eastphalia, he defeated a Saxon force, and its leader Hessi converted to Christianity. Charlemagne returned through Westphalia, leaving encampments at Sigiburg and Eresburg, which had been important Saxon bastions. He then controlled Saxony with the exception of Nordalbingia, but Saxon resistance had not ended.
Following his subjugation of the Dukes of Friuli and Spoleto, Charlemagne returned rapidly to Saxony in 776, where a rebellion had destroyed his fortress at Eresburg. The Saxons were once again defeated, but their main leader, Widukind, escaped to Denmark, his wife's home. Charlemagne built a new camp at Karlstadt. In 777, he called a national diet at Paderborn to integrate Saxony fully into the Frankish kingdom. Many Saxons were baptised as Christians.
In the summer of 779, he again invaded Saxony and reconquered Eastphalia, Engria and Westphalia. At a diet near Lippe, he divided the land into missionary districts and himself assisted in several mass baptisms (780). He then returned to Italy and, for the first time, the Saxons did not immediately revolt. Saxony was peaceful from 780 to 782.
He returned to Saxony in 782 and instituted a code of law and appointed counts, both Saxon and Frankish. The laws were draconian on religious issues; for example, the "Capitulatio de partibus Saxoniae" prescribed death to Saxon pagans who refused to convert to Christianity. This led to renewed conflict. That year, in autumn, Widukind returned and led a new revolt. In response, at Verden in Lower Saxony, Charlemagne is recorded as having ordered the execution of 4,500 Saxon prisoners by beheading, known as the Massacre of Verden ("Verdener Blutgericht"). The killings triggered three years of renewed bloody warfare. During this war, the East Frisians between the Lauwers and the Weser joined the Saxons in revolt and were finally subdued. The war ended with Widukind accepting baptism. The Frisians afterwards asked for missionaries to be sent to them, and a bishop of their own nation, Ludger, was sent. Charlemagne also promulgated a law code, the "Lex Frisonum", as he did for most subject peoples.
Thereafter, the Saxons maintained the peace for seven years, but in 792 Westphalia again rebelled. The Eastphalians and Nordalbingians joined them in 793, but the insurrection was unpopular and was put down by 794. An Engrian rebellion followed in 796, but the presence of Charlemagne, Christian Saxons and Slavs quickly crushed it. The last insurrection occurred in 804, more than thirty years after Charlemagne's first campaign against them, but it also failed. According to Einhard, the war ended with the Saxons renouncing their pagan customs, accepting the sacraments of the Christian faith, and uniting with the Franks to form one people.
By 774, Charlemagne had invaded the Kingdom of Lombardy; he later annexed the Lombard territories and assumed their crown, placing the Papal States under Frankish protection. The Duchy of Spoleto south of Rome was acquired in 774, while in the central western parts of Europe the Duchy of Bavaria was absorbed, and the Bavarian policy of establishing tributary marches (borders protected in return for tribute or taxes) among the Slavic Serbs and Czechs was continued. The remaining power confronting the Franks in the east were the Avars. However, Charlemagne acquired other Slavic areas, including Bohemia, Moravia, Austria and Croatia.
In 789, Charlemagne turned to Bavaria. He claimed that Tassilo III, Duke of Bavaria was an unfit ruler, due to his oath-breaking. The charges were exaggerated, but Tassilo was deposed anyway and put in the monastery of Jumièges. In 794, Tassilo was made to renounce any claim to Bavaria for himself and his family (the Agilolfings) at the synod of Frankfurt; he formally handed over to the king all of the rights he had held. Bavaria was subdivided into Frankish counties, as had been done with Saxony.
In 788, the Avars, an Asian nomadic group that had settled down in what is today Hungary (Einhard called them Huns), invaded Friuli and Bavaria. Charlemagne was preoccupied with other matters until 790, when he marched down the Danube and ravaged Avar territory as far as Győr. A Lombard army under Pippin then marched into the Drava valley and ravaged Pannonia. The campaigns ended when the Saxons revolted again in 792.
For the next two years, Charlemagne was occupied, along with the Slavs, against the Saxons. Pippin and Duke Eric of Friuli continued, however, to assault the Avars' ring-shaped strongholds. The great Ring of the Avars, their capital fortress, was taken twice. The booty was sent to Charlemagne at his capital, Aachen, and redistributed to his followers and to foreign rulers, including King Offa of Mercia. Soon the Avar tuduns had lost the will to fight and travelled to Aachen to become vassals to Charlemagne and to become Christians. Charlemagne accepted their surrender and sent one native chief, baptised Abraham, back to Avaria with the ancient title of khagan. Abraham kept his people in line, but in 800, the Bulgarians under Khan Krum attacked the remains of the Avar state.
In 803, Charlemagne sent a Bavarian army into Pannonia, defeating and bringing an end to the Avar confederation.
In November of the same year, Charlemagne went to Regensburg where the Avar leaders acknowledged him as their ruler. In 805, the Avar khagan, who had already been baptised, went to Aachen to ask permission to settle with his people south-eastward from Vienna. The Transdanubian territories became integral parts of the Frankish realm, an arrangement brought to an end by the Magyars in 899–900.
In 789, in recognition of his new pagan neighbours, the Slavs, Charlemagne marched an Austrasian-Saxon army across the Elbe into Obotrite territory. The Slavs ultimately submitted, led by their leader Witzin. Charlemagne then accepted the surrender of the Veleti under Dragovit and demanded many hostages. He also demanded permission to send missionaries into this pagan region unmolested. The army marched to the Baltic before turning around and marching to the Rhine, winning much booty with no harassment. The tributary Slavs became loyal allies. In 795, when the Saxons broke the peace, the Abotrites and Veleti rebelled with their new ruler against the Saxons. Witzin died in battle and Charlemagne avenged him by harrying the Eastphalians on the Elbe. Thrasuco, his successor, led his men to victory over the Nordalbingians and handed their leaders over to Charlemagne, who honoured him. The Abotrites remained loyal until Charles' death and fought later against the Danes.
When Charlemagne incorporated much of Central Europe, he brought the Frankish state face to face with the Avars and Slavs in the southeast. The Franks' most southeasterly neighbours were the Croats, who settled in Pannonian Croatia and Dalmatian Croatia. While fighting the Avars, the Franks had called for Croat support, and Duke Vojnomir of Pannonian Croatia aided Charlemagne in his major victory over the Avars in 796; the Franks thereby made themselves overlords over the Croats of northern Dalmatia, Slavonia and Pannonia.
The Frankish commander Eric of Friuli wanted to extend his dominion by conquering the Littoral Croat Duchy. During that time, Dalmatian Croatia was ruled by Duke Višeslav of Croatia. In the Battle of Trsat, the forces of Eric fled their positions and were routed by the forces of Višeslav. Eric was among those killed, which was a great blow for the Carolingian Empire.
Charlemagne also directed his attention to the Slavs to the west of the Avar khaganate: the Carantanians and Carniolans. These people were subdued by the Lombards and Bavarii and made tributaries, but were never fully incorporated into the Frankish state.
In 799, Pope Leo III had been assaulted by some of the Romans, who tried to put out his eyes and tear out his tongue. Leo escaped and fled to Charlemagne at Paderborn. Charlemagne, advised by the scholar Alcuin, travelled to Rome in November 800 and held a synod. On 23 December, Leo swore an oath of innocence to Charlemagne. His position having thereby been weakened, the Pope sought to restore his status. Two days later, at Mass on Christmas Day (25 December), when Charlemagne knelt at the altar to pray, the Pope crowned him "Imperator Romanorum" ("Emperor of the Romans") in Saint Peter's Basilica. In so doing, the Pope rejected the legitimacy of Empress Irene of Constantinople.
Charlemagne's coronation as Emperor, though intended to represent the continuation of the unbroken line of Emperors from Augustus to Constantine VI, had the effect of setting up two separate (and often opposing) Empires and two separate claims to imperial authority. It led to war in 802, and for centuries to come, the Emperors of both West and East would make competing claims of sovereignty over the whole.
Einhard says that Charlemagne was ignorant of the Pope's intent and did not want any such coronation.
A number of modern scholars, however, suggest that Charlemagne was indeed aware of the coronation; certainly, he cannot have missed the bejewelled crown waiting on the altar when he came to pray – something even contemporary sources support.
Historians have debated for centuries whether Charlemagne was aware before the coronation of the Pope's intention to crown him Emperor (Charlemagne declared that he would not have entered Saint Peter's had he known, according to chapter twenty-eight of Einhard's "Vita Karoli Magni"), but that debate obscured the more significant question of "why" the Pope granted the title and why Charlemagne accepted it.
Collins points out "[t]hat the motivation behind the acceptance of the imperial title was a romantic and antiquarian interest in reviving the Roman empire is highly unlikely." For one thing, such romance would not have appealed either to Franks or Roman Catholics at the turn of the ninth century, both of whom viewed the Classical heritage of the Roman Empire with distrust. The Franks took pride in having "fought against and thrown from their shoulders the heavy yoke of the Romans" and "from the knowledge gained in baptism, clothed in gold and precious stones the bodies of the holy martyrs whom the Romans had killed by fire, by the sword and by wild animals", as Pepin III described it in a law of 763 or 764.
Furthermore, the new title—carrying with it the risk that the new emperor would "make drastic changes to the traditional styles and procedures of government" or "concentrate his attentions on Italy or on Mediterranean concerns more generally"—risked alienating the Frankish leadership.
For both the Pope and Charlemagne, the Roman Empire remained a significant power in European politics at this time. The Byzantine Empire, based in Constantinople, continued to hold a substantial portion of Italy, with borders not far south of Rome. Charles' sitting in judgment of the Pope could be seen as usurping the prerogatives of the Emperor in Constantinople.
For the Pope, then, there was "no living Emperor at that time", though Henri Pirenne disputes this, saying that the coronation "was not in any sense explained by the fact that at this moment a woman was reigning in Constantinople". Nonetheless, the Pope took the extraordinary step of creating one. The papacy had since 727 been in conflict with Irene's predecessors in Constantinople over a number of issues, chiefly the continued Byzantine adherence to the doctrine of iconoclasm, the destruction of Christian images; while from 750, the secular power of the Byzantine Empire in central Italy had been nullified.
By bestowing the Imperial crown upon Charlemagne, the Pope arrogated to himself "the right to appoint ... the Emperor of the Romans, ... establishing the imperial crown as his own personal gift but simultaneously granting himself implicit superiority over the Emperor whom he had created." And "because the Byzantines had proved so unsatisfactory from every point of view—political, military and doctrinal—he would select a westerner: the one man who by his wisdom and statesmanship and the vastness of his dominions ... stood out head and shoulders above his contemporaries."
With Charlemagne's coronation, therefore, "the Roman Empire remained, so far as either of them [Charlemagne and Leo] were concerned, one and indivisible, with Charles as its Emperor", though there can have been "little doubt that the coronation, with all that it implied, would be furiously contested in Constantinople".
Alcuin writes hopefully in his letters of an "Imperium Christianum" ("Christian Empire"), wherein, "just as the inhabitants of the [Roman Empire] had been united by a common Roman citizenship", presumably this new empire would be united by a common Christian faith. This echoes the view of Pirenne, who says that "Charles was the Emperor of the "ecclesia" as the Pope conceived it, of the Roman Church, regarded as the universal Church". The "Imperium Christianum" was further supported at a number of synods all across Europe by Paulinus of Aquileia.
What is known, from the Byzantine chronicler Theophanes, is that Charlemagne's reaction to his coronation was to take the initial steps towards securing the Constantinopolitan throne by sending envoys of marriage to Irene, and that Irene reacted somewhat favourably to them.
It is important to distinguish between the universalist and localist conceptions of the empire, which remain controversial among historians. According to the former, the empire was a universal monarchy, a "commonwealth of the whole world, whose sublime unity transcended every minor distinction"; and the emperor "was entitled to the obedience of Christendom". According to the latter, the emperor had no ambition for universal dominion; his realm was limited in the same way as that of every other ruler, and when he made more far-reaching claims his object was normally to ward off the attacks either of the Pope or of the Byzantine emperor. According to this view, also, the origin of the empire is to be explained by specific local circumstances rather than by overarching theories.
According to Ohnsorge, for a long time it had been the custom of Byzantium to designate the German princes as spiritual "sons" of the Romans. What might have been acceptable in the fifth century had become provoking and insulting to the Franks in the eighth century. Charles came to believe that the Roman emperor, who claimed to head the world hierarchy of states, was in reality no greater than Charles himself, a king as other kings, since beginning in 629 the emperor in Constantinople had entitled himself "Basileus" (translated literally as "king"). Ohnsorge finds it significant that the chief wax seal of Charles, which bore only the inscription "Christe, protege Carolum regem Francorum" ("Christ, protect Charles, king of the Franks"), was used from 772 to 813, even during the imperial period, and was not replaced by a special imperial seal, indicating that Charles felt himself to be just the king of the Franks. Finally, Ohnsorge points out that in the spring of 813 at Aachen Charles crowned his only surviving son, Louis, as emperor without recourse to Rome, with only the acclamation of his Franks. The form in which this acclamation was offered was Frankish-Christian rather than Roman. This implies both independence from Rome and a Frankish (non-Roman) understanding of empire.
Charlemagne used these circumstances to claim that he was the "renewer of the Roman Empire", which had declined under the Byzantines. In his official charters, Charles preferred the style "Karolus serenissimus Augustus a Deo coronatus magnus pacificus imperator Romanum gubernans imperium" ("Charles, most serene Augustus crowned by God, the great, peaceful emperor ruling the Roman empire") to the more direct "Imperator Romanorum" ("Emperor of the Romans").
The title of Emperor remained in the Carolingian family for years to come, but divisions of territory and in-fighting over supremacy of the Frankish state weakened its significance. The papacy itself never forgot the title nor abandoned the right to bestow it. When the family of Charles ceased to produce worthy heirs, the Pope gladly crowned whichever Italian magnate could best protect him from his local enemies. The empire would remain in continuous existence for over a millennium, as the Holy Roman Empire, a true imperial successor to Charles.
The iconoclasm of the Byzantine Isaurian Dynasty was endorsed by the Franks. The Second Council of Nicaea reintroduced the veneration of icons under Empress Irene. The council was not recognised by Charlemagne, since no Frankish emissaries had been invited, even though Charlemagne ruled more than three provinces of the classical Roman empire and was considered equal in rank to the Byzantine emperor. And while the Pope supported the reintroduction of the veneration of icons, he diverged politically from Byzantium. He certainly desired to increase the influence of the papacy, to honour his saviour Charlemagne, and to solve the constitutional issues then most troubling to European jurists in an era when Rome was not in the hands of an emperor. Thus, Charlemagne's assumption of the imperial title was not a usurpation in the eyes of the Franks or Italians. It was, however, seen as such in Byzantium, where it was protested by Irene and her successor Nikephoros I—neither of whom had any great effect in enforcing their protests.
The East Romans, however, still held several territories in Italy: Venice (what was left of the Exarchate of Ravenna), Reggio (in Calabria), Otranto (in Apulia), and Naples (the "Ducatus Neapolitanus"). These regions remained outside of Frankish hands until 804, when the Venetians, torn by infighting, transferred their allegiance to the Iron Crown of Pippin, Charles' son. The "Pax Nicephori" ended. Nicephorus ravaged the coasts with a fleet, initiating the only instance of war between the Byzantines and the Franks. The conflict lasted until 810 when the pro-Byzantine party in Venice gave their city back to the Byzantine Emperor, and the two emperors of Europe made peace: Charlemagne received the Istrian peninsula and in 812 the emperor Michael I Rangabe recognised his status as Emperor, although not necessarily as "Emperor of the Romans".
After the conquest of Nordalbingia, the Frankish frontier was brought into contact with Scandinavia. The pagan Danes, "a race almost unknown to his ancestors, but destined to be only too well known to his sons" as Charles Oman described them, inhabiting the Jutland peninsula, had heard many stories from Widukind and his allies who had taken refuge with them about the dangers of the Franks and the fury which their Christian king could direct against pagan neighbours.
In 808, the king of the Danes, Godfred, expanded the vast Danevirke across the isthmus of Schleswig. This defence, last employed in the Danish-Prussian War of 1864, was at its beginning a long earthwork rampart. The Danevirke protected Danish land and gave Godfred the opportunity to harass Frisia and Flanders with pirate raids. He also subdued the Frank-allied Veleti and fought the Abotrites.
Godfred invaded Frisia, joked of visiting Aachen, but was murdered before he could do any more, either by a Frankish assassin or by one of his own men. Godfred was succeeded by his nephew Hemming, who concluded the Treaty of Heiligen with Charlemagne in late 811.
In 813, Charlemagne called Louis the Pious, king of Aquitaine, his only surviving legitimate son, to his court. There Charlemagne crowned his son as co-emperor and sent him back to Aquitaine. He then spent the autumn hunting before returning to Aachen on 1 November. In January, he fell ill with pleurisy. In deep depression (mostly because many of his plans were not yet realised), he took to his bed on 21 January and, as Einhard tells it, died seven days later, on 28 January 814.
He was buried that same day, in Aachen Cathedral, although the cold weather and the nature of his illness made such a hurried burial unnecessary. The earliest surviving "planctus", the "Planctus de obitu Karoli", was composed by a monk of Bobbio, an abbey he had patronised. A later story, told by Otho of Lomello, Count of the Palace at Aachen in the time of Emperor Otto III, would claim that he and Otto had discovered Charlemagne's tomb: Charlemagne, they claimed, was seated upon a throne, wearing a crown and holding a sceptre, his flesh almost entirely incorrupt. In 1165, Emperor Frederick I opened the tomb again and placed the emperor in a sarcophagus beneath the floor of the cathedral. In 1215 Emperor Frederick II re-interred him in a casket made of gold and silver known as the Karlsschrein.
Charlemagne's death emotionally affected many of his subjects, particularly those of the literary clique who had surrounded him at Aachen; an anonymous monk of Bobbio lamented his passing.
Louis succeeded him as Charles had intended. Charlemagne had drawn up a testament allocating his assets in 811, which was not updated prior to his death; he left most of his wealth to the Church, to be used for charity. His empire lasted only another generation in its entirety; its division, according to custom, between Louis's own sons after their father's death laid the foundation for the modern states of Germany and France.
The Carolingian king exercised the "bannum", the right to rule and command. Under the Franks, it was a royal prerogative but could be delegated. He had supreme jurisdiction in judicial matters, made legislation, led the army, and protected both the Church and the poor. His administration was an attempt to organise the kingdom, church and nobility around him. As an administrator, Charlemagne stands out for his many reforms: monetary, governmental, military, cultural and ecclesiastical. He is the main protagonist of the "Carolingian Renaissance".
Charlemagne's success rested primarily on novel siege technologies and excellent logistics rather than on the long-claimed "cavalry revolution" led by Charles Martel in the 730s. Indeed, the stirrup, which made the "shock cavalry" lance charge possible, was not introduced to the Frankish kingdom until the late eighth century.
Horses were used extensively by the Frankish military because they provided a quick, long-distance method of transporting troops, which was critical to building and maintaining the large empire.
Charlemagne had an important role in determining Europe's immediate economic future. Pursuing his father's reforms, Charlemagne abolished the monetary system based on the gold sou. Instead, he and the Anglo-Saxon King Offa of Mercia took up Pippin's system for pragmatic reasons, notably a shortage of the metal.
The gold shortage was a direct consequence of the conclusion of peace with Byzantium, which resulted in ceding Venice and Sicily to the East and losing their trade routes to Africa. The resulting standardisation economically harmonised and unified the complex array of currencies that had been in use at the commencement of his reign, thus simplifying trade and commerce.
Charlemagne established a new standard, the livre carolinienne (from the Latin libra, the modern pound), which was based upon a pound of silver—a unit of both money and weight—worth 20 sous (from the Latin solidus [which was primarily an accounting device and never actually minted], the modern shilling) or 240 deniers (from the Latin denarius, the modern penny). During this period, the livre and the sou were counting units; only the denier was a coin of the realm.
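To make the arithmetic of the standard concrete (a minimal illustration, assuming the conventional livre–sou–denier ratios given above; the sample amounts are hypothetical):

1 livre = 20 sous = 240 deniers, so 1 sou = 12 deniers.
A sum of, say, 3 sous 4 deniers therefore equals (3 × 12) + 4 = 40 deniers, or one sixth of a livre.

Because only the denier was actually minted, accounts reckoned in livres and sous were in practice settled by converting to deniers in this way.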
Charlemagne instituted principles for accounting practice by means of the Capitulare de villis of 802, which laid down strict rules for the way in which incomes and expenses were to be recorded.
Charlemagne applied this system to much of the European continent, and Offa's standard was voluntarily adopted by much of England. After Charlemagne's death, continental coinage degraded, and most of Europe resorted to using the still high-quality English coin until about 1100.
Early in Charlemagne's rule he tacitly allowed Jews to monopolise money lending. Lending money for interest was then proscribed in 814 because it violated Church law: Charlemagne introduced the "Capitulary for the Jews", a prohibition on Jews engaging in money-lending due to the religious convictions of the majority of his constituents, in essence banning the practice across the board, a reversal of his earlier recorded general policy. In addition to this broad change, Charlemagne also performed a significant number of microeconomic reforms, such as direct control of prices and levies on certain goods and commodities.
His "Capitulary for the Jews", however, was not representative of his overall economic relationship or attitude towards the Frankish Jews, and certainly not his earlier relationship with them, which evolved over his life. His personal physician, for example, was Jewish, and he employed one Jew, Isaac, who was his personal representative to the Muslim caliphate of Baghdad. Letters have been credited to him that invited Jews to settle in his kingdom.
Part of Charlemagne's success as a warrior, an administrator and ruler can be traced to his admiration for learning and education. His reign is often referred to as the Carolingian Renaissance because of the flowering of scholarship, literature, art and architecture that characterise it. Charlemagne came into contact with the culture and learning of other countries (especially Moorish Spain, Anglo-Saxon England, and Lombard Italy) due to his vast conquests. He greatly increased the provision of monastic schools and scriptoria (centres for book-copying) in Francia.
Charlemagne was a lover of books, sometimes having them read to him during meals. He was thought to enjoy the works of Augustine of Hippo. His court played a key role in producing books that taught elementary Latin and different aspects of the church. It also played a part in creating a royal library that contained in-depth works on language and Christian faith.
Charlemagne encouraged clerics to translate Christian creeds and prayers into their respective vernaculars, as well as to teach grammar and music. Due to increased interest in intellectual pursuits and the urging of their king, the monks accomplished so much copying that almost every manuscript from that time was preserved. At the same time, at the urging of their king, scholars were producing more secular books on many subjects, including history, poetry, art, music, law and theology. Due to the increased number of titles, private libraries flourished. These were mainly supported by aristocrats and churchmen who could afford to sustain them. At Charlemagne's court, a library was founded and a number of copies of books were produced, to be distributed by Charlemagne. Book production was completed slowly by hand and took place mainly in large monastic libraries. Books were in such demand during Charlemagne's time that these libraries lent out some books, but only if the borrower offered valuable collateral in return.
Most of the surviving works of classical Latin were copied and preserved by Carolingian scholars. Indeed, the earliest manuscripts available for many ancient texts are Carolingian. It is almost certain that a text which survived to the Carolingian age survives still.
The pan-European nature of Charlemagne's influence is indicated by the origins of many of the men who worked for him: Alcuin, an Anglo-Saxon from York; Theodulf, a Visigoth, probably from Septimania; Paul the Deacon, Lombard; Italians Peter of Pisa and Paulinus of Aquileia; and Franks Angilbert, Angilram, Einhard and Waldo of Reichenau.
Charlemagne promoted the liberal arts at court, ordering that his children and grandchildren be well-educated, and even studying himself (in a time when even leaders who promoted education did not take time to learn themselves) under the tutelage of Peter of Pisa, from whom he learned grammar; Alcuin, with whom he studied rhetoric, dialectic (logic), and astronomy (he was particularly interested in the movements of the stars); and Einhard, who tutored him in arithmetic.
His great scholarly failure, as Einhard relates, was his inability to write: when in his old age he attempted to learn—practising the formation of letters in his bed during his free time, using books and wax tablets he hid under his pillow—"his effort came too late in life and achieved little success", and his ability to read—about which Einhard is silent, and which no contemporary source supports—has also been called into question.
In 800, Charlemagne enlarged the hostel at the Muristan in Jerusalem and added a library to it, though he certainly had never been to Jerusalem in person.
Unlike his father, Pippin, and uncle, Carloman, Charlemagne expanded the Church's reform programme. The deepening of the spiritual life was later to be seen as central to public policy and royal governance. His reform focused on strengthening the church's power structure, improving the clergy's skill and moral quality, standardising liturgical practices, grounding the basic tenets of the faith and rooting out paganism. His authority extended over church and state: he could discipline clerics, control ecclesiastical property and define orthodox doctrine. Despite the harsh legislation and sudden change, he developed support from clergy who approved his desire to deepen the piety and morals of his subjects.
In 809–810, Charlemagne called a church council in Aachen, which confirmed the unanimous belief in the West that the Holy Spirit proceeds from the Father and the Son ("ex Patre Filioque") and sanctioned inclusion in the Nicene Creed of the phrase "Filioque" (and the Son). For this Charlemagne sought the approval of Pope Leo III. The Pope, while affirming the doctrine and approving its use in teaching, opposed its inclusion in the text of the Creed as adopted in the 381 First Council of Constantinople. This spoke of the procession of the Holy Spirit from the Father, without adding phrases such as "and the Son", "through the Son", or "alone". Stressing his opposition, the Pope had the original text inscribed in Greek and Latin on two heavy shields that were displayed in Saint Peter's Basilica.
During Charles' reign, the Roman half uncial script and its cursive version, which had given rise to various continental minuscule scripts, were combined with features from the insular scripts in use in Irish and English monasteries. Carolingian minuscule was created partly under the patronage of Charlemagne. Alcuin, who ran the palace school and scriptorium at Aachen, was probably a chief influence.
The revolutionary character of the Carolingian reform, however, can be over-emphasised; efforts at taming Merovingian and Germanic influence had been underway before Alcuin arrived at Aachen. The new minuscule was disseminated first from Aachen and later from the influential scriptorium at Tours, where Alcuin retired as an abbot.
Charlemagne engaged in many reforms of Frankish governance while continuing many traditional practices, such as the division of the kingdom among sons.
In 806, Charlemagne first made provision for the traditional division of the empire on his death. For Charles the Younger he designated Austrasia and Neustria, Saxony, Burgundy and Thuringia. To Pippin, he gave Italy, Bavaria, and Swabia. Louis received Aquitaine, the Spanish March and Provence. The imperial title was not mentioned, which led to the suggestion that, at that particular time, Charlemagne regarded the title as an honorary achievement that held no hereditary significance.
Pepin died in 810 and Charles in 811. Charlemagne then reconsidered the matter, and in 813, crowned his youngest son, Louis, co-emperor and co-King of the Franks, granting him a half-share of the empire and the rest upon Charlemagne's own death. The only part of the Empire that Louis was not promised was Italy, which Charlemagne specifically bestowed upon Pippin's illegitimate son Bernard.
Einhard tells in his twenty-fourth chapter that Charlemagne threw grand banquets and feasts for special occasions such as religious holidays and four of his weddings. When he was not working, he loved Christian books, horseback riding, swimming, bathing in natural hot springs with his friends and family, and hunting. The Franks were well known for horsemanship and hunting skills. Charles was a light sleeper and would stay in his bed chambers for entire days at a time owing to restless nights. During these days, he would not get out of bed when a quarrel occurred in his kingdom, instead summoning all parties to the dispute into his bedroom to be given orders. Einhard tells again in the twenty-fourth chapter: "In summer after the midday meal, he would eat some fruit, drain a single cup, put off his clothes and shoes, just as he did for the night, and rest for two or three hours. He was in the habit of awaking and rising from bed four or five times during the night."
Charlemagne probably spoke a Rhenish Franconian dialect.
He also spoke Latin and had at least some understanding of Greek, according to Einhard ("Grecam vero melius intellegere quam pronuntiare poterat", "he could understand Greek better than he could speak it").
The largely fictional account of Charlemagne's Iberian campaigns by Pseudo-Turpin, written some three centuries after his death, gave rise to the legend that the king also spoke Arabic.
Charlemagne's personal appearance is known from a good description by Einhard, written after the emperor's death, in the biography "Vita Karoli Magni".
The physical portrait provided by Einhard is confirmed by contemporary depictions such as coins and his bronze statuette kept in the Louvre. In 1861, Charlemagne's tomb was opened by scientists, who reconstructed his skeleton and estimated his height. A further estimate, based on an X-ray and CT scan of his tibia performed in 2010, puts him in the 99th percentile of height for his period, well above the average male height of his time. The width of the bone suggested he was gracile in body build.
Charlemagne wore the traditional costume of the Frankish people, described by Einhard thus:
He wore a blue cloak and always carried a sword, typically one with a gold or silver hilt. He wore intricately jeweled swords to banquets and ambassadorial receptions. Nevertheless:
On great feast days, he wore embroidery and jewels on his clothing and shoes. He had a golden buckle for his cloak on such occasions and would appear with his great diadem, but he despised such apparel according to Einhard, and usually dressed like the common people.
Charlemagne had residences across his kingdom, including numerous private estates that were governed in accordance with the Capitulare de villis. A 9th-century document detailing the inventory of an estate at Asnapium listed amounts of livestock, plants and vegetables and kitchenware including cauldrons, drinking cups, brass kettles and firewood. The manor contained seventeen houses built inside the courtyard for nobles and family members and was separated from its supporting villas.
Charlemagne was revered as a saint in the Holy Roman Empire and some other locations after the twelfth century. The Apostolic See did not recognise his invalid canonisation by Antipope Paschal III, done to gain the favour of Frederick Barbarossa in 1165. The Apostolic See annulled all of Paschal's ordinances at the Third Lateran Council in 1179. He is not enumerated among the 28 saints named "Charles" in the "Roman Martyrology". His beatification has been acknowledged as "cultus confirmed" and is celebrated on 28 January.
Charlemagne had a sustained impact on European culture. The author of the "Visio Karoli Magni", written around 865, uses facts apparently gathered from Einhard and his own observations on the decline of Charlemagne's family after the civil war of 840–43 as the basis for a visionary tale of Charles' meeting with a prophetic spectre in a dream.
Charlemagne was a model knight as one of the Nine Worthies who enjoyed an important legacy in European culture. One of the great medieval literary cycles, the Charlemagne cycle or the "Matter of France", centres on his deeds—the Emperor with the Flowing Beard of "Roland" fame—and his historical commander of the border with Brittany, Roland, and the 12 paladins. These are analogous to, and inspired the myth of, the Knights of the Round Table of King Arthur's court. Their tales constitute the first "chansons de geste".
In the 12th century, Geoffrey of Monmouth based his stories of Arthur largely on stories of Charlemagne. During the Hundred Years' War in the 14th century, there was considerable cultural conflict in England: the Norman rulers were aware of their French roots and identified with Charlemagne, while Anglo-Saxon natives felt more affinity for Arthur, whose own legends were relatively primitive. Therefore, storytellers in England adapted legends of Charlemagne and his 12 Peers to the Arthurian tales.
In the "Divine Comedy", the spirit of Charlemagne appears to Dante in the , among the other "warriors of the faith".
Charlemagne's capitularies were quoted by Pope Benedict XIV in his apostolic constitution 'Providas' against freemasonry: "For in no way are we able to understand how they can be faithful to us, who have shown themselves unfaithful to God and disobedient to their Priests".
Charlemagne appears in "Adelchi", the second tragedy by Italian writer Alessandro Manzoni, first published in 1822.
In 1867, an equestrian statue of Charlemagne was made by Louis Jehotte and was inaugurated in 1868 on the Boulevard d'Avroy in Liège. In the niches of the neo-roman pedestal are six statues of Charlemagne's ancestors (Sainte Begge, Pépin de Herstal, Charles Martel, Bertrude, Pépin de Landen and Pépin le Bref).
The North Wall Frieze in the courtroom of the Supreme Court of the United States depicts Charlemagne as a legal reformer.
The city of Aachen has, since 1949, awarded an international prize (called the "Karlspreis der Stadt Aachen") in honour of Charlemagne. It is awarded annually to "personages of merit who have promoted the idea of western unity by their political, economic and literary endeavours." Winners of the prize include Richard von Coudenhove-Kalergi, the founder of the pan-European movement, Alcide De Gasperi, and Winston Churchill.
In its national anthem, "El Gran Carlemany", the nation of Andorra credits Charlemagne with its independence.
In 1964, young French singer France Gall released the hit song "Sacré Charlemagne" in which the lyrics blame the great king for imposing the burden of compulsory education on French children.
Charlemagne is quoted by Dr Henry Jones, Sr. in "Indiana Jones and the Last Crusade". After using his umbrella to induce a flock of seagulls to smash through the glass cockpit of a pursuing German fighter plane, Henry Jones remarks, "I suddenly remembered my Charlemagne: 'Let my armies be the rocks and the trees and the birds in the sky.'" Despite the quote's popularity since the movie, there is no evidence that Charlemagne actually said this.
"The Economist" features a weekly column entitled "Charlemagne", focusing generally on European affairs and, more usually and specifically, on the European Union and its politics.
Actor and singer Christopher Lee's symphonic metal concept album "Charlemagne: By the Sword and the Cross" and its heavy metal follow-up "Charlemagne: The Omens of Death" feature the events of Charlemagne's life.
A 2010 episode of "QI" discussed the mathematics completed by Mark Humphrys that calculated that all modern Europeans are highly likely to share Charlemagne as a common ancestor (see most recent common ancestor).
In April 2014, on the occasion of the 1200th anniversary of Charlemagne's death, the public art installation "Mein Karl" by Ottmar Hörl was mounted on the Katschhof square between the city hall and Aachen Cathedral, displaying 500 Charlemagne statues.
In the video game "Age of Empires II", Charlemagne featured as a throwing axeman. | https://en.wikipedia.org/wiki?curid=5314 |
Character encodings in HTML
HTML (Hypertext Markup Language) has been in use since 1991, but HTML 4.0 (December 1997) was the first standardized version where international characters were given reasonably complete treatment. When an HTML document includes special characters outside the range of seven-bit ASCII, two goals are worth considering: the information's integrity, and universal browser display.
There are several ways to specify which character encoding is used in the document. First, the web server can include the character encoding or "charset" parameter in the Hypertext Transfer Protocol (HTTP) "Content-Type" header, which would typically look like this:
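```http
Content-Type: text/html; charset=utf-8
```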
This method gives the HTTP server a convenient way to alter the document's encoding according to content negotiation; certain HTTP server software can do it, for example Apache with the module "mod_charset_lite".
For HTML it is possible to include this information inside the "head" element near the top of the document:
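```html
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
```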
HTML5 also allows the following syntax to mean exactly the same:
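```html
<meta charset="utf-8">
```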
XHTML documents have a third option: to express the character encoding via XML declaration, as follows:
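```xml
<?xml version="1.0" encoding="utf-8"?>
```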
As the character encoding cannot be known until this declaration is parsed, there can be a problem knowing which encoding is used for the declaration itself. The main principle is that the declaration shall be encoded in pure ASCII, and therefore (if the declaration is inside the file) the encoding needs to be an ASCII extension. In order to allow encodings not backwards compatible with ASCII, browsers must be able to parse declarations in such encodings. Examples of such encodings are UTF-16BE and UTF-16LE.
As of HTML5 the recommended charset is UTF-8. An "encoding sniffing algorithm" is defined in the specification to determine the character encoding of the document based on multiple sources of input, including an explicit user override, a byte order mark (BOM), the charset given in the HTTP "Content-Type" header, a "meta" declaration found by prescanning the first bytes of the document, and, failing all of these, heuristic analysis and locale-based defaults.
For ASCII-compatible character encodings the consequence of choosing incorrectly is that characters outside the printable ASCII range (32 to 126) usually appear incorrectly. This presents few problems for English-speaking users, but other languages regularly—in some cases, always—require characters outside that range. In CJK environments where there are several different multi-byte encodings in use, auto-detection is also often employed. Finally, browsers usually permit the user to override an "incorrect" charset label manually as well.
It is increasingly common for multilingual websites and websites in non-Western languages to use UTF-8, which allows use of the same encoding for all languages. UTF-16 or UTF-32, which can be used for all languages as well, are less widely used because they can be harder to handle in programming languages that assume a byte-oriented ASCII superset encoding, and they are less efficient for text with a high frequency of ASCII characters, which is usually the case for HTML documents.
Successful viewing of a page is not necessarily an indication that its encoding is specified correctly. If the page's creator and reader are both assuming some platform-specific character encoding, and the server does not send any identifying information, then the reader will nonetheless see the page as the creator intended, but other readers on different platforms or with different native languages will not see the page as intended.
In addition to native character encodings, characters can also be encoded as "character references", which can be "numeric character references" (decimal or hexadecimal) or "character entity references". Character entity references are also sometimes referred to as "named entities", or "HTML entities" for HTML. HTML's usage of character references derives from SGML.
A "numeric character reference" in HTML refers to a character by its Universal Character Set/Unicode "code point", and uses the format
or
where "nnnn" is the code point in decimal form, and "hhhh" is the code point in hexadecimal form. The "x" must be lowercase in XML documents. The "nnnn" or "hhhh" may be any number of digits and may include leading zeros. The "hhhh" may mix uppercase and lowercase, though uppercase is the usual style.
Not all web browsers or email clients used by receivers of HTML documents, or text editors used by authors of HTML documents, will be able to render all HTML characters. Most modern software is able to display most or all of the characters for the user's language, and will draw a box or other clear indicator for characters they cannot render.
For codes from 0 to 127, the original 7-bit ASCII standard set, most of these characters can be used without a character reference. Codes from 160 to 255 can all be created using character entity names. Only a few higher-numbered codes can be created using entity names, but all can be created by decimal number character reference.
Character entity references can also have the format "&name;" where "name" is a case-sensitive alphanumeric string. For example, "λ" can also be encoded as "&lambda;" in an HTML document. The character entity references "&amp;", "&lt;", "&gt;" and "&quot;" are predefined in HTML and SGML, because the characters &, <, > and " are already used to delimit markup. This notably does not include XML's "&apos;" (') entity. For a list of all named HTML character entity references (about 250), see List of XML and HTML character entity references.
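To illustrate, here is a minimal Python sketch (an illustration using the standard-library html module, not something the article itself prescribes) showing numeric and named references resolving to the same character:

```python
import html

# Decimal, hexadecimal, and named references for U+03BB (Greek lambda).
print(html.unescape("&#955;"))    # λ  (decimal numeric reference)
print(html.unescape("&#x3bb;"))   # λ  (hexadecimal numeric reference)
print(html.unescape("&lambda;"))  # λ  (named character entity)

# Escaping markup-sensitive characters for safe embedding in HTML:
print(html.escape('a < b & "c"')) # a &lt; b &amp; &quot;c&quot;
```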
Unnecessary use of HTML character references may significantly reduce HTML readability. If the character encoding for a web page is chosen appropriately, then HTML character references are usually only required for markup delimiting characters as mentioned above, and for a few special characters (or none at all if a native Unicode encoding like UTF-8 is used). Incorrect HTML entity escaping may also open up security vulnerabilities for injection attacks such as cross-site scripting. If HTML attributes are left unquoted, certain characters, most importantly whitespace, such as space and tab, must be escaped using entities. Other languages related to HTML have their own methods of escaping characters.
Unlike traditional HTML with its large range of character entity references, in XML there are only five predefined character entity references: "&amp;", "&lt;", "&gt;", "&apos;", and "&quot;". These are used to escape characters that are markup sensitive in certain contexts.
All other character entity references have to be defined before they can be used. For example, use of "&eacute;" (which gives é, Latin lower-case E with acute accent, U+00E9 in Unicode) in an XML document will generate an error unless the entity has already been defined. XML also requires that the "x" in hexadecimal numeric references be in lowercase: for example "&#x41;" rather than "&#X41;". XHTML, which is an XML application, supports the HTML entity set, along with XML's predefined entities. | https://en.wikipedia.org/wiki?curid=5315 |
Carbon nanotube
Carbon nanotubes (CNTs) are tubes made of carbon with diameters typically measured in nanometers.
Carbon nanotubes often refer to single-wall carbon nanotubes (SWCNTs) with diameters in the range of a nanometer. They were discovered independently by Iijima and Ichihashi, and by Bethune et al., in carbon arc chambers similar to those used to produce fullerenes. Single-wall carbon nanotubes are one of the allotropes of carbon, intermediate between fullerene cages and flat graphene.
Although not made this way, single-wall carbon nanotubes can be thought of as cutouts from a two-dimensional hexagonal lattice of carbon atoms rolled up along one of the Bravais lattice vectors of the hexagonal lattice to form a hollow cylinder. In this construction, periodic boundary conditions are imposed over the length of this roll up vector to yield a lattice with helical symmetry of seamlessly bonded carbon atoms on the cylinder surface.
Carbon nanotubes also often refer to multi-wall carbon nanotubes (MWCNTs) consisting of nested single-wall carbon nanotubes. If not identical, these tubes are very similar to Oberlin, Endo and Koyama's long straight and parallel carbon layers cylindrically rolled around a hollow tube. Multi-wall carbon nanotubes are also sometimes used to refer to double- and triple-wall carbon nanotubes.
Carbon nanotubes can also refer to tubes with an undetermined carbon-wall structure and diameters less than 100 nanometers. Such tubes were discovered by Radushkevich and Lukyanovich. While nanotubes of other compositions exist, most research has been focused on the carbon ones. Therefore, the "carbon" qualifier is often left implicit in the acronyms, and the names are abbreviated NT, SWNT, and MWNT.
Carbon nanotubes can exhibit remarkable electrical conductivity. They also have exceptional tensile strength and thermal conductivity because of their nanostructure and the strength of the bonds between carbon atoms. In addition, they can be chemically modified. These properties are expected to be valuable in many areas of technology, such as electronics, optics, composite materials (replacing or complementing carbon fibers), nanotechnology, and other applications of materials science.
Individual carbon nanotubes naturally align themselves into "ropes" held together by relatively weak van der Waals forces. The length of a carbon nanotube produced by common production methods is often not reported, but is much larger than its diameter. Although rare, nanotubes half a meter long have been created, with a length-to-diameter ratio of more than 100,000,000:1. For many purposes, the length of carbon nanotubes can be assumed to be infinite.
Rolling up a hexagonal lattice along different directions to form different single-wall carbon nanotubes shows that all of these tubes have helical and translational symmetry along the tube axis and many also have nontrivial rotational symmetry about this axis. In addition, most are chiral, meaning the tube and its mirror image cannot be superimposed. This construction also allows single-wall carbon nanotubes to be labeled by a pair of small integers.
A special group of achiral single-wall carbon nanotubes are metallic, but all the rest are either small or moderate band gap semiconductors. These electrical properties, however, do not depend on whether the tube is rolled up above or below the graphene plane and hence are the same for a tube and its mirror image.
The structure of an ideal (infinitely long) single-walled carbon nanotube is that of a regular hexagonal lattice drawn on an infinite cylindrical surface, whose vertices are the positions of the carbon atoms. Since the length of the carbon-carbon bonds is fairly fixed, there are constraints on the diameter of the cylinder and the arrangement of the atoms on it.
In the study of nanotubes, one defines a "zigzag" path on a graphene-like lattice as a path that turns 60 degrees, alternating left and right, after stepping through each bond. It is also conventional to define an "armchair" path as one that makes two left turns of 60 degrees followed by two right turns every four steps.
On some carbon nanotubes, there is a closed zigzag path that goes around the tube. One says that the tube is of the zigzag type or configuration, or simply is a zigzag nanotube. If the tube is instead encircled by a closed armchair path, it is said to be of the armchair type, or an armchair nanotube.
An infinite nanotube that is of the zigzag (or armchair) type consists entirely of closed zigzag (or armchair) paths, connected to each other.
The zigzag and armchair configurations are not the only structures that a single-walled nanotube can have. To describe the structure of a general infinitely long tube, one should imagine it being sliced open by a cut parallel to its axis, that goes through some atom "A", and then unrolled flat on the plane, so that its atoms and bonds coincide with those of an imaginary graphene sheet—more precisely, with an infinitely long strip of that sheet.
The two halves of the atom "A" will end up on opposite edges of the strip, over two atoms "A1" and "A2" of the graphene. The line from "A1" to "A2" will correspond to the circumference of the cylinder that went through the atom "A", and will be perpendicular to the edges of the strip.
In the graphene lattice, the atoms can be split into two classes, depending on the directions of their three bonds. Half the atoms have their three bonds directed the same way, and half have their three bonds rotated 180 degrees relative to the first half. The atoms "A1" and "A2", which correspond to the same atom "A" on the cylinder, must be in the same class.
It follows that the circumference of the tube and the angle of the strip are not arbitrary, because they are constrained to the lengths and directions of the lines that connect pairs of graphene atoms in the same class.
Let u and v be two linearly independent vectors that connect the graphene atom "A1" to two of its nearest atoms with the same bond directions. That is, if one numbers consecutive carbons around a graphene cell with C1 to C6, then u can be the vector from C1 to C3, and v be the vector from C1 to C5. Then, for any other atom "A2" with same class as "A1", the vector from "A1" to "A2" can be written as a linear combination "n" u + "m" v, where "n" and "m" are integers. And, conversely, each pair of integers ("n","m") defines a possible position for "A2".
Given "n" and "m", one can reverse this theoretical operation by drawing the vector w on the graphene lattice, cutting a strip of the latter along lines perpendicular to w through its endpoints "A1" and "A2", and rolling the strip into a cylinder so as to bring those two points together. If this construction is applied to a pair ("k",0), the result is a zigzag nanotube, with closed zigzag paths of 2"k" atoms. If it is applied to a pair ("k","k"), one obtains an armchair tube, with closed armchair paths of 4"k" atoms.
Moreover, the structure of the nanotube is not changed if the strip is rotated by 60 degrees clockwise around "A1" before applying the hypothetical reconstruction above. Such a rotation changes the corresponding pair ("n","m") to the pair (−"m","n"+"m").
It follows that many possible positions of "A2" relative to "A1" — that is, many pairs ("n","m") — correspond to the same arrangement of atoms on the nanotube. That is the case, for example, of the six pairs (1,2), (−2,3), (−3,1), (−1,−2), (2,−3), and (3,−1). In particular, the pairs ("k",0) and (0,"k") describe the same nanotube geometry.
These redundancies can be avoided by considering only pairs ("n","m") such that "n" > 0 and "m" ≥ 0; that is, where the direction of the vector w lies between those of u (inclusive) and v (exclusive). It can be verified that every nanotube has exactly one pair ("n","m") that satisfies those conditions, which is called the tube's type. Conversely, for every type there is a hypothetical nanotube. In fact, two nanotubes have the same type if and only if one can be conceptually rotated and translated so as to match the other exactly.
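This reduction can be carried out mechanically. The following minimal Python sketch (an illustration, not from the article) repeatedly applies the 60-degree rotation described above until the canonical conditions "n" > 0 and "m" ≥ 0 hold:

```python
def canonical_type(n: int, m: int) -> tuple[int, int]:
    """Reduce an (n, m) roll-up pair to the canonical tube type.

    Repeatedly applies the 60-degree lattice rotation
    (n, m) -> (-m, n + m); the pair returns to itself after six
    rotations, so one of the six must satisfy n > 0, m >= 0.
    """
    if n == 0 and m == 0:
        raise ValueError("(0, 0) does not describe a tube")
    for _ in range(6):
        if n > 0 and m >= 0:
            return n, m
        n, m = -m, n + m
    raise AssertionError("unreachable for nonzero (n, m)")

# All six equivalent pairs listed in the text reduce to the same type:
assert {canonical_type(*p) for p in
        [(1, 2), (-2, 3), (-3, 1), (-1, -2), (2, -3), (3, -1)]} == {(1, 2)}
```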
Instead of the type ("n","m"), the structure of a carbon nanotube can be specified by giving the length of the vector w (that is, the circumference of the nanotube), and the angle "α" between the directions of u and w, which may range from 0 (inclusive) to 60 degrees clockwise (exclusive). If the diagram is drawn with u horizontal, the latter is the tilt of the strip away from the vertical.
A nanotube is chiral if it has type ("n","m"), with "m" > 0 and "m" ≠ "n"; then its enantiomer (mirror image) has type ("m","n"), which is different from ("n","m"). This operation corresponds to mirroring the unrolled strip about the line "L" through "A1" that makes an angle of 30 degrees clockwise from the direction of the u vector (that is, with the direction of the vector u+v). The only types of nanotubes that are achiral are the ("k",0) "zigzag" tubes and the ("k","k") "armchair" tubes.
If two enantiomers are to be considered the same structure, then one may consider only types ("n","m") with 0 ≤ "m" ≤ "n" and "n" > 0. Then the angle "α" between u and w, which may range from 0 to 30 degrees (inclusive both), is called the "chiral angle" of the nanotube.
From "n" and "m" one can also compute the circumference "c", which is the length of the vector w, which turns out to be
in picometres. The diameter formula_2 of the tube is then formula_3, that is
also in picometres. (These formulas are only approximate, especially for small "n" and "m" where the bonds are strained; and they do not take into account the thickness of the wall.)
The tilt angle "α" between u and w and the circumference "c" are related to the type indices "n" and "m" by
where arg("x","y") is the clockwise angle between the "X"-axis and the vector ("x","y"); a function that is available in many programming languages as codice_1("y","x"). Conversely, given "c" and "α", one can get the type ("n","m") by the formulas
which must evaluate to integers.
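As a consistency check on these relations, here is a small illustrative Python sketch (helper names are assumptions of this sketch, not from the article) that computes the circumference, diameter and chiral angle from ("n","m") and then inverts them:

```python
import math

A = 246.0  # graphene lattice constant in picometres

def tube_geometry(n: int, m: int) -> tuple[float, float, float]:
    """Return (circumference pm, diameter pm, tilt angle in degrees)."""
    c = A * math.sqrt(n * n + n * m + m * m)
    d = c / math.pi
    alpha = math.degrees(math.atan2(math.sqrt(3) * m, 2 * n + m))
    return c, d, alpha

def tube_type(c: float, alpha_deg: float) -> tuple[int, int]:
    """Invert the relations; the results must round to integers."""
    k = 2.0 * c / (A * math.sqrt(3))
    n = k * math.sin(math.radians(60.0 - alpha_deg))
    m = k * math.sin(math.radians(alpha_deg))
    return round(n), round(m)

c, d, alpha = tube_geometry(6, 4)   # a chiral, semiconducting tube
assert tube_type(c, alpha) == (6, 4)
```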
If "n" and "m" are too small, the structure described by the pair ("n","m") will describe a molecule that cannot be reasonably called a "tube", and may not even be stable. For example, the structure theoretically described by the pair (1,0) (the limiting "zigzag" type) would be just a chain of carbons. That is a real molecule, the carbyne; which has some characteristics of nanotubes (such as orbital hybridization, high tensile strength, etc.) — but has no hollow space, and may not be obtainable as a condensed phase. The pair (2,0) would theoretically yield a chain of fused 4-cycles; and (1,1), the limiting "armchair" structure, would yield a chain of bi-connected 4-rings. These structures may not be realizable.
The thinnest carbon nanotube proper is the armchair structure with type (2,2), which has a diameter of 0.3 nm. This nanotube was grown inside a multi-walled carbon nanotube. Assigning of the carbon nanotube type was done by a combination of high-resolution transmission electron microscopy (HRTEM), Raman spectroscopy, and density functional theory (DFT) calculations.
The thinnest "freestanding" single-walled carbon nanotube is about 0.43 nm in diameter. Researchers suggested that it can be either (5,1) or (4,2) SWCNT, but the exact type of the carbon nanotube remains questionable. (3,3), (4,3), and (5,1) carbon nanotubes (all about 0.4 nm in diameter) were unambiguously identified using aberration-corrected high-resolution transmission electron microscopy inside double-walled CNTs.
The observation of the "longest" carbon nanotubes grown so far, around 1/2 meter (550 mm long), was reported in 2013. These nanotubes were grown on silicon substrates using an improved chemical vapor deposition (CVD) method and represent electrically uniform arrays of single-walled carbon nanotubes.
The "shortest" carbon nanotube can be considered to be the organic compound cycloparaphenylene, which was synthesized in 2008.
The "highest density" of CNTs was achieved in 2013, grown on a conductive titanium-coated copper surface that was coated with co-catalysts cobalt and molybdenum at lower than typical temperatures of 450 °C. The tubes averaged a height of 380 nm and a mass density of 1.6 g cm−3. The material showed ohmic conductivity (lowest resistance ∼22 kΩ).
There is no consensus on some terms describing carbon nanotubes in scientific literature: both "-wall" and "-walled" are used in combination with "single", "double", "triple", or "multi", and the letter C is often omitted in the abbreviation, for example, multi-walled carbon nanotube (MWNT). The International Organization for Standardization uses "single-wall" or "multi-wall" in its documents.
Multi-walled nanotubes (MWNTs) consist of multiple rolled layers (concentric tubes) of graphene. There are two models that can be used to describe the structures of multi-walled nanotubes. In the "Russian Doll" model, sheets of graphite are arranged in concentric cylinders, e.g., a (0,8) single-walled nanotube (SWNT) within a larger (0,17) single-walled nanotube. In the "Parchment" model, a single sheet of graphite is rolled in around itself, resembling a scroll of parchment or a rolled newspaper. The interlayer distance in multi-walled nanotubes is close to the distance between graphene layers in graphite, approximately 3.4 Å. The Russian Doll structure is observed more commonly. Its individual shells can be described as SWNTs, which can be metallic or semiconducting. Because of statistical probability and restrictions on the relative diameters of the individual tubes, one of the shells, and thus the whole MWNT, is usually a zero-gap metal.
Double-walled carbon nanotubes (DWNTs) form a special class of nanotubes because their morphology and properties are similar to those of SWNTs but they are more resistant to chemicals. This is especially important when it is necessary to graft chemical functions to the surface of the nanotubes (functionalization) to add properties to the CNT. Covalent functionalization of SWNTs will break some C=C double bonds, leaving "holes" in the structure on the nanotube and thus modifying both its mechanical and electrical properties. In the case of DWNTs, only the outer wall is modified. DWNT synthesis on the gram-scale by the CCVD technique was first proposed in 2003 from the selective reduction of oxide solutions in methane and hydrogen.
The telescopic motion ability of inner shells and their unique mechanical properties will permit the use of multi-walled nanotubes as the main movable arms in upcoming nanomechanical devices. The retraction force that occurs during telescopic motion is caused by the Lennard-Jones interaction between shells, and its value is about 1.5 nN.
Junctions between two or more nanotubes have been widely discussed theoretically. Such junctions are quite frequently observed in samples prepared by arc discharge as well as by chemical vapor deposition. The electronic properties of such junctions were first considered theoretically by Lambin et al., who pointed out that a connection between a metallic tube and a semiconducting one would represent a nanoscale heterojunction. Such a junction could therefore form a component of a nanotube-based electronic circuit. The adjacent image shows a junction between two multiwalled nanotubes.
Junctions between nanotubes and graphene have been considered theoretically and studied experimentally. Nanotube-graphene junctions form the basis of pillared graphene, in which parallel graphene sheets are separated by short nanotubes. Pillared graphene represents a class of three-dimensional carbon nanotube architectures.
Recently, several studies have highlighted the prospect of using carbon nanotubes as building blocks to fabricate three-dimensional macroscopic (>100 nm in all three dimensions) all-carbon devices. Lalwani et al. have reported a novel radical-initiated thermal crosslinking method to fabricate macroscopic, free-standing, porous, all-carbon scaffolds using single- and multi-walled carbon nanotubes as building blocks. These scaffolds possess macro-, micro-, and nano-structured pores, and the porosity can be tailored for specific applications. These 3D all-carbon scaffolds/architectures may be used for the fabrication of the next generation of energy storage, supercapacitors, field emission transistors, high-performance catalysis, photovoltaics, and biomedical devices, implants, and sensors.
Carbon nanobuds are a newly created material combining two previously discovered allotropes of carbon: carbon nanotubes and fullerenes. In this new material, fullerene-like "buds" are covalently bonded to the outer sidewalls of the underlying carbon nanotube. This hybrid material has useful properties of both fullerenes and carbon nanotubes. In particular, they have been found to be exceptionally good field emitters. In composite materials, the attached fullerene molecules may function as molecular anchors preventing slipping of the nanotubes, thus improving the composite's mechanical properties.
A carbon peapod is a novel hybrid carbon material which traps fullerene inside a carbon nanotube. It can possess interesting magnetic properties with heating and irradiation. It can also be applied as an oscillator during theoretical investigations and predictions.
In theory, a nanotorus is a carbon nanotube bent into a torus (doughnut shape). Nanotori are predicted to have many unique properties, such as magnetic moments 1000 times larger than that previously expected for certain specific radii. Properties such as magnetic moment, thermal stability, etc. vary widely depending on the radius of the torus and the radius of the tube.
Graphenated carbon nanotubes are a relatively new hybrid that combines graphitic foliates grown along the sidewalls of multiwalled or bamboo style CNTs. The foliate density can vary as a function of deposition conditions (e.g., temperature and time) with their structure ranging from a few layers of graphene (< 10) to thicker, more graphite-like. The fundamental advantage of an integrated graphene-CNT structure is the high surface area three-dimensional framework of the CNTs coupled with the high edge density of graphene. Depositing a high density of graphene foliates along the length of aligned CNTs can significantly increase the total charge capacity per unit of nominal area as compared to other carbon nanostructures.
Cup-stacked carbon nanotubes (CSCNTs) differ from other quasi-1D carbon structures, which normally behave as quasi-metallic conductors of electrons. CSCNTs exhibit semiconducting behavior because of the stacking microstructure of graphene layers.
Many properties of single-walled carbon nanotubes depend significantly on the ("n","m") type, and this dependence is non-monotonic (see Kataura plot). In particular, the band gap can vary from zero to about 2 eV and the electrical conductivity can show metallic or semiconducting behavior.
Carbon nanotubes are the strongest and stiffest materials yet discovered in terms of tensile strength and elastic modulus. This strength results from the covalent sp² bonds formed between the individual carbon atoms. In 2000, a multiwalled carbon nanotube was tested to have a tensile strength of 63 GPa (for illustration, this translates into the ability to endure tension of a weight equivalent to 6,422 kilograms-force on a cable with a cross-section of 1 mm²). Further studies, such as one conducted in 2008, revealed that individual CNT shells have strengths of up to ≈100 GPa, which is in agreement with quantum/atomistic models. Because carbon nanotubes have a low density for a solid of 1.3 to 1.4 g/cm³, their specific strength of up to 48,000 kN·m·kg−1 is the best of known materials, compared to high-carbon steel's 154 kN·m·kg−1.
Although the strength of individual CNT shells is extremely high, weak shear interactions between adjacent shells and tubes lead to significant reduction in the effective strength of multiwalled carbon nanotubes and carbon nanotube bundles down to only a few GPa. This limitation has been recently addressed by applying high-energy electron irradiation, which crosslinks inner shells and tubes, and effectively increases the strength of these materials to ≈60 GPa for multiwalled carbon nanotubes and ≈17 GPa for double-walled carbon nanotube bundles. CNTs are not nearly as strong under compression. Because of their hollow structure and high aspect ratio, they tend to undergo buckling when placed under compressive, torsional, or bending stress.
On the other hand, there was evidence that in the radial direction they are rather soft. The first transmission electron microscope observation of radial elasticity suggested that even van der Waals forces can deform two adjacent nanotubes. Later, nanoindentations with an atomic force microscope were performed by several groups to quantitatively measure radial elasticity of multiwalled carbon nanotubes, and tapping/contact mode atomic force microscopy was also performed on single-walled carbon nanotubes. Young's moduli on the order of several GPa showed that CNTs are in fact very soft in the radial direction.
Unlike graphene, which is a two-dimensional semimetal, carbon nanotubes are either metallic or semiconducting along the tubular axis. For a given ("n","m") nanotube, if "n" = "m", the nanotube is metallic; if "n" − "m" is a multiple of 3 and n ≠ m and nm ≠ 0, then the nanotube is quasi-metallic with a very small band gap, otherwise the nanotube is a moderate semiconductor.
Thus, all armchair ("n" = "m") nanotubes are metallic, and nanotubes (6,4), (9,1), etc. are semiconducting.
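These rules are straightforward to encode. The following minimal Python sketch (illustrative only; it applies the zone-folding rule stated above and ignores the small-diameter curvature exceptions discussed below) classifies a tube from its ("n","m") type:

```python
def classify(n: int, m: int) -> str:
    """Band-structure class from the (n, m) type (zone-folding rule)."""
    if n == m:
        return "metallic"            # armchair tubes
    if (n - m) % 3 == 0 and n * m != 0:
        return "quasi-metallic"      # very small band gap
    return "semiconducting"          # moderate band gap

assert classify(5, 5) == "metallic"         # armchair
assert classify(9, 3) == "quasi-metallic"   # n - m divisible by 3
assert classify(6, 4) == "semiconducting"
assert classify(9, 1) == "semiconducting"
```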
Carbon nanotubes are not semimetallic because the degenerate point (the point where the π [bonding] band meets the π* [anti-bonding] band, at which the energy goes to zero) is slightly shifted away from the "K" point in the Brillouin zone because of the curvature of the tube surface, causing hybridization between the σ* and π* anti-bonding bands, modifying the band dispersion.
The rule regarding metallic versus semiconductor behavior has exceptions because curvature effects in small-diameter tubes can strongly influence electrical properties. Thus, a (5,0) SWCNT that should be semiconducting in fact is metallic according to the calculations. Likewise, zigzag and chiral SWCNTs with small diameters that should be metallic have a finite gap (armchair nanotubes remain metallic). In theory, metallic nanotubes can carry an electric current density of 4 × 10⁹ A/cm², which is more than 1,000 times greater than those of metals such as copper, where for copper interconnects, current densities are limited by electromigration. Carbon nanotubes are thus being explored as interconnects and conductivity-enhancing components in composite materials, and many groups are attempting to commercialize highly conducting electrical wire assembled from individual carbon nanotubes. There are significant challenges to be overcome however, such as undesired current saturation under voltage, and the much more resistive nanotube-to-nanotube junctions and impurities, all of which lower the electrical conductivity of the macroscopic nanotube wires by orders of magnitude, as compared to the conductivity of the individual nanotubes.
Because of its nanoscale cross-section, electrons propagate only along the tube's axis. As a result, carbon nanotubes are frequently referred to as one-dimensional conductors. The maximum electrical conductance of a single-walled carbon nanotube is 2"G"₀, where "G"₀ = 2"e"²/"h" is the conductance of a single ballistic quantum channel.
Because of the role of the π-electron system in determining the electronic properties of graphene, doping in carbon nanotubes differs from that of bulk crystalline semiconductors from the same group of the periodic table (e.g., silicon). Graphitic substitution of carbon atoms in the nanotube wall by boron or nitrogen dopants leads to p-type and n-type behavior, respectively, as would be expected in silicon. However, some non-substitutional (intercalated or adsorbed) dopants introduced into a carbon nanotube, such as alkali metals and electron-rich metallocenes, result in n-type conduction because they donate electrons to the π-electron system of the nanotube. By contrast, π-electron acceptors such as FeCl3 or electron-deficient metallocenes function as p-type dopants because they draw π-electrons away from the top of the valence band.
Intrinsic superconductivity has been reported, although other experiments found no evidence of this, leaving the claim a subject of debate.
Carbon nanotubes have useful absorption, photoluminescence (fluorescence), and Raman spectroscopy properties. Spectroscopic methods offer the possibility of quick and non-destructive characterization of relatively large amounts of carbon nanotubes. There is a strong demand for such characterization from the industrial point of view: numerous parameters of nanotube synthesis can be changed, intentionally or unintentionally, to alter the nanotube quality. As shown below, optical absorption, photoluminescence, and Raman spectroscopies allow quick and reliable characterization of this "nanotube quality" in terms of non-tubular carbon content, structure (chirality) of the produced nanotubes, and structural defects. These features determine nearly any other properties such as optical, mechanical, and electrical properties.
Carbon nanotubes are unique "one-dimensional systems" which can be envisioned as rolled single sheets of graphite (or more precisely graphene). This rolling can be done at different angles and curvatures resulting in different nanotube properties. The diameter typically varies in the range 0.4–40 nm (i.e., "only" ~100 times), but the length can vary ~100,000,000,000 times, from 0.14 nm to 55.5 cm. The nanotube aspect ratio, or the length-to-diameter ratio, can be as high as 132,000,000:1, which is unequalled by any other material. Consequently, all the properties of the carbon nanotubes relative to those of typical semiconductors are extremely anisotropic (directionally dependent) and tunable.
Whereas mechanical, electrical, and electrochemical (supercapacitor) properties of the carbon nanotubes are well established and have immediate applications, the practical use of optical properties is yet unclear. The aforementioned tunability of properties is potentially useful in optics and photonics. In particular, light-emitting diodes (LEDs) and photo-detectors based on a single nanotube have been produced in the lab. Their unique feature is not the efficiency, which is yet relatively low, but the narrow selectivity in the wavelength of emission and detection of light and the possibility of its fine tuning through the nanotube structure. In addition, bolometer and optoelectronic memory devices have been realised on ensembles of single-walled carbon nanotubes.
Crystallographic defects also affect the tube's electrical properties. A common result is lowered conductivity through the defective region of the tube. A defect in armchair-type tubes (which can conduct electricity) can cause the surrounding region to become semiconducting, and single monatomic vacancies induce magnetic properties.
All nanotubes are expected to be very good thermal conductors along the tube, exhibiting a property known as "ballistic conduction", but good insulators lateral to the tube axis. Measurements show that an individual SWNT has a room-temperature thermal conductivity along its axis of about 3500 W·m−1·K−1; compare this to copper, a metal well known for its good thermal conductivity, which transmits 385 W·m−1·K−1. An individual SWNT has a room-temperature thermal conductivity lateral to its axis (in the radial direction) of about 1.52 W·m−1·K−1, which is about as thermally conductive as soil. Macroscopic assemblies of nanotubes such as films or fibres have reached up to 1500 W·m−1·K−1 so far. Networks composed of nanotubes demonstrate different values of thermal conductivity, from the level of thermal insulation with the thermal conductivity of 0.1 W·m−1·K−1 to such high values. That is dependent on the amount of contribution to the thermal resistance of the system caused by the presence of impurities, misalignments and other factors. The temperature stability of carbon nanotubes is estimated to be up to 2800 °C in vacuum and about 750 °C in air.
Crystallographic defects strongly affect the tube's thermal properties. Such defects lead to phonon scattering, which in turn increases the relaxation rate of the phonons. This reduces the mean free path and reduces the thermal conductivity of nanotube structures. Phonon transport simulations indicate that substitutional defects such as nitrogen or boron will primarily lead to scattering of high-frequency optical phonons. However, larger-scale defects such as Stone–Wales defects cause phonon scattering over a wide range of frequencies, leading to a greater reduction in thermal conductivity.
Techniques have been developed to produce nanotubes in sizable quantities, including arc discharge, laser ablation, chemical vapor deposition (CVD) and high-pressure carbon monoxide disproportionation (HiPCO). Among these, arc discharge, laser ablation and CVD are batch processes, while HiPCO is a continuous gas-phase process. Most of these processes take place in a vacuum or with process gases. The CVD growth method is popular, as it yields high quantity and has a degree of control over diameter, length and morphology. Using particulate catalysts, large quantities of nanotubes can be synthesized by these methods, but achieving repeatability is a major problem with CVD growth. Advances in catalysis and continuous growth are making CNTs produced by the HiPCO process more commercially viable. The HiPCO process helps in producing high-purity single-walled carbon nanotubes in higher quantity. The HiPCO reactor operates at high temperature (900–1100 °C) and high pressure (~30–50 bar). It uses carbon monoxide as the carbon source and iron pentacarbonyl or nickel tetracarbonyl as a catalyst, which acts as the nucleation site for the nanotubes to grow.
Vertically aligned carbon nanotube arrays are also grown by thermal chemical vapor deposition. A substrate (quartz, silicon, stainless steel, etc.) is coated with a catalytic metal (Fe, Co, Ni) layer. Typically that layer is iron, and is deposited via sputtering to a thickness of 1–5 nm. A 10–50 nm underlayer of alumina is often also put down on the substrate first. This imparts controllable wetting and good interfacial properties.
When the substrate is heated to the growth temperature (~700 °C), the continuous iron film breaks up into small islands; each island then nucleates a carbon nanotube. The sputtered thickness controls the island size, and this in turn determines the nanotube diameter. Thinner iron layers drive down the diameter of the islands and of the nanotubes grown from them. The amount of time that the metal islands can sit at the growth temperature is limited, as they are mobile and can merge into larger (but fewer) islands. Annealing at the growth temperature reduces the site density (number of CNTs/mm²) while increasing the catalyst diameter.
The as-prepared carbon nanotubes always have impurities such as other forms of carbon (amorphous carbon, fullerene, etc.) and non-carbonaceous impurities (metal used for catalyst). These impurities need to be removed to make use of the carbon nanotubes in applications.
Carbon nanotubes are modelled in a similar manner to traditional composites, in which a reinforcement phase is surrounded by a matrix phase. Ideal models such as cylindrical, hexagonal and square models are common. The size of the micromechanics model depends strongly on the studied mechanical properties. The concept of a representative volume element (RVE) is used to determine the appropriate size and configuration of the computer model to replicate the actual behavior of a CNT-reinforced nanocomposite. Depending on the material property of interest (thermal, electrical, modulus, creep), one RVE might predict the property better than the alternatives. While ideal models are computationally efficient, they do not represent the microstructural features observed in scanning electron microscopy of actual nanocomposites. To incorporate realistic modeling, computer models are also generated to incorporate variability such as waviness, orientation and agglomeration of multiwall or single wall carbon nanotubes.
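As a toy illustration of the simplest such micromechanics estimate, a modified rule of mixtures with an efficiency factor (an assumption of this sketch, not a method the sources above prescribe) gives the longitudinal modulus of an aligned CNT composite:

```python
def longitudinal_modulus(e_cnt: float, e_matrix: float,
                         vf: float, eta: float = 0.2) -> float:
    """Modified rule of mixtures for an aligned short-fiber composite.

    e_cnt, e_matrix -- moduli of the nanotube and matrix phases (GPa)
    vf              -- nanotube volume fraction (0..1)
    eta             -- efficiency factor lumping waviness, orientation
                       and length effects (illustrative value only)
    """
    return eta * vf * e_cnt + (1.0 - vf) * e_matrix

# e.g. 1 vol% CNTs (E ~ 1000 GPa) in an epoxy matrix (E ~ 3 GPa):
print(longitudinal_modulus(1000.0, 3.0, 0.01))  # ~4.97 GPa
```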
There are many metrology standards and reference materials available for carbon nanotubes.
For single-wall carbon nanotubes, ISO/TS 10868 describes a measurement method for the diameter, purity, and fraction of metallic nanotubes through optical absorption spectroscopy, while ISO/TS 10797 and ISO/TS 10798 establish methods to characterize the morphology and elemental composition of single-wall carbon nanotubes, using transmission electron microscopy and scanning electron microscopy respectively, coupled with energy dispersive X-ray spectrometry analysis.
NIST SRM 2483 is a soot of single-wall carbon nanotubes used as a reference material for elemental analysis, and was characterized using thermogravimetric analysis, prompt gamma activation analysis, induced neutron activation analysis, inductively coupled plasma mass spectroscopy, resonant Raman scattering, UV-visible-near infrared fluorescence spectroscopy and absorption spectroscopy, scanning electron microscopy, and transmission electron microscopy. The Canadian National Research Council also offers a certified reference material SWCNT-1 for elemental analysis using neutron activation analysis and inductively coupled plasma mass spectroscopy. NIST RM 8281 is a mixture of three lengths of single-wall carbon nanotube.
For multiwall carbon nanotubes, ISO/TR 10929 identifies the basic properties and the content of impurities, while ISO/TS 11888 describes morphology using scanning electron microscopy, transmission electron microscopy, viscometry, and light scattering analysis. ISO/TS 10798 is also valid for multiwall carbon nanotubes.
Carbon nanotubes can be functionalized to attain desired properties that can be used in a wide variety of applications. The two main methods of carbon nanotube functionalization are covalent and non-covalent modifications. Because of their apparent hydrophobic nature, carbon nanotubes tend to agglomerate hindering their dispersion in solvents or viscous polymer melts. The resulting nanotube bundles or aggregates reduce the mechanical performance of the final composite. The surface of the carbon nanotubes can be modified to reduce the hydrophobicity and improve interfacial adhesion to a bulk polymer through chemical attachment.
Also, the surface of carbon nanotubes can be fluorinated or halofluorinated by the CVD method with fluorocarbons, or hydro- or halofluorocarbons, by heating the carbon material while in contact with the fluoroorganic substance, forming partially fluorinated carbons (so-called Fluocar materials) with grafted (halo)fluoroalkyl functionality.
A primary obstacle for applications of carbon nanotubes has been their cost. Prices for single-walled nanotubes declined from around $1500 per gram as of 2000 to retail prices of around $50 per gram of as-produced 40–60% by weight SWNTs as of March 2010. As of 2016, the retail price of as-produced 75% by weight SWNTs was $2 per gram. SWNTs are forecast to make a large impact in electronics applications by 2020 according to "The Global Market for Carbon Nanotubes" report.
Current use and application of nanotubes has mostly been limited to the use of bulk nanotubes, which is a mass of rather unorganized fragments of nanotubes. Bulk nanotube materials may never achieve a tensile strength similar to that of individual tubes, but such composites may, nevertheless, yield strengths sufficient for many applications. Bulk carbon nanotubes have already been used as composite fibers in polymers to improve the mechanical, thermal and electrical properties of the bulk product.
Other current applications include:
Current research for modern applications include:
Carbon nanotubes can serve as additives to various structural materials. For instance, nanotubes form a tiny portion of the material(s) in some (primarily carbon fiber) baseball bats, golf clubs, car parts, or damascus steel.
IBM expected carbon nanotube transistors to be used on Integrated Circuits by 2020.
The strength and flexibility of carbon nanotubes makes them of potential use in controlling other nanoscale structures, which suggests they will have an important role in nanotechnology engineering. The highest tensile strength of an individual multi-walled carbon nanotube has been tested to be 63 GPa. Carbon nanotubes were found in Damascus steel from the 17th century, possibly helping to account for the legendary strength of the swords made of it.
CNTs are potential candidates for future via and wire material in nano-scale VLSI circuits. Isolated (single- and multi-wall) CNTs can carry current densities in excess of 1000 MA/cm² without electromigration damage, eliminating the electromigration reliability concerns that plague today's Cu interconnects.
Single-walled nanotubes are likely candidates for miniaturizing electronics. The most basic building block of these systems is an electric wire, and SWNTs with diameters of an order of a nanometer can be excellent conductors. One useful application of SWNTs is in the development of the first intermolecular field-effect transistors (FET). The first intermolecular logic gate using SWCNT FETs was made in 2001. A logic gate requires both a p-FET and an n-FET. Because SWNTs are p-FETs when exposed to oxygen and n-FETs otherwise, it is possible to expose half of an SWNT to oxygen and protect the other half from it. The resulting SWNT acts as a "not" logic gate with both p- and n-type FETs in the same molecule.
Large quantities of pure CNTs can be made into a freestanding sheet or film by the surface-engineered tape-casting (SETC) fabrication technique, a scalable method to fabricate flexible and foldable sheets with superior properties. Another reported form factor is CNT fiber (a.k.a. filament) made by wet spinning. The fiber is either directly spun from the synthesis pot or spun from pre-made dissolved CNTs. Individual fibers can be turned into a yarn. Apart from its strength and flexibility, the main advantage is making an electrically conducting yarn. The electronic properties of individual CNT fibers (i.e. bundles of individual CNTs) are governed by the two-dimensional structure of CNTs. The fibers were measured to have a resistivity only one order of magnitude higher than metallic conductors at 300 K. By further optimizing the CNTs and CNT fibers, CNT fibers with improved electrical properties could be developed.
CNT-based yarns are suitable for applications in energy and electrochemical water treatment when coated with an ion-exchange membrane. Also, CNT-based yarns could replace copper as a winding material. Pyrhönen et al. (2015) have built a motor using CNT winding.
The National Institute for Occupational Safety and Health (NIOSH) is the leading United States federal agency conducting research and providing guidance on the occupational safety and health implications and applications of nanotechnology. Early scientific studies have indicated that some of these nanoscale particles may pose a greater health risk than the larger bulk form of these materials. In 2013, NIOSH published a Current Intelligence Bulletin detailing the potential hazards and recommended exposure limit for carbon nanotubes and fibers.
As of October 2016, single wall carbon nanotubes have been registered through the European Union's Registration, Evaluation, Authorization and Restriction of Chemicals (REACH) regulations, based on evaluation of the potentially hazardous properties of SWCNT. Based on this registration, SWCNT commercialization is allowed in the EU up to 10 metric tons. Currently, the type of SWCNT registered through REACH is limited to the specific type of single wall carbon nanotubes manufactured by OCSiAl, which submitted the application.
The true identity of the discoverers of carbon nanotubes is a subject of some controversy. A 2006 editorial written by Marc Monthioux and Vladimir Kuznetsov in the journal "Carbon" described the interesting and often-misstated origin of the carbon nanotube. A large percentage of academic and popular literature attributes the discovery of hollow, nanometer-size tubes composed of graphitic carbon to Sumio Iijima of NEC in 1991. He published a paper describing his discovery, which initiated a flurry of excitement and could be credited with inspiring the many scientists now studying applications of carbon nanotubes. Though Iijima has been given much of the credit for discovering carbon nanotubes, it turns out that the timeline of carbon nanotubes goes back much further than 1991.
In 1952, L. V. Radushkevich and V. M. Lukyanovich published clear images of 50 nanometer diameter tubes made of carbon in the Soviet "Journal of Physical Chemistry". This discovery was largely unnoticed, as the article was published in Russian, and Western scientists' access to Soviet press was limited during the Cold War. Monthioux and Kuznetsov mentioned in their "Carbon" editorial:
In 1976, Morinobu Endo of CNRS observed hollow tubes of rolled up graphite sheets synthesised by a chemical vapour-growth technique. The first specimens observed would later come to be known as single-walled carbon nanotubes (SWNTs). Endo, in his early review of vapor-phase-grown carbon fibers (VPGCF), also reminded us that he had observed a hollow tube, linearly extended with parallel carbon layer faces near the fiber core. This appears to be the observation of multi-walled carbon nanotubes at the center of the fiber. The mass-produced MWCNTs today are strongly related to the VPGCF developed by Endo. In fact, they call it the "Endo-process", out of respect for his early work and patents.
In 1979, John Abrahamson presented evidence of carbon nanotubes at the 14th Biennial Conference of Carbon at Pennsylvania State University. The conference paper described carbon nanotubes as carbon fibers that were produced on carbon anodes during arc discharge. A characterization of these fibers was given as well as hypotheses for their growth in a nitrogen atmosphere at low pressures.
In 1981, a group of Soviet scientists published the results of chemical and structural characterization of carbon nanoparticles produced by a thermocatalytical disproportionation of carbon monoxide. Using TEM images and XRD patterns, the authors suggested that their "carbon multi-layer tubular crystals" were formed by rolling graphene layers into cylinders. They speculated that by rolling graphene layers into a cylinder, many different arrangements of graphene hexagonal nets are possible. They suggested two possibilities of such arrangements: circular arrangement (armchair nanotube) and a spiral, helical arrangement (chiral tube).
In 1987, Howard G. Tennent of Hyperion Catalysis was issued a U.S. patent for the production of "cylindrical discrete carbon fibrils" with a "constant diameter between about 3.5 and about 70 nanometers..., length 10² times the diameter, and an outer region of multiple essentially continuous layers of ordered carbon atoms and a distinct inner core..."
Iijima's discovery of multi-walled carbon nanotubes in the insoluble material of arc-burned graphite rods in 1991 and Mintmire, Dunlap, and White's independent prediction that if single-walled carbon nanotubes could be made, then they would exhibit remarkable conducting properties helped create the initial excitement associated with carbon nanotubes. Nanotube research accelerated greatly following the independent discoveries by Iijima and Ichihashi at NEC and Bethune "et al." at IBM of "single-walled" carbon nanotubes and methods to specifically produce them by adding transition-metal catalysts to the carbon in an arc discharge. The arc discharge technique was well known to produce the famed Buckminster fullerene on a preparative scale, and these results appeared to extend the run of accidental discoveries relating to fullerenes. The discovery of nanotubes remains a contentious issue. Many believe that Iijima's report in 1991 is of particular importance because it brought carbon nanotubes into the awareness of the scientific community as a whole.
"This article incorporates public domain text from National Institute of Environmental Health Sciences (NIEHS) as quoted."
| https://en.wikipedia.org/wiki?curid=5320
Czech Republic
The Czech Republic, also known by its short-form name Czechia, is a landlocked country in Central Europe bordered by Austria to the south, Germany to the west, Poland to the northeast and Slovakia to the southeast. The Czech Republic has a hilly landscape covering an area of 78,866 square kilometres, with a mostly temperate climate combining continental and oceanic influences. It is a unitary parliamentary republic with about 10.7 million inhabitants. Its capital and largest city is Prague, with 1.3 million residents; other major cities include Brno, Ostrava, Olomouc and Pilsen.
The Czech state was formed in the late 9th century as the Duchy of Bohemia under the Great Moravian Empire. In 1002, the duchy was formally recognized as an Imperial State of the Holy Roman Empire, and became the Kingdom of Bohemia in 1198, reaching its greatest territorial extent in the 14th century. Prague was the imperial seat in periods between the 14th and 17th century. The Bohemian Reformation of the 15th century led to the Hussite Wars, which resulted in a period of confessional pluralism and relative religious tolerance.
Following the Battle of Mohács in 1526, the whole Crown of Bohemia was gradually integrated into the Habsburg Monarchy. The Protestant Bohemian Revolt (1618–20) against the Catholic Habsburgs led to the Thirty Years' War. After the Battle of the White Mountain, the Habsburgs consolidated their rule, reimposed Catholicism, and adopted a policy of gradual Germanization. With the dissolution of the Holy Roman Empire in 1806, the Bohemian Crown lands became part of the Austrian Empire, and the Czech (Bohemian) language and literature experienced a cultural revival. In the 19th century, the Czech lands became heavily industrialized and were subsequently the core of the First Czechoslovak Republic, which was formed in 1918 following the collapse of the Austro-Hungarian Empire after World War I.
Czechoslovakia was the only democracy in Central Europe during the interwar period. However, beginning in 1938, Nazi Germany systematically annexed the Czech Lands, while Slovakia became a German puppet state. The country was restored in 1945. Most members of the German-speaking minority were expelled following the war. The Communist Party of Czechoslovakia won a plurality in the 1946 elections and, after the 1948 "coup d'état", established a one-party communist state under Soviet influence. Increasing dissatisfaction with the regime culminated in 1968 in the reform movement known as the Prague Spring, which ended in a Soviet-led invasion. Czechoslovakia remained occupied until the 1989 Velvet Revolution, which peacefully ended communist rule and reestablished democracy with a market economy. On 1 January 1993, Czechoslovakia peacefully dissolved, with its constituent states becoming the independent states of the Czech Republic and Slovakia. The Czech Republic joined NATO in 1999 and the European Union in 2004. It is also a member of the OECD, the United Nations, the OSCE, and the Council of Europe.
The Czech Republic is a developed country with an advanced, high-income social market economy. It is a welfare state with a European social model, universal health care, and tuition-free university education. It ranks 13th in the UN inequality-adjusted Human Development Index and 14th in the World Bank Human Capital Index, ahead of countries such as the United States, the United Kingdom and France. It ranks as the eleventh safest and most peaceful country and performs well in democratic governance.
The traditional English name "Bohemia" derives from Latin "Boiohaemum", which means "home of the Boii" (a Gallic tribe). The current English name comes from the Polish ethnonym associated with the area, which ultimately comes from the Czech word "Čech". The name comes from the eponymous Slavic tribe and, according to legend, their leader Čech, who brought them to Bohemia to settle on Říp Mountain. The etymology of the word "Čech" can be traced back to the Proto-Slavic root "*čel-", meaning "member of the people; kinsman", thus making it cognate to the Czech word "člověk" (a person).
The country has been traditionally divided into three lands, namely Bohemia ("Čechy") in the west, Moravia ("Morava") in the east, and Czech Silesia ("Slezsko"; the smaller, south-eastern part of historical Silesia, most of which is located within modern Poland) in the northeast. Known as the "lands of the Bohemian Crown" since the 14th century, a number of other names for the country have been used, including "Czech/Bohemian lands", "Bohemian Crown", "Czechia" and the "lands of the Crown of Saint Wenceslas". When the country regained its independence after the dissolution of the Austro-Hungarian empire in 1918, the new name of "Czechoslovakia" was coined to reflect the union of the Czech and Slovak nations within the one country.
After Czechoslovakia dissolved at the end of 1992, the new Czech state lacked a common English short name. The Czech Ministry of Foreign Affairs recommended the English name "Czechia" in 1993, and the Czech government approved "Czechia" as the official short name in 2016.
Archaeologists have found evidence of prehistoric human settlements in the area, dating back to the Paleolithic era. The Venus of Dolní Věstonice, dated to 29,000–25,000 BCE, together with a few others from nearby locations, is the oldest known ceramic artifact in the world.
In the classical era, as a result of the 3rd century BC Celtic migrations, Bohemia became associated with the Boii. The Boii founded an oppidum near the site of modern Prague. Later in the 1st century, the Germanic tribes of the Marcomanni and Quadi settled there. Their king Maroboduus is the first documented ruler of Bohemia. During the Migration Period around the 5th century, many Germanic tribes moved westwards and southwards out of Central Europe. Most of the names of Czech rivers are Celtic or old Germanic in origin, dating from usage in those years.
Slavs from the Black Sea–Carpathian region settled in the area (their migration was pushed by an invasion of peoples from Siberia and Eastern Europe into their area: Huns, Avars, Bulgars and Magyars). In the sixth century, they moved westwards into Bohemia, Moravia, and some of present-day Austria and Germany.
During the 7th century, the Frankish merchant Samo, supporting the Slavs fighting against the nearby settled Avars, became the ruler of the first known Slavic state in Central Europe, Samo's Empire. The principality of Great Moravia, controlled by the Mojmir dynasty, arose in the 8th century. It reached its zenith in the 9th century (during the reign of Svatopluk I of Moravia), holding off the influence of the Franks. Great Moravia was Christianized, with a crucial role played by the Byzantine mission of Cyril and Methodius. They created the artificial language Old Church Slavonic, the first literary and liturgical language of the Slavs, and the Glagolitic alphabet.
The Duchy of Bohemia emerged in the late 9th century, when it was unified by the Přemyslid dynasty. In the 10th century, Boleslaus I, Duke of Bohemia, conquered Moravia and Silesia and expanded farther to the east. The Duchy of Bohemia, raised to the Kingdom of Bohemia in 1198, was from 1002 until 1806 an Imperial State of the Holy Roman Empire, alongside the Kingdom of Germany, the Kingdom of Burgundy, the Kingdom of Italy and numerous other territories such as the Old Swiss Confederacy and various Papal States. The kingdom was a significant regional power during the Middle Ages.
In 1212, King Přemysl Ottokar I (bearing the title "king" from 1198) extracted the Golden Bull of Sicily (a formal edict) from the emperor, confirming Ottokar and his descendants' royal status; the Duchy of Bohemia was raised to a kingdom. The bull declared that the King of Bohemia would be exempt from all future obligations to the Holy Roman Empire except for participation in imperial councils. German immigrants settled in the Bohemian periphery in the 13th century; they populated towns and mining districts and, in some cases, formed German colonies in the interior of Bohemia. In 1235, the Mongols launched an invasion of Europe. After the Battle of Legnica in Poland, the Mongols carried their raids into Moravia but were repelled at the fortified town of Olomouc. The Mongols subsequently invaded and defeated Hungary.
King Přemysl Otakar II earned the nickname "Iron and Golden King" because of his military power and wealth. He acquired Austria, Styria, Carinthia and Carniola, thus spreading the Bohemian territory to the Adriatic Sea. He met his death at the Battle on the Marchfeld in 1278 in a war with his rival, King Rudolph I of Germany. Ottokar's son Wenceslaus II acquired the Polish crown in 1300 for himself and the Hungarian crown for his son. He built a great empire stretching from the Danube river to the Baltic Sea. In 1306, the last king of the Přemyslid line, Wenceslaus III, was murdered under mysterious circumstances in Olomouc while he was resting there. After a series of dynastic wars, the House of Luxembourg gained the Bohemian throne.
The 14th century, in particular the reign of the Bohemian king Charles IV (1316–1378), who in 1346 became King of the Romans and in 1354 both King of Italy and Holy Roman Emperor, is considered the Golden Age of Czech history. Of particular significance was the founding of Charles University in Prague in 1348, along with the construction of Charles Bridge and Charles Square. Much of Prague Castle and the Gothic cathedral of Saint Vitus were completed during his reign. He unified Brandenburg (until 1415), Lusatia (until 1635), and Silesia (until 1742) under the Bohemian crown. The Black Death, which had raged in Europe from 1347 to 1352, decimated the Kingdom of Bohemia in 1380, killing about 10% of the population.
Efforts to reform the church in Bohemia began in the late 14th century with personalities like Milíč of Kroměříž and Matthias of Janov. The most famous figure of the nascent Bohemian Reformation was Jan Hus. Although Hus was named a heretic and burnt in Constance in 1415, his followers (led by warlords Jan Žižka and Prokop the Great) seceded from some practices of the Roman Church and in the Hussite Wars (1419–1434) defeated five crusades organized against them by the Holy Roman Emperor Sigismund. During the next two centuries, 90% of the population in the Bohemian and Moravian lands were considered Hussites. The Hussite George of Poděbrady even became king. Another great thinker of the Bohemian Reformation, Petr Chelčický, inspired the movement of the Bohemian Brethren, which completely separated from the Catholic Church (unlike the Hussites). Hus's thoughts were a major influence on later Lutheranism; Martin Luther himself said "we are all Hussites, without having been aware of it" and considered himself Hus's direct successor.
After 1526 Bohemia came increasingly under Habsburg control as the Habsburgs became first the elected and then, in 1627, the hereditary rulers of Bohemia. The Habsburg rulers of the 16th century, the founders of the Central European Habsburg Monarchy, were buried in Prague. Between 1583 and 1611 Prague was the official seat of the Holy Roman Emperor Rudolf II and his court.
The Defenestration of Prague and subsequent revolt against the Habsburgs in 1618 marked the start of the Thirty Years' War, which quickly spread throughout Central Europe. In 1620, the rebellion in Bohemia was crushed at the Battle of White Mountain, and the ties between Bohemia and the Habsburgs' hereditary lands in Austria were strengthened. The leaders of the Bohemian Revolt were executed in 1621. The nobility and the middle class Protestants had to either convert to Catholicism or leave the country. This is said to have contributed to anti-Habsburg sentiment and resentment of the Catholic Church that continues to this day.
The following period, from 1620 to the late 18th century, has often been called colloquially the "Dark Age". The population of the Czech lands declined by a third through the expulsion of Czech Protestants as well as due to the war, disease and famine. The Habsburgs prohibited all Christian confessions other than Catholicism. The flowering of Baroque culture shows the ambiguity of this historical period.
Ottoman Turks and Tatars invaded Moravia in 1663. In 1679–1680 the Czech lands faced the devastating Great Plague of Vienna and an uprising of serfs.
The reigns of Maria Theresa of Austria and her son Joseph II, Holy Roman Emperor and co-regent from 1765, were characterized by enlightened absolutism. In 1740, most of Silesia (except the southernmost area) was seized by King Frederick II of Prussia in the Silesian Wars. In 1757 the Prussians invaded Bohemia and, after the Battle of Prague (1757), occupied the city. More than one quarter of Prague was destroyed, and St. Vitus Cathedral also suffered heavy damage. Frederick was defeated soon after at the Battle of Kolín and had to leave Prague and retreat from Bohemia. In 1770–1771, the Great Famine killed about one tenth of the Czech population, or 250,000 inhabitants, and radicalized the countryside, leading to peasant uprisings. Serfdom was abolished (in two steps) between 1781 and 1848. Several large battles of the Napoleonic Wars – the Battle of Austerlitz and the Battle of Kulm – took place on the current territory of the Czech Republic. Joseph Radetzky von Radetz, born to a noble Czech family, was a field marshal and chief of the general staff of the Austrian Empire's army during these wars.
The end of the Holy Roman Empire in 1806 led to the degradation of the political status of the Kingdom of Bohemia. Bohemia lost its position as an electorate of the Holy Roman Empire as well as its own political representation in the Imperial Diet. The Bohemian lands became part of the Austrian Empire and later of Austria–Hungary. During the 18th and 19th centuries the Czech National Revival began its rise, with the aim of reviving the Czech language, culture and national identity. The Revolution of 1848 in Prague, striving for liberal reforms and autonomy of the Bohemian Crown within the Austrian Empire, was suppressed.
In 1866 Austria was defeated by Prussia in the Austro-Prussian War (see also the Battle of Königgrätz and the Peace of Prague). The Austrian Empire needed to redefine itself to maintain unity in the face of nationalism. At first it seemed that some concessions would also be made to Bohemia, but in the end the Emperor Franz Joseph I effected a compromise with Hungary only. The Austro-Hungarian Compromise of 1867 and the never-realized coronation of Franz Joseph as King of Bohemia deeply disappointed Czech politicians. The Bohemian Crown lands became part of the so-called Cisleithania (officially "The Kingdoms and Lands represented in the Imperial Council").
Prague pacifist Bertha von Suttner was awarded the Nobel Peace Prize in 1905. In the same year, the Czech Social Democratic and progressive politicians (including Tomáš Garrigue Masaryk) started the fight for universal suffrage. The first elections under universal male suffrage were held in 1907. The last King of Bohemia was Charles I of Austria who ruled in 1916–1918.
An estimated 1.4 million Czech soldiers fought in World War I, of whom some 150,000 died. Although the majority of Czech soldiers fought for the Austro-Hungarian Empire, more than 90,000 Czech volunteers formed the Czechoslovak Legions in France, Italy and Russia, where they fought against the Central Powers and later against Bolshevik troops. In 1918, during the collapse of the Habsburg Empire at the end of World War I, the independent republic of Czechoslovakia, which joined the winning Allied powers, was created, with Tomáš Garrigue Masaryk in the lead. This new country incorporated the Bohemian Crown (Bohemia, Moravia and Silesia) and parts of the Kingdom of Hungary (Slovakia and the Carpathian Ruthenia) with significant German, Hungarian, Polish and Ruthenian speaking minorities. Czechoslovakia concluded a treaty of alliance with Romania and Yugoslavia (the so-called Little Entente) and particularly with France.
The First Czechoslovak Republic comprised only 27% of the population of the former Austria-Hungary, but nearly 80% of the industry, which enabled it to compete successfully with Western industrial states. In 1929, compared to 1913, the gross domestic product had increased by 52% and industrial production by 41%. In 1938 Czechoslovakia held 10th place in world industrial production.
Although the First Czechoslovak Republic was a unitary state, it provided what were at the time rather extensive rights to its minorities and remained the only democracy in this part of Europe in the interwar period. The effects of the Great Depression including high unemployment and massive propaganda from Nazi Germany, however, resulted in discontent and strong support among ethnic Germans for a break from Czechoslovakia.
Adolf Hitler took advantage of this opportunity and using Konrad Henlein's separatist Sudeten German Party, gained the largely German-speaking Sudetenland (and its substantial Maginot Line-like border fortifications) through the 1938 Munich Agreement (signed by Nazi Germany, France, Britain, and Italy). Czechoslovakia was not invited to the conference, and Czechs and Slovaks call the Munich Agreement the Munich Betrayal because France (which had an alliance with Czechoslovakia) and Britain gave up Czechoslovakia instead of facing Hitler, which later proved inevitable.
Despite the mobilization of the 1.2 million-strong Czechoslovak army and the Franco-Czech military alliance, Poland annexed the Zaolzie area around Český Těšín, and Hungary gained parts of Slovakia and the Subcarpathian Rus as a result of the First Vienna Award in November 1938. The remainders of Slovakia and the Subcarpathian Rus gained greater autonomy, and the state was renamed "Czecho-Slovakia". After Nazi Germany threatened to annex part of Slovakia, allowing the remaining regions to be partitioned by Hungary and Poland, Slovakia chose to maintain its national and territorial integrity, seceding from Czecho-Slovakia in March 1939 and allying itself, as demanded by Germany, with Hitler's coalition.
The remaining Czech territory was occupied by Germany, which transformed it into the so-called Protectorate of Bohemia and Moravia. The protectorate was proclaimed part of the Third Reich, and the president and prime minister were subordinated to Nazi Germany's "Reichsprotektor". Subcarpathian Rus declared independence as the Republic of Carpatho-Ukraine on 15 March 1939 but was invaded by Hungary the same day and formally annexed the next day. Approximately 345,000 Czechoslovak citizens, including 277,000 Jews, were killed or executed, while hundreds of thousands of others were sent to prisons and Nazi concentration camps or used as forced labor. Up to two-thirds of the citizens were in groups targeted by the Nazis for deportation or death. One concentration camp was located within the Czech territory at Terezín, north of Prague. The Nazi "Generalplan Ost" called for the extermination, expulsion, Germanization or enslavement of most or all Czechs for the purpose of providing more living space for the German people.
There was Czech resistance to Nazi occupation, both at home and abroad, most notably the assassination of Nazi German leader Reinhard Heydrich by Czechoslovak soldiers Jozef Gabčík and Jan Kubiš in a Prague suburb on 27 May 1942. On 9 June 1942 Hitler ordered bloody reprisals against the Czechs as a response to the Czech anti-Nazi resistance. Edvard Beneš's Czechoslovak government-in-exile and its army fought against the Germans and were acknowledged by the Allies; Czech/Czechoslovak troops fought from the very beginning of the war in Poland, France, the UK, North Africa, the Middle East and the Soviet Union (see I Czechoslovakian Corps). The German occupation ended on 9 May 1945, with the arrival of the Soviet and American armies and the Prague uprising. An estimated 140,000 Soviet soldiers died in liberating Czechoslovakia from German rule.
In 1945–1946, almost the entire German-speaking minority in Czechoslovakia, about 3 million people, was expelled to Germany and Austria (see also the Beneš decrees). During this time, thousands of Germans were held in prisons and detention camps or used as forced labor. In the summer of 1945, there were several massacres, such as the Postoloprty massacre. Research by a joint German and Czech commission of historians in 1995 found that the death toll of the expulsions was at least 15,000 persons and that it could range up to a maximum of 30,000 dead. The only Germans not expelled were some 250,000 who had been active in the resistance against the Nazis or were considered economically important, though many of these emigrated later. Following a Soviet-organized referendum, Subcarpathian Rus never returned to Czechoslovak rule but became part of the Ukrainian Soviet Socialist Republic, as the Zakarpattia Oblast, in 1946.
Czechoslovakia uneasily tried to play the role of a "bridge" between the West and East. However, the Communist Party of Czechoslovakia rapidly increased in popularity, owing to a general disillusionment with the West, because of the pre-war Munich Agreement, and a favourable popular attitude towards the Soviet Union, because of the Soviets' role in liberating Czechoslovakia from German rule. In the 1946 elections, the Communists gained 38% of the votes and became the largest party in the Czechoslovak parliament. They formed a coalition government with other parties of the National Front and moved quickly to consolidate power. A significant change came in 1948 with the coup d'état by the Communist Party. The Communist People's Militias secured control of key locations in Prague, and a single-party government was formed.
For the next 41 years, Czechoslovakia was a Communist state within the Eastern Bloc. This period is characterized by lagging behind the West in almost every aspect of social and economic development. The country's GDP per capita fell from the level of neighboring Austria to below that of Greece or Portugal in the 1980s. The Communist government completely nationalized the means of production and established a command economy. The economy grew rapidly during the 1950s but slowed down in the 1960s and 1970s and stagnated in the 1980s.
The political climate was highly repressive during the 1950s, including numerous show trials (the most famous victims being Milada Horáková and Rudolf Slánský) and hundreds of thousands of political prisoners, but it became more open and tolerant in the late 1960s, culminating in Alexander Dubček's leadership of the 1968 Prague Spring, which tried to create "socialism with a human face" and perhaps even introduce political pluralism. This was forcibly ended by an invasion by all other Warsaw Pact member countries, with the exception of Romania and Albania, on 21 August 1968. Student Jan Palach became a symbol of resistance to the occupation when he committed self-immolation as a political protest.
The invasion was followed by a harsh program of "Normalization" in the late 1960s and the 1970s. Until 1989, the political establishment relied on censorship of the opposition. Dissidents published Charter 77 in 1977, and the first of a new wave of protests were seen in 1988. Between 1948 and 1989 about 250,000 Czechs and Slovaks were sent to prison for political reasons, and over 400,000 emigrated.
In November 1989, Czechoslovakia returned to a liberal democracy through the peaceful "Velvet Revolution" (led by Václav Havel and his Civic Forum). However, Slovak national aspirations strengthened (see Hyphen War) and on 1 January 1993, the country peacefully split into the independent countries of the Czech Republic and Slovakia. Both countries went through economic reforms and privatisations, with the intention of creating a market economy. This process was largely successful; in 2006 the Czech Republic was recognized by the World Bank as a "developed country", and in 2009 the Human Development Index ranked it as a nation of "Very High Human Development".
From 1991, the Czech Republic, originally as part of Czechoslovakia and since 1993 in its own right, has been a member of the Visegrád Group and from 1995, the OECD. The Czech Republic joined NATO on 12 March 1999 and the European Union on 1 May 2004. On 21 December 2007 the Czech Republic joined the Schengen Area. Until 2017, either the Social Democrats (under Miloš Zeman, Vladimír Špidla, Stanislav Gross, Jiří Paroubek and Bohuslav Sobotka), or liberal-conservatives (under Václav Klaus, Mirek Topolánek and Petr Nečas) led the government of the Czech Republic.
The Czech Republic lies mostly between latitudes 48° and 51° N (a small area lies north of 51°), and longitudes 12° and 19° E.
The Czech landscape is exceedingly varied. Bohemia, to the west, consists of a basin drained by the Elbe (Labe) and Vltava rivers, surrounded by mostly low mountains, such as the Krkonoše range of the Sudetes. The highest point in the country, Sněžka at 1,603 m (5,259 ft), is located here. Moravia, the eastern part of the country, is also quite hilly. It is drained mainly by the Morava River, but it also contains the source of the Oder (Odra) River.
Water from the Czech Republic flows to three different seas: the North Sea, Baltic Sea and Black Sea. The Czech Republic also leases the Moldauhafen, a lot in the middle of the Hamburg Docks, which was awarded to Czechoslovakia by Article 363 of the Treaty of Versailles, to allow the landlocked country a place where goods transported down river could be transferred to seagoing ships. The territory reverts to Germany in 2028.
Phytogeographically, the Czech Republic belongs to the Central European province of the Circumboreal Region, within the Boreal Kingdom. According to the World Wide Fund for Nature, the territory of the Czech Republic can be subdivided into four ecoregions: the Western European broadleaf forests, Central European mixed forests, Pannonian mixed forests, and Carpathian montane conifer forests.
There are four national parks in the Czech Republic. The oldest is Krkonoše National Park (a Biosphere Reserve); the others are Šumava National Park (also a Biosphere Reserve), Podyjí National Park and Bohemian Switzerland National Park.
The three historical lands of the Czech Republic (formerly the core countries of the Bohemian Crown) correspond almost perfectly with the main river basins within Czech territory: the Elbe (Labe) and Vltava basins for Bohemia, the Morava basin for Moravia, and the Oder (Odra) basin for Czech Silesia.
The Czech Republic mostly has a temperate oceanic climate, with warm summers and cold, cloudy and snowy winters. The temperature difference between summer and winter is relatively high, due to the landlocked geographical position.
Within the Czech Republic, temperatures vary greatly depending on the elevation. In general, at higher altitudes the temperatures decrease and precipitation increases. The wettest area in the Czech Republic is found around Bílý Potok in the Jizera Mountains, and the driest region is the Louny District to the northwest of Prague. Another important factor is the distribution of the mountains, which makes the climate quite varied.
At the highest peak, Sněžka (1,603 m), the average annual temperature is only around the freezing point, whereas in the lowlands of the South Moravian Region the average temperature is considerably higher. The country's capital, Prague, has an average temperature similar to the lowlands, although this is influenced by urban factors.
The coldest month is usually January, followed by February and December. During these months, there is usually snow in the mountains and sometimes in the major cities and lowlands. During March, April, and May, the temperature usually increases rapidly, especially during April, when the temperature and weather tends to vary widely during the day. Spring is also characterized by high water levels in the rivers, due to melting snow with occasional flooding.
The warmest month of the year is July, followed by August and June. On average, summer temperatures are about 20 °C higher than during winter. Summer is also characterized by rain and storms.
Autumn generally begins in September, which is still relatively warm and dry. During October, temperatures usually fall noticeably and deciduous trees begin to shed their leaves. By the end of November, temperatures usually range around the freezing point.
The coldest temperature ever measured was −42.2 °C, at Litvínovice near České Budějovice in 1929; the hottest measured was 40.4 °C, at Dobřichovice in 2012.
Most rain falls during the summer. Sporadic rainfall is relatively constant throughout the year (in Prague, the average number of days per month with at least some rain varies from 12 in September and October to 16 in November), but concentrated heavy rainfall is more frequent in the months of May to August (on average around two such days per month). Severe thunderstorms, producing damaging straight-line winds, hail, and even occasional tornadoes, occur especially during the summer period.
The Czech Republic ranks as the 27th most environmentally conscious country in the world in the Environmental Performance Index. The Czech Republic has four national parks (Šumava National Park, Krkonoše National Park, České Švýcarsko National Park, Podyjí National Park) and 25 Protected Landscape Areas.
The Czech Republic is a pluralist multi-party parliamentary representative democracy, with the President as head of state and Prime Minister as head of government. The Parliament ("Parlament České republiky") is bicameral, with the Chamber of Deputies () (200 members) and the Senate () (81 members).
The president is a formal head of state with limited and specific powers, most importantly to return bills to the parliament, appoint members to the board of the Czech National Bank, nominate constitutional court judges for the Senate's approval and dissolve the Chamber of Deputies under certain special and unusual circumstances. The president and vice president of the Supreme Court are appointed by the President of the Republic. The president also appoints the prime minister, as well as the other members of the cabinet on a proposal by the prime minister. From 1993 until 2012, the President of the Czech Republic was selected by a joint session of the parliament for a five-year term, with no more than two consecutive terms (twice Václav Havel, twice Václav Klaus). Since 2013 the president has been directly elected; Miloš Zeman was the first directly elected Czech president.
The Government of the Czech Republic's exercise of executive power derives from the Constitution. The members of the government are the Prime Minister, Deputy prime ministers and other ministers. The Government is responsible to the Chamber of Deputies.
The Prime Minister is the head of government and wields considerable powers, such as the right to set the agenda for most foreign and domestic policy and choose government ministers. The current Prime Minister of the Czech Republic is Andrej Babiš, serving since 6 December 2017 as the 12th Prime Minister.
The members of the Chamber of Deputies are elected for a four-year term by proportional representation, with a 5% election threshold. There are 14 voting districts, identical to the country's administrative regions. The Chamber of Deputies, the successor to the Czech National Council, has the powers and responsibilities of the now defunct federal parliament of the former Czechoslovakia.
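To illustrate how such a proportional system converts votes into seats, the sketch below implements a D'Hondt-style divisor allocation with a 5% threshold; Czech Chamber of Deputies elections have used a D'Hondt-based method, though the real allocation is performed separately within the 14 regions and is more involved than this simplified example, whose party names and vote counts are hypothetical.

```python
# Minimal sketch of D'Hondt proportional seat allocation with a 5% threshold.
# Party names and vote counts are hypothetical; the real Czech allocation
# is carried out separately within the 14 regions, which is ignored here.

def dhondt(votes: dict, seats: int, threshold: float = 0.05) -> dict:
    total = sum(votes.values())
    # Parties below the national threshold receive no seats.
    eligible = {p: v for p, v in votes.items() if v / total >= threshold}
    allocation = {p: 0 for p in eligible}
    for _ in range(seats):
        # Award the next seat to the party with the highest quotient v / (s + 1).
        winner = max(eligible, key=lambda p: eligible[p] / (allocation[p] + 1))
        allocation[winner] += 1
    return allocation

if __name__ == "__main__":
    example = {"Party A": 1_500_000, "Party B": 980_000,
               "Party C": 610_000, "Party D": 140_000}  # Party D is under 5%
    print(dhondt(example, seats=200))
```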
The members of the Senate are elected in single-seat constituencies by two-round runoff voting for a six-year term, with one-third elected every even-numbered year in the autumn. The first election was in 1996, for differing terms. This arrangement is modeled on the U.S. Senate, but each constituency is roughly the same size and the voting system used is a two-round runoff.
The Czech Republic is a unitary state with a civil law system based on the continental type, rooted in Germanic legal culture. The basis of the legal system is the Constitution of the Czech Republic adopted in 1993. The Penal Code is effective from 2010. A new Civil code became effective in 2014. The court system includes district, county and supreme courts and is divided into civil, criminal, and administrative branches. The Czech judiciary has a triumvirate of supreme courts. The Constitutional Court consists of 15 constitutional judges and oversees violations of the Constitution by either the legislature or by the government. The Supreme Court is formed of 67 judges and is the court of highest appeal for almost all legal cases heard in the Czech Republic. The Supreme Administrative Court decides on issues of procedural and administrative propriety. It also has jurisdiction over many political matters, such as the formation and closure of political parties, jurisdictional boundaries between government entities, and the eligibility of persons to stand for public office. The Supreme Court and the Supreme Administrative Court are both based in Brno, as is the Supreme Public Prosecutor's Office.
The Czech Republic ranks as the 11th safest or most peaceful country. It is a member of the United Nations, the European Union, NATO, OECD, Council of Europe and is an observer to the Organization of American States. The embassies of most countries with diplomatic relations with the Czech Republic are located in Prague, while consulates are located across the country.
The Czech passport is among the least restricted in terms of visa requirements. According to the 2018 Henley & Partners Visa Restrictions Index, Czech citizens have visa-free access to 173 countries, ranking them 7th along with Malta and New Zealand. The World Tourism Organization ranks the Czech passport 24th. The US Visa Waiver Program applies to Czech nationals.
The Prime Minister and Minister of Foreign Affairs have primary roles in setting foreign policy, although the President has considerable influence and also represents the country abroad. Membership in the European Union and NATO is central to the Czech Republic's foreign policy. The Office for Foreign Relations and Information (ÚZSI) serves as the foreign intelligence agency responsible for espionage and foreign policy briefings, as well as protection of Czech Republic's embassies abroad.
The Czech Republic has strong ties with Slovakia, Poland and Hungary as a member of the Visegrad Group, as well as with Germany, Israel, the United States and the European Union and its members.
Czech officials have supported dissenters in Belarus, Moldova, Myanmar and Cuba.
The Czech armed forces consist of the Czech Land Forces, the Czech Air Force and of specialized support units. The armed forces are managed by the Ministry of Defence. The President of the Czech Republic is Commander-in-chief of the armed forces. In 2004 the army transformed itself into a fully professional organization and compulsory military service was abolished. The country has been a member of NATO since 12 March 1999. Defence spending is approximately 1.19% of the GDP (2019). The armed forces are charged with protecting the Czech Republic and its allies, promoting global security interests, and contributing to NATO.
Currently, as a member of NATO, the Czech military is participating in the Resolute Support and KFOR operations and has soldiers in Afghanistan, Mali, Bosnia and Herzegovina, Kosovo, Egypt, Israel and Somalia. The Czech Air Force has also served in the Baltic states and Iceland. The main equipment of the Czech military includes JAS 39 Gripen multi-role fighters, Aero L-159 Alca combat aircraft, Mi-35 attack helicopters, armored vehicles (Pandur II, OT-64, OT-90, BVP-2) and tanks (T-72 and T-72M4CZ).
Since 2000, the Czech Republic has been divided into thirteen regions (Czech: "kraje", singular "kraj") and the capital city of Prague. Every region has its own elected regional assembly ("krajské zastupitelstvo") and "hejtman" (a regional governor). In Prague, the assembly and presidential powers are executed by the city council and the mayor.
The older seventy-six districts ("okresy", singular "okres"), including three "statutory cities" (excluding Prague, which had special status), lost most of their importance in 1999 in an administrative reform; they remain as territorial divisions and seats of various branches of state administration.
The Czech Republic has a developed, high-income, export-oriented social market economy based on services, manufacturing and innovation, and it maintains a welfare state and the European social model. The Czech Republic participates in the European Single Market as a member of the European Union and is therefore a part of the economy of the European Union, but it uses its own currency, the Czech koruna, instead of the euro. It has a per capita GDP that is 91% of the EU average and is a member of the OECD. Monetary policy is conducted by the Czech National Bank, whose independence is guaranteed by the Constitution. The Czech Republic ranks 13th in the UN inequality-adjusted Human Development Index and 14th in the World Bank Human Capital Index, ahead of countries such as the United States, the United Kingdom and France. It was described by "The Guardian" as "one of Europe’s most flourishing economies".
The country's GDP per capita at purchasing power parity is $37,370 (similar to Israel, Italy or Slovenia) and $22,850 at nominal value. According to Allianz A.G., in 2018 the country was an MWC (mean wealth country), ranking 26th in net financial assets. The country experienced 4.5% GDP growth in 2017, giving the Czech economy one of the highest growth rates in the European Union. The 2016 unemployment rate was the lowest in the EU at 2.4%, and the 2016 poverty rate was the second lowest among OECD members, behind only Denmark. The Czech Republic ranks 24th in both the Index of Economic Freedom and the Global Innovation Index, 29th in the Global Competitiveness Report, 30th in the ease of doing business index and 25th in the Global Enabling Trade Report.
The Czech Republic has a highly diverse economy that ranks 7th in the 2016 Economic Complexity Index. The industrial sector accounts for 37.5% of the economy, while services account for 60% and agriculture for 2.5%. The largest trading partner for both exports and imports is Germany, and the EU in general. Dividends worth CZK 270 billion were paid to the foreign owners of Czech companies in 2017, which has become a political issue in the Czech Republic. The country has been a member of the Schengen Area since 1 May 2004, having abolished border controls, completely opening its borders with all of its neighbors (Germany, Austria, Poland and Slovakia) on 21 December 2007. The Czech Republic became a member of the World Trade Organization on 1 January 1995.
In 2018 the largest companies by revenue in the Czech Republic were: Škoda Auto, one of the largest automobile manufacturers in Central Europe; the utility company ČEZ Group; the conglomerate Agrofert; the energy trading company EPH; the oil processing company Unipetrol; the electronics manufacturer Foxconn CZ; and the steel producer Moravia Steel. Other Czech transportation companies include: Škoda Transportation (tramways, trolleybuses, metro), Tatra (heavy trucks, the second oldest car maker in the world), Avia (medium trucks), Karosa and SOR Libchavy (buses), Aero Vodochody (military aircraft), Let Kunovice (civil aircraft), Zetor (tractors), Jawa Moto (motorcycles) and Čezeta (electric scooters).
Škoda Transportation is the fourth largest tramway producer in the world; nearly one third of all trams in the world come from Czech factories. The Czech Republic is also the world's largest vinyl record manufacturer, with GZ Media producing about 6 million pieces annually in Loděnice. Česká zbrojovka is among the ten largest firearms producers in the world, and one of only five that produce automatic weapons.
Successful companies in the food industry include Agrofert, Kofola, Hamé and Bageterie Boulevard.
Czech electricity production exceeds consumption by about 10 TWh per year, and the surplus is exported. Nuclear power presently provides about 30 percent of total power needs, and its share is projected to increase to 40 percent. In 2005, 65.4 percent of electricity was produced by steam and combustion power plants (mostly coal); 30 percent by nuclear plants; and 4.6 percent from renewable sources, including hydropower. The largest Czech power resource is the Temelín Nuclear Power Station; another nuclear power plant is in Dukovany.
The Czech Republic is reducing its dependence on highly polluting low-grade brown coal as a source of energy. Natural gas is procured from Russia's Gazprom (roughly three-fourths of domestic consumption) and from Norwegian companies (most of the remaining one-fourth). Russian gas is imported via Ukraine; Norwegian gas is transported through Germany. Gas consumption (approx. 100 TWh in 2003–2005) is almost double electricity consumption. South Moravia has small oil and gas deposits.
The Czech Republic has an extensive road network, including 1,232 km of motorways as of 2017. The speed limit is 50 km/h within towns, 90 km/h outside of towns and 130 km/h on motorways.
The Czech Republic has the densest rail network in the world; it includes electrified lines as well as single-track and double- or multiple-track sections. České dráhy (the Czech Railways) is the main railway operator in the Czech Republic, with about 180 million passengers carried yearly. The maximum speed is limited to 160 km/h. In 2006, seven Italian Pendolino tilting trainsets (ČD Class 680) entered service.
Václav Havel Airport in Prague is the main international airport in the country. In 2017, it handled 15 million passengers, making it one of the busiest airports in Central Europe. In total, the Czech Republic has 46 airports with paved runways, six of which provide international air services: in Brno, Karlovy Vary, Mošnov (near Ostrava), Pardubice, Prague and Kunovice (near Uherské Hradiště).
Russia (via pipelines through Ukraine) and, to a lesser extent, Norway (via pipelines through Germany) supply the Czech Republic with liquid and natural gas.
The Czech Republic ranks in the top 10 countries worldwide with the fastest average internet speed. By the beginning of 2008, there were over 800 mostly local WISPs, with about 350,000 subscribers in 2007. Plans based on either GPRS, EDGE, UMTS or CDMA2000 are being offered by all three mobile phone operators (T-Mobile, O2, Vodafone) and internet provider U:fon. Government-owned Český Telecom slowed down broadband penetration. At the beginning of 2004, local-loop unbundling began and alternative operators started to offer ADSL and also SDSL. This and later privatisation of Český Telecom helped drive down prices.
On 1 July 2006, Český Telecom was acquired by the Spain-based Telefónica group and adopted the new name Telefónica O2 Czech Republic. VDSL and ADSL2+ are offered in many variants, with download speeds of up to 50 Mbit/s and upload speeds of up to 5 Mbit/s. Cable internet is gaining popularity with its higher download speeds, ranging from 50 Mbit/s to 1 Gbit/s.
Two major computer security companies, Avast and AVG, were founded in the Czech Republic. In 2016, Avast, led by Pavel Baudiš, bought rival AVG for US$1.3 billion; at the time, the two companies had a combined user base of about 400 million people and a 40% share of the consumer market outside China. Avast is the leading provider of antivirus software, with a 20.5% market share.
The Czech lands have a long and rich scientific tradition. Research based on cooperation between universities, the Academy of Sciences and specialized research centers continues to produce new inventions and advances in this area. Important contributions include the modern contact lens, the separation of blood types, the basic theory of genetics, many advances in nanotechnologies and the production of the Semtex plastic explosive.
Cyril and Methodius laid the foundations of education and Czech theological thinking in the 9th century. An original theological and philosophical stream – Hussitism – originated in the Middle Ages, represented by Jan Hus, Jerome of Prague and Petr Chelčický. At the end of the Middle Ages, Jan Amos Comenius substantially contributed to the development of modern pedagogy. Jewish philosophy in the Czech lands was represented mainly by Judah Loew ben Bezalel (known for the legend of the Golem of Prague). Bernard Bolzano was the leading personality of German-speaking philosophy in the Czech lands. Bohuslav Balbín was a key Czech philosopher and historian of the Baroque era, who also began the struggle to rescue the Czech language. This struggle culminated in the Czech National Revival in the first half of the 19th century, in which linguistics (Josef Dobrovský, Pavel Jozef Šafařík, Josef Jungmann), ethnography (Karel Jaromír Erben, František Ladislav Čelakovský) and history (František Palacký) played a big role. Palacký was the preeminent personality: he wrote the first synthetic history of the Czech nation, and he was also the first Czech modern politician and geopolitician (see also Austro-Slavism). He is often called "The Father of the Nation".
In the second half of the 19th century and at the beginning of the 20th century there was a huge development of the social sciences. Tomáš Garrigue Masaryk laid the foundations of Czech sociology. Konstantin Jireček founded Byzantology (see also the Jireček Line). Alois Musil was a prominent orientalist, and Emil Holub an ethnographer. Lubor Niederle was a founder of modern Czech archeology. Sigmund Freud established psychoanalysis. Edmund Husserl defined a new philosophical doctrine, phenomenology. Joseph Schumpeter contributed the original economic idea of the "creative destruction" of capitalism. Hans Kelsen was a significant legal theorist. Karl Kautsky influenced the history of Marxism; by contrast, economist Eugen Böhm von Bawerk led a campaign against it. Max Wertheimer was one of the three founders of Gestalt psychology. Musicologists Eduard Hanslick and Guido Adler influenced debates on the development of classical music in Vienna.
The new Czechoslovak republic (1918–1938) sought to develop the sciences. A significant linguistic school, the Prague Linguistic Circle (Vilém Mathesius, Jan Mukařovský, René Wellek), was established in Prague; moreover, linguist Bedřich Hrozný deciphered the ancient Hittite language and linguist Julius Pokorny deepened knowledge of the Celtic languages. Philosopher Herbert Feigl was a member of the Vienna Circle. Ladislav Klíma developed a special version of Nietzschean philosophy. In the second half of the 20th century, philosopher Ernest Gellner came to be considered one of the leading theorists of nationalism, and Czech historian Miroslav Hroch likewise analyzed modern nationalism. Vilém Flusser developed the philosophy of technology and the image. Marxist Karel Kosík was a major philosopher in the background of the Prague Spring of 1968. Jan Patočka and Václav Havel were the main ideologists of Charter 77. Egon Bondy was a major philosophical spokesman of the Czech underground in the 1970s and 1980s. Czech Egyptology has scored some successes; its main representative is Miroslav Verner. Czech psychologist Stanislav Grof developed the method of "Holotropic Breathwork". Experimental archaeologist Pavel Pavel conducted several experiments to answer the question of how ancient civilizations transported heavy loads.
A number of famous scientists were born in the territory of the present-day Czech Republic.
A number of other scientists are also connected in some way with the Czech lands. The following taught at the University of Prague: astronomers Johannes Kepler and Tycho Brahe, physicists Christian Doppler, Nikola Tesla, and Albert Einstein, and geologist Joachim Barrande.
The Czech economy gets a substantial income from tourism. Prague is the fifth most visited city in Europe after London, Paris, Istanbul and Rome. In 2001, the total earnings from tourism reached 118 billion CZK, making up 5.5% of GNP and 9% of overall export earnings. The industry employs more than 110,000 people – over 1% of the population.
The country's reputation has suffered with guidebooks and tourists reporting overcharging by taxi drivers and pickpocketing problems mainly in Prague, though the situation has improved recently. Since 2005, Prague's mayor, Pavel Bém, has worked to improve this reputation by cracking down on petty crime and, aside from these problems, Prague is a safe city. Also, the Czech Republic as a whole generally has a low crime rate. For tourists, the Czech Republic is considered a safe destination to visit. The low crime rate makes most cities and towns very safe to walk around.
One of the most visited tourist attractions in the Czech Republic is the Lower Vítkovice area in Ostrava, a post-industrial city in the east of the country. The territory was formerly the site of steel production, but it now hosts a technical museum with many interactive expositions for tourists.
The Czech Republic boasts 14 UNESCO World Heritage Sites, all of them in the cultural category. A further 18 sites are on the tentative list.
There are several centres of tourist activity. The spa towns, such as Karlovy Vary, Mariánské Lázně, Františkovy Lázně and Jáchymov, are particularly popular relaxing holiday destinations. Architectural heritage is another object of interest to visitors – it includes many castles and châteaux from different historical epochs, namely Karlštejn Castle, Český Krumlov and the Lednice–Valtice area.
There are 12 cathedrals and 15 churches elevated to the rank of basilica by the Pope, tranquil monasteries, and many modern and ancient churches; the Pilgrimage Church of Saint John of Nepomuk, for example, is inscribed on the World Heritage List. Away from the towns, areas such as Český ráj, Šumava and the Krkonoše Mountains attract visitors seeking outdoor pursuits.
The country is also known for its various museums. Puppetry and marionette exhibitions are very popular, with a number of puppet festivals throughout the country. Aquapalace Praha in Čestlice near Prague, is the biggest water park in central Europe.
The Czech Republic has a number of beer festivals, including the Czech Beer Festival (the biggest Czech beer festival, usually lasting 17 days and held every year in May in Prague), Pilsner Fest (every year in August in Plzeň), the Olomoucký pivní festival (in Olomouc) and the Slavnosti piva v Českých Budějovicích festival (in České Budějovice).
The total fertility rate (TFR) in 2015 was estimated at 1.57 children born per woman, which is below the replacement rate of 2.1 and one of the lowest in the world. The Czech Republic consequently has one of the oldest populations in the world, with an average age of 42.5 years. The life expectancy in 2013 was estimated at 77.56 years (74.29 years male, 81.01 years female). Immigration increased the population by almost 1% in 2007. About 77,000 people immigrate to the Czech Republic annually. Vietnamese immigrants began settling in the Czech Republic during the Communist period, when they were invited as guest workers by the Czechoslovak government. In 2009, there were about 70,000 Vietnamese in the Czech Republic. Most decide to stay in the country permanently.
According to preliminary results of the 2011 census, the majority of the inhabitants of the Czech Republic are Czechs (63.7%), followed by Moravians (4.9%), Slovaks (1.4%), Poles (0.4%), Germans (0.2%) and Silesians (0.1%). As the 'nationality' was an optional item, a substantial number of people left this field blank (26.0%). According to some estimates, there are about 250,000 Romani people in the Czech Republic. The Polish minority resides mainly in the Zaolzie region.
There were 496,413 (4.5% of population) foreigners residing in the country in 2016, according to the Czech Statistical Office, with the largest groups being Ukrainian (22%), Slovak (22%), Vietnamese (12%), Russian (7%) and German (4%). Most of the foreign population lives in Prague (37.3%) and Central Bohemia Region (13.2%).
The Jewish population of Bohemia and Moravia, 118,000 according to the 1930 census, was virtually annihilated by the Nazi Germans during the Holocaust. There were approximately 4,000 Jews in the Czech Republic in 2005. The former Czech prime minister, Jan Fischer, is of Jewish faith.
At the turn of the 20th century, Chicago was the city with the third largest Czech population, after Prague and Vienna. At the 2010 US census, there were 1,533,826 Americans of full or partial Czech descent.
The Czech Republic has one of the least religious populations in the world with 75% to 79% of people not declaring any religion or faith in polls and the percentage of convinced atheists being third highest (30%) only behind China (47%) and Japan (31%). The Czech people have been historically characterized as "tolerant and even indifferent towards religion".
Christianization in the 9th and 10th centuries introduced Catholicism. After the Bohemian Reformation, most Czechs became followers of Jan Hus, Petr Chelčický and other regional Protestant Reformers. Taborites and Utraquists were major Hussite groups. During the Hussite Wars, Utraquists sided with the Catholic Church. Following the joint Utraquist–Catholic victory, Utraquism was accepted as a distinct form of Christianity to be practiced in Bohemia by the Catholic Church, while all remaining Hussite groups were prohibited. After the Reformation, some Bohemians went with the teachings of Martin Luther, especially Sudeten Germans. In the wake of the Reformation, Utraquist Hussites took a renewed, increasingly anti-Catholic stance, while some of the defeated Hussite factions (notably the Taborites) were revived. After the Habsburgs regained control of Bohemia, the whole population was forcibly converted to Catholicism—even the Utraquist Hussites. Czechs have since become warier of and more pessimistic toward religion as such. A long history of resistance to the Catholic Church followed. It suffered a schism with the neo-Hussite Czechoslovak Hussite Church in 1920, lost the bulk of its adherents during the Communist era and continues to lose them in the modern, ongoing secularization. Protestantism never recovered after the Counter-Reformation was introduced by the Austrian Habsburgs in 1620.
According to the 2011 census, 34% of the population stated they had no religion, 10.3% was Catholic, 0.8% was Protestant (0.5% Czech Brethren and 0.4% Hussite), and 9% followed other forms of religion, both denominational and not (of which 863 people answered that they are Pagan). 45% of the population did not answer the question about religion. From 1991 to 2001, and further to 2011, adherence to Catholicism decreased from 39% to 27% and then to 10%; Protestantism similarly declined from 3.7% to 2% and then to 0.8%. The Muslim population is estimated at 20,000, representing 0.2% of the Czech population.
Education in the Czech Republic is compulsory for 9 years and citizens have access to a tuition-free university education, while the average number of years of education is 13.1. Additionally, the Czech Republic has a relatively equal educational system in comparison with other countries in Europe. Founded in 1348, Charles University was the first university in Central Europe. Other major universities in the country are Masaryk University, Czech Technical University, Palacký University, Academy of Performing Arts and University of Economics.
The Programme for International Student Assessment, coordinated by the OECD, currently ranks the Czech education system as the 15th most successful in the world, higher than the OECD average. The UN Education Index ranks the Czech Republic 10th (positioned behind Denmark and ahead of South Korea).
Healthcare in the Czech Republic is similar in quality to that of other developed nations. The Czech universal health care system is based on a compulsory insurance model, with fee-for-service care funded by mandatory employment-related insurance plans. According to the 2016 Euro health consumer index, a comparison of healthcare in Europe, Czech healthcare ranks 13th, behind Sweden and two positions ahead of the United Kingdom.
The Venus of Dolní Věstonice is a treasure of prehistoric art. Theodoric of Prague, who decorated Karlštejn Castle, was the most famous Czech painter of the Gothic era. Famous painters of the Baroque era were Wenceslaus Hollar, Jan Kupecký, Karel Škréta, Anton Raphael Mengs and Petr Brandl, along with the sculptors Matthias Braun and Ferdinand Brokoff. In the first half of the 19th century, Josef Mánes joined the romantic movement. The second half of the 19th century was dominated by the so-called "National Theatre generation": the sculptor Josef Václav Myslbek and the painters Mikoláš Aleš, Václav Brožík, Vojtěch Hynais and Julius Mařák. At the end of the century came a wave of Art Nouveau, whose main representative was Alfons Mucha, today the most famous Czech painter. He is mainly known for Art Nouveau posters and his cycle of 20 large canvases named the Slav Epic, which depicts the history of Czechs and other Slavs.
The Slav Epic can be seen in the Veletržní Palace of the National Gallery in Prague, which manages the largest collection of art in the Czech Republic. Max Švabinský was another important Art Nouveau painter. The 20th century brought an avant-garde revolution, in the Czech lands mainly expressionist and cubist: Josef Čapek, Emil Filla, Bohumil Kubišta, Jan Zrzavý. Surrealism emerged particularly in the work of Toyen, Josef Šíma and Karel Teige. Internationally, however, it was František Kupka, a pioneer of abstract painting, who made the greatest mark. Josef Lada, Zdeněk Burian and Emil Orlík gained fame as illustrators and cartoonists in the first half of the 20th century. Art photography emerged as a new field (František Drtikol, Josef Sudek, later Jan Saudek and Josef Koudelka).
The Czech Republic is known worldwide for its individually made, mouth blown and decorated Bohemian glass.
The earliest preserved stone buildings in Bohemia and Moravia date back to the time of the Christianization in the 9th and 10th centuries. Since the Middle Ages, the Czech lands have been using the same architectural styles as most of Western and Central Europe. The oldest still standing churches were built in the Romanesque style (St. George's Basilica, St. Procopius Basilica in Třebíč). During the 13th century it was replaced by the Gothic style (Charles Bridge, Bethlehem Chapel, Old New Synagogue, Sedlec Ossuary, Old Town Hall with Prague astronomical clock, Church of Our Lady before Týn). In the 14th century Emperor Charles IV invited talented architects from France and Germany, Matthias of Arras and Peter Parler, to his court in Prague (Karlštejn, St. Vitus Cathedral, St. Barbara's Church in Kutná Hora). During the Middle Ages, many fortified castles were built by the king and aristocracy, as well as many monasteries (Strahov Monastery, Špilberk, Křivoklát Castle, Vyšší Brod Monastery). During the Hussite wars, many of them were damaged or destroyed.
The Renaissance style penetrated the Bohemian Crown in the late 15th century when the older Gothic style started to be slowly mixed with Renaissance elements (architects Matěj Rejsek, Benedikt Rejt and their Powder Tower). An example of the pure Renaissance architecture in Bohemia is the Queen Anne's Summer Palace, which was situated in a newly established garden of Prague Castle. Evidence of the general reception of the Renaissance in Bohemia, involving a massive influx of Italian architects, can be found in spacious châteaux with arcade courtyards and geometrically arranged gardens (Litomyšl Castle, Hluboká Castle). Emphasis was placed on comfort, and buildings that were built for entertainment purposes also appeared.
In the 17th century, the Baroque style spread throughout the Crown of Bohemia. Outstanding examples are the architectural projects of the Czech nobleman and imperial generalissimo Albrecht von Wallenstein from the 1620s (Wallenstein Palace). His architects Andrea Spezza and Giovanni Pieroni reflected the most recent Italian production and were very innovative at the same time. Czech Baroque architecture is considered a unique part of the European cultural heritage thanks to its extensiveness and extraordinariness (Kroměříž Castle, Holy Trinity Column in Olomouc, St. Nicholas Church at Malá Strana, Karlova Koruna Chateau). In the first third of the 18th century the Bohemian lands were one of the leading artistic centers of the Baroque style. In Bohemia, the development of the Radical Baroque style, created in Italy by Francesco Borromini and Guarino Guarini, was completed in a very original way. Leading architects of the Bohemian Baroque were Jean-Baptiste Mathey, František Maxmilián Kaňka, Christoph Dientzenhofer, and his son Kilian Ignaz Dientzenhofer.
In the 18th century Bohemia produced an architectural peculiarity – the "Baroque Gothic style", a synthesis of the Gothic and Baroque styles. This was not a simple return to Gothic details, but rather an original Baroque transformation. The main representative and originator of this style was Jan Blažej Santini-Aichel, who used it in renovating medieval monastic buildings and in the Pilgrimage Church of Saint John of Nepomuk.
During the 19th century, revival architectural styles were very popular in the Bohemian monarchy. Many churches were restored to their presumed medieval appearance, and many new buildings were constructed in the Neo-Romanesque, Neo-Gothic and Neo-Renaissance styles (National Theatre, Lednice–Valtice Cultural Landscape, Cathedral of St. Peter and Paul in Brno). At the turn of the 19th and 20th centuries a new art style appeared in the Czech lands – Art Nouveau. The best-known representatives of Czech Art Nouveau architecture were Osvald Polívka, who designed the Municipal House in Prague, Josef Fanta, the architect of the Prague Main Railway Station, Jan Letzel, Josef Hoffmann and Jan Kotěra.
Bohemia contributed an unusual style to the world's architectural heritage when Czech architects attempted to transpose the Cubism of painting and sculpture into architecture (House of the Black Madonna). During the first years of the independent Czechoslovakia (after 1918), a specifically Czech architectural style, called "Rondo-Cubism", came into existence. Together with the pre-war Czech Cubist architecture it is unparalleled elsewhere in the world. The first Czechoslovak president T. G. Masaryk invited the prominent Slovene architect Jože Plečnik to Prague, where he modernized the Castle and built some other buildings (Church of the Most Sacred Heart of Our Lord).
Between World Wars I and II, Functionalism, with its sober, progressive forms, took over as the main architectural style in the newly established Czechoslovak Republic. In the city of Brno, one of the most impressive functionalist works has been preserved – Villa Tugendhat, designed by the architect Ludwig Mies van der Rohe. The most significant Czech architects of this era were Adolf Loos, Pavel Janák and Josef Gočár.
After World War II and the Communist coup in 1948, art in Czechoslovakia became strongly Soviet-influenced. The Hotel International in Prague is a prime example of so-called Socialist realism, the Stalinist art style of the 1950s. The Czechoslovak avant-garde artistic movement known as the "Brussels style" (named after the Brussels World's Fair Expo 58) became popular in the time of political liberalization of Czechoslovakia in the 1960s. Brutalism dominated in the 1970s and 1980s (Kotva Department Store, Ostravar Aréna, Barrandov Bridge, Transgas building).
Even today, the Czech Republic does not shy away from the most modern trends of international architecture, as attested by a number of projects by world-renowned architects (Frank Gehry and his Dancing House, Jean Nouvel, Ricardo Bofill, and John Pawson). There are also contemporary Czech architects whose works can be found all over the world (Vlado Milunić, Eva Jiřičná, Jan Kaplický).
In a strict sense, Czech literature is the literature written in the Czech language. A more liberal definition incorporates all literary works written in the Czech lands regardless of language. The literature from the area of today's Czech Republic was mostly written in Czech, but also in Latin and German or even Old Church Slavonic. Thus Franz Kafka, who—while bilingual in Czech and German—wrote his works ("The Trial", "The Castle") in German, during the era of Austrian rule, can represent the Czech, German or Austrian literature depending on the point of view.
Influential Czech authors who wrote in Latin include Cosmas of Prague († 1125), Martin of Opava († 1278), Peter of Zittau († 1339), John Hus († 1415), Bohuslav Hasištejnský z Lobkovic (1461–1510), Jan Dubravius (1486–1553), Tadeáš Hájek (1525–1600), Johannes Vodnianus Campanus (1572–1622), John Amos Comenius (1592–1670), and Bohuslav Balbín (1621–1688).
In the second half of the 13th century, the royal court in Prague became one of the centers of the German Minnesang and courtly literature (Reinmar von Zweter, Heinrich von Freiberg, Ulrich von Etzenbach, Wenceslaus II of Bohemia). The most famous Czech medieval German-language work is the "Ploughman of Bohemia" ("Der Ackermann aus Böhmen"), written around 1401 by Johannes von Tepl. The heyday of Czech German-language literature can be seen in the first half of the 20th century, which is represented by the well-known names of Franz Kafka, Max Brod, Franz Werfel, Rainer Maria Rilke, Karl Kraus, Egon Erwin Kisch, and others.
Bible translations played an important role in the development of Czech literature and the standard Czech language. The oldest Czech translation of the Psalms originated in the late 13th century and the first complete Czech translation of the Bible was finished around 1360. The first complete printed Czech Bible was published in 1488 (Prague Bible). The first complete Czech Bible translation from the original languages was published between 1579 and 1593 and is known as the Bible of Kralice. The Codex Gigas from the 12th century is the largest extant medieval manuscript in the world.
Czech-language literature can be divided into several periods: the Middle Ages (Chronicle of Dalimil); the Hussite period (Tomáš Štítný ze Štítného, Jan Hus, Petr Chelčický); Renaissance humanism (Henry the Younger of Poděbrady, Luke of Prague, Wenceslaus Hajek, Jan Blahoslav, Daniel Adam z Veleslavína); the Baroque period (John Amos Comenius, Adam Václav Michna z Otradovic, Bedřich Bridel, Jan František Beckovský); the Enlightenment and Czech reawakening in the first half of the 19th century (Václav Matěj Kramerius, Karel Hynek Mácha, Karel Jaromír Erben, Karel Havlíček Borovský, Božena Němcová, Ján Kollár, Josef Kajetán Tyl); modern literature in the second half of the 19th century (Jan Neruda, Alois Jirásek, Viktor Dyk, Jaroslav Vrchlický, Julius Zeyer, Svatopluk Čech); the avant-garde of the interwar period (Karel Čapek, Jaroslav Hašek, Vítězslav Nezval, Jaroslav Seifert, Jiří Wolker, Vladimír Holan); the years under Communism and the Prague Spring (Josef Škvorecký, Bohumil Hrabal, Milan Kundera, Arnošt Lustig, Václav Havel, Pavel Kohout, Ivan Klíma); and the literature of the post-Communist Czech Republic (Ivan Martin Jirous, Michal Viewegh, Jáchym Topol, Patrik Ouředník, Kateřina Tučková).
Noted journalists include Julius Fučík, Milena Jesenská, and Ferdinand Peroutka.
Jaroslav Seifert was the only Czech writer awarded the Nobel Prize in Literature. The famous antiwar comedy novel "The Good Soldier Švejk" by Jaroslav Hašek is the most translated Czech book in history; it was adapted by Karel Steklý into two color films, "The Good Soldier Schweik", in 1956 and 1957. Other widely translated Czech books include Milan Kundera's "The Unbearable Lightness of Being" and Karel Čapek's "War with the Newts".
The international literary award the Franz Kafka Prize is awarded in the Czech Republic.
The Czech Republic has the densest network of libraries in Europe. At its center stands the National Library of the Czech Republic, based in the Baroque Klementinum complex.
Czech literature and culture played a major role on at least two occasions when Czechs lived under oppression and political activity was suppressed. On both of these occasions, in the early 19th century and then again in the 1960s, the Czechs used their cultural and literary effort to strive for political freedom, establishing a confident, politically aware nation.
The musical tradition of the Czech lands arose from the first church hymns, the earliest evidence of which dates to the turn of the 10th and 11th centuries. The first significant pieces of Czech music include two chorales, which in their time performed the function of anthems: "Hospodine pomiluj ny" (Lord, Have Mercy on Us) from around 1050, unmistakably the oldest and most faithfully preserved popular spiritual song to have survived to the present, and the hymn "Svatý Václave" (Saint Wenceslas) or "Saint Wenceslas Chorale" from around 1250. Its roots can be found in the 12th century and it remains among the most popular religious songs to this day. In 1918, at the beginning of the Czechoslovak state, the song was discussed as one of the possible choices for the national anthem. The authorship of the anthem "Lord, Have Mercy on Us" is ascribed by some historians to Saint Adalbert of Prague (sv. Vojtěch), bishop of Prague, who lived between 956 and 997.
The wealth of musical culture in the Czech Republic lies in the long-term high-culture classical music tradition during all historical periods, especially in the Baroque, Classicism, Romantic and modern classical music, and in the traditional folk music of Bohemia, Moravia and Silesia. Since the early era of art music, Czech musicians and composers have often been influenced by the folk music of the region and its dances (e.g. the polka, which originated in Bohemia). Among the most notable Czech composers are Adam Michna, Jan Dismas Zelenka, Jan Václav Antonín Stamic, Jiří Antonín Benda, Jan Křtitel Vaňhal, Josef Mysliveček, Heinrich Biber, Antonín Rejcha, František Xaver Richter, František Brixi and Jan Ladislav Dussek in the Baroque era, Bedřich Smetana and Antonín Dvořák in Romanticism, Gustav Mahler, Josef Suk, Leoš Janáček, Bohuslav Martinů, Vítězslav Novák, Zdeněk Fibich, Alois Hába, Viktor Ullmann, Ervín Schulhoff, Pavel Haas and Josef Bohuslav Foerster in modern classical music, and Miloslav Kabeláč and Petr Eben in contemporary classical music.
Other examples of famous musicians, interpreters and conductors are František Benda, Rafael Kubelík, Jan Kubelík, David Popper, Alice Herz-Sommer, Rudolf Serkin, Heinrich Wilhelm Ernst, Otakar Ševčík, Václav Neumann, Václav Talich, Karel Ančerl, Jiří Bělohlávek, Wojciech Żywny, Emma Destinnová, Magdalena Kožená, Rudolf Firkušný, Czech Philharmonic Orchestra, Panocha Quartet or non-classical musicians: Julius Fučík (brass band), Karel Svoboda and Erich Wolfgang Korngold (film music), Ralph Benatzky, Rudolf Friml and Oskar Nedbal (operetta), Jan Hammer and Karel Gott (pop), Jaroslav Ježek and Miroslav Vitouš (jazz), Karel Kryl (folk).
Czech music has been influential in both the European and worldwide contexts; several times it co-determined or even determined a newly arriving era in musical art, above all in the Classical era, and contributed original approaches in Baroque, Romantic and modern classical music. The most famous Czech musical works are Smetana's "The Bartered Bride" and "Má vlast", Dvořák's "New World Symphony", "Rusalka" and "Slavonic Dances", and Janáček's "Sinfonietta" and operas, above all "Jenůfa".
The most famous music festival in the country is the Prague Spring International Music Festival of classical music, a permanent showcase for outstanding performing artists, symphony orchestras and chamber music ensembles of the world.
The roots of Czech theatre can be found in the Middle Ages, especially in the cultural life of the Gothic period. In the 19th century, the theatre played an important role in the national awakening movement, and later, in the 20th century, it became a part of modern European theatre art. An original Czech cultural phenomenon came into being at the end of the 1950s: the project Laterna magika (The Magic Lantern), the brainchild of the renowned film and theater director Alfred Radok, resulting in productions that combined theater, dance and film in a poetic manner, considered the first multimedia art project in an international context.
The most famous Czech drama is Karel Čapek's play "R.U.R.", which introduced the word "robot".
The tradition of Czech cinematography started in the second half of the 1890s. Peaks of production in the era of silent movies include the historical drama "The Builder of the Temple" and the social and erotic (very controversial and innovative at that time) drama "Erotikon", directed by Gustav Machatý. The early Czech sound film era was very productive, above all in mainstream genres, especially the comedies of Martin Frič or Karel Lamač. However, dramatic movies were more internationally successful, among the most successful being the romantic drama "Ecstasy" by Gustav Machatý and the romantic "The River" by Josef Rovenský.
After the repressive period of Nazi occupation and the early communist official dramaturgy of socialist realism in movies at the turn of the 1940s and 1950s (with a few exceptions such as "Krakatit" by Otakar Vávra or "Men without wings" by František Čáp, awarded the Palme d'Or at the 1946 Cannes Film Festival), a new era of Czech film began with outstanding animated films by important filmmakers such as Karel Zeman, a pioneer of special effects, and Jiří Trnka, the founder of the modern puppet film. Zeman's work culminated in successful films such as the artistically exceptional "Vynález zkázy" ("A Deadly Invention", 1958), shown in anglophone countries as "The Fabulous World of Jules Verne", which combined acted drama with animation. This began a strong tradition of animated films (Zdeněk Miler's "Mole" etc.). Another Czech cultural phenomenon, the project "Laterna magika" ("The Magic Lantern"), came into being at the end of the 1950s, producing works that combined theater, dance and film in a poetic manner and are considered the first multimedia art project in an international context (see also the Theatre section above).
In the 1960s, the so-called Czech New Wave (also Czechoslovak New Wave) received international acclaim. It is linked with the names of Miloš Forman, Věra Chytilová, Jiří Menzel, Ján Kadár, Elmar Klos, Evald Schorm, Vojtěch Jasný, Ivan Passer, Jan Schmidt, Juraj Herz, Juraj Jakubisko, Jan Němec, Jaroslav Papoušek and others. The hallmarks of the films of this movement were long, often improvised dialogues, black and absurd humor and the casting of non-professional actors. Directors tried to preserve a natural atmosphere without refinement or artificial arrangement of scenes. A unique personality of the 1960s and the beginning of the 1970s, with an original signature, deep psychological impact and extraordinarily high artistic quality, was the director František Vláčil. His films "Marketa Lazarová", "Údolí včel" ("The Valley of the Bees") and "Adelheid" belong to the artistic peaks of Czech cinema production. The film "Marketa Lazarová" was voted the all-time best Czech movie in a prestigious 1998 poll of Czech film critics and publicists. Another internationally well-known author is Jan Švankmajer (at the beginning of his career connected with the above-mentioned project "Laterna magika"), a filmmaker and artist whose work spans several media. He is a self-labeled surrealist known for his animations and features, which have greatly influenced many artists worldwide.
Kadár & Klos's "The Shop on Main Street" (1965), Menzel's "Closely Watched Trains" (1967) and Jan Svěrák's "Kolya" (1996) won the Academy Award for Best Foreign Language Film while six others earned a nomination: "Loves of a Blonde" (1966), "The Fireman's Ball" (1968), "My Sweet Little Village" (1986), "The Elementary School" (1991), "Divided We Fall" (2000) and "Želary" (2003).
The Czech Lion is the highest Czech award for film achievement. Herbert Lom, Karel Roden and Libuše Šafránková (known from the Christmas classic "Three Nuts for Cinderella", which is especially popular in Norway) are among the best known Czech actors.
The Barrandov Studios in Prague are the largest film studios in the country and among the largest in Europe, with many popular film locations in the country. Filmmakers have come to Prague to shoot scenery no longer found in Berlin, Paris and Vienna. The city of Karlovy Vary was used as a location for the 2006 James Bond film Casino Royale.
The Karlovy Vary International Film Festival is one of the oldest in the world and has become Central and Eastern Europe's leading film event. It is also one of the few film festivals to have been given competitive status by the FIAPF. Other film festivals held in the country include Febiofest, the Jihlava International Documentary Film Festival, the One World Film Festival, the Zlín Film Festival and the Fresh Film Festival.
Since the Czech Republic is a democratic republic, journalists and media enjoy a great degree of freedom. There are restrictions only against writing in support of Nazism, racism or violating Czech law. The Czech press was ranked 23rd in the World Press Freedom Index by Reporters Without Borders in 2017. The most trusted news website in the Czech Republic is ct24.cz, which is owned by Czech Television – the only national public television service – and its 24-hour news channel ČT24. Other public services include the Czech Radio and the Czech News Agency (ČTK). Privately owned television services such as TV Nova, TV Prima and TV Barrandov are also very popular, with TV Nova being the most popular channel in the Czech Republic.
Newspapers are quite popular in the Czech Republic. The best-selling daily national newspapers are Blesk (average 1.15M daily readers), Mladá fronta DNES (average 752,000 daily readers), Právo (average 260,000 daily readers) and Deník (average 72,000 daily readers).
The Czech Republic is home to several globally successful video game developers, including Illusion Softworks (2K Czech), Bohemia Interactive, Keen Software House, Amanita Design and Madfinger Games. The Czech video game development scene has a long history, and a number of Czech games were produced for the ZX Spectrum, PMD 85 and Atari systems in the 1980s. In the early 2000s, a number of Czech games achieved international acclaim, including "Hidden & Dangerous", "", "", "Vietcong" and "". The most globally successful Czech games include "ARMA", "DayZ", "Space Engineers", "Machinarium", "Euro Truck Simulator", "American Truck Simulator", "", "18 Wheels of Steel", "Bus Driver", "Shadowgun" and "Blackhole". The Czech Game of the Year Awards are held annually to recognize accomplishments in video game development.
Czech cuisine is marked by a strong emphasis on meat dishes. Pork is quite common; beef and chicken are also popular. Goose, duck, rabbit and venison are served. Fish is less common, with the occasional exception of fresh trout and carp, which is served at Christmas.
Czech beer has a long and important history. The first brewery is known to have existed in 993 and the Czech Republic has the highest beer consumption per capita in the world. The famous "pilsner style beer" (pils) originated in the western Bohemian city of Plzeň, where the world's first-ever blond lager Pilsner Urquell is still being produced, making it the inspiration for more than two-thirds of the beer produced in the world today. Further south the town of České Budějovice, known as Budweis in German, lent its name to its beer, eventually known as Budweiser Budvar. Apart from these and other major brands, the Czech Republic also has a growing number of small breweries and mini-breweries.
Tourism is slowly growing around the Southern Moravian region too, which has been producing wine since the Middle Ages; about 94% of vineyards in the Czech Republic are Moravian. Aside from slivovitz, Czech beer and wine, the Czechs also produce two unique liquors, Fernet Stock and Becherovka. Kofola is a non-alcoholic domestic cola soft drink which competes with Coca-Cola and Pepsi in popularity.
Some popular Czech dishes include roast pork with bread dumplings and sauerkraut ("vepřo knedlo zelo"), beef sirloin in cream sauce ("svíčková") and goulash with dumplings.
There is also a large variety of local sausages, wurst, pâtés, and smoked and cured meats. Czech desserts include a wide variety of whipped cream, chocolate, and fruit pastries and tarts, crêpes, creme desserts and cheese, poppy-seed-filled and other types of traditional cakes such as "buchty", "koláče" and "štrúdl".
Sports play a part in the life of many Czechs, who are generally loyal supporters of their favorite teams or individuals. The two leading sports in the Czech Republic are ice hockey and football. The most watched events in the Czech Republic are Olympic Ice hockey tournaments and Ice Hockey World Championships. Tennis is also a very popular sport in the Czech Republic. The many other sports with professional leagues and structures include basketball, volleyball, team handball, track and field athletics and floorball.
The country has won 14 gold medals in summer (plus 49 as Czechoslovakia) and five gold medals (plus two as Czechoslovakia) in winter Olympic history. Famous Olympians include Věra Čáslavská, Emil Zátopek, Jan Železný, Barbora Špotáková, Martina Sáblíková, Martin Doktor, Štěpánka Hilgertová and Kateřina Neumannová. Other sporting legends include the runner Jarmila Kratochvílová and the chess player Wilhelm Steinitz.
The Czech hockey school has a good reputation. The Czech ice hockey team won the gold medal at the 1998 Winter Olympics and has won twelve gold medals at the World Championships (including 6 as Czechoslovakia), including three straight from 1999 to 2001. Former NHL superstars Jaromír Jágr and Dominik Hašek are among the best known Czech hockey players of all time, as is current Czech NHL star David Pastrňák of the Boston Bruins.
The Czechoslovakia national football team was a consistent performer on the international scene, with eight appearances in the FIFA World Cup Finals, finishing in second place in 1934 and 1962. The team also won the European Football Championship in 1976, came in third in 1980 and won the Olympic gold in 1980. After dissolution of Czechoslovakia, the Czech national football team finished in second (1996) and third (2004) place at the European Football Championship. The most famous Czech footballers were Oldřich Nejedlý, Antonín Puč, František Plánička, Josef Bican, Josef Masopust (Ballon d'or 1962), Ladislav Novák, Svatopluk Pluskal, Antonín Panenka, Ivo Viktor, Pavel Nedvěd (Ballon d'or 2003), Karel Poborský, Vladimír Šmicer, Jan Koller, Milan Baroš, Marek Jankulovski, Tomáš Rosický and Petr Čech.
The Czech Republic also has a great influence in tennis, with such players as Karolína Plíšková, Tomáš Berdych, Jan Kodeš, Jaroslav Drobný, Hana Mandlíková, Wimbledon Women's Singles winners Petra Kvitová and Jana Novotná, 8-time Grand Slam singles champion Ivan Lendl, and 18-time Grand Slam champion Martina Navratilova.
The Czech Republic men's national volleyball team won a silver medal at the 1964 Summer Olympics and two gold medals at the FIVB Volleyball World Championship, in 1956 and 1966. The Czech Republic women's national basketball team won EuroBasket Women 2005, and the Czechoslovakia national basketball team won EuroBasket 1946. The Czech Republic will host EuroBasket 2021 along with Georgia (Tbilisi), Germany (Berlin, Cologne) and Italy (Milan); the group phase matches will take place in Prague. The Czech Republic recently hosted EuroBasket Women 2017.
Škoda Motorsport has been engaged in competition racing since 1901 and has gained a number of titles with various vehicles around the world. The MTX automobile company manufactured racing and formula cars from 1969. The Czech Republic motorcycle Grand Prix (Czech Republic MotoGP) is the most famous motor race in the country.
Sport is a source of strong waves of patriotism, usually rising several days or weeks before an event. The events considered the most important by Czech fans are: the Ice Hockey World Championships, Olympic Ice hockey tournament, UEFA European Football Championship, UEFA Champions League, FIFA World Cup and qualification matches for such events. In general, any international match of the Czech ice hockey or football national team draws attention, especially when played against a traditional rival.
Czechs are also generally keen on engaging in sports activities themselves. One of the most popular sports among Czechs is hiking, mainly in the mountains. The word for "tourist" in the Czech language, "turista", also means "trekker" or "hiker". Thanks to a tradition more than 120 years old, hikers benefit from the Czech Hiking Markers System of trail blazing, which has been adopted by countries worldwide. There is a network of around 40,000 km of marked short- and long-distance trails crossing the whole country and all the Czech mountains.
The most significant sports venues are Eden Arena (e.g. 2013 UEFA Super Cup, 2015 UEFA European Under-21 Championship; home venue of SK Slavia Prague), O2 Arena (2015 European Athletics Indoor Championships, 2015 IIHF World Championship; home venue of HC Sparta Prague), Generali Arena (home venue of AC Sparta Prague), Masaryk Circuit (annual Czech Republic motorcycle Grand Prix), Strahov Stadium (mass games of Sokol and Spartakiades in communist era), Tipsport Arena (1964 World Men's Handball Championship, EuroBasket 1981, 1990 World Men's Handball Championship; home venue of ex-KHL's HC Lev Praha) and Stadion Evžena Rošického (1978 European Athletics Championships).
| https://en.wikipedia.org/wiki?curid=5321 |
Czechoslovakia
Czechoslovakia, or Czecho-Slovakia (Czech and Slovak: "Česko-Slovensko"), was a sovereign state in Central Europe that existed from October 1918, when it declared its independence from the Austro-Hungarian Empire, until its peaceful dissolution into the Czech Republic and Slovakia on 1 January 1993.
From 1939 to 1945, following its forced division and partial incorporation into Nazi Germany, the state did not "de facto" exist but its government-in-exile continued to operate.
From 1948 to 1990, Czechoslovakia was part of the Eastern Bloc with a command economy. Its economic status was formalized in membership of Comecon from 1949 and its defense status in the Warsaw Pact of May 1955. A period of political liberalization in 1968, known as the Prague Spring, was forcibly ended when the Soviet Union, assisted by several other Warsaw Pact countries, invaded Czechoslovakia. In 1989, as Marxist–Leninist governments and communism were ending all over Europe, Czechoslovaks peacefully deposed their government in the Velvet Revolution; state price controls were removed after a period of preparation.
In 1993, Czechoslovakia split into the two sovereign states of the Czech Republic and Slovakia.
The country was of generally irregular terrain. The western area was part of the north-central European uplands. The eastern region was composed of the northern reaches of the Carpathian Mountains and lands of the Danube River basin.
The climate featured mild winters and mild summers, influenced by the Atlantic Ocean from the west, the Baltic Sea from the north, and the Mediterranean Sea from the south. There was no truly continental weather.
The area was long a part of the Austro-Hungarian Empire until the empire collapsed at the end of World War I. The new state was founded by Tomáš Garrigue Masaryk (1850–1937), who served as its first president from 14 November 1918 to 14 December 1935. He was succeeded by his close ally, Edvard Beneš (1884–1948).
The roots of Czech nationalism go back to the 19th century, when philologists and educators, influenced by Romanticism, promoted the Czech language and pride in the Czech people. Nationalism became a mass movement in the second half of the 19th century. Taking advantage of the limited opportunities for participation in political life under Austrian rule, Czech leaders such as historian František Palacký (1798–1876) founded many patriotic, self-help organizations which provided a chance for many of their compatriots to participate in communal life prior to independence. Palacký supported Austro-Slavism and worked for a reorganized and federal Austrian Empire, which would protect the Slavic speaking peoples of Central Europe against Russian and German threats.
An advocate of democratic reform and Czech autonomy within Austria-Hungary, Masaryk was elected twice to the "Reichsrat" (Austrian Parliament), first from 1891 to 1893 for the Young Czech Party, and again from 1907 to 1914 for the Czech Realist Party, which he had founded in 1889 with Karel Kramář and Josef Kaizl.
During World War I small numbers of Czechs and Slovaks, the Czechoslovak Legions, fought with the Allies in France and Italy, while large numbers deserted to Russia in exchange for its support for the independence of Czechoslovakia from the Austrian Empire. With the outbreak of World War I, Masaryk began working for Czech independence in a union with Slovakia. With Edvard Beneš and Milan Rastislav Štefánik, Masaryk visited several Western countries and won support from influential publicists.
The Bohemian Kingdom ceased to exist in 1918 when it was incorporated into Czechoslovakia. Czechoslovakia was founded in October 1918, as one of the successor states of the Austro-Hungarian Empire at the end of World War I and as part of the Treaty of Saint-Germain-en-Laye. It consisted of the present day territories of Bohemia, Moravia, Slovakia and Carpathian Ruthenia. Its territory included some of the most industrialized regions of the former Austria-Hungary.
The new country was a multi-ethnic state, with Czechs and Slovaks as "constituent peoples". The population consisted of Czechs (51%), Slovaks (16%), Germans (22%), Hungarians (5%) and Rusyns (4%). Many of the Germans, Hungarians, Ruthenians and Poles, and some Slovaks, felt oppressed because the political elite did not generally allow political autonomy for minority ethnic groups. This policy led to unrest among the non-Czech population, particularly in German-speaking Sudetenland, which initially had proclaimed itself part of the Republic of German-Austria in accordance with the self-determination principle.
The state proclaimed the official ideology that there were no separate Czech and Slovak nations, but only one nation of Czechoslovaks (see Czechoslovakism), to the disagreement of Slovaks and other ethnic groups. Once a unified Czechoslovakia was restored after World War II (after the country had been divided during the war), the conflict between the Czechs and the Slovaks surfaced again. The governments of Czechoslovakia and other eastern European nations deported ethnic Germans to the West, reducing the presence of minorities in the nation. Most of the Jews had been killed during the war by the Nazis and their allies.
"*Jews identified themselves as Germans or Hungarians (and Jews only by religion not ethnicity), the sum is, therefore, more than 100%."
During the period between the two world wars, democracy thrived in Czechoslovakia. Of all the new states established in central Europe after 1918, only Czechoslovakia preserved a democratic government until the war broke out. Thus, despite regional disparities, its level of development was much higher than that of neighboring states. The population was generally literate, and contained fewer alienated groups. The influence of these conditions was augmented by the political values of Czechoslovakia's leaders and the policies they adopted. Under Tomáš Masaryk, Czech and Slovak politicians promoted progressive social and economic conditions that served to defuse discontent.
Foreign minister Beneš became the prime architect of the Czechoslovak-Romanian-Yugoslav alliance (the "Little Entente", 1921–38) directed against Hungarian attempts to reclaim lost areas. Beneš worked closely with France. Far more dangerous was the German element, which after 1933 became allied with the Nazis in Germany. The increasing feeling of inferiority among the Slovaks, who were hostile to the more numerous Czechs, weakened the country in the late 1930s. Many Slovaks supported an extreme nationalist movement and welcomed the puppet Slovak state set up under Hitler's control in 1939.
After 1933, Czechoslovakia remained the only democracy in central and eastern Europe.
In September 1938, Adolf Hitler demanded control of the Sudetenland. On 29 September 1938, Britain and France ceded the region to Germany at the Munich Conference, in line with the policy of appeasement; France ignored the military alliance it had with Czechoslovakia. During October 1938, Nazi Germany occupied and annexed the Sudetenland border region, effectively crippling Czechoslovak defences.
On 15 March 1939, the remainder ("rump") of Czechoslovakia was invaded and divided into the Protectorate of Bohemia and Moravia and the puppet Slovak State.
Much of Slovakia and all of Carpathian Ruthenia were annexed by Hungary. Poland occupied Zaolzie, an area whose population was majority Polish, in October 1938.
The eventual goal of the German state under Nazi leadership was to eradicate Czech nationality through assimilation, deportation, and extermination of the Czech intelligentsia; the intellectual elites and middle class made up a considerable number of the 200,000 people who passed through concentration camps and the 250,000 who died during German occupation. Under Generalplan Ost, it was assumed that around 50% of Czechs would be fit for Germanization. The Czech intellectual elites were to be removed not only from Czech territories but from Europe completely. The authors of Generalplan Ost believed it would be best if they emigrated overseas, as even in Siberia they were considered a threat to German rule. Just like Jews, Poles, Serbs, and several other nations, Czechs were considered "Untermenschen" by the Nazi state. In 1940, in a secret Nazi plan for the Germanization of the Protectorate of Bohemia and Moravia, it was declared that those considered to be of racially Mongoloid origin and the Czech intelligentsia were not to be Germanized.
The deportation of Jews to concentration camps was organized under the direction of Reinhard Heydrich, and the fortress town of Terezín was made into a ghetto way station for Jewish families. On 4 June 1942 Heydrich died after being wounded by an assassin in Operation Anthropoid. Heydrich's successor, Colonel General Kurt Daluege, ordered mass arrests and executions and the destruction of the villages of Lidice and Ležáky. In 1943 the German war effort was accelerated. Under the authority of Karl Hermann Frank, German minister of state for Bohemia and Moravia, some 350,000 Czech laborers were dispatched to the Reich. Within the protectorate, all non-war-related industry was prohibited. Most of the Czech population obeyed quiescently up until the final months preceding the end of the war, while thousands were involved in the resistance movement.
For the Czechs of the Protectorate Bohemia and Moravia, German occupation was a period of brutal oppression. Czech losses resulting from political persecution and deaths in concentration camps totaled between 36,000 and 55,000. The Jewish population of Bohemia and Moravia (118,000 according to the 1930 census) was virtually annihilated. Many Jews emigrated after 1939; more than 70,000 were killed; 8,000 survived at Terezín. Several thousand Jews managed to live in freedom or in hiding throughout the occupation.
Despite the estimated 136,000 deaths at the hands of the Nazi regime, the population of the Reichsprotektorat saw a net increase during the war years of approximately 250,000, in line with an increased birth rate.
On 6 May 1945, the US Third Army of General Patton entered Pilsen from the southwest. On 9 May 1945, Soviet Red Army troops entered Prague.
After World War II, pre-war Czechoslovakia was re-established, with the exception of Subcarpathian Ruthenia, which was annexed by the Soviet Union and incorporated into the Ukrainian Soviet Socialist Republic. The Beneš decrees were promulgated concerning ethnic Germans (see Potsdam Agreement) and ethnic Hungarians. Under the decrees, citizenship was abrogated for people of German and Hungarian ethnic origin who had accepted German or Hungarian citizenship during the occupations. In 1948, this provision was cancelled for the Hungarians, but only partially for the Germans. The government then confiscated the property of the Germans and expelled about 90% of the ethnic German population, over 2 million people. Those who remained were collectively accused of supporting the Nazis after the Munich Agreement, as 97.32% of Sudeten Germans had voted for the NSDAP in the December 1938 elections. Almost every decree explicitly stated that the sanctions did not apply to antifascists. Some 250,000 Germans, many married to Czechs, some antifascists, and also those required for the post-war reconstruction of the country, remained in Czechoslovakia. The Beneš Decrees still cause controversy among nationalist groups in the Czech Republic, Germany, Austria and Hungary.
Carpathian Ruthenia (Podkarpatská Rus) was occupied by (and in June 1945 formally ceded to) the Soviet Union. In the 1946 parliamentary election, the Communist Party of Czechoslovakia was the winner in the Czech lands, and the Democratic Party won in Slovakia. In February 1948 the Communists seized power. Although they maintained the fiction of political pluralism through the existence of the National Front, the country had no liberal democracy, except for a short period in the late 1960s (the Prague Spring). Since citizens lacked significant electoral methods of registering protest against government policies, periodically there were street protests that became violent. For example, there were riots in the town of Plzeň in 1953, reflecting economic discontent. Police and army units put down the rebellion, and hundreds were injured but no one was killed. While its economy remained more advanced than those of its neighbors in Eastern Europe, Czechoslovakia grew increasingly economically weak relative to Western Europe.
The currency reform of 1953 caused dissatisfaction among Czechoslovak laborers. Prior to World War II, Czech purchasing power had surpassed that of the Soviet Union by 115–144%; this disparity was noted after Czechoslovakia came under the Soviet Bloc. To equalize the wage rate, Czechoslovaks had to turn in their old money for new at a decreased value, which lowered the real value of wages by about 11%. The banks also confiscated savings and bank deposits to control the amount of money in circulation. The economy continued to suffer as bituminous coal production fell short of expectations. Bituminous coal powered 85% of Czechoslovakia's economy, and because of low production it was reserved for industrial use only. Before the war, consumers had used both coal and lignite for fuel; now they could obtain only lignite. In 1929, a typical family of four consumed approximately 2.34 tons of lignite, but by 1953 it was allowed to use only 1.6–1.8 tons per year.
In 1968, when the reformer Alexander Dubček was appointed to the key post of First Secretary of the Czechoslovak Communist Party, there was a brief period of liberalization known as the Prague Spring. In response, after failing to persuade the Czechoslovak leaders to change course, five other members of the Warsaw Pact invaded. Soviet tanks rolled into Czechoslovakia on the night of 20–21 August 1968. Soviet Communist Party General Secretary Leonid Brezhnev viewed this intervention as vital for the preservation of the Soviet, socialist system and vowed to intervene in any state that sought to replace Marxism-Leninism with capitalism. In the week after the invasion there was a spontaneous campaign of civil resistance against the occupation. This resistance involved a wide range of acts of non-cooperation and defiance: this was followed by a period in which the Czechoslovak Communist Party leadership, having been forced in Moscow to make concessions to the Soviet Union, gradually put the brakes on their earlier liberal policies. In April 1969 Dubček was finally dismissed from the First Secretaryship of the Czechoslovak Communist Party. Meanwhile, one plank of the reform program had been carried out: in 1968–69, Czechoslovakia was turned into a federation of the Czech Socialist Republic and Slovak Socialist Republic. The theory was that under the federation, social and economic inequities between the Czech and Slovak halves of the state would be largely eliminated. A number of ministries, such as education, now became two formally equal bodies in the two formally equal republics. However, the centralized political control by the Czechoslovak Communist Party severely limited the effects of federalization.
The 1970s saw the rise of the dissident movement in Czechoslovakia, represented among others by Václav Havel. The movement sought greater political participation and expression in the face of official disapproval, manifested in limitations on work activities, which went as far as a ban on professional employment, the refusal of higher education for the dissidents' children, police harassment and prison.
In 1989, the Velvet Revolution restored democracy. This occurred at around the same time as the fall of communism in Romania, Bulgaria, Hungary and Poland.
The word "socialist" was removed from the country's full name on 29 March 1990 and replaced by "federal".
In 1992, because of growing nationalist tensions in the government, Czechoslovakia was peacefully dissolved by parliament. On 1 January 1993 it formally separated into two independent countries, the Czech Republic and the Slovak Republic.
After World War II, a political monopoly was held by the Communist Party of Czechoslovakia (KSČ). Gustáv Husák was elected first secretary of the KSČ in 1969 (changed to general secretary in 1971) and president of Czechoslovakia in 1975. Other parties and organizations existed but functioned in subordinate roles to the KSČ. All political parties, as well as numerous mass organizations, were grouped under the umbrella of the National Front. Human rights activists and religious activists were severely repressed.
Czechoslovakia had four constitutions during its history (1918–1992): the provisional constitution of 1918, the 1920 Constitution, the "Ninth-of-May" Constitution of 1948, and the 1960 Constitution of the Czechoslovak Socialist Republic, substantially amended in 1968 by the Constitutional Act of Federation.
In the 1930s, the nation formed a military alliance with France, which collapsed in the Munich Agreement of 1938. After World War II, Czechoslovakia was an active participant in the Council for Mutual Economic Assistance (Comecon), the Warsaw Pact, and the United Nations and its specialized agencies, and was a signatory of the Conference on Security and Cooperation in Europe.
Before World War II, the economy was about the fourth largest among the industrial states of Europe. The state was built on a strong economy, manufacturing cars (Škoda, Tatra), trams, aircraft (Aero, Avia), ships, ship engines (Škoda), cannons, shoes (Baťa), turbines and guns (Zbrojovka Brno). It had been the industrial workshop of the Austro-Hungarian Empire. The Slovak lands relied more heavily on agriculture.
After World War II, the economy was centrally planned, with command links controlled by the communist party, similarly to the Soviet Union. The large metallurgical industry was dependent on imports of iron and non-ferrous ores.
After World War II, the country was short of energy, relying on imported crude oil and natural gas from the Soviet Union, domestic brown coal, and nuclear and hydroelectric energy. Energy constraints were a major factor in the 1980s.
Shortly after the foundation of Czechoslovakia in 1918, there was a lack of necessary infrastructure in many areas – paved roads, railways, bridges, etc. Massive improvement in the following years enabled Czechoslovakia to develop its industry. Prague's civil airport in Ruzyně became one of the most modern terminals in the world when it was finished in 1937. Tomáš Baťa, a Czech entrepreneur and visionary, outlined his ideas in the publication "Budujme stát pro 40 milionů lidí" ("Let's Build a State for 40 Million People"), where he described the future motorway system. Construction of the first motorways in Czechoslovakia began in 1939; nevertheless, it was stopped after the Nazi occupation, during World War II.
Education was free at all levels and compulsory from age 6 to 15. The vast majority of the population was literate. There was a highly developed system of apprenticeship training, and vocational schools supplemented general secondary schools and institutions of higher education.
In 1991, the population was 46% Roman Catholic, 5.3% Evangelical Lutheran and 30% atheist, with 17% not stated, but there were huge differences in religious practices between the two constituent republics; see Czech Republic and Slovakia.
After World War II, free health care was available to all citizens. National health planning emphasized preventive medicine; factory and local health care centres supplemented hospitals and other inpatient institutions. There was substantial improvement in rural health care during the 1960s and 1970s.
During the era between the World Wars, Czechoslovak democracy and liberalism facilitated conditions for free publication. The most significant daily newspapers in these times were Lidové noviny, Národní listy, Český deník and Československá republika.
During Communist rule, the mass media in Czechoslovakia were controlled by the Communist Party. Private ownership of any publication or agency of the mass media was generally forbidden, although churches and other organizations published small periodicals and newspapers. Even with this information monopoly in the hands of organizations under KSČ control, all publications were reviewed by the government's Office for Press and Information.
The Czechoslovakia national football team was a consistent performer on the international scene, with eight appearances in the FIFA World Cup Finals, finishing in second place in 1934 and 1962. The team also won the European Football Championship in 1976, came in third in 1980 and won the Olympic gold in 1980.
Well-known football players such as Pavel Nedvěd, Antonín Panenka, Milan Baroš, Tomáš Rosický, Vladimír Šmicer or Petr Čech were all born in Czechoslovakia.
The International Olympic Committee code for Czechoslovakia is TCH, which is still used in historical listings of results.
The Czechoslovak national ice hockey team won many medals from the world championships and Olympic Games. Peter Šťastný, Jaromír Jágr, Dominik Hašek, Peter Bondra, Petr Klíma, Marián Gáborík, Marián Hossa, Miroslav Šatan and Pavol Demitra all come from Czechoslovakia.
Emil Zátopek, winner of four Olympic gold medals in athletics, is considered one of the top athletes in Czechoslovak history.
Věra Čáslavská was an Olympic gold medallist in gymnastics, winning seven gold medals and four silver medals. She represented Czechoslovakia in three consecutive Olympics.
Several accomplished professional tennis players including Ivan Lendl, Jan Kodeš, Miloslav Mečíř, Hana Mandlíková, Martina Hingis, Martina Navratilova, Jana Novotná, Petra Kvitová and Daniela Hantuchová were born in Czechoslovakia.
| https://en.wikipedia.org/wiki?curid=5322 |
Computer science
Computer science deals with the theoretical foundations of computation and practical techniques for their application.
Computer science is the study of computation and information. Computer science deals with theory of computation, algorithms, computational problems and the design of computer systems hardware, software and applications. Computer science addresses both human-made and natural information processes, such as communication, control, perception, learning and intelligence especially in human-made computing systems and machines. According to Peter Denning, the fundamental question underlying computer science is, "What can be automated?"
Its fields can be divided into theoretical and practical disciplines. Computational complexity theory is highly abstract, while computer graphics and computational geometry emphasize real-world applications. Algorithmics has been called the heart of computer science. Programming language theory considers approaches to the description of computational processes, while software engineering involves the use of programming languages and complex systems. Computer architecture and computer engineering deal with the construction of computer components and computer-controlled equipment. Human–computer interaction considers the challenges in making computers useful, usable, and accessible. Artificial intelligence aims to synthesize goal-orientated processes such as problem-solving, decision-making, environmental adaptation, motion planning, learning, and communication found in humans and animals.
The earliest foundations of what would become computer science predate the invention of the modern digital computer. Machines for calculating fixed numerical tasks such as the abacus have existed since antiquity, aiding in computations such as multiplication and division. Algorithms for performing computations have existed since antiquity, even before the development of sophisticated computing equipment.
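One of the oldest algorithms still in everyday use is Euclid's method for computing the greatest common divisor of two integers, described around 300 BC. The following minimal Python sketch is a modern rendering of that ancient procedure (the function name and test values are illustrative, not historical):

    def gcd(a: int, b: int) -> int:
        """Euclid's algorithm: repeatedly replace the pair (a, b)
        with (b, a mod b); when the remainder reaches zero, the
        surviving value is the greatest common divisor."""
        while b:
            a, b = b, a % b
        return a

    print(gcd(252, 105))  # prints 21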
Wilhelm Schickard designed and constructed the first working mechanical calculator in 1623. In 1673, Gottfried Leibniz demonstrated a digital mechanical calculator, called the Stepped Reckoner. He may be considered the first computer scientist and information theorist, for, among other reasons, documenting the binary number system. In 1820, Thomas de Colmar launched the mechanical calculator industry when he invented his simplified arithmometer, which was the first calculating machine strong enough and reliable enough to be used daily in an office environment. Charles Babbage started the design of the first "automatic mechanical calculator", his Difference Engine, in 1822, which eventually gave him the idea of the first "programmable mechanical calculator", his Analytical Engine. He started developing this machine in 1834, and "in less than two years, he had sketched out many of the salient features of the modern computer". "A crucial step was the adoption of a punched card system derived from the Jacquard loom", making it infinitely programmable. In 1843, during the translation of a French article on the Analytical Engine, Ada Lovelace wrote, in one of the many notes she included, an algorithm to compute the Bernoulli numbers, which is considered to be the first published algorithm ever specifically tailored for implementation on a computer. Around 1885, Herman Hollerith invented the tabulator, which used punched cards to process statistical information; eventually his company became part of IBM. Following Babbage, although unaware of his earlier work, Percy Ludgate in 1909 published the second of the only two designs for mechanical analytical engines in history. In 1937, one hundred years after Babbage's impossible dream, Howard Aiken convinced IBM, which was making all kinds of punched card equipment and was also in the calculator business, to develop his giant programmable calculator, the ASCC/Harvard Mark I, based on Babbage's Analytical Engine, which itself used cards and a central computing unit. When the machine was finished, some hailed it as "Babbage's dream come true".
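Lovelace's note sketched how the Analytical Engine could compute the Bernoulli numbers. As a hedged modern sketch of the same computation (using the standard recurrence over binomial coefficients rather than her exact operation tables), the numbers can be generated as follows:

    from fractions import Fraction
    from math import comb

    def bernoulli(n: int) -> list:
        """Return [B_0, ..., B_n] as exact fractions, using the
        recurrence sum_{k=0}^{m} C(m+1, k) * B_k = 0 for m >= 1,
        with B_0 = 1."""
        B = [Fraction(1)]
        for m in range(1, n + 1):
            acc = sum(comb(m + 1, k) * B[k] for k in range(m))
            B.append(-acc / (m + 1))
        return B

    print(bernoulli(6))  # B_1 = -1/2, B_2 = 1/6, B_4 = -1/30, B_6 = 1/42

Exact rational arithmetic is used here because the Bernoulli numbers are fractions; floating point would accumulate rounding error quickly.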
During the 1940s, as new and more powerful computing machines such as the Atanasoff–Berry computer and ENIAC were developed, the term "computer" came to refer to the machines rather than their human predecessors. As it became clear that computers could be used for more than just mathematical calculations, the field of computer science broadened to study computation in general. In 1945, IBM founded the Watson Scientific Computing Laboratory at Columbia University in New York City. The renovated fraternity house on Manhattan's West Side was IBM's first laboratory devoted to pure science. The lab is the forerunner of IBM's Research Division, which today operates research facilities around the world. Ultimately, the close relationship between IBM and the university was instrumental in the emergence of a new scientific discipline, with Columbia offering one of the first academic-credit courses in computer science in 1946. Computer science began to be established as a distinct academic discipline in the 1950s and early 1960s. The world's first computer science degree program, the Cambridge Diploma in Computer Science, began at the University of Cambridge Computer Laboratory in 1953. The first computer science department in the United States was formed at Purdue University in 1962. Since practical computers became available, many applications of computing have become distinct areas of study in their own right.
Although many initially doubted that computers themselves could constitute a genuine field of scientific study, by the late 1950s the idea had gradually gained acceptance in the wider academic community. The now well-known IBM brand formed part of the computer science revolution during this time. IBM (short for International Business Machines) released the IBM 704 and later the IBM 709 computers, which were widely used during the exploration period of such devices. "Still, working with the IBM [computer] was frustrating […] if you had misplaced as much as one letter in one instruction, the program would crash, and you would have to start the whole process over again". During the late 1950s, the computer science discipline was very much in its developmental stages, and such issues were commonplace.
The concept of a field-effect transistor was proposed by Julius Edgar Lilienfeld in 1925. John Bardeen and Walter Brattain, while working under William Shockley at Bell Labs, built the first working transistor, the point-contact transistor, in 1947. In 1953, the University of Manchester built the first transistorized computer, called the Transistor Computer. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialised applications. The metal–oxide–silicon field-effect transistor (MOSFET, or MOS transistor) was invented by Mohamed Atalla and Dawon Kahng at Bell Labs in 1959. It was the first truly compact transistor that could be miniaturised and mass-produced for a wide range of uses. The MOSFET made it possible to build high-density integrated circuit chips, leading to what is known as the computer revolution or microcomputer revolution.
Time has seen significant improvements in the usability and effectiveness of computing technology. Modern society has seen a significant shift in the users of computer technology, from usage only by experts and professionals to a near-ubiquitous user base. Initially, computers were quite costly, and some degree of human assistance was needed for efficient use, provided in part by professional computer operators. As computer adoption became more widespread and affordable, less human assistance was needed for common usage.
Although first proposed in 1956, the term "computer science" appears in a 1959 article in "Communications of the ACM", in which Louis Fein argues for the creation of a "Graduate School in Computer Sciences" analogous to the creation of Harvard Business School in 1921, justifying the name by arguing that, like management science, the subject is applied and interdisciplinary in nature, while having the characteristics typical of an academic discipline.
His efforts, and those of others such as numerical analyst George Forsythe, were rewarded: universities went on to create such departments, starting with Purdue in 1962. Despite its name, a significant amount of computer science does not involve the study of computers themselves. Because of this, several alternative names have been proposed. Certain departments of major universities prefer the term "computing science", to emphasize precisely that difference. Danish scientist Peter Naur suggested the term "datalogy", to reflect the fact that the scientific discipline revolves around data and data treatment, while not necessarily involving computers. The first scientific institution to use the term was the Department of Datalogy at the University of Copenhagen, founded in 1969, with Peter Naur being the first professor in datalogy. The term is used mainly in the Scandinavian countries. An alternative term, also proposed by Naur, is data science; this is now used for a multi-disciplinary field of data analysis, including statistics and databases.
In the early days of computing, a number of terms for the practitioners of the field of computing were suggested in the "Communications of the ACM"—"turingineer", "turologist", "flow-charts-man", "applied meta-mathematician", and "applied epistemologist". Three months later in the same journal, "comptologist" was suggested, followed next year by "hypologist". The term "computics" has also been suggested. In Europe, terms derived from contracted translations of the expression "automatic information" (e.g. "informazione automatica" in Italian) or "information and mathematics" are often used, e.g. "informatique" (French), "Informatik" (German), "informatica" (Italian, Dutch), "informática" (Spanish, Portuguese), "informatika" (Slavic languages and Hungarian) or "pliroforiki" ("πληροφορική", which means informatics) in Greek. Similar words have also been adopted in the UK (as in "the School of Informatics of the University of Edinburgh").
"In the U.S., however, informatics is linked with applied computing, or computing in the context of another domain."
A folkloric quotation, often attributed to—but almost certainly not first formulated by—Edsger Dijkstra, states that "computer science is no more about computers than astronomy is about telescopes." The design and deployment of computers and computer systems is generally considered the province of disciplines other than computer science. For example, the study of computer hardware is usually considered part of computer engineering, while the study of commercial computer systems and their deployment is often called information technology or information systems. However, there has been much cross-fertilization of ideas between the various computer-related disciplines. Computer science research also often intersects other disciplines, such as philosophy, cognitive science, linguistics, mathematics, physics, biology, statistics, and logic.
Computer science is considered by some to have a much closer relationship with mathematics than many scientific disciplines, with some observers saying that computing is a mathematical science. Early computer science was strongly influenced by the work of mathematicians such as Kurt Gödel, Alan Turing, John von Neumann, Rózsa Péter and Alonzo Church and there continues to be a useful interchange of ideas between the two fields in areas such as mathematical logic, category theory, domain theory, and algebra.
The relationship between Computer Science and Software Engineering is a contentious issue, which is further muddied by disputes over what the term "Software Engineering" means, and how computer science is defined. David Parnas, taking a cue from the relationship between other engineering and science disciplines, has claimed that the principal focus of computer science is studying the properties of computation in general, while the principal focus of software engineering is the design of specific computations to achieve practical goals, making the two separate but complementary disciplines.
The academic, political, and funding aspects of computer science tend to depend on whether a department was formed with a mathematical emphasis or with an engineering emphasis. Computer science departments with a mathematics emphasis and a numerical orientation consider alignment with computational science. Both types of departments tend to make efforts to bridge the field educationally, if not across all research.
A number of computer scientists have argued for the distinction of three separate paradigms in computer science. Peter Wegner argued that those paradigms are science, technology, and mathematics. Peter Denning's working group argued that they are theory, abstraction (modeling), and design. Amnon H. Eden described them as the "rationalist paradigm" (which treats computer science as a branch of mathematics, which is prevalent in theoretical computer science, and mainly employs deductive reasoning), the "technocratic paradigm" (which might be found in engineering approaches, most prominently in software engineering), and the "scientific paradigm" (which approaches computer-related artifacts from the empirical perspective of natural sciences, identifiable in some branches of artificial intelligence).
Computer science focuses on methods involved in design, specification, programming, verification, implementation and testing of human-made computing systems.
As a discipline, computer science spans a range of topics from theoretical studies of algorithms and the limits of computation to the practical issues of implementing computing systems in hardware and software.
CSAB, formerly called Computing Sciences Accreditation Board—which is made up of representatives of the Association for Computing Machinery (ACM), and the IEEE Computer Society (IEEE CS)—identifies four areas that it considers crucial to the discipline of computer science: "theory of computation", "algorithms and data structures", "programming methodology and languages", and "computer elements and architecture". In addition to these four areas, CSAB also identifies fields such as software engineering, artificial intelligence, computer networking and communication, database systems, parallel computation, distributed computation, human–computer interaction, computer graphics, operating systems, and numerical and symbolic computation as being important areas of computer science.
"Theoretical Computer Science" is mathematical and abstract in spirit, but it derives its motivation from the practical and everyday computation. Its aim is to understand the nature of computation and, as a consequence of this understanding, provide more efficient methodologies. All studies related to mathematical, logic and formal concepts and methods could be considered as theoretical computer science, provided that the motivation is clearly drawn from the field of computing.
Data structures and algorithms are the study of commonly used computational methods and their computational efficiency.
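As a minimal illustration of what such analysis means in practice, the following Python sketch contrasts linear search, which examines every element, with binary search, which repeatedly halves a sorted list; the function names and data are invented for the example.

```python
# Two ways to find a value in a sorted list, illustrating how the choice of
# algorithm changes computational cost.

def linear_search(items, target):
    """Check every element in turn: O(n) comparisons in the worst case."""
    for i, value in enumerate(items):
        if value == target:
            return i
    return -1

def binary_search(items, target):
    """Halve the search interval each step: O(log n) comparisons,
    but only correct if `items` is sorted."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

data = list(range(0, 1_000_000, 2))   # sorted even numbers
assert linear_search(data, 500_000) == binary_search(data, 500_000)
```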
According to Peter Denning, the fundamental question underlying computer science is, "What can be automated?" Theory of computation is focused on answering fundamental questions about what can be computed and what amount of resources are required to perform those computations. In an effort to answer the first question, computability theory examines which computational problems are solvable on various theoretical models of computation. The second question is addressed by computational complexity theory, which studies the time and space costs associated with different approaches to solving a multitude of computational problems.
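To make the notion of a theoretical model of computation concrete, the sketch below simulates a one-tape Turing machine in Python; the transition table, which simply inverts a binary string, is a toy example of our own devising.

```python
# A minimal one-tape Turing machine simulator. The machine below merely
# inverts a binary string, but the simulator runs any transition table of
# the form (state, symbol) -> (new_state, new_symbol, move).

def run_turing_machine(rules, tape, state="q0", blank="_", max_steps=10_000):
    tape = dict(enumerate(tape))          # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        symbol = tape.get(head, blank)
        if (state, symbol) not in rules:  # no applicable rule: halt
            break
        state, tape[head], move = rules[(state, symbol)]
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

invert = {
    ("q0", "0"): ("q0", "1", "R"),   # flip 0 -> 1, keep scanning right
    ("q0", "1"): ("q0", "0", "R"),   # flip 1 -> 0, keep scanning right
}                                     # reading a blank halts the machine

print(run_turing_machine(invert, "10110"))  # -> 01001
```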
The famous P = NP? problem, one of the Millennium Prize Problems, is an open problem in the theory of computation.
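The asymmetry behind the P versus NP question, that solutions may be hard to find yet easy to check, can be illustrated with the subset-sum problem. In the hypothetical Python sketch below, the exhaustive search may examine exponentially many subsets, while verifying a proposed answer needs only a single pass.

```python
from itertools import combinations

def find_subset(numbers, target):
    """Exhaustive search: tries up to 2**n subsets (exponential time)."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return subset
    return None

def verify_subset(numbers, target, candidate):
    """Checking a proposed certificate is cheap: one membership pass
    and one summation (polynomial time)."""
    pool = list(numbers)
    for x in candidate:
        if x in pool:
            pool.remove(x)
        else:
            return False
    return sum(candidate) == target

nums = [3, 34, 4, 12, 5, 2]
witness = find_subset(nums, 9)                    # slow in general
print(witness, verify_subset(nums, 9, witness))   # fast to check
```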
Information theory is related to the quantification of information. This was developed by Claude Shannon to find fundamental limits on signal processing operations such as compressing data and on reliably storing and communicating data.
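Shannon's central quantity is entropy, H = -sum p(x) log2 p(x), the average number of bits needed per symbol. The short sketch below, written for illustration, estimates it from the symbol frequencies of a string.

```python
from collections import Counter
from math import log2

def entropy(message):
    """Empirical Shannon entropy in bits per symbol:
    H = -sum(p * log2(p)) over the observed symbol frequencies."""
    counts = Counter(message)
    total = len(message)
    return -sum((c / total) * log2(c / total) for c in counts.values())

print(entropy("aaaa"))      # zero bits: a constant source carries no information
print(entropy("abab"))      # 1.0 bit: two equally likely symbols
print(entropy("abcdefgh"))  # 3.0 bits: eight equally likely symbols
```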
Coding theory is the study of the properties of codes (systems for converting information from one form to another) and their fitness for a specific application. Codes are used for data compression, cryptography, error detection and correction, and more recently also for network coding. Codes are studied for the purpose of designing efficient and reliable data transmission methods.
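A minimal error-detecting code illustrates the idea: appending a single parity bit lets a receiver detect any one flipped bit, at the cost of one extra bit per block. Practical systems use far stronger codes (Hamming, Reed–Solomon, LDPC); this sketch shows only the principle.

```python
def add_parity(bits):
    """Append an even-parity bit so the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def check_parity(codeword):
    """Valid even-parity codewords contain an even number of 1s."""
    return sum(codeword) % 2 == 0

word = add_parity([1, 0, 1, 1])    # -> [1, 0, 1, 1, 1]
assert check_parity(word)

word[2] ^= 1                       # a single-bit transmission error...
assert not check_parity(word)      # ...is detected (though not located)
```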
Programming language theory is a branch of computer science that deals with the design, implementation, analysis, characterization, and classification of programming languages and their individual features. It falls within the discipline of computer science, both depending on and affecting mathematics, software engineering, and linguistics. It is an active research area, with numerous dedicated academic journals.
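Much of the field concerns giving programs a precise meaning (semantics). The sketch below interprets a tiny expression language of our own invention, with numbers, variables, arithmetic, and "let" bindings; it is illustrative only and corresponds to no particular real language.

```python
# An interpreter for a toy expression language. Programs are nested tuples:
# ("add", e1, e2), ("mul", e1, e2), ("let", name, e1, e2), names, or numbers.

def evaluate(expr, env=None):
    env = env or {}
    if isinstance(expr, (int, float)):          # literal
        return expr
    if isinstance(expr, str):                   # variable reference
        return env[expr]
    op = expr[0]
    if op == "add":
        return evaluate(expr[1], env) + evaluate(expr[2], env)
    if op == "mul":
        return evaluate(expr[1], env) * evaluate(expr[2], env)
    if op == "let":                             # ("let", name, bound, body)
        _, name, bound, body = expr
        return evaluate(body, {**env, name: evaluate(bound, env)})
    raise ValueError(f"unknown operator: {op}")

# let x = 2 + 3 in x * x  ->  25
print(evaluate(("let", "x", ("add", 2, 3), ("mul", "x", "x"))))
```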
Formal methods are a particular kind of mathematically based technique for the specification, development and verification of software and hardware systems. The use of formal methods for software and hardware design is motivated by the expectation that, as in other engineering disciplines, performing appropriate mathematical analysis can contribute to the reliability and robustness of a design. They form an important theoretical underpinning for software engineering, especially where safety or security is involved. Formal methods are a useful adjunct to software testing since they help avoid errors and can also give a framework for testing. For industrial use, tool support is required. However, the high cost of using formal methods means that they are usually only used in the development of high-integrity and life-critical systems, where safety or security is of utmost importance. Formal methods are best described as the application of a fairly broad variety of theoretical computer science fundamentals, in particular logic calculi, formal languages, automata theory, and program semantics, but also type systems and algebraic data types to problems in software and hardware specification and verification.
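Full formal verification relies on dedicated tools such as model checkers and proof assistants, but the flavor of the reasoning can be suggested with executable assertions: in the sketch below, a loop invariant together with the negated loop condition yields the function's postcondition. This is an informal illustration, not a machine-checked proof.

```python
def integer_sqrt(n):
    """Largest r with r*r <= n, annotated with its correctness conditions."""
    assert n >= 0                      # precondition
    r = 0
    while (r + 1) * (r + 1) <= n:
        assert r * r <= n              # invariant: holds on every iteration
        r += 1
    # invariant plus the negated loop condition give the postcondition:
    assert r * r <= n < (r + 1) * (r + 1)
    return r

print(integer_sqrt(10))  # -> 3
```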
Computer architecture, or digital computer organization, is the conceptual design and fundamental operational structure of a computer system. It focuses largely on the way by which the central processing unit performs internally and accesses addresses in memory. The field often involves disciplines of computer engineering and electrical engineering, selecting and interconnecting hardware components to create computers that meet functional, performance, and cost goals.
Computer performance analysis is the study of work flowing through computers with the general goals of improving throughput, controlling response time, using resources efficiently, eliminating bottlenecks, and predicting performance under anticipated peak loads.
Benchmarks are used to compare the performance of systems carrying different chips and/or system architectures.
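On a small scale, the same idea underlies micro-benchmarking: run a workload many times and compare implementations. The sketch below uses Python's standard timeit module; the two workloads are arbitrary examples chosen for illustration.

```python
import timeit

# Compare two ways of building a list of squares. timeit runs each snippet
# repeatedly and returns total elapsed seconds, averaging out noise.
loop_stmt = "result = []\nfor i in range(1000):\n    result.append(i * i)"
comp_stmt = "[i * i for i in range(1000)]"

loop_time = timeit.timeit(loop_stmt, number=1_000)
comp_time = timeit.timeit(comp_stmt, number=1_000)

print(f"for-loop:           {loop_time:.3f}s")
print(f"list comprehension: {comp_time:.3f}s")
```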
Concurrency is a property of systems in which several computations are executing simultaneously, and potentially interacting with each other. A number of mathematical models have been developed for general concurrent computation including Petri nets, process calculi and the Parallel Random Access Machine model. When multiple computers are connected in a network while using concurrency, this is known as a distributed system. Computers within that distributed system have their own private memory, and information can be exchanged to achieve common goals.
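The classic hazard in concurrent programs is a race condition on shared state. In the illustrative sketch below, several threads increment a shared counter, and a lock serializes the read-modify-write step so that no update is lost.

```python
import threading

counter = 0
lock = threading.Lock()

def worker(iterations):
    global counter
    for _ in range(iterations):
        with lock:            # without this, read-modify-write can interleave
            counter += 1      # and some increments would be lost

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # always 400000 with the lock; unpredictable without it
```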
This branch of computer science studies the design and management of the networks that link computers worldwide.
Computer security is a branch of computer technology with the objective of protecting information from unauthorized access, disruption, or modification while maintaining the accessibility and usability of the system for its intended users. Cryptography is the practice and study of hiding (encrypting) and recovering (decrypting) information. Modern cryptography is closely tied to computer science, since the security of many encryption and decryption algorithms rests on their computational complexity.
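Even a toy cipher shows the role of the key: the XOR scheme below is trivially reversible for anyone holding the key, while modern cryptography rests on operations believed infeasible to reverse without it. This sketch is purely illustrative and offers no real security unless the key is random, secret, as long as the message, and never reused.

```python
import os

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR each byte with the key. Applying the same key twice decrypts,
    because (m ^ k) ^ k == m."""
    return bytes(b ^ k for b, k in zip(data, key))

message = b"attack at dawn"
key = os.urandom(len(message))       # a random key as long as the message

ciphertext = xor_cipher(message, key)
assert xor_cipher(ciphertext, key) == message  # decryption recovers the text
```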
A database is intended to organize, store, and retrieve large amounts of data easily. Digital databases are managed using database management systems to store, create, maintain, and search data, through database models and query languages.
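A concrete miniature example uses SQLite, a database engine included in Python's standard library: data is defined, stored, and retrieved through the SQL query language rather than through manual file handling. The table and its contents are invented for the example.

```python
import sqlite3

# An in-memory database: create, populate, and query a table with SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE papers (title TEXT, author TEXT, year INTEGER)")
conn.executemany(
    "INSERT INTO papers VALUES (?, ?, ?)",
    [("A Mathematical Theory of Communication", "Shannon", 1948),
     ("On Computable Numbers", "Turing", 1936)],
)

# Declarative retrieval: we state *what* we want, not *how* to find it.
for row in conn.execute("SELECT title FROM papers WHERE year < 1940"):
    print(row[0])   # -> On Computable Numbers
conn.close()
```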
Computer graphics is the study of digital visual contents and involves the synthesis and manipulation of image data. The study is connected to many other fields in computer science, including computer vision, image processing, and computational geometry, and is heavily applied in the fields of special effects and video games.
Human–computer interaction research develops theories, principles, and guidelines for user interface designers, so they can create satisfactory user experiences with desktop, laptop, and mobile devices.
Scientific computing (or computational science) is the field of study concerned with constructing mathematical models and quantitative analysis techniques and using computers to analyze and solve scientific problems. A major usage of scientific computing is simulation of various processes, including computational fluid dynamics, physical, electrical, and electronic systems and circuits, as well as societies and social situations (notably war games) along with their habitats, among many others. Modern computers enable optimization of such designs as complete aircraft. Notable in electrical and electronic circuit design are SPICE, as well as software for physical realization of new (or modified) designs. The latter includes essential design software for integrated circuits.
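A small concrete instance of such simulation is numerical integration of a differential equation. The sketch below applies the forward Euler method to exponential decay, dy/dt = -k*y; the constants and step size are arbitrary illustration values.

```python
def euler_decay(y0, k, dt, steps):
    """Integrate dy/dt = -k*y with the forward Euler method:
    y_{n+1} = y_n + dt * f(y_n)."""
    y = y0
    history = [y]
    for _ in range(steps):
        y += dt * (-k * y)
        history.append(y)
    return history

# With k=1, the exact solution is y(t) = exp(-t); Euler approaches it as dt -> 0.
trajectory = euler_decay(y0=1.0, k=1.0, dt=0.01, steps=100)
print(trajectory[-1])   # ~0.366, close to exp(-1) = 0.3679...
```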
Artificial intelligence (AI) aims to synthesize goal-oriented processes such as problem-solving, decision-making, environmental adaptation, learning, and communication, as found in humans and animals. From its origins in cybernetics and in the Dartmouth Conference (1956), artificial intelligence research has been necessarily cross-disciplinary, drawing on areas of expertise such as applied mathematics, symbolic logic, semiotics, electrical engineering, philosophy of mind, neurophysiology, and social intelligence. AI is associated in the popular mind with robotic development, but its main field of practical application has been as an embedded component in areas of software development that require computational understanding. The starting point in the late 1940s was Alan Turing's question "Can computers think?", and the question remains effectively unanswered, although the Turing test is still used to assess computer output on the scale of human intelligence. But the automation of evaluative and predictive tasks has been increasingly successful as a substitute for human monitoring and intervention in domains of computer application involving complex real-world data.
Software engineering is the study of designing, implementing, and modifying software in order to ensure it is of high quality, affordable, maintainable, and fast to build. It is a systematic approach to software design, involving the application of engineering practices to software. Software engineering deals with the organizing and analyzing of software; it concerns not only the creation of new software but also its internal arrangement and maintenance.
The philosopher of computing Bill Rapaport noted three "Great Insights of Computer Science": that anything representable at all can be represented with only two symbols (an insight credited variously to Leibniz, Boole, Turing, Shannon, and Morse); that only a handful of primitive actions are needed for a machine to carry out any computation (Turing); and that only three control structures, sequence, selection, and repetition, suffice to combine those actions (Böhm and Jacopini).
Programming languages can be used to accomplish different tasks in different ways. Common programming paradigms include imperative, object-oriented, functional, and logic programming.
Many languages offer support for multiple paradigms, making the distinction more a matter of style than of technical capabilities.
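As an illustration, the same computation, summing the squares of the even numbers below ten, can be phrased imperatively (explicit state and loops), functionally (composed expressions without mutation), or in an object-oriented style (behavior bundled with data).

```python
# Imperative style: explicit mutable state, step by step.
total = 0
for n in range(10):
    if n % 2 == 0:
        total += n * n

# Functional style: no mutation, just composed expressions.
functional_total = sum(n * n for n in range(10) if n % 2 == 0)

# Object-oriented style: behavior bundled with data.
class SquareSummer:
    def __init__(self, numbers):
        self.numbers = numbers
    def sum_even_squares(self):
        return sum(n * n for n in self.numbers if n % 2 == 0)

assert total == functional_total == SquareSummer(range(10)).sum_even_squares() == 120
```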
Conferences are important events for computer science research. During these conferences, researchers from the public and private sectors present their recent work and meet. Unlike in most other academic fields, in computer science the prestige of conference papers is greater than that of journal publications. One proposed explanation for this is that the rapid development of this relatively new field requires fast review and distribution of results, a task better handled by conferences than by journals.
Computer Science, known by its near synonyms Computing, Computer Studies, Information Technology (IT) and Information and Computing Technology (ICT), has been taught in UK schools since the days of batch processing, mark-sense cards and paper tape, but usually to a select few students. In 1981, the BBC produced a micro-computer and classroom network, and Computer Studies became common for GCE O level students (ages 11–16), with Computer Science taught to A level students. Its importance was recognised, and it became a compulsory part of the National Curriculum for Key Stages 3 and 4. In September 2014 it became an entitlement for all pupils over the age of 4.
In the US, with 14,000 school districts deciding the curriculum, provision was fractured. According to a 2010 report by the Association for Computing Machinery (ACM) and Computer Science Teachers Association (CSTA), only 14 out of 50 states have adopted significant education standards for high school computer science.
Israel, New Zealand, and South Korea have included computer science in their national secondary education curricula, and several others are following. | https://en.wikipedia.org/wiki?curid=5323 |
Creationism
Creationism is the religious belief that nature, and aspects such as the universe, Earth, life, and humans, originated with supernatural acts of divine creation.
In its broadest sense, creationism includes a continuum of religious views, which vary in their acceptance or rejection of scientific explanations such as evolution that describe the origin and development of natural phenomena.
The term "creationism" most often refers to belief in special creation; the claim that the universe and lifeforms were created as they exist today by divine action, and that the only true explanations are those which are compatible with a Christian fundamentalist literal interpretation of the creation myths found in the Bible's Genesis creation narrative. Since the 1970s, the commonest form of this has been Young Earth Creationism which posits special creation of the universe and lifeforms within the last 10,000 years on the basis of Flood geology, and promotes pseudoscientific creation science. From the 18th century onward, Old Earth Creationism accepted geological time harmonized with Genesis through gap or day-age theory, while supporting anti-evolution. Modern old-Earth creationists support progressive creationism and continue to reject evolutionary explanations. Following political controversy, creation science was reformulated as intelligent design and neo-creationism.
Mainline Protestants and the Catholic Church reconcile modern science with their faith in Creation through forms of theistic evolution which hold that God purposefully created through the laws of nature, and accept evolution. Some groups call their belief evolutionary creationism.
Less prominently, there are also members of the Islamic and Hindu faiths who are creationists.
Use of the term "creationist" in this context dates back to Charles Darwin's unpublished 1842 sketch draft for what became "On the Origin of Species", and he used the term later in letters to colleagues. | https://en.wikipedia.org/wiki?curid=5326 |
History of Chad
Chad, officially the Republic of Chad, is a landlocked country in north-central Africa. It borders Libya to the north, Sudan to the east, the Central African Republic to the south, Cameroon and Nigeria to the southwest, and Niger to the west. Due to its distance from the sea and its largely desert climate, the country is sometimes referred to as the "Dead Heart of Africa".
The territory now known as Chad possesses some of the richest archaeological sites in Africa. In 2002, Michel Brunet found a hominid skull in Borkou that is more than 7 million years old, the oldest discovered anywhere in the world; it has been given the name Sahelanthropus tchadensis. In 1996 Michel Brunet had unearthed a hominid jaw which he named Australopithecus bahrelghazali, and unofficially dubbed Abel. Using beryllium-based radiometric dating, it was dated to circa 3.6 million years ago.
During the 7th millennium BC, the northern half of Chad was part of a broad expanse of land, stretching from the Indus River in the east to the Atlantic Ocean in the west, in which ecological conditions favored early human settlement. Rock art of the "Round Head" style, found in the Ennedi region, has been dated to before the 7th millennium BC and, because of the tools with which the rocks were carved and the scenes they depict, may represent the oldest evidence in the Sahara of Neolithic industries. Many of the pottery-making and Neolithic activities in Ennedi date back further than any of those of the Nile Valley to the east.
In the prehistoric period, Chad was much wetter than it is today, as evidenced by large game animals depicted in rock paintings in the Tibesti and Borkou regions.
Recent linguistic research suggests that all of Africa's major language groupings south of the Sahara Desert (except Khoisan, which is not considered a valid genetic grouping anyway), i.e., the Afro-Asiatic, Nilo-Saharan and Niger–Congo phyla, originated in prehistoric times in a narrow band between Lake Chad and the Nile Valley. The origins of Chad's peoples, however, remain unclear. Several of the proven archaeological sites have been only partially studied, and other sites of great potential have yet to be mapped.
At the end of the 1st millennium AD, the formation of states began across central Chad in the sahelian zone between the desert and the savanna. For almost the next 1,000 years, these states, their relations with each other, and their effects on the peoples who lived in stateless societies along their peripheries dominated Chad's political history. Recent research suggests that indigenous Africans founded most of these states, not migrating Arabic-speaking groups, as was believed previously. Nonetheless, immigrants, Arabic-speaking or otherwise, played a significant role, along with Islam, in the formation and early evolution of these states.
Most states began as kingdoms, in which the king was considered divine and endowed with temporal and spiritual powers. All states were militaristic (or they did not survive long), but none was able to expand far into southern Chad, where forests and the tsetse fly complicated the use of cavalry. Control over the trans-Saharan trade routes that passed through the region formed the economic basis of these kingdoms. Although many states rose and fell, the most important and durable of the empires were Kanem-Bornu, Baguirmi, and Ouaddai, according to most written sources (mainly court chronicles and writings of Arab traders and travelers).
The Kanem Empire originated in the 9th century AD to the northeast of Lake Chad. Historians agree that the leaders of the new state were ancestors of the Kanembu people. Toward the end of the 11th century the Sayfawa king (or "mai", the title of the Sayfawa rulers) Hummay converted to Islam. In the following century the Sayfawa rulers expanded southward into Kanem, where their first capital, Njimi, was to rise. Kanem's expansion peaked during the long and energetic reign of Mai Dunama Dabbalemi (c. 1221–1259).
By the end of the 14th century, internal struggles and external attacks had torn Kanem apart. Finally, around 1396 the Bulala invaders forced "Mai" Umar Idrismi to abandon Njimi and move the Kanembu people to Bornu on the western edge of Lake Chad. Over time, the intermarriage of the Kanembu and Bornu peoples created a new people and language, the Kanuri, and founded a new capital, Ngazargamu.
Kanem-Bornu peaked during the reign of the outstanding statesman "Mai" Idris Aluma (c. 1571–1603). Aluma is remembered for his military skills, administrative reforms, and Islamic piety. The administrative reforms and military brilliance of Aluma sustained the empire until the mid-17th century, when its power began to fade. By the early 19th century, Kanem-Bornu was clearly an empire in decline, and in 1808 Fulani warriors conquered Ngazargamu. Bornu survived, but the Sayfawa dynasty ended in 1846 and the Empire itself fell in 1893.
In addition to Kanem-Bornu, two other states in the region, Baguirmi and Ouaddai, achieved historical prominence. Baguirmi emerged to the southeast of Kanem-Bornu in the 16th century. Islam was adopted, and the state became a sultanate. Absorbed into Kanem-Bornu, Baguirmi broke free later in the 17th century, only to be returned to tributary status in the mid-18th century. Early in the 19th century, Baguirmi fell into decay and was threatened militarily by the nearby kingdom of Ouaddai. Although Baguirmi resisted, it accepted tributary status in order to obtain help from Ouaddai in putting down internal dissension. When the capital was burned in 1893, the sultan sought and received protectorate status from the French.
Located to the northeast of Baguirmi, Ouaddai was a non-Muslim kingdom that emerged in the 16th century as an offshoot of the state of Darfur (in present-day Sudan). Early in the 17th century, groups in the region rallied to Abd al-Karim, who overthrew the ruling Tunjur group, transforming Ouaddai into an Islamic sultanate. During much of the 18th century, Ouaddai resisted reincorporation into Darfur.
In about 1804, under the rule of Sabun, the sultanate began to expand its power. A new trade route north was discovered, and Sabun outfitted royal caravans to take advantage of it. He began minting his own coinage and imported chain mail, firearms, and military advisers from North Africa. Sabun's successors were less able than he, and Darfur took advantage of a disputed political succession in 1838 to put its own candidate in power. This tactic backfired when Darfur's choice, Muhammad Sharif, rejected Darfur and asserted his own authority. In doing so, he gained acceptance from Ouaddai's various factions and went on to become Ouaddai's ablest ruler. Sharif eventually established Ouaddai's hegemony over Baguirmi and kingdoms as far away as the Chari River. The Ouaddai opposed French domination until well into the 20th century.
The French first penetrated Chad in 1891, establishing their authority through military expeditions primarily against the Muslim kingdoms. The decisive colonial battle for Chad was fought on April 22, 1900 at the Battle of Kousséri between forces of French Major Amédée-François Lamy and forces of the Sudanese warlord Rabih az-Zubayr. Both leaders were killed in the battle.
In 1905, administrative responsibility for Chad was placed under a governor-general stationed at Brazzaville, capital of French Equatorial Africa (AEF). Chad did not have a separate colonial status until 1920, when it was placed under a lieutenant-governor stationed in Fort-Lamy (today N'Djamena).
Two fundamental themes dominated Chad's colonial experience with the French: an absence of policies designed to unify the territory and an exceptionally slow pace of modernization. In the French scale of priorities, the colony of Chad ranked near the bottom, and the French came to perceive Chad primarily as a source of raw cotton and untrained labour to be used in the more productive colonies to the south.
Throughout the colonial period, large areas of Chad were never governed effectively: in the huge BET Prefecture, the handful of French military administrators usually left the people alone, and in central Chad, French rule was only slightly more substantive. In reality, France managed to govern effectively only the south.
During World War II, Chad was the first French colony to rejoin the Allies (August 26, 1940), after the defeat of France by Germany. Under the administration of Félix Éboué, France's first black colonial governor, a military column, commanded by Colonel Philippe Leclerc de Hauteclocque, and including two battalions of Sara troops, moved north from N'Djamena (then Fort Lamy) to engage Axis forces in Libya, where, in partnership with the British Army's Long Range Desert Group, they captured Kufra. On 21 January 1942, N'Djamena was bombed by a German aircraft.
After the war ended, local parties started to develop in Chad. The first to be born was the radical Chadian Progressive Party (PPT) in February 1947, initially headed by Panamanian-born Gabriel Lisette, but from 1959 headed by François Tombalbaye. The more conservative Chadian Democratic Union (UDT) was founded in November 1947 and represented French commercial interests and a bloc of traditional leaders composed primarily of Muslim and Ouaddaïan nobility. The confrontation between the PPT and UDT was more than simply ideological; it represented different regional identities, with the PPT representing the Christian and animist south and the UDT the Islamic north.
The PPT won the May 1957 pre-independence elections thanks to a greatly expanded franchise, and Lisette led the government of the Territorial Assembly until he lost a confidence vote on 11 February 1959. After a referendum on territorial autonomy on 28 September 1958, French Equatorial Africa was dissolved, and its four constituent states – Gabon, Congo (Brazzaville), the Central African Republic, and Chad became autonomous members of the French Community from 28 November 1958. Following Lisette's fall in February 1959 the opposition leaders Gontchome Sahoulba and Ahmed Koulamallah could not form a stable government, so the PPT was again asked to form an administration - which it did under the leadership of François Tombalbaye on 26 March 1959. On 12 July 1960 France agreed to Chad becoming fully independent. On 11 August 1960, Chad became an independent country and François Tombalbaye became its first president.
One of the most prominent aspects of Tombalbaye's rule was his authoritarianism and distrust of democracy. As early as January 1962 he banned all political parties except his own PPT, and immediately began concentrating all power in his own hands. His treatment of opponents, real or imagined, was extremely harsh, filling the prisons with thousands of political prisoners.
Even worse was his constant discrimination against the central and northern regions of Chad, where the southern Chadian administrators came to be perceived as arrogant and incompetent. This resentment finally exploded in a tax revolt on November 1, 1965, in the Guéra Prefecture, causing 500 deaths. The following year saw the birth in Sudan of the National Liberation Front of Chad (FROLINAT), created to militarily oust Tombalbaye and end Southern dominance. It was the start of a bloody civil war.
Tombalbaye resorted to calling in French troops; while moderately successful, they were not fully able to quell the insurgency. More fortunate was his decision to break with the French and seek friendly ties with the Libyan leader Muammar Gaddafi, taking away the rebels' principal source of supplies.
Although he had reported some success against the rebels, Tombalbaye started behaving more and more irrationally and brutally, continuously eroding his support among the southern elites, who dominated all key positions in the army, the civil service and the ruling party. As a consequence, on April 13, 1975, several units of N'Djamena's gendarmerie killed Tombalbaye during a coup.
The coup d'état that terminated Tombalbaye's government received an enthusiastic response in N'Djamena. The southerner General Félix Malloum emerged early as the chairman of the new "junta".
The new military leaders were unable to retain for long the popularity they had gained by overthrowing Tombalbaye. Malloum proved unable to cope with the FROLINAT and in the end decided that his only chance was to co-opt some of the rebels: in 1978 he allied himself with the insurgent leader Hissène Habré, who entered the government as prime minister.
Internal dissent within the government led Prime Minister Habré to send his forces against Malloum's national army in the capital in February 1979. Malloum was ousted from the presidency, but the resulting civil war amongst the 11 emergent factions was so widespread that it rendered the central government largely irrelevant. At that point, other African governments decided to intervene.
A series of four international conferences held first under Nigerian and then Organization of African Unity (OAU) sponsorship attempted to bring the Chadian factions together. At the fourth conference, held in Lagos, Nigeria, in August 1979, the Lagos Accord was signed. This accord established a transitional government pending national elections. In November 1979, the Transitional Government of National Unity (GUNT) was created with a mandate to govern for 18 months. Goukouni Oueddei, a northerner, was named president; Colonel Kamougué, a southerner, Vice President; and Habré, Minister of Defense. This coalition proved fragile; in January 1980, fighting broke out again between Goukouni's and Habré's forces. With assistance from Libya, Goukouni regained control of the capital and other urban centers by year's end. However, Goukouni's January 1981 statement that Chad and Libya had agreed to work for the realization of complete unity between the two countries generated intense international pressure, and Goukouni subsequently called for the complete withdrawal of external forces.
Libya's partial withdrawal to the Aozou Strip in northern Chad cleared the way for Habré's forces to enter N'Djamena in June. French troops and an OAU peacekeeping force of 3,500 Nigerian, Senegalese, and Zairian troops (partially funded by the United States) remained neutral during the conflict.
Habré continued to face armed opposition on various fronts, and was brutal in his repression of suspected opponents, massacring and torturing many during his rule. In the summer of 1983, GUNT forces launched an offensive against government positions in northern and eastern Chad with heavy Libyan support. In response to Libya's direct intervention, French and Zairian forces intervened to defend Habré, pushing Libyan and rebel forces north of the 16th parallel. In September 1984, the French and the Libyan governments announced an agreement for the mutual withdrawal of their forces from Chad. By the end of the year, all French and Zairian troops were withdrawn. Libya did not honor the withdrawal accord, and its forces continued to occupy the northern third of Chad.
Rebel commando groups (Codos) in southern Chad were broken up by government massacres in 1984. In 1985 Habré briefly reconciled with some of his opponents, including the Democratic Front of Chad (FDT) and the Coordinating Action Committee of the Democratic Revolutionary Council. Goukouni also began to rally toward Habré, and with his support Habré successfully expelled Libyan forces from most of Chadian territory. A cease-fire between Chad and Libya held from 1987 to 1988, and negotiations over the next several years led to the 1994 International Court of Justice decision granting Chad sovereignty over the Aouzou strip, effectively ending Libyan occupation.
However, rivalry between Hadjerai, Zaghawa and Gorane groups within the government grew in the late 1980s. In April 1989, Idriss Déby, one of Habré's leading generals and a Zaghawa, defected and fled to Darfur in Sudan, from which he mounted a Zaghawa-supported series of attacks on Habré (a Gorane). In December 1990, with Libyan assistance and no opposition from French troops stationed in Chad, Déby's forces successfully marched on N'Djamena. After 3 months of provisional government, Déby's Patriotic Salvation Movement (MPS) approved a national charter on February 28, 1991, with Déby as president.
During the next two years, Déby faced at least two coup attempts. Government forces clashed violently with rebel forces, including the Movement for Democracy and Development, MDD, National Revival Committee for Peace and Democracy (CSNPD), Chadian National Front (FNT) and the Western Armed Forces (FAO), near Lake Chad and in southern regions of the country. Earlier French demands for the country to hold a National Conference resulted in the gathering of 750 delegates representing political parties (which were legalized in 1992), the government, trade unions and the army to discuss the creation of a pluralist democratic regime.
However, unrest continued, sparked in part by large-scale killings of civilians in southern Chad. The CSNPD, led by Kette Moise, and other southern groups entered into a peace agreement with government forces in 1994, which later broke down. Two new groups, the Armed Forces for a Federal Republic (FARF) led by former Kette ally Laokein Barde and the Democratic Front for Renewal (FDR), and a reformulated MDD clashed with government forces from 1994 to 1995.
Talks with political opponents in early 1996 did not go well, but Déby announced his intent to hold presidential elections in June. Déby won the country's first multi-party presidential elections with support in the second round from opposition leader Kebzabo, defeating General Kamougue (leader of the 1975 coup against Tombalbaye). Déby's MPS party won 63 of 125 seats in the January 1997 legislative elections. International observers noted numerous serious irregularities in presidential and legislative election proceedings.
By mid-1997 the government signed peace deals with FARF and the MDD leadership and succeeded in cutting off the groups from their rear bases in the Central African Republic and Cameroon. Agreements also were struck with rebels from the National Front of Chad (FNT) and Movement for Social Justice and Democracy in October 1997. However, peace was short-lived, as FARF rebels clashed with government soldiers, finally surrendering to government forces in May 1998. Barde was killed in the fighting, as were hundreds of other southerners, most civilians.
Since October 1998, Chadian Movement for Justice and Democracy (MDJT) rebels, led by Youssuf Togoimi until his death in September 2002, have skirmished with government troops in the Tibesti region, resulting in hundreds of civilian, government, and rebel casualties, but little ground won or lost. No active armed opposition has emerged in other parts of Chad, although Kette Moise, following senior postings at the Ministry of Interior, mounted a small-scale local operation near Moundou which was quickly and violently suppressed by government forces in late 2000.
Déby, in the mid-1990s, gradually restored basic functions of government and entered into agreements with the World Bank and IMF to carry out substantial economic reforms. Oil exploitation in the southern Doba region began in June 2000, with World Bank Board approval to finance a small portion of a project, the Chad-Cameroon Petroleum Development Project, aimed at transport of Chadian crude through a 1000-km buried pipeline through Cameroon to the Gulf of Guinea. The project established unique mechanisms for World Bank, private sector, government, and civil society collaboration to guarantee that future oil revenues benefit local populations and result in poverty alleviation. Success of the project depended on multiple monitoring efforts to ensure that all parties keep their commitments. These "unique" mechanisms for monitoring and revenue management have faced intense criticism from the beginning. Debt relief was accorded to Chad in May 2001.
Déby won a flawed 63% first-round victory in May 2001 presidential elections after legislative elections were postponed until spring 2002. Six opposition leaders who accused the government of fraud were arrested (twice), and one opposition party activist was killed following the announcement of election results. However, despite claims of government corruption, favoritism toward Zaghawas, and abuses by the security forces, opposition party and labor union calls for general strikes and more active demonstrations against the government have been unsuccessful. Despite movement toward democratic reform, power remains in the hands of a northern ethnic oligarchy.
In 2003, Chad began receiving refugees from the Darfur region of western Sudan. More than 200,000 refugees fled the fighting between two rebel groups and government-supported militias known as Janjaweed. A number of border incidents led to the Chadian-Sudanese War.
The war started on December 23, 2005, when the government of Chad declared a state of war with Sudan and called for the citizens of Chad to mobilize themselves against the "common enemy," which the Chadian government identifies as Rally for Democracy and Liberty (RDL) militants (Chadian rebels backed by the Sudanese government) and Sudanese militiamen. Militants have attacked villages and towns in eastern Chad, stealing cattle, murdering citizens, and burning houses. Over 200,000 refugees from the Darfur region of northwestern Sudan currently claim asylum in eastern Chad. Chadian president Idriss Déby accuses Sudanese President Omar Hasan Ahmad al-Bashir of trying to "destabilize our country, to drive our people into misery, to create disorder and export the war from Darfur to Chad."
An attack on the Chadian town of Adre near the Sudanese border led to the deaths of either one hundred rebels (as most news sources reported) or three hundred (as CNN reported). The Sudanese government was blamed for the attack, which was the second in the region in three days, but Sudanese foreign ministry spokesman Jamal Mohammed Ibrahim denied any Sudanese involvement: "We are not for any escalation with Chad. We technically deny involvement in Chadian internal affairs." This attack was the final straw that led to the declaration of war by Chad and the alleged deployment of the Chadian air force into Sudanese airspace, which the Chadian government denies.
An attack on N'Djamena was defeated on April 13, 2006 in the Battle of N'Djamena. The President stated on national radio that the situation was under control, but residents, diplomats and journalists reportedly heard gunfire.
On November 25, 2006, rebels captured the eastern town of Abeche, capital of the Ouaddaï Region and center for humanitarian aid to the Darfur region in Sudan. On the same day, a separate rebel group Rally of Democratic Forces had captured Biltine. On November 26, 2006, the Chadian government claimed to have recaptured both towns, although rebels still claimed control of Biltine. Government buildings and humanitarian aid offices in Abeche were said to have been looted. The Chadian government denied a warning issued by the French Embassy in N'Djamena that a group of rebels was making its way through the Batha Prefecture in central Chad. Chad insists that both rebel groups are supported by the Sudanese government.
Nearly 100 children at the center of an international scandal that left them stranded at an orphanage in remote eastern Chad returned home on March 14, 2008, after nearly five months. The 97 children were taken from their homes in October 2007 by a then-obscure French charity, Zoé's Ark, which claimed they were orphans from Sudan's war-torn Darfur region.
On Friday, February 1, 2008, rebels of an opposition alliance led by Mahamat Nouri, a former defense minister, and Timane Erdimi, a nephew of Idriss Déby who had been his chief of staff, attacked the Chadian capital of N'Djamena, even surrounding the Presidential Palace. Idriss Déby and government troops fought back. French forces flew in ammunition for Chadian government troops but took no active part in the fighting. The UN said that up to 20,000 people left the region, taking refuge in nearby Cameroon and Nigeria. Hundreds of people were killed, mostly civilians. The rebels accused Déby of corruption and embezzling millions in oil revenue. While many Chadians may share that assessment, the uprising appears to be a power struggle within the elite that has long controlled Chad. The French government believes that the opposition has regrouped east of the capital. Déby has blamed Sudan for the current unrest in Chad. | https://en.wikipedia.org/wiki?curid=5329 |
Geography of Chad
Chad is one of the 48 landlocked countries in the world and is located in North Central Africa, measuring , nearly twice the size of France and slightly more than three times the size of California. Most of its ethnically and linguistically diverse population lives in the south, with densities ranging from 54 persons per square kilometer in the Logone River basin to 0.1 persons in the northern B.E.T. (Borkou-Ennedi-Tibesti) desert region, which itself is larger than France. The capital city of N'Djaména, situated at the confluence of the Chari and Logone Rivers, is cosmopolitan in nature, with a current population in excess of 700,000 people.
Chad has four bioclimatic zones. The northernmost Saharan zone averages less than of rainfall annually. The sparse human population is largely nomadic, with some livestock, mostly small ruminants and camels. The central Sahelian zone receives between rainfall and has vegetation ranging from grass/shrub steppe to thorny, open savanna. The southern zone, often referred to as the Sudan zone, receives between , with woodland savanna and deciduous forests for vegetation. Rainfall in the Guinea zone, located in Chad's southwestern tip, ranges between .
The country's topography is generally flat, with the elevation gradually rising as one moves north and east away from Lake Chad. The highest point in Chad is Emi Koussi, a mountain that rises in the northern Tibesti Mountains. The Ennedi Plateau and the Ouaddaï highlands in the east complete the image of a gradually sloping basin, which descends towards Lake Chad. There are also central highlands in the Guera region rising to .
Lake Chad is the second largest lake in west Africa and is one of the most important wetlands on the continent. Home to 120 species of fish and at least that many species of birds, the lake has shrunk dramatically in the last four decades due to increased water usage from an expanding population and low rainfall. Bordered by Chad, Niger, Nigeria, and Cameroon, Lake Chad currently covers only 1,350 square kilometers, down from 25,000 square kilometers in 1963. The Chari and Logone Rivers, both of which originate in the Central African Republic and flow northward, provide most of the surface water entering Lake Chad.
Located in north-central Africa, Chad stretches for about 1,800 kilometers from its northernmost point to its southern boundary. Except in the far northwest and south, where its borders converge, Chad's average width is about 800 kilometers. Its area of 1,284,000 square kilometers is roughly equal to the combined areas of Idaho, Wyoming, Utah, Nevada, and Arizona. Chad's neighbors include Libya to the north, Niger and Nigeria to the west, Sudan to the east, Central African Republic to the south, and Cameroon to the southwest.
Chad exhibits two striking geographical characteristics. First, the country is landlocked. N'Djamena, the capital, is located more than 1,100 kilometers northeast of the Atlantic Ocean; Abéché, a major city in the east, lies 2,650 kilometers from the Red Sea; and Faya-Largeau, a much smaller but strategically important center in the north, is in the middle of the Sahara Desert, 1,550 kilometers from the Mediterranean Sea. These vast distances from the sea have had a profound impact on Chad's historical and contemporary development.
The second noteworthy characteristic is that the country borders on very different parts of the African continent: North Africa, with its Islamic culture and economic orientation toward the Mediterranean Basin; West Africa, with its diverse religions and cultures and its history of highly developed states and regional economies; Northeast Africa, oriented toward the Nile Valley and the Red Sea region; and Central or Equatorial Africa, some of whose people have retained classical African religions while others have adopted Christianity, and whose economies were part of the great Congo River system. Although much of Chad's distinctiveness comes from this diversity of influences, since independence the diversity has also been an obstacle to the creation of a national identity.
Although Chadian society is economically, socially, and culturally fragmented, the country's geography is unified by the Lake Chad Basin. Once a huge inland sea (the Pale-Chadian Sea) whose only remnant is shallow Lake Chad, this vast depression extends west into Nigeria and Niger. The larger, northern portion of the basin is bounded within Chad by the Tibesti Mountains in the northwest, the Ennedi Plateau in the northeast, the Ouaddaï Highlands in the east along the border with Sudan, the Guéra Massif in central Chad, and the Mandara Mountains along Chad's southwestern border with Cameroon. The smaller, southern part of the basin falls almost exclusively in Chad. It is delimited in the north by the Guéra Massif, in the south by highlands 250 kilometers south of the border with Central African Republic, and in the southwest by the Mandara Mountains.
Lake Chad, located in the southwestern part of the basin at an altitude of 282 meters, surprisingly does not mark the basin's lowest point; instead, this is found in the Bodele and Djourab regions in the north-central and northeastern parts of the country, respectively. This oddity arises because the great stationary dunes (ergs) of the Kanem region create a dam, preventing lake waters from flowing to the basin's lowest point. At various times in the past, and as late as the 1870s, the Bahr el Ghazal Depression, which extends from the northeastern part of the lake to the Djourab, acted as an overflow canal; since independence, climatic conditions have made overflows impossible.
North and northeast of Lake Chad, the basin extends for more than 800 kilometers, passing through regions characterized by great rolling dunes separated by very deep depressions. Although vegetation holds the dunes in place in the Kanem region, farther north they are bare and have a fluid, rippling character. From its low point in the Djourab, the basin then rises to the plateaus and peaks of the Tibesti Mountains in the north. The summit of this formation—as well as the highest point in the Sahara Desert—is Emi Koussi, a dormant volcano that reaches 3,414 meters above sea level.
The basin's northeastern limit is the Ennedi Plateau, whose limestone bed rises in steps etched by erosion. East of the lake, the basin rises gradually to the Ouaddaï Highlands, which mark Chad's eastern border and also divide the Chad and Nile watersheds. These highland areas are part of the East Saharan montane xeric woodlands ecoregion.
Southeast of Lake Chad, the regular contours of the terrain are broken by the Guéra Massif, which divides the basin into its northern and southern parts. South of the lake lie the floodplains of the Chari and Logone rivers, much of which are inundated during the rainy season. Farther south, the basin floor slopes upward, forming a series of low sand and clay plateaus, called koros, which eventually climb to 615 meters above sea level. South of the Chadian border, the koros divide the Lake Chad Basin from the Ubangi-Zaire river system.
Permanent streams do not exist in northern or central Chad. Following infrequent rains in the Ennedi Plateau and Ouaddaï Highlands, water may flow through depressions called enneris and wadis. Often the result of flash floods, such streams usually dry out within a few days as the remaining puddles seep into the sandy clay soil. The most important of these streams is the Batha, which in the rainy season carries water west from the Ouaddaï Highlands and the Guéra Massif to Lake Fitri.
Chad's major rivers are the Chari and the Logone and their tributaries, which flow from the southeast into Lake Chad. Both river systems rise in the highlands of Central African Republic and Cameroon, regions that receive more than 1,250 millimeters of rainfall annually. Fed by rivers of Central African Republic, as well as by the Bahr Salamat, Bahr Aouk, and Bahr Sara rivers of southeastern Chad, the Chari River is about 1,200 kilometers long. From its origins near the city of Sarh, the middle course of the Chari makes its way through swampy terrain; the lower Chari is joined by the Logone River near N'Djamena. The Chari's volume varies greatly, from 17 cubic meters per second during the dry season to 340 cubic meters per second during the wettest part of the year.
The Logone River is formed by tributaries flowing from Cameroon and Central African Republic. Both shorter and smaller in volume than the Chari, it flows northeast for 960 kilometers; its volume ranges from five to eighty-five cubic meters per second. At N'Djamena the Logone empties into the Chari, and the combined rivers flow together for thirty kilometers through a large delta and into Lake Chad. At the end of the rainy season in the fall, the river overflows its banks and creates a huge floodplain in the delta.
The seventh largest lake in the world (and the fourth largest in Africa), Lake Chad is located in the sahelian zone, a region just south of the Sahara Desert. The Chari River contributes 95 percent of Lake Chad's water, an average annual volume of 40 billion cubic meters, 95 percent of which is lost to evaporation. The size of the lake is determined by rains in the southern highlands bordering the basin and by temperatures in the Sahel. Fluctuations in both cause the lake to change dramatically in size, from 9,800 square kilometers in the dry season to 25,500 at the end of the rainy season.
Lake Chad also changes greatly in size from one year to another. In 1870 its maximum area was 28,000 square kilometers. The measurement dropped to 12,700 in 1908. In the 1940s and 1950s, the lake remained small, but it grew again to 26,000 square kilometers in 1963. The droughts of the late 1960s, early 1970s, and mid-1980s caused Lake Chad to shrink once again, however. The only other lakes of importance in Chad are Lake Fitri, in Batha Prefecture, and Lake Iro, in the marshy southeast.
The Lake Chad Basin embraces a great range of tropical climates from north to south, although most of these climates tend to be dry. Apart from the far north, most regions are characterized by a cycle of alternating rainy and dry seasons. In any given year, the duration of each season is determined largely by the positions of two great air masses—a maritime mass over the Atlantic Ocean to the southwest and a much drier continental mass.
During the rainy season, winds from the southwest push the moister maritime system north over the African continent where it meets and slips under the continental mass along a front called the "intertropical convergence zone". At the height of the rainy season, the front may reach as far as Kanem Prefecture. By the middle of the dry season, the intertropical convergence zone moves south of Chad, taking the rain with it. This weather system contributes to the formation of three major regions of climate and vegetation.
The Saharan region covers roughly the northern half of the country, including Borkou-Ennedi-Tibesti Prefecture along with the northern parts of Kanem, Batha, and Biltine prefectures. Much of this area receives only traces of rain during the entire year; at Faya Largeau, for example, annual rainfall averages less than . Scattered small oases and occasional wells provide water for a few date palms or small plots of millet and garden crops.
In much of the north, the average daily maximum temperature is about during January, the coolest month of the year, and about during May, the hottest month. On occasion, strong winds from the northeast produce violent sandstorms. In northern Biltine Prefecture, a region called the Mortcha plays a major role in animal husbandry. Dry for nine months of the year, it receives or more of rain, mostly during July and August.
A carpet of green springs from the desert during this brief wet season, attracting herders from throughout the region who come to pasture their cattle and camels. Because very few wells and springs have water throughout the year, the herders leave with the end of the rains, turning over the land to the antelopes, gazelles, and ostriches that can survive with little groundwater. Northern Chad averages over 3500 hours of sunlight per year, the south somewhat less.
The semiarid sahelian zone, or Sahel, forms a belt about wide that runs from Lac and Chari-Baguirmi prefectures eastward through Guéra, Ouaddaï, and northern Salamat prefectures to the Sudanese frontier. The climate in this transition zone between the desert and the southern sudanian zone is divided into a rainy season (from June to September) and a dry period (from October to May).
In the northern Sahel, thorny shrubs and acacia trees grow wild, while date palms, cereals, and garden crops are raised in scattered oases. Outside these settlements, nomads tend their flocks during the rainy season, moving southward as forage and surface water disappear with the onset of the dry part of the year. The central Sahel is characterized by drought-resistant grasses and small woods. Rainfall is more abundant there than in the Saharan region. For example, N'Djamena records a maximum annual average rainfall of , while Ouaddaï Prefecture receives just a bit less.
During the hot season, in April and May, maximum temperatures frequently rise above . In the southern part of the Sahel, rainfall is sufficient to permit crop production on unirrigated land, and millet and sorghum are grown. Agriculture is also common in the marshlands east of Lake Chad and near swamps or wells. Many farmers in the region combine subsistence agriculture with the raising of cattle, sheep, goats, and poultry.
The humid "sudanian" zone includes the Sahel, the southern prefectures of Mayo-Kebbi, Tandjilé, Logone Occidental, Logone Oriental, Moyen-Chari, and southern Salamat. Between April and October, the rainy season brings between of precipitation. Temperatures are high throughout the year. Daytime readings in Moundou, the major city in the southwest, range from in the middle of the cool season in January to about in the hot months of March, April, and May.
The sudanian region is predominantly East Sudanian savanna, or plains covered with a mixture of tropical or subtropical grasses and woodlands. The growth is lush during the rainy season but turns brown and dormant during the five-month dry season between November and March. Over a large part of the region, however, natural vegetation has yielded to agriculture.
On 22 June 2010, the temperature reached in Faya, breaking a record set in 1961 at the same location. Similar temperature rises were also reported in Niger, which was entering a famine situation.
On 26 July the heat reached near-record levels over Chad and Niger.
Area:
"total:"
1.284 million km²
"land:"
1,259,200 km²
"water:"
24,800 km²
Area - comparative:
Canada: smaller than the Northwest Territories
US: slightly more than three times the size of California
Land boundaries:
"total:"
6,406 km
"border countries:"
Cameroon 1,116 km, Central African Republic 1,556 km, Libya 1,050 km, Niger 1,196 km, Nigeria 85 km, Sudan 1,403 km
Coastline:
0 km (landlocked)
Maritime claims:
none (landlocked)
Elevation extremes:
"lowest point:"
Djourab Depression 160 m
"highest point:'
Emi Koussi 3,415 m
Natural resources:
petroleum, uranium, natron, kaolin, fish (Chari River, Logone River), gold, limestone, sand and gravel, salt
Land use:
"arable land:"
3.89%
"permanent crops:"
0.03%
"other:"
96.08% (2012)
Irrigated land:
302.7 km² (2003)
Total renewable water resources:
43 km³ (2011)
Freshwater withdrawal (domestic/industrial/agricultural):
"total:"
0.88 km³/yr (12%/12%/76%)
"per capita:"
84.81 m³/yr (2005)
Natural hazards:
hot, dry, dusty harmattan winds occur in the north; periodic droughts; locust plagues
Environment - current issues:
inadequate supplies of potable water; improper waste disposal in rural areas contributes to soil and water pollution; desertification
See also: 2010 Sahel famine
This is a list of the extreme points of Chad, the points that are farther north, south, east or west than any other location.
"*Note: technically Chad does not have an easternmost point, the eastern-most section of the border being formed by the 24° of longitude" | https://en.wikipedia.org/wiki?curid=5330 |
Demographics of Chad
The people of Chad speak more than 100 different languages and divide themselves into many ethnic groups. However, language and ethnicity are not the same. Moreover, neither element can be tied to a particular physical type.
Although the possession of a common language shows that its speakers have lived together and have a common history, peoples also change languages. This is particularly so in Chad, where the openness of the terrain, marginal rainfall, frequent drought and famine, and low population densities have encouraged physical and linguistic mobility. Slave raids among non-Muslim peoples, internal slave trade, and exports of captives northward from the ninth to the twentieth centuries also have resulted in language changes.
Anthropologists view ethnicity as being more than genetics. Like language, ethnicity implies a shared heritage, partly economic, where people of the same ethnic group may share a livelihood, and partly social, taking the form of shared ways of doing things and organizing relations among individuals and groups. Ethnicity also involves a cultural component made up of shared values and a common worldview. Like language, ethnicity is not immutable. Shared ways of doing things change over time and alter a group's perception of its own identity.
Not only do the social aspects of ethnic identity change but the biological composition (or gene pool) also may change over time. Although most ethnic groups emphasize intermarriage, people are often proscribed from seeking partners among close relatives—a prohibition that promotes biological variation. In all groups, the departure of some individuals or groups and the integration of others also changes the biological component.
The Chadian government has avoided official recognition of ethnicity. With the exception of a few surveys conducted shortly after independence, little data were available on this important aspect of Chadian society. Nonetheless, ethnic identity was a significant component of life in Chad.
The peoples of Chad carry significant ancestry from Eastern, Central, Western, and Northern Africa.
Chad's languages fall into ten major groups, each of which belongs to the Nilo-Saharan, Afro-Asiatic, or Niger–Congo language family. These represent three of the four major language families in Africa; only the Khoisan languages of southern Africa are not represented. The presence of such different languages suggests that the Lake Chad Basin may have been an important point of dispersal in ancient times.
According to United Nations estimates, the total population has grown substantially from only 2 429 000 in 1950. The proportion of children below the age of 15 in 2010 was 45.4%, 51.7% of the population was between 15 and 65 years of age, and 2.9% was 65 years or older.
Registration of vital events in Chad is not complete. The Population Department of the United Nations prepared the following estimates.
Total Fertility Rate (TFR) (Wanted Fertility Rate) and Crude Birth Rate (CBR):
Fertility data as of 2014-2015 (DHS Program):
The separation of religion from social structure in Chad represents a false dichotomy, for they are perceived as two sides of the same coin. Three religious traditions coexist in Chad: classical African religions, Islam, and Christianity. None is monolithic. The first tradition includes a variety of ancestor- and/or place-oriented religions whose expression is highly specific. Islam, although characterized by an orthodox set of beliefs and observances, also is expressed in diverse ways. Christianity came to Chad much more recently, with the arrival of Europeans. Its followers are divided into Roman Catholics and Protestants (including several denominations); as with Chadian Islam, Chadian Christianity retains aspects of pre-Christian religious belief.
The number of followers of each tradition in Chad is unknown. Estimates made in 1962 suggested that 35 percent of Chadians practiced classical African religions, 55 percent were Muslims, and 10 percent were Christians. In the 1970s and 1980s, this distribution undoubtedly changed. Observers report that Islam has spread among the Hajerai and among other non-Muslim populations of the Saharan and sahelian zones. However, the proportion of Muslims may have fallen because the birthrate among the followers of traditional religions and Christians in southern Chad is thought to be higher than that among Muslims. In addition, the upheavals since the mid-1970s have resulted in the departure of some missionaries; whether or not Chadian Christians have been numerous enough and organized enough to have attracted more converts since that time is unknown.
Demographic statistics according to the World Population Review in 2019.
The following demographic statistics are from the CIA World Factbook.
About 5,000 French citizens live in Chad. | https://en.wikipedia.org/wiki?curid=5331
Economy of Chad
The economy of Chad suffers from the landlocked country's geographic remoteness, drought, lack of infrastructure, and political turmoil. About 85% of the population depends on agriculture, including the herding of livestock. Of Africa's Francophone countries, Chad benefited least from the 50% devaluation of their currencies in January 1994. Financial aid from the World Bank, the African Development Bank, and other sources is directed largely at the improvement of agriculture, especially livestock production. Because of lack of financing, the development of oil fields near Doba, originally due to finish in 2000, was delayed until 2003. It was finally developed and is now operated by Exxon Mobil Corporation.
The following table shows the main economic indicators in 1980–2017.
GDP:
purchasing power parity – $28.62 billion (2017 est.)
GDP – real growth rate:
-3.1% (2017 est.)
GDP – per capita:
$2,300 (2017 est.)
Gross national saving:
15.5% of GDP (2017 est.)
GDP – composition by sector:
"agriculture:"
52.3% (2017 est.)
"industry:"
14.7% (2017 est.)
"services:"
33.1% (2017 est.)
Population below poverty line:
46.7% (2011 est.)
Distribution of family income – Gini index:
43.3 (2011 est.)
Inflation rate (consumer prices):
-0.9% (2017 est.)
Labor force:
5.654 million (2017 est.)
Labor force – by occupation:
agriculture 80%, industry and services 20% (2006 est.)
Budget:
"revenues:"
$1.337 billion (2017 est.)
"expenditures:"
$1.481 billion (2017 est.)
Budget surplus (+) or deficit (-):
-1.5% (of GDP) (2017 est.)
Public debt:
52.5% of GDP (2017 est.)
Industries:
oil, cotton textiles, brewing, natron (sodium carbonate), soap, cigarettes, construction materials
Industrial production growth rate:
-4% (2017 est.)
Electrification:
"total population:"
4% (2013)
"urban areas:"
14% (2013)
"rural areas:"
1% (2013)
Electricity – production:
224.3 million kWh (2016 est.)
Electricity – production by source:
"fossil fuel:"
98%
"hydro:"
0%
"nuclear:"
0%
"other renewable:"
3% (2017)
Electricity – consumption:
208.6 million kWh (2016 est.)
Electricity – exports:
0 kWh (2016 est.)
Electricity – imports:
0 kWh (2016 est.)
Agriculture – products:
cotton, sorghum, millet, peanuts, sesame, corn, rice, potatoes, onions, cassava (manioc, tapioca), cattle, sheep, goats, camels
Exports:
$2.464 billion (2017 est.)
Exports – commodities:
oil, livestock, cotton, sesame, gum arabic, shea butter
Exports – partners:
US 38.7%, China 16.6%, Netherlands 15.7%, UAE 12.2%, India 6.3% (2017)
Imports:
$2.16 billion (2017 est.)
Imports – commodities:
machinery and transportation equipment, industrial goods, foodstuffs, textiles
Imports – partners:
China 19.9%, Cameroon 17.2%, France 17%, US 5.4%, India 4.9%, Senegal 4.5% (2017)
Debt – external:
$1.724 billion (31 December 2017 est.)
Reserves of foreign exchange and gold:
$22.9 million (31 December 2017 est.) | https://en.wikipedia.org/wiki?curid=5333 |
Telecommunications in Chad
Telecommunications in Chad include radio, television, fixed and mobile telephones, and the Internet.
Radio stations:
Radios:
1.7 million (1997).
Television stations:
Television sets:
10,000 (1997).
Radio is the most important medium of mass communication. State-run Radiodiffusion Nationale Tchadienne operates national and regional radio stations. Around a dozen private radio stations are on the air, despite high licensing fees, some run by religious or other non-profit groups. The BBC World Service (FM 90.6) and Radio France Internationale (RFI) broadcast in the capital, N'Djamena. The only television station, Tele Tchad, is state-owned.
State control of many broadcasting outlets allows few dissenting views. Journalists are harassed and attacked. On rare occasions journalists are warned in writing by the High Council for Communication to produce more "responsible" journalism or face fines. Some journalists and publishers practice self-censorship. On 10 October 2012, the High Council on Communications issued a formal warning to La Voix du Paysan, claiming that the station's live broadcast on 30 September incited the public to "insurrection against the government." The station had broadcast a sermon by a bishop who criticized the government for allegedly failing to use oil wealth to benefit the region.
Calling code: +235
International call prefix: 00
Main lines:
Mobile cellular:
Telephone system: inadequate system of radiotelephone communication stations with high costs and low telephone density; fixed-line connections for less than 1 per 100 persons coupled with mobile-cellular subscribership base of only about 35 per 100 persons (2011).
Satellite earth stations: 1 Intelsat (Atlantic Ocean) (2011).
Top-level domain: .td
Internet users:
Fixed broadband: 18,000 subscriptions, 132nd in the world; 0.2% of the population, 161st in the world (2012).
Wireless broadband: Unknown (2012).
Internet hosts:
IPv4: 4,096 addresses allocated, less than 0.05% of the world total, 0.4 addresses per 1000 people (2012).
There are no government restrictions on access to the Internet or credible reports that the government monitors e-mail or Internet chat rooms.
The constitution provides for freedom of opinion, expression, and press, but the government does not always respect these rights. Private individuals are generally free to criticize the government without reprisal, but reporters and publishers risk harassment from authorities when publishing critical articles. The 2010 media law abolished prison sentences for defamation and insult, but prohibits "inciting racial, ethnic, or religious hatred," which is punishable by one to two years in prison and a fine of one to three million CFA francs ($2,000 to $6,000). | https://en.wikipedia.org/wiki?curid=5334 |
Transport in Chad
Transport infrastructure within Chad is generally poor, especially in the north and east of the country. River transport is limited to the south-west corner. As of 2011 Chad had no railways, though two lines were planned, from the capital to the Sudanese and Cameroonian borders. Roads are mostly unpaved and many are impassable during the wet season, especially in the southern half of the country. In the north, roads are merely tracks across the desert and land mines continue to present a danger. Draft animals (horses, donkeys and camels) remain important in much of the country.
Fuel supplies can be erratic, even in the south-west of the country, and are expensive. Elsewhere they are practically non-existent.
As of 2011 Chad had no railways. Two lines were planned to Sudan and Cameroon from the capital, with construction expected to start in 2012. No operative lines were listed as of 2019.
As of 2018 Chad had a total of 44,000 km of roads, of which approximately 260 km are paved. Some, but not all, of the roads in the capital N'Djamena are paved. Outside of N'Djamena there is one paved road which runs from Massakory in the north, through N'Djamena, and then south through the cities of Guélengdeng, Bongor, Kélo and Moundou, with a short spur leading in the direction of Kousseri, Cameroon, near N'Djamena. Expansion of the road towards Cameroon through Pala and Léré is reportedly in the preparatory stages.
Most rivers flow only intermittently. On the Chari, between N'Djamena and Lake Chad, transportation is possible all year round. In September and October, the Logone is navigable between N'Djamena and Moundou, and the Chari between N'Djamena and Sarh. Total waterways cover 4,800 km (3,000 mi), of which 2,000 km (1,250 mi) are navigable all year.
As of 2012, the Chari and Logone rivers were listed as navigable only in the wet season (2002 data). Both flow northwards, from the south of Chad, into Lake Chad.
Since 2003, a 1,070 km pipeline has been used to export crude oil from the oil fields around Doba to offshore oil-loading facilities on Cameroon's Atlantic coast at Kribi. | https://en.wikipedia.org/wiki?curid=5335 |
Military of Chad
The military of Chad consists of the National Army (includes Ground Forces, Air Force, and Gendarmerie), Republican Guard, Rapid Intervention Force, Police, and National and Nomadic Guard (GNNT). Currently the main task of the Chadian military is to combat the various rebel forces inside the country.
From independence through the period of the presidency of Félix Malloum (1975–79), the official national army was known as the Chadian Armed Forces (Forces Armées Tchadiennes—FAT). Composed mainly of soldiers from southern Chad, FAT had its roots in the army recruited by France and had military traditions dating back to World War I. FAT lost its status as the legal state army when Malloum's civil and military administration disintegrated in 1979. Although it remained a distinct military body for several years, FAT was eventually reduced to the status of a regional army representing the south.
After Habré consolidated his authority and assumed the presidency in 1982, his victorious army, the Armed Forces of the North (Forces Armées du Nord—FAN), became the nucleus of a new national army. The force was officially constituted in January 1983, when the various pro-Habré contingents were merged and renamed the Chadian National Armed Forces (Forces Armées Nationales Tchadiennes—FANT).
The military of Chad was dominated by members of the Toubou, Zaghawa, Kanembou, Hadjerai, and Massa ethnic groups during the presidency of Hissène Habré. In 1989 the current Chadian president, Idriss Déby, revolted and fled to Sudan, taking with him many Zaghawa and Hadjerai soldiers.
Chad's armed forces numbered about 36,000 at the end of the Habré regime, but swelled to an estimated 50,000 in the early days of Déby's rule. With French support, a reorganization of the armed forces was initiated early in 1991 with the goal of reducing its numbers and making its ethnic composition reflective of the country as a whole. Neither of these goals was achieved, and the military is still dominated by the Zaghawa.
In 2004, the government discovered that many of the soldiers it was paying did not exist and that there were only about 19,000 soldiers in the army, as opposed to the 24,000 that had been previously believed. Government crackdowns against the practice are thought to have been a factor in a failed military mutiny in May 2004.
The current conflict in which the Chadian military is involved is the civil war against Sudanese-backed rebels. Chad has successfully managed to repel the rebel movements, though recently with some losses (see Battle of N'Djamena (2008)). The army uses its artillery systems and tanks, but well-equipped insurgents have probably managed to destroy over 20 of Chad's 60 T-55 tanks, and probably shot down a Mi-24 Hind gunship that bombed enemy positions near the border with Sudan. In November 2006 Libya supplied Chad with four Aermacchi SF.260W light attack planes. The Chadian Air Force uses them to strike enemy positions, but one was shot down by rebels. During the last battle of N'Djamena, gunships and tanks were put to good use, pushing armed militia forces back from the Presidential Palace. The battle impacted the highest levels of the army leadership, as Daoud Soumain, its Chief of Staff, was killed.
On 23 March 2020, a Chadian army base was ambushed by fighters of the jihadist insurgent group Boko Haram. The army lost 92 servicemen in one day. In response, President Déby launched an operation dubbed "Wrath of Boma". International security experts still credit the Chadian army as one of Africa's best trained. However, according to the Canadian counter-terrorism expert St-Pierre, numerous external operations and rising insecurity in the neighboring countries have recently overstretched the capacities of the Chadian armed forces.
The CIA World Factbook estimated the military budget of Chad to be 4.2% of GDP as of 2006. Given the country's GDP at the time ($7.095 billion), military spending was estimated at about $300 million. This estimate dropped after the end of the civil war in Chad (2005–2010) to 2.0% of GDP, as estimated by the World Bank for the year 2011. No more recent estimates are available for 2012 or 2013.
Chad participated in a peace mission under the authority of the African Union in the neighboring Central African Republic to try to pacify the recent conflict there, but chose to withdraw after its soldiers were accused of shooting into a marketplace, unprovoked, according to the BBC.
"Currently, Cameroon has an ongoing military-military relationship with Chad, which includes associates training for Chadian military in Cameroon. There are four brigade Chado-Cameroonian in January 2012. Cameroon and Chad are developing excellent relations". | https://en.wikipedia.org/wiki?curid=5336 |
Colloid
In chemistry, a colloid is a mixture in which one substance of microscopically dispersed insoluble or soluble particles is suspended throughout another substance. Sometimes the dispersed substance alone is called the colloid; the term colloidal suspension refers unambiguously to the overall mixture (although a narrower sense of the word "suspension" is distinguished from colloids by larger particle size). Unlike a solution, whose solute and solvent constitute only one phase, a colloid has a dispersed phase (the suspended particles) and a continuous phase (the medium of suspension) that arise by phase separation. To qualify as a colloid, the mixture must be one that does not settle or would take a very long time to settle appreciably.
The dispersed-phase particles have a diameter between approximately 1 and 1000 nanometers. Such particles are normally easily visible in an optical microscope, although at the smaller end of the size range an ultramicroscope or an electron microscope may be required. Homogeneous mixtures with a dispersed phase in this size range may be called "colloidal aerosols", "colloidal emulsions", "colloidal foams", "colloidal dispersions", or "hydrosols". The dispersed-phase particles or droplets are affected largely by the surface chemistry present in the colloid.
Some colloids are translucent because of the Tyndall effect, which is the scattering of light by particles in the colloid. Other colloids may be opaque or have a slight color. The cytoplasm of living cells is an example of a colloid, containing many types of biomolecular condensate.
Colloidal suspensions are the subject of interface and colloid science. This field of study was introduced in 1845 by Italian chemist Francesco Selmi and further investigated since 1861 by Scottish scientist Thomas Graham.
Because the size of the dispersed phase may be difficult to measure, and because colloids have the appearance of solutions, colloids are sometimes identified and characterized by their physico-chemical and transport properties. For example, if a colloid consists of a solid phase dispersed in a liquid, the solid particles will not diffuse through a membrane, whereas with a true solution the dissolved ions or molecules will diffuse through a membrane. Because of the size exclusion, the colloidal particles are unable to pass through the pores of an ultrafiltration membrane with a size smaller than their own dimension. The smaller the size of the pore of the ultrafiltration membrane, the lower the concentration of the dispersed colloidal particles remaining in the ultrafiltered liquid. The measured value of the concentration of a truly dissolved species will thus depend on the experimental conditions applied to separate it from the colloidal particles also dispersed in the liquid. This is particularly important for solubility studies of readily hydrolyzed species such as Al, Eu, Am, Cm, or organic matter complexing these species.
Colloids can be classified as follows:
Based on the nature of interaction between the dispersed phase and the dispersion medium, colloids can be classified as: Hydrophilic colloids: The colloid particles are attracted toward water. They are also called reversible sols. Hydrophobic colloids: These are opposite in nature to hydrophilic colloids. The colloid particles are repelled by water. They are also called irreversible sols.
In some cases, a colloid suspension can be considered a homogeneous mixture. This is because the distinction between "dissolved" and "particulate" matter can be sometimes a matter of approach, which affects whether or not it is homogeneous or heterogeneous.
The following forces play an important role in the interaction of colloid particles:
There are two principal ways to prepare colloids:
The stability of a colloidal system is defined by particles remaining suspended in solution at equilibrium.
Stability is hindered by aggregation and sedimentation phenomena, which are driven by the colloid's tendency to reduce surface energy. Reducing the interfacial tension will stabilize the colloidal system by reducing this driving force.
Aggregation is due to the sum of the interaction forces between particles. If attractive forces (such as van der Waals forces) prevail over the repulsive ones (such as the electrostatic ones) particles aggregate in clusters.
Electrostatic stabilization and steric stabilization are the two main mechanisms for stabilization against aggregation.
A combination of the two mechanisms is also possible (electrosteric stabilization). All the above-mentioned mechanisms for minimizing particle aggregation rely on the enhancement of the repulsive interaction forces.
Electrostatic and steric stabilization do not directly address the sedimentation/floating problem.
Particle sedimentation (and also floating, although this phenomenon is less common) arises from a difference in the density of the dispersed and of the continuous phase. The higher the difference in densities, the faster the particle settling.
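The relationship between the density difference and the settling speed can be made quantitative. For a small sphere settling in the laminar (creeping-flow) regime, the classical Stokes result, a standard textbook formula rather than one given in this article, reads

v_s = \frac{2 r^{2} (\rho_p - \rho_f) g}{9 \mu}

where r is the particle radius, \rho_p and \rho_f are the densities of the dispersed particle and the continuous phase, g is the gravitational acceleration, and \mu is the dynamic viscosity of the medium. The settling velocity grows linearly with the density difference and quadratically with particle size, which is why colloidal particles, being so small, settle extremely slowly.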
One method of hindering sedimentation consists in adding to the colloidal suspension a polymer able to form a gel network and characterized by shear thinning properties. Examples of such substances are xanthan and guar gum.
Particle settling is hindered by the stiffness of the polymeric matrix where particles are trapped. In addition, the long polymeric chains can provide a steric or electrosteric stabilization to dispersed particles.
The rheological shear thinning properties are beneficial in the preparation of the suspensions and in their use, as the reduced viscosity at high shear rates facilitates deagglomeration, mixing and, in general, the flow of the suspensions.
Unstable colloidal dispersions can form either "flocs" or "aggregates" as the particles assemble due to interparticle attractions. Flocs are loose and flexible conglomerates of the particles, whereas aggregates are compact and rigid entities. There are methods that distinguish between flocculation and aggregation, such as acoustic spectroscopy. Destabilization can be accomplished by different methods:
Unstable colloidal suspensions of low-volume fraction form clustered liquid suspensions, wherein individual clusters of particles fall to the bottom of the suspension (or float to the top if the particles are less dense than the suspending medium) once the clusters are of sufficient size for the Brownian forces that work to keep the particles in suspension to be overcome by gravitational forces. However, colloidal suspensions of higher-volume fraction form colloidal gels with viscoelastic properties. Viscoelastic colloidal gels, such as bentonite and toothpaste, flow like liquids under shear, but maintain their shape when shear is removed. It is for this reason that toothpaste can be squeezed from a toothpaste tube, but stays on the toothbrush after it is applied.
Multiple light scattering coupled with vertical scanning is the most widely used technique to monitor the dispersion state of a product, hence identifying and quantifying destabilisation phenomena. It works on concentrated dispersions without dilution. When light is sent through the sample, it is backscattered by the particles / droplets. The backscattering intensity is directly proportional to the size and volume fraction of the dispersed phase. Therefore, local changes in concentration (e.g. creaming and sedimentation) and global changes in size (e.g. flocculation, coalescence) are detected and monitored.
The kinetic process of destabilisation can be rather long (up to several months or even years for some products), and the formulator is often required to use further accelerating methods in order to reach a reasonable development time for new product design. Thermal methods are the most commonly used and consist in increasing the temperature to accelerate destabilisation (while staying below the critical temperatures of phase inversion or chemical degradation). Temperature affects not only the viscosity but also the interfacial tension in the case of non-ionic surfactants, or more generally the interaction forces inside the system. Storing a dispersion at high temperature makes it possible to simulate real-life conditions for a product (e.g. a tube of sunscreen cream in a car in the summer), but also to accelerate destabilisation processes up to 200 times.
Mechanical acceleration, including vibration, centrifugation and agitation, is sometimes used. These methods subject the product to different forces that push the particles / droplets against one another, hence helping film drainage. However, some emulsions would never coalesce in normal gravity, while they do under artificial gravity. Moreover, segregation of different populations of particles has been highlighted when using centrifugation and vibration.
In physics, colloids are an interesting model system for atoms. Micrometre-scale colloidal particles are large enough to be observed by optical techniques such as confocal microscopy. Many of the forces that govern the structure and behavior of matter, such as excluded volume interactions or electrostatic forces, govern the structure and behavior of colloidal suspensions. For example, the same techniques used to model ideal gases can be applied to model the behavior of a hard sphere colloidal suspension. In addition, phase transitions in colloidal suspensions can be studied in real time using optical techniques, and are analogous to phase transitions in liquids. In many interesting cases optical fluidity is used to control colloid suspensions.
A colloidal crystal is a highly ordered array of particles that can be formed over a very long range (typically on the order of a few millimeters to one centimeter) and that appear analogous to their atomic or molecular counterparts. One of the finest natural examples of this ordering phenomenon can be found in precious opal, in which brilliant regions of pure spectral color result from close-packed domains of amorphous colloidal spheres of silicon dioxide (or silica, SiO2). These spherical particles precipitate in highly siliceous pools in Australia and elsewhere, and form these highly ordered arrays after years of sedimentation and compression under hydrostatic and gravitational forces. The periodic arrays of submicrometre spherical particles provide similar arrays of interstitial voids, which act as a natural diffraction grating for visible light waves, particularly when the interstitial spacing is of the same order of magnitude as the incident lightwave.
Thus, it has been known for many years that, due to repulsive Coulombic interactions, electrically charged macromolecules in an aqueous environment can exhibit long-range crystal-like correlations, with interparticle separation distances often considerably greater than the individual particle diameter. In all of these cases in nature, the same brilliant iridescence (or play of colors) can be attributed to the diffraction and constructive interference of visible lightwaves that satisfy Bragg's law, in a manner analogous to the scattering of X-rays in crystalline solids.
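A worked illustration of the Bragg condition makes the connection to color concrete; the spacing used here is an assumed, representative value for opal rather than a figure from this article. For first-order diffraction (n = 1) at glancing angle \theta = 90^\circ from lattice planes separated by d = 250 nm,

n \lambda = 2 d \sin\theta \quad\Rightarrow\quad \lambda = 2 \times 250\ \text{nm} \times \sin 90^\circ = 500\ \text{nm},

which lies in the green part of the visible spectrum. Smaller spacings or shallower viewing angles select shorter reflected wavelengths, so tilting such a crystal sweeps the perceived color, producing the play of colors described above.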
The large number of experiments exploring the physics and chemistry of these so-called "colloidal crystals" has emerged as a result of the relatively simple methods that have evolved in the last 20 years for preparing synthetic monodisperse colloids (both polymer and mineral) and, through various mechanisms, implementing and preserving their long-range order formation.
Colloidal phase separation is an important organising principle for compartmentalisation of both the cytoplasm and nucleus of cells, similar in importance to compartmentalisation via lipid bilayer membranes. The term biomolecular condensate has been used to refer to clusters of macromolecules that arise via liquid-liquid, liquid-gel, or liquid-solid phase separation within the cytosol. Macromolecular crowding strongly enhances colloidal phase separation and formation of biomolecular condensates.
Colloidal particles can also serve as a transport vector of diverse contaminants in surface water (sea water, lakes, rivers, fresh water bodies) and in underground water circulating in fissured rocks (e.g. limestone, sandstone, granite). Radionuclides and heavy metals easily sorb onto colloids suspended in water. Various types of colloids are recognised: inorganic colloids (e.g. clay particles, silicates, iron oxy-hydroxides) and organic colloids (humic and fulvic substances). When heavy metals or radionuclides form their own pure colloids, the term "eigencolloid" is used to designate pure phases, e.g. pure Tc(OH)4, U(OH)4, or Am(OH)3. Colloids have been suspected for the long-range transport of plutonium on the Nevada Nuclear Test Site, and they have been the subject of detailed studies for many years. However, the mobility of inorganic colloids is very low in compacted bentonites and in deep clay formations because of the process of ultrafiltration occurring in dense clay membranes. The question is less clear for small organic colloids, often mixed in porewater with truly dissolved organic molecules.
In soil science, the colloidal fraction in soils consists of tiny clay and humus particles that are less than 1 μm in diameter and carry either positive and/or negative electrostatic charges that vary depending on the chemical conditions of the soil sample, i.e. soil pH.
Colloid solutions used in intravenous therapy belong to a major group of volume expanders, and can be used for intravenous fluid replacement. Colloids preserve a high colloid osmotic pressure in the blood, and therefore, they should theoretically preferentially increase the intravascular volume, whereas other types of volume expanders called crystalloids also increase the interstitial volume and intracellular volume. However, there is still controversy about the actual difference in efficacy arising from this distinction, and much of the research related to this use of colloids is based on fraudulent research by Joachim Boldt. Another difference is that crystalloids generally are much cheaper than colloids. | https://en.wikipedia.org/wiki?curid=5346
Cooking
Cooking or cookery is the art, technology, science and craft of preparing food for consumption. Cooking techniques and ingredients vary widely across the world, from grilling food over an open fire to using electric stoves, to baking in various types of ovens, reflecting unique environmental, economic, and cultural traditions and trends.
Types of cooking also depend on the skill levels and training of cooks. Cooking is done both by people in their own dwellings and by professional cooks and chefs in restaurants and other food establishments. Cooking can also occur through chemical reactions without the presence of heat, such as in ceviche, a traditional South American dish where fish is cooked with the acids in lemon or lime juice or orange juice.
Preparing food with heat or fire is an activity unique to humans. It may have started around 2 million years ago, though archaeological evidence for it reaches no more than 1 million years ago.
The expansion of agriculture, commerce, trade, and transportation between civilizations in different regions offered cooks many new ingredients. New inventions and technologies, such as the invention of pottery for holding and boiling water, expanded cooking techniques. Some modern cooks apply advanced scientific techniques to food preparation to further enhance the flavor of the dish served.
Phylogenetic analysis suggests that human ancestors may have invented cooking as far back as 1.8 million to 2.3 million years ago. Re-analysis of burnt bone fragments and plant ashes from the Wonderwerk Cave in South Africa has provided evidence supporting control of fire by early humans by 1 million years ago. There is evidence that "Homo erectus" was cooking their food as early as 500,000 years ago. Evidence for the controlled use of fire by "Homo erectus" beginning some 400,000 years ago has wide scholarly support. Archaeological evidence from 300,000 years ago, in the form of ancient hearths, earth ovens, burnt animal bones, and flint, is found across Europe and the Middle East. Anthropologists think that widespread cooking fires began about 250,000 years ago, when hearths first appeared.
Recently, the earliest hearths have been reported to be at least 790,000 years old.
Communication between the Old World and the New World in the Columbian Exchange influenced the history of cooking. The movement of foods across the Atlantic from the New World, such as potatoes, tomatoes, maize, beans, bell pepper, chili pepper, vanilla, pumpkin, cassava, avocado, peanut, pecan, cashew, pineapple, blueberry, sunflower, chocolate, gourds, and squash, had a profound effect on Old World cooking. The movement of foods across the Atlantic from the Old World, such as cattle, sheep, pigs, wheat, oats, barley, rice, apples, pears, peas, chickpeas, green beans, mustard, and carrots, similarly changed New World cooking.
In the seventeenth and eighteenth centuries, food was a classic marker of identity in Europe. In the nineteenth-century "Age of Nationalism" cuisine became a defining symbol of national identity.
The Industrial Revolution brought mass-production, mass-marketing, and standardization of food. Factories processed, preserved, canned, and packaged a wide variety of foods, and processed cereals quickly became a defining feature of the American breakfast. In the 1920s, freezing methods, cafeterias, and fast food restaurants emerged.
Starting early in the 20th century, governments issued nutrition guidelines that led to the food pyramid (introduced in Sweden in 1974). The 1916 "Food For Young Children" became the first USDA guide to give specific dietary guidelines. Updated in the 1920s, these guides gave shopping suggestions for different-sized families along with a Depression Era revision which included four cost levels. In 1943, the USDA created the "Basic Seven" chart to promote nutrition. It included the first-ever Recommended Daily Allowances from the National Academy of Sciences. In 1956, the "Essentials of an Adequate Diet" brought recommendations which cut the number of groups that American school children would learn about down to four. In 1979, a guide called "Food" addressed the link between excessive amounts of unhealthy foods and chronic diseases. Fats, oils, and sweets were added to the four basic food groups.
Most ingredients in cooking are derived from living organisms. Vegetables, fruits, grains and nuts as well as herbs and spices come from plants, while meat, eggs, and dairy products come from animals. Mushrooms and the yeast used in baking are kinds of fungi. Cooks also use water and minerals such as salt. Cooks can also use wine or spirits.
Naturally occurring ingredients contain various amounts of molecules called "proteins", "carbohydrates" and "fats". They also contain water and minerals. Cooking involves a manipulation of the chemical properties of these molecules.
Carbohydrates include the common sugar, sucrose (table sugar), a disaccharide, and such simple sugars as glucose (made by enzymatic splitting of sucrose) and fructose (from fruit), and starches from sources such as cereal flour, rice, arrowroot and potato.
The interaction of heat and carbohydrate is complex. Long-chain sugars such as starch tend to break down into simpler sugars when cooked, while simple sugars can form syrups. If sugars are heated so that all water of crystallisation is driven off, then caramelization starts, with the sugar undergoing thermal decomposition with the formation of carbon, and other breakdown products producing caramel. Similarly, the heating of sugars and proteins elicits the Maillard reaction, a basic flavor-enhancing technique.
An emulsion of starch with fat or water can, when gently heated, provide thickening to the dish being cooked. In European cooking, a mixture of butter and flour called a roux is used to thicken liquids to make stews or sauces. In Asian cooking, a similar effect is obtained from a mixture of rice or corn starch and water. These techniques rely on the properties of starches to create simpler mucilaginous saccharides during cooking, which causes the familiar thickening of sauces. This thickening will break down, however, under additional heat.
Types of fat include vegetable oils, animal products such as butter and lard, as well as fats from grains, including maize and flax oils. Fats are used in a number of ways in cooking and baking. To prepare stir fries, grilled cheese or pancakes, the pan or griddle is often coated with fat or oil. Fats are also used as an ingredient in baked goods such as cookies, cakes and pies. Fats can reach temperatures higher than the boiling point of water, and are often used to conduct high heat to other ingredients, such as in frying, deep frying or sautéing. Fats are used to add flavor to food (e.g., butter or bacon fat), prevent food from sticking to pans and create a desirable texture.
Edible animal material, including muscle, offal, milk, eggs and egg whites, contains substantial amounts of protein. Almost all vegetable matter (in particular legumes and seeds) also includes proteins, although generally in smaller amounts. Mushrooms have high protein content. Any of these may be sources of essential amino acids. When proteins are heated they become denatured (unfolded) and change texture. In many cases, this causes the structure of the material to become softer or more friable – meat becomes "cooked" and is more friable and less flexible. In some cases, proteins can form more rigid structures, such as the coagulation of albumen in egg whites. The formation of a relatively rigid but flexible matrix from egg white provides an important component in baking cakes, and also underpins many desserts based on meringue.
Cooking often involves water and water-based liquids. These can be added in order to immerse the substances being cooked (this is typically done with water, stock or wine). Alternatively, the foods themselves can release water. A favorite method of adding flavor to dishes is to save the liquid for use in other recipes. Liquids are so important to cooking that the name of the cooking method used is often based on how the liquid is combined with the food, as in steaming, simmering, boiling, braising and blanching. Heating liquid in an open container results in rapidly increased evaporation, which concentrates the remaining flavor and ingredients – this is a critical component of both stewing and sauce making.
Vitamins and minerals are required for normal metabolism; the body cannot manufacture them itself, so they must come from external sources. Vitamins come from several sources including fresh fruit and vegetables (Vitamin C), carrots, liver (Vitamin A), cereal bran, bread, liver (B vitamins), fish liver oil (Vitamin D) and fresh green vegetables (Vitamin K). Many minerals are also essential in small quantities including iron, calcium, magnesium, sodium chloride and sulfur; and in very small quantities copper, zinc and selenium. The micronutrients, minerals, and vitamins in fruit and vegetables may be destroyed or eluted by cooking. Vitamin C is especially prone to oxidation during cooking and may be completely destroyed by protracted cooking. The bioavailability of some vitamins such as thiamin, vitamin B6, niacin, folate, and carotenoids is increased with cooking by being freed from the food microstructure. Blanching or steaming vegetables is a way of minimizing vitamin and mineral loss in cooking.
There are very many methods of cooking, most of which have been known since antiquity. These include baking, roasting, frying, grilling, barbecuing, smoking, boiling, steaming and braising. A more recent innovation is microwaving. Various methods use differing levels of heat and moisture and vary in cooking time. The method chosen greatly affects the end result because some foods are more appropriate to some methods than others. Some major hot cooking techniques include:
Cooking can prevent many foodborne illnesses that would otherwise occur if the food were eaten raw. When heat is used in the preparation of food, it can kill or inactivate harmful organisms, such as bacteria and viruses, as well as various parasites such as tapeworms and "Toxoplasma gondii". Food poisoning and other illness from uncooked or poorly prepared food may be caused by bacteria such as "Escherichia coli", "Salmonella typhimurium" and "Campylobacter", viruses such as noroviruses, and protozoa such as "Entamoeba histolytica". Bacteria, viruses and parasites may be introduced through salad, meat that is uncooked or done rare, and unboiled water.
The sterilizing effect of cooking depends on temperature, cooking time, and technique used. Some food spoilage bacteria such as "Clostridium botulinum" or "Bacillus cereus" can form spores that survive boiling, which then germinate and regrow after the food has cooled. This makes it unsafe to reheat cooked food more than once.
Cooking increases the digestibility of many foods which are inedible or poisonous when raw. For example, raw cereal grains are hard to digest, while kidney beans are toxic when raw or improperly cooked due to the presence of phytohaemagglutinin, which is inactivated by cooking for at least ten minutes at .
Food safety depends on the safe preparation, handling, and storage of food. Food spoilage bacteria proliferate in the "danger zone" temperature range, so food should not be stored within this range. Washing of hands and surfaces, especially when handling different meats, and keeping raw food separate from cooked food to avoid cross-contamination, are good practices in food preparation. Foods prepared on plastic cutting boards may be less likely to harbor bacteria than wooden ones. Washing and disinfecting cutting boards, especially after use with raw meat, poultry, or seafood, reduces the risk of contamination.
Proponents of raw foodism argue that cooking food increases the risk of some of the detrimental effects on food or health. They point out that during cooking of vegetables and fruit containing vitamin C, the vitamin elutes into the cooking water and becomes degraded through oxidation. Peeling vegetables can also substantially reduce the vitamin C content, especially in the case of potatoes where most vitamin C is in the skin. However, research has shown that in the specific case of carotenoids a greater proportion is absorbed from cooked vegetables than from raw vegetables.
German research in 2003 showed significant benefits in reducing breast cancer risk when large amounts of raw vegetable matter are included in the diet. The authors attribute some of this effect to heat-labile phytonutrients. Sulforaphane, a glucosinolate breakdown product, which may be found in vegetables such as broccoli, has been shown to be protective against prostate cancer, however, much of it is destroyed when the vegetable is boiled.
The USDA has studied retention data for 16 vitamins, 8 minerals, and alcohol for approximately 290 foods for various cooking methods.
In a human epidemiological analysis by Richard Doll and Richard Peto in 1981, diet was estimated to cause a large percentage of cancers. Studies suggest that around 32% of cancer deaths may be avoidable by changes to the diet. Some of these cancers may be caused by carcinogens in food generated during the cooking process, although it is often difficult to identify the specific components in diet that serve to increase cancer risk. Many foods, such as beef steak and broccoli, contain low concentrations of both carcinogens and anticarcinogens.
Several studies published since 1990 indicate that cooking meat at high temperature creates heterocyclic amines (HCAs), which are thought to increase cancer risk in humans. Researchers at the National Cancer Institute found that human subjects who ate beef rare or medium-rare had less than one third the risk of stomach cancer than those who ate beef medium-well or well-done. While avoiding meat or eating meat raw may be the only ways to avoid HCAs in meat fully, the National Cancer Institute states that cooking meat below creates "negligible amounts" of HCAs. Also, microwaving meat before cooking may reduce HCAs by 90% by reducing the time needed for the meat to be cooked at high heat. Nitrosamines are found in some food, and may be produced by some cooking processes from proteins or from nitrites used as food preservatives; cured meat such as bacon has been found to be carcinogenic, with links to colon cancer. Ascorbate, which is added to cured meat, however, reduces nitrosamine formation.
Research has shown that grilling, barbecuing and smoking meat and fish increases levels of carcinogenic polycyclic aromatic hydrocarbons (PAH). In Europe, grilled meat and smoked fish generally only contribute a small proportion of dietary PAH intake since they are a minor component of diet – most intake comes from cereals, oils and fats. However, in the US, grilled/barbecued meat is the second highest contributor of the mean daily intake of a known PAH carcinogen benzo[a]pyrene at 21% after ‘bread, cereal and grain’ at 29%.
Baking, grilling or broiling food, especially starchy foods, until a toasted crust is formed generates significant concentrations of acrylamide, a known carcinogen from animal studies; its potential to cause cancer in humans at normal exposures is uncertain. Public health authorities recommend reducing the risk by avoiding overly browning starchy foods or meats when frying, baking, toasting or roasting them.
Cooking dairy products may reduce a protective effect against colon cancer. Researchers at the University of Toronto suggest that ingesting uncooked or unpasteurized dairy products (see also Raw milk) may reduce the risk of colorectal cancer. Mice and rats fed uncooked sucrose, casein, and beef tallow had one-third to one-fifth the incidence of microadenomas as the mice and rats fed the same ingredients cooked. This claim, however, is contentious. According to the Food and Drug Administration of the United States, health benefits claimed by raw milk advocates do not exist. "The small quantities of antibodies in milk are not absorbed in the human intestinal tract," says Barbara Ingham, PhD, associate professor and extension food scientist at the University of Wisconsin-Madison. "There is no scientific evidence that raw milk contains an anti-arthritis factor or that it enhances resistance to other diseases."
Heating sugars with proteins or fats can produce advanced glycation end products ("glycotoxins").
Deep fried food in restaurants may contain high level of trans fat, which is known to increase levels of low-density lipoprotein that in turn may increase risk of heart diseases and other conditions. However, many fast food chains have now switched to trans-fat-free alternatives for deep-frying.
The application of scientific knowledge to cooking and gastronomy has become known as molecular gastronomy. This is a subdiscipline of food science. Important contributions have been made by scientists, chefs and authors such as Herve This (chemist), Nicholas Kurti (physicist), Peter Barham (physicist), Harold McGee (author), Shirley Corriher (biochemist, author), Heston Blumenthal (chef), Ferran Adria (chef), Robert Wolke (chemist, author) and Pierre Gagnaire (chef).
Chemical processes central to cooking include the Maillard reaction – a form of non-enzymatic browning involving an amino acid, a reducing sugar and heat.
Home cooking has traditionally been a process carried out informally in a home or around a communal fire, and can be enjoyed by all members of the family, although in many cultures women bear primary responsibility. Cooking is also often carried out outside of personal quarters, for example at restaurants or schools. Bakeries were one of the earliest forms of cooking outside the home, and bakeries in the past often offered the cooking of pots of food provided by their customers as an additional service. In the present day, factory food preparation has become common, with many "ready-to-eat" foods being prepared and cooked in factories and home cooks using a mixture of scratch-made and factory-made foods together to make a meal. The nutritional value of including more commercially prepared foods has been found to be inferior to that of home-made foods. Home-cooked meals tend to be healthier, with fewer calories and less saturated fat, cholesterol and sodium on a per-calorie basis, while providing more fiber, calcium, and iron. The ingredients are also directly sourced, so there is control over authenticity, taste, and nutritional value. The superior nutritional quality of home-cooking could therefore play a role in preventing chronic disease. Cohort studies following the elderly over 10 years show that adults who cook their own meals have significantly lower mortality, even when controlling for confounding variables.
"Home-cooking" may be associated with comfort food, and some commercially produced foods and restaurant meals are presented through advertising or packaging as having been "home-cooked," regardless of their actual origin. This trend began in the 1920s and is attributed to people in urban areas of the U.S. wanting homestyle food even though their schedules and smaller kitchens made cooking harder. | https://en.wikipedia.org/wiki?curid=5355 |
Card game
A card game is any game using playing cards as the primary device with which the game is played, be they traditional or game-specific.
Countless card games exist, including families of related games (such as poker). A small number of card games played with traditional decks have formally standardized rules, but most are folk games whose rules vary by region, culture, and person.
A card game is played with a "deck" or "pack" of playing cards which are identical in size and shape. Each card has two sides, the "face" and the "back". Normally the backs of the cards are indistinguishable. The faces of the cards may all be unique, or there can be duplicates. The composition of a deck is known to each player. In some cases several decks are shuffled together to form a single "pack" or "shoe".
Games using playing cards exploit the fact that cards are individually identifiable from one side only, so that each player knows only the cards they hold and not those held by anyone else. For this reason card games are often characterized as games of chance or "imperfect information", as distinct from games of strategy or "perfect information", where the current position is fully visible to all players throughout the game. Many games that are not generally placed in the family of card games do in fact use cards for some aspect of their gameplay.
Some games that are placed in the card game genre involve a board. The distinction is that the gameplay of a card game chiefly depends on the use of the cards by players (the board is simply a guide for scorekeeping or for card placement), while board games (the principal non-card game genre to use cards) generally focus on the players' positions on the board, and use the cards for some secondary purpose.
The object of a trick-taking game is based on the play of multiple rounds, or tricks, in each of which each player plays a single card from their hand, and based on the values of played cards one player wins or "takes" the trick. The specific object varies with each game and can include taking as many tricks as possible, taking as many scoring cards within the tricks won as possible, taking as few tricks (or as few penalty cards) as possible, taking a particular trick in the hand, or taking an exact number of tricks. Bridge, Whist, Euchre, 500, Spades, and the various Tarot card games are popular examples.
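The core mechanic of deciding who takes a trick can be sketched in a few lines of code. The following is a minimal, hypothetical Python illustration, not the rule set of any particular game: it assumes cards are (rank, suit) pairs with higher rank indices beating lower ones, that the highest trump wins if any trump is played, and that otherwise the highest card of the led suit wins. Real games layer further constraints, such as the obligation to follow suit, on top of this.

```python
def trick_winner(plays, trump=None):
    """plays: list of (player, (rank, suit)) tuples in order of play.
    Returns the player who takes the trick: the highest trump if any
    trump was played, otherwise the highest card of the suit led."""
    led_suit = plays[0][1][1]

    def power(card):
        rank, suit = card
        if suit == trump:
            return (2, rank)  # trumps beat everything
        if suit == led_suit:
            return (1, rank)  # led suit beats off-suit discards
        return (0, rank)      # off-suit, non-trump cards cannot win

    return max(plays, key=lambda play: power(play[1]))[0]

# Hearts are led; Ben's spade is a trump and takes the trick.
print(trick_winner([("Ann", (10, "H")), ("Ben", (12, "S")), ("Cal", (11, "H"))],
                   trump="S"))  # -> Ben
```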
The object of a matching (or sometimes "melding") game is to acquire particular groups of matching cards before an opponent can do so. In Rummy, this is done through drawing and discarding, and the groups are called melds. Mahjong is a very similar game played with tiles instead of cards. Non-Rummy examples of match-type games generally fall into the "fishing" genre and include the children's games Go Fish and Old Maid.
In a shedding game, players start with a hand of cards, and the object of the game is to be the first player to discard all cards from one's hand. Common shedding games include Crazy Eights (commercialized by Mattel as Uno) and Daihinmin. Some matching-type games are also shedding-type games; some variants of Rummy such as Phase 10 and Rummikub, the bluffing game I Doubt It, and the children's game Old Maid fall into both categories.
The object of an accumulating game is to acquire all cards in the deck. Examples include most War type games, and games involving slapping a discard pile such as Slapjack. Egyptian Ratscrew has both of these features.
In fishing games, cards from the hand are played against cards in a layout on the table, capturing table cards if they match. Fishing games are popular in many nations, including China, where there are many diverse fishing games. Scopa is considered one of the national card games of Italy. Cassino is the only fishing game to be widely played in English-speaking countries. Zwicker, played in Germany, has been described as a "simpler and jollier version of Cassino". Seep is a classic fishing card game popular mainly in northern India. Tablanet (tablić) is a fishing-style game popular in the Balkans.
Comparing card games are those where hand values are compared to determine the winner; they are also known as "vying" or "showdown" games. Poker, blackjack, and baccarat are examples of comparing card games. Nearly all of these games are designed as gambling games.
Solitaire games are designed to be played by one player. Most games begin with a specific layout of cards, called a tableau, and the object is then either to construct a more elaborate final layout, or to clear the tableau and/or the draw pile or "stock" by moving all cards to one or more "discard" or "foundation" piles.
Drinking card games are drinking games using cards, in which the object in playing the game is either to drink or to force others to drink. Many games are simply ordinary card games with the establishment of "drinking rules"; President, for instance, is virtually identical to Daihinmin but with additional rules governing drinking. Poker can also be played using a number of drinks as the wager. Another game often played as a drinking game is Toepen, quite popular in the Netherlands. Some card games are designed specifically to be played as drinking games.
Many card games borrow elements from more than one type. The most common combination is matching and shedding, as in some variants of Rummy, Old Maid, and Go Fish. However, many multi-genre games involve different stages of play for each hand. The most common multi-stage combination is a "trick-and-meld" game, such as Pinochle or Belote. Other multi-stage, multi-genre games include Poke, Gleek, Skitgubbe, and Tichu.
Collectible card games (CCGs) are proprietary playing card games. A CCG is usually a game of strategy between two players, though multiplayer variants exist, with each player using a personally built deck constructed from a very large pool of individually unique cards sold on the commercial market. The cards have different effects, costs, and art. Obtaining the different cards makes the game collectible, and cards are sold or traded on the secondary market. "Magic: The Gathering" and "Yu-Gi-Oh!" are well-known collectible card games.
These games revolve around wagers of money. Though virtually any game in which there are winning and losing outcomes can be wagered on, these games are specifically designed to make the betting process a strategic part of the game. Some of these games involve players betting against each other, such as poker, while in others, like blackjack, players wager against the house.
Poker is a family of gambling games in which players bet into a pool, called the pot, whose value changes as the game progresses, wagering that the value of the hand they hold will beat all others according to the game's ranking system. Variants largely differ on how cards are dealt and the methods by which players can improve a hand. For many reasons, including its age and its popularity among Western militaries, it is one of the most universally known card games in existence.
Many other card games have been designed and published on a commercial or amateur basis. In some cases, the game uses the standard 52-card deck, but the object is unique. In Eleusis, for example, players play single cards, and are told whether the play was legal or illegal, in an attempt to discover the underlying rules made up by the dealer.
Most of these games however typically use a specially made deck of cards designed specifically for the game (or variations of it). The decks are thus usually proprietary, but may be created by the game's players. Uno, Phase 10, Set, and 1000 Blank White Cards are popular dedicated-deck card games; 1000 Blank White Cards is unique in that the cards for the game are designed by the players of the game while playing it; there is no commercially available deck advertised as such.
A deck of either customised dedicated cards or a standard deck of playing cards with assigned meanings is used to simulate the actions of another activity, for example card football.
Many games, including card games, are fabricated by science fiction authors and screenwriters to distance a culture depicted in the story from present-day Western culture. They are commonly used as filler to depict background activities in an atmosphere like a bar or rec room, but sometimes the drama revolves around the play of the game. Some of these games become real card games as the holder of the intellectual property develops and markets a suitable deck and ruleset for the game, while others, such as "Exploding Snap" from the Harry Potter franchise, lack sufficient descriptions of rules, or depend on cards or other hardware that are infeasible or physically impossible.
Any specific card game imposes restrictions on the number of players. The most significant dividing lines run between one-player games and two-player games, and between two-player games and multi-player games. Card games for one player are known as "solitaire" or "patience" card games. (See list of solitaire card games.) Generally speaking, they are in many ways special and atypical, although some of them have given rise to two- or multi-player games such as Spite and Malice.
In card games for two players, usually not all cards are distributed to the players, as they would otherwise have perfect information about the game state. Two-player games have always been immensely popular and include some of the most significant card games such as piquet, bezique, sixty-six, klaberjass, gin rummy and cribbage. Many multi-player games started as two-player games that were adapted to a greater number of players. For such adaptations a number of non-obvious choices must be made beginning with the choice of a game orientation.
One way of extending a two-player game to more players is by building two teams of equal size. A common case is four players in two fixed partnerships, sitting crosswise as in whist and contract bridge. Partners sit opposite to each other and cannot see each other's hands. If communication between the partners is allowed at all, then it is usually restricted to a specific list of permitted signs and signals. 17th-century French partnership games such as triomphe were special in that partners sat next to each other and were allowed to communicate freely so long as they did not exchange cards or play out of order.
Another way of extending a two-player game to more players is as a "cut-throat" game, in which all players fight on their own, and win or lose alone. Most cut-throat card games are "round games", i.e. they can be played by any number of players starting from two or three, so long as there are enough cards for all.
For some of the most interesting games such as ombre, tarot and skat, the associations between players change from hand to hand. Ultimately players all play on their own, but for each hand, some game mechanism divides the players into two teams. Most typically these are "solo games", i.e. games in which one player becomes the soloist and has to achieve some objective against the others, who form a team and win or lose all their points jointly. But in games for more than three players, there may also be a mechanism that selects two players who then have to play against the others.
The players of a card game normally form a circle around a table or other space that can hold cards. The "game orientation" or "direction of play", which is only relevant for three or more players, can be either clockwise or counterclockwise. It is the direction in which various roles in the game proceed. Most regions have a traditional direction of play, such as:
Europe is roughly divided into a clockwise area in the north and a counterclockwise area in the south. The boundary runs between England, Ireland, the Netherlands, Germany, Austria (mostly), Slovakia, Finland, Ukraine and Russia (clockwise) and France, Switzerland, Spain, Italy, Slovenia, the Balkans, Hungary, Romania, Bulgaria, Greece and Turkey (anticlockwise).
Games that originate in a region with a strong preference are often initially played in the original direction, even in regions that prefer the opposite direction. For games that have official rules and are played in tournaments, the direction of play is often prescribed in those rules.
Most games have some form of asymmetry between players. The roles of players are normally expressed in terms of the "dealer", i.e. the player whose task it is to shuffle the cards and distribute them to the players. Being the dealer can be a (minor or major) advantage or disadvantage, depending on the game. Therefore, after each played hand, the deal normally passes to the next player according to the game orientation.
As it can still be an advantage or disadvantage to be the first dealer, there are some standard methods for determining who is the first dealer. A common method is by cutting, which works as follows. One player shuffles the deck and places it on the table. Each player lifts a packet of cards from the top, reveals its bottom card, and returns it to the deck. The player who reveals the highest (or lowest) card becomes dealer. In case of a tie, the process is repeated by the tied players. For some games such as whist this process of cutting is part of the official rules, and the hierarchy of cards for the purpose of cutting (which need not be the same as that used otherwise in the game) is also specified. But in general any method can be used, such as tossing a coin in case of a two-player game, drawing cards until one player draws an ace, or rolling dice.
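As a rough illustration, the cut-for-deal procedure can be simulated in a few lines. This is a hypothetical Python sketch, assuming aces high and ignoring suits when breaking ties, which is why tied players simply cut again; it also draws fresh cards rather than returning each packet to the deck, which makes no practical difference for a sketch.

```python
import random

RANKS = "23456789TJQKA"  # ace high, a common convention for cutting

def cut_for_deal(players):
    """Simulate cutting for first deal: every player reveals one card,
    and the highest card deals; ties are re-cut among the tied players."""
    deck = [rank + suit for rank in RANKS for suit in "SHDC"]
    random.shuffle(deck)
    contenders = list(players)
    while len(contenders) > 1:
        cuts = {player: deck.pop() for player in contenders}
        best = max(RANKS.index(card[0]) for card in cuts.values())
        contenders = [p for p, card in cuts.items()
                      if RANKS.index(card[0]) == best]
    return contenders[0]

print(cut_for_deal(["Ann", "Ben", "Cal", "Dee"]))
```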
A "hand" is a unit of the game that begins with the dealer shuffling and dealing the cards as described below, and ends with the players scoring and the next dealer being determined. The set of cards that each player receives and holds in his or her hands is also known as that player's hand.
The hand is over when the players have finished playing their hands. Most often this occurs when one player (or all) has no cards left. The player who sits after the dealer in the direction of play is known as eldest hand (or in two-player games as elder hand) or forehand. A "game round" consists of as many hands as there are players. After each hand, the deal is passed on in the direction of play, i.e. the previous eldest hand becomes the new dealer. Normally players score points after each hand. A game may consist of a fixed number of rounds. Alternatively it can be played for a fixed number of points. In this case it is over with the hand in which a player reaches the target score.
Shuffling is the process of bringing the cards of a pack into a random order. There are a large number of techniques with various advantages and disadvantages. "Riffle shuffling" is a method in which the deck is divided into two roughly equal-sized halves that are bent and then released, so that the cards interlace. Repeating this process several times randomizes the deck well, but the method is harder to learn than some others and may damage the cards. The "overhand shuffle" and the "Hindu shuffle" are two techniques that work by taking batches of cards from the top of the deck and reassembling them in the opposite order. They are easier to learn but must be repeated more often. A method suitable for small children consists of spreading the cards on a large surface and moving them around before picking up the deck again. This is also the most common method for shuffling tiles such as dominoes.
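The difference between a "perfect" software shuffle and a physical riffle can be made concrete. Below is a hypothetical Python sketch: `fisher_yates` is the standard algorithm that library shuffles implement (one pass makes every ordering equally likely), while `riffle` approximates a single physical riffle using the Gilbert-Shannon-Reeds model, under which roughly seven riffles are needed to randomize a 52-card deck well.

```python
import random

def fisher_yates(deck):
    """One pass produces a uniformly random ordering; this is
    essentially what random.shuffle() does internally."""
    for i in range(len(deck) - 1, 0, -1):
        j = random.randint(0, i)
        deck[i], deck[j] = deck[j], deck[i]
    return deck

def riffle(deck):
    """One riffle under the Gilbert-Shannon-Reeds model: cut roughly in
    half, then drop cards from each half with probability proportional
    to the number of cards remaining in that half."""
    cut = len(deck) // 2 + random.randint(-3, 3)
    left, right = deck[:cut], deck[cut:]
    merged = []
    while left or right:
        if random.random() < len(left) / (len(left) + len(right)):
            merged.append(left.pop(0))
        else:
            merged.append(right.pop(0))
    return merged

deck = list(range(52))
for _ in range(7):  # about seven riffles randomize a 52-card deck well
    deck = riffle(deck)
```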
For casino games that are played for large sums it is vital that the cards be properly randomised, but for many games this is less critical, and in fact player experience can suffer when the cards are shuffled too well. The official skat rules stipulate that the cards are "shuffled well", but according to a decision of the German skat court, a one-handed player should ask another player to do the shuffling, rather than use a shuffling machine, as it would shuffle the cards "too" well. French belote rules go so far as to prescribe that the deck never be shuffled between hands.
The dealer takes all of the cards in the pack, arranges them so that they are in a uniform stack, and shuffles them. In strict play, the dealer then offers the deck to the previous player (in the sense of the game direction) for "cutting". If the deal is clockwise, this is the player to the dealer's right; if counterclockwise, it is the player to the dealer's left. The invitation to cut is made by placing the pack, face downward, on the table near the player who is to cut: who then lifts the upper portion of the pack clear of the lower portion and places it alongside. (Normally the two portions have about equal size. Strict rules often indicate that each portion must contain a certain minimum number of cards, such as three or five.) The formerly lower portion is then replaced on top of the formerly upper portion. Instead of cutting, one may also knock on the deck to indicate that one trusts the dealer to have shuffled fairly.
The actual "deal" (distribution of cards) is done in the direction of play, beginning with eldest hand. The dealer holds the pack, face down, in one hand, and removes cards from the top of it with his or her other hand to distribute to the players, placing them face down on the table in front of the players to whom they are dealt. The cards may be dealt one at a time, or in batches of more than one card; and either the entire pack or a determined number of cards are dealt out. The undealt cards, if any, are left face down in the middle of the table, forming the "stock" (also called the talon, widow, skat or kitty depending on the game and region).
Throughout the shuffle, cut, and deal, the dealer should prevent the players from seeing the faces of any of the cards. The players should not try to see any of the faces. Should a player accidentally see a card other than one's own, proper etiquette would be to admit this. It is also dishonest to try to see cards as they are dealt, or to take advantage of having seen a card. Should a card accidentally become exposed (visible to all), any player can demand a redeal (all the cards are gathered up, and the shuffle, cut, and deal are repeated) or that the card be replaced randomly into the deck ("burning" it) and a replacement dealt from the top to the player who was to receive the revealed card.
When the deal is complete, all players pick up their cards, or "hand", and hold them in such a way that the faces can be seen by the holder of the cards but not the other players, or vice versa depending on the game. It is helpful to fan one's cards out so that if they have corner indices all their values can be seen at once. In most games, it is also useful to sort one's hand, rearranging the cards in a way appropriate to the game. For example, in a trick-taking game it may be easier to have all one's cards of the same suit together, whereas in a rummy game one might sort them by rank or by potential combinations.
A new card game starts in a small way, either as someone's invention, or as a modification of an existing game. Those playing it may agree to change the rules as they wish. The rules that they agree on become the "house rules" under which they play the game. A set of house rules may be accepted as valid by a group of players wherever they play, as it may also be accepted as governing all play within a particular house, café, or club.
When a game becomes sufficiently popular, so that people often play it with strangers, there is a need for a generally accepted set of rules. This need is often met when a particular set of house rules becomes generally recognized. For example, when Whist became popular in 18th-century England, players in the Portland Club agreed on a set of house rules for use on its premises. Players in some other clubs then agreed to follow the "Portland Club" rules, rather than go to the trouble of codifying and printing their own sets of rules. The Portland Club rules eventually became generally accepted throughout England and Western cultures.
There is nothing static or "official" about this process. For the majority of games, there is no one set of universal rules by which the game is played, and the most common ruleset is no more or less than that. Many widely played card games, such as Canasta and Pinochle, have no official regulating body. The most common ruleset is often determined by the most popular distribution of rulebooks for card games. Perhaps the original compilation of popular playing card games was collected by Edmund Hoyle, a self-made authority on many popular parlor games. The U.S. Playing Card Company now owns the eponymous Hoyle brand, and publishes a series of rulebooks for various families of card games that have largely standardized the games' rules in countries and languages where the rulebooks are widely distributed. However, players are free to, and often do, invent "house rules" to supplement or even largely replace the "standard" rules.
If there is a sense in which a card game can have an "official" set of rules, it is when that card game has an "official" governing body. For example, the rules of tournament bridge are governed by the World Bridge Federation, and by local bodies in various countries such as the American Contract Bridge League in the U.S., and the English Bridge Union in England. The rules of skat are governed by The International Skat Players Association and, in Germany, by the "Deutscher Skatverband" which publishes the "Skatordnung". The rules of French tarot are governed by the Fédération Française de Tarot. The rules of Poker's variants are largely traditional, but enforced by the World Series of Poker and the World Poker Tour organizations which sponsor tournament play. Even in these cases, the rules must only be followed exactly at games sanctioned by these governing bodies; players in less formal settings are free to implement agreed-upon supplemental or substitute rules at will.
An infraction is any action which is against the rules of the game, such as playing a card when it is not one's turn to play or the accidental exposure of a card, informally known as "bleeding."
In many official sets of rules for card games, the rules specifying the penalties for various infractions occupy more pages than the rules specifying how to play correctly. This is tedious, but necessary for games that are played seriously. Players who intend to play a card game at a high level generally ensure before beginning that all agree on the penalties to be used. When playing privately, this will normally be a question of agreeing house rules. In a tournament there will probably be a tournament director who will enforce the rules when required and arbitrate in cases of doubt.
If a player breaks the rules of a game deliberately, this is cheating. The rest of this section is therefore about accidental infractions, caused by ignorance, clumsiness, inattention, etc.
As the same game is played repeatedly among a group of players, precedents build up about how a particular infraction of the rules should be handled. For example, "Sheila just led a card when it wasn't her turn. Last week when Jo did that, we agreed ... etc." Sets of such precedents tend to become established among groups of players, and to be regarded as part of the house rules. Sets of house rules may become formalized, as described in the previous section. Therefore, for some games, there is a "proper" way of handling infractions of the rules. But for many games, without governing bodies, there is no standard way of handling infractions.
In many circumstances, there is no need for special rules dealing with what happens after an infraction. As a general principle, the person who broke a rule should not benefit by it, and the other players should not lose by it. An exception to this may be made in games with fixed partnerships, in which it may be felt that the partner(s) of the person who broke a rule should also not benefit. The penalty for an accidental infraction should be as mild as reasonable, consistent with there being no possible benefit to the person responsible.
The same kinds of games can also be played with tiles made of wood, plastic, bone, or similar materials. The most notable examples of such tile sets are dominoes, mahjong tiles, and Rummikub tiles. Chinese dominoes are also available as playing cards. It is not clear, though, whether Emperor Muzong of Liao really played with domino cards as early as 969. Legend dates the invention of dominoes to the year 1112, and the earliest known domino rules are from the following decade. Five hundred years later, domino cards were reported as a new invention.
The first playing cards appeared in the 9th century in Tang-dynasty China.
The first reference to a card game in world history dates no later than the 9th century, when the "Collection of Miscellanea at Duyang", written by the Tang dynasty writer Su E, described a princess, a daughter of Emperor Yizong of Tang, playing the "leaf game" with members of the Wei clan (the family of the princess's husband) in 868. The Song dynasty statesman and historian Ouyang Xiu noted that paper playing cards arose in connection with an earlier development in the book format from scrolls to pages. During the Ming dynasty (1368–1644), characters from popular novels such as the "Water Margin" were widely featured on the faces of playing cards. A precise description of Chinese money playing cards (in four suits) survives from the 15th century. Mahjong tiles are a 19th-century invention based on three-suited money playing card decks, similar to the way in which Rummikub tiles were more recently derived from modern Western playing cards.
Playing cards first appeared in Europe in the last quarter of the 14th century. The earliest European references speak of a Saracen or Moorish game called "naib", and in fact an almost complete Mamluk Egyptian deck of 52 cards in a distinct oriental design has survived from around the same time, with the four suits "swords", "polo sticks", "cups" and "coins" and the ranks "king", "governor", "second governor", and "ten" to "one".
The 1430s in Italy saw the invention of the tarot deck, a full Latin-suited deck augmented by suitless cards with painted motifs that played a special role as trumps. Tarot card games are still played with (subsets of) these decks in parts of Central Europe. A full tarot deck contains 14 cards in each suit: low cards labeled 1–10 and four court cards (jack, cavalier/knight, queen, and king), plus the fool or excuse card and 21 trump cards. In the 18th century the card images of the traditional Italian tarot decks became popular in cartomancy and evolved into "esoteric" decks used primarily for that purpose; today most tarot decks sold in North America are of the occult type and are closely associated with fortune telling. In Europe, "playing tarot" decks remain popular for games, and have evolved since the 18th century to use regional suits (spades, hearts, diamonds and clubs in France; leaves, hearts, bells and acorns in Germany) as well as other familiar aspects of the Anglo-American deck such as corner card indices and "stamped" card symbols for non-court cards. Decks differ regionally based on the number of cards needed to play the games; the French tarot consists of the "full" 78 cards, while Germanic, Spanish and Italian tarot variants remove certain values (usually low suited cards) from the deck, creating decks with as few as 32 cards.
The French suits were introduced around 1480 and, in France, mostly replaced the earlier Latin suits of "swords", "clubs", "cups" and "coins" (which are still common in Spanish- and Portuguese-speaking countries as well as in some northern regions of Italy). The suit symbols, being very simple and single-color, could be stamped onto the playing cards to create a deck, thus only requiring special full-color card art for the court cards. This drastically simplified the production of a deck of cards compared with the traditional Italian deck, which used unique full-color art for each card. The French suits became popular in English playing cards in the 16th century (despite historic animosity between France and England), and from there were introduced to British colonies including North America. The rise of Western culture has led to the near-universal popularity and availability of French-suited playing cards even in areas with their own regional card art.
In Japan, a distinct 48-card hanafuda deck is popular. It is derived from 16th-century Portuguese decks, after undergoing a long evolution driven by laws enacted by the Tokugawa shogunate attempting to ban the use of playing cards.
The best-known deck internationally is the Anglo-American pattern of the 52-card French deck used for such games as poker and contract bridge. It contains one card for each unique combination of thirteen "ranks" and the four French "suits" "spades", "hearts", "diamonds", and "clubs". The ranks (from highest to lowest in bridge and poker) are "ace", "king", "queen", "jack" (or "knave"), and the numbers from "ten" down to "two" (or "deuce"). The trump cards and "knight" cards from the French playing tarot are not included.
Originally the term "knave" was more common than "jack"; the card had been called a jack as part of the terminology of All-Fours since the 17th century, but the word was considered vulgar. (Note the exclamation by Estella in Charles Dickens's novel "Great Expectations": "He calls the knaves, Jacks, this boy!") However, because the card abbreviation for knave ("Kn") was so close to that of the king, it was very easy to confuse them, especially after suits and rankings were moved to the corners of the card in order to enable people to fan them in one hand and still see all the values. (The earliest known deck to place suits and rankings in the corner of the card is from 1693, but these cards did not become common until after 1864 when Hart reintroduced them along with the knave-to-jack change.) However, books of card games published in the third quarter of the 19th century evidently still referred to the "knave", and the term with this definition is still recognized in the United Kingdom.
In the 17th century, a French five-trick gambling game called Bête became popular and spread to Germany, where it was called La Bete, and to England, where it was named Beast. It was a derivative of Triomphe and was the first card game in history to introduce the concept of bidding.
Chinese handmade mother-of-pearl gaming counters were used in scoring and bidding of card games in the West during the approximate period of 1700–1840. The gaming counters would bear an engraving such as a coat of arms or a monogram to identify a family or individual. Many of the gaming counters also depict Chinese scenes, flowers or animals. Queen Charlotte, wife of George III, is one prominent British individual who is known to have played with the Chinese gaming counters. Card games such as Ombre, Quadrille and Pope Joan were popular at the time and required counters for scoring. The production of counters declined after Whist, with its different scoring method, became the most popular card game in the West.
Based on the association of card games and gambling, Pope Benedict XIV banned card games on October 17, 1750.
Since the 19th century some decks have been specially printed for certain games. Old Maid, Phase 10, Rook, and Uno are examples of games that can be played with one or more 52-card decks but are usually played with custom decks. Cards play an important role in board games like Risk and Monopoly. | https://en.wikipedia.org/wiki?curid=5360 |
Cross-stitch
Cross-stitch is a form of sewing and a popular form of counted-thread embroidery in which X-shaped stitches in a tiled, raster-like pattern are used to form a picture. The stitcher counts the threads on a piece of evenweave fabric (such as linen) in each direction so that the stitches are of uniform size and appearance. This form of cross-stitch is also called counted cross-stitch in order to distinguish it from other forms of cross-stitch. Sometimes cross-stitch is done on designs printed on the fabric (stamped cross-stitch); the stitcher simply stitches over the printed pattern. Cross-stitch is often executed on easily countable fabric called aida cloth whose weave creates a plainly visible grid of squares with holes for the needle at each corner.
Fabrics used in cross-stitch include linen, aida, and mixed-content fabrics called 'evenweave' such as jobelan. All cross-stitch fabrics are technically "evenweave" as the term refers to the fact that the fabric is woven to make sure that there are the same number of threads per inch in both the warp and the weft (i.e. vertically and horizontally). Fabrics are categorized by threads per inch (referred to as 'count'), which can range from 11 to 40 count.
Counted cross-stitch projects are worked from a gridded pattern called a chart and can be used on any count fabric; the count of the fabric and the number of threads per stitch determine the size of the finished stitching. For example, if a given design is stitched on a 28 count cross-stitch fabric with each cross worked over two threads, the finished stitching size is the same as it would be on a 14 count aida fabric with each cross worked over one square. These methods are referred to as "2 over 2" (2 embroidery threads used to stitch over 2 fabric threads) and "1 over 1" (1 embroidery thread used to stitch over 1 fabric thread or square), respectively. There are different methods of stitching a pattern, including the cross-country method where one colour is stitched at a time, or the parking method where one block of fabric is stitched at a time and the end of the thread is "parked" at the next point the same colour occurs in the pattern.
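The count arithmetic above works out as follows: the effective number of stitches per inch equals the fabric count divided by the number of fabric threads each stitch spans, so 28-count fabric worked "2 over 2" gives the same 14 stitches per inch as 14-count aida worked "1 over 1". The short Python sketch below illustrates this; the function name and chart dimensions are illustrative examples, not taken from any real pattern.

```python
def finished_size(stitches_wide, stitches_high, fabric_count,
                  threads_per_stitch=1):
    """Finished design size in inches. `fabric_count` is fabric threads
    per inch; each cross spans `threads_per_stitch` threads, giving
    fabric_count / threads_per_stitch stitches per inch."""
    stitches_per_inch = fabric_count / threads_per_stitch
    return (stitches_wide / stitches_per_inch,
            stitches_high / stitches_per_inch)

# A hypothetical 140 x 84 stitch chart finishes at the same size worked
# "2 over 2" on 28-count evenweave as "1 over 1" on 14-count aida:
print(finished_size(140, 84, 28, threads_per_stitch=2))  # (10.0, 6.0)
print(finished_size(140, 84, 14, threads_per_stitch=1))  # (10.0, 6.0)
```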
Cross-stitch is the oldest form of embroidery, and examples can be found all over the world dating from the Middle Ages. Many folk museums show examples of clothing decorated with cross-stitch, especially from continental Europe and Asia.
The cross-stitch sampler is called that because it was generally stitched by a young girl to learn how to stitch and to record alphabet and other patterns to be used in her household sewing. These samples of her stitching could be referred back to over the years. Often, motifs and initials were stitched on household items to identify their owner, or simply to decorate the otherwise-plain cloth. The earliest known cross-stitch sampler made in the United States is currently housed at Pilgrim Hall in Plymouth, Massachusetts. The sampler was created by Loara Standish, daughter of Captain Myles Standish and pioneer of the Leviathan stitch, circa 1653.
Traditionally, cross-stitch was used to embellish items like household linens, tablecloths, dishcloths, and doilies (only a small portion of which would actually be embroidered, such as a border). Although there are many cross-stitchers who still employ it in this fashion, it is now increasingly popular to work the pattern on pieces of fabric and hang them on the wall for decoration. Cross-stitch is also often used to make greeting cards, pillowtops, or as inserts for box tops, coasters and trivets.
Multicoloured, shaded, painting-like patterns as we know them today are a fairly modern development, deriving from similar shaded patterns of Berlin wool work of the mid-nineteenth century. Besides designs created expressly for cross-stitch, there are software programs that convert a photograph or a fine art image into a chart suitable for stitching. One example of this is in the cross-stitched reproduction of the Sistine Chapel charted and stitched by Joanna Lopianowski-Roberts.
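Such chart-conversion software essentially reduces an image to one pixel per stitch and a limited color palette. The following is a minimal, hypothetical sketch using the Pillow imaging library; real charting programs go further, matching each palette entry to a specific floss color number (such as a DMC shade) and rendering a symbol key, which is omitted here.

```python
from PIL import Image  # the Pillow imaging library

def photo_to_chart(path, stitches_wide=100, colors=30):
    """Reduce a photo to one pixel per stitch and a limited palette.
    Each pixel of the result represents one cross-stitch in one of
    `colors` thread colors."""
    img = Image.open(path).convert("RGB")
    width, height = img.size
    stitches_high = round(height * stitches_wide / width)  # keep aspect ratio
    small = img.resize((stitches_wide, stitches_high), Image.LANCZOS)
    # Quantize to an adaptive palette of `colors` entries.
    return small.convert("P", palette=Image.ADAPTIVE, colors=colors)

# Example usage (file names are hypothetical):
# chart = photo_to_chart("portrait.jpg")
# chart.resize((chart.width * 8, chart.height * 8),
#              Image.NEAREST).save("chart_preview.png")
```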
There are many cross-stitching "guilds" and groups across the United States and Europe which offer classes, collaborate on large projects, stitch for charity, and provide other ways for local cross-stitchers to get to know one another. Individually owned local needlework shops (LNS) often have stitching nights at their shops, or host weekend stitching retreats.
Today, cotton floss is the most common embroidery thread. It is a thread made of mercerized cotton, composed of six strands that are only loosely twisted together and easily separable. While there are other manufacturers, the two most-commonly used (and oldest) brands are DMC and Anchor, both of which have been manufacturing embroidery floss since the 1800s.
Other materials used are pearl (or perle) cotton, Danish flower thread, silk, and rayon. Different wool threads, metallic threads or other novelty threads are also used, sometimes for the whole work, but often for accents and embellishments. Hand-dyed cross-stitch floss is created just as the name implies: it is dyed by hand. Because of this, there are variations in the amount of color throughout the thread. Some variations can be subtle, while others offer a strong contrast, and some threads carry more than one color, which in the right project can create striking results.
Cross-stitch is widely used in traditional Palestinian dressmaking.
Other stitches are also often used in cross-stitch, among them quarter-, half-, and three-quarter-stitches and backstitches.
Cross-stitch is often used together with other stitches, and the stitch itself comes in a variety of forms. It is sometimes used in crewel embroidery, especially in its more modern derivatives. It is also often used in needlepoint.
A specialized historical form of embroidery using cross-stitch is Assisi embroidery.
There are many stitches related to cross-stitch that were used in similar ways in earlier times. The best known are Italian cross-stitch, Celtic cross-stitch, Irish cross-stitch, long-armed cross-stitch, Ukrainian cross-stitch and Montenegrin stitch. Italian cross-stitch and Montenegrin stitch are reversible, meaning the work looks the same on both sides. These styles have a slightly different look than ordinary cross-stitch. These more difficult stitches are rarely used in mainstream embroidery, but they are still used to recreate historical pieces of embroidery or by the creative and adventurous stitcher.
The double cross-stitch, also known as a Leviathan stitch or Smyrna cross-stitch, combines a cross-stitch with an upright cross-stitch.
Berlin wool work and similar petit point stitchery resembles the heavily shaded, opulent styles of cross-stitch, and sometimes also used charted patterns on paper.
Cross-stitch is often combined with other popular forms of embroidery, such as Hardanger embroidery or blackwork embroidery. Cross-stitch may also be combined with other work, such as canvaswork or drawn thread work. Beadwork and other embellishments such as paillettes, charms, small buttons and specialty threads of various kinds may also be used.
Cross-stitch has become increasingly popular with the younger generation of Europe in recent years. Retailers such as John Lewis experienced a 17% rise in sales of haberdashery products between 2009 and 2010. Hobbycraft, a chain of stores selling craft supplies, also enjoyed an 11% increase in sales over the year to February 22, 2009.
Knitting and cross-stitching have become more popular hobbies for a younger market, in contrast to its traditional reputation as a hobby for retirees. Sewing and craft groups such as Stitch and Bitch London have resurrected the idea of the traditional craft club. At Clothes Show Live 2010 there was a new area called "Sknitch" promoting modern sewing, knitting and embroidery.
In a departure from the traditional designs associated with cross-stitch, there is a current trend for more postmodern or tongue-in-cheek designs featuring retro images or contemporary sayings. It is linked to a concept known as 'subversive cross-stitch', which involves more risqué designs, often fusing the traditional sampler style with sayings designed to shock or be incongruous with the old-fashioned image of cross-stitch.
Stitching designs on other materials can be accomplished by using waste canvas. This is a temporary gridded canvas similar to regular canvas used for embroidery that is held together by a water-soluble glue, which is removed after completion of stitch design. Other crafters have taken to cross-stitching on all manner of gridded objects as well including old kitchen strainers or chain-link fences.
In the 21st century, an emphasis on feminist design has emerged within cross-stitch communities. There are collections of patterns available with feminist themes, and many more feminist patterns online. Some cross-stitchers have commented on the way that the practice of embroidery makes them feel connected to the women who practised it before them. There is a push for all embroidery, including cross-stitch, to be respected as a significant art form.
An increasingly popular activity for cross-stitchers is to watch and make YouTube videos detailing their hobby. Flosstubers, as they are known, typically cover WIPs (works in progress), FOs (finished objects), and hauls (new patterns, thread, and fabric, as well as cross-stitching accessories such as needle minders). | https://en.wikipedia.org/wiki?curid=5361 |