The Compact Disc (CD) is a digital optical disc data storage format. The format was originally developed to store and play only sound recordings but was later adapted for storage of data (CD-ROM). Several other formats were further derived from these, including write-once audio and data storage (CD-R), rewritable media (CD-RW), Video Compact Disc (VCD), Super Video Compact Disc (SVCD), Photo CD, PictureCD, CD-i, and Enhanced Music CD. Audio CDs and audio CD players have been commercially available since October 1982.
In 2004, worldwide sales of audio CDs, CD-ROMs and CD-Rs reached about 30 billion discs. By 2007, 200 billion CDs had been sold worldwide. CDs are increasingly being replaced by other forms of digital storage and distribution, with the result that audio CD sales rates in the U.S. have dropped about 50% from their peak; however, they remain one of the primary distribution methods for the music industry. In 2014, revenues from digital music services matched those from physical format sales for the first time.
The Compact Disc is an evolution of LaserDisc technology, where a focused laser beam is used that enables the high information density required for high-quality digital audio signals. Prototypes were developed by Philips and Sony independently in the late 1970s. In 1979, Sony and Philips set up a joint task force of engineers to design a new digital audio disc. After a year of experimentation and discussion, the Red Book CD-DA standard was published in 1980. After their commercial release in 1982, compact discs and their players were extremely popular. Despite costing up to $1,000, over 400,000 CD players were sold in the United States between 1983 and 1984. The success of the compact disc has been credited to the cooperation between Philips and Sony, who came together to agree upon and develop compatible hardware. The unified design of the compact disc allowed consumers to purchase any disc or player from any company, and allowed the CD to dominate the at-home music market unchallenged.
In 1974, Lou Ottens, director of the audio division of Philips, started a small group with the aim of developing an analog optical audio disc with a diameter of 20 cm and a sound quality superior to that of the vinyl record. However, due to the unsatisfactory performance of the analog format, two Philips research engineers recommended a digital format in March 1974. In 1977, Philips established a laboratory with the mission of creating a digital audio disc. The diameter of Philips's prototype compact disc was set at 11.5 cm, the diagonal of an audio cassette.
Heitaro Nakajima, who developed an early digital audio recorder within Japan's national public broadcasting organization NHK in 1970, became general manager of Sony's audio department in 1971. His team developed a digital PCM adaptor audio tape recorder using a Betamax video recorder in 1973. After this, in 1974 the leap to storing digital audio on an optical disc was easily made. Sony first publicly demonstrated an optical digital audio disc in September 1976. A year later, in September 1977, Sony showed the press a 30 cm disc that could play 60 minutes of digital audio (44,100 Hz sampling rate and 16-bit resolution) using MFM modulation. In September 1978, the company demonstrated an optical digital audio disc with a 150-minute playing time, 44,056 Hz sampling rate, 16-bit linear resolution, and cross-interleaved error correction code—specifications similar to those later settled upon for the standard Compact Disc format in 1980. Technical details of Sony's digital audio disc were presented during the 62nd AES Convention, held on 13–16 March 1979, in Brussels. Sony's AES technical paper was published on 1 March 1979. A week later, on 8 March, Philips publicly demonstrated a prototype of an optical digital audio disc at a press conference called "Philips Introduce Compact Disc" in Eindhoven, Netherlands.
As a result, in 1979, Sony and Philips set up a joint task force of engineers to design a new digital audio disc. Led by engineers Kees Schouhamer Immink and Toshitada Doi, the research pushed forward laser and optical disc technology. After a year of experimentation and discussion, the task force produced the Red Book CD-DA standard. First published in 1980, the standard was formally adopted by the IEC as an international standard in 1987, with various amendments becoming part of the standard in 1996.
The Japanese launch was followed in March 1983 by the introduction of CD players and discs to Europe and North America (where CBS Records released sixteen titles). This event is often seen as the "Big Bang" of the digital audio revolution. The new audio disc was enthusiastically received, especially in the early-adopting classical music and audiophile communities, and its handling quality received particular praise. As the price of players gradually came down, and with the introduction of the portable Walkman the CD began to gain popularity in the larger popular and rock music markets. The first artist to sell a million copies on CD was Dire Straits, with their 1985 album Brothers in Arms. The first major artist to have his entire catalogue converted to CD was David Bowie, whose 15 studio albums were made available by RCA Records in February 1985, along with four greatest hits albums. In 1988, 400 million CDs were manufactured by 50 pressing plants around the world.
The CD was planned to be the successor of the gramophone record for playing music, rather than primarily as a data storage medium. From its origins as a musical format, CDs have grown to encompass other applications. In 1983, following the CD's introduction, Immink and Braat presented the first experiments with erasable compact discs during the 73rd AES Convention. In June 1985, the computer-readable CD-ROM (read-only memory) and, in 1990, CD-Recordable were introduced, also developed by both Sony and Philips. Recordable CDs were a new alternative to tape for recording music and copying music albums, without the defects introduced by the compression used in other digital recording methods. Other newer video formats such as DVD and Blu-ray use the same physical geometry as CD, and most DVD and Blu-ray players are backward compatible with audio CD.
Meanwhile, with the advent and popularity of Internet-based distribution of files in lossily compressed audio formats such as MP3, sales of CDs began to decline in the 2000s. For example, between 2000 and 2008, despite overall growth in music sales and one anomalous year of increase, major-label CD sales declined by 20% overall, although independent and DIY music sales may have fared better according to figures released on 30 March 2009, and CDs continued to sell in large numbers. As of 2012, CDs and DVDs made up only 34 percent of music sales in the United States. In Japan, however, over 80 percent of music was bought on CDs and other physical formats as of 2015.
Replicated CDs are mass-produced initially using a hydraulic press. Small granules of heated raw polycarbonate plastic are fed into the press. A screw forces the liquefied plastic into the mold cavity. The mold closes with a metal stamper in contact with the disc surface. The plastic is allowed to cool and harden. Once opened, the disc substrate is removed from the mold by a robotic arm, and a 15 mm diameter center hole (called a stacking ring) is created. The time it takes to "stamp" one CD is usually two to three seconds.
This method produces the clear plastic blank part of the disc. After a metallic reflecting layer (usually aluminium, but sometimes gold or other metal) is applied to the clear blank substrate, the disc goes under a UV light for curing and it is ready to go to press. To prepare to press a CD, a glass master is made, using a high-powered laser on a device similar to a CD writer. The glass master is a positive image of the desired CD surface (with the desired microscopic pits and lands). After testing, it is used to make a die by pressing it against a metal disc.
The die is a negative image of the glass master: typically, several are made, depending on the number of pressing mills that are to make the CD. The die then goes into a press, and the physical image is transferred to the blank CD, leaving a final positive image on the disc. A small amount of lacquer is applied as a ring around the center of the disc, and rapid spinning spreads it evenly over the surface. Edge protection lacquer is applied before the disc is finished. The disc can then be printed and packed.
The most expensive part of a CD is the jewel case. In 1995, material costs were 30 cents for the jewel case and 10 to 15 cents for the CD. Wholesale cost of CDs was $0.75 to $1.15, which retailed for $16.98. On average, the store received 35 percent of the retail price, the record company 27 percent, the artist 16 percent, the manufacturer 13 percent, and the distributor 9 percent. When 8-track tapes, cassette tapes, and CDs were introduced, each was marketed at a higher price than the format it succeeded, even though the cost to produce the media was lower. This was done because the newer format was perceived as more valuable. The pattern continued from vinyl to CDs but was broken when Apple marketed MP3s for $0.99 and albums for $9.99. The incremental cost of producing an MP3, though, is very small.
CD-R recordings are designed to be permanent. Over time, the dye's physical characteristics may change, causing read errors and data loss, until the reading device can no longer recover with error correction methods. The design life is from 20 to 100 years, depending on the quality of the discs, the quality of the writing drive, and storage conditions. However, testing has demonstrated such degradation of some discs in as little as 18 months under normal storage conditions. This failure is known as disc rot, for which there are several, mostly environmental, reasons.
The ReWritable Audio CD is designed to be used in a consumer audio CD recorder, which will not (without modification) accept standard CD-RW discs. These consumer audio CD recorders use the Serial Copy Management System (SCMS), an early form of digital rights management (DRM), to conform to the United States' Audio Home Recording Act (AHRA). The ReWritable Audio CD is typically somewhat more expensive than CD-RW due to (a) lower volume and (b) a 3% AHRA royalty used to compensate the music industry for the making of a copy.
Due to technical limitations, the original ReWritable CD could be written no faster than 4x speed. High Speed ReWritable CD has a different design, which permits writing at speeds ranging from 4x to 12x. Original CD-RW drives can only write to original ReWritable CDs. High Speed CD-RW drives can typically write to both original ReWritable CDs and High Speed ReWritable CDs. Both types of CD-RW discs can be read in most CD drives. Higher speed CD-RW discs, Ultra Speed (16x to 24x write speed) and Ultra Speed+ (32x write speed) are now available.
A CD is read by focusing a 780 nm wavelength (near-infrared) semiconductor laser, housed within the CD player, through the bottom of the polycarbonate layer. The change in height between pits and lands results in a difference in the way the light is reflected. By measuring the intensity change with a photodiode, the data can be read from the disc. To accommodate the spiral pattern of data, the semiconductor laser is placed on a swing arm within the disc tray of any CD player. This swing arm allows the laser to read information from the centre to the edge of a disc without having to interrupt the spinning of the disc itself.
The pits and lands themselves do not directly represent the zeros and ones of binary data. Instead, non-return-to-zero, inverted encoding is used: a change from pit to land or land to pit indicates a one, while no change indicates a series of zeros. There must be at least two and no more than ten zeros between each one, which is defined by the length of the pit. This in turn is decoded by reversing the eight-to-fourteen modulation used in mastering the disc, and then reversing the cross-interleaved Reed–Solomon coding, finally revealing the raw data stored on the disc. These encoding techniques (defined in the Red Book) were originally designed for CD Digital Audio, but they later became a standard for almost all CD formats (such as CD-ROM).
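The transition rule described above can be sketched in a few lines of Python; the pit/land level representation and the helper name are illustrative only, not part of any CD specification:

```python
def nrzi_decode(levels):
    """Decode a run of physical levels (0 = land, 1 = pit) into channel bits.

    On a CD, an edge between pit and land encodes a 1; no change encodes a 0.
    """
    bits = []
    for prev, cur in zip(levels, levels[1:]):
        bits.append(1 if cur != prev else 0)
    return bits

# Three lands, four pits, three lands: the two edges each decode to a 1,
# separated by a run of zeros that satisfies the 2-to-10 run-length rule.
print(nrzi_decode([0, 0, 0, 1, 1, 1, 1, 0, 0, 0]))
# → [0, 0, 1, 0, 0, 0, 1, 0, 0]
```

In a real player this bit stream would then pass through EFM demodulation and CIRC error correction, as the paragraph above describes.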
CDs are susceptible to damage during handling and from environmental exposure. Pits are much closer to the label side of a disc, enabling defects and contaminants on the clear side to be out of focus during playback. Consequently, CDs are more likely to suffer damage on the label side of the disc. Scratches on the clear side can be repaired by refilling them with similar refractive plastic or by careful polishing. The edges of CDs are sometimes incompletely sealed, allowing gases and liquids to corrode the metal reflective layer and to interfere with the focus of the laser on the pits. The fungus Geotrichum candidum, found in Belize, has been found to consume the polycarbonate plastic and aluminium found in CDs.
The digital data on a CD begins at the center of the disc and proceeds toward the edge, which allows adaptation to the different size formats available. Standard CDs are available in two sizes. By far the most common is 120 millimetres (4.7 in) in diameter, with a 74- or 80-minute audio capacity and a 650 or 700 MiB (737,280,000-byte) data capacity. This capacity was reportedly specified by Sony executive Norio Ohga in May 1980 so as to be able to contain the entirety of the London Philharmonic Orchestra's recording of Beethoven's Ninth Symphony on one disc. According to Kees Immink, this is a myth, as the code format had not yet been decided in May 1980. The adoption of EFM one month later would have allowed a playing time of 97 minutes for a 120 mm diameter disc, or 74 minutes for a disc as small as 100 mm. The 120 mm diameter has been adopted by subsequent formats, including Super Audio CD, DVD, HD DVD, and Blu-ray Disc. Eighty-millimetre discs ("Mini CDs") were originally designed for CD singles and can hold up to 24 minutes of music or 210 MiB of data, but never became popular. Today, nearly every single is released on a 120 mm CD, called a Maxi single.
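The 737,280,000-byte figure follows directly from the disc's timing parameters. A minimal sketch of the arithmetic, assuming the common CD-ROM Mode 1 sector payload:

```python
# User-data capacity of an 80-minute disc, assuming CD-ROM Mode 1 sectors.
minutes = 80
sectors_per_second = 75       # Red Book: 75 sectors (frames) per second
user_bytes_per_sector = 2048  # Mode 1 payload after error correction
capacity_bytes = minutes * 60 * sectors_per_second * user_bytes_per_sector
print(capacity_bytes)         # → 737280000, the "700 MB" figure above
```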
The logical format of an audio CD (officially Compact Disc Digital Audio or CD-DA) is described in a document produced in 1980 by the format's joint creators, Sony and Philips. The document is known colloquially as the Red Book CD-DA after the colour of its cover. The format is a two-channel 16-bit PCM encoding at a 44.1 kHz sampling rate per channel. Four-channel sound was to be an allowable option within the Red Book format, but has never been implemented. Monaural audio has no existing standard on a Red Book CD; thus, mono source material is usually presented as two identical channels in a standard Red Book stereo track (i.e., mirrored mono); an MP3 CD, however, can have audio file formats with mono sound.
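The Red Book parameters above fix the audio bit rate; the arithmetic is a one-liner:

```python
# Red Book CD-DA bit rate: 16-bit PCM, two channels, 44.1 kHz sampling.
sample_rate_hz = 44_100
bits_per_sample = 16
channels = 2
bit_rate = sample_rate_hz * bits_per_sample * channels
print(bit_rate)   # → 1411200 bits per second, about 1.4 Mbit/s
```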
Compact Disc + Graphics is a special audio compact disc that contains graphics data in addition to the audio data on the disc. The disc can be played on a regular audio CD player, but when played on a special CD+G player, it can output a graphics signal (typically, the CD+G player is hooked up to a television set or a computer monitor); these graphics are almost exclusively used to display lyrics on a television set for karaoke performers to sing along with. The CD+G format takes advantage of the channels R through W. These six bits store the graphics information.
SVCD has two-thirds the resolution of DVD, and over 2.7 times the resolution of VCD. One CD-R disc can hold up to 60 minutes of standard quality SVCD-format video. While no specific limit on SVCD video length is mandated by the specification, one must lower the video bit rate, and therefore quality, to accommodate very long videos. It is usually difficult to fit much more than 100 minutes of video onto one SVCD without incurring significant quality loss, and many hardware players are unable to play video with an instantaneous bit rate lower than 300 to 600 kilobits per second.
Photo CD is a system designed by Kodak for digitizing and storing photos on a CD. Launched in 1992, the discs were designed to hold nearly 100 high-quality images, scanned prints and slides using special proprietary encoding. Photo CDs are defined in the Beige Book and conform to the CD-ROM XA and CD-i Bridge specifications as well. They are intended to play on CD-i players, Photo CD players and any computer with the suitable software irrespective of the operating system. The images can also be printed out on photographic paper with a special Kodak machine. This format is not to be confused with Kodak Picture CD, which is a consumer product in CD-ROM format.
The Red Book audio specification, except for a simple "anti-copy" statement in the subcode, does not include any copy protection mechanism. Beginning at least as early as 2001, record companies attempted to market "copy-protected" non-standard compact discs, which cannot be ripped, or copied, to hard drives or easily converted to MP3s. One major drawback of these copy-protected discs is that most will not play on computer CD-ROM drives, or on some standalone CD players that use CD-ROM mechanisms. Philips has stated that such discs are not permitted to bear the trademarked Compact Disc Digital Audio logo because they violate the Red Book specifications. Numerous copy-protection systems have been countered by readily available, often free, software.
A transistor is a semiconductor device used to amplify or switch electronic signals and electrical power. It is composed of semiconductor material with at least three terminals for connection to an external circuit. A voltage or current applied to one pair of the transistor's terminals changes the current through another pair of terminals. Because the controlled (output) power can be higher than the controlling (input) power, a transistor can amplify a signal. Today, some transistors are packaged individually, but many more are found embedded in integrated circuits.
The transistor is the fundamental building block of modern electronic devices, and is ubiquitous in modern electronic systems. First conceived by Julius Lilienfeld in 1926 and practically implemented in 1947 by American physicists John Bardeen, Walter Brattain, and William Shockley, the transistor revolutionized the field of electronics, and paved the way for smaller and cheaper radios, calculators, and computers, among other things. The transistor is on the list of IEEE milestones in electronics, and Bardeen, Brattain, and Shockley shared the 1956 Nobel Prize in Physics for their achievement.
The thermionic triode, a vacuum tube invented in 1907, enabled amplified radio technology and long-distance telephony. The triode, however, was a fragile device that consumed a lot of power. Physicist Julius Edgar Lilienfeld filed a patent for a field-effect transistor (FET) in Canada in 1925, which was intended to be a solid-state replacement for the triode. Lilienfeld also filed identical patents in the United States in 1926 and 1928. However, Lilienfeld did not publish any research articles about his devices nor did his patents cite any specific examples of a working prototype. Because the production of high-quality semiconductor materials was still decades away, Lilienfeld's solid-state amplifier ideas would not have found practical use in the 1920s and 1930s, even if such a device had been built. In 1934, German inventor Oskar Heil patented a similar device.
From November 17, 1947 to December 23, 1947, John Bardeen and Walter Brattain at AT&T's Bell Labs in the United States performed experiments and observed that when two gold point contacts were applied to a crystal of germanium, a signal was produced with an output power greater than the input. Solid State Physics Group leader William Shockley saw the potential in this, and over the next few months worked to greatly expand the knowledge of semiconductors. The term transistor was coined by John R. Pierce as a contraction of the term transresistance. According to Lillian Hoddeson and Vicki Daitch, authors of a biography of John Bardeen, Shockley had proposed that Bell Labs' first patent for a transistor should be based on the field effect and that he be named as the inventor. Having unearthed Lilienfeld's patents, which had gone into obscurity years earlier, lawyers at Bell Labs advised against Shockley's proposal because the idea of a field-effect transistor that used an electric field as a "grid" was not new. Instead, what Bardeen, Brattain, and Shockley invented in 1947 was the first point-contact transistor. In acknowledgement of this accomplishment, Shockley, Bardeen, and Brattain were jointly awarded the 1956 Nobel Prize in Physics "for their researches on semiconductors and their discovery of the transistor effect."
In 1948, the point-contact transistor was independently invented by German physicists Herbert Mataré and Heinrich Welker while working at the Compagnie des Freins et Signaux, a Westinghouse subsidiary located in Paris. Mataré had previous experience in developing crystal rectifiers from silicon and germanium in the German radar effort during World War II. Using this knowledge, he began researching the phenomenon of "interference" in 1947. By June 1948, witnessing currents flowing through point-contacts, Mataré produced consistent results using samples of germanium produced by Welker, similar to what Bardeen and Brattain had accomplished earlier in December 1947. Realizing that Bell Labs' scientists had already invented the transistor before them, the company rushed to get its "transistron" into production for amplified use in France's telephone network.
Although several companies each produce over a billion individually packaged (known as discrete) transistors every year, the vast majority of transistors are now produced in integrated circuits (often shortened to IC, microchips or simply chips), along with diodes, resistors, capacitors and other electronic components, to produce complete electronic circuits. A logic gate consists of up to about twenty transistors whereas an advanced microprocessor, as of 2009, can use as many as 3 billion transistors (MOSFETs). "About 60 million transistors were built in 2002… for [each] man, woman, and child on Earth."
The essential usefulness of a transistor comes from its ability to use a small signal applied between one pair of its terminals to control a much larger signal at another pair of terminals. This property is called gain. It can produce a stronger output signal, a voltage or current, which is proportional to a weaker input signal; that is, it can act as an amplifier. Alternatively, the transistor can be used to turn current on or off in a circuit as an electrically controlled switch, where the amount of current is determined by other circuit elements.
There are two types of transistors, which have slight differences in how they are used in a circuit. A bipolar transistor has terminals labeled base, collector, and emitter. A small current at the base terminal (that is, flowing between the base and the emitter) can control or switch a much larger current between the collector and emitter terminals. For a field-effect transistor, the terminals are labeled gate, source, and drain, and a voltage at the gate can control a current between source and drain.
In a grounded-emitter transistor circuit, such as the light-switch circuit shown, as the base voltage rises, the emitter and collector currents rise exponentially. The collector voltage drops because of reduced resistance from collector to emitter. If the voltage difference between the collector and emitter were zero (or near zero), the collector current would be limited only by the load resistance (light bulb) and the supply voltage. This is called saturation because current is flowing from collector to emitter freely. When saturated, the switch is said to be on.
Providing sufficient base drive current is a key problem in the use of bipolar transistors as switches. The transistor provides current gain, allowing a relatively large current in the collector to be switched by a much smaller current into the base terminal. The ratio of these currents varies depending on the type of transistor, and even for a particular type, varies depending on the collector current. In the example light-switch circuit shown, the resistor is chosen to provide enough base current to ensure the transistor will be saturated.
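The resistor choice described above is a short Ohm's-law calculation. A minimal sketch, with all component values hypothetical rather than taken from any particular circuit:

```python
# Sizing a base resistor to saturate an NPN switch; all values hypothetical.
v_supply = 5.0      # logic-level voltage driving the base, volts
v_be = 0.7          # typical base-emitter drop when conducting, volts
i_collector = 0.1   # load (collector) current to be switched, amps
beta_min = 100      # minimum guaranteed current gain of the transistor
overdrive = 5.0     # drive the base harder than beta alone requires

i_base = overdrive * i_collector / beta_min   # 5 mA of base drive
r_base = (v_supply - v_be) / i_base           # voltage across the resistor / current
print(round(r_base))                          # → 860 (ohms)
```

The overdrive factor reflects the point made above: beta varies with device and collector current, so the resistor is chosen conservatively to guarantee saturation.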
In a switching circuit, the idea is to simulate, as near as possible, the ideal switch having the properties of open circuit when off, short circuit when on, and an instantaneous transition between the two states. Parameters are chosen such that the "off" output is limited to leakage currents too small to affect connected circuitry; the resistance of the transistor in the "on" state is too small to affect circuitry; and the transition between the two states is fast enough not to have a detrimental effect.
Bipolar transistors are so named because they conduct by using both majority and minority carriers. The bipolar junction transistor, the first type of transistor to be mass-produced, is a combination of two junction diodes, and is formed of either a thin layer of p-type semiconductor sandwiched between two n-type semiconductors (an n–p–n transistor), or a thin layer of n-type semiconductor sandwiched between two p-type semiconductors (a p–n–p transistor). This construction produces two p–n junctions: a base–emitter junction and a base–collector junction, separated by a thin region of semiconductor known as the base region (two junction diodes wired together without sharing an intervening semiconducting region will not make a transistor).
BJTs have three terminals, corresponding to the three layers of semiconductor—an emitter, a base, and a collector. They are useful in amplifiers because the currents at the emitter and collector are controllable by a relatively small base current. In an n–p–n transistor operating in the active region, the emitter–base junction is forward biased (electrons and holes recombine at the junction), and electrons are injected into the base region. Because the base is narrow, most of these electrons will diffuse into the reverse-biased (electrons and holes are formed at, and move away from the junction) base–collector junction and be swept into the collector; perhaps one-hundredth of the electrons will recombine in the base, which is the dominant mechanism in the base current. By controlling the number of electrons that can leave the base, the number of electrons entering the collector can be controlled. Collector current is approximately β (common-emitter current gain) times the base current. It is typically greater than 100 for small-signal transistors but can be smaller in transistors designed for high-power applications.
In a FET, the drain-to-source current flows via a conducting channel that connects the source region to the drain region. The conductivity is varied by the electric field that is produced when a voltage is applied between the gate and source terminals; hence the current flowing between the drain and source is controlled by the voltage applied between the gate and source. As the gate–source voltage (VGS) is increased, the drain–source current (IDS) increases exponentially for VGS below threshold, and then at a roughly quadratic rate (IDS ∝ (VGS − VT)², where VT is the threshold voltage at which drain current begins) in the "space-charge-limited" region above threshold. This quadratic behavior is not observed in modern devices, for example at the 65 nm technology node.
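The square-law region can be sketched numerically; the threshold voltage and transconductance parameter below are illustrative values, not from any datasheet, and subthreshold conduction is simply ignored:

```python
def drain_current(v_gs, v_th=0.7, k=2.0e-3):
    """Square-law (long-channel) saturation-region drain current, in amps.

    v_th and k are assumed illustrative values; real short-channel
    devices deviate from this quadratic model, as noted in the text.
    """
    if v_gs <= v_th:
        return 0.0   # subthreshold leakage ignored in this sketch
    return k * (v_gs - v_th) ** 2

# 1 V of gate overdrive gives k * 1^2 = 2 mA in this model.
print(drain_current(1.7))
```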
FETs are divided into two families: junction FET (JFET) and insulated gate FET (IGFET). The IGFET is more commonly known as a metal–oxide–semiconductor FET (MOSFET), reflecting its original construction from layers of metal (the gate), oxide (the insulation), and semiconductor. Unlike IGFETs, the JFET gate forms a p–n diode with the channel which lies between the source and drain. Functionally, this makes the n-channel JFET the solid-state equivalent of the vacuum tube triode which, similarly, forms a diode between its grid and cathode. Also, both devices operate in the depletion mode, they both have a high input impedance, and they both conduct current under the control of an input voltage.
FETs are further divided into depletion-mode and enhancement-mode types, depending on whether the channel is turned on or off with zero gate-to-source voltage. For enhancement mode, the channel is off at zero bias, and a gate potential can "enhance" the conduction. For the depletion mode, the channel is on at zero bias, and a gate potential (of the opposite polarity) can "deplete" the channel, reducing conduction. For either mode, a more positive gate voltage corresponds to a higher current for n-channel devices and a lower current for p-channel devices. Nearly all JFETs are depletion-mode because the diode junctions would forward bias and conduct if they were enhancement-mode devices; most IGFETs are enhancement-mode types.
The bipolar junction transistor (BJT) was the most commonly used transistor in the 1960s and 70s. Even after MOSFETs became widely available, the BJT remained the transistor of choice for many analog circuits such as amplifiers because of their greater linearity and ease of manufacture. In integrated circuits, the desirable properties of MOSFETs allowed them to capture nearly all market share for digital circuits. Discrete MOSFETs can be applied in transistor applications, including analog circuits, voltage regulators, amplifiers, power transmitters and motor drivers.
The Pro Electron standard, the European Electronic Component Manufacturers Association part numbering scheme, begins with two letters: the first gives the semiconductor material (A for germanium, B for silicon, and C for materials such as GaAs); the second denotes the intended use (A for diode, C for general-purpose transistor, etc.). A three-digit sequence number (or one letter and two digits, for industrial types) follows; in early devices this indicated the case type. Suffixes may follow: a letter to show a gain grouping (e.g. the "C" in BC549C often means high hFE), or other codes to show gain (e.g. BC327-25) or voltage rating (e.g. BUK854-800A).
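The two-letter prefix scheme can be illustrated with a minimal decoder; only the letters mentioned above are mapped, and the real Pro Electron tables cover many more codes:

```python
# Minimal Pro Electron prefix decoder; covers only the letters named above.
MATERIAL = {"A": "germanium", "B": "silicon", "C": "gallium arsenide (and similar)"}
FUNCTION = {"A": "diode", "C": "general-purpose transistor"}

def decode_prefix(part_number):
    """Return (material, function) for the first two letters of a part number."""
    material = MATERIAL.get(part_number[0], "unknown")
    function = FUNCTION.get(part_number[1], "unknown")
    return material, function

print(decode_prefix("BC549C"))
# → ('silicon', 'general-purpose transistor')
```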
The JEDEC EIA370 transistor device numbers usually start with "2N", indicating a three-terminal device (dual-gate field-effect transistors are four-terminal devices, so begin with 3N), then a 2, 3 or 4-digit sequential number with no significance as to device properties (although early devices with low numbers tend to be germanium). For example, 2N3055 is a silicon n–p–n power transistor, 2N1301 is a p–n–p germanium switching transistor. A letter suffix (such as "A") is sometimes used to indicate a newer variant, but rarely gain groupings.
Manufacturers of devices may have their own proprietary numbering system, for example CK722. Since devices are second-sourced, a manufacturer's prefix (like "MPF" in MPF102, which originally would denote a Motorola FET) now is an unreliable indicator of who made the device. Some proprietary naming schemes adopt parts of other naming schemes, for example a PN2222A is a (possibly Fairchild Semiconductor) 2N2222A in a plastic case (but a PN108 is a plastic version of a BC108, not a 2N108, while the PN100 is unrelated to other xx100 devices).
The junction forward voltage is the voltage applied to the emitter–base junction of a BJT in order to make the base conduct a specified current. The current increases exponentially as the junction forward voltage is increased. The values given in the table are typical for a current of 1 mA (the same values apply to semiconductor diodes). The lower the junction forward voltage the better, as this means that less power is required to "drive" the transistor. The junction forward voltage for a given current decreases with increase in temperature. For a typical silicon junction the change is −2.1 mV/°C. In some circuits special compensating elements (sensistors) must be used to compensate for such changes.
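The quoted temperature coefficient makes the drift easy to estimate. A sketch, where the room-temperature forward voltage is an assumed value rather than a measurement:

```python
def junction_forward_voltage(v_at_25c, temp_c, tempco_v_per_c=-2.1e-3):
    """Estimate a silicon junction's forward voltage at temp_c.

    Uses the -2.1 mV/degC coefficient quoted above; v_at_25c is an
    assumed room-temperature value for a fixed junction current.
    """
    return v_at_25c + tempco_v_per_c * (temp_c - 25.0)

# A junction dropping 0.70 V at 25 degC drops about 0.595 V at 75 degC.
print(round(junction_forward_voltage(0.70, 75.0), 3))
```

This drift is why, as the text notes, some circuits need compensating elements: a fixed drive voltage would otherwise push an exponentially larger current through the junction as it warms.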
Because electron mobility is higher than hole mobility in all semiconductor materials, a given bipolar n–p–n transistor tends to be faster than an equivalent p–n–p transistor. GaAs has the highest electron mobility of the three semiconductors mentioned above, which is why it is used in high-frequency applications. A relatively recent FET development, the high-electron-mobility transistor (HEMT), has a heterostructure (a junction between different semiconductor materials) of aluminium gallium arsenide (AlGaAs) and gallium arsenide (GaAs), which has twice the electron mobility of a GaAs–metal barrier junction. Because of their high speed and low noise, HEMTs are used in satellite receivers working at frequencies around 12 GHz. HEMTs based on gallium nitride and aluminium gallium nitride (AlGaN/GaN HEMTs) provide still higher electron mobility and are being developed for various applications.
Discrete transistors are individually packaged transistors. Transistors come in many different semiconductor packages (see image). The two main categories are through-hole (or leaded), and surface-mount, also known as surface-mount device (SMD). The ball grid array (BGA) is the latest surface-mount package (currently only for large integrated circuits). It has solder "balls" on the underside in place of leads. Because they are smaller and have shorter interconnections, SMDs have better high-frequency characteristics but lower power rating.
In the Pre-Modern era, many people's sense of self and purpose was often expressed via a faith in some form of deity, be that in a single God or in many gods. Pre-modern cultures are not, however, thought to have fostered a sense of distinct individuality. Religious officials, who often held positions of power, were the spiritual intermediaries to the common person; it was only through these intermediaries that the general masses had access to the divine. Tradition was sacred to ancient cultures and unchanging, and the social order of ceremony and morals in a culture could be strictly enforced.
The term "modern" was coined in the 16th century to indicate present or recent times (ultimately derived from the Latin adverb modo, meaning "just now"). The European Renaissance (about 1420–1630), which marked the transition between the Late Middle Ages and Early Modern times, started in Italy and was spurred in part by the rediscovery of classical art and literature, as well as the new perspectives gained from the Age of Discovery and the invention of the telescope and microscope, which expanded the borders of thought and knowledge.
The term "Early Modern" was introduced in the English language in the 1930s to distinguish the time between what we call the Middle Ages and the late Enlightenment (around 1800), when the meaning of the term Modern Ages was developing its contemporary form. It is important to note that these terms stem from European history. In other parts of the world, such as Asia and the Muslim countries, the terms are applied in a very different way, often in the context of their contact with European culture in the Age of Discovery.
In the Contemporary era, there were various socio-technological trends. In the 21st century and the late modern world, the Information Age and computers came to the forefront: not completely ubiquitous, but often present in daily life. The development of Eastern powers was of note, with China and India becoming more powerful. In the Eurasian theater, the European Union and the Russian Federation were two recently developed forces. A concern for the Western world, if not the whole world, was the late modern form of terrorism and the warfare that has resulted from contemporary terrorist acts.
In Asia, various Chinese dynasties and Japanese shogunates controlled the Asian sphere. In Japan, the Edo period from 1600 to 1868 is also referred to as the early modern period, and in Korea the span from the rise of the Joseon Dynasty to the enthronement of King Gojong is referred to as the early modern period. In the Americas, Native Americans had built large and varied civilizations, including the Aztec Empire and alliance, the Inca civilization, the Mayan Empire and cities, and the Chibcha Confederation. In the West, the European kingdoms were engaged in reformation and expansion. Russia reached the Pacific coast in 1647 and consolidated its control over the Russian Far East in the 19th century.
In China, urbanization increased as the population grew and as the division of labor grew more complex. Large urban centers, such as Nanjing and Beijing, also contributed to the growth of private industry. In particular, small-scale industries grew up, often specializing in paper, silk, cotton, and porcelain goods. For the most part, however, relatively small urban centers with markets proliferated around the country. Town markets mainly traded food, with some necessary manufactures such as pins or oil. Despite the xenophobia and intellectual introspection characteristic of the increasingly popular new school of neo-Confucianism, China under the early Ming dynasty was not isolated. Foreign trade and other contacts with the outside world, particularly Japan, increased considerably. Chinese merchants explored all of the Indian Ocean, reaching East Africa with the treasure voyages of Zheng He.
The Qing dynasty (1644–1911) was founded by the Manchus after the fall of the Ming, the last Han Chinese dynasty. The Manchus were formerly known as the Jurchens. When Beijing was captured by Li Zicheng's peasant rebels in 1644, the Chongzhen Emperor, the last Ming emperor, committed suicide. The Manchus then allied with former Ming general Wu Sangui and seized control of Beijing, which became the new capital of the Qing dynasty. The Manchus adopted the Confucian norms of traditional Chinese government in their rule of China proper. Schoppa, the editor of The Columbia Guide to Modern Chinese History, argues, "A date around 1780 as the beginning of modern China is thus closer to what we know today as historical 'reality'. It also allows us to have a better baseline to understand the precipitous decline of the Chinese polity in the nineteenth and twentieth centuries."
Society in the Japanese "Tokugawa period" (Edo society), unlike the shogunates before it, was based on the strict class hierarchy originally established by Toyotomi Hideyoshi. The daimyo, or lords, were at the top, followed by the warrior-caste of samurai, with the farmers, artisans, and traders ranking below. In some parts of the country, particularly smaller regions, daimyo and samurai were more or less identical, since daimyo might be trained as samurai, and samurai might act as local lords. Otherwise, the largely inflexible nature of this social stratification system unleashed disruptive forces over time. Taxes on the peasantry were set at fixed amounts which did not account for inflation or other changes in monetary value. As a result, the tax revenues collected by the samurai landowners were worth less and less over time. This led to numerous confrontations between noble but impoverished samurai and well-to-do peasants, ranging from simple local disturbances to much bigger rebellions. None, however, proved compelling enough to seriously challenge the established order until the arrival of foreign powers.
On the Indian subcontinent, the Mughal Empire ruled most of India in the early 18th century. The "classic period" ended with the death and defeat of Emperor Aurangzeb in 1707 by the rising Hindu Maratha Empire, although the dynasty continued for another 150 years. During this period, the Empire was marked by a highly centralized administration connecting the different regions. All the significant monuments of the Mughals, their most visible legacy, date to this period, which was characterised by the expansion of Persian cultural influence in the Indian subcontinent, with brilliant literary, artistic, and architectural results. The Maratha Empire was located in the southwest of present-day India and expanded greatly under the rule of the Peshwas, the prime ministers of the Maratha empire. In 1761, the Maratha army lost the Third Battle of Panipat, which halted imperial expansion, and the empire was then divided into a confederacy of Maratha states.
The development of New Imperialism saw the conquest of nearly all eastern hemisphere territories by colonial powers. The commercial colonization of India is variously taken to have commenced in 1757, after the Battle of Plassey, when the Nawab of Bengal surrendered his dominions to the British East India Company; in 1765, when the Company was granted the diwani, or the right to collect revenue, in Bengal and Bihar; or in 1772, when the Company established a capital in Calcutta, appointed its first Governor-General, Warren Hastings, and became directly involved in governance.
The Maratha states, following the Anglo-Maratha Wars, eventually lost to the British East India Company in 1818 with the Third Anglo-Maratha War. Company rule lasted until 1858, when, following the Indian Rebellion of 1857 and under the Government of India Act 1858, the British government assumed the task of directly administering India in the new British Raj. In 1819 Stamford Raffles established Singapore as a key trading post for Britain in its rivalry with the Dutch. However, the rivalry cooled in 1824 when an Anglo-Dutch treaty demarcated their respective interests in Southeast Asia. From the 1850s onwards, the pace of colonization shifted to a significantly higher gear.
The Dutch East India Company (1800) and British East India Company (1858) were dissolved by their respective governments, which took over the direct administration of the colonies. Only Thailand was spared the experience of foreign rule, although it too was greatly affected by the power politics of the Western powers. Colonial rule had a profound effect on Southeast Asia. While the colonial powers profited much from the region's vast resources and large market, colonial rule did develop the region to a varying extent.
Many major events caused Europe to change around the start of the 16th century, starting with the Fall of Constantinople in 1453, the fall of Muslim Spain and the discovery of the Americas in 1492, and Martin Luther's Protestant Reformation in 1517. In England the modern period is often dated to the start of the Tudor period with the victory of Henry VII over Richard III at the Battle of Bosworth in 1485. Early modern European history is usually seen to span from the start of the 15th century, through the Age of Reason and the Age of Enlightenment in the 17th and 18th centuries, until the beginning of the Industrial Revolution in the late 18th century.
Russia experienced territorial growth through the 17th century, which was the age of Cossacks. Cossacks were warriors organized into military communities, resembling pirates and pioneers of the New World. In 1648, the peasants of Ukraine joined the Zaporozhian Cossacks in rebellion against Poland-Lithuania during the Khmelnytsky Uprising, because of the social and religious oppression they suffered under Polish rule. In 1654 the Ukrainian leader, Bohdan Khmelnytsky, offered to place Ukraine under the protection of the Russian Tsar, Aleksey I. Aleksey's acceptance of this offer led to another Russo-Polish War (1654–1667). Finally, Ukraine was split along the river Dnieper, leaving the western part (or Right-bank Ukraine) under Polish rule and the eastern part (Left-bank Ukraine and Kiev) under Russian rule. Later, in 1670–71 the Don Cossacks led by Stenka Razin initiated a major uprising in the Volga region, but the Tsar's troops were successful in defeating the rebels. In the east, the rapid Russian exploration and colonisation of the huge territories of Siberia was led mostly by Cossacks hunting for valuable furs and ivory. Russian explorers pushed eastward primarily along the Siberian river routes, and by the mid-17th century there were Russian settlements in Eastern Siberia, on the Chukchi Peninsula, along the Amur River, and on the Pacific coast. In 1648 the Bering Strait between Asia and North America was passed for the first time by Fedot Popov and Semyon Dezhnyov.
Traditionally, the European intellectual transformation of and after the Renaissance bridged the Middle Ages and the Modern era. The Age of Reason in the Western world is generally regarded as being the start of modern philosophy, and a departure from the medieval approach, especially Scholasticism. Early 17th-century philosophy is often called the Age of Rationalism and is considered to succeed Renaissance philosophy and precede the Age of Enlightenment, but some consider it as the earliest part of the Enlightenment era in philosophy, extending that era to two centuries. The 18th century saw the beginning of secularization in Europe, rising to notability in the wake of the French Revolution.
The Age of Enlightenment was a period in Western philosophy and cultural life, centered upon the 18th century, in which reason was advocated as the primary source of legitimacy for authority. The Enlightenment gained momentum more or less simultaneously in many parts of Europe and America. Earlier, Renaissance humanism had developed as an intellectual movement and spread across Europe. The basic training of the humanist was to speak well and write (typically, in the form of a letter). The term umanista comes from the latter part of the 15th century; such people were associated with the studia humanitatis, a novel curriculum that was competing with the quadrivium and scholastic logic.
Renaissance humanism took a close study of the Latin and Greek classical texts, and was antagonistic to the values of scholasticism with its emphasis on the accumulated commentaries; and humanists were involved in the sciences, philosophies, arts and poetry of classical antiquity. They self-consciously imitated classical Latin and deprecated the use of medieval Latin. By analogy with the perceived decline of Latin, they applied the principle of ad fontes, or back to the sources, across broad areas of learning.
The quarrel of the Ancients and the Moderns was a literary and artistic quarrel that heated up in the early 1690s and shook the Académie française. The two opposing sides were the Ancients (Anciens), who constrained the choice of subjects to those drawn from the literature of Antiquity, and the Moderns (Modernes), who supported the merits of the authors of the century of Louis XIV. Fontenelle quickly followed with his Digression sur les anciens et les modernes (1688), in which he took the Modern side, pressing the argument that modern scholarship allowed modern man to surpass the ancients in knowledge.
The Scientific Revolution was a period when European ideas in classical physics, astronomy, biology, human anatomy, chemistry, and other classical sciences were rejected in favour of doctrines supplanting those that had prevailed from Ancient Greece to the Middle Ages, leading to a transition to modern science. This period saw a fundamental transformation in scientific ideas across physics, astronomy, and biology, in institutions supporting scientific investigation, and in the more widely held picture of the universe. Individuals started to question all manner of things, and it was this questioning that led to the Scientific Revolution, which in turn formed the foundations of contemporary sciences and the establishment of several modern scientific fields.
The changes were accompanied by violent turmoil which included the trial and execution of the king, vast bloodshed and repression during the Reign of Terror, and warfare involving every other major European power. Subsequent events that can be traced to the Revolution include the Napoleonic Wars, two separate restorations of the monarchy, and two additional revolutions as modern France took shape. In the following century, France would be governed at one point or another as a republic, constitutional monarchy, and two different empires.
The campaigns of French Emperor and General Napoleon Bonaparte characterized the Napoleonic Era. Born on Corsica as the French invaded, and dying suspiciously on the tiny British island of St. Helena, this brilliant commander controlled a French Empire that, at its height, ruled a large portion of Europe directly from Paris, while many of his friends and family ruled countries such as Spain, Poland, several parts of Italy, and many other kingdoms, republics, and dependencies. The Napoleonic Era changed the face of Europe forever, and old empires and kingdoms fell apart as a result of the mighty and "Glorious" surge of Republicanism.
Italian unification was the political and social movement that annexed different states of the Italian peninsula into the single state of Italy in the 19th century. There is a lack of consensus on the exact dates for the beginning and the end of this period, but many scholars agree that the process began with the end of Napoleonic rule and the Congress of Vienna in 1815, and approximately ended with the Franco-Prussian War in 1871, though the last città irredente did not join the Kingdom of Italy until after World War I.
Beginning the Age of Revolution, the American Revolution and the ensuing political upheaval during the last half of the 18th century saw the Thirteen Colonies of North America overthrow the governance of the Parliament of Great Britain, and then reject the British monarchy itself to become the sovereign United States of America. In this period the colonies first rejected the authority of the Parliament to govern them without representation, and formed self-governing independent states. The Second Continental Congress then joined together against the British to defend that self-governance in the armed conflict from 1775 to 1783 known as the American Revolutionary War (also called American War of Independence).
The American Revolution began with fighting at Lexington and Concord. On July 4, 1776, the colonies issued the Declaration of Independence, which proclaimed their independence from Great Britain and their formation of a cooperative union. In June 1776, Benjamin Franklin was appointed a member of the Committee of Five that drafted the Declaration of Independence. Although he was temporarily disabled by gout and unable to attend most meetings of the Committee, Franklin made several small changes to the draft sent to him by Thomas Jefferson.
The decolonization of the Americas was the process by which the countries in the Americas gained their independence from European rule. Decolonization began with a series of revolutions in the late 18th and early-to-mid-19th centuries. The Spanish American wars of independence were the numerous wars against Spanish rule in Spanish America that took place during the early 19th century, from 1808 until 1829, directly related to the Napoleonic French invasion of Spain. The conflict started with short-lived governing juntas established in Chuquisaca and Quito opposing the composition of the Supreme Central Junta of Seville.
When the Central Junta fell to the French, numerous new juntas appeared all across the Americas, eventually resulting in a chain of newly independent countries stretching from Argentina and Chile in the south to Mexico in the north. After the death of King Ferdinand VII in 1833, only Cuba and Puerto Rico remained under Spanish rule, until the Spanish–American War in 1898. Unlike the Spanish, the Portuguese did not divide their colonial territory in America. The captaincies they created were subordinated to a centralized administration in Salvador (later relocated to Rio de Janeiro), which reported directly to the Portuguese Crown until Brazil's independence in 1822, when it became the Empire of Brazil.
The first Industrial Revolution merged into the Second Industrial Revolution around 1850, when technological and economic progress gained momentum with the development of steam-powered ships and railways, and later in the 19th century with the internal combustion engine and electric power generation. The Second Industrial Revolution was a phase of the broader Industrial Revolution, sometimes labeled the separate Technological Revolution; from a technological and a social point of view there is no clean break between the two. Major innovations during the period occurred in the chemical, electrical, petroleum, and steel industries. Specific advancements included the introduction of oil-fired steam turbine and internal combustion driven steel ships, the development of the airplane, the practical commercialization of the automobile, mass production of consumer goods, the perfection of canning, mechanical refrigeration and other food preservation techniques, and the invention of the telephone.
Industrialization is the process of social and economic change whereby a human group is transformed from a pre-industrial society into an industrial one. It is a subdivision of a more general modernization process, where social change and economic development are closely related with technological innovation, particularly with the development of large-scale energy and metallurgy production. It is the extensive organization of an economy for the purpose of manufacturing. Industrialization also introduces a form of philosophical change, where people obtain a different attitude towards their perception of nature.
The modern petroleum industry started in 1846 with the discovery of the process of refining kerosene from coal by Nova Scotian Abraham Pineo Gesner. Ignacy Łukasiewicz improved Gesner's method to develop a means of refining kerosene from the more readily available "rock oil" ("petr-oleum") seeps in 1852 and the first rock oil mine was built in Bóbrka, near Krosno in Galicia in the following year. In 1854, Benjamin Silliman, a science professor at Yale University in New Haven, was the first to fractionate petroleum by distillation. These discoveries rapidly spread around the world.
Engineering achievements of the revolution ranged from electrification to developments in materials science. The advancements made a great contribution to the quality of life. In the first revolution, Lewis Paul was the original inventor of roller spinning, the basis of the water frame for spinning cotton in a cotton mill. Matthew Boulton and James Watt's improvements to the steam engine were fundamental to the changes brought by the Industrial Revolution in both the Kingdom of Great Britain and the world.
In the latter part of the second revolution, Thomas Alva Edison developed many devices that greatly influenced life around the world and is often credited with the creation of the first industrial research laboratory. In 1882, Edison switched on the world's first large-scale electrical supply network that provided 110 volts direct current to fifty-nine customers in lower Manhattan. Also toward the end of the second industrial revolution, Nikola Tesla made many contributions in the field of electricity and magnetism in the late 19th and early 20th centuries.
The European Revolutions of 1848, known in some countries as the Spring of Nations or the Year of Revolution, were a series of political upheavals throughout the European continent. Described as a revolutionary wave, the period of unrest began in France and then, further propelled by the French Revolution of 1848, soon spread to the rest of Europe. Although most of the revolutions were quickly put down, there was a significant amount of violence in many areas, with tens of thousands of people tortured and killed. While the immediate political effects of the revolutions were reversed, the long-term reverberations of the events were far-reaching.
Following the Enlightenment's ideas, the reformers looked to the Scientific Revolution and industrial progress to solve the social problems which arose with the Industrial Revolution. Newton's natural philosophy combined a mathematics of axiomatic proof with the mechanics of physical observation, yielding a coherent system of verifiable predictions and replacing a previous reliance on revelation and inspired truth. Applied to public life, this approach yielded several successful campaigns for changes in social policy.
Under Peter I (the Great), Russia was proclaimed an Empire in 1721 and became recognized as a world power. Ruling from 1682 to 1725, Peter defeated Sweden in the Great Northern War, forcing it to cede West Karelia and Ingria (two regions lost by Russia in the Time of Troubles), as well as Estland and Livland, securing Russia's access to the sea and sea trade. On the Baltic Sea Peter founded a new capital called Saint Petersburg, later known as Russia's Window to Europe. Peter the Great's reforms brought considerable Western European cultural influences to Russia. Catherine II (the Great), who ruled in 1762–96, extended Russian political control over the Polish-Lithuanian Commonwealth and incorporated most of its territories into Russia during the Partitions of Poland, pushing the Russian frontier westward into Central Europe. In the south, after successful Russo-Turkish Wars against the Ottoman Empire, Catherine advanced Russia's boundary to the Black Sea, defeating the Crimean khanate.
The Victorian era of the United Kingdom was the period of Queen Victoria's reign from June 1837 to January 1901. This was a long period of prosperity for the British people, as profits gained from the overseas British Empire, as well as from industrial improvements at home, allowed a large, educated middle class to develop. Some scholars would extend the beginning of the period—as defined by a variety of sensibilities and political games that have come to be associated with the Victorians—back five years to the passage of the Reform Act 1832.
In Britain's "imperial century", victory over Napoleon left Britain without any serious international rival, other than Russia in central Asia. Unchallenged at sea, Britain adopted the role of global policeman, a state of affairs later known as the Pax Britannica, and a foreign policy of "splendid isolation". Alongside the formal control it exerted over its own colonies, Britain's dominant position in world trade meant that it effectively controlled the economies of many nominally independent countries, such as China, Argentina and Siam, which has been generally characterized as "informal empire". Of note during this time was the Anglo-Zulu War, which was fought in 1879 between the British Empire and the Zulu Empire.
British imperial strength was underpinned by the steamship and the telegraph, new technologies invented in the second half of the 19th century, allowing it to control and defend the Empire. By 1902, the British Empire was linked together by a network of telegraph cables, the so-called All Red Line. Growing until 1922, the British Empire came to encompass around 13,000,000 square miles (34,000,000 km2) of territory and roughly 458 million people. The British established colonies in Australia in 1788, New Zealand in 1840 and Fiji in 1872, with much of Oceania becoming part of the British Empire.
The Bourbon Restoration followed the ousting of Napoleon I of France in 1814. The Allies restored the Bourbon Dynasty to the French throne. The ensuing period is called the Restoration, following French usage, and is characterized by a sharp conservative reaction and the re-establishment of the Roman Catholic Church as a power in French politics. The July Monarchy was a period of liberal constitutional monarchy in France under King Louis-Philippe starting with the July Revolution (or Three Glorious Days) of 1830 and ending with the Revolution of 1848. The Second Empire was the Imperial Bonapartist regime of Napoleon III from 1852 to 1870, between the Second Republic and the Third Republic, in France.
The Franco-Prussian War was a conflict between France and Prussia, in which Prussia was backed by the North German Confederation, of which it was a member, and the South German states of Baden, Württemberg and Bavaria. The complete Prussian and German victory brought about the final unification of Germany under King Wilhelm I of Prussia. It also marked the downfall of Napoleon III and the end of the Second French Empire, which was replaced by the Third Republic. As part of the settlement, almost all of the territory of Alsace-Lorraine was taken by Prussia to become a part of Germany, which it would retain until the end of World War I.
The major European powers laid claim to the areas of Africa where they could exhibit a sphere of influence. These claims did not need to be backed by substantial landholdings or treaties to be considered legitimate. The European power that demonstrated its control over a territory accepted the mandate to rule that region as a national colony, and the nation that held the claim developed and benefited from its colony's commercial interests without having to fear rival European competition. With the colonial claim came the underlying assumption that the European power that exerted control would use its mandate to offer protection and provide welfare for its colonial peoples; however, this principle remained more theory than practice. There were many documented instances of material and moral conditions deteriorating for native Africans in the late nineteenth and early twentieth centuries under European colonial rule, to the point where the colonial experience for them has been described as "hell on earth."
At the time of the Berlin Conference, Africa contained one-fifth of the world's population living in one-quarter of the world's land area. However, from Europe's perspective, they were dividing an unknown continent. European countries had established a few coastal colonies in Africa by the mid-nineteenth century, which included Cape Colony (Great Britain), Angola (Portugal), and Algeria (France), but until the late nineteenth century Europe largely traded with free African states without feeling the need for territorial possession. Until the 1880s most of Africa remained uncharted, with western maps from the period generally showing blank spaces for the continent's interior.
From the 1880s to 1914, the European powers expanded their control across the African continent, competing with each other for Africa’s land and resources. Great Britain controlled various colonial holdings in East Africa that spanned the length of the African continent from Egypt in the north to South Africa. The French gained major ground in West Africa, and the Portuguese held colonies in southern Africa. Germany, Italy, and Spain established a small number of colonies at various points throughout the continent, which included German East Africa (Tanganyika) and German Southwest Africa for Germany, Eritrea and Libya for Italy, and the Canary Islands and Rio de Oro in northwestern Africa for Spain. Finally, for King Leopold (ruled from 1865–1909), there was the large “piece of that great African cake” known as the Congo, which, unfortunately for the native Congolese, became his personal fiefdom to do with as he pleased in Central Africa. By 1914, almost the entire continent was under European control. Liberia, which was settled by freed American slaves in the 1820s, and Abyssinia (Ethiopia) in eastern Africa were the last remaining independent African states. (John Merriman, A History of Modern Europe, Volume Two: From the French Revolution to the Present, Third Edition (New York: W. W. Norton & Company, 2010), pp. 819–859).
Around the end of the 19th century and into the 20th century, the Meiji era was marked by the reign of the Meiji Emperor. During this time, Japan started its modernization and rose to world power status. This era name means "Enlightened Rule". In Japan, the Meiji Restoration started in the 1860s, marking the rapid modernization by the Japanese themselves along European lines. Much research has focused on the issues of discontinuity versus continuity with the previous Tokugawa Period. In the 1960s younger Japanese scholars, led by Irokawa Daikichi, reacted against the bureaucratic superstate and began searching for the historic role of the common people. They avoided the elite, and focused not on political events but on social forces and attitudes. They rejected both Marxism and modernization theory as alien and confining. They stressed the importance of popular energies in the development of modern Japan. They enlarged history by using the methods of social history. It was not until the beginning of the Meiji Era that the Japanese government began taking modernization seriously. Japan expanded its military production base by opening arsenals in various locations. The hyobusho (war office) was replaced with a War Department and a Naval Department. The samurai class suffered great disappointment in the following years.
Laws were instituted requiring every able-bodied male Japanese citizen, regardless of class, to serve a mandatory term of three years with the first reserves and two additional years with the second reserves. This conscription, a deathblow to the samurai warriors and their daimyo feudal lords, initially met resistance from peasant and warrior alike. The peasant class interpreted the term for military service, ketsu-eki ("blood tax"), literally, and attempted to avoid service by any means necessary. The Japanese government modelled its ground forces after the French military, and the French government contributed greatly to the training of Japanese officers: many were employed at the military academy in Kyoto, and many more were feverishly translating French field manuals for use in the Japanese ranks.
The Antebellum Period was an era of deepening division in the country over the growth of slavery in the American South and in the western territories of Kansas and Nebraska, a division that eventually led to the Civil War in 1861. The period is often considered to have begun with the Kansas–Nebraska Act of 1854,[citation needed] although it may have begun as early as 1812. It is also significant for marking the transition of American manufacturing into the Industrial Revolution.[citation needed]
Northern leaders agreed that victory would require more than the end of fighting: secession and Confederate nationalism had to be totally repudiated, and all forms of slavery or quasi-slavery had to be eliminated. Lincoln proved effective in mobilizing support for the war goals, raising and supplying large armies, avoiding foreign interference, and making the end of slavery a war goal. The Confederacy had a larger area than it could defend, and it failed to keep its ports open and its rivers clear. The North kept up the pressure while the South could barely feed and clothe its soldiers. Confederate soldiers, especially those in the East under the command of General Robert E. Lee, proved highly resourceful until they were finally overwhelmed by Generals Ulysses S. Grant and William T. Sherman in 1864–65. The Reconstruction Era (1863–77) began with the Emancipation Proclamation in 1863 and brought freedom, full citizenship, and the vote to Southern blacks. It was followed by a reaction that left blacks in a second-class status legally, politically, socially, and economically until the 1960s.
The Gilded Age, in the late 19th century, saw substantial population growth in the United States alongside extravagant displays of wealth and excess by America's upper class during the post-Civil War and post-Reconstruction era. The polarization of wealth derived primarily from industrial and population expansion. The businessmen of the Second Industrial Revolution created industrial towns and cities in the Northeast with new factories and contributed to the creation of an ethnically diverse industrial working class, which produced the wealth owned by rising super-rich industrialists and financiers called the "robber barons". An example is the company of John D. Rockefeller, an important figure in shaping the new oil industry: using highly effective tactics and aggressive practices, later widely criticized, his Standard Oil absorbed or destroyed most of its competition.
The creation of a modern industrial economy took place. With the creation of a transportation and communication infrastructure, the corporation became the dominant form of business organization, and a managerial revolution transformed business operations. In 1890, Congress passed the Sherman Antitrust Act, the source of all American anti-monopoly laws. The law forbade every contract, scheme, deal, or conspiracy to restrain trade, though the phrase "restraint of trade" remained subjective. By the beginning of the 20th century, per capita income and industrial production in the United States exceeded those of any other country except Britain. Long hours and hazardous working conditions led many workers to attempt to form labor unions despite strong opposition from industrialists and the courts. But the courts did protect the marketplace, declaring the Standard Oil group to be an "unreasonable" monopoly under the Sherman Antitrust Act in 1911 and ordering it to break up into 34 independent companies with different boards of directors.
Replacing the classical physics in use since the end of the scientific revolution, modern physics arose in the early 20th century with the advent of quantum physics, substituting mathematical studies for experimental studies and examining equations to build a theoretical structure.[citation needed] The old quantum theory was a collection of results that predate modern quantum mechanics but were never complete or self-consistent; this collection of heuristic prescriptions constituted the first corrections to classical mechanics. Outside the realm of quantum physics, the various aether theories of classical physics, which supposed a "fifth element" such as the luminiferous aether, were nullified by the Michelson–Morley experiment, an attempt to detect the motion of the Earth through the aether. In biology, Darwinism gained acceptance, promoting the concept of adaptation in the theory of natural selection. The fields of geology, astronomy, and psychology also made strides and gained new insights. In medicine, there were advances in medical theory and treatments.
Chinese philosophy began to integrate concepts of Western philosophy as steps toward modernization. By the time of the Xinhai Revolution in 1911, there were many calls, culminating in the May Fourth Movement, to completely abolish the old imperial institutions and practices of China. There were attempts to incorporate democracy, republicanism, and industrialism into Chinese philosophy, notably by Sun Yat-sen (Sūn Yìxiān, in one Mandarin form of the name) at the beginning of the 20th century; Mao Zedong (Máo Zédōng) added Marxist-Leninist thought. When the Communist Party of China took power, previous schools of thought, with the notable exception of Legalism, were denounced as backward and later even purged during the Cultural Revolution.
Enlightenment philosophy, developed over the preceding century, came under challenge in various quarters around the 1900s. Developed from earlier secular traditions, modern humanist ethical philosophies affirmed the dignity and worth of all people, based on the ability to determine right and wrong by appealing to universal human qualities, particularly rationality, without resorting to the supernatural or to alleged divine authority from religious texts. For liberal humanists such as Rousseau and Kant, the universal law of reason guided the way toward total emancipation from any kind of tyranny. These ideas were challenged, for example, by the young Karl Marx, who criticized the project of political emancipation (embodied in the form of human rights), asserting it to be symptomatic of the very dehumanization it was supposed to oppose. For Friedrich Nietzsche, humanism was nothing more than a secular version of theism; in his Genealogy of Morals, he argues that human rights exist as a means for the weak to collectively constrain the strong. On this view, such rights do not facilitate emancipation of life but rather deny it. In the 20th century, the notion that human beings are rationally autonomous was challenged by the idea that humans were driven by unconscious, irrational desires.
Albert Einstein is known for his theories of special relativity and general relativity. He also made important contributions to statistical mechanics, especially his mathematical treatment of Brownian motion, his resolution of the paradox of specific heats, and his connection of fluctuations and dissipation. Despite his reservations about its interpretation, Einstein also made contributions to quantum mechanics and, indirectly, quantum field theory, primarily through his theoretical studies of the photon.