Nintendo 64
The Nintendo 64 (officially abbreviated as N64, hardware model number prefix: NUS, stylized as NINTENDO64) is a home video game console developed and marketed by Nintendo. Named for its 64-bit central processing unit, it was released in June 1996 in Japan, September 1996 in North America, and March 1997 in Europe and Australia. It was the last major home console to use the cartridge as its primary storage format until the Nintendo Switch in 2017. The Nintendo 64 was discontinued in 2002 following the launch of its successor, the GameCube, in 2001.
Codenamed "Project Reality", the Nintendo 64 design was mostly complete by mid-1995, but its launch was delayed until 1996, when "Time" named it Machine of the Year. It was launched with three games: "Super Mario 64", "Pilotwings 64" (worldwide) and "Saikyō Habu Shōgi" (exclusive to Japan). As part of the fifth generation of gaming, the system competed primarily with the PlayStation and Sega Saturn. The suggested retail price at its United States launch was (), and 32.93 million units were sold worldwide. In 2015, IGN named it the ninth-greatest video game console of all time.
Around the end of the 1980s, Nintendo led the video game industry with its Nintendo Entertainment System (NES). Although the NES follow-up console, the Super NES (SNES), was successful, sales took a hit from the Japanese recession. Competition from long-time rival Sega, and relative newcomer Sony, emphasized Nintendo's need to develop a successor for the SNES, or risk losing market dominance to its competitors. Further complicating matters, Nintendo also faced a backlash from third-party developers unhappy with Nintendo's strict licensing policies.
Silicon Graphics, Inc. (SGI), a long-time leader in graphics visualization and supercomputing, was interested in expanding its business by adapting its technology into the higher volume realm of consumer products, starting with the video game market. Based upon its MIPS R4000 family of supercomputing and workstation CPUs, SGI developed a CPU requiring a fraction of the resources—consuming only 0.5 watts of power instead of 1.5 to 2 watts, with an estimated target price of instead of –200. The company created a design proposal for a video game system, seeking an already well established partner in that market. Jim Clark, founder of SGI, initially offered the proposal to Tom Kalinske, who was the CEO of Sega of America. The next candidate would be Nintendo.
The historical details of these preliminary negotiations were later disputed by the two competing suitors. Tom Kalinske said that he and Joe Miller of Sega of America were "quite impressed" with SGI's prototype, inviting their hardware team to travel from Japan to meet with SGI. The engineers from Sega Enterprises claimed that their evaluation of the early prototype had uncovered several unresolved hardware issues and deficiencies. Those were subsequently resolved, but Sega had already decided against SGI's design. Nintendo resisted that summary conclusion, arguing that the real reason for SGI's ultimate choice of partner was that Nintendo was a more appealing business partner than Sega: while Sega demanded exclusive rights to the chip, Nintendo was willing to license the technology on a non-exclusive basis. Michael Slater, publisher of "Microprocessor Report", said, "The mere fact of a business relationship there is significant because of Nintendo's phenomenal ability to drive volume. If it works at all, it could bring MIPS to levels of volume [SGI] never dreamed of".
Jim Clark met with Nintendo CEO Hiroshi Yamauchi in early 1993, thus initiating Project Reality. On August 23, 1993, the two companies announced a global joint development and licensing agreement surrounding Project Reality, projecting that the yet unnamed eventual product would be "developed specifically for Nintendo, will be unveiled in arcades in 1994, and will be available for home use by late 1995 ... below $250". This announcement coincided with Nintendo's August 1993 Shoshinkai trade show.
"Reality Immersion Technology" is the name SGI had given the set of core components, which would be first utilized in Project Reality: the MIPS R4300i CPU, the MIPS Reality Coprocessor, and the embedded software. Some chip technology and manufacturing was provided by NEC, Toshiba, and Sharp. SGI had recently acquired MIPS Computer Systems (renamed to MIPS Technologies), and the two worked together to be ultimately responsible for the design of the Reality Immersion Technology chips under engineering director Jim Foran and chief hardware architect Tim Van Hook.
The initial Project Reality game development platform was developed and sold by SGI in the form of its Onyx supercomputer, costing – and loaded with the namesake RealityEngine2 graphics boards and four 150 MHz R4400 CPUs. Its software included early Project Reality application and emulation APIs based on Performer and OpenGL. This graphics supercomputing platform served as the source design which SGI reduced down into the Reality Immersion Technology for Project Reality.
The Project Reality team prototyped a game controller for the development system by modifying a Super NES controller to have a primitive analog joystick and Z trigger. Under maximal secrecy even from the rest of the company, a LucasArts developer said his team would "furtively hide the prototype controller in a cardboard box while we used it. In answer to the inevitable questions about what we were doing, we replied jokingly that it was a new type of controller: a bowl of liquid that absorbed your thoughts through your fingertips. Of course, you had to think in Japanese..."
On June 23, 1994, Nintendo announced the new official name of the still unfinished console as "Ultra 64". The first group of elite developers selected by Nintendo was nicknamed the "Dream Team": Silicon Graphics, Inc.; Alias Research, Inc.; Software Creations; Rambus, Inc.; MultiGen, Inc.; Rare, Ltd. and Rare Coin-It Toys & Games, Inc.; WMS Industries, Inc.; Acclaim Entertainment, Inc.; Williams Entertainment, Inc.; Paradigm Simulation, Inc.; Spectrum Holobyte; DMA Design Ltd.; Angel Studios; Ocean; Time Warner Interactive; and Mindscape.
By purchasing and developing upon Project Reality's graphics supercomputing platform, Nintendo and its Dream Team could begin prototyping their games according to SGI's estimated console performance profile, prior to the finalization of the console hardware specifications. When the Ultra 64 hardware was finalized, that supercomputer-based prototyping platform was later supplanted by a much cheaper and fully accurate console simulation board to be hosted within a low-end SGI Indy workstation in July 1995. SGI's early performance estimates based upon its supercomputing platform were ultimately reported to have been fairly accurate to the final Ultra 64 product, allowing LucasArts developers to port their "Star Wars" game prototype to console reference hardware in only three days.
The console's design was publicly revealed for the first time in late Q2 1994. Images of the console displayed the Nintendo Ultra 64 logo and a ROM cartridge, but no controller. This prototype console's form factor would be retained by the product when it eventually launched. Having initially indicated the possibility of utilizing the increasingly popular CD-ROM if the medium's endemic performance problems were solved, the company now announced a much faster but space-limited cartridge-based system, which prompted open analysis by the gaming press. The system was frequently marketed as the world's first 64-bit gaming system, with advertisements often stating that the console was more powerful than the computers used for the first moon landing. Atari had already claimed to have made the first 64-bit game console with their Atari Jaguar, but the Jaguar only uses a general 64-bit architecture in conjunction with two 32-bit RISC processors and a 16/32-bit Motorola 68000.
Later in Q2 1994, Nintendo signed a licensing agreement with Midway's parent company which enabled Midway to develop and market arcade games with the Ultra 64 brand, and formed a joint venture company called "Williams/Nintendo" to market Nintendo-exclusive home conversions of these games. The result is two Ultra 64 branded arcade games, "Killer Instinct" and "Cruis'n USA". Not derived from Project Reality's console-based branch of Ultra 64, the arcade branch uses a different MIPS CPU, has no Reality Coprocessor, and uses onboard ROM chips and a hard drive instead of a cartridge. "Killer Instinct" features 3D character artwork pre-rendered into 2D form, and computer-generated movie backgrounds that are streamed off the hard drive and animated as the characters move horizontally.
Previously, the plan had been to release the console with the name "Ultra Famicom" in Japan and "Nintendo Ultra 64" in other markets. Rumors circulated attributing the name change to the possibility of legal action from Konami, which owned the Ultra Games trademark. Nintendo said that trademark issues were not a factor, and that the sole reason for any name change was to establish a single worldwide brand and logo for the console. The new global name "Nintendo 64" was proposed by "Earthbound" series developer Shigesato Itoi. The prefix for the model numbering scheme for hardware and software across the Nintendo 64 platform is "NUS-", a reference to the console's original name of "Nintendo Ultra Sixty-four".
The newly renamed Nintendo 64 console was fully unveiled to the public in playable form on November 24, 1995, at Nintendo's 7th Annual Shoshinkai trade show. Eager for a preview, "hordes of Japanese schoolkids huddled in the cold outside ... the electricity of anticipation clearly rippling through their ranks". "Game Zero" magazine disseminated photos of the event two days later. Official coverage by Nintendo followed later via the "Nintendo Power" website and print magazine.
The console was originally slated for release by Christmas of 1995. In May 1995, Nintendo delayed the release to April 1996. Consumers anticipating a Nintendo release the following year at a lower price than the competition reportedly reduced the sales of competing Sega and Sony consoles during the important Christmas shopping season. "Electronic Gaming Monthly" editor Ed Semrad even suggested that Nintendo may have announced the April 1996 release date with this end in mind, knowing in advance that the system would not be ready by that date.
In its explanation of the delay, Nintendo claimed it needed more time for Nintendo 64 software to mature, and for third-party developers to produce games. Adrian Sfarti, a former engineer for SGI, attributed the delay to hardware problems; he claimed that the chips underperformed in testing and were being redesigned. In 1996, the Nintendo 64's software development kit was completely redesigned as the Windows-based Partner-N64 system, by Kyoto Microcomputer, Co. Ltd. of Japan.
The Nintendo 64's release date was later delayed again, to June 23, 1996. Nintendo said the reason for this latest delay, and in particular the cancellation of plans to release the console in all markets worldwide simultaneously, was that the company's marketing studies now indicated that they would not be able to manufacture enough units to meet demand by April 1996, potentially angering retailers in the same way Sega had done with its surprise early launch of the Saturn in North America and Europe.
To counteract the possibility that gamers would grow impatient with the wait for the Nintendo 64 and purchase one of the several competing consoles already on the market, Nintendo ran ads for the system well in advance of its announced release dates, with slogans like "Wait for it..." and "Is it worth the wait? Only if you want the best!"
"Popular Electronics" called the launch a "much hyped, long-anticipated moment". Several months before the launch, "GamePro" reported that many gamers, including a large percentage of their own editorial staff, were already saying they favored the Nintendo 64 over the Saturn and PlayStation.
The console was first released in Japan on June 23, 1996. Though the initial shipment of 300,000 units sold out on the first day, Nintendo successfully avoided a repeat of the Super Famicom launch day pandemonium, in part by using a wider retail network which included convenience stores. The remaining 200,000 units of the first production run shipped on June 26 and 30, with almost all of them reserved ahead of time. In the months between the Japanese and North American launches, the Nintendo 64 saw brisk sales on the American grey market, with import stores charging as much as $699 plus shipping for the system. The Nintendo 64 was first sold in North America on September 26, 1996, though it had been advertised for the 29th. It was launched with just two games in the United States, "Pilotwings 64" and "Super Mario 64"; "Cruis'n USA" was pulled from the lineup less than a month before launch because it did not meet Nintendo's quality standards. In 1994, prior to the launch, Nintendo of America chairman Howard Lincoln emphasized the quality of first-party games, saying "... we're convinced that a few great games at launch are more important than great games mixed in with a lot of dogs". The PAL version of the console was released in Europe on March 1, 1997. According to Nintendo of America representatives, Nintendo had been planning a simultaneous launch in Japan, North America, and Europe, but market studies indicated that worldwide demand for the system far exceeded the number of units they could have ready by launch, potentially leading to consumer and retailer frustration.
Originally intended to be priced at , the console was ultimately launched at to make it competitive with Sony and Sega offerings, as both the Saturn and PlayStation had been lowered to $199.99 earlier that summer. Nintendo priced the console as an impulse purchase, a strategy from the toy industry. The price of the console in the United States was further reduced in August 1998.
The Nintendo 64's North American launch was backed with a $54 million marketing campaign by Leo Burnett Worldwide (meaning over $100 in marketing per North American unit that had been manufactured up to this point). While the competing Saturn and PlayStation both set teenagers and adults as their target audience, the Nintendo 64's target audience was pre-teens.
To boost sales during the slow post-Christmas season, Nintendo and General Mills worked together on a promotional campaign that appeared in early 1999. The advertisement by Saatchi and Saatchi, New York began on January 25 and encouraged children to buy Fruit by the Foot snacks for tips to help them with their Nintendo 64 games. Ninety different tips were available, with three variations of thirty tips each.
Nintendo advertised its Funtastic Series of peripherals with a $10 million print and television campaign that ran from February 28 to April 30, 2000, again handled by Leo Burnett Worldwide.
The Nintendo 64's central processing unit (CPU) is the NEC VR4300 at 93.75 MHz. "Popular Electronics" said it had power similar to the Pentium processors found in desktop computers. Except for its narrower 32-bit system bus, the VR4300 retained the computational abilities of the more powerful 64-bit MIPS R4300i, though software rarely took advantage of 64-bit data precision operations. Nintendo 64 games generally used faster and more compact 32-bit data-operations, as these were sufficient to generate 3D-scene data for the console's RSP (Reality Signal Processor) unit. In addition, 32-bit code executes faster and requires less storage space (which is at a premium on the Nintendo 64's cartridges).
In terms of its random-access memory (RAM), the Nintendo 64 is one of the first modern consoles to implement a unified memory subsystem, instead of having separate banks of memory for CPU, audio, and video, for example. The memory itself consists of 4 megabytes of Rambus RDRAM, expandable to 8 MB with the Expansion Pak. Rambus was quite new at the time and offered Nintendo a way to provide a large amount of bandwidth for a relatively low cost.
Audio may be processed by the Reality Coprocessor or the CPU and is output to a DAC with up to 48.0 kHz sample rate.
The system allows for video output in two formats: composite video and S-Video. The composite and S-Video cables are the same as those used with the preceding SNES and succeeding GameCube platforms.
The Nintendo 64 supports 16.8 million colors. The system can display resolutions from 320×240 up to 640×480 pixels. Most games that make use of the system's higher resolution 640x480 mode require use of the Expansion Pak RAM upgrade; several do not, such as Acclaim's "NFL Quarterback Club" series and EA Sports's second generation "Madden", "FIFA", "Supercross", and "NHL" games. The majority of games use the system's low resolution 320×240 mode. A number of games also support a video display ratio of up to using either anamorphic widescreen or letterboxing.
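A rough, back-of-the-envelope way to see why the high-resolution mode usually needs the extra memory is that the framebuffer alone grows fourfold when going from 320×240 to 640×480. The sketch below is an illustration under simplifying assumptions (16-bit color and double buffering, ignoring the Z-buffer and everything else the 4 MB of base RAM must hold), not a description of any specific game's memory layout.

```python
# Framebuffer size at the N64's two common resolutions (illustrative sketch only;
# assumes 16-bit color and double buffering, and ignores the Z-buffer, audio
# buffers, game data, and other uses of the console's RAM).
def framebuffer_bytes(width, height, bytes_per_pixel=2, buffers=2):
    return width * height * bytes_per_pixel * buffers

low = framebuffer_bytes(320, 240)    # 307,200 bytes, about 300 KiB
high = framebuffer_bytes(640, 480)   # 1,228,800 bytes, about 1,200 KiB
print(f"320x240: {low / 1024:.0f} KiB of the 4 MiB base RAM")
print(f"640x480: {high / 1024:.0f} KiB, a much larger share without the Expansion Pak")
```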
The Nintendo 64 is one of the first gaming consoles to have four controller ports. According to Shigeru Miyamoto, Nintendo opted to have four controller ports because the Nintendo 64 is the company's first console which can handle a four player split screen without significant slowdown.
The Nintendo 64 comes in several colors. The standard Nintendo 64 is dark gray, nearly black, and the controller is light gray (later releases in the U.S. and in Australia included a bonus second controller in Atomic Purple). Various colorations and special editions were released.
Most Nintendo 64 game cartridges are gray in color, but some games have a colored cartridge. Fourteen games have black cartridges, and other colors (such as yellow, blue, red, gold and green) were each used for six or fewer games. Several games, such as "", were released both in standard gray and in colored, limited edition versions.
The programming characteristics of the Nintendo 64 present unique challenges, with distinct potential advantages. "The Economist" described effective programming for the Nintendo 64 as being "horrendously complex". As with many other game consoles and other types of embedded systems, the Nintendo 64's architectural optimizations are uniquely acute, due to a combination of oversight on the part of the hardware designers, limitations on 3D technology of the time, and manufacturing capabilities.
As the Nintendo 64 reached the end of its lifecycle, hardware development chief Genyo Takeda repeatedly referred to the programming challenges using the word "hansei" ( "reflective regret"). Looking back, Takeda said "When we made Nintendo 64, we thought it was logical that if you want to make advanced games, it becomes technically more difficult. We were wrong. We now understand it's the cruising speed that matters, not the momentary flash of peak power".
Nintendo initially stated that while the Nintendo 64 units for each region use essentially identical hardware design, regional lockout chips would prevent games from one region from being played on a Nintendo 64 console from a different region. Following the North American launch, however, they admitted that the cartridges contain no such chips, and the regional lockout is enforced by differing notches in the back of the cartridges.
A total of 388 games were released for the console, though a few of these were sold exclusively in Japan. For comparison, rivals PlayStation and the Sega Saturn received around 1,100 games and 600 games respectively, while previous Nintendo consoles such as the NES and SNES had 768 and 725 games released in the United States. However, the Nintendo 64 game library included a high number of critically acclaimed and widely sold games. According to TRSTS reports, three of the top five best-selling games in the U.S. for December 1996 were Nintendo 64 games (the remaining two were Super NES games). "Super Mario 64" is the best selling game of the generation, with 11 million units sold, beating the PlayStation's "Gran Turismo" (at 10.85 million) and "Final Fantasy VII" (at 9.72 million) in sales. The game also received much praise from critics and helped to pioneer three-dimensional control schemes. "GoldenEye 007" was important in the evolution of the first-person shooter, and has been named one of the greatest in the genre. "" set the standard for future 3D action-adventure games and is considered by many to be one of the greatest games ever made. This trend followed Hiroshi Yamauchi's strategy, announced during his speech at the Nintendo 64's November 1995 unveiling, of restricting the number of titles produced for the Nintendo 64 so that developers would focus on developing games to a higher standard instead of trying to outdo their competitors with sheer quantity.
The most graphically demanding Nintendo 64 games, which arrived on larger 32 or 64 MB cartridges, are the most advanced and detailed of the 32-bit/64-bit generation. In order to maximize use of the Nintendo 64 hardware, developers had to create their own custom microcode. Nintendo 64 games running on custom microcode benefited from much higher polygon counts in tandem with more advanced lighting, animation, physics and AI routines than the 32-bit competition. "Conker's Bad Fur Day" is arguably the pinnacle of its generation, combining multicolored real-time lighting, real-time shadowing, and detailed texturing with a full in-game facial animation system. The Nintendo 64's graphics chip is capable of executing many more advanced and complex rendering techniques than its competitors. It is the first home console to feature trilinear filtering, which allowed textures to look very smooth. This contrasted with the Saturn and PlayStation, which used nearest-neighbor interpolation and produced more pixelated textures. Overall, however, the results of the Nintendo cartridge system were mixed, and this was tied primarily to its storage medium.
The smaller storage size of ROM cartridges limited the number of available textures. As a result, many games that used much smaller 8 or 12 MB cartridges were forced to stretch textures over larger surfaces. Compounded by a limit of 4,096 bytes of on-chip texture memory, the end result is often a distorted, out-of-proportion appearance. Many titles on larger 32 or 64 MB cartridges avoided this issue entirely; notable examples include "Resident Evil 2", "Sin and Punishment: Successor of the Earth", and "Conker's Bad Fur Day", whose extra ROM space allowed for more detailed graphics by utilizing multiple, multi-layered textures across all surfaces.
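As a rough illustration of how tight that 4,096-byte texture cache is, the sketch below tabulates a few square texture sizes at 16 bits per texel. It is a back-of-the-envelope calculation under stated assumptions, not a description of any particular game: real titles also spend texture memory on mipmaps, palettes, and tile format overhead, and can use other texel depths.

```python
# Which square textures fit in the N64's 4,096-byte on-chip texture memory?
# Illustrative sketch: assumes 16 bits per texel and ignores mipmap levels,
# palette storage, and tile/format overhead, all of which reduce usable space.
TMEM_BYTES = 4096

def texture_bytes(width, height, bits_per_texel=16):
    return width * height * bits_per_texel // 8

for size in (16, 32, 48, 64):
    b = texture_bytes(size, size)
    status = "fits" if b <= TMEM_BYTES else "too big"
    print(f"{size}x{size} @ 16 bpp = {b} bytes -> {status}")
# 32x32 textures (2,048 bytes) fit; 48x48 (4,608) and 64x64 (8,192) do not,
# which is one reason textures look stretched when spread over large surfaces.
```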
Nintendo 64 games are ROM cartridge based. Cartridge size varies from 4 to 64 MB. Many cartridges include the ability to save games internally.
Nintendo cited several advantages for making the Nintendo 64 cartridge-based. Primarily cited was the ROM cartridges' very fast load times in comparison to disc-based games. While loading screens appear in many PlayStation games, they are rare in Nintendo 64 games. Although vulnerable to long-term environmental damage, the cartridges are far more resistant to physical damage than compact discs. Nintendo also cited the fact that cartridges are more difficult to pirate than CDs, thus resisting copyright infringement, albeit at the expense of a lowered profit margin for Nintendo. While unauthorized N64 interface devices for the PC were later developed, these devices are rare compared to a regular CD drive used on the PlayStation, which suffered widespread copyright infringement.
On the downside, cartridges took longer to manufacture than CDs, with each production run (from order to delivery) taking two weeks or more. This meant that publishers of Nintendo 64 games had to attempt to predict demand for a game ahead of its release. They risked being left with a surplus of expensive cartridges for a failed game or a weeks-long shortage of product if they underestimated a game's popularity. The cost of producing a Nintendo 64 cartridge was also far higher than for a CD. Publishers passed these expenses onto the consumer. Nintendo 64 games cost an average of $10 more when compared to games produced for rival consoles. The higher cost also created the potential for much greater losses to the game's publisher in the case of a flop, making the less risky CD medium tempting for third party companies. Some third party companies also complained that they were at an unfair disadvantage against Nintendo first party developers when publishing games for the Nintendo 64, since Nintendo owned the manufacturing plant where cartridges for their consoles are made and therefore could sell their first party games at a lower price.
As fifth-generation games became more complex in content, sound and graphics, games began to exceed the limits of cartridge storage capacity. Nintendo 64 cartridges held a maximum of 64 MB of data, whereas CDs held 650 MB. The "Los Angeles Times" initially defended the quality control incentives associated with working with limited storage on cartridges, citing Nintendo's position that cartridge game developers tend to "place a premium on substance over flash", and noted that the launch titles lack the "poorly acted live-action sequences or half-baked musical overtures" which it says tend to be found on CD-ROM games. However, the cartridge's limitations became apparent with software ported from other consoles, so Nintendo 64 versions of cross-platform games were truncated or redesigned with the storage limits of a cartridge in mind. In practice this meant fewer textures and/or shorter music tracks, and full-motion video was usually not feasible for cutscenes unless heavily compressed and very brief.
The era's competing systems from Sony and Sega (the PlayStation and Saturn, respectively) used CD-ROM discs to store their games. As a result, game developers who had traditionally supported Nintendo game consoles were now developing games for the competition. Some third-party developers, such as Square and Enix, whose "Final Fantasy VII" and "Dragon Warrior VII" were initially planned for the Nintendo 64, switched to the PlayStation, citing the insufficient storage capacity of the N64 cartridges. Some who remained released fewer games to the Nintendo 64; Konami released fifty PlayStation games, but only twenty nine for the Nintendo 64. New Nintendo 64 game releases were infrequent while new games were coming out rapidly for the PlayStation.
Despite the difficulties with third parties, the Nintendo 64 supported popular games such as "GoldenEye 007", giving it a long market life. Additionally, Nintendo's strong first-party franchises, such as "Mario", had strong name-brand appeal, and second-party studios such as Rare contributed further notable titles.
Nintendo's controversial selection of the cartridge medium for the Nintendo 64 has been cited as a key factor in Nintendo losing its dominant position in the gaming market. The ROM cartridges were constrained by small capacity and high production expenses, compared to the compact disc format used by its chief competitors. Some of the cartridge's advantages were difficult for developers to exploit, requiring innovative solutions which only came late in the console's life cycle. Another technical drawback is the limited-size texture cache, which forced developers to use textures of limited dimensions and reduced color depth that appear stretched when covering in-game surfaces. Some third-party publishers that supported Nintendo's previous consoles reduced their output or stopped publishing for the console; the Nintendo 64's most successful games came from first-party or second-party studios.
Several Nintendo 64 games have been released for the Wii's and Wii U's Virtual Console service and are playable with the Classic Controller, GameCube controller, Wii U Pro Controller, or Wii U GamePad. There are some differences between these versions and the original cartridge versions. For example, the games run in a higher resolution and at a more consistent framerate than their Nintendo 64 counterparts. Some features, such as Rumble Pak functionality, are not available in the Wii versions. Some features are also changed on the Virtual Console releases. For example, the VC version of "Pokémon Snap" allows players to send photos through the Wii's message service, while "Wave Race 64"'s in-game content was altered due to the expiration of the Kawasaki license. Several games developed by Rare were released on Microsoft's Xbox Live Arcade service, including "Banjo-Kazooie", "Banjo-Tooie", and "Perfect Dark", following Microsoft's acquisition of Rareware in 2002. One exception is "Donkey Kong 64", released in April 2015 on the Wii U Virtual Console, as Nintendo retained the rights to the game.
Several unofficial emulators have been developed in order to play Nintendo 64 games on other platforms, such as PCs, Macs, and cell phones.
A number of accessories were produced for the Nintendo 64, including the Rumble Pak and the Transfer Pak.
The controller was shaped like an "M", employing a joystick in the center. "Popular Electronics" called its shape "evocative of some alien space ship". While noting that the three handles could be confusing, the magazine said "the separate grips allow different hand positions for various game types".
Nintendo released a peripheral platform called the 64DD, where "DD" stands for "Disk Drive". Connecting to the expansion slot at the bottom of the system, the 64DD turns the Nintendo 64 console into an Internet appliance, a multimedia workstation, and an expanded gaming platform. This large peripheral allows players to play Nintendo 64 disk-based games, capture images from an external video source, and connect to the now-defunct Japanese Randnet online service. Not long after its limited mail-order release, the peripheral was discontinued. Only nine games were released, including the four "Mario Artist" games ("Paint Studio", "Talent Studio", "Communication Kit", and "Polygon Studio"). Many more planned games were eventually released in cartridge format or on other game consoles. The 64DD and the accompanying Randnet online service were released only in Japan, though both had always been announced for America and Europe as well.
To illustrate the fundamental significance of the 64DD to all game development at Nintendo, lead designer Shigesato Itoi said: "I came up with a lot of ideas because of the 64DD. All things start with the 64DD. There are so many ideas I wouldn’t have been allowed to come up with if we didn’t have the 64DD". Shigeru Miyamoto concluded: "Almost every new project for the N64 is based on the 64DD. ... we’ll make the game on a cartridge first, then add the technology we’ve cultivated to finish it up as a full-out 64DD game".
The Nintendo 64 received generally positive reviews from critics. Reviewers praised the console's advanced 3D graphics and gameplay, while criticizing the lack of games. On G4techTV's "Filter", the Nintendo 64 was voted up to No. 1 by registered users.
In February 1996, "Next Generation" magazine called the Nintendo Ultra 64 the "best kept secret in videogames" and the "world's most powerful game machine". It called the system's November 24, 1995 unveiling at Shoshinkai "the most anticipated videogaming event of the 1990s, possibly of all time". Previewing the Nintendo 64 shortly prior to its launch, "Time" magazine praised the realistic movement and gameplay provided by the combination of fast graphics processing, pressure-sensitive controller, and the "Super Mario 64" game. The review praised the "fastest, smoothest game action yet attainable via joystick at the service of equally virtuoso motion", where "[f]or once, the movement on the screen feels real". Asked if gamers should buy a Nintendo 64 at launch, buy it later, or buy a competing system, a panel of six "GamePro" editors voted almost unanimously to buy at launch; one editor said gamers who already own a PlayStation and are on a limited budget should buy it later, and all others should buy it at launch.
At launch, the "Los Angeles Times" called the system "quite simply, the fastest, most graceful game machine on the market". Its form factor was described as small, light, and "built for heavy play by kids" unlike the "relatively fragile Sega Saturn". Showing concern for a major console product launch during a sharp, several-year long, decline in the game console market, the review said that the long-delayed Nintendo 64 was "worth the wait" in the company's pursuit of quality. Nintendo's "penchant for perfection" in game quality control was praised, though with concerns about having only two launch titles at retail and twelve expected by Christmas. Describing the quality control incentives associated with cartridge-based development, the "Times" cited Nintendo's position that cartridge game developers tend to "place a premium on substance over flash", and noted that the launch titles lack the "poorly acted live-action sequences or half-baked musical overtures" which it says tend to be found on CD-ROM games. Praising Nintendo's controversial choice of the cartridge medium with its "nonexistent" load times and "continuous, fast-paced action CD-ROMs simply cannot deliver", the review concluded that "the cartridge-based Nintendo 64 delivers blistering speed and tack-sharp graphics that are unheard of on personal computers and make competing 32-bit, disc-based consoles from Sega and Sony seem downright sluggish".
"Time" named it their 1996 Machine of the Year, saying the machine had "done to video-gaming what the 707 did to air travel". The magazine said the console achieved "the most realistic and compelling three-dimensional experience ever presented by a computer". "Time" credited the Nintendo 64 with revitalizing the video game market, "rescuing this industry from the dustbin of entertainment history". The magazine suggested that the Nintendo 64 would play a major role in introducing children to digital technology in the final years of the 20th century. The article concluded by saying the console had already provided "the first glimpse of a future where immensely powerful computing will be as common and easy to use as our televisions". The console also won the 1996 Spotlight Award for Best New Technology.
"Popular Electronics" complimented the system's hardware, calling its specifications "quite impressive". It found the controller "comfortable to hold, and the controls to be accurate and responsive".
In a 1997 year-end review, a team of five "Electronic Gaming Monthly" editors gave the Nintendo 64 scores of 8.0, 7.0, 7.5, 7.5, and 9.0. They highly praised the power of the hardware and the quality of the first-party games, especially those developed by Rare and Nintendo's in-house studios, but also commented that the third-party output to date had been mediocre and the first-party output was not enough by itself to provide Nintendo 64 owners with a steady stream of good games or a full breadth of genres.
Developer Factor 5, who created some of the system's most technologically advanced games along with the system's audio development tools for Nintendo, said, "[T]he N64 is really sexy because it combines the performance of an SGI machine with a cartridge. We're big arcade fans, and cartridges are still the best for arcade games or perhaps a really fast CD-ROM. But there's no such thing for consoles yet [as of 1998]".
Lee Hutchinson of Ars Technica, a Babbage's employee in the mid-1990s, had already experienced the PlayStation's strong debut and wondered whether Nintendo's new console could do as well:
The Nintendo 64 was in heavy demand upon its release. David Cole, industry analyst, said "You have people fighting to get it from stores". "Time" called the purchasing interest "that rare and glorious middle-class Cabbage Patch-doll frenzy". The magazine said celebrities Matthew Perry, Steven Spielberg, and Chicago Bulls players called Nintendo to ask for special treatment to get their hands on the console. The console had only two launch titles but "Super Mario 64" was its killer app, Hutchinson said:
During the system's first three days on the market, retailers sold 350,000 of 500,000 available console units. During its first four months, the console yielded 500,000 unit sales in North America. Nintendo successfully outsold Sony and Sega early in 1997 in the United States; and by the end of its first full year, 3.6 million units were sold in the U.S. "BusinessWire" reported that the Nintendo 64 was responsible for Nintendo's sales having increased by 156% by 1997.
After a strong launch year, the decision to use the cartridge format is said to have contributed to the diminished release pace and higher price of games compared to the competition, and thus Nintendo was unable to maintain its lead in the United States. The console would continue to outsell the Sega Saturn throughout the generation, but would trail behind the PlayStation.
Nintendo's efforts to attain dominance in the key 1997 holiday shopping season were also hurt by game delays. Five high-profile Nintendo games slated for release by Christmas 1997 ("", "Banjo-Kazooie", "Conker's Quest", "Yoshi's Story", and "Major League Baseball Featuring Ken Griffey Jr.") were delayed until 1998, and "Diddy Kong Racing" was announced at the last minute in an effort to somewhat fill the gaps.
In Japan, the console was not as successful, failing to outsell the PlayStation and even the Sega Saturn. Benimaru Itō, a developer for "EarthBound 64" and friend of Shigeru Miyamoto, speculated in 1997 that the Nintendo 64's lower popularity in Japan was due to the lack of role-playing video games.
Nintendo reported that the system's vintage hardware and software sales had ceased by 2004, three years after the GameCube's launch; as of December 31, 2009, the Nintendo 64 had yielded a lifetime total of 5.54 million system units sold in Japan, 20.63 million in the Americas, and 6.75 million in other regions, for a total of 32.93 million units.
The Nintendo 64 remains one of the most recognized video game systems in history and its games still have impact on the games industry. Designed in tandem with the controller, "Super Mario 64" and "" are widely considered by critics and the public to be two of the greatest and most influential games of all time. "GoldenEye 007" is one of the most influential games for the shooter genre.
The Aleck 64 is a Nintendo 64 design in arcade form, designed by Seta in cooperation with Nintendo, and sold from 1998 to 2003 only in Japan.
GNU nano
GNU nano is a text editor for Unix-like computing systems or operating environments using a command line interface. It emulates the Pico text editor, part of the Pine email client, and also provides additional functionality.
Unlike Pico, nano is licensed under the GNU General Public License (GPL). Released as free software by Chris Allegretta in 1999, nano became part of the GNU Project in 2001.
GNU nano was first created in 1999 with the name "TIP" ("TIP Isn't Pico"), by Chris Allegretta. His motivation was to create a free software replacement for Pico, which was not distributed under a free software license. The name was changed to nano on 10 January 2000 to avoid a naming conflict with the existing Unix utility "tip". The name comes from the system of SI prefixes, in which nano is 1000 times larger than pico. In February 2001, nano became a part of the GNU Project.
GNU nano implements several features that Pico lacks, including syntax highlighting, line numbers, regular expression search and replace, line-by-line scrolling, multiple buffers, indenting groups of lines, rebindable key support, and the undoing and redoing of edit changes.
On 11 August 2003, Chris Allegretta officially handed the source code maintenance of nano to David Lawrence Ramsey. On 20 December 2007, Ramsey stepped down as nano's maintainer.
With version 2.6.0 in June 2016, the then principal developer and the other active members of the nano project decided by consensus to leave the GNU Project, because of their objections to the Free Software Foundation's copyright assignment policy and their belief that decentralized copyright ownership does not impede the ability to enforce the GNU General Public License. The step was acknowledged by Debian and Arch Linux, while the GNU Project resisted the move and called it a "fork". On 19 August 2016, Chris Allegretta announced the return of the project to the GNU family, following concessions from GNU on copyright assignment for nano specifically; the return took effect when version 2.7.0 was released in September 2016.
GNU nano, like Pico, is keyboard-oriented, controlled with control keys. For example, Ctrl+O saves the current file and Ctrl+W opens the search menu. GNU nano puts a two-line "shortcut bar" at the bottom of the screen, listing many of the commands available in the current context. For a complete list, Ctrl+G opens the help screen.
Unlike Pico, nano uses meta keys to toggle its behavior. For example, Meta+S toggles smooth scrolling mode on and off. Almost all features that can be selected from the command line can be dynamically toggled. On keyboards without a meta key it is often mapped to the Escape key, so that to simulate, say, Meta+S, one has to press the Esc key, release it, and then press the S key.
GNU nano can also use pointer devices, such as a mouse, to activate functions that are on the shortcut bar, as well as position the cursor.
Nieuwe Waterweg
The Nieuwe Waterweg ("New Waterway") is a ship canal in the Netherlands from the Scheur (a branch of the Rhine-Meuse-Scheldt delta) west of the town of Maassluis to the North Sea at Hook of Holland: the Maasmond, where the Nieuwe Waterweg connects to the Maasgeul. It is the artificial mouth of the river Rhine.
The Nieuwe Waterweg, which opened in 1872 and has a length of approximately , was constructed to keep the city and port of Rotterdam accessible to seafaring vessels as the natural Meuse-Rhine branches silted up. The Waterway is a busy shipping route since it is the primary access to one of the busiest ports in the world, the Port of Rotterdam. At the entrance to the sea, a flood protection system called Maeslantkering has been installed (completed in 1997). There are no bridges or tunnels across the Nieuwe Waterweg.
By the middle of the 19th century, Rotterdam was already one of the largest port cities in the world, mainly because of transshipment of goods from Germany to Great Britain. The increase in shipping traffic created a capacity problem: there were too many branches in the river delta, making the port difficult to reach.
In 1863, a law was passed that allowed for the provision of a new canal for large ocean-going ships from Rotterdam to the North Sea. Hydraulic engineer Pieter Caland was commissioned to design a canal cutting through the Hook of Holland and to extend the mouth of the Rhine to the sea. Designs for such a channel had already been drawn up in 1731 by Nicolaas Samuelsz Cruquius, but by the 1860s the work could no longer be postponed if the decline of the harbour of Rotterdam was to be prevented.
Construction began on 31 October 1863. The first phase consisted of the expropriation of farm lands from Rozenburg to Hoek van Holland.
During the second phase two dikes were built parallel to each other, which took 2 years. Caland proposed to extend the dikes 2 km into the sea to disrupt the coastal sea currents and decrease silt deposits in the shipping lane.
Upon the completion of the dikes, the third phase began by the digging of the actual waterway. This began on 31 October 1866 and was completed three years later. The large amounts of removed soil were in turn used to reinforce other dams and dikes.
The last phase consisted of the removal of the dam separating the new waterway from the sea and river. In 1872, the Nieuwe Waterweg was completed and Rotterdam was easily accessible.
Because of the currents and erosion, the shipping lane has been widened somewhat. Yet because of the draft of today's supertankers, it needs to be dredged constantly.
In 1997, the last part of the Delta Works, the Maeslantkering, was put in operation near the mouth of the Nieuwe Waterweg. This storm surge barrier protects Rotterdam against north westerly Beaufort Force 10 to 12 storms.
The Nieuwe Waterweg gives the Port of Rotterdam its deep-water access to the North Sea. From Hook of Holland it stretches for approximately where the waterway continues as the Nieuwe Maas. The very first Nieuwe Waterweg—a breach through the dunes at Hook of Holland—was only long, but in around 1877 the channel was made much larger and wider and the current Nieuwe Waterweg was created. Currently the width of the channel is between and it is dredged to a depth of below Amsterdam Ordnance Datum.
It is this channel, together with the dredged channels in the North Sea, Maasgeul and Eurogeul, that allows ships like the MS "Berge Stahl" and MV "Vale Rio de Janeiro" (both with a draught of 23 meters) to enter Europoort.
The Dutch government agency Rijkswaterstaat is responsible for maintaining the channel.
The point where the Nieuwe Waterweg enters into the North Sea, between Hook of Holland on the north bank and the Maasvlakte to the south, is called the Maasmond. It is marked with two navigation light-towers called the Paddestoelen ("mushrooms"). The Nieuwe Waterweg connects, in the North Sea, to the Maasgeul. This dredged channel in the North Sea is being widened to to facilitate the largest container vessels for the new Maasvlakte 2 that opened in 2013.
Neijia
Neijia (内家) is a term in Chinese martial arts, grouping those styles that practice "neijing", usually translated as internal martial arts, occupied with spiritual, mental or qi-related aspects, as opposed to an "external" approach focused on physiological aspects. The distinction dates to the 17th century, but its modern application is due to publications by Sun Lutang, dating to the period of 1915 to 1928. Neijing is developed by using "neigong", or "internal exercises", as opposed to "external exercises" (wàigōng 外功).
Wudangquan is a more specific grouping of internal martial arts named for their association with the Taoist monasteries of the Wudang Mountains, Hubei in Chinese popular legend. These styles were enumerated by Sun Lutang as Taijiquan, Xingyiquan and Baguazhang, but most also include Bajiquan and the legendary Wudang Sword.
Some other Chinese arts, not in the Wudangquan group, such as Qigong, Liuhebafa, Bak Mei Pai, Zi Ran Men (Nature Boxing), Bok Foo Pai and Yiquan are frequently classified (or classify themselves) as "internal".
The term "neijia" and the distinction between internal and external martial arts first appears in Huang Zongxi's 1669 "Epitaph for Wang Zhengnan". Stanley Henning proposes that the "Epitaph"'s identification of the internal martial arts with the Taoism indigenous to China and of the external martial arts with the foreign Buddhism of Shaolin—and the Manchu Qing Dynasty to which Huang Zongxi was opposed—was an act of political defiance rather than one of technical classification.
In 1676 Huang Zongxi's son, Huang Baijia, who learned martial arts from Wang Zhengnan, compiled the earliest extant manual of internal martial arts, the "Nèijiā quánfǎ".
Beginning in 1914, Sun Lutang together with Yang Shao-hou, Yang Chengfu and Wu Chien-ch'uan taught t'ai chi to the public at the Beijing Physical Education Research Institute. Sun taught there until 1928, a seminal period in the development of modern Yang, Wu and Sun-style tai ji quan. Sun Lutang from 1915 also published martial arts texts.
In 1928, Kuomintang generals Li Jing Lin, Zhang Zi Jiang, and Fung Zu Ziang organized a national martial arts tournament in China; they did so to screen the best martial artists in order to begin building the Central Martial Arts Academy (Zhongyang Guoshuguan). The generals separated the participants of the tournament into Shaolin and Wudang. Wudang participants were recognized as having "internal" skills. These participants were generally practitioners of t'ai chi ch'uan, Xingyiquan and Baguazhang. All other participants competed under the classification of Shaolin. One of the winners in the "internal" category was the Baguazhang master Fu Chen Sung.
Sun Lutang identified the following as the criteria that distinguish an internal martial art:
Sun Lutang's eponymous style of t'ai chi ch'uan fuses principles from all three arts he named as neijia. Similarities in applying classical principles between taiji, xingyi, and baguazhang include: loosening (song) the soft tissue, opening the shoulder and hip gates or gua, cultivating qi or intrinsic energy, and issuing various jin or compounded energies. Taijiquan is characterized by an ever-present peng jin or expanding energy. Xingyiquan is characterized by its solely forward-moving pressing ji jin energy. Baguazhang is characterized by its "dragon body" circular movements. Some Chinese martial arts other than the ones Sun named also teach what are termed internal practices, despite being generally classified as external (e.g. Wing Chun, which is also said to have an internal aspect). Some non-Chinese martial arts also claim to be internal, for example Aikido and Kito Ryu. Many martial artists, especially outside of China, disregard the distinction entirely. Some neijia schools refer to their arts as "soft style" martial arts.
Internal styles focus on awareness of the spirit, mind, qi ("energy") and the use of relaxed leverage rather than muscular tension. Pushing hands is a training method commonly used in neijia arts to develop sensitivity and softness.
Much time may nevertheless be spent on basic physical training, such as stance training ("zhan zhuang"), stretching and strengthening of muscles, as well as on empty hand and weapon forms which can be quite demanding.
Some forms in internal styles are performed slowly, although some include sudden outbursts of explosive movements (fa jin), such as those the Chen style of Taijiquan is famous for teaching earlier than some other styles (e.g. Yang and Wu). The reason for the generally slow pace is to improve coordination and balance by increasing the work load, and to require the student to pay minute attention to their whole body and its weight as they perform a technique. At an advanced level, and in actual fighting, internal styles are performed quickly, but the goal is to learn to involve the entire body in every motion, to stay relaxed, with deep, controlled breathing, and to coordinate the motions of the body and the breathing accurately according to the dictates of the forms while maintaining perfect balance.
The reason for the label "internal", according to most schools, is that there is a focus on the internal aspects earlier in the training; once these internal relationships are apprehended (the theory goes), they are then applied to the external applications of the styles in question.
External styles are characterized by fast and explosive movements and a focus on physical strength and agility. External styles include both the traditional styles focusing on application and fighting, as well as the modern styles adapted for competition and exercise. Examples of external styles are Shaolinquan, with its direct explosive attacks and many Wushu forms that have spectacular aerial techniques. External styles begin with a training focus on muscular power, speed and application, and generally integrate their qigong aspects in advanced training, after their desired "hard" physical level has been reached.
Some say that there is no differentiation between the so-called internal and external systems of the Chinese martial arts, while other well known teachers have expressed differing opinions. For example, the Taijiquan teacher Wu Jianquan:
Those who practice Shaolinquan leap about with strength and force; people not proficient at this kind of training soon lose their breath and are exhausted. Taijiquan is unlike this. Strive for quiescence of body, mind and intention.
Many internal schools teach forms that are practised for health benefits only. Thus, T'ai chi ch'uan in spite of its roots in martial arts has become similar in scope to Qigong, the purely meditative practice based on notions of circulation of qi. With purely a health emphasis, T'ai chi classes have become popular in hospitals, clinics, community and senior centers in the last twenty years or so, as baby boomers age and the art's reputation as a low stress training for seniors became better known.
Traditionalists feel that a school not teaching martial aspects somewhere in their syllabus cannot be said to be actually teaching the art itself, that they have accredited themselves prematurely. Traditional teachers also believe that understanding the core theoretical principles of neijia and the ability to apply them are a necessary gateway to health benefits.
Internal styles have been associated in legend and in much popular fiction with the Taoist monasteries of the Wudang Mountains in central China.
Neijia are a common theme in Chinese Wuxia novels and films, and are usually represented as originating in Wudang or similar mythologies. Often, genuine internal practices are highly exaggerated to the point of making them seem miraculous, as in the novels of Jin Yong and Gu Long. Internal concepts have also been a source of comedy, such as in the films "Shaolin Soccer" and "Kung Fu Hustle".
In the Naruto series, Neji Hyūga's name and techniques were based on neijia.
Navigation
Navigation is a field of study that focuses on the process of monitoring and controlling the movement of a craft or vehicle from one place to another. The field of navigation includes four general categories: land navigation, marine navigation, aeronautic navigation, and space navigation.
It is also the term of art used for the specialized knowledge used by navigators to perform navigation tasks. All navigational techniques involve locating the navigator's position compared to known locations or patterns.
Navigation, in a broader sense, can refer to any skill or study that involves the determination of position and direction. In this sense, navigation includes orienteering and pedestrian navigation.
In the European medieval period, navigation was considered part of the set of "seven mechanical arts", none of which were used for long voyages across open ocean. Polynesian navigation is probably the earliest form of open-ocean navigation; it was based on memory and observation recorded on scientific instruments like the Marshall Islands Stick Charts of Ocean Swells. Early Pacific Polynesians used the motion of stars, weather, the position of certain wildlife species, or the size of waves to find the path from one island to another.
Maritime navigation using scientific instruments such as the mariner's astrolabe first occurred in the Mediterranean during the Middle Ages. Although land astrolabes were invented in the Hellenistic period and existed in classical antiquity and the Islamic Golden Age, the oldest record of a sea astrolabe is that of Majorcan astronomer Ramon Llull dating from 1295. The perfecting of this navigation instrument is attributed to Portuguese navigators during early Portuguese discoveries in the Age of Discovery. The earliest known description of how to make and use a sea astrolabe comes from Spanish cosmographer Martín Cortés de Albacar's "Arte de Navegar" ("The Art of Navigation") published in 1551, based on the principle of the archipendulum used in constructing the Egyptian pyramids.
Open-seas navigation using the astrolabe and the compass started during the Age of Discovery in the 15th century. The Portuguese began systematically exploring the Atlantic coast of Africa from 1418, under the sponsorship of Prince Henry. In 1488 Bartolomeu Dias reached the Indian Ocean by this route. In 1492 the Spanish monarchs funded Christopher Columbus's expedition to sail west to reach the Indies by crossing the Atlantic, which resulted in the Discovery of the Americas. In 1498, a Portuguese expedition commanded by Vasco da Gama reached India by sailing around Africa, opening up direct trade with Asia. Soon, the Portuguese sailed further eastward, to the Spice Islands in 1512, landing in China one year later.
The first circumnavigation of the earth was completed in 1522 with the Magellan-Elcano expedition, a Spanish voyage of discovery led by Portuguese explorer Ferdinand Magellan and completed by Spanish navigator Juan Sebastián Elcano after the former's death in the Philippines in 1521. The fleet of seven ships sailed from Sanlúcar de Barrameda in Southern Spain in 1519, crossed the Atlantic Ocean and after several stopovers rounded the southern tip of South America. Some ships were lost, but the remaining fleet continued across the Pacific making a number of discoveries including Guam and the Philippines. By then, only two galleons were left from the original seven. The "Victoria" led by Elcano sailed across the Indian Ocean and north along the coast of Africa, to finally arrive in Spain in 1522, three years after its departure. The "Trinidad" sailed east from the Philippines, trying to find a maritime path back to the Americas, but was unsuccessful. The eastward route across the Pacific, also known as the "tornaviaje" (return trip) was only discovered forty years later, when Spanish cosmographer Andrés de Urdaneta sailed from the Philippines, north to parallel 39°, and hit the eastward Kuroshio Current which took its galleon across the Pacific. He arrived in Acapulco on October 8, 1565.
The term stems from the 1530s, from Latin "navigationem" (nom. "navigatio"), from "navigatus", pp. of "navigare" "to sail, sail over, go by sea, steer a ship," from "navis" "ship" and the root of "agere" "to drive".
Roughly, the latitude of a place on Earth is its angular distance north or south of the equator. Latitude is usually expressed in degrees (marked with °) ranging from 0° at the Equator to 90° at the North and South poles. The latitude of the North Pole is 90° N, and the latitude of the South Pole is 90° S. Mariners calculated latitude in the Northern Hemisphere by sighting the North Star Polaris with a sextant and using sight reduction tables to correct for height of eye and atmospheric refraction. The height of Polaris in degrees above the horizon is the latitude of the observer, within a degree or so.
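The reduction of such a Polaris sight can be illustrated with a short calculation. The sketch below is illustrative only: the dip of the horizon uses the standard approximation of about 1.76 arc-minutes times the square root of the height of eye in metres, the refraction value is a rough constant rather than a figure from sight reduction tables, and the sextant altitude and height of eye are hypothetical.

import math

def latitude_from_polaris(sextant_altitude_deg, height_of_eye_m):
    # Dip of the sea horizon, in arc-minutes (standard approximation)
    dip_minutes = 1.76 * math.sqrt(height_of_eye_m)
    # Rough mean atmospheric refraction near mid-altitudes, in arc-minutes (illustrative constant)
    refraction_minutes = 1.0
    corrected_altitude = sextant_altitude_deg - (dip_minutes + refraction_minutes) / 60.0
    # To within a degree or so, the corrected altitude of Polaris equals the observer's latitude
    return corrected_altitude

print(round(latitude_from_polaris(41.3, 9.0), 1))   # hypothetical sight: about 41.2 degrees N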
Similar to latitude, the longitude of a place on Earth is the angular distance east or west of the prime meridian or Greenwich meridian. Longitude is usually expressed in degrees (marked with °) ranging from 0° at the Greenwich meridian to 180° east and west. Sydney, for example, has a longitude of about 151° east. New York City has a longitude of 74° west. For most of history, mariners struggled to determine longitude. Longitude can be calculated if the precise time of a sighting is known. Lacking that, one can use a sextant to take a lunar distance (also called "the lunar observation", or "lunar" for short) that, with a nautical almanac, can be used to calculate the time at zero longitude (see Greenwich Mean Time). For about a hundred years, from about 1767 until about 1850, mariners lacking a chronometer used the method of lunar distances to determine Greenwich time to find their longitude. A mariner with a chronometer could check its reading using a lunar determination of Greenwich time.
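The link between time and longitude can be sketched with a short calculation. This is a simplified illustration that ignores the equation of time; the observation values are hypothetical.

def longitude_from_local_noon(local_noon_gmt_hours):
    # The Sun crosses the observer's meridian at local apparent noon.
    # Each hour by which this differs from noon at Greenwich corresponds to 15 degrees of longitude.
    hours_after_greenwich_noon = local_noon_gmt_hours - 12.0
    return hours_after_greenwich_noon * 15.0   # positive values are west of Greenwich in this convention

# Hypothetical sight: local apparent noon observed when the chronometer reads 16.933 h GMT
print(round(longitude_from_local_noon(16.933), 1))   # about 74.0 degrees west, roughly New York's longitude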
In navigation, a rhumb line (or loxodrome) is a line crossing all meridians of longitude at the same angle, i.e. a path derived from a defined initial bearing. That is, upon taking an initial bearing, one proceeds along the same bearing, without changing the direction as measured relative to true or magnetic north.
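As a sketch of the underlying geometry, the constant bearing of a rhumb line between two points can be computed on a spherical Earth via the Mercator projection; the coordinates below are hypothetical and the sketch ignores the ellipsoidal corrections used in practice.

import math

def rhumb_bearing(lat1, lon1, lat2, lon2):
    # Constant true bearing (degrees) of the loxodrome from point 1 to point 2 on a sphere
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    # Difference in Mercator "stretched" latitude
    dpsi = math.log(math.tan(math.pi / 4 + phi2 / 2) / math.tan(math.pi / 4 + phi1 / 2))
    return math.degrees(math.atan2(dlon, dpsi)) % 360

print(round(rhumb_bearing(50.0, -5.0, 40.0, -74.0), 1))   # about 258 degrees, a west-southwesterly course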
Most modern navigation relies primarily on positions determined electronically by receivers collecting information from satellites. Most other modern techniques rely on crossing lines of position (LOPs).
A line of position can refer to two different things, either a line on a chart or a line between the observer and an object in real life. A bearing is a measure of the direction to an object. If the navigator measures the direction in real life, the angle can then be drawn on a nautical chart and the navigator will be on that line on the chart.
In addition to bearings, navigators also often measure distances to objects. On the chart, a distance produces a circle or arc of position. Circles, arcs, and hyperbolae of positions are often referred to as lines of position.
If the navigator draws two lines of position and they intersect, the vessel must be at that position. A fix is the intersection of two or more LOPs.
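A minimal sketch of how two bearing lines yield a fix on a flat local chart follows; positions are in nautical miles east and north of an arbitrary origin, and the charted objects and bearings are hypothetical.

import math

def fix_from_two_bearings(obj1, brg1_deg, obj2, brg2_deg):
    # A bearing taken FROM the ship TO an object places the ship on the reciprocal ray from that object.
    def reciprocal_direction(brg):
        rad = math.radians((brg + 180) % 360)
        return math.sin(rad), math.cos(rad)        # east, north components
    d1, d2 = reciprocal_direction(brg1_deg), reciprocal_direction(brg2_deg)
    # Solve obj1 + t1*d1 = obj2 + t2*d2 for t1 (a 2x2 linear system, by Cramer's rule)
    rx, ry = obj2[0] - obj1[0], obj2[1] - obj1[1]
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    t1 = (rx * (-d2[1]) - (-d2[0]) * ry) / det
    return obj1[0] + t1 * d1[0], obj1[1] + t1 * d1[1]

# Hypothetical: a lighthouse at (0, 0) bears 315 degrees true, a tower at (4, 0) bears 045 degrees true
print(fix_from_two_bearings((0.0, 0.0), 315.0, (4.0, 0.0), 45.0))   # fix at about (2.0, -2.0)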
If only one line of position is available, this may be evaluated against the dead reckoning position to establish an estimated position.
Lines (or circles) of position can be derived from a variety of sources:
Some methods are seldom used today, such as "dipping a light" to calculate the geographic range from an observer to a lighthouse.
Methods of navigation have changed through history. Each new method has enhanced the mariner's ability to complete his voyage. One of the most important judgments the navigator must make is the best method to use. Some types of navigation are depicted in the table.
The practice of navigation usually involves a combination of these different methods.
By making mental navigation checks, a pilot or navigator estimates tracks, distances, and altitudes, which helps the pilot avoid gross navigation errors.
Piloting (also called pilotage) involves navigating an aircraft by visual reference to landmarks, or a water vessel in restricted waters and fixing its position as precisely as possible at frequent intervals. More so than in other phases of navigation, proper preparation and attention to detail are important. Procedures vary from vessel to vessel, and between military, commercial, and private vessels.
A military navigation team will nearly always consist of several people. A military navigator might have bearing takers stationed at the gyro repeaters on the bridge wings for taking simultaneous bearings, while the civilian navigator must often take and plot them himself. While the military navigator will have a bearing book and someone to record entries for each fix, the civilian navigator will simply plot the bearings on the chart as they are taken and not record them at all.
If the ship is equipped with an ECDIS, it is reasonable for the navigator to simply monitor the progress of the ship along the chosen track, visually ensuring that the ship is proceeding as desired, checking the compass, sounder and other indicators only occasionally. If a pilot is aboard, as is often the case in the most restricted of waters, his judgement can generally be relied upon, further easing the workload. But should the ECDIS fail, the navigator will have to rely on his skill in the manual and time-tested procedures.
Celestial navigation systems are based on observation of the positions of the Sun, Moon, planets and navigational stars. Such systems are used for terrestrial as well as interstellar navigation. By knowing which point on the rotating earth a celestial object is above and measuring its height above the observer's horizon, the navigator can determine his distance from that subpoint. A nautical almanac and a marine chronometer are used to compute the subpoint on earth a celestial body is over, and a sextant is used to measure the body's angular height above the horizon. That height can then be used to compute distance from the subpoint to create a circular line of position. A navigator shoots a number of stars in succession to give a series of overlapping lines of position. Where they intersect is the celestial fix. The moon and sun may also be used. The sun can also be used by itself to shoot a succession of lines of position (best done around local noon) to determine a position.
In order to accurately measure longitude, the precise time of a sextant sighting (down to the second, if possible) must be recorded. Each second of error is equivalent to 15 seconds of longitude error, which at the equator is a position error of 0.25 nautical miles, about the accuracy limit of manual celestial navigation.
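The arithmetic behind that figure is straightforward, as this short calculation shows.

# The Earth turns 360 degrees in 24 hours: 15 degrees per hour, or 15 arc-seconds per second of time.
# At the equator, one arc-minute of longitude is one nautical mile.
seconds_of_clock_error = 1
arcseconds_of_longitude = seconds_of_clock_error * 15
position_error_nm = arcseconds_of_longitude / 60.0   # convert arc-seconds to arc-minutes (nautical miles)
print(position_error_nm)   # 0.25 nautical miles per second of clock error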
The spring-driven marine chronometer is a precision timepiece used aboard ship to provide accurate time for celestial observations. A chronometer differs from a spring-driven watch principally in that it contains a variable lever device to maintain even pressure on the mainspring, and a special balance designed to compensate for temperature variations.
A spring-driven chronometer is set approximately to Greenwich mean time (GMT) and is not reset until the instrument is overhauled and cleaned, usually at three-year intervals. The difference between GMT and chronometer time is carefully determined and applied as a correction to all chronometer readings. Spring-driven chronometers must be wound at about the same time each day.
Quartz crystal marine chronometers have replaced spring-driven chronometers aboard many ships because of their greater accuracy. They are maintained on GMT directly from radio time signals. This eliminates chronometer error and watch error corrections. Should the second hand be in error by a readable amount, it can be reset electrically.
The basic element for time generation is a quartz crystal oscillator. The quartz crystal is temperature compensated and is hermetically sealed in an evacuated envelope. A calibrated adjustment capability is provided to adjust for the aging of the crystal.
The chronometer is designed to operate for a minimum of 1 year on a single set of batteries. Observations may be timed and ship's clocks set with a comparing watch, which is set to chronometer time and taken to the bridge wing for recording sight times. In practice, a wrist watch coordinated to the nearest second with the chronometer will be adequate.
A stop watch, either spring wound or digital, may also be used for celestial observations. In this case, the watch is started at a known GMT by chronometer, and the elapsed time of each sight added to this to obtain GMT of the sight.
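The bookkeeping involved can be sketched as follows; the chronometer reading, chronometer error, and elapsed stopwatch time are hypothetical values.

from datetime import datetime, timedelta

chronometer_at_start = datetime(2024, 6, 1, 14, 3, 52)   # hypothetical reading when the stopwatch is started
chronometer_error    = timedelta(seconds=-7)              # chronometer known to be 7 seconds fast, so the correction is -7 s
gmt_at_start         = chronometer_at_start + chronometer_error

elapsed_to_sight = timedelta(minutes=4, seconds=31)       # stopwatch elapsed time when the sight is taken
gmt_of_sight = gmt_at_start + elapsed_to_sight
print(gmt_of_sight)                                       # 2024-06-01 14:08:16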
All chronometers and watches should be checked regularly with a radio time signal. Times and frequencies of radio time signals are listed in publications such as Radio Navigational Aids.
The second critical component of celestial navigation is to measure the angle formed at the observer's eye between the celestial body and the sensible horizon. The sextant, an optical instrument, is used to perform this function. The sextant consists of two primary assemblies. The frame is a rigid triangular structure with a pivot at the top and a graduated segment of a circle, referred to as the "arc", at the bottom. The second component is the index arm, which is attached to the pivot at the top of the frame. At the bottom is an endless vernier which clamps into teeth on the bottom of the "arc". The optical system consists of two mirrors and, generally, a low power telescope. One mirror, referred to as the "index mirror" is fixed to the top of the index arm, over the pivot. As the index arm is moved, this mirror rotates, and the graduated scale on the arc indicates the measured angle ("altitude").
The second mirror, referred to as the "horizon glass", is fixed to the front of the frame. One half of the horizon glass is silvered and the other half is clear. Light from the celestial body strikes the index mirror and is reflected to the silvered portion of the horizon glass, then back to the observer's eye through the telescope. The observer manipulates the index arm so the reflected image of the body in the horizon glass is just resting on the visual horizon, seen through the clear side of the horizon glass.
Adjustment of the sextant consists of checking and aligning all the optical elements to eliminate "index correction". Index correction should be checked, using the horizon or, preferably, a star, each time the sextant is used. The practice of taking celestial observations from the deck of a rolling ship, often through cloud cover and with a hazy horizon, is by far the most challenging part of celestial navigation.
Inertial navigation system (INS) is a dead reckoning type of navigation system that computes its position based on motion sensors. Before actually navigating, the initial latitude and longitude and the INS's physical orientation relative to the earth (e.g., north and level) are established. After alignment, an INS receives impulses from motion detectors that measure (a) the acceleration along three axes (accelerometers), and (b) rate of rotation about three orthogonal axes (gyroscopes). These enable an INS to continually and accurately calculate its current latitude and longitude (and often velocity).
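A one-dimensional sketch of the underlying dead reckoning computation is shown below: acceleration samples are integrated once to obtain velocity and again to obtain position. The sample values and time step are hypothetical, and a real INS works in three axes with gyroscope data and far more careful numerical treatment.

def integrate_ins(accelerations, dt, v0=0.0, x0=0.0):
    # Integrate accelerometer samples to track velocity and position over time
    v, x, track = v0, x0, []
    for a in accelerations:
        v += a * dt
        x += v * dt
        track.append((x, v))
    return track

# Hypothetical accelerometer samples in m/s^2 at one-second intervals
print(integrate_ins([0.5, 0.5, 0.0, -0.5], dt=1.0)[-1])   # final (position, velocity): (3.0, 0.5)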
Advantages over other navigation systems are that, once aligned, an INS does not require outside information. An INS is not affected by adverse weather conditions and it cannot be detected or jammed. Its disadvantage is that since the current position is calculated solely from previous positions and motion sensors, its errors are cumulative, increasing at a rate roughly proportional to the time since the initial position was input. Inertial navigation systems must therefore be frequently corrected with a location 'fix' from some other type of navigation system.
The first inertial system is considered to be the V-2 guidance system deployed by the Germans in 1942, although inertial sensors can be traced to the early 19th century. The advantages of INSs led to their use in aircraft, missiles, surface ships and submarines. For example, the U.S. Navy developed the Ships Inertial Navigation System (SINS) during the Polaris missile program to ensure a reliable and accurate navigation system to initialize its missile guidance systems. Inertial navigation systems were in wide use until satellite navigation systems (GPS) became available. INSs are still in common use on submarines (since GPS reception or other fix sources are not possible while submerged) and long-range missiles.
A radio direction finder (RDF) is a device for finding the direction to a radio source. Because radio signals can travel very long distances "over the horizon", RDF makes a particularly good navigation system for ships, and for aircraft that might be flying at a distance from land.
An RDF works by rotating a directional antenna and listening for the direction in which the signal from a known station comes through most strongly. This sort of system was widely used in the 1930s and 1940s. RDF antennas are easy to spot on German World War II aircraft as loops under the rear section of the fuselage, whereas most US aircraft enclosed the antenna in a small teardrop-shaped fairing.
In navigational applications, RDF signals are provided in the form of "radio beacons", the radio version of a lighthouse. The signal is typically a simple AM broadcast of a Morse code series of letters, which the RDF can tune in to see if the beacon is "on the air". Most modern detectors can also tune in to any commercial radio station, which is particularly useful due to their high power and location near major cities.
Decca, OMEGA, and LORAN-C are three similar hyperbolic navigation systems. Decca was a hyperbolic low frequency radio navigation system (also known as multilateration) that was first deployed during World War II when the Allied forces needed a system which could be used to achieve accurate landings. As was the case with Loran C, its primary use was for ship navigation in coastal waters. Fishing vessels were major post-war users, but it was also used on aircraft, including a very early (1949) application of moving-map displays. The system was deployed in the North Sea and was used by helicopters operating to oil platforms.
The OMEGA Navigation System was the first truly global radio navigation system for aircraft, operated by the United States in cooperation with six partner nations. OMEGA was developed by the United States Navy for military aviation users. It was approved for development in 1968 and promised a true worldwide oceanic coverage capability with only eight transmitters and the ability to achieve a four-mile (6 km) accuracy when fixing a position. Initially, the system was to be used for navigating nuclear bombers across the North Pole to Russia. Later, it was found useful for submarines. Due to the success of the Global Positioning System the use of Omega declined during the 1990s, to a point where the cost of operating Omega could no longer be justified. Omega was terminated on September 30, 1997 and all stations ceased operation.
LORAN is a terrestrial navigation system using low frequency radio transmitters that use the time interval between radio signals received from three or more stations to determine the position of a ship or aircraft. The current version of LORAN in common use is LORAN-C, which operates in the low frequency portion of the EM spectrum from 90 to 110 kHz. Many nations are users of the system, including the United States, Japan, and several European countries. Russia uses a nearly identical system in the same frequency range, called CHAYKA. LORAN use is in steep decline, with GPS being the primary replacement. However, there are attempts to enhance and re-popularize LORAN. LORAN signals are less susceptible to interference and can penetrate better into foliage and buildings than GPS signals.
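The core measurement can be illustrated with a short conversion: a difference in arrival time between two stations' signals fixes the difference in distance to those stations, which places the receiver on a hyperbola. The time difference used below is hypothetical.

SPEED_OF_LIGHT_M_PER_US = 299.792458    # metres travelled by a radio signal per microsecond

def range_difference_nm(time_difference_us):
    # Convert a measured time difference into a difference in distance to the two stations
    metres = time_difference_us * SPEED_OF_LIGHT_M_PER_US
    return metres / 1852.0               # metres to nautical miles

print(round(range_difference_nm(100.0), 1))   # a 100-microsecond time difference is about 16.2 nm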
When a vessel is within radar range of land or special radar aids to navigation, the navigator can take distances and angular bearings to charted objects and use these to establish arcs of position and lines of position on a chart. A fix consisting of only radar information is called a radar fix.
Types of radar fixes include "range and bearing to a single object," "two or more bearings," "tangent bearings," and "two or more ranges."
Parallel indexing is a technique defined by William Burger in the 1957 book "The Radar Observer's Handbook". This technique involves creating a line on the screen that is parallel to the ship's course, but offset to the left or right by some distance. This parallel line allows the navigator to maintain a given distance away from hazards.
Some techniques have been developed for special situations. One, known as the "contour method," involves marking a transparent plastic template on the radar screen and moving it to the chart to fix a position.
Another special technique, known as the Franklin Continuous Radar Plot Technique, involves drawing the path a radar object should follow on the radar display if the ship stays on its planned course. During the transit, the navigator can check that the ship is on track by checking that the pip lies on the drawn line.
Global Navigation Satellite System or GNSS is the term for satellite navigation systems that provide positioning with global coverage. A GNSS allows small electronic receivers to determine their location (longitude, latitude, and altitude) to within a few metres using time signals transmitted along a line of sight by radio from satellites. Receivers on the ground with a fixed position can also be used to calculate the precise time as a reference for scientific experiments.
As of October 2011, only the United States NAVSTAR Global Positioning System (GPS) and the Russian GLONASS were fully globally operational GNSSs. The European Union's Galileo positioning system, a next-generation GNSS, was then in its final deployment phase and became operational in 2016. China has indicated it may expand its regional Beidou navigation system into a global system.
More than two dozen GPS satellites are in medium Earth orbit, transmitting signals allowing GPS receivers to determine the receiver's location, speed and direction.
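A toy two-dimensional analogue of the positioning principle is sketched below: ranges to transmitters at known positions are intersected to recover the receiver's location. Real GPS works in three dimensions and also solves for the receiver's clock bias using a fourth satellite; the coordinates and ranges here are hypothetical.

def trilaterate(p1, r1, p2, r2, p3, r3):
    # Subtracting the three circle equations pairwise leaves two linear equations in x and y
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Hypothetical transmitters and measured ranges; the true receiver position is (3, 4)
print(trilaterate((0, 0), 5.0, (10, 0), 8.0623, (0, 10), 6.7082))   # approximately (3.0, 4.0)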
Since the first experimental satellite was launched in 1978, GPS has become an indispensable aid to navigation around the world, and an important tool for map-making and land surveying. GPS also provides a precise time reference used in many applications including scientific study of earthquakes, and synchronization of telecommunications networks.
Developed by the United States Department of Defense, GPS is officially named NAVSTAR GPS (NAVigation Satellite Timing And Ranging Global Positioning System). The satellite constellation is managed by the United States Air Force 50th Space Wing. The cost of maintaining the system is approximately US$750 million per year, including the replacement of aging satellites, and research and development. Despite this fact, GPS is free for civilian use as a public good.
Modern smartphones act as personal GPS navigators for civilians who own them. Overuse of these devices, whether in a vehicle or on foot, can lead to a relative inability to learn about navigated environments, resulting in sub-optimal navigation abilities when and if these devices become unavailable. Typically a compass is also provided to determine direction when not moving.
The day's work in navigation is a minimal set of tasks consistent with prudent navigation. The definition will vary on military and civilian vessels, and from ship to ship, but the traditional method takes a form resembling:
Navigation on ships is usually conducted on the bridge. It may also take place in an adjacent space, where chart tables and publications are available.
Passage planning or voyage planning is a procedure to develop a complete description of a vessel's voyage from start to finish. The plan includes leaving the dock and harbor area, the en route portion of the voyage, approaching the destination, and mooring. According to international law, a vessel's captain is legally responsible for passage planning; however, on larger vessels the task will be delegated to the ship's navigator.
Studies show that human error is a factor in 80 percent of navigational accidents and that in many cases the human making the error had access to information that could have prevented the accident. The practice of voyage planning has evolved from penciling lines on nautical charts to a process of risk management.
Passage planning consists of four stages: appraisal, planning, execution, and monitoring, which are specified in "International Maritime Organization Resolution A.893(21), Guidelines For Voyage Planning," and these guidelines are reflected in the local laws of IMO signatory countries (for example, Title 33 of the U.S. Code of Federal Regulations), and a number of professional books or publications. There are some fifty elements of a comprehensive passage plan depending on the size and type of vessel.
The appraisal stage deals with the collection of information relevant to the proposed voyage as well as ascertaining risks and assessing the key features of the voyage. This will involve considering the type of navigation required, e.g. ice navigation, the region the ship will be passing through, and the hydrographic information on the route. In the next stage, the written plan is created. The third stage is the execution of the finalised voyage plan, taking into account any special circumstances which may arise, such as changes in the weather, which may require the plan to be reviewed or altered. The final stage of passage planning consists of monitoring the vessel's progress in relation to the plan and responding to deviations and unforeseen circumstances.
Electronic integrated bridge concepts are driving future navigation system planning. Integrated systems take inputs from various ship sensors, electronically display positioning information, and provide control signals required to maintain a vessel on a preset course. The navigator becomes a system manager, choosing system presets, interpreting system output, and monitoring vessel response.
Navigation for cars and other land-based travel typically uses maps, landmarks, and in recent times computer navigation ("satnav", short for satellite navigation), as well as any means available on water.
Computerized navigation commonly relies on GPS for current location information, a navigational map database of roads and navigable routes, and uses algorithms related to the shortest path problem to identify optimal routes.
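A minimal sketch of such a route computation, using Dijkstra's algorithm over a toy road graph with illustrative distances, is shown below; real navigation systems use far larger graphs and additional cost factors such as travel time and traffic.

import heapq

def shortest_path(graph, start, goal):
    # Dijkstra's algorithm: repeatedly expand the cheapest frontier node until the goal is reached
    queue, visited = [(0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbour, distance in graph.get(node, {}).items():
            if neighbour not in visited:
                heapq.heappush(queue, (cost + distance, neighbour, path + [neighbour]))
    return float("inf"), []

roads = {"A": {"B": 5, "C": 2}, "B": {"D": 3}, "C": {"B": 1, "D": 7}, "D": {}}
print(shortest_path(roads, "A", "D"))   # (6, ['A', 'C', 'B', 'D'])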
Professional standards for navigation depend on the type of navigation and vary by country. For marine navigation, Merchant Navy deck officers are trained and internationally certified according to the STCW Convention. Leisure and amateur mariners may undertake lessons in navigation at local/regional training schools. Naval officers receive navigation training as part of their naval training.
In land navigation, courses and training are often provided to young persons as part of general or extra-curricular education. Land navigation is also an essential part of army training. Additionally, organisations such as the Scouts and the Duke of Edinburgh's Award (DoE) programme teach navigation to their students. Orienteering is a sport that requires navigational skills, using a map and compass to navigate from point to point in diverse and usually unfamiliar terrain while moving at speed.
In aviation, pilots undertake air navigation training as part of learning to fly.
Professional organisations also assist to encourage improvements in navigation or bring together navigators in learned environments. The Royal Institute of Navigation (RIN) is a learned society with charitable status, aimed at furthering the development of navigation on land and sea, in the air and in space. It was founded in 1947 as a forum for mariners, pilots, engineers and academics to compare their experiences and exchange information. In the US, the Institute of Navigation (ION) is a non-profit professional organisation advancing the art and science of positioning, navigation and timing.
Numerous nautical publications are available on navigation, which are published by professional sources all over the world. In the UK, the United Kingdom Hydrographic Office, the Witherby Publishing Group and the Nautical Institute provide numerous navigational publications, including the comprehensive Admiralty Manual of Navigation.
In the US, Bowditch's American Practical Navigator is a freely available encyclopedia of navigation issued by the US Government. | https://en.wikipedia.org/wiki?curid=21854 |
Cryptonomicon
Cryptonomicon is a 1999 novel by American author Neal Stephenson, set in two different time periods. One group of characters are World War II-era Allied codebreakers and tactical-deception operatives affiliated with the Government Code and Cypher School at Bletchley Park (UK), and disillusioned Axis military and intelligence figures. The second narrative is set in the late 1990s, with characters that are (in part) descendants of those of the earlier time period, who employ cryptologic, telecom, and computer technology to build an underground data haven in the fictional Sultanate of Kinakuta. Their goal is to facilitate anonymous Internet banking using electronic money and (later) digital gold currency, with a long-term objective to distribute Holocaust Education and Avoidance Pod (HEAP) media for instructing genocide-target populations on defensive warfare.
"Cryptonomicon" is closer to the genres of historical fiction and contemporary techno-thriller than to the science fiction of Stephenson's two previous novels, "Snow Crash" and "The Diamond Age". It features fictionalized characterizations of such historical figures as Alan Turing, Albert Einstein, Douglas MacArthur, Winston Churchill, Isoroku Yamamoto, Karl Dönitz, Hermann Göring, and Ronald Reagan, as well as some highly technical and detailed descriptions of modern cryptography and information security, with discussions of prime numbers, modular arithmetic, and Van Eck phreaking.
According to Stephenson:
The title is a play on "Necronomicon", the title of a book mentioned in the stories of horror writer H. P. Lovecraft:
The novel's Cryptonomicon, described as a "cryptographer's bible", is a fictional book summarizing America's knowledge of cryptography and cryptanalysis. Begun by John Wilkins (the Cryptonomicon is mentioned in "Quicksilver") and amended over time by William Friedman, Lawrence Waterhouse, and others, the Cryptonomicon is described by Katherine Hayles as "a kind of Kabala created by a Brotherhood of Code that stretches across centuries. To know its contents is to qualify as a Morlock among the Eloi, and the elite among the elite are those gifted enough actually to contribute to it."
The action takes place in two periods—World War II and the late 1990s, during the Internet boom and Asian financial crisis.
In 1942, Lawrence Pritchard Waterhouse, a young United States Navy code breaker and mathematical genius, is assigned to the newly formed joint British and American Detachment 2702. This ultra-secret unit's role is to hide the fact that Allied intelligence has cracked the German Enigma code. The detachment stages events, often behind enemy lines, that provide alternative explanations for the Allied intelligence successes. United States Marine sergeant Bobby Shaftoe, a veteran of China and Guadalcanal, serves in unit 2702, carrying out Waterhouse's plans. At the same time, Japanese soldiers, including mining engineer Goto Dengo, a "friendly enemy" of Shaftoe's, are assigned to build a mysterious bunker in the mountains in the Philippines as part of what turns out to be a literal suicide mission.
Circa 1997, Randy Waterhouse (Lawrence's grandson) joins his old role-playing game companion Avi Halaby in a new startup, providing Pinoy-grams (inexpensive, non-real-time video messages) to migrant Filipinos via new fiber-optic cables. The Epiphyte Corporation uses this income stream to fund the creation of a data haven in the nearby fictional Sultanate of Kinakuta. Vietnam veteran Doug Shaftoe, the son of Bobby Shaftoe, and his daughter Amy, do the undersea surveying for the cables and engineering work on the haven, which is overseen by Goto Furudenendu, heir-apparent to Goto Engineering. Complications arise as figures from the past reappear seeking gold or revenge.
Fictionalized versions of several historical figures appear in the World War II storyline:
The precise date of this storyline is not established, but the ages of characters, the technologies described, and certain date-specific references suggest that it is set in the late 1990s, at the time of the internet boom and the Asian financial crisis.
Portions of "Cryptonomicon" are notably complex. Several pages are spent explaining in detail some of the concepts behind cryptography and data storage security, including a description of Van Eck phreaking.
Stephenson also includes a precise description of (and even a Perl script for) the Solitaire (or Pontifex) cipher, a cryptographic algorithm developed by Bruce Schneier for use with a deck of playing cards, as part of the plot. The Perl script was written by the well-known cryptographer and cypherpunk Ian Goldberg.
#!/usr/bin/perl -s
# Solitaire (Pontifex) cipher; the key passphrase is the first argument, and -d selects decryption.
$f=$d?-1:1;$D=pack('C*',33..86);$p=shift;
$p=~y/a-z/A-Z/;$U='$D=~s/(.*)U$/U$1/;
$D=~s/U(.)/$1U/;';($V=$U)=~s/U/V/g;
$p=~s/[A-Z]/$k=ord($&)-64,&e/eg;$k=0;
while(<>){y/a-z/A-Z/;y/A-Z//dc;$o.=$_}$o.='X'
while length ($o)%5&&!$d;
$o=~s/./chr(($f*&e+ord($&)-13)%26+65)/eg;
$o=~s/X*$// if $d;$o=~s/.{5}/$& /g;
print"$o\n";sub v{$v=ord(substr($D,$_[0]))-32;
$v>53?53:$v}
sub e{eval"$U$V$V";$D=~s/(.*)([UV].*[UV])(.*)/$3$2$1/;
&v(53)?(&e):v(&v(0))}
Since the original printing of the script, Stephenson has made several changes. The first was to remediate a typesetting error on the eighth line that rendered the Perl script useless. The second was to add semicolons as line breaks, to make it easier for readers without fluency in Perl to transcribe and run the script themselves.
A verbose and annotated version of the script appears on Bruce Schneier's web site.
Several of the characters in the book communicate with each other through the use of one-time pads. A one-time pad (OTP) is an encryption technique that requires a single-use pre-shared key of at least the same length as the encrypted message.
The story posits a variation of the OTP technique in which there is no pre-shared key; instead, the key is generated algorithmically.
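A minimal sketch of the one-time pad idea, using XOR over bytes in Python, is shown below; this is an illustration of the general technique, not code from the novel. When the key is generated algorithmically rather than being truly random and pre-shared, as in the variation described in the story, the scheme is effectively a stream cipher and loses the OTP's information-theoretic security.

import secrets

def otp_encrypt(plaintext: bytes, key: bytes) -> bytes:
    # XOR each message byte with the corresponding key byte; the key must be at least as long as the message
    assert len(key) >= len(plaintext)
    return bytes(p ^ k for p, k in zip(plaintext, key))

otp_decrypt = otp_encrypt          # XOR with the same key reverses the operation

message = b"ATTACK AT DAWN"
key = secrets.token_bytes(len(message))    # single-use random key, as long as the message
ciphertext = otp_encrypt(message, key)
print(otp_decrypt(ciphertext, key))        # b'ATTACK AT DAWN'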
He also describes computers using a fictional operating system, Finux. The name is a thinly veiled reference to Linux, a kernel originally written by the Finnish native Linus Torvalds. Stephenson changed the name so as not to be creatively constrained by the technical details of Linux-based operating systems.
An excerpt from "Cryptonomicon" was originally published in the short story collection "Disco 2000", edited by Sarah Champion and published in 1998.
Stephenson's subsequent work, a trio of novels dubbed "The Baroque Cycle", provides part of the deep backstory to the characters and events featured in "Cryptonomicon". Set in the late 17th and early 18th centuries, the novels feature ancestors of several characters in "Cryptonomicon", as well as events and objects which affect the action of the later-set book. The subtext implies the existence of secret societies or conspiracies, and familial tendencies and groupings found within those darker worlds.
The short story "Jipi and the Paranoid Chip" appears to take place some time after the events of "Cryptonomicon". In the story, the construction of the Crypt has triggered economic growth in Manila and Kinakuta, in which Goto Engineering, and Homa/Homer Goto, a Goto family heir, are involved. The IDTRO ("Black Chamber") is also mentioned.
Stephenson's 2019 novel, "Fall; or, Dodge in Hell", is promoted as a sequel to "Reamde" (2011), but as the story unfolds, it is revealed that "Fall", "Reamde", "Cryptonomicon" and "The Baroque Cycle" are all set in the same fictional universe, with references to the Waterhouse, Shaftoe and Hacklheber families, as well as Societas Eruditorum and Epiphyte Corporation. Two "Wise" entities from "The Baroque Cycle" also appear in "Fall," including Enoch Root.
Peter Thiel states in his book "Zero to One" that "Cryptonomicon" was required reading during the early days of PayPal.
According to critic Jay Clayton, the book is written for a technical or geek audience. Despite the technical detail, the book drew praise from both Stephenson's science fiction fan base and literary critics and buyers. In his book "Charles Dickens in Cyberspace: The Afterlife of the Nineteenth Century in Postmodern Culture" (2003), Jay Clayton calls Stephenson’s book the “ultimate geek novel” and draws attention to the “literary-scientific-engineering-military-industrial-intelligence alliance” that produced discoveries in two eras separated by fifty years, World War II and the Internet age. In July 2012, io9 included the book on its list of "10 Science Fiction Novels You Pretend to Have Read". | https://en.wikipedia.org/wiki?curid=21861 |
In the Beginning... Was the Command Line
In the Beginning... Was the Command Line is an essay by Neal Stephenson which was originally published online in 1999 and later made available in book form (November 1999). The essay is a commentary on why the proprietary operating systems business is unlikely to remain profitable in the future because of competition from free software. It also analyzes the corporate/collective culture of the Microsoft, Apple, and free software communities.
Stephenson explores the GUI as a metaphor in terms of the increasing interposition of abstractions between humans and the actual workings of devices (in a similar manner to "Zen and the Art of Motorcycle Maintenance") and explains the beauty hackers feel in good-quality tools. He does this with a car analogy, comparing four operating systems: Mac OS by Apple Computer to a luxury European car, Windows by Microsoft to a station wagon, Linux to a free tank, and BeOS to a Batmobile. Stephenson argues that people continue to buy the station wagon despite free tanks being given away, because people do not want to learn how to operate a tank; they know that the station wagon dealership has a machine shop that they can take their car to when it breaks down. Because of this attitude, Stephenson argues that Microsoft is not really a monopoly, as evidenced by the free availability of other choice OSes, but rather has simply accrued enough mindshare among the people to have them coming back. He compares Microsoft to Disney, in that both are selling a vision to their customers, who in turn "want to believe" in that vision.
Stephenson relays his experience with the Debian bug tracking system (#6518). He then contrasts it with Microsoft's approach. Debian developers responded from around the world within a day. He was completely frustrated with his initial attempt to achieve the same response from Microsoft, but he concedes that his subsequent experience was satisfactory. The difference he notes is that Debian developers are personally accessible and transparently own up to defects in their OS distribution, while Microsoft pretends errors don't exist.
The essay was written before the advent of Mac OS X. A recurring theme is the full power of the command line compared with easier-to-learn graphical user interfaces (GUIs), which are described as broken mixed metaphors for 'power users'. He then mentions GUIs which allow traditional terminal windows to be used. In a Slashdot interview in 2004, in response to the question:
... have you embraced the new UNIX based MacOS X as the OS you want to use when you "Just want to go to Disneyland"?
he replied:
I embraced OS X as soon as it was available and have never looked back. So a lot of "In the Beginning...was the Command Line" is now obsolete. I keep meaning to update it, but if I'm honest with myself, I have to say this is unlikely.
With Neal Stephenson's permission, Garrett Birkel responded to "In the Beginning...was the Command Line" in 2004, bringing it up to date and critically discussing Stephenson's argument. Birkel's response is interspersed throughout the original text, which remains untouched. | https://en.wikipedia.org/wiki?curid=21862 |
Netscape Navigator
Netscape Navigator was a proprietary web browser, and the original browser of the Netscape line, from versions 1 to 4.08, and 9.x. It was the flagship product of the Netscape Communications Corp and was the dominant web browser in terms of usage share in the 1990s, but by around 2003 its use had almost disappeared. This was primarily due to the increased use of Microsoft's Internet Explorer web browser software, and partly because the Netscape Corporation (later purchased by AOL) did not sustain Netscape Navigator's technical innovation in the late 1990s.
The business demise of Netscape was a central premise of Microsoft's antitrust trial, wherein the Court ruled that Microsoft's bundling of Internet Explorer with the Windows operating system was a monopolistic and illegal business practice. The decision came too late for Netscape, however, as Internet Explorer had by then become the dominant web browser in Windows.
The Netscape Navigator web browser was succeeded by the Netscape Communicator suite in 1997. Netscape Communicator's 4.x source code was the base for the Netscape-developed Mozilla Application Suite, which was later renamed SeaMonkey. Netscape's Mozilla Suite also served as the base for a browser-only spinoff called Mozilla Firefox.
The Netscape Navigator name returned in 2007 when AOL announced version 9 of the Netscape series of browsers, Netscape Navigator 9. On December 28, 2007, AOL canceled its development but continued supporting the web browser with security updates until March 1, 2008. AOL allows downloading of archived versions of the Netscape Navigator web browser family. AOL maintains the Netscape website as an Internet portal.
Netscape Navigator was inspired by the success of the Mosaic web browser, which was co-written by Marc Andreessen, a part-time employee of the National Center for Supercomputing Applications at the University of Illinois. After Andreessen graduated in 1993, he moved to California and there met Jim Clark, the recently departed founder of Silicon Graphics. Clark believed that the Mosaic browser had great commercial possibilities and provided the seed money. Soon Mosaic Communications Corporation was in business in Mountain View, California, with Andreessen as a vice-president. Since the University of Illinois was unhappy with the company's use of the Mosaic name, the company changed its name to Netscape Communications (suggested by product manager Greg Sands) and named its flagship web browser Netscape Navigator.
Netscape announced in its first press release (October 13, 1994) that it would make Navigator available without charge to all non-commercial users, and beta versions of version 1.0 and 1.1 were indeed freely downloadable in November 1994 and March 1995, with the full version 1.0 available in December 1994. Netscape's initial corporate policy regarding Navigator claimed that it would make Navigator freely available for non-commercial use in accordance with the notion that Internet software should be distributed for free.
However, within two months of that press release, Netscape apparently reversed its policy on who could freely obtain and use version 1.0 by only mentioning that educational and non-profit institutions could use version 1.0 at no charge.
The reversal was complete with the availability of version 1.1 beta on March 6, 1995, in which a press release states that the final 1.1 release would be available at no cost only for academic and non-profit organizational use. Gone was the notion expressed in the first press release that Navigator would be freely available in the spirit of Internet software.
Some security experts and cryptographers found that all released Netscape versions had major security problems, including browser crashes caused by long URLs and weak 40-bit encryption keys.
The first few releases of the product were made available in "commercial" and "evaluation" versions; for example, version "1.0" and version "1.0N". The "N" evaluation versions were completely identical to the commercial versions; the letter was there to remind people to pay for the browser once they felt they had tried it long enough and were satisfied with it. This distinction was formally dropped within a year of the initial release, and the full version of the browser continued to be made available for free online, with boxed versions available on floppy disks (and later CDs) in stores along with a period of phone support. During this era, "Internet Starter Kit" books were popular, and usually included a floppy disk or CD containing internet software, and this was a popular means of obtaining Netscape's and other browsers. Email support was initially free, and remained so for a year or two until the volume of support requests grew too high.
During development, the Netscape browser was known by the code name "Mozilla", which became the name of a Godzilla-like cartoon dragon mascot used prominently on the company's web site. The Mozilla name was also used as the User-Agent in HTTP requests by the browser. Other web browsers claimed to be compatible with Netscape's extensions to HTML, and therefore used the same name in their User-Agent identifiers so that web servers would send them the same pages as were sent to Netscape browsers. Mozilla is now a generic name for matters related to the open source successor to Netscape Communicator and is most identified with the browser Firefox.
When the consumer Internet revolution arrived in the mid-1990s, Netscape was well positioned to take advantage of it. With a good mix of features and an attractive licensing scheme that allowed free use for non-commercial purposes, the Netscape browser soon became the de facto standard, particularly on the Windows platform. Internet service providers and computer magazine publishers helped make Navigator readily available.
An innovation that Netscape introduced in 1994 was the on-the-fly display of web pages, where text and graphics appeared on the screen as the web page downloaded. Earlier web browsers would not display a page until all graphics on it had been loaded over the network connection; this often made a user stare at a blank page for as long as several minutes. With Netscape, people using dial-up connections could begin reading the text of a web page within seconds of entering a web address, even before the rest of the text and graphics had finished downloading. This made the web much more tolerable to the average user.
Through the late 1990s, Netscape made sure that Navigator remained the technical leader among web browsers. New features included cookies, frames, proxy auto-config, and JavaScript (in version 2.0). Although those and other innovations eventually became open standards of the W3C and ECMA and were emulated by other browsers, they were often viewed as controversial. Netscape, according to critics, was more interested in bending the web to its own de facto "standards" (bypassing standards committees and thus marginalizing the commercial competition) than it was in fixing bugs in its products. Consumer rights advocates were particularly critical of cookies and of commercial web sites using them to invade individual privacy.
In the marketplace, however, these concerns made little difference. Netscape Navigator remained the market leader with more than 50% usage share. The browser software was available for a wide range of operating systems, including Windows (3.1, 95, 98, NT), Macintosh, Linux, OS/2, and many versions of Unix including OSF/1, Sun Solaris, BSD/OS, IRIX, AIX, and HP-UX, and looked and worked nearly identically on every one of them. Netscape began to experiment with prototypes of a web-based system, known internally as “Constellation”, which would allow a user to access and edit his or her files anywhere across a network no matter what computer or operating system he or she happened to be using.
Industry observers forecast the dawn of a new era of connected computing. The underlying operating system, it was believed, would not be an important consideration; future applications would run within a web browser. This was seen by Netscape as a clear opportunity to entrench Navigator at the heart of the next generation of computing, and thus gain the opportunity to expand into all manner of other software and service markets.
With the success of Netscape showing the importance of the web (more people were using the Internet due in part to the ease of using Netscape), Internet browsing began to be seen as a potentially profitable market. Following Netscape's lead, Microsoft started a campaign to enter the web browser software market. Like Netscape before them, Microsoft licensed the Mosaic source code from Spyglass, Inc. (which in turn licensed code from University of Illinois). Using this basic code, Microsoft created Internet Explorer (IE).
The competition between Microsoft and Netscape dominated the Browser Wars. Internet Explorer, Version 1.0 (shipped in the Internet Jumpstart Kit in Microsoft Plus! for Windows 95) and IE, Version 2.0 (the first cross-platform version of the web browser, supporting both Windows and Mac OS) were thought by many to be inferior and primitive when compared to contemporary versions of Netscape Navigator. With the release of IE version 3.0 (1996), Microsoft was able to catch up with Netscape competitively, and IE version 4.0 (1997) brought further improvement in terms of market share. IE 5.0 (1999) improved stability and took significant market share from Netscape Navigator for the first time.
There were two versions of Netscape Navigator 3.0, the Standard Edition and the Gold Edition. The latter consisted of the Navigator browser with e-mail, news readers, and a WYSIWYG web page compositor; however, these extra functions enlarged and slowed the software, rendering it prone to crashing.
This Gold Edition was renamed Netscape Communicator starting with version 4.0; the name change diluted its name-recognition and confused users. Netscape CEO James L. Barksdale insisted on the name change because Communicator was a general-purpose "client" application, which contained the Navigator "browser".
The aging Netscape Communicator 4.x was slower than Internet Explorer 5.0. Typical web pages had become heavily illustrated, often JavaScript-intensive, and encoded with HTML features designed for specific purposes but now employed as global layout tools (HTML tables, the most obvious example of this, were especially difficult for Communicator to render). The Netscape browser, once a solid product, became crash-prone and buggy; for example, some versions re-downloaded an entire web page to re-render it when the browser window was re-sized (a nuisance to dial-up users), and the browser would usually crash when the page contained simple Cascading Style Sheets, as proper support for CSS never made it into Communicator 4.x. At the time that Communicator 4.0 was being developed, Netscape had a competing technology called JavaScript Style Sheets. Near the end of the development cycle, it became obvious that CSS would prevail, so Netscape quickly implemented a CSS to JSSS converter, which then processed CSS as JSSS (this is why turning JavaScript off also disabled CSS). Moreover, Netscape Communicator's browser interface design appeared dated in comparison to Internet Explorer and interface changes in Microsoft and Apple's operating systems.
By the end of the decade, Netscape's web browser had lost dominance over the Windows platform, and the August 1997 Microsoft financial agreement to invest one hundred and fifty million dollars in Apple required that Apple make Internet Explorer the default web browser in new Mac OS distributions. The latest IE Mac release at that time was Internet Explorer version 3.0 for Macintosh, but Internet Explorer 4 was released later that year.
Microsoft succeeded in having ISPs and PC vendors distribute Internet Explorer to their customers instead of Netscape Navigator, mostly due to Microsoft using its leverage from Windows OEM licenses, and partly aided by Microsoft's investment in making IE brandable, such that a customized version of IE could be offered. Also, web developers used proprietary, browser-specific extensions in web pages. Both Microsoft and Netscape did this, having added many proprietary HTML tags to their browsers, which forced users to choose between two competing and almost incompatible web browsers.
In March 1998, Netscape released most of the development code base for Netscape Communicator under an open source license. Only pre-alpha versions of Netscape 5 were released before the open source community decided to scrap the Netscape Navigator codebase entirely and build a new web browser around the Gecko layout engine which Netscape had been developing but which had not yet incorporated. The community-developed open source project was named "Mozilla", Netscape Navigator's original code name. America Online bought Netscape; Netscape programmers took a pre-beta-quality form of the Mozilla codebase, gave it a new GUI, and released it as Netscape 6. This did nothing to win back users, who continued to migrate to Internet Explorer. After the release of Netscape 7 and a long public beta test, Mozilla 1.0 was released on June 5, 2002. The same code-base, notably the Gecko layout engine, became the basis of independent applications, including Firefox and Thunderbird.
On December 28, 2007, the Netscape developers announced that AOL had canceled development of Netscape Navigator, leaving it unsupported as of March 1, 2008. Despite this, archived and unsupported versions of the browser remain available for download.
Netscape's contributions to the web include JavaScript, which was submitted as a new standard to Ecma International. The resultant ECMAScript specification allowed JavaScript support by multiple web browsers and its use as a cross-browser scripting language, long after Netscape Navigator itself had dropped in popularity. Another example is the FRAME tag, which is widely supported today and has been incorporated into official web standards such as the "HTML 4.01 Frameset" specification.
In a 2007 "PC World" column, the original Netscape Navigator was considered the "best tech product of all time" due to its impact on the Internet. | https://en.wikipedia.org/wiki?curid=21863 |
Neurotransmitter
Neurotransmitters are endogenous chemicals acting as signaling molecules that enable neurotransmission. They are a type of chemical messenger which transmits signals across a chemical synapse from one neuron (nerve cell) to another 'target' neuron, to a muscle cell, or to a gland cell. Neurotransmitters are released from synaptic vesicles in synapses into the synaptic cleft, where they are received by neurotransmitter receptors on the target cell. Many neurotransmitters are synthesized from simple and plentiful precursors such as amino acids, which are readily available and only require a small number of biosynthetic steps for conversion. Neurotransmitters are essential to the function of complex neural systems. The exact number of unique neurotransmitters in humans is unknown, but more than 200 have been identified.
Neurotransmitters are stored in synaptic vesicles, clustered close to the cell membrane at the axon terminal of the presynaptic neuron. Neurotransmitters are released into and diffuse across the synaptic cleft, where they bind to specific receptors on the membrane of the postsynaptic neuron. Binding of neurotransmitters may influence the postsynaptic neuron in either an excitatory or inhibitory way, depolarizing or hyperpolarizing it respectively.
Most neurotransmitters are about the size of a single amino acid; however, some neurotransmitters may be the size of larger proteins or peptides. A released neurotransmitter is typically available in the synaptic cleft for a short time before it is metabolized by enzymes, pulled back into the presynaptic neuron through reuptake, or bound to a postsynaptic receptor. Nevertheless, short-term exposure of the receptor to a neurotransmitter is typically sufficient for causing a postsynaptic response by way of synaptic transmission.
Generally, a neurotransmitter is released at the presynaptic terminal in response to a threshold action potential or graded electrical potential in the presynaptic neuron. However, low level 'baseline' release also occurs without electrical stimulation.
Until the early 20th century, scientists assumed that the majority of synaptic communication in the brain was electrical. However, through histological examinations by Ramón y Cajal, a 20 to 40 nm gap between neurons, known today as the synaptic cleft, was discovered. The presence of such a gap suggested communication via chemical messengers traversing the synaptic cleft, and in 1921 German pharmacologist Otto Loewi confirmed that neurons can communicate by releasing chemicals. Through a series of experiments involving the vagus nerves of frogs, Loewi was able to manually slow the heart rate of frogs by controlling the amount of saline solution present around the vagus nerve. Upon completion of this experiment, Loewi asserted that sympathetic regulation of cardiac function can be mediated through changes in chemical concentrations. Furthermore, Otto Loewi is credited with discovering acetylcholine (ACh)—the first known neurotransmitter.
There are four main criteria for identifying neurotransmitters:
However, given advances in pharmacology, genetics, and chemical neuroanatomy, the term "neurotransmitter" can be applied to chemicals that:
The anatomical localization of neurotransmitters is typically determined using immunocytochemical techniques, which identify the location of either the transmitter substances themselves or of the enzymes that are involved in their synthesis. Immunocytochemical techniques have also revealed that many transmitters, particularly the neuropeptides, are co-localized, that is, a neuron may release more than one transmitter from its synaptic terminal. Various techniques and experiments such as staining, stimulating, and collecting can be used to identify neurotransmitters throughout the central nervous system.
There are many different ways to classify neurotransmitters. Dividing them into amino acids, peptides, and monoamines is sufficient for some classification purposes.
Major neurotransmitters:
In addition, over 50 neuroactive peptides have been found, and new ones are discovered regularly. Many of these are co-released along with a small-molecule transmitter. Nevertheless, in some cases, a peptide is the primary transmitter at a synapse. β-endorphin is a relatively well-known example of a peptide neurotransmitter because it engages in highly specific interactions with opioid receptors in the central nervous system.
Single ions (such as synaptically released zinc) are also considered neurotransmitters by some, as well as some gaseous molecules such as nitric oxide (NO), carbon monoxide (CO), and hydrogen sulfide (H2S). The gases are produced in the neural cytoplasm and are immediately diffused through the cell membrane into the extracellular fluid and into nearby cells to stimulate production of second messengers. Soluble gas neurotransmitters are difficult to study, as they act rapidly and are immediately broken down, existing for only a few seconds.
The most prevalent transmitter is glutamate, which is excitatory at well over 90% of the synapses in the human brain. The next most prevalent is Gamma-Aminobutyric Acid, or GABA, which is inhibitory at more than 90% of the synapses that do not use glutamate. Although other transmitters are used in fewer synapses, they may be very important functionally: the great majority of psychoactive drugs exert their effects by altering the actions of some neurotransmitter systems, often acting through transmitters other than glutamate or GABA. Addictive drugs such as cocaine and amphetamines exert their effects primarily on the dopamine system. The addictive opiate drugs exert their effects primarily as functional analogs of opioid peptides, which, in turn, regulate dopamine levels.
Neurons form elaborate networks through which nerve impulses—action potentials—travel. Each neuron has as many as 15,000 connections with neighboring neurons.
Neurons do not touch each other (except in the case of an electrical synapse through a gap junction); instead, neurons interact at contact points called synapses: a junction between two nerve cells, consisting of a miniature gap across which impulses are carried by a neurotransmitter. A neuron transports its information by way of a nerve impulse called an action potential. When an action potential arrives at the synapse's presynaptic terminal button, it may stimulate the release of neurotransmitters. These neurotransmitters are released into the synaptic cleft to bind onto the receptors of the postsynaptic membrane and influence another cell, either in an inhibitory or excitatory way. The next neuron may be connected to many more neurons, and if the total of excitatory influences minus inhibitory influences is great enough, it will also "fire". That is to say, it will create a new action potential at its axon hillock, releasing neurotransmitters and passing on the information to yet another neighboring neuron.
A neurotransmitter can influence the function of a neuron through a remarkable number of mechanisms. In its direct actions in influencing a neuron's electrical excitability, however, a neurotransmitter acts in only one of two ways: excitatory or inhibitory. A neurotransmitter influences trans-membrane ion flow either to increase (excitatory) or to decrease (inhibitory) the probability that the cell with which it comes in contact will produce an action potential. Thus, despite the wide variety of synapses, they all convey messages of only these two types, and they are labeled as such. Type I synapses are excitatory in their actions, whereas type II synapses are inhibitory. Each type has a different appearance and is located on different parts of the neurons under its influence.
Type I (excitatory) synapses are typically located on the shafts or the spines of dendrites, whereas type II (inhibitory) synapses are typically located on a cell body. In addition, Type I synapses have round synaptic vesicles, whereas the vesicles of type II synapses are flattened. The material on the presynaptic and post-synaptic membranes is denser in a Type I synapse than it is in a type II, and the type I synaptic cleft is wider. Finally, the active zone on a Type I synapse is larger than that on a Type II synapse.
The different locations of type I and type II synapses divide a neuron into two zones: an excitatory dendritic tree and an inhibitory cell body. From an inhibitory perspective, excitation comes in over the dendrites and spreads to the axon hillock to trigger an action potential. If the message is to be stopped, it is best stopped by applying inhibition on the cell body, close to the axon hillock where the action potential originates. Another way to conceptualize excitatory–inhibitory interaction is to picture excitation overcoming inhibition. If the cell body is normally in an inhibited state, the only way to generate an action potential at the axon hillock is to reduce the cell body's inhibition. In this "open the gates" strategy, the excitatory message is like a racehorse ready to run down the track, but first, the inhibitory starting gate must be removed.
As explained above, the only direct action of a neurotransmitter is to activate a receptor. Therefore, the effects of a neurotransmitter system depend on the connections of the neurons that use the transmitter, and the chemical properties of the receptors that the transmitter binds to.
Here are a few examples of important neurotransmitter actions:
Neurons expressing certain types of neurotransmitters sometimes form distinct systems, where activation of the system affects large volumes of the brain, called volume transmission. Major neurotransmitter systems include the noradrenaline (norepinephrine) system, the dopamine system, the serotonin system, and the cholinergic system, among others. Trace amines have a modulatory effect on neurotransmission in monoamine pathways (i.e., dopamine, norepinephrine, and serotonin pathways) throughout the brain via signaling through trace amine-associated receptor 1. A brief comparison of these systems follows:
Understanding the effects of drugs on neurotransmitters comprises a significant portion of research initiatives in the field of neuroscience. Most neuroscientists involved in this field of research believe that such efforts may further advance our understanding of the circuits responsible for various neurological diseases and disorders, as well as ways to effectively treat and someday possibly prevent or cure such illnesses.
Drugs can influence behavior by altering neurotransmitter activity. For instance, drugs can decrease the rate of synthesis of neurotransmitters by affecting the synthetic enzyme(s) for that neurotransmitter. When neurotransmitter synthesis is blocked, the amount of neurotransmitter available for release becomes substantially lower, resulting in a decrease in neurotransmitter activity. Some drugs block or stimulate the release of specific neurotransmitters. Alternatively, drugs can prevent neurotransmitter storage in synaptic vesicles by causing the synaptic vesicle membranes to leak. Drugs that prevent a neurotransmitter from binding to its receptor are called receptor antagonists. For example, drugs used to treat patients with schizophrenia, such as haloperidol, chlorpromazine, and clozapine, are antagonists at receptors in the brain for dopamine. Other drugs act by binding to a receptor and mimicking the normal neurotransmitter. Such drugs are called receptor agonists. An example of a receptor agonist is morphine, an opiate that mimics effects of the endogenous neurotransmitter β-endorphin to relieve pain. Other drugs interfere with the deactivation of a neurotransmitter after it has been released, thereby prolonging its action. This can be accomplished by blocking re-uptake or inhibiting degradative enzymes. Lastly, drugs can prevent an action potential from occurring, blocking neuronal activity throughout the central and peripheral nervous systems. Drugs such as tetrodotoxin that block neural activity are typically lethal.
Drugs targeting the neurotransmitter of major systems affect the whole system, which can explain the complexity of action of some drugs. Cocaine, for example, blocks the re-uptake of dopamine back into the presynaptic neuron, leaving the neurotransmitter molecules in the synaptic gap for an extended period of time. Since the dopamine remains in the synapse longer, the neurotransmitter continues to bind to the receptors on the postsynaptic neuron, eliciting a pleasurable emotional response. Physical addiction to cocaine may result from prolonged exposure to excess dopamine in the synapses, which leads to the downregulation of some post-synaptic receptors. After the effects of the drug wear off, an individual can become depressed due to the decreased probability of the neurotransmitter binding to a receptor. Fluoxetine is a selective serotonin re-uptake inhibitor (SSRI), which blocks re-uptake of serotonin by the presynaptic cell, increasing the amount of serotonin present at the synapse, allowing it to remain there longer and thereby potentiating the effect of naturally released serotonin. AMPT prevents the conversion of tyrosine to L-DOPA, the precursor to dopamine; reserpine prevents dopamine storage within vesicles; and deprenyl inhibits monoamine oxidase (MAO)-B and thus increases dopamine levels.
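The effect of blocking re-uptake can be illustrated with a toy kinetic sketch in Python. The rate constants below are invented for illustration and do not correspond to measured values for any transmitter or drug.

```python
# Toy model: transmitter concentration in the synaptic cleft decays through
# re-uptake and enzymatic degradation.  Blocking re-uptake (as fluoxetine
# does for serotonin, or cocaine for dopamine) slows the decay, so the
# transmitter remains in the cleft longer.  Rate constants are arbitrary.
import math

def cleft_concentration(t, c0=1.0, k_reuptake=0.5, k_degradation=0.1):
    return c0 * math.exp(-(k_reuptake + k_degradation) * t)

for t in (1.0, 3.0):
    normal  = cleft_concentration(t)
    blocked = cleft_concentration(t, k_reuptake=0.0)   # re-uptake inhibited
    print(f"t={t}s  normal={normal:.3f}  re-uptake blocked={blocked:.3f}")
```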
An agonist is a chemical capable of binding to a receptor, such as a neurotransmitter receptor, and initiating the same reaction typically produced by the binding of the endogenous substance. An agonist of a neurotransmitter will thus initiate the same receptor response as the transmitter. In neurons, an agonist drug may activate neurotransmitter receptors either directly or indirectly. Direct-binding agonists can be further characterized as full agonists, partial agonists, or inverse agonists.
Direct agonists act similarly to a neurotransmitter by binding directly to its associated receptor site(s), which may be located on the presynaptic neuron or postsynaptic neuron, or both. Typically, neurotransmitter receptors are located on the postsynaptic neuron, while neurotransmitter autoreceptors are located on the presynaptic neuron, as is the case for monoamine neurotransmitters; in some cases, a neurotransmitter utilizes retrograde neurotransmission, a type of feedback signaling in neurons where the neurotransmitter is released postsynaptically and binds to target receptors located on the presynaptic neuron. Nicotine, a compound found in tobacco, is a direct agonist of most nicotinic acetylcholine receptors, mainly located in cholinergic neurons. Opiates, such as morphine, heroin, hydrocodone, oxycodone, codeine, and methadone, are μ-opioid receptor agonists; this action mediates their euphoriant and pain relieving properties.
Indirect agonists increase the binding of neurotransmitters at their target receptors by stimulating the release or preventing the reuptake of neurotransmitters. Some indirect agonists both trigger neurotransmitter release and prevent neurotransmitter reuptake. Amphetamine, for example, is an indirect agonist of postsynaptic dopamine, norepinephrine, and serotonin receptors in their respective neurons; it promotes the release of these neurotransmitters from the presynaptic neuron into the synaptic cleft and prevents their reuptake from the synaptic cleft by activating TAAR1, a presynaptic G protein-coupled receptor, and binding to a site on VMAT2, a type of monoamine transporter located on synaptic vesicles within monoamine neurons.
An antagonist is a chemical that acts within the body to reduce the physiological activity of another chemical substance (such as an opiate); in particular, one that opposes the action on the nervous system of a drug, or of a substance occurring naturally in the body, by combining with and blocking its receptor.
There are two main types of antagonist: direct-acting antagonists and indirect-acting antagonists.
An antagonist drug is one that attaches (or binds) to a site called a receptor without activating that receptor to produce a biological response. It is therefore said to have no intrinsic activity. An antagonist may also be called a receptor "blocker" because it blocks the effect of an agonist at the site. The pharmacological effects of an antagonist, therefore, result in preventing the corresponding receptor site's agonists (e.g., drugs, hormones, neurotransmitters) from binding to and activating it. Antagonists may be "competitive" or "irreversible".
A competitive antagonist competes with an agonist for binding to the receptor. As the concentration of antagonist increases, the binding of the agonist is progressively inhibited, resulting in a decrease in the physiological response. High concentration of an antagonist can completely inhibit the response. This inhibition can be reversed, however, by an increase of the concentration of the agonist, since the agonist and antagonist compete for binding to the receptor. Competitive antagonists, therefore, can be characterized as shifting the dose–response relationship for the agonist to the right. In the presence of a competitive antagonist, it takes an increased concentration of the agonist to produce the same response observed in the absence of the antagonist.
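This rightward shift can be made concrete with the standard occupancy relation for simple competitive antagonism (a Gaddum-type expression); the concentrations, EC50, and KB below are illustrative numbers only.

```python
# Fractional response under simple competitive antagonism: the antagonist
# raises the agonist concentration needed for a given effect, shifting the
# dose-response curve to the right without lowering its maximum.
# EC50, KB and all concentrations are arbitrary illustrative values.

def response(agonist, antagonist=0.0, ec50=1.0, kb=1.0, emax=1.0):
    return emax * agonist / (agonist + ec50 * (1.0 + antagonist / kb))

for a in (1.0, 10.0, 100.0):
    print(a, round(response(a), 3), round(response(a, antagonist=9.0), 3))

# With the antagonist present, roughly 10x more agonist is needed to recover
# the same response (dose ratio = 1 + [B]/KB = 10), but high agonist
# concentrations still approach the full maximum response.
```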
An irreversible antagonist binds so strongly to the receptor as to render the receptor unavailable for binding to the agonist. Irreversible antagonists may even form covalent chemical bonds with the receptor. In either case, if the concentration of the irreversible antagonist is high enough, the number of unbound receptors remaining for agonist binding may be so low that even high concentrations of the agonist do not produce the maximum biological response.
While intake of neurotransmitter precursors does increase neurotransmitter synthesis, evidence is mixed as to whether neurotransmitter release and postsynaptic receptor firing is increased. Even with increased neurotransmitter release, it is unclear whether this will result in a long-term increase in neurotransmitter signal strength, since the nervous system can adapt to changes such as increased neurotransmitter synthesis and may therefore maintain constant firing. Some neurotransmitters may have a role in depression and there is some evidence to suggest that intake of precursors of these neurotransmitters may be useful in the treatment of mild and moderate depression.
L-DOPA, a precursor of dopamine that crosses the blood–brain barrier, is used in the treatment of Parkinson's disease. For depressed patients in whom low activity of the neurotransmitter norepinephrine is implicated, there is little evidence for benefit of neurotransmitter precursor administration. L-phenylalanine and L-tyrosine are both precursors for dopamine, norepinephrine, and epinephrine. These conversions require vitamin B6, vitamin C, and S-adenosylmethionine. A few studies suggest potential antidepressant effects of L-phenylalanine and L-tyrosine, but there is much room for further research in this area.
Administration of L-tryptophan, a precursor for serotonin, is seen to double the production of serotonin in the brain. It is significantly more effective than a placebo in the treatment of mild and moderate depression. This conversion requires vitamin C. 5-hydroxytryptophan (5-HTP), also a precursor for serotonin, is more effective than a placebo.
Diseases and disorders may also affect specific neurotransmitter systems. The following are disorders involved in either an increase, decrease, or imbalance of certain neurotransmitters.
Dopamine:
For example, problems in producing dopamine (mainly in the substantia nigra) can result in Parkinson's disease, a disorder that affects a person's ability to move as they want to, resulting in stiffness, tremors or shaking, and other symptoms. Some studies suggest that having too little or too much dopamine, or problems using dopamine in the thinking and feeling regions of the brain, may play a role in disorders like schizophrenia or attention deficit hyperactivity disorder (ADHD). Dopamine is also involved in addiction and drug use, as most recreational drugs cause an influx of dopamine in the brain (especially opioids and methamphetamines) that produces a pleasurable feeling, which is why users constantly crave drugs.
Serotonin:
Similarly, after some research suggested that drugs which block the recycling, or reuptake, of serotonin seemed to help some people diagnosed with depression, it was theorized that people with depression might have lower-than-normal serotonin levels; on this basis, selective serotonin reuptake inhibitors (SSRIs) are used to increase the amounts of serotonin in synapses. Though widely popularized, this theory was not borne out in subsequent research.
Glutamate:
Furthermore, problems with producing or using glutamate have been suggestively and tentatively linked to many mental disorders, including autism, obsessive compulsive disorder (OCD), schizophrenia, and depression. Having too much glutamate has been linked to neurological diseases such as Parkinson's disease, multiple sclerosis, Alzheimer's disease, stroke, and ALS (amyotrophic lateral sclerosis).
Generally, there are no scientifically established "norms" for appropriate levels or "balances" of different neurotransmitters. It is in most cases pragmatically impossible to even measure levels of neurotransmitters in a brain or body at any distinct moment in time. Neurotransmitters regulate each other's release, and weak but consistent imbalances in this mutual regulation have been linked to temperament in healthy people. Strong imbalances or disruptions to neurotransmitter systems have been associated with many diseases and mental disorders, including Parkinson's, depression, insomnia, attention deficit hyperactivity disorder (ADHD), anxiety, memory loss, dramatic changes in weight, and addictions. Chronic physical or emotional stress can contribute to changes in neurotransmitter systems. Genetics also plays a role in neurotransmitter activities. Apart from recreational use, medications that directly or indirectly interact with one or more transmitters or their receptors are commonly prescribed for psychiatric and psychological issues. Notably, drugs interacting with serotonin and norepinephrine are prescribed to patients with problems such as depression and anxiety, though the notion that there is much solid medical evidence to support such interventions has been widely criticized. Studies have shown that dopamine imbalance has an influence on multiple sclerosis and other neurological disorders.
A neurotransmitter must be broken down once it reaches the post-synaptic cell to prevent further excitatory or inhibitory signal transduction. This allows new signals to be produced from the adjacent nerve cells. When the neurotransmitter has been secreted into the synaptic cleft, it binds to specific receptors on the postsynaptic cell, thereby generating a postsynaptic electrical signal. The transmitter must then be removed rapidly to enable the postsynaptic cell to engage in another cycle of neurotransmitter release, binding, and signal generation. Neurotransmitters are terminated in three different ways:
For example, choline is taken up and recycled by the pre-synaptic neuron to synthesize more ACh. Other neurotransmitters such as dopamine are able to diffuse away from their targeted synaptic junctions and are eliminated from the body via the kidneys, or destroyed in the liver. Each neurotransmitter has very specific degradation pathways at regulatory points, which may be targeted by the body's regulatory system or by recreational drugs. | https://en.wikipedia.org/wiki?curid=21865 |
Neutronium
Neutronium (sometimes shortened to neutrium, also referred to as neutrite) is a hypothetical substance composed purely of neutrons. The word was coined by scientist Andreas von Antropoff in 1926 (before the discovery of the neutron) for the hypothetical "element of atomic number zero" (with zero protons in its nucleus) that he placed at the head of the periodic table (denoted by dash, no element symbol). However, the meaning of the term has changed over time, and from the last half of the 20th century onward it has been also used to refer to extremely dense substances resembling the neutron-degenerate matter theorized to exist in the cores of neutron stars; hereinafter ""degenerate" neutronium" will refer to this. Science fiction and popular literature frequently use the term "neutronium" to refer to a highly dense phase of matter composed primarily of neutrons.
Neutronium is used in popular physics literature to refer to the material present in the cores of neutron stars (stars which are too massive to be supported by electron degeneracy pressure and which collapse into a denser phase of matter). This term is very rarely used in scientific literature, for three reasons: there are multiple definitions for the term "neutronium"; there is considerable uncertainty over the composition of the material in the cores of neutron stars (it could be neutron-degenerate matter, strange matter, quark matter, or a variant or combination of the above); the properties of neutron star material should depend on depth due to changing pressure (see below), and no sharp boundary between the crust (consisting primarily of atomic nuclei) and almost protonless inner layer is expected to exist.
When neutron star core material is presumed to consist mostly of free neutrons, it is typically referred to as neutron-degenerate matter in scientific literature.
The term "neutronium" was coined in 1926 by Andreas von Antropoff for a conjectured form of matter made up of neutrons with no protons or electrons, which he placed as the chemical element of atomic number zero at the head of his new version of the periodic table. It was subsequently placed in the middle of several spiral representations of the periodic system for classifying the chemical elements, such as those of Charles Janet (1928), E. I. Emerson (1944), and John D. Clark (1950).
Although the term is not used in the scientific literature either for a condensed form of matter, or as an element, there have been reports that, besides the free neutron, there may exist two bound forms of neutrons without protons. If neutronium were considered to be an element, then these neutron clusters could be considered to be the isotopes of that element. However, these reports have not been further substantiated.
Although not called "neutronium", the National Nuclear Data Center's "Nuclear Wallet Cards" lists as its first "isotope" an "element" with the symbol n and atomic number "Z" = 0 and mass number "A" = 1. This isotope is described as decaying to element H with a half life of .
Neutron matter is equivalent to a chemical element with atomic number 0, which is to say that it is equivalent to a species of atoms having no protons in their atomic nuclei. It is extremely radioactive; its only legitimate equivalent isotope, the free neutron, has a half-life of only 10 minutes, which is comparable to half that of the most stable known isotope of francium. Neutron matter decays quickly into hydrogen. Neutron matter has no electronic structure on account of its total lack of electrons. As an equivalent element, however, it could be classified as a noble gas.
Bulk neutron matter has never been viewed. It is assumed that neutron matter would appear as a chemically inert gas, if enough could be collected together to be viewed as a bulk gas or liquid, because of the general appearance of the elements in the noble gas column of the periodic table.
While this lifetime is long enough to permit the study of neutronium's chemical properties, there are serious practical problems. Having no charge or electrons, neutronium would not interact strongly with ordinary low-energy photons (visible light) and would feel no electrostatic forces, so it would diffuse into the walls of most containers made of ordinary matter. Certain materials are able to resist diffusion or absorption of ultracold neutrons due to nuclear-quantum effects, specifically reflection caused by the strong interaction. At ambient temperature and in the presence of other elements, thermal neutrons readily undergo neutron capture to form heavier (and often radioactive) isotopes of that element.
Neutron matter at standard pressure and temperature is predicted by the ideal gas law to be less dense than even hydrogen, with a density of only (roughly 27 times less dense than air). Neutron matter is predicted to remain gaseous down to absolute zero at normal pressures, as the zero-point energy of the system is too high to allow condensation. However, neutron matter should in theory form a degenerate gaseous Bose–Einstein condensate at these temperatures, composed of neutron pairs called "dineutrons". At higher temperatures, neutron matter will only condense with sufficient pressure, and solidify with even greater pressure. Such pressures exist in neutron stars, where the extreme pressure causes the neutron matter to become degenerate. However, in the presence of atomic matter compressed to the state of electron degeneracy, β− decay may be inhibited due to the Pauli exclusion principle, thus making free neutrons stable. Also, elevated pressures should make neutrons degenerate themselves.
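The low-density claim can be checked with a back-of-the-envelope ideal-gas estimate, treating neutron matter as a monatomic gas with the neutron's molar mass; the conditions (0 °C, 1 atm) are assumed for the comparison.

```python
# Ideal-gas estimate of the density of a hypothetical neutron gas:
# rho = p * M / (R * T), with one free neutron per gas particle.
p = 101_325.0    # Pa (1 atm)
M = 1.009e-3     # kg/mol, molar mass of the neutron (assumed monatomic gas)
R = 8.314        # J/(mol K)
T = 273.15       # K (0 degrees Celsius)

rho_neutron = p * M / (R * T)
rho_air = 1.29   # kg/m^3, dry air at roughly the same conditions
print(rho_neutron)             # ~0.045 kg/m^3
print(rho_air / rho_neutron)   # ~29, broadly consistent with the figure quoted above
```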
Compared to ordinary elements, neutronium should be more compressible due to the absence of electrically charged protons and electrons. This makes neutronium more energetically favorable than (positive-"Z") atomic nuclei and leads to their conversion to (degenerate) neutronium through electron capture, a process that is believed to occur in stellar cores in the final seconds of the lifetime of massive stars, where it is facilitated by cooling via emission. As a result, degenerate neutronium can have a density of , roughly 14 orders of magnitude denser than the densest known ordinary substances. It was theorized that extreme pressures of order might deform the neutrons into a cubic symmetry, allowing tighter packing of neutrons, or cause a strange matter formation.
The term "neutronium" has been popular in science fiction since at least the middle of the 20th century, such as the in . It typically refers to an extremely dense, incredibly strong form of matter. While presumably inspired by the concept of neutron-degenerate matter in the cores of neutron stars, the material used in fiction bears at most only a superficial resemblance, usually depicted as an extremely strong solid under Earth-like conditions, or possessing exotic properties such as the ability to manipulate time and space. In contrast, all proposed forms of neutron star core material are fluids and are extremely unstable at pressures lower than that found in stellar cores. According to one analysis, a neutron star with a mass below about 0.2 solar masses would explode. | https://en.wikipedia.org/wiki?curid=21868 |
Neutron star
A neutron star is the collapsed core of a giant star, which before collapse had a total mass of between 10 and 29 solar masses. Neutron stars are the smallest and densest stars, excluding black holes and hypothetical white holes, quark stars, and strange stars. Neutron stars have a radius on the order of and a mass of about 1.4 solar masses. They result from the supernova explosion of a massive star, combined with gravitational collapse, that compresses the core past white dwarf star density to that of atomic nuclei.
Once formed, they no longer actively generate heat, and cool over time; however, they may still evolve further through collision or accretion. Most of the basic models for these objects imply that neutron stars are composed almost entirely of neutrons (subatomic particles with no net electrical charge and with slightly larger mass than protons); the electrons and protons present in normal matter combine to produce neutrons at the conditions in a neutron star. Neutron stars are partially supported against further collapse by neutron degeneracy pressure, a phenomenon described by the Pauli exclusion principle, just as white dwarfs are supported against collapse by electron degeneracy pressure. However, neutron degeneracy pressure is not by itself sufficient to hold up an object beyond 0.7 and repulsive nuclear forces play a larger role in supporting more massive neutron stars. If the remnant star has a mass exceeding the Tolman–Oppenheimer–Volkoff limit of around 2 solar masses, the combination of degeneracy pressure and nuclear forces is insufficient to support the neutron star and it continues collapsing to form a black hole.
Neutron stars that can be observed are very hot and typically have a surface temperature of around . They are so dense that a normal-sized matchbox containing neutron-star material would have a weight of approximately 3 billion tonnes, the same weight as a 0.5 cubic kilometre chunk of the Earth (a cube with edges of about 800 metres) from Earth's surface. Their magnetic fields are between 10^8 and 10^15 (100 million to 1 quadrillion) times stronger than Earth's magnetic field. The gravitational field at the neutron star's surface is about (200 billion) times that of Earth's gravitational field.
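The matchbox comparison can be checked with rough numbers; the matchbox volume and the neutron-star density used below are assumptions chosen only to show the order of magnitude.

```python
# Rough check of the matchbox comparison.  0.5 km^3 of rock at Earth's mean
# density (~5,500 kg/m^3) comes to a few billion tonnes; a matchbox-sized
# volume of neutron-star matter at an assumed ~1e17 kg/m^3 is comparable.
earth_chunk = 0.5e9 * 5_500     # m^3 * kg/m^3  ->  ~2.8e12 kg
matchbox    = 25e-6 * 1e17      # assumed 25 cm^3 at 1e17 kg/m^3  ->  2.5e12 kg

print(earth_chunk / 1e12, "billion tonnes")   # ~2.75
print(matchbox / 1e12, "billion tonnes")      # ~2.5
```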
As the star's core collapses, its rotation rate increases as a result of conservation of angular momentum, and newly formed neutron stars hence rotate at up to several hundred times per second. Some neutron stars emit beams of electromagnetic radiation that make them detectable as pulsars. Indeed, the discovery of pulsars by Jocelyn Bell Burnell and Antony Hewish in 1967 was the first observational suggestion that neutron stars exist. The radiation from pulsars is thought to be primarily emitted from regions near their magnetic poles. If the magnetic poles do not coincide with the rotational axis of the neutron star, the emission beam will sweep the sky, and when seen from a distance, if the observer is somewhere in the path of the beam, it will appear as pulses of radiation coming from a fixed point in space (the so-called "lighthouse effect"). The fastest-spinning neutron star known is PSR J1748-2446ad, rotating at a rate of 716 times a second or 43,000 revolutions per minute, giving a linear speed at the surface on the order of (i.e., nearly a quarter the speed of light).
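The surface speed quoted for PSR J1748-2446ad follows from v = 2πRν; the radius below is an assumed ~16 km, consistent with published upper bounds for this pulsar, so the result is an order-of-magnitude sketch only.

```python
import math
# Equatorial surface speed of PSR J1748-2446ad at 716 rotations per second,
# assuming an equatorial radius of about 16 km (an assumption for illustration).
nu = 716.0        # rotations per second
R  = 16e3         # m (assumed)
c  = 2.998e8      # m/s

v = 2 * math.pi * R * nu
print(v, v / c)   # ~7.2e7 m/s, i.e. roughly a quarter of the speed of light
```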
There are thought to be around 100 million neutron stars in the Milky Way, a figure obtained by estimating the number of stars that have undergone supernova explosions. However, most are old and cold and radiate very little; most neutron stars that have been detected occur only in certain situations in which they do radiate, such as if they are a pulsar or part of a binary system. Slow-rotating and non-accreting neutron stars are almost undetectable; however, since the "Hubble Space Telescope" detection of RX J185635−3754, a few nearby neutron stars that appear to emit only thermal radiation have been detected. Soft gamma repeaters are conjectured to be a type of neutron star with very strong magnetic fields, known as magnetars, or alternatively, neutron stars with fossil disks around them.
Neutron stars in binary systems can undergo accretion which typically makes the system bright in X-rays while the material falling onto the neutron star can form hotspots that rotate in and out of view in identified X-ray pulsar systems. Additionally, such accretion can "recycle" old pulsars and potentially cause them to gain mass and spin-up to very fast rotation rates, forming the so-called millisecond pulsars. These binary systems will continue to evolve, and eventually the companions can become compact objects such as white dwarfs or neutron stars themselves, though other possibilities include a complete destruction of the companion through ablation or merger. The merger of binary neutron stars may be the source of short-duration gamma-ray bursts and are likely strong sources of gravitational waves. In 2017, a direct detection (GW170817) of the gravitational waves from such an event was made, and gravitational waves have also been indirectly detected in a system where two neutron stars orbit each other.
Any main-sequence star with an initial mass of above 8 times the mass of the sun () has the potential to produce a neutron star. As the star evolves away from the main sequence, subsequent nuclear burning produces an iron-rich core. When all nuclear fuel in the core has been exhausted, the core must be supported by degeneracy pressure alone. Further deposits of mass from shell burning cause the core to exceed the Chandrasekhar limit. Electron-degeneracy pressure is overcome and the core collapses further, sending temperatures soaring to over . At these temperatures, photodisintegration (the breaking up of iron nuclei into alpha particles by high-energy gamma rays) occurs. As the temperature climbs even higher, electrons and protons combine to form neutrons via electron capture, releasing a flood of neutrinos. When densities reach nuclear density of , a combination of strong force repulsion and neutron degeneracy pressure halts the contraction. The infalling outer envelope of the star is halted and flung outwards by a flux of neutrinos produced in the creation of the neutrons, becoming a supernova. The remnant left is a neutron star. If the remnant has a mass greater than about , it collapses further to become a black hole.
As the core of a massive star is compressed during a Type II supernova or a Type Ib or Type Ic supernova, and collapses into a neutron star, it retains most of its angular momentum. But, because it has only a tiny fraction of its parent's radius (and therefore its moment of inertia is sharply reduced), a neutron star is formed with very high rotation speed, and then over a very long period it slows. Neutron stars are known that have rotation periods from about 1.4 ms to 30 s. The neutron star's density also gives it very high surface gravity, with typical values ranging from 10^12 to 10^13 m/s^2 (more than 10^11 times that of Earth). One measure of such immense gravity is the fact that neutron stars have an escape velocity ranging from 100,000 km/s to 150,000 km/s, that is, from a third to half the speed of light. The neutron star's gravity accelerates infalling matter to tremendous speed. The force of its impact would likely destroy the object's component atoms, rendering all the matter identical, in most respects, to the rest of the neutron star.
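The spin-up that follows from conservation of angular momentum can be sketched with toy numbers; the pre-collapse core radius and rotation period below are assumptions made purely for illustration.

```python
# Spin-up by conservation of angular momentum (toy numbers).  Treating the
# core as a uniform sphere, I ~ M R^2, so with I * omega conserved the
# rotation period scales as P_new = P_old * (R_new / R_old)**2.
R_old = 1.0e7    # m, assumed pre-collapse core radius (~10,000 km)
R_new = 1.2e4    # m, neutron-star radius (~12 km)
P_old = 1000.0   # s, assumed pre-collapse core rotation period

P_new = P_old * (R_new / R_old) ** 2
print(P_new)     # ~1.4e-3 s: shrinking the radius ~1000-fold gives millisecond spin
```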
A neutron star has a mass of at least 1.1 solar masses (). The upper limit of mass for a neutron star is called the Tolman–Oppenheimer–Volkoff limit and is generally held to be around , but a recent estimate puts the upper limit at . The maximum observed mass of neutron stars is about for PSR J0740+6620 discovered in September, 2019. Compact stars below the Chandrasekhar limit of are generally white dwarfs whereas compact stars with a mass between and are expected to be neutron stars, but there is an interval of a few tenths of a solar mass where the masses of low-mass neutron stars and high-mass white dwarfs can overlap. It is thought that beyond the stellar remnant will overcome the strong force repulsion and neutron degeneracy pressure so that gravitational collapse will occur to produce a black hole, but the smallest observed mass of a stellar black hole is about . Between and , hypothetical intermediate-mass stars such as quark stars and electroweak stars have been proposed, but none have been shown to exist.
The temperature inside a newly formed neutron star is from around 10^11 to 10^12 kelvin. However, the huge number of neutrinos it emits carry away so much energy that the temperature of an isolated neutron star falls within a few years to around 10^6 kelvin. At this lower temperature, most of the light generated by a neutron star is in X-rays.
Some researchers have proposed a neutron star classification system using Roman numerals (not to be confused with the Yerkes luminosity classes for non-degenerate stars) to sort neutron stars by their mass and cooling rates: type I for neutron stars with low mass and cooling rates, type II for neutron stars with higher mass and cooling rates, and a proposed type III for neutron stars with even higher mass, approaching , and with higher cooling rates and possibly candidates for exotic stars.
Neutron stars have overall densities of to ( to times the density of the Sun), which is comparable to the approximate density of an atomic nucleus of . The neutron star's density varies from about in the crust—increasing with depth—to about or (denser than an atomic nucleus) deeper inside. A neutron star is so dense that one teaspoon (5 milliliters) of its material would have a mass over , about 900 times the mass of the Great Pyramid of Giza. In the enormous gravitational field of a neutron star, that teaspoon of material would weigh , which is 15 times what the Moon would weigh if it were placed on the surface of the Earth. The entire mass of the Earth at neutron star density would fit into a sphere of 305 m in diameter (the size of the Arecibo Observatory). The pressure increases from to from the inner crust to the center.
The equation of state of matter at such high densities is not precisely known because of the theoretical difficulties associated with extrapolating the likely behavior of quantum chromodynamics, superconductivity, and superfluidity of matter in such states. The problem is exacerbated by the empirical difficulties of observing the characteristics of any object that is hundreds of parsecs away, or farther.
A neutron star has some of the properties of an atomic nucleus, including density (within an order of magnitude) and being composed of nucleons. In popular scientific writing, neutron stars are therefore sometimes described as "giant nuclei". However, in other respects, neutron stars and atomic nuclei are quite different. A nucleus is held together by the strong interaction, whereas a neutron star is held together by gravity. The density of a nucleus is uniform, while neutron stars are predicted to consist of multiple layers with varying compositions and densities.
The magnetic field strength on the surface of neutron stars ranges from c. 10^4 to 10^11 tesla. These are orders of magnitude higher than in any other object: for comparison, a continuous 16 T field has been achieved in the laboratory and is sufficient to levitate a living frog due to diamagnetic levitation. Variations in magnetic field strengths are most likely the main factor that allows different types of neutron stars to be distinguished by their spectra, and explains the periodicity of pulsars.
The neutron stars known as magnetars have the strongest magnetic fields, in the range of 10^8 to 10^11 tesla, and have become the widely accepted hypothesis for neutron star types soft gamma repeaters (SGRs) and anomalous X-ray pulsars (AXPs). The magnetic energy density of a 10^8 T field is extreme, exceeding the mass−energy density of ordinary matter. Fields of this strength are able to polarize the vacuum to the point that the vacuum becomes birefringent. Photons can merge or split in two, and virtual particle-antiparticle pairs are produced. The field changes electron energy levels and atoms are forced into thin cylinders. Unlike in an ordinary pulsar, magnetar spin-down can be directly powered by its magnetic field, and the magnetic field is strong enough to stress the crust to the point of fracture. Fractures of the crust cause starquakes, observed as extremely luminous millisecond hard gamma ray bursts. The fireball is trapped by the magnetic field, and comes in and out of view when the star rotates, which is observed as a periodic soft gamma repeater (SGR) emission with a period of 5–8 seconds and which lasts for a few minutes.
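The statement about magnetic energy density follows from u = B²/(2μ₀); converting to an equivalent mass density with E = mc² shows it exceeding the density of any ordinary solid. A quick check:

```python
import math
# Magnetic energy density of a 10^8 T field and its mass-energy equivalent.
mu0 = 4 * math.pi * 1e-7    # vacuum permeability, T m / A
c   = 2.998e8               # m/s
B   = 1e8                   # T

u = B**2 / (2 * mu0)        # J/m^3
rho_equiv = u / c**2        # equivalent mass density, kg/m^3
print(u)                    # ~4e21 J/m^3
print(rho_equiv)            # ~4.4e4 kg/m^3, denser than any ordinary solid
```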
The origins of the strong magnetic field are as yet unclear. One hypothesis is that of "flux freezing", or conservation of the original magnetic flux during the formation of the neutron star. If an object has a certain magnetic flux over its surface area, and that area shrinks to a smaller area, but the magnetic flux is conserved, then the magnetic field would correspondingly increase. Likewise, a collapsing star begins with a much larger surface area than the resulting neutron star, and conservation of magnetic flux would result in a far stronger magnetic field. However, this simple explanation does not fully explain magnetic field strengths of neutron stars.
The gravitational field at a neutron star's surface is about times stronger than on Earth, at around . Such a strong gravitational field acts as a gravitational lens and bends the radiation emitted by the neutron star such that parts of the normally invisible rear surface become visible.
If the radius of the neutron star is 3"GM"/"c"^2 or less, then the photons may be trapped in an orbit, thus making the whole surface of that neutron star visible "from a single vantage point", along with destabilizing photon orbits at or below the 1 radius distance of the star.
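For a representative mass this threshold is a few kilometres; the sketch below assumes a 1.4-solar-mass star.

```python
# Photon-capture radius 3GM/c^2 for an assumed 1.4 solar-mass neutron star.
G     = 6.674e-11    # m^3 kg^-1 s^-2
c     = 2.998e8      # m/s
M_sun = 1.989e30     # kg
M     = 1.4 * M_sun  # assumed typical neutron-star mass

print(3 * G * M / c**2)   # ~6.2e3 m: photons can be trapped if the radius is below ~6 km
```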
A fraction of the mass of a star that collapses to form a neutron star is released in the supernova explosion from which it forms (from the law of mass–energy equivalence, "E" = "mc"^2). The energy comes from the gravitational binding energy of a neutron star.
Hence, the gravitational force of a typical neutron star is huge. If an object were to fall from a height of one meter on a neutron star 12 kilometers in radius, it would reach the ground at around 1400 kilometers per second. However, even before impact, the tidal force would cause spaghettification, breaking any sort of an ordinary object into a stream of material.
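The quoted impact speed follows from v = √(2gh); the sketch below assumes a surface gravity of 10^12 m/s^2, the lower end of the range given earlier.

```python
import math
# Impact speed after falling 1 m onto a neutron star surface,
# assuming a surface gravity of 1e12 m/s^2.
g = 1e12    # m/s^2 (assumed)
h = 1.0     # m

v = math.sqrt(2 * g * h)
print(v / 1e3)   # ~1400 km/s
```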
Because of the enormous gravity, time dilation between a neutron star and Earth is significant. For example, eight years could pass on the surface of a neutron star, yet ten years would have passed on Earth, not including the time-dilation effect of its very rapid rotation.
Neutron star relativistic equations of state describe the relation of radius vs. mass for various models. The most likely radii for a given neutron star mass are bracketed by models AP4 (smallest radius) and MS2 (largest radius). BE is the ratio of the gravitational binding energy mass equivalent to the observed neutron star gravitational mass of "M" kilograms with radius "R" meters. Given current values for the gravitational constant "G", the speed of light "c", and the solar mass, with star masses "M" commonly reported as multiples of one solar mass, the relativistic fractional binding energy of a neutron star can be written in terms of the compactness "GM"/("Rc"^2), as sketched below.
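A commonly cited approximation for this fractional binding energy, following Lattimer and Prakash, is given here as an assumed form; it reproduces the figures quoted in the next paragraph but is a sketch rather than the exact relation used by the radius–mass models above.

```latex
% Approximate fractional gravitational binding energy of a neutron star,
% written in terms of the compactness parameter beta.  This is an assumed,
% commonly cited fit, not necessarily the exact expression intended above.
\[
  \beta \;\equiv\; \frac{G M}{R c^{2}},
  \qquad
  \frac{BE}{M c^{2}} \;\approx\; \frac{0.6\,\beta}{1 - 0.5\,\beta}
\]
% Example: M = 2 M_sun and R = 10{,}970 m give beta ~ 0.27,
% so BE/(M c^2) ~ 0.187, i.e. about 18.7 percent.
```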
A neutron star of about two solar masses would not be more compact than a radius of 10,970 meters (AP4 model). Its mass fraction gravitational binding energy would then be 0.187, or −18.7% (exothermic). This is not near 0.6/2 = 0.3, or −30%.
The equation of state for a neutron star is not yet known. It is assumed that it differs significantly from that of a white dwarf, whose equation of state is that of a degenerate gas that can be described in close agreement with special relativity. However, with a neutron star the increased effects of general relativity can no longer be ignored. Several equations of state have been proposed (FPS, UU, APR, L, SLy, and others) and current research is still attempting to constrain the theories to make predictions of neutron star matter. This means that the relation between density and mass is not fully known, and this causes uncertainties in radius estimates. For example, a neutron star could have a radius of 10.7, 11.1, 12.1 or 15.1 kilometers (for EOS FPS, UU, APR or L respectively).
Current understanding of the structure of neutron stars is defined by existing mathematical models, but it might be possible to infer some details through studies of neutron-star oscillations. Asteroseismology, a study applied to ordinary stars, can reveal the inner structure of neutron stars by analyzing observed spectra of stellar oscillations.
Current models indicate that matter at the surface of a neutron star is composed of ordinary atomic nuclei crushed into a solid lattice with a sea of electrons flowing through the gaps between them. It is possible that the nuclei at the surface are iron, due to iron's high binding energy per nucleon. It is also possible that heavy elements, such as iron, simply sink beneath the surface, leaving only light nuclei like helium and hydrogen. If the surface temperature exceeds 10^6 kelvin (as in the case of a young pulsar), the surface should be fluid instead of the solid phase that might exist in cooler neutron stars (temperature < 10^6 kelvin).
The "atmosphere" of a neutron star is hypothesized to be at most several micrometers thick, and its dynamics are fully controlled by the neutron star's magnetic field. Below the atmosphere one encounters a solid "crust". This crust is extremely hard and very smooth (with maximum surface irregularities of ~5 mm), due to the extreme gravitational field.
Proceeding inward, one encounters nuclei with ever-increasing numbers of neutrons; such nuclei would decay quickly on Earth, but are kept stable by tremendous pressures. As this process continues at increasing depths, the neutron drip becomes overwhelming, and the concentration of free neutrons increases rapidly. In that region, there are nuclei, free electrons, and free neutrons. The nuclei become increasingly small (gravity and pressure overwhelming the strong force) until the core is reached, by definition the point where mostly neutrons exist. The expected hierarchy of phases of nuclear matter in the inner crust has been characterized as "nuclear pasta", with fewer voids and larger structures towards higher pressures.
The composition of the superdense matter in the core remains uncertain. One model describes the core as superfluid neutron-degenerate matter (mostly neutrons, with some protons and electrons). More exotic forms of matter are possible, including degenerate strange matter (containing strange quarks in addition to up and down quarks), matter containing high-energy pions and kaons in addition to neutrons, or ultra-dense quark-degenerate matter.
Neutron stars are detected from their electromagnetic radiation. Neutron stars are usually observed to pulse radio waves and other electromagnetic radiation, and neutron stars observed with pulses are called pulsars.
Pulsars' radiation is thought to be caused by particle acceleration near their magnetic poles, which need not be aligned with the rotational axis of the neutron star. It is thought that a large electrostatic field builds up near the magnetic poles, leading to electron emission. These electrons are magnetically accelerated along the field lines, leading to curvature radiation, with the radiation being strongly polarized towards the plane of curvature. In addition, high energy photons can interact with lower energy photons and the magnetic field for electron−positron pair production, which through electron–positron annihilation leads to further high energy photons.
The radiation emanating from the magnetic poles of neutron stars can be described as "magnetospheric radiation", in reference to the magnetosphere of the neutron star. It is not to be confused with "magnetic dipole radiation", which is emitted because the magnetic axis is not aligned with the rotational axis, with a radiation frequency the same as the neutron star's rotational frequency.
If the axis of rotation of the neutron star is different from the magnetic axis, external viewers will only see these beams of radiation whenever the magnetic axis points towards them during the neutron star rotation. Therefore, periodic pulses are observed, at the same rate as the rotation of the neutron star.
In addition to pulsars, non-pulsating neutron stars have also been identified, although they may have minor periodic variation in luminosity. This seems to be a characteristic of the X-ray sources known as Central Compact Objects in Supernova remnants (CCOs in SNRs), which are thought to be young, radio-quiet isolated neutron stars.
In addition to radio emissions, neutron stars have also been identified in other parts of the electromagnetic spectrum. This includes visible light, near infrared, ultraviolet, X-rays, and gamma rays. Pulsars observed in X-rays are known as X-ray pulsars if accretion-powered, while those identified in visible light are known as optical pulsars. The majority of neutron stars detected, including those identified in optical, X-ray, and gamma rays, also emit radio waves; the Crab Pulsar produces electromagnetic emissions across the spectrum. However, there exist neutron stars called radio-quiet neutron stars, with no radio emissions detected.
Neutron stars rotate extremely rapidly after their formation due to the conservation of angular momentum; in analogy to spinning ice skaters pulling in their arms, the slow rotation of the original star's core speeds up as it shrinks. A newborn neutron star can rotate many times a second.
Over time, neutron stars slow, as their rotating magnetic fields in effect radiate energy associated with the rotation; older neutron stars may take several seconds for each revolution. This is called "spin down". The rate at which a neutron star slows its rotation is usually constant and very small.
The periodic time ("P") is the rotational period, the time for one rotation of a neutron star. The spin-down rate, the rate of slowing of rotation, is then given the symbol Ṗ ("P"-dot), the derivative of "P" with respect to time. It is defined as periodic time increase per unit time; it is a dimensionless quantity, but can be given the units of s⋅s^−1 (seconds per second).
The spin-down rate ("P"-dot) of neutron stars usually falls within the range of 10^−22 to 10^−9 s⋅s^−1, with the shorter-period (or faster rotating) observable neutron stars usually having smaller "P"-dot. As a neutron star ages, its rotation slows (as "P" increases); eventually, the rate of rotation will become too slow to power the radio-emission mechanism, and the neutron star can no longer be detected.
"P" and "P"-dot allow minimum magnetic fields of neutron stars to be estimated. "P" and "P"-dot can be also used to calculate the "characteristic age" of a pulsar, but gives an estimate which is somewhat larger than the true age when it is applied to young pulsars.
"P" and "P"-dot can also be combined with neutron star's moment of inertia to estimate a quantity called "spin-down luminosity", which is given the symbol formula_9 ("E"-dot). It is not the measured luminosity, but rather the calculated loss rate of rotational energy that would manifest itself as radiation. For neutron stars where the spin-down luminosity is comparable to the actual luminosity, the neutron stars are said to be "rotation powered". The observed luminosity of the Crab Pulsar is comparable to the spin-down luminosity, supporting the model that rotational kinetic energy powers the radiation from it. With neutron stars such as magnetars, where the actual luminosity exceeds the spin-down luminosity by about a factor of one hundred, it is assumed that the luminosity is powered by magnetic dissipation, rather than being rotation powered.
"P" and "P"-dot can also be plotted for neutron stars to create a "P"–"P"-dot diagram. It encodes a tremendous amount of information about the pulsar population and its properties, and has been likened to the Hertzsprung–Russell diagram in its importance for neutron stars.
Neutron star rotational speeds can increase, a process known as spin up. Sometimes neutron stars absorb orbiting matter from companion stars, increasing the rotation rate and reshaping the neutron star into an oblate spheroid. This causes an increase in the rate of rotation of the neutron star of over a hundred times per second in the case of millisecond pulsars.
The most rapidly rotating neutron star currently known, PSR J1748-2446ad, rotates at 716 revolutions per second. A 2007 paper reported the detection of an X-ray burst oscillation, which provides an indirect measure of spin, of 1122 Hz from the neutron star XTE J1739-285, suggesting 1122 rotations a second. However, at present, this signal has only been seen once, and should be regarded as tentative until confirmed in another burst from that star.
Sometimes a neutron star will undergo a glitch, a sudden small increase of its rotational speed or spin up. Glitches are thought to be the effect of a starquake—as the rotation of the neutron star slows, its shape becomes more spherical. Due to the stiffness of the "neutron" crust, this happens as discrete events when the crust ruptures, creating a starquake similar to earthquakes. After the starquake, the star will have a smaller equatorial radius, and because angular momentum is conserved, its rotational speed has increased.
A starquake occurring in a magnetar, with a resulting glitch, is the leading hypothesis for the gamma-ray sources known as soft gamma repeaters.
Recent work, however, suggests that a starquake would not release sufficient energy for a neutron star glitch; it has been suggested that glitches may instead be caused by transitions of vortices in the theoretical superfluid core of the neutron star from one metastable energy state to a lower one, thereby releasing energy that appears as an increase in the rotation rate.
An "anti-glitch", a sudden small decrease in rotational speed, or spin down, of a neutron star has also been reported. It occurred in the magnetar 1E 2259+586, that in one case produced an X-ray luminosity increase of a factor of 20, and a significant spin-down rate change. Current neutron star models do not predict this behavior. If the cause was internal, it suggests differential rotation of solid outer crust and the superfluid component of the magnetar's inner structure.
At present, there are about 2,000 known neutron stars in the Milky Way and the Magellanic Clouds, the majority of which have been detected as radio pulsars. Neutron stars are mostly concentrated along the disk of the Milky Way, although the spread perpendicular to the disk is large because the supernova explosion process can impart high translational speeds (400 km/s) to the newly formed neutron star.
Some of the closest known neutron stars are RX J1856.5−3754, which is about 400 light-years from Earth, and PSR J0108−1431 about 424 light years. RX J1856.5-3754 is a member of a close group of neutron stars called The Magnificent Seven. Another nearby neutron star that was detected transiting the backdrop of the constellation Ursa Minor has been nicknamed Calvera by its Canadian and American discoverers, after the villain in the 1960 film "The Magnificent Seven". This rapidly moving object was discovered using the ROSAT/Bright Source Catalog.
Neutron stars are only detectable with modern technology during the earliest stages of their lives (almost always less than 1 million years) and are vastly outnumbered by older neutron stars that would only be detectable through their blackbody radiation and gravitational effects on other stars.
About 5% of all known neutron stars are members of a binary system. The formation and evolution of binary neutron stars can be a complex process. Neutron stars have been observed in binaries with ordinary main-sequence stars, red giants, white dwarfs, or other neutron stars. According to modern theories of binary evolution, it is expected that neutron stars also exist in binary systems with black hole companions. The merger of binaries containing two neutron stars, or a neutron star and a black hole, has been observed through the emission of gravitational waves.
Binary systems containing neutron stars often emit X-rays, which are emitted by hot gas as it falls towards the surface of the neutron star. The source of the gas is the companion star, the outer layers of which can be stripped off by the gravitational force of the neutron star if the two stars are sufficiently close. As the neutron star accretes this gas, its mass can increase; if enough mass is accreted, the neutron star may collapse into a black hole.
The distance between two neutron stars in a close binary system is observed to shrink as gravitational waves are emitted. Ultimately, the neutron stars will come into contact and coalesce.
The coalescence of binary neutron stars is one of the leading models for the origin of short gamma-ray bursts. Strong evidence for this model came from the observation of a kilonova associated with the short-duration gamma-ray burst GRB 130603B, and finally confirmed by detection of gravitational wave GW170817 and short GRB 170817A by LIGO, Virgo, and 70 observatories covering the electromagnetic spectrum observing the event. The light emitted in the kilonova is believed to come from the radioactive decay of material ejected in the merger of the two neutron stars. This material may be responsible for the production of many of the chemical elements beyond iron, as opposed to the supernova nucleosynthesis theory.
Neutron stars can host exoplanets. These can be original, circumbinary, captured, or the result of a second round of planet formation. Pulsars can also strip the atmosphere off from a star, leaving a planetary-mass remnant, which may be understood as a chthonian planet or a stellar object depending on interpretation. For pulsars, such pulsar planets can be detected with the pulsar timing method, which allows for high precision and detection of much smaller planets than with other methods. Two systems have been definitively confirmed. The first exoplanets ever to be detected were the three planets Draugr, Poltergeist and Phobetor around PSR B1257+12, discovered in 1992–1994. Of these, Draugr is the smallest exoplanet ever detected, at a mass of twice that of the Moon. Another system is PSR B1620−26, where a circumbinary planet orbits a neutron star-white dwarf binary system. Also, there are several unconfirmed candidates. Pulsar planets receive little visible light, but massive amounts of ionizing radiation and high-energy stellar wind, which makes them rather hostile environments.
At the meeting of the American Physical Society in December 1933 (the proceedings were published in January 1934), Walter Baade and Fritz Zwicky proposed the existence of neutron stars, less than two years after the discovery of the neutron by James Chadwick. In seeking an explanation for the origin of a supernova, they tentatively proposed that in supernova explosions ordinary stars are turned into stars that consist of extremely closely packed neutrons that they called neutron stars. Baade and Zwicky correctly proposed at that time that the release of the gravitational binding energy of the neutron stars powers the supernova: "In the supernova process, mass in bulk is annihilated". Neutron stars were thought to be too faint to be detectable and little work was done on them until November 1967, when Franco Pacini pointed out that if the neutron stars were spinning and had large magnetic fields, then electromagnetic waves would be emitted. Unbeknown to him, radio astronomer Antony Hewish and his research assistant Jocelyn Bell at Cambridge were shortly to detect radio pulses from stars that are now believed to be highly magnetized, rapidly spinning neutron stars, known as pulsars.
In 1965, Antony Hewish and Samuel Okoye discovered "an unusual source of high radio brightness temperature in the Crab Nebula". This source turned out to be the Crab Pulsar that resulted from the great supernova of 1054.
In 1967, Iosif Shklovsky examined the X-ray and optical observations of Scorpius X-1 and correctly concluded that the radiation comes from a neutron star at the stage of accretion.
In 1967, Jocelyn Bell Burnell and Antony Hewish discovered regular radio pulses from PSR B1919+21. This pulsar was later interpreted as an isolated, rotating neutron star. The energy source of the pulsar is the rotational energy of the neutron star. The majority of known neutron stars (about 2000, as of 2010) have been discovered as pulsars, emitting regular radio pulses.
In 1971, Riccardo Giacconi, Herbert Gursky, Ed Kellogg, R. Levinson, E. Schreier, and H. Tananbaum discovered 4.8 second pulsations in an X-ray source in the constellation Centaurus, Cen X-3. They interpreted this as resulting from a rotating hot neutron star. The energy source is gravitational and results from a rain of gas falling onto the surface of the neutron star from a companion star or the interstellar medium.
In 1974, Antony Hewish was awarded the Nobel Prize in Physics "for his decisive role in the discovery of pulsars" without Jocelyn Bell, who shared in the discovery.
In 1974, Joseph Taylor and Russell Hulse discovered the first binary pulsar, PSR B1913+16, which consists of two neutron stars (one seen as a pulsar) orbiting around their center of mass. Albert Einstein's general theory of relativity predicts that massive objects in short binary orbits should emit gravitational waves, and thus that their orbit should decay with time. This was indeed observed, precisely as general relativity predicts, and in 1993, Taylor and Hulse were awarded the Nobel Prize in Physics for this discovery.
In 1982, Don Backer and colleagues discovered the first millisecond pulsar, PSR B1937+21. This object spins 642 times per second, a value that placed fundamental constraints on the mass and radius of neutron stars. Many millisecond pulsars were later discovered, but PSR B1937+21 remained the fastest-spinning known pulsar for 24 years, until PSR J1748-2446ad (which spins more than 700 times a second) was discovered.
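As a rough, back-of-the-envelope illustration (not a calculation quoted in the text, and the 1.4 solar-mass figure is an assumption), a spin of 642 times per second bounds the star's radius because material at the equator must remain gravitationally bound, i.e. the rotation must stay below the Keplerian break-up rate:

```latex
% Illustrative estimate only; assumes M ~ 1.4 solar masses (not a value from the source).
\Omega^{2} R \lesssim \frac{GM}{R^{2}}
\;\Longrightarrow\;
R \lesssim \left(\frac{GM}{\Omega^{2}}\right)^{1/3}
\approx \left(\frac{6.7\times10^{-11}\times 2.8\times10^{30}}{(2\pi\times 642)^{2}}\right)^{1/3}\,\mathrm{m}
\approx 2.3\times10^{4}\,\mathrm{m} \approx 23\ \mathrm{km}.
```

A radius of no more than a few tens of kilometres for more than a solar mass is the kind of constraint the discovery implied.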
In 2003, Marta Burgay and colleagues discovered the first double neutron star system where both components are detectable as pulsars, PSR J0737−3039. The discovery of this system allows a total of 5 different tests of general relativity, some of these with unprecedented precision.
In 2010, Paul Demorest and colleagues measured the mass of the millisecond pulsar PSR J1614−2230 to be , using Shapiro delay. This was substantially higher than any previously measured neutron star mass (, see PSR J1903+0327), and places strong constraints on the interior composition of neutron stars.
In 2013, John Antoniadis and colleagues measured the mass of PSR J0348+0432 to be , using white dwarf spectroscopy. This confirmed the existence of such massive stars using a different method. Furthermore, this allowed, for the first time, a test of general relativity using such a massive neutron star.
In August 2017, LIGO and Virgo made the first detection of gravitational waves produced by colliding neutron stars.
In October 2018, astronomers reported that GRB 150101B, a gamma-ray burst event detected in 2015, may be directly related to the historic GW170817 and associated with the merger of two neutron stars. The similarities between the two events, in terms of gamma-ray, optical and X-ray emissions, as well as in the nature of the associated host galaxies, are "striking", suggesting the two separate events may both be the result of the merger of neutron stars, and both may be kilonovae, which may be more common in the universe than previously understood, according to the researchers.
In July 2019, astronomers reported that a new method to determine the Hubble constant, and to resolve the discrepancy between earlier methods, had been proposed based on the mergers of pairs of neutron stars, following the detection of the neutron star merger GW170817. Their measurement of the Hubble constant is (km/s)/Mpc. | https://en.wikipedia.org/wiki?curid=21869 |
Nassau, Bahamas
Nassau () is the capital and largest city of The Bahamas. With a population of 274,400 as of 2016, or just over 70% of the entire population of the Bahamas (≈391,000), Nassau is commonly defined as a primate city, dwarfing all other towns in the country. It is the centre of commerce, education, law, administration and media of the country.
Lynden Pindling International Airport, the major airport for the Bahamas, is located about west of Nassau city centre, and has daily flights to major cities in Canada, the Caribbean, the United Kingdom and the United States. The city is located on the island of New Providence, which functions much like a business district.
Nassau is the site of the House of Assembly and various judicial departments and was considered historically to be a stronghold of pirates. The city was named in honour of William III of England, Prince of Orange-Nassau.
Nassau's modern growth began in the late eighteenth century, with the influx of thousands of Loyalists and their slaves to the Bahamas following the American War of Independence. Many of them settled in Nassau (then and still the commerce capital of the Bahamas) and eventually came to outnumber the original inhabitants.
As the population of Nassau grew, so did its populated areas. Today the city dominates the entire island and its satellite, Paradise Island. However, until the post-Second World War era, the outer suburbs scarcely existed. Most of New Providence was uncultivated bush until Loyalists were resettled there following the American Revolutionary War; they established several plantations, such as Clifton and Tusculum. Slaves were imported as labour.
After the British abolished the international slave trade in 1807, they resettled thousands of Africans liberated from slave ships by the Royal Navy on New Providence (at Adelaide Village and Gambier Village), along with other islands such as Grand Bahama, Exuma, Abaco and Inagua. In addition, slaves freed from American ships, such as the Creole case in 1841, were allowed to settle there. The largest concentration of Africans historically lived in the "Over-the-Hill" suburbs of Grants Town and Bain Town to the south of the city of Nassau, while most of the inhabitants of European descent lived on the island's northern coastal ridges.
The town that would be called Nassau was founded in 1670 by British noblemen who brought British settlers with them to New Providence. They built a fort, and named it Charles Town in honour of England’s King Charles II. During this time there were frequent wars with the Spanish, and Charles Town was used as a base for privateering against them. In 1684 the town was burned to the ground during the Raid on Charles Town. It was rebuilt in 1695 under Governor Nicholas Trott and renamed Nassau in honour of William of Orange. William was the Dutch Stadtholder ("stadhouder" in Dutch), and after 1689 he was William III, the King of England, Scotland and Ireland. William belonged to a branch of the House of Nassau, from which the city takes its name. The name Nassau ultimately derives from the town of Nassau in Germany.
Lacking effective governors after Trott, Nassau fell on hard times. In 1703 Spanish and French allied forces briefly occupied Nassau. Moreover, Nassau suffered greatly during the War of the Spanish Succession, witnessing Spanish incursions in 1703, 1704 and 1706. From 1703 to 1718 there was no legitimate governor in the colony. Thomas Walker was the island's last remaining appointed official and, although evidence is scarce, it appears that he was acting in the role of deputy governor upon Benjamin Hornigold's arrival in 1713. By this time, the sparsely settled Bahamas, and New Providence in particular, had become a pirate haven. The Governor of Bermuda stated that there were over 1,000 pirates in Nassau and that they outnumbered the mere hundred inhabitants of the town. They proclaimed Nassau a pirate republic, recognising the island's prosperous state in which it offered fresh fruit, meat and water and plenty of protection amid its waterways. Nassau's harbour was tailor-made for defence and could take around 500 vessels, though it was too shallow to accept large battleships. Benjamin Hornigold, along with his great rival Henry Jennings, became the unofficial overlord of a veritable pirate republic which played host to the self-styled Flying Gang. Examples of pirates that used Nassau as their base are Charles Vane, Thomas Barrow (who declared himself "Governor of New Providence"), Benjamin Hornigold, Calico Jack Rackham, Anne Bonny, Mary Read, and the infamous Edward Teach, better known as "Blackbeard".
In 1718, the British sought to regain control of the islands and appointed Captain Woodes Rogers as Royal governor. He successfully clamped down on the pirates, reformed the civil administration, and restored commerce. Rogers cleaned up Nassau and rebuilt the fort, using his own wealth to try to overcome problems. In 1720, the Spanish attacked Nassau but failed to capture the town and the island.
During the wars in the Thirteen Colonies, Nassau experienced an economic boom. With funds from privateering, a new fort, street lights and over 2300 sumptuous houses were built and Nassau was extended. In addition to this, mosquito breeding swamps were filled.
In 1776, the Battle of Nassau resulted in a brief occupation by American Continental Marines during the American War of Independence, in which the Marines staged their first amphibious raid, on Fort Montague, after attempting to sneak up on Fort Nassau. In 1778, after an overnight invasion, American raiders led by Captain Rathburn left with ships, gunpowder and military stores after stopping in Nassau for only two weeks. In 1782 Spain captured Nassau for the last time when Don Juan de Cagigal, governor-general of Cuba, attacked New Providence with 5,000 men. Andrew Deveaux, an American Loyalist who resettled on the island, set forth and recaptured the island for the British Crown with just 220 men and 150 muskets to face a force of 600 trained soldiers.
Lord Dunmore governed the colony from 1787 to 1796. He oversaw the construction of Fort Charlotte and Fort Fincastle in Nassau.
During the American Civil War, Nassau served as a port for blockade runners making their way to and from ports along the southern Atlantic Coast for continued trade with the Confederacy.
In the 1920s and 1930s, Nassau profited from Prohibition in the United States.
Located on New Providence Island, Nassau's harbour has a blend of old world and colonial architecture, and a busy port. The tropical climate and natural environment of the Bahamas have made Nassau an attractive tourist destination.
Nassau developed directly behind the port area. New Providence provides 200 km² of relatively flat and low-lying land intersected by low ridges (none of which restricted settlement). In the centre of the island there are several shallow lakes that are tidally connected.
The city's proximity to the United States (290 km east-southeast of Miami, Florida) has contributed to its popularity as a holiday resort, especially after the United States imposed a ban on travel to Cuba in 1963. The Atlantis resort on nearby Paradise Island accounts for more tourist arrivals to the city than any other hotel property of Nassau. The mega-resort employs over 6,000 Bahamians, and is the largest employer outside government.
Nassau has a tropical savanna climate (Köppen: "Aw"), bordering on a tropical monsoon climate (Köppen: "Am"), with hot wet summers, and mild dry winters. Temperatures are relatively consistent throughout the course of the year. During the wet season from May through October, average daytime high temperatures are , while during the dry season from November through April daytime temperatures are between , rarely falling below .
During the 19th century, Nassau became urbanized, attracting rural residents. Growth since the 1950s has been outwards from the town. The 1788 heart of Nassau was just a few blocks of buildings between Government House and the harbour, but the town gradually expanded east to Malcolm's Park, south to Wulff Road, and west to Nassau Street. Grants Town and Bain Town south of the city became the main residential areas for those of African descent, and until about 30 years ago was the most populous part of the city.
Those of European descent built houses along the shore, east as far as Fort Montagu, west as far as Saunders Beach, and along the ridge edging the city. During the 20th century, the city spread east to Village Road and west to Fort Charlotte and Oakes Field. This semicircle of residential development was the main area of settlement until after the Second World War, and marks a distinct phase in the city's expansion, the outer boundary to this zone being the effective limit of the continuous built-up area. The wealthier residents continued to spread east (to East End Point) and west (to Lyford Cay).
In the last 40 years, residential development has been quite different. It has consisted mainly of planned middle-income sub-divisions. Since the 1960s, government has sponsored low-cost housing developments at Yellow Elder, Elizabeth Estates, and Pinewood Gardens, in the outer ring.
The city centre is the hub for all activities in Nassau. Thousands of people visit daily, to shop, dine, sightsee and to enjoy the tropical climate of the city. While the busiest part of central city is the Bay Street thoroughfare and the Woodes Rogers Walk, located across the street from the port and parallel to Bay, the area extends for several blocks in each direction. It starts at West Bay, around the Junkanoo Beach area. A few hotels and restaurants are located on West Bay.
The next landmark is the British Colonial Hotel, which marks the beginning of Bay Street proper. Pirates of Nassau Museum is just across from the British Colonial Hilton. The next few blocks of Bay Street are wall-to-wall boutiques, with a few restaurants and clubs interspersed throughout the retailers.
Historical landmarks are also in the vicinity, including Vendue House, Christ Church Cathedral, and the Nassau Public Library. Although the tourist part of the city centre peters out after about seven blocks, smaller, more local shops are located down Bay Street. At this point, Bay Street becomes East Bay.
The Straw Market is a tourist destination in the city centre. A new market was opened in 2011 after a fire in 2001 destroyed the original Fish, Vegetable and Straw Market. The market is open on all sides, and contains a number of Bahamian craft stores.
Cable Beach is recognised as the hotel district of Nassau. Five hotels—two of which are all-inclusive—are located on this strip. The area is also known for its dining, the Crystal Palace Casino, and the golden sands of Cable Beach. Most of the area's restaurants are located either in the hotels or across the street. There is little to no nightlife. There is a bit of shopping, most of it located in the Wyndham. The commercial future of Cable Beach is being re-imagined with the development of Baha Mar, a resort and casino project that will bring more than 2,000 hotel rooms and the largest gaming and convention facility in the Caribbean to this section of New Providence Island. As of April 2017, it is officially open, but not yet complete.
Nassau had a population of 128,420 females and 117,909 males and was home to 70,222 households with an average family size of 3.5 according to the 2010 census. Nassau's large population in relation to the remainder of the Bahamas is the result of waves of immigration from the Family Islands to the capital. Consequently, this has led to the decline in the population of the lesser developed islands and the rapid growth of Nassau.
In January 2018, the U.S. Department of State issued the latest in a series of travel advisories due to violent crime. Tourists are often targeted, and armed robbery has increased on all of New Providence.
Lynden Pindling International Airport (formerly Nassau International Airport) is located on the western side of Nassau. New Providence Airport on Paradise Island was closed in 1999; its runway was removed and the land integrated into the resort on the island.
Ferries (boats) provide water travel around Nassau to the surrounding islands, namely Paradise Island. Prince George Wharf is the main port in the city that serves cruise ships with ports of call in Nassau. Transportation and shipping around the Family Islands is primarily through mailboats based at Potters Cay. International shipping is done through the Arawak Port Department on Arawak Cay. High speed excursions to Exuma, Spanish Wells and Harbour Island are available daily.
Public jitney buses and taxis provide transport in and around Nassau. Rental cars are also available in the city and at the airport.
The major road in Nassau for tourists is Bay Street, which runs the entire length of the island from east to west and offers beachfront views; the downtown area and the cruise ship docks are within walking distance.
The Bahamas is a left-hand traffic country, but many cars are imported from the US in left-hand drive.
Nassau has been recognized as a part of the UNESCO Creative Cities Network as a city of Crafts and Folk Art. It is one of only three Caribbean cities to receive this honour.
The city's chief festival is Junkanoo, an energetic, colourful street parade of brightly costumed people dancing to the rhythmic accompaniment of cowbells, drums and whistles. The name 'Junkanoo' is said to derive from its founder, 'John Kanoo'. The celebration occurs on December 26, July 10 and January 1, beginning in the early hours of the morning (1:00 a.m.) and ending around 10 a.m. At the end of the Junkanoo procession, judges award cash prizes for the best music, costumes, and overall group presentation. Participants spend all year preparing their handmade costumes from coloured crepe paper and cardboard.
Nassau was the main setting for the Starz Network show "Black Sails" (2014-2017), although the production was actually filmed in and around South Africa.
Nassau was featured as an important location in several movies, including the Beatles film "Help!", the James Bond films "Thunderball" (1965) and "Never Say Never Again" (1983, a remake of "Thunderball"), and part of the action in "Casino Royale" (2006). In 1981, it was used as a location for the ocean scene (portrayed in the film as being in Greece) in "For Your Eyes Only".
Several other late-20th- and 21st-century movies have been set here, including "After the Sunset", "Into the Blue" (2005), and "Flipper" (1996).
It hosted the Miss Universe 2009 pageant.
Nassau was featured as a primary location in the 2013 video game "".
Nassau Town is mentioned in "Sloop John B", a Bahamian folk song. Since the early 1950s there have been many recordings of the song, the best known being by The Beach Boys on their "Pet Sounds" album.
Nassau has six sister cities worldwide: | https://en.wikipedia.org/wiki?curid=21871 |
Nastassja Kinski
Nastassja Aglaia Kinski (née Nakszynski; born 24 January 1961) is a German actress and former model who has appeared in more than 60 films in Europe and the United States. Her worldwide breakthrough was with "Stay as You Are" (1978). She then came to global prominence with her Golden Globe Award-winning performance as the title character in the Roman Polanski-directed film "Tess" (1979). Other notable films in which she acted include the erotic horror film "Cat People" (1982), the Wim Wenders dramas "Paris, Texas" (1984) and "Faraway, So Close!" (1993), and the biographical drama film, "An American Rhapsody" (2001). Kinski is fluent in four languages: German, English, French and Italian.
Kinski was born in West Berlin as Nastassja Aglaia Nakszynski. She is the daughter of renowned German actor Klaus Kinski and his second wife, actress Ruth Brigitte Tocki. She is of partial Polish descent, for her grandfather Bruno Nakszynski was a Germanized ethnic Pole. Kinski has two half-siblings: Pola and Nikolai Kinski. Her parents divorced in 1968. After the age of 10, Kinski rarely saw her father. Her mother struggled financially to support them; they eventually lived in a commune in Munich.
In a 1999 interview, Kinski denied that her father had molested her as a child, but said he had abused her "in other ways". In 2013, when interviewed about the allegations of sexual abuse made by her half-sister Pola Kinski, she confirmed that he had attempted the same with her, but did not succeed. She said, "He was no father. Ninety-nine percent of the time I was terrified of him. He was so unpredictable that the family lived in constant terror." When asked what she would say to him now, if she had the chance, she replied, "I would do anything to put him behind bars for life. I am glad he is no longer alive."
Kinski began working as a model as a teenager in Germany. Actress Lisa Kreuzer of the German New Wave helped get her the role of the mute Mignon in Wim Wenders' 1975 film "The Wrong Move", in which, at the age of 13, she was depicted topless. She later played one of the leading roles in Wenders' film "Paris, Texas" (1984) and appeared in his "Faraway, So Close!" (1993).
In 1976, while still a teenager, Kinski had her first two major roles: in Wolfgang Petersen's feature-length episode "Reifezeugnis" of the German TV crime series "Tatort", and in the British horror film "To the Devil a Daughter" (1976), produced by Hammer Film Productions, which was released in the UK just 40 days after Kinski's fifteenth birthday, making it a virtual certainty she was only fourteen when her scenes were shot (including full frontal nudity). Regarding her early films, Kinski has stated that she felt exploited by the industry. In an interview with "W", she said, "If I had had somebody to protect me or if I had felt more secure about myself, I would not have accepted certain things. Nudity things. And inside it was just tearing me apart."
In 1978 Kinski starred in the Italian romance "Stay as You Are" ("Così come sei") with Marcello Mastroianni, gaining her recognition in the United States after New Line Cinema released it there in December 1979. "Time" wrote that she was "simply ravishing, genuinely sexy and high-spirited without being painfully aggressive about it." The film also received a major international release from Columbia Pictures.
Kinski met the director Roman Polanski at a party in 1976. He urged her to study method acting with Lee Strasberg in the United States, and she was offered the title role in Polanski's upcoming film, "Tess" (1979). In 1978, Kinski underwent extensive preparation for the portrayal of an English peasant girl, which included acquiring a Dorset accent through elocution studies.
The film was nominated for six awards, including Best Picture, at the 53rd Academy Awards, and won three.
In 1981 Richard Avedon photographed Kinski with a Burmese python coiled around her nude body. The image, which first appeared in the October 1981 issue of US "Vogue", was released as a poster and became a best-seller, further confirming her status as a sex symbol.
In 1982 she starred in Francis Ford Coppola's romantic musical "One from the Heart", her first film made in the United States. "Texas Monthly" described her as acting "as a Felliniesque circus performer to represent the twinkling evanescence of Eros." The film failed at the box office and was a major loss for Coppola's new Zoetrope Studios. That year, she was also in the erotic horror movie "Cat People". Dudley Moore's comedy "Unfaithfully Yours" and an adaptation of John Irving's "The Hotel New Hampshire" followed in 1984.
Kinski reteamed with Wenders for the 1984 film "Paris, Texas". One of her most acclaimed films to date, it won the top award at the Cannes Film Festival. Throughout the 1980s, Kinski split her time between Europe and the United States, making "Moon in the Gutter" (1983), "Harem" (1985) and "Torrents of Spring" (1989) in Europe, and "Exposed" (1983), "Maria's Lovers" (1984), and "Revolution" (1985) in the United States.
During the 1990s Kinski appeared in a number of American films, including the action movie "Terminal Velocity" opposite Charlie Sheen, the Mike Figgis 1997 adultery tale "One Night Stand", "Your Friends & Neighbors" (1998), John Landis's "Susan's Plan" (1998), and "The Lost Son" (1999).
Her most recent films include David Lynch's "Inland Empire" (2006) and Rotimi Rainwater's "Sugar" (2013). In 2016, she competed in the German "Let's Dance" show.
In 1976, when Kinski was aged 15, she reportedly began a romantic relationship with director Roman Polanski, who at the time was 43. In a 1994 interview with Diane Sawyer, Polanski said on camera: "So what about Nastassja Kinski? She was young and we had a love affair." However, in a 1999 interview in "The Guardian", Kinski was quoted as saying that there was no affair and that, "There was a flirtation. There could have been a seduction, but there was not. He had respect for me."
In the late 1970s, Kinski was roommates with a pre-fame Demi Moore. In her 2019 memoir "Inside Out", Moore wrote: "We know each other in a way that no one else could."
Kinski has three children by three different men. Her first child, son Aljosha Nakszynski (born June 29, 1984), was fathered by actor Vincent Spano, her co-star in "Maria's Lovers". On September 10, 1984, Kinski married Egyptian filmmaker Ibrahim Moussa, with whom she had a daughter (born March 2, 1986). The marriage was dissolved in 1992. From 1992 until 1995, Kinski lived with musician Quincy Jones, though she kept her own apartment on Hilgard Avenue, near UCLA, at the time. They had a daughter, Kenya Julia Niambi Sarah Jones (born February 9, 1993), a model known professionally as Kenya Kinski-Jones.
In 1997, Kinski dated married producer Jonathan D. Krane during a brief separation from his wife, actress Sally Kellerman. Over the course of her career, Kinski has also been romantically linked with Paul Schrader, Jean-Jacques Beineix, Rob Lowe, Jon Voight, Gérard Depardieu, Dudley Moore, Milos Forman and Wim Wenders. As of 2012, she was dating actor Rick Yune.
In 2001 Kinski stated in an interview in "The Daily Telegraph" that she was affected by the sleep disorder narcolepsy. | https://en.wikipedia.org/wiki?curid=21873 |
Nuremberg trials
The Nuremberg trials () were a series of military tribunals held after World War II by the Allied forces under international law and the laws of war. The trials were most notable for the prosecution of prominent members of the political, military, judicial, and economic leadership of Nazi Germany, who planned, carried out, or otherwise participated in the Holocaust and other war crimes. The trials were held in Nuremberg, Germany, and their decisions marked a turning point between classical and contemporary international law.
The first and best known of the trials was that of the major war criminals before the International Military Tribunal (IMT). It was described as "the greatest trial in history" by Sir Norman Birkett, one of the British judges present throughout. Held between 20 November 1945 and 1 October 1946, the Tribunal was given the task of trying 24 of the most important political and military leaders of the Third Reich. Primarily treated here is the first trial, conducted by the International Military Tribunal. Further trials of lesser war criminals were conducted under Control Council Law No. 10 at the U.S. Nuremberg Military Tribunal (NMT), which included the Doctors' trial and the Judges' Trial.
The categorization of the crimes and the constitution of the court represented a juridical advance that would be followed afterward by the United Nations for the development of an international jurisprudence in matters of war crimes, crimes against humanity, and wars of aggression, and led to the creation of the International Criminal Court. For the first time in international law, the Nuremberg indictments also mention genocide (count three, war crimes: "the extermination of racial and national groups, against the civilian populations of certain occupied territories in order to destroy particular races and classes of people and national, racial, or religious groups, particularly Jews, Poles, and Gypsies and others.")
A precedent for trying those accused of war crimes had been set at the end of World War I in the Leipzig War Crimes Trials held in May to July 1921 before the "Reichsgericht" (German Supreme Court) in Leipzig, although these had been on a very limited scale and largely regarded as ineffectual. At the beginning of 1940, the Polish government-in-exile asked the British and French governments to condemn the German invasion of their country. The British initially declined to do so; however, in April 1940, a joint declaration was issued by the British, French and Polish. Relatively bland because of Anglo-French reservations, it proclaimed the trio's "desire to make a formal and public protest to the conscience of the world against the action of the German government whom they must hold responsible for these crimes which cannot remain unpunished."
Three-and-a-half years later, the stated intention to punish the Germans was much more trenchant. On 1 November 1943, the Soviet Union, the United Kingdom and the United States published their "Declaration on German Atrocities in Occupied Europe", which gave a "full warning" that, when the Nazis were defeated, the Allies would "pursue them to the uttermost ends of the earth ... in order that justice may be done. ... The above declaration is without prejudice to the case of the major war criminals whose offences have no particular geographical location and who will be punished by a joint decision of the Government of the Allies." This intention by the Allies to dispense justice was reiterated at the Yalta Conference and at Potsdam in 1945.
British War Cabinet documents, released on 2 January 2006, showed that as early as December 1944 the Cabinet had discussed their policy for the punishment of the leading Nazis if captured. The British Prime Minister, Winston Churchill, had then advocated a policy of summary execution in some circumstances, with the use of an Act of Attainder to circumvent legal obstacles, being dissuaded from this only by talks with US and Soviet leaders later in the war.
In late 1943, during the Tripartite Dinner Meeting at the Tehran Conference, the Soviet leader, Joseph Stalin, proposed executing 50,000–100,000 German staff officers. US President Franklin D. Roosevelt joked that perhaps 49,000 would do. Churchill, believing them to be serious, denounced the idea of "the cold blooded execution of soldiers who fought for their country" and that he would rather be "taken out in the courtyard and shot" himself than partake in any such action. However, he also stated that war criminals must pay for their crimes and that, in accordance with the Moscow Document which he himself had written, they should be tried at the places where the crimes were committed. Churchill was vigorously opposed to executions "for political purposes." According to the minutes of a meeting between Roosevelt and Stalin at Yalta, on 4 February 1945, at the Livadia Palace, President Roosevelt "said that he had been very much struck by the extent of German destruction in Crimea and therefore he was more bloodthirsty in regard to the Germans than he had been a year ago, and he hoped that Marshal Stalin would again propose a toast to the execution of 50,000 officers of the German Army."
Henry Morgenthau Jr., US Secretary of the Treasury, suggested a plan for the total denazification of Germany; this was known as the Morgenthau Plan. The plan advocated the forced de-industrialisation of Germany and the summary execution of so-called "arch-criminals", i.e. the major war criminals. Roosevelt initially supported this plan, and managed to convince Churchill to support it in a less drastic form. Later, details were leaked generating widespread condemnation by the nation's newspapers. Roosevelt, aware of strong public disapproval, abandoned the plan, but did not adopt an alternative position on the matter. The demise of the Morgenthau Plan created the need for an alternative method of dealing with the Nazi leadership. The plan for the "Trial of European War Criminals" was drafted by Secretary of War Henry L. Stimson and the War Department. Following Roosevelt's death in April 1945, the new president, Harry S. Truman, gave strong approval for a judicial process. After a series of negotiations between Britain, the US, Soviet Union, and France, details of the trial were worked out. The trials were to commence on 20 November 1945, in the Bavarian city of Nuremberg.
On 20 April 1942, representatives from the nine countries occupied by Germany met in London to draft the "Inter-Allied Resolution on German War Crimes". At the meetings in Tehran (1943), Yalta (1945), and Potsdam (1945), the three major wartime powers, the United Kingdom, United States, and the Soviet Union, agreed on the format of punishment for those responsible for war crimes during World War II. France was also awarded a place on the tribunal. The legal basis for the trial was established by the London Charter, which was agreed upon by the four so-called Great Powers on 8 August 1945, and which restricted the trial to "punishment of the major war criminals of the European Axis countries."
Some 200 German war crimes defendants were tried at Nuremberg, and 1,600 others were tried under the traditional channels of military justice. The legal basis for the jurisdiction of the court was that defined by the Instrument of Surrender of Germany. Political authority for Germany had been transferred to the Allied Control Council which, having sovereign power over Germany, could choose to punish violations of international law and the laws of war. Because the court was limited to violations of the laws of war, it did not have jurisdiction over crimes that took place before the outbreak of war on 1 September 1939.
Leipzig and Luxembourg were briefly considered as the location for the trial. The Soviet Union had wanted the trials to take place in Berlin, as the capital city of the 'fascist conspirators', but Nuremberg was chosen as the site for two reasons, the first of which was the decisive factor: the Palace of Justice there was spacious and largely undamaged, and included a large prison as part of its complex; and Nuremberg had been the site of the Nazi Party's annual propaganda rallies, so it was seen as a fitting place to mark the Party's symbolic demise.
As a compromise with the Soviets, it was agreed that while the location of the trial would be Nuremberg, Berlin would be the official home of the Tribunal authorities. It was also agreed that France would become the permanent seat of the IMT and that the first trial (several were planned) would take place in Nuremberg.
Most of the accused had previously been detained at Camp Ashcan, a processing station and interrogation center in Luxembourg, and were moved to Nuremberg for the trial.
Each of the four countries provided one judge and an alternate, as well as a prosecutor.
Assisting Jackson were the lawyers Telford Taylor, William S. Kaplan and Thomas J. Dodd, and Richard Sonnenfeldt, a US Army interpreter. Assisting Shawcross were Major Sir David Maxwell-Fyfe and Sir John Wheeler-Bennett. Mervyn Griffith-Jones, who was later to become famous as the chief prosecutor in the "Lady Chatterley's Lover" obscenity trial, was also on Shawcross's team. Shawcross also recruited a young barrister, Anthony Marreco, who was the son of a friend of his, to help the British team with the heavy workload.
The vast majority of the defense attorneys were German lawyers. These included Georg Fröschmann, Heinz Fritz (Hans Fritzsche), Otto Kranzbühler (Karl Dönitz), Otto Pannenbecker (Wilhelm Frick), Alfred Thoma (Alfred Rosenberg), Kurt Kauffmann (Ernst Kaltenbrunner), Hans Laternser (general staff and high command), Franz Exner (Alfred Jodl), Alfred Seidl (Hans Frank), Otto Stahmer (Hermann Göring), Walter Ballas (Gustav Krupp von Bohlen und Halbach), Hans Flächsner (Albert Speer), Günther von Rohrscheidt (Rudolf Hess), Egon Kubuschok (Franz von Papen), Robert Servatius (Fritz Sauckel), Fritz Sauter (Joachim von Ribbentrop, Walther Funk and Baldur von Schirach), Hanns Marx (Julius Streicher), Otto Nelte (Wilhelm Keitel), and Herbert Kraus/Rudolph Dix (both working for Hjalmar Schacht). The main counsel were supported by a total of 70 assistants, clerks and lawyers. The defense witnesses included several men who took part in war crimes themselves during World War II, such as Rudolf Höss. The men testifying for the defense hoped to receive more lenient sentences. All of the men testifying on behalf of the defense were found guilty on several counts.
The International Military Tribunal was opened on 19 November 1945 in the Palace of Justice in Nuremberg. The first session was presided over by the Soviet judge, Nikitchenko. The prosecution entered indictments against 24 major war criminals and seven organizations – the leadership of the Nazi party, the Reich Cabinet, the Schutzstaffel (SS), Sicherheitsdienst (SD), the Gestapo, the Sturmabteilung (SA) and the "General Staff and High Command", comprising several categories of senior military officers. These organizations were to be declared "criminal" if found guilty.
The indictments were for: (1) participation in a common plan or conspiracy for the accomplishment of a crime against peace; (2) planning, initiating and waging wars of aggression and other crimes against peace; (3) war crimes; and (4) crimes against humanity.
The 24 accused were, with respect to each charge, either indicted but not convicted (I), indicted and found guilty (G), or not charged (—), as listed below by defendant, charge, and eventual outcome:
The Rorschach test was administered to the defendants, along with the Thematic Apperception Test and a German adaptation of the Wechsler-Bellevue Intelligence Test. All were scored as having above average intelligence, several considerably so.
Throughout the trials, specifically between January and July 1946, the defendants and a number of witnesses were interviewed by American psychiatrist Leon Goldensohn. His notes detailing the demeanor and comments of the defendants survive; they were edited into book form and published in 2004. Jean Delay was the psychiatric expert for the French delegation in the trial of Rudolf Hess.
The accusers were successful in unveiling the background of developments leading to the outbreak of World War II, which cost around 50 million lives in Europe alone, as well as the extent of the atrocities committed in the name of the Hitler regime. Twelve of the accused were sentenced to death, seven received prison sentences ranging from 10 years to life imprisonment, three were acquitted, and two were not charged.
The death sentences were carried out on 16 October 1946 by hanging using the standard drop method instead of long drop. The U.S. Army denied claims that the drop length was too short, which may cause the condemned to die slowly from strangulation instead of quickly from a broken neck, but evidence remains that some of the condemned men choked in agony for 14 to 28 minutes. The executioner was John C. Woods. The executions took place in the gymnasium of the court building (demolished in 1983).
Although the rumor has long persisted that the bodies were taken to Dachau and burned there, they were actually incinerated in a crematorium in Munich, and the ashes scattered over the river Isar. The French judges suggested that the military condemned (Göring, Keitel and Jodl) be shot by a firing squad, as is standard for military courts-martial, but this was opposed by Biddle and the Soviet judges, who argued that the military officers had violated their military ethos and were not worthy of the more dignified death by shooting. The prisoners sentenced to incarceration were transferred to Spandau Prison in 1947.
Of the 12 defendants sentenced to death by hanging, two were not hanged: Martin Bormann was convicted in absentia (he had, unknown to the Allies, died while trying to escape from Berlin in May 1945), and Hermann Göring committed suicide the night before the execution. The remaining 10 defendants sentenced to death were hanged.
The definition of what constitutes a war crime is described by the Nuremberg principles, a set of guidelines created as a result of the trial. The medical experiments conducted by German doctors and prosecuted in the so-called Doctors' Trial led to the creation of the Nuremberg Code, a set of research ethics principles for human experimentation intended to control future research involving human subjects.
Of the indicted organizations, the following were found not to be criminal: the Reich Cabinet, the Sturmabteilung (SA), and the General Staff and High Command.
The American authorities conducted subsequent Nuremberg Trials in their occupied zone.
Other trials conducted after the first Nuremberg trial include the following:
While Sir Geoffrey Lawrence of Britain was the judge chosen to serve as the president of the court, arguably the most prominent of the judges at the trial was his American counterpart, Francis Biddle. Prior to the trial, Biddle had been Attorney General of the United States but had been asked to resign by Truman earlier in 1945.
Some accounts argue that Truman had appointed Biddle as the main American judge for the trial as an apology for asking for his resignation. Ironically, Biddle was known during his time as Attorney General for opposing the idea of prosecuting Nazi leaders for crimes committed before the beginning of the war, even sending out a memorandum on 5 January 1945 on the subject. The note also expressed Biddle's opinion that instead of proceeding with the original plan for prosecuting entire organizations, there should simply be more trials that would prosecute specific offenders.
Biddle soon changed his mind, as he approved a modified version of the plan on 21 January 1945, likely due to time constraints, since the trial would be one of the main issues which was to be discussed at Yalta. At trial, the Nuremberg tribunal ruled that any member of an organization convicted of war crimes, such as the SS or Gestapo, who had joined after 1939 would be considered a war criminal. Biddle managed to convince the other judges to make an exemption for any member who was drafted or had no knowledge of the crimes being committed by these organizations.
Justice Robert H. Jackson played an important role in not only the trial itself, but also in the creation of the International Military Tribunal, as he led the American delegation to London that, in the summer of 1945, argued in favour of prosecuting the Nazi leadership as a criminal conspiracy. According to Airey Neave, Jackson was also the one behind the prosecution's decision to include membership in any of the six criminal organizations in the indictments at the trial, though the IMT rejected this on the grounds that it was wholly without precedent in either international law or the domestic laws of any of the Allies. Jackson also attempted to have Alfried Krupp be tried in place of his father, Gustav, and even suggested that Alfried volunteer to be tried in his father's place. Both proposals were rejected by the IMT, particularly by Lawrence and Biddle, and some sources indicate that this resulted in Jackson being viewed unfavourably by the latter.
Thomas Dodd was a prosecutor for the United States. There was an immense amount of evidence backing the prosecutors' case, especially since the Nazis had kept meticulous records of their actions. The prosecutors held records bearing the signatures of specific Nazis signing for everything from stationery supplies to the Zyklon B gas used to kill the inmates of the death camps. After reading through the documents of crimes committed by the defendants, Thomas Dodd showed the courtroom a series of pictures displaying the atrocities performed by the defendants. The pictures had been gathered when the inmates were liberated from the concentration camps.
Henry Gerecke, a Lutheran pastor, and Sixtus O'Connor, a Roman Catholic priest, were sent to minister to the Nazi defendants. Photographs of the trial were taken by a team of about a dozen US Army still photographers, under the direction of chief photographer Ray D'Addario. | https://en.wikipedia.org/wiki?curid=21875 |
Natasha Stott Despoja
Natasha Jessica Stott Despoja AO (born 9 September 1969) is an Australian politician, diplomat, advocate and author. She is the founding Chair of the Board of Our Watch, the national foundation to prevent violence against women and their children, and was previously the Australian Ambassador for Women and Girls at the Department of Foreign Affairs and Trade from 2013 to 2016. She was also a Member of the World Bank Gender Advisory Council from 2015 to 2017 and a Member of the United Nations High Level Working Group on the Health and Human Rights of Women, Children and Adolescents in 2017.
Stott Despoja began her parliamentary career after being appointed to the Senate at the age of 26, serving as an Australian Democrats Senator for South Australia from 1995 to 2008. She went on to serve as the Deputy Leader and Leader of the Australian Democrats. She holds the record for being the youngest woman to sit in the Parliament of Australia and the longest serving Australian Democrats Senator.
Stott Despoja was born in Adelaide to Shirley Stott Despoja, an Australian-born journalist and Mario Despoja, who was from Croatia (then part of Yugoslavia). She attended Stradbroke Primary and Pembroke School and later graduated from the University of Adelaide in 1991. She was President of the Students' Association of the University of Adelaide (SAUA) and the South Australian Women's Officer for the National Union of Students. She then went on to work as a political advisor to Senator John Coulter and Senator Cheryl Kernot.
When John Coulter had to stand down for health reasons in 1995, Stott Despoja was the successful candidate to replace him. Her performance was recognized when she was re-elected not only in the 1996 election the following year, but again in the 2001 election. In 1997 she had been promoted to become the deputy leader of the Democrats from her position as party spokesperson for parliamentary portfolios such as Science and Technology, Higher Education, IT, Employment & Youth Affairs.
During the passage of the Goods and Services Tax (GST) legislation in 1999, Stott Despoja, along with Andrew Bartlett, split from the party's other senators by opposing the package, which had been negotiated by the party's then leader, Meg Lees, and prime minister John Howard. She said that she refused to break promises made by the party during the election. The party had gone to the election stating that they would work with whichever party formed government to improve their tax package. The Australian Democrats traditionally permitted parliamentary representatives to cast a conscience vote on any issue but, on this occasion, close numbers in the Senate placed greater pressure than usual on the dissenters.
In 2004, Stott Despoja took 11 weeks' leave from the Senate following the birth of her first child before returning to full duties as Democrat spokesperson on, inter alia, Higher Education, Status of Women, and Work and Family.
During her political career she also introduced 24 Private Member's Bills on issues including paid maternity leave, the Republic, genetic privacy, stem cells, captioning and same sex marriage. Stott Despoja regularly attends the Sydney Gay and Lesbian Mardi Gras.
On 22 October 2006, after undergoing emergency surgery for an ectopic pregnancy, she announced that she would not be contesting the 2007 election to extend her term beyond 30 June 2008. She was the Australian Democrats' longest-serving senator. Her retirement coincided with the ending of her party's federal parliamentary representation; the Democrats' support had collapsed after 2002 and they won no seats at the 2004 and 2007 half-senate elections.
Stott Despoja became the leader of her party on 6 April 2001. The preceding leader, Meg Lees, left the party the following year. Stott Despoja met criticism from Democrat senators and the general public with calm resolution, but she opted to resign on 21 August 2002 after 16 months as leader. She had been left with little alternative after four of her six colleagues forced a ten-point reform agenda upon her. The agenda was proposed by John Cherry, and she was opposed to its content. She announced her resignation in a speech to the Senate, concluding with a "pledge to bring the party back home to the members again" and referring to her colleagues' attitude towards her.
She was replaced as leader by Bartlett following a membership ballot; in the interval, Brian Greig acted in the position.
Stott Despoja has been a casual host on ABC 891 radio, a guest panellist on Channel 10's "The Project" and a columnist for the Australian business news website "Business Spectator".
She was a board member of non-profit organisations the South Australian Museum (SAM) from 2009 to 2013; the Museum of Australian Democracy (MOAD) from 2010 to 2013; and the Advertising Standards Board (ASB) from 2008 to 2013. She was a deputy chair at beyondblue (Australia's national depression initiative).
She has been an ambassador for Ovarian Cancer Australia (OCA), The Orangutan Project (TOP); Cancer Australia; secondbite; and the HIV/AIDS anti-stigma campaign, ENUF, (along with her husband Ian Smith).
She was on the board of the Burnet Institute (Australia's largest virology and communicable disease research institute) from 2008 until December 2013, when Foreign Minister Julie Bishop announced the appointment of Stott Despoja as Australia's new Ambassador for Women and Girls, a role she held until 2016. This involved visiting some 45 countries to promote women's economic empowerment and leadership and to help reduce violence against women and girls.
Stott Despoja has also been an election observer for the US-based National Democratic Institute (NDI) in Nigeria (2011); visited Burkina Faso for Oxfam (2012); and went to Laos (2011) and Burma (2013) with The Burnet Institute. She was mentioned in June 2014 as a possible replacement for Kevin Scarce as the next Governor of South Australia; however, Hieu Van Le was chosen.
On 21 July 2015, Stott Despoja returned to the Burnet Institute as a Patron.
In July 2013, Stott Despoja was the founding chair of Our Watch, originally named Foundation to Prevent Violence Against Women and their Children, and still occupies this position. A joint initiative of the Victorian and Commonwealth Governments, the organisation is based in Melbourne.
Stott Despoja has authored a large number of essays, reports and non-fiction works on a range of topics, both during and since her political career.
In March 2019 she published "On Violence", with the publisher's blurb asking "Why is violence against women endemic, and how do we stop it?". Stott Despoja posits that violence against women is "Australia's national emergency", with one woman dying at the hands of her partner or someone she knows every week, and argues that this violence is preventable and that we need to "create a new normal".
In 1999, she was appointed a Global Leader for Tomorrow by the World Economic Forum (WEF).
Stott Despoja was appointed a Member of the Order of Australia in June 2011 for her "service to the Parliament of Australia, particularly as a Senator for South Australia, through leadership roles with the Australian Democrats, to education, and as a role model for women".
She is listed as one of the "Gender Equality Top 100" by the UK organisation Apolitical.
In June 2019 Stott Despoja was appointed an Officer of the Order of Australia for her "distinguished service to the global community as an advocate for gender equality, and through roles in a range of organisations".
Stott Despoja is married to former Liberal party advisor, Ian Smith and has two children. | https://en.wikipedia.org/wiki?curid=21876 |
Nuremberg Code
The Nuremberg Code () is a set of research ethics principles for human experimentation created as a result of the Nuremberg trials at the end of the Second World War.
The origin of the Nuremberg Code began in pre–World War II German politics, particularly during the 1930s and 1940s. The pre-war German Medical Association was considered to be a progressive yet democratic association with great concerns for public health, one example being the legislation of compulsory health insurance for German workers. However, starting in the mid-1920s, German physicians, usually proponents of racial hygiene, were accused by the public and the medical society of unethical medical practices. The use of racial hygiene was supported by the German government in order to create an Aryan "master race", and to exterminate those who did not fit into their criteria. Racial hygiene extremists merged with National Socialism to promote the use of biology to accomplish their goals of racial purity, a core concept in the Nazi ideology. Physicians were attracted to the scientific ideology and aided in the establishment of National Socialist Physicians' League in 1929 to "purify the German medical community of 'Jewish Bolshevism'." Criticism was becoming prevalent; Alfons Stauder, member of the Reich Health Office, claimed that the "dubious experiments have no therapeutic purpose", and Fredrich von Muller, physician and the president of the Deutsche Akademie, joined the criticism.
In response to the criticism of unethical human experimentation, the Reich government issued "Guidelines for New Therapy and Human Experimentation" in Weimar, Germany. The guidelines were based on beneficence and non-maleficence, but also stressed the legal doctrine of informed consent. The guidelines clearly distinguished between therapeutic and non-therapeutic research. For therapeutic purposes, the guidelines allowed administration without consent only in dire situations, but for non-therapeutic purposes any administration without consent was strictly forbidden. However, the guidelines from Weimar were negated by Adolf Hitler. By 1942, the Nazi party included more than 38,000 German physicians, who helped carry out medical programs such as the Sterilization Law.
After World War II, a series of trials were held to hold members of the Nazi party responsible for a multitude of war crimes. The trials were approved by President Harry Truman on May 2, 1945 and were led by the United States, Great Britain, and the Soviet Union. They began on November 20, 1945 in Nuremberg, Germany, in what became known as the Nuremberg trials. In one of the trials, which became known as the "Doctors' Trial", German physicians responsible for conducting unethical medical procedures on humans during the war were tried. They focused on physicians who conducted inhumane and unethical human experiments in concentration camps, in addition to those who were involved in over 3,500,000 sterilizations of German citizens.
Several of the accused argued that their experiments differed little from those used before the war, and that there was no law that differentiated between legal and illegal experiments. This worried Drs. Andrew Ivy and Leo Alexander, who worked with the prosecution during the trial. In April 1947, Dr. Alexander submitted a memorandum to the United States Counsel for War Crimes outlining six points for legitimate medical research.
The Nuremberg Code, which stated that explicit voluntary consent from patients is required for human experimentation, was drafted on August 9, 1947. On August 20, 1947, the judges delivered their verdict against Karl Brandt and 22 others. The verdict reiterated the memorandum's points and, in response to expert medical advisers for the prosecution, revised the original six points to ten. The ten points became known as the "Nuremberg Code", which includes such principles as informed consent and absence of coercion; properly formulated scientific experimentation; and beneficence towards experiment participants. It is thought to have been mainly based on the Hippocratic Oath, which was interpreted as endorsing the experimental approach to medicine while protecting the patient.
The ten points of the code were given in the section of the verdict entitled "Permissible Medical Experiments":
The Nuremberg Code was initially ignored, but gained much greater significance about 20 years after it was written. As a result, there were substantial rival claims for the creation of the Code. Some claimed that Harold Sebring, one of the three U.S. judges who presided over the Doctors' Trial, was the author. Leo Alexander, MD and Andrew Ivy, MD, the prosecution's chief medical expert witnesses, were also each identified as authors. In his letter to Maurice H. Pappworth, an English physician and the author of the book "Human Guinea Pigs", Andrew Ivy claimed sole authorship of the Code. Leo Alexander, approximately 30 years after the trial, also claimed sole authorship. However, after careful reading of the transcript of the Doctors' Trial, background documents, and the final judgements, it is more accepted that the authorship was shared and the Code grew out of the trial itself.
Dr. Ravindra Ghooi of India has written a paper on the Code arguing that, in his opinion, it borrows heavily from the 1931 guidelines without acknowledging its source and could thus be considered plagiarized.
The Nuremberg Code has not been officially accepted as law by any nation or as official ethics guidelines by any association. In fact, the Code's reference to Hippocratic duty to the individual patient and the need to provide information was not initially favored by the American Medical Association. The Western world initially dismissed the Nuremberg Code as a "code for barbarians" and not for civilized physicians and investigators. Additionally, the final judgment did not specify whether the Nuremberg Code should be applied to cases such as political prisoners, convicted felons, and healthy volunteers. The lack of clarity, the brutality of the unethical medical experiments, and the uncompromising language of the Nuremberg Code created an image that the Code was designed for singularly egregious transgressions.
However, the Code is considered to be the most important document in the history of clinical research ethics, which had a massive influence on global human rights. The Nuremberg Code and the related Declaration of Helsinki are the basis for the Code of Federal Regulations Title 45 Part 46, which are the regulations issued by the United States Department of Health and Human Services for the ethical treatment of human subjects, and are used in Institutional Review Boards (IRBs). In addition, the idea of informed consent has been universally accepted and now constitutes Article 7 of the United Nations' International Covenant on Civil and Political Rights. It also served as the basis for International Ethical Guidelines for Biomedical Research Involving Human Subjects proposed by the World Health Organization.
Nim
Nim is a mathematical game of strategy in which two players take turns removing (or "nimming") objects from distinct heaps or piles. On each turn, a player must remove at least one object, and may remove any number of objects provided they all come from the same heap or pile. Depending on the version being played, the goal of the game is either to avoid taking the last object, or to take the last object.
Variants of Nim have been played since ancient times. The game is said to have originated in China—it closely resembles the Chinese game of 捡石子 "jiǎn-shízi", or "picking stones"—but the origin is uncertain; the earliest European references to Nim are from the beginning of the 16th century. Its current name was coined by Charles L. Bouton of Harvard University, who also developed the complete theory of the game in 1901, but the origins of the name were never fully explained.
Nim is typically played as a "misère game", in which the player to take the last object loses. Nim can also be played as a "normal play" game, where the player taking the last object wins. This is called normal play because the last move is a winning move in most games, even though it is not the normal way that Nim is played. In either normal play or a misère game, when there is exactly one heap with at least two objects, the player who takes next can easily win. That player removes either all or all but one of the objects from the heap that has two or more, so that no heap has more than one object, and the players are then forced to alternate removing exactly one object until the game ends. If the player leaves an even number of non-zero heaps (as the player would do in normal play), that player takes last; if the player leaves an odd number of heaps (as the player would do in misère play), then the other player takes last.
Normal play Nim (or more precisely the system of nimbers) is fundamental to the Sprague–Grundy theorem, which essentially says that in normal play every impartial game is equivalent to a Nim heap that yields the same outcome when played in parallel with other normal play impartial games (see disjunctive sum).
While all normal play impartial games can be assigned a Nim value, that is not the case under the misère convention. Only tame games can be played using the same strategy as misère Nim.
Nim is a special case of a poset game where the poset consists of disjoint chains (the heaps).
The evolution graph of the game of Nim with three heaps is the same as three branches of the evolution graph of the Ulam-Warburton automaton.
At the 1940 New York World's Fair, Westinghouse displayed a machine, the Nimatron, that played Nim. During its time on display, from May 11 to October 27, 1940, only a few people were able to beat the machine; those who did were presented with a coin that said "Nim Champ". It was also one of the first-ever electronic computerized games. Ferranti built a Nim-playing computer which was displayed at the Festival of Britain in 1951. In 1952 Herbert Koppel, Eugene Grant and Howard Bailer, engineers from the W. L. Maxon Corporation, developed a machine which played Nim against a human opponent and regularly won. A Nim-playing machine made from TinkerToy has also been described.
The game of Nim was the subject of Martin Gardner's February 1958 Mathematical Games column in Scientific American. A version of Nim is played—and has symbolic importance—in the French New Wave film "Last Year at Marienbad" (1961).
The normal game is between two players and played with three heaps of any number of objects. The two players alternate taking any number of objects from any single one of the heaps. The goal is to be the last to take an object. In misère play, the goal is instead to ensure that the opponent is forced to take the last remaining object.
The following example of a normal game is played between fictional players Bob and Alice who start with heaps of three, four and five objects.
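One possible sequence of play, with Alice replying so as to leave a nim-sum of zero (see the strategy explained below), is: Bob takes one object from the heap of three (leaving heaps of 2, 4 and 5); Alice takes one more from that heap (1, 4, 5); Bob takes two from the heap of five (1, 4, 3); Alice takes two from the heap of four (1, 2, 3); Bob takes two from the heap of three (1, 2, 1); Alice takes the whole heap of two (1, 0, 1); Bob takes one of the two remaining objects (0, 0, 1); and Alice takes the last object, winning the game.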
The practical strategy to win at the game of "Nim" is for a player to get the other into one of the following positions; on every successive turn afterwards, the player should be able to move into one of the lower positions. Only the last move changes between misère and normal play.
* Only valid for normal play.
** Only valid for misère.
For the generalisations, "n" and "m" can be any value > 0, and they may be the same.
Nim has been mathematically solved for any number of initial heaps and objects, and there is an easily calculated way to determine which player will win and what winning moves are open to that player.
The key to the theory of the game is the binary digital sum of the heap sizes, that is, the sum (in binary) neglecting all carries from one digit to another. This operation is also known as "exclusive or" (xor) or "vector addition over GF(2)" (bitwise addition modulo 2). Within combinatorial game theory it is usually called the nim-sum, as it will be called here. The nim-sum of "x" and "y" is written "x" ⊕ "y" to distinguish it from the ordinary sum, "x" + "y". An example of the calculation with heaps of size 3, 4, and 5 is as follows:
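In binary, the heap sizes are 3 = 011, 4 = 100 and 5 = 101. Adding each column of bits without carrying (that is, XOR-ing the bits) gives 011 ⊕ 100 ⊕ 101 = 010, so the nim-sum of heaps of sizes 3, 4 and 5 is 2.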
An equivalent procedure, which is often easier to perform mentally, is to express the heap sizes as sums of distinct powers of 2, cancel pairs of equal powers, and then add what is left:
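For example, 3 = 2 + 1, 4 = 4 and 5 = 4 + 1; cancelling the pair of 4s and the pair of 1s leaves just 2, the same nim-sum as before.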
In normal play, the winning strategy is to finish every move with a nim-sum of 0. This is always possible if the nim-sum is not zero before the move. If the nim-sum is zero, then the next player will lose if the other player does not make a mistake. To find out which move to make, let X be the nim-sum of all the heap sizes. Find a heap where the nim-sum of X and the heap size is less than the heap size; the winning strategy is to play in such a heap, reducing that heap to the nim-sum of its original size with X. In the example above, the nim-sum of the sizes is X = 3 ⊕ 4 ⊕ 5 = 2. The nim-sums of the heap sizes A=3, B=4, and C=5 with X=2 are:
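A ⊕ X = 3 ⊕ 2 = 1, B ⊕ X = 4 ⊕ 2 = 6, and C ⊕ X = 5 ⊕ 2 = 7. Only for heap A is the result (1) smaller than the heap size (3).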
The only heap that is reduced is heap A, so the winning move is to reduce the size of heap A to 1 (by removing two objects).
As a particularly simple case, if there are only two heaps left, the strategy is to reduce the number of objects in the bigger heap to make the two heaps equal. After that, no matter what move the opponent makes, you can make the same move on the other heap, guaranteeing that you take the last object.
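For example, from heaps of 5 and 3, reduce the heap of 5 to 3; if the opponent then takes two objects from one heap, take two from the other, and continue mirroring until both heaps are empty.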
When played as a misère game, Nim strategy is different only when the normal play move would leave only heaps of size one. In that case, the correct move is to leave an odd number of heaps of size one (in normal play, the correct move would be to leave an even number of such heaps).
These strategies for normal play and a misère game are the same until the number of heaps with at least two objects is exactly one. At that point, the next player removes either all or all but one of the objects from the heap that has two or more, so that no heap has more than one object (in other words, all remaining heaps have exactly one object each), and the players are forced to alternate removing exactly one object until the game ends. In normal play, the player leaves an even number of non-zero heaps, so that player takes last; in misère play, the player leaves an odd number of non-zero heaps, so the other player takes last.
In a misère game with heaps of sizes three, four and five, the strategy would be applied like this:
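For instance, one possible line of play runs as follows: Alice, moving first, takes two objects from the heap of three, leaving heaps of 1, 4 and 5 (nim-sum zero). Bob takes four objects from the heap of five, leaving 1, 4 and 1. Only one heap now has two or more objects, so Alice leaves an odd number of heaps of size one by taking three objects from the heap of four, leaving 1, 1 and 1. The players then alternate taking single objects, and Bob is forced to take the last object and loses.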
The previous strategy for a misère game can be implemented in Python. One possible sketch, which returns a winning move as a pair (heap index, number of objects to remove), or None when every move loses, is shown below.
import functools

MISERE = 'misere'
NORMAL = 'normal'

def nim(heaps, game_type):
    """Return (heap_index, number_to_remove) for an optimal move, or None if every move loses."""
    if game_type == MISERE and sum(h > 1 for h in heaps) <= 1:
        # Misère endgame: at most one heap has two or more objects; leave an odd number of 1-heaps.
        big, ones = max(range(len(heaps)), key=lambda i: heaps[i]), sum(h == 1 for h in heaps)
        take = heaps[big] - (ones % 2 == 0) if heaps[big] > 1 else int(ones > 0 and ones % 2 == 0)
        return (big, take) if take else None
    nim_sum = functools.reduce(lambda a, b: a ^ b, heaps, 0)  # otherwise, move so the nim-sum becomes zero
    return next(((i, h - (h ^ nim_sum)) for i, h in enumerate(heaps) if (h ^ nim_sum) < h), None)

if __name__ == "__main__":
    print(nim([3, 4, 5], NORMAL))  # (0, 2): take two objects from the heap of three
    print(nim([1, 4, 1], MISERE))  # (1, 3): leave three heaps holding one object each
The soundness of the optimal strategy described above was demonstrated by C. Bouton.
Theorem. In a normal Nim game, the player making the first move has a winning strategy if and only if the nim-sum of the sizes of the heaps is not zero. Otherwise, the second player has a winning strategy.
"Proof:" Notice that the nim-sum (⊕) obeys the usual associative and commutative laws of addition (+) and also satisfies an additional property, "x" ⊕ "x" = 0.
Let "x"1, ..., "xn" be the sizes of the heaps before a move, and "y"1, ..., "yn" the corresponding sizes after a move. Let "s" = "x"1 ⊕ ... ⊕ "xn" and "t" = "y"1 ⊕ ... ⊕ "yn". If the move was in heap "k", we have "xi" = "yi" for all "i" ≠ "k", and "xk" > "yk". By the properties of ⊕ mentioned above, we have
The theorem follows by induction on the length of the game from these two lemmas.
Lemma 1. If "s" = 0, then "t" ≠ 0 no matter what move is made.
"Proof:" If there is no possible move, then the lemma is vacuously true (and the first player loses the normal play game by definition). Otherwise, any move in heap "k" will produce "t" = "xk" ⊕ "yk" from (*). This number is nonzero, since "xk" ≠ "yk".
Lemma 2. If "s" ≠ 0, it is possible to make a move so that "t" = 0.
"Proof:" Let "d" be the position of the leftmost (most significant) nonzero bit in the binary representation of "s", and choose "k" such that the "d"th bit of "xk" is also nonzero. (Such a "k" must exist, since otherwise the "d"th bit of "s" would be 0.)
Then letting "yk" = "s" ⊕ "xk", we claim that "yk" k": all bits to the left of "d" are the same in "xk" and "yk", bit "d" decreases from 1 to 0 (decreasing the value by 2"d"), and any change in the remaining bits will amount to at most 2"d"−1. The first player can thus make a move by taking "xk" − "yk" objects from heap "k", then
The modification for misère play is demonstrated by noting that the modification first arises in a position that has only one heap of size 2 or more. Notice that in such a position "s" ≠ 0, therefore this situation has to arise when it is the turn of the player following the winning strategy. The normal play strategy is for the player to reduce this to size 0 or 1, leaving an even number of heaps with size 1, and the misère strategy is to do the opposite. From that point on, all moves are forced.
In another game which is commonly known as Nim (but is better called the subtraction game "S"(1,2,...,"k")), an upper bound is imposed on the number of objects that can be removed in a turn. Instead of removing arbitrarily many objects, a player can only remove 1 or 2 or ... or "k" at a time. This game is commonly played in practice with only one heap (for instance with "k" = 3 in the game "Thai 21" on "Survivor: Thailand", where it appeared as an Immunity Challenge).
Bouton's analysis carries over easily to the general multiple-heap version of this game. The only difference is that as a first step, before computing the Nim-sums, we must reduce the sizes of the heaps modulo "k" + 1. If this makes all the heaps of size zero (in misère play), the winning move is to take "k" objects from one of the heaps. In particular, in ideal play from a single heap of "n" objects, the second player can win if and only if "n" is divisible by "k" + 1 (in normal play), or "n" leaves a remainder of 1 when divided by "k" + 1 (in misère play).
This follows from calculating the nim-sequence of "S"(1,2...,"k"),
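0, 1, 2, ..., "k", 0, 1, 2, ..., "k", 0, 1, 2, ... (that is, the value of a heap of "n" objects in this game is "n" mod ("k" + 1)),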
from which the strategy above follows by the Sprague–Grundy theorem.
The game "21" is played as a misère game with any number of players who take turns saying a number. The first player says "1" and each player in turn increases the number by 1, 2, or 3, but may not exceed 21; the player forced to say "21" loses. This can be modeled as a subtraction game with a heap of 21–"n" objects. The winning strategy for the two-player version of this game is to always say a multiple of 4; it is then guaranteed that the other player will ultimately have to say 21 – so in the standard version where the first player opens with "1", they start with a losing move.
The 21 game can also be played with different numbers, like "Add at most 5; lose on 34".
A sample game of 21 in which the second player follows the winning strategy:
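For instance, one such game might run 1, 4, 7, 8, 10, 12, 13, 16, 18, 20, 21: the second player always brings the total back to a multiple of 4, and the first player is eventually forced to say 21.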
A similar version is the "100 game": two players start from 0 and alternately add a number from 1 to 10 to the sum. The player who reaches 100 wins. The winning strategy is to reach a number in which the digits are subsequent (e.g. 01, 12, 23, 34...) and control the game by jumping through all the numbers of this sequence. Once reached 89, the opponent has lost; they can only choose numbers from 90 to 99, and the next answer can in any case be 100).
In another variation of Nim, besides removing any number of objects from a single heap, one is permitted to remove the same number of objects from each heap.
Yet another variation of Nim is "Circular Nim", where any number of objects are placed in a circle and two players alternately remove one, two or three adjacent objects. For example, starting with a circle of ten objects, three objects might be taken in the first move, then another three, then one; but at that point three objects cannot be taken out in one move.
In Grundy's game, another variation of Nim, a number of objects are placed in an initial heap, and two players alternately divide a heap into two nonempty heaps of different sizes. Thus, six objects may be divided into piles of 5+1 or 4+2, but not 3+3. Grundy's game can be played as either misère or normal play.
"Greedy Nim" is a variation where the players are restricted to choosing stones from only the largest pile. It is a finite impartial game. "Greedy Nim Misère" has the same rules as Greedy Nim, but here the last player able to make a move loses.
Let the largest number of stones in a pile be "m", the second largest number of stones in a pile be "n". Let "p""m" be the number of piles having "m" stones, "p""n" be the number of piles having "n" stones. Then there is a theorem that game positions with "p""m" even are "P" positions.
This theorem can be shown by considering the positions where "p""m" is odd. If "p""m" is larger than 1, all stones may be removed from one of the largest piles to reduce "p""m" by 1, and the new "p""m" will be even. If "p""m" = 1 (i.e. the largest heap is unique), there are two cases:
Thus there exists a move to a state where "p""m" is even. Conversely, if "p""m" is even, if any move is possible ("p""m" ≠ 0) then it must take the game to a state where "p""m" is odd. The final position of the game is even ("p""m" = 0). Hence each position of the game with "p""m" even must be a "P" position.
A generalization of multi-heap Nim was called Nim"k", or "index-"k"" Nim, by E. H. Moore, who analyzed it in 1910. In index-"k" Nim, instead of removing objects from only one heap, players can remove objects from at least one but up to "k" different heaps. The number of elements that may be removed from each heap may be either arbitrary, or limited to at most "r" elements, like in the "subtraction game" above.
The winning strategy is as follows: Like in ordinary multi-heap Nim, one considers the binary representation of the heap sizes (or heap sizes modulo "r" + 1). In ordinary Nim one forms the XOR-sum (or sum modulo 2) of each binary digit, and the winning strategy is to make each XOR sum zero. In the generalization to index-"k" Nim, one forms the sum of each binary digit modulo "k" + 1.
Again the winning strategy is to move such that this sum is zero for every digit. Indeed, the value thus computed is zero for the final position, and given a configuration of heaps for which this value is zero, any change of at most "k" heaps will make the value non-zero. Conversely, given a configuration with non-zero value, one can always take from at most "k" heaps, carefully chosen, so that the value will become zero.
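As a rough illustration of this digit-sum test, the following Python sketch (the function name is illustrative and not taken from any library) checks whether an index-"k" Nim position has value zero, ignoring any per-heap removal limit "r":

def moore_nim_is_zero(heaps, k):
    """Return True if every binary digit of the heap sizes sums to 0 modulo (k + 1)."""
    bits = max(heaps, default=0).bit_length()
    return all(sum((h >> b) & 1 for h in heaps) % (k + 1) == 0 for b in range(bits))

# For k = 1 this reduces to ordinary Nim: the value is zero exactly when the nim-sum is zero.
print(moore_nim_is_zero([3, 4, 5], 1))  # False: the nim-sum is 2
print(moore_nim_is_zero([1, 4, 5], 1))  # True: 1 xor 4 xor 5 = 0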
Building Nim is a variant of Nim where the two players first construct the game of Nim. Given "n" stones and "s" empty piles, the players alternate turns placing exactly one stone into a pile of their choice. Once all the stones are placed, a game of Nim begins, starting with the next player that would move. This game is denoted "BN(n,s)".
"n"-d Nim is played on a formula_3 board, where any number of continuous pieces can be removed from any hyper-row. The starting position is usually the full board, but other options are allowed.
In a further variation, the starting board is a graph, and players take turns removing 1, 2 or 3 adjacent vertices.
National Institute of Standards and Technology
The National Institute of Standards and Technology (NIST) is a physical sciences laboratory and a non-regulatory agency of the United States Department of Commerce. Its mission is to promote innovation and industrial competitiveness. NIST's activities are organized into laboratory programs that include nanoscale science and technology, engineering, information technology, neutron research, material measurement, and physical measurement. From 1901–1988, the agency was named the National Bureau of Standards.
The Articles of Confederation, ratified by the colonies in 1781, contained the clause, "The United States in Congress assembled shall also have the sole and exclusive right and power of regulating the alloy and value of coin struck by their own authority, or by that of the respective states—fixing the standards of weights and measures throughout the United States". Article 1, section 8, of the Constitution of the United States (1789), transferred this power to Congress; "The Congress shall have power...To coin money, regulate the value thereof, and of foreign coin, and fix the standard of weights and measures".
In January 1790, President George Washington, in his first annual message to Congress stated that, "Uniformity in the currency, weights, and measures of the United States is an object of great importance, and will, I am persuaded, be duly attended to", and ordered Secretary of State Thomas Jefferson to prepare a plan for Establishing Uniformity in the Coinage, Weights, and Measures of the United States, afterwards referred to as the Jefferson report. On October 25, 1791, Washington appealed a third time to Congress, "A uniformity of the weights and measures of the country is among the important objects submitted to you by the Constitution and if it can be derived from a standard at once invariable and universal, must be no less honorable to the public council than conducive to the public convenience", but it was not until 1838, that a uniform set of standards was worked out.
In 1821, John Quincy Adams had declared "Weights and measures may be ranked among the necessities of life to every individual of human society".
From 1830 until 1901, the role of overseeing weights and measures was carried out by the Office of Standard Weights and Measures, which was part of the United States Department of the Treasury.
In 1901, in response to a bill proposed by Congressman James H. Southard (R, Ohio), the National Bureau of Standards was founded with the mandate to provide standard weights and measures, and to serve as the national physical laboratory for the United States. (Southard had previously sponsored a bill for metric conversion of the United States.)
President Theodore Roosevelt appointed Samuel W. Stratton as the first director. The budget for the first year of operation was $40,000. The Bureau took custody of the copies of the kilogram and meter bars that were the standards for US measures, and set up a program to provide metrology services for United States scientific and commercial users. A laboratory site was constructed in Washington, DC, and instruments were acquired from the national physical laboratories of Europe. In addition to weights and measures, the Bureau developed instruments for electrical units and for measurement of light. In 1905 a meeting was called that would be the first "National Conference on Weights and Measures".
Initially conceived as purely a metrology agency, the Bureau of Standards was directed by Herbert Hoover to set up divisions to develop commercial standards for materials and products. Some of these standards were for products intended for government use, but product standards also affected private-sector consumption. Quality standards were developed for products including some types of clothing, automobile brake systems and headlamps, antifreeze, and electrical safety. During World War I, the Bureau worked on multiple problems related to war production, even operating its own facility to produce optical glass when European supplies were cut off. Between the wars, Harry Diamond of the Bureau developed a blind approach radio aircraft landing system. During World War II, military research and development was carried out, including development of radio propagation forecast methods, the proximity fuze and the standardized airframe used originally for Project Pigeon, and shortly afterwards the autonomously radar-guided Bat anti-ship guided bomb and the Kingfisher family of torpedo-carrying missiles.
In 1948, financed by the United States Air Force, the Bureau began design and construction of SEAC, the Standards Eastern Automatic Computer. The computer went into operation in May 1950 using a combination of vacuum tubes and solid-state diode logic. About the same time, the Standards Western Automatic Computer was built at the Los Angeles office of the NBS by Harry Huskey and used for research there. A mobile version, DYSEAC, was built for the Signal Corps in 1954.
Due to a changing mission, the "National Bureau of Standards" became the "National Institute of Standards and Technology" in 1988.
Following September 11, 2001, NIST conducted the official investigation into the collapse of the World Trade Center buildings.
NIST, known between 1901 and 1988 as the National Bureau of Standards (NBS), is a measurement standards laboratory, also known as a National Metrological Institute (NMI), which is a non-regulatory agency of the United States Department of Commerce. The institute's official mission is to:
NIST had an operating budget for fiscal year 2007 (October 1, 2006, to September 30, 2007) of about $843.3 million. NIST's 2009 budget was $992 million, and it also received $610 million as part of the American Recovery and Reinvestment Act. NIST employs about 2,900 scientists, engineers, technicians, and support and administrative personnel. About 1,800 NIST associates (guest researchers and engineers from American companies and foreign countries) complement the staff. In addition, NIST partners with 1,400 manufacturing specialists and staff at nearly 350 affiliated centers around the country. NIST publishes the Handbook 44 that provides the "Specifications, tolerances, and other technical requirements for weighing and measuring devices".
The Congress of 1866 made the use of the metric system in commerce a legally protected activity through the passage of the Metric Act of 1866. On May 20, 1875, 17 out of 20 countries signed a document known as the "Metric Convention" or the "Treaty of the Meter", which established the International Bureau of Weights and Measures under the control of an international committee elected by the General Conference on Weights and Measures.
NIST is headquartered in Gaithersburg, Maryland, and operates a facility in Boulder, Colorado. NIST's activities are organized into laboratory programs and extramural programs. Effective October 1, 2010, NIST was realigned by reducing the number of NIST laboratory units from ten to six. NIST Laboratories include:
Extramural programs include:
NIST also operates a neutron science user facility: the NIST Center for Neutron Research (NCNR). The NCNR provides scientists access to a variety of neutron scattering instruments, which they use in many research fields (materials science, fuel cells, biotechnology, etc.).
The SURF III Synchrotron Ultraviolet Radiation Facility is a source of synchrotron radiation, in continuous operation since 1961. SURF III now serves as the US national standard for source-based radiometry throughout the generalized optical spectrum. All NASA-borne, extreme-ultraviolet observation instruments have been calibrated at SURF since the 1970s, and SURF is used for measurement and characterization of systems for extreme ultraviolet lithography.
The Center for Nanoscale Science and Technology (CNST) performs research in nanotechnology, both through internal research efforts and by running a user-accessible cleanroom nanomanufacturing facility. This "NanoFab" is equipped with tools for lithographic patterning and imaging (e.g., electron microscopes and atomic force microscopes).
NIST has seven standing committees:
As part of its mission, NIST supplies industry, academia, government, and other users with over 1,300 Standard Reference Materials (SRMs). These artifacts are certified as having specific characteristics or component content, used as calibration standards for measuring equipment and procedures, quality control benchmarks for industrial processes, and experimental control samples.
NIST publishes the "Handbook 44" each year after the annual meeting of the National Conference on Weights and Measures (NCWM). Each edition is developed through cooperation of the Committee on Specifications and Tolerances of the NCWM and the Weights and Measures Division (WMD) of the NIST. The purpose of the book is a partial fulfillment of the statutory responsibility for "cooperation with the states in securing uniformity of weights and measures laws and methods of inspection".
NIST has been publishing various forms of what is now the "Handbook 44" since 1918 and began publication under the current name in 1949. The 2010 edition conforms to the concept of the primary use of the SI (metric) measurements recommended by the Omnibus Foreign Trade and Competitiveness Act of 1988.
NIST is developing government-wide identity document standards for federal employees and contractors to prevent unauthorized persons from gaining access to government buildings and computer systems.
In 2002, the National Construction Safety Team Act mandated NIST to conduct an investigation into the collapse of the World Trade Center buildings 1 and 2 and the 47-story 7 World Trade Center. The "World Trade Center Collapse Investigation", directed by lead investigator Shyam Sunder, covered three aspects, including a technical building and fire safety investigation to study the factors contributing to the probable cause of the collapses of the WTC Towers (WTC 1 and 2) and WTC 7. NIST also established a research and development program to provide the technical basis for improved building and fire codes, standards, and practices, and a dissemination and technical assistance program to engage leaders of the construction and building community in implementing proposed changes to practices, standards, and codes. NIST also is providing practical guidance and tools to better prepare facility owners, contractors, architects, engineers, emergency responders, and regulatory authorities to respond to future disasters. The investigation portion of the response plan was completed with the release of the final report on 7 World Trade Center on November 20, 2008. The final report on the WTC Towers—including 30 recommendations for improving building and occupant safety—was released on October 26, 2005.
NIST works in conjunction with the Technical Guidelines Development Committee of the Election Assistance Commission to develop the Voluntary Voting System Guidelines for voting machines and other election technology.
Four scientific researchers at NIST have been awarded Nobel Prizes for work in physics: William Daniel Phillips in 1997, Eric Allin Cornell in 2001, John Lewis Hall in 2005 and David Jeffrey Wineland in 2012, which is the largest number for any US government laboratory. All four were recognized for their work related to laser cooling of atoms, which is directly related to the development and advancement of the atomic clock. In 2011, Dan Shechtman was awarded the Nobel in chemistry for his work on quasicrystals in the Metallurgy Division from 1982 to 1984. In addition, John Werner Cahn was awarded the 2011 Kyoto Prize for Materials Science, and the National Medal of Science has been awarded to NIST researchers Cahn (1998) and Wineland (2007). Other notable people who have worked at NBS or NIST include:
Since 1989, the director of NIST has been a Presidential appointee and is confirmed by the United States Senate, and since that year the average tenure of NIST directors has fallen from 11 years to 2 years in duration. Since the 2011 reorganization of NIST, the director also holds the title of Under Secretary of Commerce for Standards and Technology. Fifteen individuals have officially held the position (in addition to four acting directors who have served on a temporary basis).
In September 2013, both "The Guardian" and "The New York Times" reported that NIST allowed the National Security Agency (NSA) to insert a cryptographically secure pseudorandom number generator called Dual EC DRBG into NIST standard SP 800-90 that had a kleptographic backdoor that the NSA can use to covertly predict the future outputs of this pseudorandom number generator thereby allowing the surreptitious decryption of data. Both papers report that the NSA worked covertly to get its own version of SP 800-90 approved for worldwide use in 2006. The whistle-blowing document states that "eventually, NSA became the sole editor". The reports confirm suspicions and technical grounds publicly raised by cryptographers in 2007 that the EC-DRBG could contain a kleptographic backdoor (perhaps placed in the standard by NSA).
NIST responded to the allegations, stating that "NIST works to publish the strongest cryptographic standards possible" and that it uses "a transparent, public process to rigorously vet our recommended standards". The agency stated that "there has been some confusion about the standards development process and the role of different organizations in it...The National Security Agency (NSA) participates in the NIST cryptography process because of its recognized expertise. NIST is also required by statute to consult with the NSA." Recognizing the concerns expressed, the agency reopened the public comment period for the SP800-90 publications, promising that "if vulnerabilities are found in these or any other NIST standards, we will work with the cryptographic community to address them as quickly as possible". Due to public concern over this cryptovirology attack, NIST rescinded the EC-DRBG algorithm from the NIST SP 800-90 standard.
NATO reporting name
NATO reporting names are code names for military equipment from Russia, China, and historically, the Eastern Bloc (Soviet Union and other nations of the Warsaw Pact). They provide unambiguous and easily understood English words in a uniform manner in place of the original designations, which either may have been unknown to the Western world at the time or easily confused codes. For example, the Russian bomber jet Tupolev Tu-160 is simply called "Blackjack".
NATO maintains lists of the names. The assignment of the names for the Russian and Chinese aircraft was once managed by the five-nation Air Standardization Coordinating Committee (ASCC), but that is no longer the case.
The United States Department of Defense (DOD) expands on the NATO reporting names in some cases. NATO refers to surface-to-air missile systems mounted on ships or submarines with the same names as the corresponding land-based systems, but the US DoD assigns a different series of numbers with a different suffix (i.e., SA-N- vs. SA-) for these systems. The names are kept the same as a convenience. Where there is no corresponding system, a new name is devised.
The Soviet Union did not always assign official "popular names" to its aircraft, but unofficial nicknames were common as in any air force. Generally, Soviet pilots did not use the NATO names, preferring a different, Russian, nickname. An exception was that Soviet airmen appreciated the MiG-29's codename "Fulcrum", as an indication of its pivotal role in Soviet air defence.
To reduce the risk of confusion, unusual or made-up names were allocated, the idea being that the names chosen would be unlikely to occur in normal conversation, and be easier to memorise. For fixed-wing aircraft, single-syllable words denoted piston-prop and turboprop, while multiple-syllable words denoted jets. Bombers had names starting with the letter "B" and names like "Badger" (2 syllables: jet), "Bear" (single syllable: propeller), and "Blackjack" were used. "Frogfoot," the reporting name for the Sukhoi Su-25, references the aircraft's close air support role. Transports had names starting with "C" (as in "cargo"), which resulted in names like "Condor" or "Candid".
The initial letter of the name indicated the use of that equipment.
The first letter indicates the type of aircraft, like "B"ear for a bomber aircraft, or "F"ulcrum for a fighter aircraft. For fixed-wing aircraft, a one-syllable name refers to a propeller aircraft and a two-syllable name refers to an aircraft with jet engines. This distinction is not made for helicopters.
Before the 1980s, reporting names for submarines were taken from the NATO spelling alphabet. Modifications of existing designs were given descriptive terms, such as “Whiskey Long Bin”. From the 1980s, new designs were given names derived from Russian words, such as “Akula”, or “shark”. These names did not correspond to the Soviet names. Coincidentally, “Akula”, which was assigned to an attack submarine by NATO, was the actual Soviet name for the ballistic missile submarine NATO dubbed “Typhoon”.
List of NATO reporting names for surface-to-surface missiles
NATO reporting name for SS series surface-to-surface missiles, with Soviet designations:
US DoD designations for SS-N series naval surface-to-surface missiles (fired from ships and submarines), with Soviet designations:
"See also": NATO reporting name | https://en.wikipedia.org/wiki?curid=21892 |
List of NATO reporting names for air-to-air missiles
NATO reporting name for AA series air-to-air missiles, with Soviet designations:
"See also": NATO reporting name | https://en.wikipedia.org/wiki?curid=21893 |
List of NATO reporting names for air-to-surface missiles
NATO reporting name for AS series air-to-surface missiles, with Soviet designations:
Note: the Soviet / Russian designation is a Cyrillic letter "Х", which is translated as "Kh" or "H". Also, sometimes a combination ("complex") of a missile with its aircraft is marked with a letter "K" (for example, a missile Kh-22 with an aircraft is a "complex K-22"). The Cyrillic "X" (read "Kh") in the designation of Soviet ASMs is in fact a Latin "X" ("ecs") for Xperimental, as used by the design bureau. With passing time, however, this was ignored and used in Soviet/Russian as well as foreign literature as the Cyrillic Kh.
"See also": NATO reporting name | https://en.wikipedia.org/wiki?curid=21894 |
List of NATO reporting names for anti-tank missiles
NATO reporting name for AT series anti-tank guided missiles, with Soviet designations:
"See also:" NATO reporting name, List of anti-tank guided missiles | https://en.wikipedia.org/wiki?curid=21895 |
List of NATO reporting names for surface-to-air missiles
NATO reporting name for SA series surface-to-air missiles, with Soviet designations:
U.S. DoD designations for SA-N series naval surface-to-air missiles, with Soviet designations. Note that these are not standard NATO names; NATO uses the regular SA series for naval SAMs as well, but the US DoD refers to them by these names:
Naturism
Naturism, or nudism, is a cultural movement practising, advocating, and defending personal and social nudity, most but not all of which takes place on private property. The term also refers to a lifestyle based on personal, family, or social nudity. Naturism may be practiced individually, within a familial or social context, or in public.
Ethical or philosophical nudism has a long history, with many advocates of the benefits of enjoying nature without clothing. At the turn of the 20th century, organizations emerged to promote social nudity and to establish private campgrounds and resorts for that purpose. Since the 1960s, with the acceptance of public places for clothing-optional recreation, individuals who do not identify themselves as nudists have been able to casually participate in nude activities. Nude recreation opportunities vary widely around the world, from isolated places known mainly to locals to officially-designated nude beaches and parks.
The XIV Congress of the International Naturist Federation (Agde, France, 1974) defined naturism as:
Many contemporary naturists and naturist organisations feel that the practice of social nudity should be asexual. For various social, cultural, and historical reasons, the lay public, the media, and many contemporary naturists and their organisations have or present a simplified view of the relationship between naturism and sexuality. Current research has begun to explore this complex relationship.
The International Naturist Federation explains:
The usage and definition of these terms varies geographically and historically. Naturism and nudism have the same meaning in the United States, but there is a clear distinction between the two terms in Great Britain.
In naturist parlance, the terms "textile" or "textilist" refer to non-naturist persons, behaviours or facilities (e.g. "the textile beach starts at the flag", "they are a mixed couple – he is naturist, she is textile"). "Textile" is the predominant term used in the UK ("textilist" is unknown in British naturist magazines, including "H&E naturist"), although some naturists avoid it due to perceived negative or derogatory connotations. "Textilist" is said to be used interchangeably with "textile", but no dictionary definition to this effect exists, nor are there any equivalent examples of use in mainstream literature such as those for "textile".
At naturist organised events or venues, clothing is usually optional. At naturist swimming pools or sunbathing places, however, complete nudity is expected (weather permitting). This rule is sometimes a source of controversy among naturists. Staff at a naturist facility are usually required to be clothed due to health and safety regulations.
Facilities for naturists are classified in various ways. A landed or members' naturist club is one that owns its own facilities. Non-landed (or travel) clubs meet at various locations, such as private residences, swimming pools, hot springs, landed clubs and resorts, or rented facilities. Landed clubs can be run by members on democratic lines or by one or more owners who make the rules. In either case, they can determine membership criteria and the obligations of members. This usually involves sharing work necessary to maintain or develop the site.
The international naturist organizations were mainly composed of representatives of landed clubs. Nudist colony is no longer a favored term, but it is used by naturists as a term of derision for landed clubs that have rigid non-inclusive membership criteria.
A holiday centre is a facility that specializes in providing apartments, chalets and camping pitches for visiting holidaymakers. A center is run commercially, and visitors are not members and have no say in the management. Most holiday centers expect visitors to hold an INF card (that is, to be a member of their national organization), but some have relaxed this requirement, relying on the carrying of a trade card. Holiday centers vary in size. Larger holiday centres may have swimming pools, sports pitches, an entertainment program, kids' clubs, restaurants and supermarkets. Some holiday centres allow regular visitors to purchase their own chalets, and generations of the same families may visit each year. Holiday centres are more tolerant of clothing than members-only clubs; total nudity is usually compulsory in the swimming pools and may be expected on the beaches, while on the football pitches, or in the restaurants in the evening, it is rare.
A naturist resort is, to a European, an essentially urban development where naturism is the norm. Cap d'Agde in France; the naturist village of Charco del Palo on Lanzarote, Canary Islands; Vera Playa in Spain; and Vritomartis in Greece are examples.
In US usage, a naturist resort can mean a holiday centre.
Freikörperkultur (FKK), literally translated as "free body culture", is the name for the general movement in Germany. The abbreviation is widely recognised all over Europe and often found on informal signs indicating the direction to a remote naturist beach.
In some European countries, such as Denmark, all beaches are clothing optional, while in others like Germany and experimentally in France, there are naturist sunbathing areas in public parks, e.g., in Munich and Berlin. Beaches in some holiday destinations, such as Crete, are also clothing-optional, except some central urban beaches. There are two centrally located clothes-optional beaches in Barcelona. Sweden allows nudity on all beaches.
In a survey by "The Daily Telegraph", Germans and Austrians were most likely to have visited a nude beach (28%), followed by Norwegians (18%), Spaniards (17%), Australians (17%), and New Zealanders (16%). Of the nationalities surveyed, the Japanese (2%) were the least likely to have visited a nude beach. This result may indicate the lack of nude beaches in Japan; however, the Japanese are open with regard to family bathing nude at home and at onsen (hot springs).
From Woodstock to Edinburgh, and at Nambassa in the southern hemisphere, communal nudity has commonly been recorded at music and counterculture festivals.
The series of 1970s Nambassa hippie festivals held in New Zealand is a further example of non-sexualized naturism. Of the 75,000 patrons who attended the three-day Nambassa counterculture festival in 1979, an estimated 35% spontaneously chose to remove their clothing, preferring complete or partial nudity.
A few camps organize activities in the nude, including the famous oil wrestling by camp Gymnasium.
Organized by the Federación Nudista de México (Mexican Nudist Federation) since 2016 when Zipolite beach nudity was legalized, FESTIVAL NUDISTA ZIPOLITE occurs annually on the first weekend of February.
Nudist festivals are held to celebrate particular days of the year, and in many such events nude bodypainting is also common, such as Neptune Day Festival held in Koktebel, Crimea to depict mythological events.
The prevalence of naturism tends to increase during the summer months, when temperatures are higher, with some regions seeing both first-time naturists and people who have newly adopted the lifestyle. Some studies have observed that some of these naturists remain clothed during other seasons, making them seasonal naturists.
Nudity in social contexts has been practised in various forms by many cultures at all time periods. In modern Western society, social nudity is most frequently encountered in the contexts of bathing, swimming and in saunas, whether in single-sex groups, within the family or with mixed-sex friends, but throughout history and in many tropical cultures until now, nudity is a norm at many sports events and competitions.
The first known use of the word "naturisme" occurred in 1778. A French-speaking Belgian, Jean Baptiste Luc Planchon (1734–1781), used the term to advocate nudism as a means of improving the "hygiène de vie" or healthy living.
The earliest known naturist club in the western sense of the word was established in British India in 1891. The 'Fellowship of the Naked Trust' was founded by Charles Edward Gordon Crawford, a widower, who was a District and Sessions Judge for the Bombay Civil Service. The commune was based in Matheran and had just three members at the beginning; Crawford and two sons of an Anglican missionary, Andrew and Kellogg Calderwood. The commune fell apart when Crawford was transferred to Ratnagiri; he died soon after in 1894.
In 1902, a series of philosophical papers was published in Germany by Dr. Heinrich Pudor, under the pseudonym Heinrich Scham, who coined the term "Nacktkultur". In 1906 he went on to write a three volume treatise with his new term as its title, which discussed the benefits of nudity in co-education and advocated participating in sports while being free of cumbersome clothing. Richard Ungewitter ("Nacktheit", 1906, "Nackt", 1908, etc.) proposed that combining physical fitness, sunlight, and fresh air bathing, and then adding the nudist philosophy, contributed to mental and psychological fitness, good health, and an improved moral-life view. Major promoters of these ideas included Adolf Koch and Hans Suren. Germany published the first journal of nudism between 1902 and 1932.
The wide publication of those papers and others, contributed to an explosive worldwide growth of nudism, in which nudists participated in various social, recreational, and physical fitness activities in the nude. The first organized club for nudists on a large scale, "Freilichtpark" (Free-Light Park), was opened near Hamburg in 1903 by Paul Zimmerman.
In 1919, German doctor Kurt Huldschinsky discovered that exposure to sunlight helped to cure rickets in many children, causing sunlight to be associated with improved health.
In France in the early 20th century, the brothers Gaston and André Durville, both of them physicians, studied the effects of psychology, nutrition, and environment on health and healing. They became convinced of the importance of natural foods and the natural environment to human well-being and health. They named this concept "naturisme". The profound effect of clean air and sunlight on human bodies became evident to them, and so nudity became a part of their naturism.
Naturism became a more widespread phenomenon in the 1920s, in Germany, the United Kingdom, France and other European countries and spread to the United States where it became established in the 1930s.
By 1951, the national federations united to form the International Naturist Federation or INF. Some naturists preferred not to join clubs, and after 1945, pressure was applied to have beaches designated for naturist use.
From the middle of the 20th century, with changing leisure patterns, commercial organisations began opening holiday resorts to attract naturists who expected the same – or better – standards of comfort and amenity offered to non-naturists. More recently, naturist holiday options have expanded to include cruises.
Naturism was part of a literary movement in the late 1800s (see the writings of André Gide) which also influenced the art movements of the time specifically Henri Matisse and other Fauve painters. This movement was based on the French concept of "joie de vivre", the idea of reveling freely in physical sensations and direct experiences and a spontaneous approach to life.
There are documented psychological benefits of naturist activities, including greater life satisfaction, more positive body image, and higher self-esteem. Social nudity leads to acceptance in spite of differences in age, body shape, fitness, and health.
Christian naturism contains various members associated with most denominations. Although beliefs vary, a common theme is that much of Christianity has misinterpreted the events regarding the Garden of Eden, and God was displeased with Adam and Eve for covering their bodies with fig leaves.
In most European countries, nudity is not explicitly forbidden. Whether it is tolerated on beaches which are not marked as official nudist beaches varies greatly. The only country with substantially different laws is Denmark, where beach nudity is explicitly allowed on all beaches, except for two in the far west of the country.
Organized naturism in Belgium began in 1924 when engineer Joseph-Paul Swenne founded the Belgian League of Heliophilous Propaganda (usually abbreviated to ) in Uccle near Brussels. This was followed four years later by , founded by Jozef Geertz and hosted on the country estate of entrepreneur Oswald Johan de Schampelaere. Belgian naturism was influenced in equal part by French naturism and German . Today Belgian naturists are represented by the (FBN).
Croatia is world-famous for naturism, which accounts for about 15% of its tourism industry. It was also the first European country to develop commercial naturist resorts. During a 1936 Adriatic cruise, King Edward VIII and Wallis Simpson stopped at a beach on the island of Rab where King Edward obtained a special permission from the local government to swim naked, thereby designating it the world's first official nude beach.
In Finnish culture, nudism is considered to be a relatively normal way to live. It is not uncommon to see entire families spending time together naked. Families may be naked while bathing in a sauna, swimming in a pool, or playing on a beach, and it is not unusual to see children playing naked in a family yard, for example. Nudity as a whole is considered less taboo than in many other countries.
Marcel Kienné de Mongeot is credited with starting naturism in France in 1920. His family had suffered from tuberculosis, and he saw naturism as a cure and a continuation of the traditions of the ancient Greeks. In 1926, he started the magazine ' (later called ') and the first French naturist club, at Garambouville, near Evreux. The court action that he initiated, established that nudism was legal on private property that was fenced and screened.
Drs. André and Gaston Durville bought a 70 hectare site on the Île du Levant where they established the village of Héliopolis. The village was open to the public. In 1925 Dr François Fougerat de David de Lastours wrote a thesis on heliotherapy, and in that year opened the . In 1936, the naturist movement was officially recognised.
Albert and Christine Lecocq were active members of many of these clubs, but left after disagreements. In 1944 they founded a new club with members in 84 cities. In 1948 they founded another organisation; in 1949 they started a magazine; and in 1950 they opened CHM Montalivet, the world's first naturist holiday centre, where the INF was formed.
German naturism was part of the movement and the youth movement of 1896, from Steglitz, Berlin which promoted ideas of fitness and vigour. At the same time doctors of the were using heliotherapy, treating diseases such as TB, rheumatism and scrofula with exposure to sunlight.
"Nacktkultur", a term coined in 1903 by Heinrich Pudor, flourished. It connected nudity, vegetarianism and social reform, and was practised in a network of 200 member clubs. The movement gained prominence in the 1920s as offering a health-giving lifestyle with Utopian ideals. Germany published the first naturist journal between 1902 and 1932.
It became politicised by radical socialists who believed it would lead to classlessness and a breaking down of society. It also became associated with pacifism.
In 1926, Adolf Koch established a school of naturism in Berlin; encouraging a mixing of the sexes, open air exercises, and a programme of "sexual hygiene". In 1929, the Berlin school hosted the first International Congress on Nudity.
After the war, East Germans were free to practice naturism, chiefly at beaches rather than clubs (private organizations being regarded as potentially subversive). Naturism became a large element in DDR politics. The subsection of the Workers Sports Organisation had 60,000 members.
Today, following reunification, there are many clubs, parks and beaches open to naturists, though nudity has become less common in the former eastern zone. Germans are typically the most commonly seen foreigners at nude beaches in France and around Europe.
Public nudity is prohibited in Greece and there are no official nude beaches. There are, however, numerous unofficial nude beaches especially on the islands frequented by tourists, like Crete, Mykonos or Karpathos but also on smaller islands like Skopelos or Skiathos where nudity is tolerated, usually at the more remote ends or secluded areas of beaches.
On the other hand, toplessness is not illegal and is widely practiced by locals and tourists alike as there are no cultural taboos against it.
Public nudity is generally prohibited in Italy as a civil offence and can be punished with high fines, with the exception of official naturist beaches and places with a tradition of naturist attendance, as confirmed by a recent acquittal. Furthermore, in the past decade some regions have created laws to support the naturist tourism industry, and there are currently twelve official naturist beaches in Italy, where nudity is officially guaranteed by administrative acts. On all other public beaches in Italy, police can potentially impose substantial fines.
On the other hand, female toplessness has been officially legal (in a nonsexual context) on all public beaches and in swimming pools throughout the country (unless otherwise specified by region, province or municipality by-laws) since 20 March 2000, when the Supreme Court of Cassation (through sentence No. 3557) determined that the exposure of the nude female breast, after several decades, is now considered "commonly accepted behavior", and therefore has "entered into the social custom".
The oldest Dutch naturist association is ("Sun and Life"), founded in 1946 with the aim of promoting healthy physical and mental development and a natural way of life. The national association is (NFN), which in 2017 adopted the new brand name ("Simply Naked") in an effort to become more accessible to casual naturists and strengthen the acceptance of nude recreation.
In general, Dutch people are very tolerant of beach nudity, as long as it does not impact on others, or involve inappropriate staring or sexual behaviour. Topless sunbathing is permitted on most beaches except where prohibited by signage.
The "Federação Portuguesa de Naturismo" (Portuguese Naturist Federation) or FPN was founded on 01 March 1977 in Lisbon. In the 21th century, naturism is considered a tolerated practice, whereas there are many officially-designated nudist beaches.
In today's Poland, naturism is practiced at a number of seaside and inland beaches. Most Polish beaches are actually clothing-optional rather than naturist. One such beach is Międzyzdroje-Lubiewo.
Naturism in Slovenia dates back to 1852, when the 29-year-old Swiss physician Arnold Rikli visited Bled for the first time. In the following years he promoted a healthy way of living, considering water, air and light to be the sources of his healing therapy, and he went on to build spa centres that offered light therapy and hydrotherapy treatments.
Public nudity in Spain is not illegal, since there is no law banning its practice. Spanish legislation provides for the offence of exhibitionism but restricts its scope to obscene exposure in front of children or mentally impaired individuals, i.e. exposure with a sexual connotation.
There are, however, some municipalities (like San Pedro del Pinatar) where public nudity has been banned by means of by-laws. Other municipalities (like Barcelona, Salou, Platja de Palma and Sant Antoni de Portmany) have used similar provisions to regulate partial nudity, requiring people to cover their torsos on the streets. Some naturist associations have appealed these by-laws on the grounds that a fundamental right (freedom of expression, as they understand nudism to be self-expression) cannot be regulated with such a mechanism. Some courts have ruled in favour of nudist associations.
Nudism in Spain is normally practised by the seaside, on beaches or small coves with a tradition of naturism. In Vera (Andalusia), there is a wide residential area formed by nudist urbanisations. Nudist organisations may organise some activities elsewhere in inner territory.
Legal provisions regarding partial nudity (or toplessness) are analogous to those regarding full nudity, but social tolerance towards toplessness is higher. The law does not require women to cover their breasts in public swimming, or on any beach in Spain. The governments of the municipalities of Galdakao and L'Ametlla del Vallès legalized female toplessness on their public pools in March 2016 and June 2018, respectively.
In the United Kingdom, the first official nudist club was established in Wickford, Essex in 1924. According to Michael Farrar, writing for British Naturism the club adopted the name "Moonella Group" from the name of the owner of the ground, "Moonella", and called its site The Camp. Moonella, who was still living in 1965 but whose identity remains to be discovered, had inherited a house with land in 1923 and made it available to certain members of the New Gymnosophy Society. This society had been founded a few years before by H.C. Booth, M.H. Sorensen and Rex Wellbye under the name of the English Gymnosophical Society. It met for discussions at the Minerva Cafe at 144 High Holborn in London, the headquarters of the Women's Freedom League. Those who were permitted to join the Moonella Group were carefully selected, and the club was run by an "aristocracy" of the original members, all of whom had "club names" to preserve their anonymity. The club closed in 1926 because of building on adjacent land.
By 1943 there were a number of these so-called "sun clubs" and together they formed the British Sun Bathers Association or BSBA. In 1954 a group of clubs unhappy with the way the BSBA was being run split off to form the Federation of British Sun Clubs or FBSC. In 1961, the BSBA Annual Conference agreed that the term nudist was inappropriate and should be discarded in favour of naturist. The two organisations rivalled each other before eventually coming together again in 1964 as the Central Council for British Naturism or CCBN. This organisation structure has remained much the same but it is now called British Naturism which is often abbreviated to BN.
The first official nude beach was opened at Fairlight Glen in Covehurst Bay near Hastings in 1978 (not to be confused with Fairlight Cove, which is 2 km to the east) followed later by the beaches at Brighton and Fraisthorpe. Bridlington opened in April 1980.
Australia's first naturist club was founded in Sydney in 1931 by the French-born anarchist and pacifist Kleber Claux. In 1975, the southern half of Maslin Beach, south of Adelaide, was declared Australia's first official nude beach. The beach is long enough that the area reserved for nude bathing is well away from other beach users.
Nudist clubs (known as "sun clubs") were established in Dunedin and Auckland in early 1938; the Auckland Sun Group went into recess shortly afterwards due to the outbreak of World War II. In 1958 the allied nudist clubs of New Zealand established the New Zealand Sunbathing Association, later renamed the New Zealand Naturist Federation. The Federation today includes 17 affiliated clubs with a total membership, in 2012, of 1,600 people. In 2016 the Federation in conjunction with Tourism New Zealand hosted the World Congress of the International Naturist Federation at the Wellington Naturist Club, marking the second time the Congress had ever been held in the Southern Hemisphere.
Outside formal naturist organizations, social nudity is practised in a variety of contexts in New Zealand culture. It is a feature of many summer music festivals, including Convergence, Kiwiburn, Luminate, Rhythm & Vines, and Splore, in a tradition going back to Nambassa in the late 1970s. It is also associated with the culture of rugby, most prominently in the nude rugby match held in Dunedin each winter from 2002 to 2014 (and sporadically thereafter) as pre-match entertainment for the first professional rugby game of the season, and in the mock public holiday "National Nude Day", an event in which viewers of the TV2 talk show "SportsCafe" were invited – chiefly by former rugby player Marc Ellis, the show's most irrepressibly comic presenter – to send in photos and video of themselves performing daily activities in the nude.
Whilst a large proportion of New Zealanders today are tolerant of nudity, especially on beaches, there remains a contingent who consider it obscene. Naturists who engage in casual public nudity, even in places where this is lawful, run the risk of having the police called on them by disapproving people. Legally, nudity is permissible on any beach where it is "known to occur", in consequence of which New Zealand has no official nude beaches. The "indecent exposure" provision of the Summary Offences Act is in practice reserved for cases of public sexual gratification, but public nudity may still be prosecuted under the "offensive behaviour" provision.
In Canada, individuals around the country became interested in nudism, skinny-dipping, and physical culture in the early part of the 20th century. After 1940 they had their own Canadian magazine, "Sunbathing & Health", which occasionally carried local news. Canadians had scattered groups in several cities during the 1930s and 1940s, and some of these groups attracted enough interest to form clubs on private land. The most significant were the Van Tan Club, formed in 1939 and still operating today in North Vancouver, British Columbia, and, in Ontario, the Sun Air Club.
Canadians who served in the military during the Second World War met like-minded souls from across the country, and often visited clubs while in Europe. They were a ready pool of recruits for post-war organizers. A few years later, the wave of post-war immigration brought many Europeans with their own extensive experience, and they not only swelled the ranks of membership, but often formed their own clubs, helping to expand nudism from coast to coast.
Most of those clubs united in the Canadian Sunbathing Association, which affiliated with the American Sunbathing Association in 1954. Several disagreements between eastern and western members of the CSA resulted in the breakup of CSA into the Western Canadian Sunbathing Association (WCSA) and the Eastern Canadian Sunbathing Association (ECSA) in 1960. The ECSA endured much in-fighting over the next decade and a half, leading to its official demise in 1978. The WCSA continues today as the American Association for Nude Recreation – Western Canadian Region (www.aanr-wc.com), a region of the American Association for Nude Recreation (AANR) which itself was formerly known as the ASA.
In 1977 the FQN was founded in Quebec by Michel Vaïs, who had experienced European naturism at Montalivet. In 1985 the Federation of Canadian Naturists (FCN) was formed with the support of the FQN. In 1988 the FQN and FCN formed the FQN-FCN Union as the official Canadian representative in the International Naturist Federation (INF).
Federación Nudista de México is a membership organization with both individual and organizational members. It promotes social nudity in Mexico and is recognized by the International Naturist Federation as the official national naturist organization in that country.
As of 2016, Playa Zipolite is Mexico's first and only legal public nude beach. A "free beach" that has been unofficially nudist for more than 50 years, it is reputed to be the best place for nudism in the country; the numerous nude sunbathers and the long tradition make it safe for nudism and naturism. Annually since 2016, on the first weekend of February, Zipolite has hosted the Festival Nudista Zipolite, which in 2019 attracted 7,000-8,000 visitors.
Kurt Barthel founded the American League for Physical Culture in 1929 and organized the first nudist event. In about 1930 its members organized the American Gymnosophical Association. Barthel founded America's first official nudist camp, Sky Farm in New Jersey, in May 1932. Around 1932, the AGA established the Rock Lodge Club as a nudist facility in Stockholm, New Jersey, and Ilsley Boone, a Dutch Reformed minister, formed the Christian naturism movement. Naturism began expanding nationwide. Nudist venues were teetotal until 1970.
The American Association for Nude Recreation (AANR) is the national naturist organization. Arnd Krüger compared nudists in Germany and the United States and came to the conclusion that in Germany the racial aspects ("Zuchtwahl") were important for the breakthrough (e.g. the Commanding General of the Army served as patron for nudist events), while in the U.S. nudism was far more commercial and thus had more difficulties.
In 2008, Florida Young Naturists held their first Naked Bash, which has since been repeated multiple times per year and has grown into one of the largest young naturist gatherings in the world.
In 2009, the AANR ran a campaign to promote nudism in the United States by attempting to record the largest simultaneous skinny dip at several U.S. clubs and beaches, held on July 11 of that year.
In 2010, a new organization formed called Young Naturists and Nudists America which was mostly focused around the younger generation as well as social issues, such as body image. Young Naturists and Nudists America closed in 2017.
In the seventies, nudity on Bali's remote and deserted beaches was common, but with the massive growth of tourism this practice has disappeared. In 2002, nudity was declared illegal on Petitenget Beach, the last beach in Seminyak that tolerated discreet nudity, and individuals began to practice nudity in private villas and resorts. Laki Uma Villa, the first naturist facility to open, was for gay men only. Bali au Naturel, the first adult-only nudist resort for both genders, opened its doors in 2004; it subsequently expanded from 3 to 15 rooms and added two more swimming pools. Indonesia has an underground naturist community, who defy the laws against public nudity there.
Nudism was successfully introduced to Thailand in 2012 by the Thailand Naturist Association in Pattaya (Chan Resort), and six more nudist resorts have since been created across the country. Barefeet Resort in Bangkok, Lemon Tree in Phuket, Oriental Village in Chiangmai, Phuan Naturist Village in Huay Yai, and Peace Blue Naturist Resort in Phuket are all members of the Naturist Association of Thailand as well as of other international naturist organizations.
Magazines published by, for or purportedly about naturists can be grouped into several categories.
Magazines in the second and, occasionally, third grouping feature naturist editorial and advertising, while some naturists argue over which magazines belonged in which of these categories – these views may change as publishers and editors change. Many clubs and groups have benefitted from magazines which, while not exclusively or even predominantly naturist in character, made naturist information available to many who would not otherwise have been aware of it. (These days, the information and advertising provided online, and the wide availability of free online porn, has meant the disappearance of old-style 'skin' magazines presenting significant glamour content masquerading as or alongside naturist content. Naturist magazines have to appeal strongly to naturists to succeed – they cannot sit on the fence between naturism and glamour.)
Some naturists still feel that the worthwhile editorial content in some magazines is not a fair balance for the disapproved-of photographic content.
Some naturist clubs have been willing to allow filming by the media on their grounds, though content that proved not to be of genuine naturism can end up being parodied by the media as the norm.
Some commercial 'naturist' DVDs are dominated by imagery of naked children. Such material can be marketed in ways that appear to appeal directly to paedophile inclinations, and ownership of these DVDs (and their earlier video cassette incarnations) has resulted in successful British prosecutions for possession of indecent images of children. One case was appealed, unsuccessfully, to the European Court of Human Rights. The precedents set by the court cases mean that possession in Britain of any naturist image of a child is, potentially, grounds for prosecution.
Photo shoots, including major high-profile works by Spencer Tunick, are done in public places including beaches. | https://en.wikipedia.org/wiki?curid=21911 |
Nordea
Nordea Bank Abp, commonly referred to as Nordea, is a European financial services group operating in northern Europe and based in Helsinki, Finland. The bank is the result of the successive mergers and acquisitions of the Finnish, Danish, Norwegian and Swedish banks of Merita Bank, Unibank, Kreditkassen (Christiania Bank) and Nordbanken that took place between 1997 and 2000. The Baltic states are today also considered part of the home market. The largest shareholder of Nordea is Sampo, a Finnish insurance company with around 20% of the shares. Nordea is listed on the Copenhagen Stock Exchange, Helsinki Stock Exchange and Stockholm Stock Exchange.
Nordea operates across both the Nordic and Baltic regions with over 1,400 branches. The bank is present in 20 countries around the world, operating through full-service branches, subsidiaries and representative offices, although it primarily provides services in Finland, Norway, Denmark, Sweden, Estonia, Latvia and Lithuania.
Nordea serves 11 million private and 700,000 active corporate customers. The group also operates an Internet bank, which has more than 5.9 million online customers and performs more than 260 million payments per year.
Nordea is the result of the successive mergers and acquisitions of the Swedish, Finnish, Danish and Norwegian banks of Nordbanken, Merita Bank, Unibank and Kreditkassen (Christiania Bank og Kreditkasse) that took place between 1997 and 2000.
PK-banken was formed in 1974 by a merger between Postbanken (formed 1884) and Sveriges Kreditbank (formed 1923), both state-owned.
The private Nordbanken was formed in 1986 by a merger of two smaller private local banks, Uplandsbanken and Sundsvallsbanken. The Swedish banking crisis of 1991, resulting from deregulated markets and a housing price bubble, forced the government to nationalise Nordbanken for 64 billion kronor. Bad debts were transferred to the asset-management companies Securum and Retriva, which sold off the assets.
The name Nordea comes from the Swedish bank Nordbanken; this developed from PK-banken (Post och Kreditbanken) which in 1990 purchased the smaller private bank Nordbanken, and picked up that name. The name is also a contraction of the words Nordic and ideas.
Merita Bank was a 1995 merger of the former main rivals in Finland, the originally Svecoman Union Bank of Finland (Suomen Yhdyspankki) founded in 1862 and the Fennoman National Share Bank (Kansallis-Osake-Pankki) founded in 1889.
Nordea was the subject of an online phishing scam in 2007. The company estimated 8 million kr ($1.1 million) was stolen. Customers were targeted over a period of 15 months with phishing emails containing a trojan horse. Nordea refunded affected customers.
Nordea converted its subsidiaries operating in Denmark, Finland, and Norway to branches under the Swedish holding company, Nordea AB, in January 2017. In August 2017, DNB ASA and Nordea combined their operations in Estonia, Latvia and Lithuania to create Luminor Bank.
Nordea announced plans to move its corporate headquarters to Helsinki, Finland in September 2017. In October 2018, Nordea completed the move of its corporate headquarters to Helsinki, Finland.
In March 2019, public service broadcasting company, Yle, aired a program that revealed money laundering allegations against Nordea. The company was the biggest Nordic lender allegedly involved in the multi-million-dollar money laundering scheme, according to Bloomberg.
In October 2019, Nordic banks agreed to fund common payment services in order to boost businesses.
Nordea's largest single shareholder is the Finnish insurance company Sampo, which holds around 20% of the shares, with the remainder of the stock traded on the Copenhagen, Helsinki and Stockholm stock exchanges.
Nordea Markets is the international markets operation of Nordea. It handles a broad range of investment banking products and services including fixed income, currencies, commodities, equities, debt capital markets, and corporate finance. It also supplies advisory services and internationally acknowledged economic research and analysis.
There are approximately 2,200 employees including Financial Risk Control and Capital Markets Services. Its main operational centres are in Copenhagen (also the main trading floor), Helsinki, Oslo and Stockholm, and with regional offices also in Brazil (São Paulo), China (Beijing and Shanghai), Estonia (Tallinn), Germany (Frankfurt), Latvia (Riga), Lithuania (Vilnius), Luxembourg (Luxembourg City), Poland (Warsaw), Russia (Moscow), Singapore, Switzerland (Zurich), the United Kingdom (London), and United States (New York City).
The largest financial group in the Nordic region, Nordea has, despite warnings from the Swedish Financial Supervisory Authority (FI), been active in using offshore companies in tax havens, according to the Panama Papers.
Between 2004 and 2014, Nordea's Luxembourg branch founded nearly 400 offshore companies in Panama and the British Virgin Islands for its customers.
The Swedish Financial Supervisory Authority (FI) has pointed out that there are "serious deficiencies" in how Nordea monitors money laundering and has given the bank two warnings. In 2015, Nordea had to pay the largest possible fine, over EUR 5 million.
In 2012, Nordea asked Mossack Fonseca to change documents retroactively so that three Danish customers' power-of-attorney documents appeared to have been in force since 2010.
Thorben Sanders, the director of Nordea Private Banking, admitted that before 2009 the bank did not screen for customers trying to evade tax: "At the end of 2009 we decided that our bank should not be a means of tax evasion", Sanders said.
As a consequence of the leaked documents, the Swedish Financial Supervisory Authority (FI) stated on 4 April 2016 that it had started an investigation into the conduct of Nordea, the largest financial group in the Nordic region. The Swedish minister of Finance Magdalena Andersson characterized the conduct of Nordea as "a crime" and "totally unacceptable". Nordea CEO Casper von Koskull stated that he was disappointed with the shortcomings within Nordea's operating principles, saying that "this cannot be tolerated".
Other Swedish banks are mentioned in the documents, but mention of Nordea occurs 10,902 times and the second-most mentioned bank has 764 matches.
Stefan Löfven, Prime Minister of Sweden, said in 2016 that he was very critical of the conduct of Nordea and its role, and said "They are on the list of shame too".
Nordea bank loaned billions of euros to shipping companies that own vessels in secrecy jurisdictions such as Bermuda, Cyprus, Panama, BVI, the Cayman Islands and the Isle of Man. In the Paradise Papers, Nordea was shown to have lent a significant amount of money to customers based in tax havens. | https://en.wikipedia.org/wiki?curid=21916 |
Normal subgroup
In abstract algebra, a normal subgroup is a subgroup that is invariant under conjugation by members of the group of which it is a part. In other words, a subgroup $N$ of the group $G$ is normal in $G$ if and only if $gng^{-1} \in N$ for all $g \in G$ and $n \in N$. The usual notation for this relation is $N \trianglelefteq G$.
Normal subgroups are important because they (and only they) can be used to construct quotient groups of the given group. Furthermore, the normal subgroups of $G$ are precisely the kernels of group homomorphisms with domain $G$, which means that they can be used to internally classify those homomorphisms.
Évariste Galois was the first to realize the importance of the existence of normal subgroups.
A subgroup $N$ of a group $G$ is called a normal subgroup of $G$ if it is invariant under conjugation; that is, the conjugation of an element of $N$ by an element of $G$ is always in $N$. The usual notation for this relation is $N \trianglelefteq G$, and the definition may be written in symbols as
$$N \trianglelefteq G \iff \forall g \in G,\ \forall n \in N:\ g n g^{-1} \in N.$$
For any subgroup $N$ of $G$, the following conditions are equivalent to $N$ being a normal subgroup of $G$; therefore, any one of them may be taken as the definition: the image of $N$ under conjugation by any element of $G$ is contained in $N$; the image of $N$ under conjugation by any element of $G$ is equal to $N$; the left and right cosets of $N$ in $G$ coincide, that is, $gN = Ng$ for all $g \in G$; $N$ is a union of conjugacy classes of $G$; and $N$ is the kernel of some group homomorphism with domain $G$.
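As a brief added example (not part of the original article): in the symmetric group $S_3$, the alternating subgroup $A_3$, consisting of the identity and the two 3-cycles, is normal, since conjugating a 3-cycle by any permutation yields another 3-cycle. By contrast, the two-element subgroup $\{e, (1\,2)\}$ is not normal: conjugating $(1\,2)$ by the 3-cycle $(1\,2\,3)$ gives $(2\,3)$, which lies outside the subgroup.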
Given two normal subgroups, $N$ and $M$, of $G$, their intersection $N \cap M$ and their product $NM = \{nm : n \in N,\ m \in M\}$ are also normal subgroups of $G$.
The normal subgroups of $G$ form a lattice under subset inclusion with least element the trivial subgroup $\{e\}$ and greatest element $G$ itself. The meet of two normal subgroups, $N$ and $M$, in this lattice is their intersection and the join is their product.
The lattice is complete and modular.
If $N$ is a normal subgroup, we can define a multiplication on cosets as follows: $(a_1 N)(a_2 N) := (a_1 a_2) N$. This relation defines a mapping $G/N \times G/N \to G/N$. To show that this mapping is well-defined, one needs to prove that the choice of representative elements $a_1, a_2$ does not affect the result. To this end, consider some other representative elements $a_1' \in a_1 N$ and $a_2' \in a_2 N$. Then there are $n_1, n_2 \in N$ such that $a_1' = a_1 n_1$ and $a_2' = a_2 n_2$. It follows that $a_1' a_2' N = a_1 n_1 a_2 n_2 N = a_1 a_2 n_1' n_2 N = a_1 a_2 N$, where we also used the fact that $N$ is a "normal" subgroup, and therefore there is $n_1' \in N$ such that $n_1 a_2 = a_2 n_1'$. This proves that the product is a well-defined mapping between cosets.
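As a concrete added example (not part of the original article): take $G = \mathbb{Z}$ under addition and $N = 3\mathbb{Z}$, which is normal because $\mathbb{Z}$ is abelian. The rule above gives $(1 + 3\mathbb{Z}) + (2 + 3\mathbb{Z}) = 3 + 3\mathbb{Z} = 0 + 3\mathbb{Z}$; choosing different representatives, say $4$ and $8$, yields $12 + 3\mathbb{Z}$, which is the same coset, exactly as the well-definedness argument guarantees.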
With this operation, the set of cosets is itself a group, called the quotient group and denoted $G/N$. There is a natural homomorphism $f: G \to G/N$ given by $f(a) = aN$. This homomorphism maps $N$ into the identity element of $G/N$, which is the coset $eN = N$; that is, $f(N) = \{N\}$.
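For small groups, these constructions can be checked by brute force. The following Python sketch is an added illustration (not part of the original article; all function and variable names are only for this example): it represents elements of the symmetric group $S_3$ as tuples, tests normality directly from the conjugation criterion, and counts the cosets making up a quotient group.

```python
# Illustrative sketch (not from the article): brute-force verification of
# normality and cosets for the symmetric group S3.  Permutations are stored
# as tuples p where p[i] is the image of i.
from itertools import permutations

def compose(p, q):
    """Return the permutation p ∘ q, i.e. apply q first, then p."""
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    """Return the inverse permutation of p."""
    inv = [0] * len(p)
    for i, image in enumerate(p):
        inv[image] = i
    return tuple(inv)

def is_normal(G, N):
    """Check the definition: g n g^{-1} must lie in N for every g in G, n in N."""
    return all(compose(compose(g, n), inverse(g)) in N for g in G for n in N)

def cosets(G, N):
    """Return the set of left cosets gN, each represented as a frozenset."""
    return {frozenset(compose(g, n) for n in N) for g in G}

S3 = set(permutations(range(3)))              # all 6 permutations of {0, 1, 2}
A3 = {(0, 1, 2), (1, 2, 0), (2, 0, 1)}        # identity and the two 3-cycles
T  = {(0, 1, 2), (1, 0, 2)}                   # subgroup generated by a transposition

print(is_normal(S3, A3))    # True:  A3 is invariant under conjugation
print(is_normal(S3, T))     # False: T is not invariant under conjugation
print(len(cosets(S3, A3)))  # 2:     the quotient S3/A3 has two elements
```

Here $A_3$ passes the test, consistent with the fact that it is the kernel of the sign homomorphism, while the subgroup generated by a single transposition fails.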
In general, a group homomorphism $f: G \to H$ sends subgroups of $G$ to subgroups of $H$. Also, the preimage of any subgroup of $H$ is a subgroup of $G$. We call the preimage of the trivial group $\{e\}$ in $H$ the kernel of the homomorphism and denote it by $\ker f$. As it turns out, the kernel is always normal and the image $f(G)$ of $G$ is always isomorphic to $G/\ker f$ (the first isomorphism theorem). In fact, this correspondence is a bijection between the set of all quotient groups $G/N$ of $G$ and the set of all homomorphic images of $G$ (up to isomorphism). It is also easy to see that the kernel of the quotient map, $f: G \to G/N$, is $N$ itself, so the normal subgroups are precisely the kernels of homomorphisms with domain $G$. | https://en.wikipedia.org/wiki?curid=21918 |
Napalm
Napalm is an incendiary mixture of a gelling agent and a volatile petrochemical (usually gasoline (petrol) or diesel fuel). The name is a portmanteau of the names of two of the constituents of the original thickening and gelling agents: co-precipitated aluminium salts of naphthenic acid and palmitic acid. Napalm B is the more modern version of napalm (utilizing polystyrene derivatives) and, although distinctly different in its chemical composition, is often referred to simply as "napalm".
A team led by chemist Louis Fieser originally developed napalm for the United States Chemical Warfare Service in 1942 in a secret laboratory at Harvard University. Of immediate first interest was its viability as an incendiary device to be used in fire bombing campaigns during World War II; its potential to be coherently projected into a solid stream that would carry for distance (instead of the bloomy fireball of pure gasoline) resulted in widespread adoption in infantry/combat engineer flamethrowers as well.
Napalm burns at temperatures ranging from 800 °C (1,472 °F) to 1,200 °C (2,192 °F). In addition, it burns for a greater duration than gasoline, as well as being more easily dispersed and sticking tenaciously to its targets. These traits make it extremely effective (and controversial) in the anti-structure and antipersonnel role. It has been widely used in both the air and ground role, with the largest use to date being via air-dropped bombs in World War II (most notably in the devastating incendiary attacks on Japanese cities in 1945), and later close air support roles in Korea and Vietnam. Napalm also has fueled most of the flamethrowers (tank, ship and infantry-based) used since World War II, giving them much greater range, and was used in this role as a common weapon of urban combat by both the Axis and the Allies in World War II. Multiple nations (including the United States, China, Russia, Iran and North Korea) maintain large stockpiles of napalm-based weapons of various types.
Napalm was used in flamethrowers, bombs and tanks in World War II. It is believed to have been formulated to burn at a specific rate and to adhere to surfaces to increase its stopping power. During combustion, napalm rapidly deoxygenates the available air and generates large amounts of carbon monoxide and carbon dioxide.
Alternative compositions exist for different uses, e.g. triethylaluminium, a pyrophoric compound that aids ignition.
Use of fire in warfare has a long history. Greek fire, also described as "sticky fire" (πῦρ κολλητικόν, "pýr kolletikón"), is believed to have had a petroleum base. The development of napalm was precipitated by the use of jellied gasoline mixtures by the Allied forces during World War II. Latex, used in these early forms of incendiary devices, became scarce, since natural rubber was almost impossible to obtain after the Japanese army captured the rubber plantations in Malaya, Indonesia, Vietnam, and Thailand.
This shortage of natural rubber prompted chemists at US companies such as DuPont and Standard Oil, and researchers at Harvard University, to develop factory-made alternatives—artificial rubber for all uses, including vehicle tires, tank tracks, gaskets, hoses, medical supplies and rain clothing. A team of chemists led by Louis Fieser at Harvard University was the first to develop synthetic napalm, during 1942. "The production of napalm was first entrusted to Nuodex Products, and by the middle of April 1942 they had developed a brown, dry powder that was not sticky by itself, but when mixed with gasoline turned into an extremely sticky and inflammable substance." One of Fieser's colleagues suggested adding phosphorus to the mix which increased the "ability to penetrate deeply...into the musculature, where it would continue to burn day after day."
On 4 July 1942, the first test occurred on the football field near the Harvard Business School. Tests under operational conditions were carried out at Jefferson Proving Ground on condemned farm buildings, and subsequently at Dugway Proving Ground on buildings designed and constructed to represent those to be found in German and Japanese towns. This new mixture of chemicals was widely used in the Second World War in incendiary bombs and in flamethrowers.
From 1965 to 1969, the Dow Chemical Company manufactured napalm B for the American armed forces. After news reports of napalm B's deadly and disfiguring effects were published, Dow Chemical experienced boycotts of its products, and its recruiters for new chemists, chemical engineers, etc., graduating from college were subject to campus boycotts and protests. The management of the company decided that its "first obligation was the government." Meanwhile, napalm B became a symbol for the Vietnam War.
Napalm was first employed in incendiary bombs and went on to be used as fuel for flamethrowers.
The first recorded strategic use of napalm incendiary bombs occurred in an attack by the US Army Air Force on Berlin on 6 March 1944, using American AN-M76 incendiary bombs with PT-1 (Pyrogel) filler.
The first known tactical use by the USAAF was by the 368th Fighter Group, 9th Air Force, northeast of Compiègne, France, on 27 May 1944. The British De Havilland Mosquito FB Mk.VIs of No. 140 Wing RAF, Second Tactical Air Force, also employed the AN-M76 incendiary, on 14 July 1944, in a reprisal attack on the 17th SS Panzergrenadier Division "Götz von Berlichingen" in Bonneuil-Matours. Soldiers of this Waffen SS unit had captured and then killed a British SAS prisoner-of-war, Lt. Tomos Stephens, who was taking part in Operation Bulbasket, and seven local Resistance fighters. Although it was not known at the time of the air strike, 31 other POWs from the same SAS unit, and an American airman who had joined up with the SAS unit, had also been executed.
Further use of napalm by American forces occurred in the Pacific theater of operations, where in 1944 and 1945, napalm was used as a tactical weapon against Japanese bunkers, pillboxes, tunnels, and other fortifications, especially on Saipan, Iwo Jima, the Philippines, and Okinawa, where deeply dug-in Japanese troops refused to surrender. Napalm bombs were dropped by aviators of the U.S. Navy, the United States Army Air Forces, and the U.S. Marine Corps in support of ground troops.
When the U.S. Army Air Forces on the Marianas Islands ran out of conventional thermite incendiary bombs for their B-29 Superfortresses to drop on large Japanese cities, its top commanders, such as General Curtis LeMay, used napalm bombs to continue with fire raids.
In the European Theater of Operations napalm was used by American forces in the siege of La Rochelle in April 1945 against German soldiers (and inadvertently French civilians in Royan) – about two weeks before the end of the war.
In its first known post-WWII use, U.S.-supplied napalm was used in the Greek Civil War by the Greek National Army as part of Operation Coronis against the Democratic Army of Greece (DSE) — the military branch of the Communist Party of Greece (KKE).
Napalm was also widely used by the United States during the Korean War. The ground forces in North Korea holding defensive positions were often outnumbered by Chinese and North Koreans, but U.S. Air Force and Navy aviators had control of the air over nearly all of the Korean Peninsula. Hence, the American and other U.N. aviators used napalm B for close air support of the ground troops along the border between North Korea and South Korea, and also for attacks in North Korea. Napalm was used most notably during the battle "Outpost Harry" in South Korea during the night of June 10–11, 1953. Eighth Army chemical officer Donald Bode reported that on an "average good day" UN pilots used 70,000 gallons of napalm, with approximately 60,000 gallons of this thrown by US forces. The "New York Herald Tribune" hailed "Napalm, the No. 1 Weapon in Korea". Winston Churchill, among others, criticized American use of napalm in Korea, calling it "very cruel", as the US/UN forces, he said, were "splashing it all over the civilian population", "tortur[ing] great masses of people". The American official who took this statement declined to publicize it.
At the same time the French Air Force regularly used napalm for close air support of ground operations in the First Indochina War (1946–1954). At first the canisters were simply pushed out the side doors of Ju-52 planes that had been captured in Germany; later, mostly B-26 bombers were used.
Napalm became an intrinsic element of U.S. military action during the Vietnam War as forces made increasing use of it for its tactical and psychological effects. Reportedly about 388,000 tons of U.S. napalm bombs were dropped in the region between 1963 and 1973, compared to 32,357 tons used over three years in the Korean War, and 16,500 tons dropped on Japan in 1945. The U.S. Air Force and U.S. Navy used napalm with great effect against all kinds of targets, such as troops, tanks, buildings, jungles, and even railroad tunnels. The effect was not always purely physical as napalm had psychological effects on the enemy as well.
A variant of napalm was produced in Rhodesia for a type of ordnance known as "Frantan" between 1968 and 1978 and was deployed extensively by the Rhodesian Air Force during that country's bush war. In May 1978, Herbert Ushewokunze, minister of health for the Zimbabwe African National Union (ZANU) produced photographic evidence of purported civilian victims of Rhodesian napalm strikes, which he circulated during a tour of the US. The government of Mozambique and the Zimbabwe African People's Union (ZAPU) also issued claims at around the same time that napalm strikes against guerrilla targets had become a common feature in Rhodesian military operations both at home and abroad.
The South African Air Force frequently deployed napalm from Atlas Impala strike aircraft during raids on guerrilla bases in Angola during the South African Border War.
Other instances of napalm's use include by France during the Algerian War (1954–1962), in the Portuguese Colonial War (1961–1974), by Israel in the Six-Day War (1967), in Nigeria (1969), by India and Pakistan (1965 and 1971), by Egypt (1973), by Morocco during the Western Sahara War (1975–1991), by Argentina (1982), by Iran (1980–88), by Iraq (1980–88, 1991), by the IPKF (Indian Peace Keeping Force) against the Tamil Tigers (LTTE) in Sri Lanka in 1987, by Angola during the Angolan Civil War, and in Yugoslavia (1991–1996). More recently, Turkey has been accused of using napalm in its war against Kurdish militias over Afrin; Turkey's General Staff, however, denies this.
When used as a part of an incendiary weapon, napalm can cause severe burns (ranging from superficial to subdermal), asphyxiation, unconsciousness, and death. In this implementation, napalm fires can create an atmosphere of greater than 20% carbon monoxide, as well as firestorms with self-perpetuating winds.
Napalm is effective against dug-in enemy personnel. The burning incendiary composition flows into foxholes, trenches and bunkers, and drainage and irrigation ditches and other improvised troop shelters. Even people in undamaged shelters can be killed by hyperthermia, radiant heat, dehydration, asphyxiation, smoke exposure, or carbon monoxide poisoning.
One firebomb released from a low-flying plane can damage a large area.
International law does not specifically prohibit the use of napalm or other incendiaries against military targets, but use against civilian populations was banned by the United Nations Convention on Certain Conventional Weapons (CCW) in 1980. Protocol III of the CCW restricts the use of all incendiary weapons, but a number of countries have not acceded to all of the protocols of the CCW. According to the Stockholm International Peace Research Institute (SIPRI), countries are considered a party to the convention, which entered into force as international law in December 1983, as long as they ratify at least two of the five protocols. Approximately 25 years after the General Assembly adopted it, the United States signed it on January 21, 2009, President Barack Obama's first full day in office. Its ratification, however, is subject to a reservation that says that the treaty can be ignored if it would save civilian lives. | https://en.wikipedia.org/wiki?curid=21920 |
Northern Crusades
The Northern Crusades or Baltic Crusades were Christian colonization and Christianization campaigns undertaken by Catholic Christian military orders and kingdoms, primarily against the pagan Baltic, Finnic and West Slavic peoples around the southern and eastern shores of the Baltic Sea, and to a lesser extent also against Orthodox Christian Slavs (East Slavs).
The most notable campaigns were the Livonian and Prussian crusades. Some of these wars were called crusades during the Middle Ages, but others, including most of the Swedish ones, were first dubbed crusades by 19th-century romantic nationalist historians. However, crusades against Baltic indigenous peoples were authorized by Pope Alexander III in the bull "Non parum animus noster", in 1171 or 1172.
At the outset of the northern crusades, Christian monarchs across northern Europe commissioned forays into territories that comprise modern-day Estonia, Finland, Latvia, Lithuania, Poland and Russia. Pagans or eastern Orthodox Christians, the indigenous populations suffered forced baptisms and the ravages of military occupation. Spearheading, but by no means monopolizing these incursions, the ascendant Teutonic Order profited immensely from the crusades, as did German merchants who fanned out along trading routes traversing the Baltic frontier.
The official starting point for the Northern Crusades was Pope Celestine III's call in 1195, but the Catholic kingdoms of Scandinavia, Poland and the Holy Roman Empire had begun moving to subjugate their pagan neighbors even earlier. The non-Christian peoples who were the objects of the campaigns at various dates included the Polabian Slavs (Wends), the Finns, Tavastians and Karelians, the Estonians, the Livonians, Latgallians, Selonians, Curonians and Semigallians, the Old Prussians, and the Lithuanians and Samogitians.
Armed conflict between the Baltic Finns, Balts and Slavs who dwelt by the Baltic shores and their Saxon and Danish neighbors to the north and south had been common for several centuries before the crusade. The previous battles had largely been caused by attempts to destroy castles and sea trade routes to gain economic advantage in the region, and the crusade basically continued this pattern of conflict, albeit now inspired and prescribed by the Pope and undertaken by Papal knights and armed monks.
The campaigns started with the 1147 Wendish Crusade against the Polabian Slavs (or "Wends") of what is now northern and eastern Germany. The crusade occurred parallel to the Second Crusade to the Holy Land, and continued irregularly until the 16th century.
The Swedish crusades were campaigns by Sweden against the Finns, Tavastians and Karelians during the period from 1150 to 1293.
The Danes are known to have made two crusades to Finland, in 1191 and in 1202. The latter was led by Anders Sunesen, the Bishop of Lund, together with his brother.
By the 12th century, the peoples inhabiting the lands now known as Estonia, Latvia and Lithuania formed a pagan wedge between increasingly powerful rival Christian states – the Orthodox Church to their east and the Catholic Church to their west. The difference in creeds was one of the reasons they had not yet been effectively converted. During a period of more than 150 years leading up to the arrival of German crusaders in the region, Estonia was attacked thirteen times by Russian principalities, and by Denmark and Sweden as well. Estonians for their part made raids upon Denmark and Sweden. There were peaceful attempts by some Catholics to convert the Estonians, starting with missions dispatched by Adalbert, Archbishop of Bremen in 1045-1072. However, these peaceful efforts seem to have had limited success.
Moving in the wake of German merchants who were now following the old trading routes of the Vikings, a monk named Meinhard landed at the mouth of the Daugava river in present-day Latvia in 1180 and was made bishop in 1186. Pope Celestine III proclaimed a crusade against the Baltic heathens in 1195, which was reiterated by Pope Innocent III and a crusading expedition led by Meinhard's successor, Bishop Berthold of Hanover, landed in Livonia (part of present-day Latvia, surrounding the Gulf of Riga) in 1198. Although the crusaders won their first battle, Bishop Berthold was mortally wounded and the crusaders were repulsed.
In 1199, Albert of Buxhoeveden was appointed by the Archbishop Hartwig II of Bremen to Christianise the Baltic countries. By the time Albert died 30 years later, the conquest and formal Christianisation of present-day Estonia and northern Latvia was complete. Albert began his task by touring the Empire, preaching a Crusade against the Baltic countries, and was assisted in this by a Papal Bull which declared that fighting against the Baltic heathens was of the same rank as participating in a crusade to the Holy Land. Although he landed in the mouth of the Daugava in 1200 with only 23 ships and 500 soldiers, the bishop's efforts ensured that a constant flow of recruits followed. The first crusaders usually arrived to fight during the spring and returned to their homes in the autumn. To ensure a permanent military presence, the Livonian Brothers of the Sword were founded in 1202. The founding by Bishop Albert of the market at Riga in 1201 attracted citizens from the Empire and economic prosperity ensued. At Albert's request, Pope Innocent III dedicated the Baltic countries to the Virgin Mary to popularize recruitment to his army and the name "Mary's Land" has survived up to modern times. This is noticeable in one of the names given to Livonia at the time, Terra Mariana (Land of Mary).
In 1206, the crusaders subdued the Livonian stronghold in Turaida on the right bank of Gauja River, the ancient trading route to the Northwestern Rus. In order to gain control over the left bank of Gauja, the stone castle was built in Sigulda before 1210. By 1211, the Livonian province of Metsepole (now Limbaži district) and the mixed Livonian-Latgallian inhabited county of Idumea (now Straupe) was converted to the Roman Catholic faith. The last battle against the Livonians was the siege of Satezele hillfort near to Sigulda in 1212. The Livonians, who had been paying tribute to the East Slavic Principality of Polotsk, had at first considered the Germans as useful allies. The first prominent Livonian to be christened was their leader Caupo of Turaida. As the German grip tightened, the Livonians rebelled against the crusaders and the christened chief, but were put down. Caupo of Turaida remained an ally of the crusaders until his death in the Battle of St. Matthew's Day in 1217.
The German crusaders enlisted newly baptised Livonian warriors to participate in their campaigns against Latgallians and Selonians (1208–1209), Estonians (1208–1227) and against Semigallians, Samogitians and Curonians (1219–1290).
After the subjugation of the Livonians the crusaders turned their attention to the Latgallian principalities to the east, along the Gauja and Daugava rivers. The military alliance in 1208 and later conversion from Greek Orthodoxy to Roman Catholicism of the Principality of Tālava was the only peaceful subjugation of the Baltic tribes during the Nordic crusades. The ruler of Tālava, Tālivaldis ("Talibaldus de Tolowa"), became the most loyal ally of German crusaders against the Estonians, and he died a Catholic martyr in 1215. The war against the Latgallian and Selonian countries along the Daugava waterway started in 1208 by occupation of the Orthodox Principality of Koknese and the Selonian Sēlpils hillfort. The campaign continued in 1209 with an attack on the Orthodox Principality of Jersika (known as "Lettia"), accused by crusaders of being in alliance with Lithuanian pagans. After defeat the king of Jersika, Visvaldis, became the vassal of the Bishop of Livonia and received part of his country (Southern Latgale) as a fiefdom. The Selonian stronghold of Sēlpils was briefly the seat of a Selonian diocese (1218–1226), and then came under the rule of the Livonian Order (and eventually the stone castle of Selburg was built in its place). Only in 1224, with the division of Tālava and Adzele counties between the Bishop of Rīga and the Order of the Swordbearers, did Latgallian countries finally become the possession of German conquerors. The territory of the former Principality of Jersika was divided by the Bishop of Rīga and the Livonian Order in 1239.
By 1208, the Germans were strong enough to begin operations against the Estonians, who were at that time divided into eight major and several smaller counties led by elders with limited co-operation between them. In 1208-27, war parties of the different sides rampaged through the Livonian, Northern Latgallian, and Estonian counties, with Livonians and Latgallians normally as allies of the Crusaders, and the Principalities of Polotsk and Pskov appearing as allies of different sides at different times. Hill forts, which were the key centres of Estonian counties, were besieged and captured a number of times. A truce between the war-weary sides was established for three years (1213–1215) and proved generally more favourable to the Germans, who consolidated their political position, while the Estonians were unable to develop their system of loose alliances into a centralised state. The Livonian leader Kaupo was killed in battle near Viljandi (Fellin) on 21 September 1217, but the battle was a crushing defeat for the Estonians, whose leader Lembitu was also killed. Since 1211, his name had come to the attention of the German chroniclers as a notable Estonian elder, and he had become the central figure of the Estonian resistance.
The Christian kingdoms of Denmark and Sweden were also greedy for conquests on the Eastern shores of the Baltic. While the Swedes made only one failed foray into western Estonia in 1220, the Danish Fleet headed by King Valdemar II of Denmark had landed at the Estonian town of Lindanisse (present-day Tallinn) in 1219. After the Battle of Lindanise the Danes established a fortress, which was besieged by Estonians in 1220 and 1223, but held out. Eventually, the whole of northern Estonia came under Danish control.
The last Estonian county to hold out against the invaders was the island county of Saaremaa (Ösel), whose war fleets had raided Denmark and Sweden during the years of fighting against the German crusaders.
In 1206, a Danish army led by king Valdemar II and Andreas, the Bishop of Lund landed on Saaremaa and attempted to establish a stronghold without success. In 1216 the Livonian Brothers of the Sword and the bishop Theodorich joined forces and invaded Saaremaa over the frozen sea. In return the Oeselians raided the territories in Latvia that were under German rule the following spring. In 1220, the Swedish army led by king John I of Sweden and the bishop Karl of Linköping conquered Lihula in Rotalia in Western Estonia. Oeselians attacked the Swedish stronghold the same year, conquered it and killed the entire Swedish garrison including the Bishop of Linköping.
In 1222, the Danish king Valdemar II attempted the second conquest of Saaremaa, this time establishing a stone fortress housing a strong garrison. The Danish stronghold was besieged and surrendered within five days; the Danish garrison returned to Revel, leaving Bishop Albert of Riga's brother Theodoric and a few others behind as hostages for peace. The castle was razed to the ground by the Oeselians.
A 20,000-strong army under the papal legate William of Modena crossed the frozen sea in January 1227, while the Saaremaa fleet was icebound. After the surrender of the two major Oeselian strongholds of Muhu and Valjala, the Oeselians formally accepted Christianity.
In 1236, after the defeat of the Livonian Brothers of the Sword in the Battle of Saule, military action on Saaremaa broke out again. In 1261, warfare continued as the Oeselians had once more renounced Christianity and killed all the Germans on the island. A peace treaty was signed after the united forces of the Livonian Order, the Bishopric of Ösel-Wiek, and Danish Estonia, including mainland Estonians and Latvians, defeated the Oeselians by conquering their stronghold at Kaarma. Soon thereafter, the Livonian Order established a stone fort at Pöide.
Although the Curonians had attacked Riga in 1201 and 1210, Albert of Buxhoeveden, considering Courland a tributary of Valdemar II of Denmark, had been reluctant to conduct a large-scale campaign against them. After Albert's death in 1229, the crusaders secured the peaceful submission of Vanemane (a county with a mixed Livonian, Oselian, and Curonian population in the northeastern part of Courland) by treaty in 1230. In the same year the papal vice-legate Baldouin of Alnea annulled this agreement and concluded an agreement with Lammechinus, the ruler ("rex") of Bandava in central Courland, delivering his kingdom into the hands of the papacy. Baldouin became the pope's delegate in Courland and bishop of Semigallia; however, the Germans complained about him to the Roman Curia, and in 1234 Pope Gregory IX removed Baldouin as his delegate.
After their decisive defeat in the Battle of Saule by the Samogitians and Semigallians, the remnants of the Swordbrothers were reorganized in 1237 as a subdivision of the Teutonic Order, and became known as the Livonian Order. In 1242, under the leadership of the master of the Livonian Order Andrew of Groningen, the crusaders began the military conquest of Courland. They defeated the Curonians as far south as Embūte, near the contemporary border with Lithuania, and founded their main fortress at Kuldīga. In 1245 Pope Innocent IV allotted two-thirds of conquered Courland to the Livonian Order, and one third to the Bishopric of Courland.
At the Battle of Durbe in 1260 a force of Samogitians and Curonians overpowered the united forces of the Livonian and Teutonic Orders; over the following years, however, the Crusaders gradually subjugated the Curonians, and in 1267 concluded the peace treaty stipulating the obligations and the rights of their defeated rivals. The unconquered southern parts of their territories (Ceklis and Megava) were united under the rule of the Grand Duchy of Lithuania.
The conquest of the Semigallian counties started in 1219, when crusaders from Rīga occupied Mežotne, the major port on the Lielupe waterway, and founded the Bishopric of Semigallia. After several unsuccessful campaigns against the pagan Semigallian duke Viestards and his Samogitian kinsfolk, the Roman Curia decided in 1251 to abolish the Bishopric of Semigallia, and divided its territories between the Bishopric of Rīga and the Order of Livonia. In 1265 a stone castle was built at Jelgava, on the Lielupe, and became the main military base for crusader attacks against the Semigallians. In 1271 the capital hillfort of Tērvete was conquered, but the Semigallians under Duke Nameisis rebelled in 1279, and the Lithuanians under Traidenis defeated the Livonian Order's forces in the Battle of Aizkraukle. Duke Nameisis' warriors unsuccessfully attacked Rīga in 1280, in response to which around 14,000 crusaders besieged Turaida castle in 1281. To conquer the remaining Semigallian hillforts, the Order's master Villekin of Endorpe built a castle called "Heiligenberg" right next to the Tērvete castle in 1287. The same year the Semigallians made another attempt to conquer Rīga, but again failed to take it. On their return home the Livonian knights attacked them, but were defeated at the Battle of Garoza, in which the Order's master Villekin and at least 35 knights lost their lives. The new master of the Order, Cuno of Haciginstein, organised the last campaigns against the Semigallians in 1289 and 1290; the hillforts of Dobele, Rakte and Sidabre were conquered and most of the Semigallian warriors joined the Samogitian and Lithuanian forces.
Konrad I, the Polish Duke of Masovia, unsuccessfully attempted to conquer pagan Prussia in crusades in 1219 and 1222. Taking the advice of the first Bishop of Prussia, Christian of Oliva, Konrad founded the crusading Order of Dobrzyń (or "Dobrin") in 1220. However, this order was largely ineffective, and Konrad's campaigns against the Old Prussians were answered by incursions into the already captured territory of Culmerland (Chełmno Land). Subjected to constant Prussian counter-raids, Konrad wanted to stabilize the north of the Duchy of Masovia in this fight over border area of Chełmno Land. Masovia had only been conquered in the 10th century and native Prussians, Yotvingians, and Lithuanians were still living in the territory, where no settled borders existed. Konrad's military weakness led him in 1226 to ask the Roman Catholic monastic order of the Teutonic Knights to come to Prussia and suppress the Old Prussians.
The Northern Crusades provided a rationale for the growth and expansion of the Teutonic Order of German crusading knights which had been founded in Palestine at the end of the 12th century. Duke Konrad I of Masovia in west-central Poland appealed to the Knights to defend his borders and subdue the pagan Old Prussians in 1226. After the subjugation of the Prussians, the Teutonic Knights fought against the Grand Duchy of Lithuania.
When the Livonian knights were crushed by Samogitians in the Battle of Saule in 1236, coinciding with a series of revolts in Estonia, the Livonian Order was inherited by the Teutonic Order, allowing the Teutonic Knights to exercise political control over large territories in the Baltic region. Mindaugas, the King of Lithuania, was baptised together with his wife after his coronation in 1253, hoping that this would help stop the Crusaders' attacks, which it did not. The Teutonic Knights failed to subdue Lithuania, which officially converted to (Catholic) Christianity in 1386 on the marriage of Grand Duke Jogaila to the 11-year-old Queen Jadwiga of Poland. However, even after the country was officially converted, the conflict continued up until the 1410 Battle of Grunwald, also known as the First Battle of Tannenberg, when the Lithuanians and Poles, helped by the Tatars, Moldovans and the Czechs, defeated the Teutonic knights.
In 1221, Pope Honorius III was again worried about the situation in the Finnish-Novgorodian Wars after receiving alarming information from the Archbishop of Uppsala. He authorized the Bishop of Finland to establish a trade embargo against the "barbarians" who threatened Christianity in Finland. The nationality of the "barbarians", presumably a citation from the Archbishop's earlier letter, remains unknown, and was not necessarily known even by the Pope. However, when the trade embargo was widened eight years later, it was specifically said to be against the Russians. Based on papal letters from 1229, the Bishop of Finland requested that the Pope enforce a trade embargo against Novgorodians on the Baltic Sea, at least in Visby, Riga and Lübeck. A few years later, the Pope also requested that the Livonian Brothers of the Sword send troops to protect Finland. Whether any knights ever arrived remains unknown.
The Teutonic Order's attempts to conquer Orthodox Russia (particularly the Republics of Pskov and Novgorod), an enterprise endorsed by Pope Gregory IX, accompanied the Northern Crusades. One of the major blows for the idea of the conquest of Russia was the Battle of the Ice in 1242. With or without the Pope's blessing, Sweden also undertook several crusades against Orthodox Novgorod. | https://en.wikipedia.org/wiki?curid=21921 |
Neoteny
Neoteny, also called juvenilization, is the delaying or slowing of the physiological (or somatic) development of an organism, typically an animal. Neoteny is found in modern humans. In progenesis (also called paedogenesis), sexual development is accelerated.
Both neoteny and progenesis result in paedomorphism (or paedomorphosis), a type of heterochrony. Some authors define paedomorphism as the retention of larval traits, as seen in salamanders.
Both neoteny and progenesis cause the retention in adults of traits previously seen only in the young. Such retention is important in evolutionary biology, domestication and evolutionary developmental biology.
The origins of the concept of neoteny have been traced to the Bible (as argued by Ashley Montagu) and to the poet William Wordsworth's "The Child is the father of the Man" (as argued by Barry Bogin). The term itself was invented in 1885 by Julius Kollmann as he described the axolotl's maturation while remaining in a tadpole-like aquatic stage complete with gills, unlike other adult amphibians like frogs and toads.
The word "neoteny" is borrowed from the German "Neotenie", the latter constructed by Kollmann from the Greek νέος ("neos", "young") and τείνειν ("teínein", "to stretch, to extend"). The adjective is either "neotenic" or "neotenous". For the opposite of "neotenic", different authorities use either "gerontomorphic" or "peramorphic". Bogin points out that Kollmann had intended the meaning to be "retaining youth", but had evidently confused the Greek "teínein" with the Latin "tenere", which had the meaning he wanted, "to retain", so that the new word would mean "the retaining of youth (into adulthood)".
In 1926 Louis Bolk described neoteny as the major process in humanization. In his 1977 book "Ontogeny and Phylogeny", Stephen Jay Gould noted that Bolk's account constituted an attempted justification for "scientific" racism and sexism, but acknowledged that Bolk had been right in the core idea that humans differ from other primates in becoming sexually mature in an infantile stage of body development.
Neoteny in humans is the slowing or delaying of body development, compared to non-human primates, resulting in features such as a large head, a flat face, and relatively short arms. These neotenic changes may have been brought about by sexual selection in human evolution. In turn, they may have permitted the development of human capacities such as emotional communication. However, humans also have relatively large noses and long legs, both peramorphic (not neotenic) traits. Some evolutionary theorists have proposed that neoteny was a key feature in human evolution. Gould argued that the "evolutionary story" of humans is one where we have been "retaining to adulthood the originally juvenile features of our ancestors". J. B. S. Haldane mirrors Gould's hypothesis by stating a "major evolutionary trend in human beings" is "greater prolongation of childhood and retardation of maturity." Delbert D. Thiessen said that "neoteny becomes more apparent as early primates evolved into later forms" and that primates have been "evolving toward flat face." However, in light of some groups using neotony-based arguments to support racism, Gould also argued "that the whole enterprise of ranking groups by degree of neoteny is fundamentally unjustified" (Gould, 1996, pg. 150). Doug Jones argued that human evolution's trend toward neoteny may have been caused by sexual selection in human evolution for neotenous facial traits in women by men with the resulting neoteny in male faces being a "by-product" of sexual selection for neotenous female faces.
Neoteny is seen in domesticated animals such as dogs and mice. This is because there are more resources available, less competition for those resources, and with the lowered competition the animals expend less energy obtaining those resources. This allows them to mature and reproduce more quickly than their wild counterparts. The environment that domesticated animals are raised in determines whether or not neoteny is present in those animals. Evolutionary neoteny can arise in a species when those conditions occur, and a species becomes sexually mature ahead of its "normal development". Another explanation for the neoteny in domesticated animals is the selection for certain behavioral characteristics. Behavior is linked to genetics, which means that when a behavioral trait is selected for, a physical trait may also be selected for due to mechanisms like linkage disequilibrium. Often, juvenile behaviors are selected for in order to domesticate a species more easily; in certain species, aggressiveness comes with adulthood, when there is a need to compete for resources. If there is no need for competition, then there is no need for aggression. Selecting for juvenile behavioral characteristics can lead to neoteny in physical characteristics because, for example, with the reduced need for behaviors like aggression there is no need for developed traits that would help in that area. Traits that may become neotenized due to decreased aggression may be a shorter muzzle and smaller general size among the domesticated individuals. Some common neotenous physical traits in domesticated animals (mainly dogs, pigs, ferrets, cats, and even foxes) include: floppy ears, changes in reproductive cycle, curly tails, piebald coloration, fewer or shortened vertebrae, large eyes, rounded forehead, large ears, and shortened muzzle.
When the role of dogs expanded from just being working dogs to also being companions, humans started selectively breeding dogs for morphological neoteny, and this selective breeding for "neoteny or paedomorphism" had the effect of enhancing the bond between humans and dogs. Humans bred dogs to have more "juvenile physical traits" as adults, such as short snouts and wide-set eyes, which are associated with puppies, because people usually consider these traits to be more attractive. Some breeds of dogs with short snouts and broad heads such as the Komondor, Saint Bernard and Maremma Sheepdog are more morphologically neotenous than other breeds of dogs. Cavalier King Charles spaniels are an example of selection for neoteny, because they exhibit large eyes, pendant-shaped ears and compact feet, giving them a morphology similar to puppies as adults.
In 2004, a study that used 310 wolf skulls and over 700 dog skulls representing 100 breeds concluded that the evolution of dog skulls can generally not be described by heterochronic processes such as neoteny, although some paedomorphic dog breeds have skulls that resemble the skulls of juvenile wolves. By 2011, the finding by the same researcher was simply "Dogs are not paedomorphic wolves."
Neoteny has been observed in many other species. It is important to note the difference between partial and full neoteny when looking at other species in order to distinguish between juvenile traits that are only advantageous in the short term and traits that provide a benefit throughout the organism's life; this might then provide some insight into the cause of neoteny in those species. Partial neoteny is the retention of the larval form beyond the usual age of maturation, with the possibility of the development of sexual organs (progenesis), but eventually the organism still matures into the adult form; this can be seen in "Lithobates clamitans". Full neoteny is seen in "Ambystoma mexicanum" and some populations of "Ambystoma tigrinum", which remain in their larval form for the duration of their life. "Lithobates clamitans" is partially neotenous: it delays its maturation through the winter season, because it is not advantageous for it to metamorphose into the adult form until there are more resources available: it can find those resources much more easily in the larval form. This would fall under both of the main causes of neoteny; the energy required to survive in the winter as a newly formed adult is too costly, so the organism exhibits neotenous characteristics until a time when it is capable of better survival as an adult. "Ambystoma tigrinum" retains its neotenous features for a similar reason; however, the retention is permanent due to the lack of resources available throughout its lifetime. This is another example of an environmental cause of neoteny in that the species retains juvenile characteristics because the environment limits the ability of the organism to fully come into its adult form. A few species of birds show partial neoteny. Two examples of such species are the manakin birds "Chiroxiphia linearis" and "Chiroxiphia caudata". The males of both species retain their juvenile plumage into adulthood, but they eventually lose it once they are fully mature. In certain species of birds the retention of juvenile plumage is often linked to the molting times within each species. In order to ensure there is no overlap between the molting and mating times, the birds may show partial neoteny with regard to their plumage so that the males do not attain their bright adult plumage before the females are prepared to mate. In this instance, neoteny is present because there is no need for the males to molt early, and it would be a waste of energy for them to try to mate while the females are still immature.
Neoteny is commonly seen in flightless insects like the females in the order Strepsiptera. The flightless trait in insects has evolved many separate times; environments that may have contributed to the separate evolution of this trait are: high altitudes, isolation on islands, and colder climates. These environmental factors may be responsible for the flightless trait, because in these situations it would be disadvantageous to have a population that is more dispersed, so flightlessness would be favored due to the boundaries it poses to dispersal. Also, in cooler temperatures heat is lost more rapidly through wings, so these circumstances further favor flightlessness. Two further points to note about insects are that females in certain groups become sexually mature without metamorphosing into adulthood, and that some insects which grow up in certain conditions never develop wings. Flightlessness in some female insects has been linked to higher fecundity; this would increase the fitness of the individual because the female is producing more offspring and therefore passing on more of her genes. In those instances, neoteny occurs because it is more advantageous for the females to remain flightless in order to conserve energy, which thereby increases their fecundity. Aphids are a good example of insects that may never develop wings due to their environmental setting. If resources are abundant there is no need to grow wings and disperse. When the nutrition of a host plant is abundant, aphids may not grow wings, remaining on the host plant for the duration of their lives; however, if the resources become diminished, their offspring may develop wings in order to disperse to other host plants.
Two common environments that tend to favor neoteny are high-altitude and cool environments, because neotenous individuals have a higher fitness than those that metamorphose into the adult form. This is because the energy required for metamorphosis is too costly to the individual's fitness; the conditions also favor neoteny because neotenous individuals can utilize the available resources more easily. This trend can be seen in comparisons of salamander species at lower and higher altitudes. The neotenous individuals have higher survivorship as well as higher fecundity than the salamanders that had metamorphosed into the adult form in the higher-altitude, cooler environment. Insects in cooler environments tend to show neoteny in flight because wings have a high surface area and lose heat quickly, so it is not advantageous for insects in that environment to metamorphose into adults.
Many species of salamander, and amphibians in general, are known to have neotenized characteristics because of the environment they live in. The axolotl and the olm are species of salamander that retain their juvenile aquatic form throughout adulthood; they are examples of full neoteny. Gills are a common juvenile characteristic in amphibians that are kept after maturation; an example of this would be a comparison of the tiger salamander and the rough-skinned newt, both of which retain gills into adulthood.
Pygmy chimpanzees (bonobos) share many physical characteristics with humans. A prime example is their neotenous skulls. The shape of their skull does not change into adulthood; it only increases in size. This is due to sexual dimorphism and an evolutionary change in timing of development. Juveniles became sexually mature before their bodies had fully developed into adulthood, and due to some selective advantage the neotenic structure of the skull remained in later generations.
In some species, energy costs result in neoteny, as in the insect families Gerridae, Delphacidae, and Carabidae. Many of the species in these families have smaller, neotenous wings or no wings at all. Similarly, some cricket species shed their wings in adulthood, while in beetles of the genus "Ozopemon", the males (thought to be the first example of neoteny in the Coleoptera) are significantly smaller than the females, as a result of inbreeding. In the termite "Kalotermes flavicollis", neoteny is seen in females during molting.
In other species, environmental conditions cause neoteny, as in the northwestern salamander ("Ambystoma gracile"), where higher altitude is correlated with greater neotenic tendencies, perhaps to help conserve energy as mentioned above. Similarly, neoteny is found in a few species of the crustacean family Ischnomesidae, which live in deep ocean waters.
Neoteny is usually used to describe animal development; however, it is also seen in cell organelles. It has been suggested that subcellular neoteny could explain why sperm cells have atypical centrioles. One of the two sperm centrioles of the fruit fly retains a "juvenile" centriole structure, which can be described as centriolar "neoteny". This neotenic, atypical centriole is known as the proximal centriole-like. Typical centrioles form via a step-by-step process in which a cartwheel forms, develops into a procentriole, and further matures into a centriole. The neotenic centriole of the fruit fly resembles an early procentriole. | https://en.wikipedia.org/wiki?curid=21922 |
National Rail
National Rail (NR) in the United Kingdom is the trading name licensed for use by the Rail Delivery Group, an unincorporated association whose membership consists of the passenger train operating companies (TOCs) of England, Scotland, and Wales. The TOCs run the passenger services previously provided by the British Railways Board, which from 1965 traded under the brand name British Rail. Northern Ireland has a separate system. National Rail services share a ticketing structure and inter-availability that generally do not extend to services which were not part of British Rail.
"National" Rail should not be confused with "Network" Rail. National Rail is a brand used to promote passenger railway services, and providing some harmonisation for passengers in ticketing, while Network Rail is the organisation which owns and manages most of the fixed assets of the railway network, including tracks, stations and signals.
The two generally coincide where passenger services are run. Most major Network Rail lines also carry freight traffic and some lines are freight only. There are some scheduled passenger services on privately managed, non-Network Rail lines, for example Heathrow Express, which partly runs on Network Rail track. The London Underground also overlaps with Network Rail in places.
Twenty-eight privately owned train operating companies, each franchised for a defined term by government, operate passenger trains on the main rail network in Great Britain. The Rail Delivery Group is the trade association representing the TOCs and provides core services, including the provision of the National Rail Enquiries service. It also runs Rail Settlement Plan, which allocates ticket revenue to the various TOCs, and Rail Staff Travel, which manages travel facilities for railway staff. It does not compile the national timetable, which is the joint responsibility of the Office of Rail Regulation (allocation of paths) and Network Rail (timetable production and publication).
Since the privatisation of British Rail there is no longer a single approach to design on railways in Great Britain. The look and feel of signage, liveries and marketing material is largely the preserve of the individual TOCs.
However, National Rail continues to use BR's famous double-arrow symbol, designed by Gerald Barney of the Design Research Unit. It has been incorporated in the National Rail logotype and is displayed on tickets, the National Rail website and other publicity. The trademark rights to the double arrow symbol remain state-owned, being vested in the Secretary of State for Transport.
The double arrow symbol is also used to indicate a railway station on British traffic signs.
The National Rail (NR) logo was introduced by ATOC in 1999, and was used on the Great Britain public timetable for the first time in the edition valid from 26 September in that year. Rules for its use are set out in the Corporate Identity Style Guidelines published by the Rail Delivery Group, available on its website. "In 1964 the Design Research Unit—Britain’s first multi-disciplinary design agency founded in 1943 by Misha Black, Milner Gray and Herbert Read—was commissioned to breathe new life into the nation’s neglected railway industry". The NR title is sometimes described as a "brand". As it was used by British Rail, the single operator before franchising, its use also maintains continuity and public familiarity; and it avoids the need to replace signage.
The lettering used in the National Rail logotype is a modified form of the typeface Sassoon Bold. Some train operating companies continue to use the former British Rail Rail Alphabet lettering to varying degrees in station signage, although its use is no longer universal. It remains compulsory (under Railway Group Standards) for safety signage in trackside areas and is still common (although not universal) on rolling stock.
The British Rail typefaces of choice from 1965 were Helvetica and Univers, with others (particularly Frutiger) coming into use during the sectorisation period after 1983. TOCs may use what they like: examples include Futura (Stagecoach Group), Helvetica (FirstGroup and National Express), Frutiger (Arriva Trains Wales), Bliss (CrossCountry), and a modified version of Precious by London Midland.
Although TOCs compete against each other for franchises, and for passengers on routes where more than one TOC operates, the strapline used with the National Rail logo is 'Britain's train companies working together'.
Several conurbations have their own metro or tram systems, most of which are not part of National Rail. These include the London Underground, Docklands Light Railway, London Tramlink, Blackpool Tramway, Glasgow Subway, Tyne & Wear Metro, Manchester Metrolink, Sheffield Supertram, Midland Metro and Nottingham Express Transit. On the other hand, the largely self-contained Merseyrail system is part of the National Rail network, and urban rail networks around Birmingham, Cardiff, Glasgow and West Yorkshire consist entirely of National Rail services.
London Overground (LO) is a hybrid: its services are operated via a concession awarded by Transport for London, and are branded accordingly, but until 2010 all its routes used infrastructure owned by Network Rail. LO now also possesses some infrastructure in its own right, following the reopening of the former London Underground East London line as the East London Railway. Since all the previous LO routes were operated by the National Rail franchise Silverlink until November 2007, they have continued to be shown in the National Rail timetable and are still considered to be a part of National Rail.
Heathrow Express and Eurostar are also not part of the National Rail network despite sharing stations (Heathrow Express also shares its route with GWR and TfL Rail). Northern Ireland Railways were never part of British Rail, which was limited to England, Scotland and Wales, and therefore are not part of the National Rail network.
There are many privately owned or heritage railways in Great Britain which are not part of the National Rail network and mostly operate for heritage or pleasure purposes rather than as public transport, but some have connections to National Rail track.
National Rail services have a common ticketing structure inherited from British Rail. Through tickets are available between any pair of stations on the network, and can be bought from any station ticket office. Most tickets are inter-available between the services of all operators on routes appropriate to the journey being made. Operators on some routes offer operator-specific tickets that are cheaper than the inter-available ones.
Through tickets involving Heathrow Express and London Underground are also available. Oyster pay-as-you-go has been accepted on National Rail in Greater London since 2 January 2010. Contactless debit/credit cards can also be used for the same journeys, and additionally cover some areas that Oyster does not, such as the Crossrail line to Reading and the Thameslink station at Oakleigh Park.
Passengers without a valid ticket boarding a train at a station where ticket-buying facilities are available are required to pay the full Open Single or Return fare. On some services penalty fares apply: a ticketless passenger may be charged the greater of £20 or twice the full single fare to the next stop. Penalty Fares can be collected only by authorised Revenue Protection Inspectors, not by ordinary Guards.
National Rail distributes a number of technical manuals on which travel on the railways in Great Britain is based, such as the National Rail Conditions of Travel, via their website.
Pocket timetables for individual operators or routes are available free at staffed stations. The last official printed timetable, at around 3,000 pages, was published in 2007. As of October 2016, the only complete print edition is published by Middleton Press. A digital version of the full timetable is available as a PDF file without charge on the Network Rail website; however, passengers are advised to obtain their timetables from the individual train companies.
The National Rail Enquiries website includes a journey planner, fare and live departure information. The site is designed to complement the myriad different websites of Britain's privatised rail companies, so when users have selected which tickets they wish to buy, they are redirected to the most relevant train company website, where they can buy their tickets without booking fees.
In 2012 the website was joined by a mobile app mirroring its functionality. The app is available for iPhone, Android and Windows Phone. However, Trainline remains the most downloaded rail app in the UK, with 9.4 million users. | https://en.wikipedia.org/wiki?curid=21923 |
Naked singularity
In general relativity, a naked singularity is a hypothetical gravitational singularity without an event horizon. In a black hole, the singularity is completely enclosed by a boundary known as the event horizon, inside which the gravitational force of the singularity is so strong that light cannot escape. Hence, objects inside the event horizon—including the singularity itself—cannot be directly observed. A naked singularity, by contrast, would be observable from the outside.
The theoretical existence of naked singularities is important because their existence would mean that it would be possible to observe the collapse of an object to "infinite density". It would also cause foundational problems for general relativity, because general relativity cannot make predictions about the future evolution of space-time near a singularity. In generic black holes, this is not a problem, as an outside viewer cannot observe the space-time within the event horizon.
Naked singularities have not been observed in nature. Astronomical observations of black holes indicate that their rate of rotation falls below the threshold to produce a naked singularity (spin parameter 1). GRS 1915+105 comes closest to the limit, with a spin parameter of 0.82-1.00.
According to the cosmic censorship hypothesis, gravitational singularities may not be observable. If loop quantum gravity is correct, naked singularities may be possible in nature.
Theoretical work on rotating black holes shows that a rapidly spinning singularity can become a ring-shaped object. This results in two event horizons, as well as an ergosphere, which draw closer together as the spin of the singularity increases. When the outer and inner event horizons merge, they shrink toward the rotating singularity and eventually expose it to the rest of the universe.
A singularity rotating fast enough might be created by the collapse of dust or by a supernova of a fast-spinning star. Studies of pulsars and some computer simulations (Choptuik, 1997) have been performed.
Mathematician Demetrios Christodoulou, a winner of the Shaw Prize, has shown that contrary to what had been expected, singularities which are not hidden in a black hole also occur. However, he then showed that such "naked singularities" are unstable.
Disappearing event horizons exist in the Kerr metric, which describes a spinning black hole in a vacuum. Specifically, if the angular momentum is high enough, the event horizons disappear. Transforming the Kerr metric to Boyer–Lindquist coordinates, it can be shown that the r coordinate (which is not the radius) of the event horizon is
r± = μ ± √(μ² − a²),
where μ = GM/c² and a = J/(Mc). In this case, "event horizons disappear" means that the solutions for r± are complex, or equivalently that μ² < a². However, this corresponds to a case where a exceeds GM/c² (or, in Planck units, J > M²), i.e. the spin exceeds what is normally viewed as the upper limit of its physically possible values.
Disappearing event horizons can also be seen with the Reissner–Nordström geometry of a charged black hole. In this metric, it can be shown that the horizons occur at
r± = μ ± √(μ² − q²),
where μ = GM/c² and q² = GQ²/(4πε₀c⁴). Of the three possible cases for the relative values of μ and q, the case where μ < q causes both r± to be complex. This means the metric is regular for all positive values of r, or in other words, the singularity has no event horizon. However, this corresponds to a case where Q exceeds √(4πε₀G)·M (or, in Planck units, Q > M), i.e. the charge exceeds what is normally viewed as the upper limit of its physically possible values.
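To make the extremality conditions above concrete, here is a minimal numerical sketch (illustrative only, not from the source; the constant values, function names and the 10-solar-mass example are our own assumptions). It simply evaluates the horizon formulas quoted above and reports whether real horizon radii exist, i.e. whether the singularity would be hidden or naked.

# Minimal sketch: evaluates the quoted Kerr and Reissner-Nordstrom horizon formulas.
# All names and example numbers below are our own assumptions, not from the article.
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8       # speed of light, m/s
eps0 = 8.854e-12  # vacuum permittivity, F/m

def kerr_horizons(mass_kg, ang_momentum):
    """Return (r_plus, r_minus) for a Kerr black hole, or None when mu^2 < a^2,
    i.e. when the horizons disappear and the singularity would be naked."""
    mu = G * mass_kg / c**2            # gravitational radius, m
    a = ang_momentum / (mass_kg * c)   # spin parameter expressed as a length, m
    disc = mu**2 - a**2
    if disc < 0:
        return None                    # complex roots: no event horizon
    root = math.sqrt(disc)
    return mu + root, mu - root

def reissner_nordstrom_horizons(mass_kg, charge_coulomb):
    """Same check for a charged, non-rotating (Reissner-Nordstrom) black hole."""
    mu = G * mass_kg / c**2
    q2 = G * charge_coulomb**2 / (4 * math.pi * eps0 * c**4)
    disc = mu**2 - q2
    if disc < 0:
        return None
    root = math.sqrt(disc)
    return mu + root, mu - root

M_sun = 1.989e30
M = 10 * M_sun
J_max = G * M**2 / c                    # spin parameter 1 corresponds to J = GM^2/c
print(kerr_horizons(M, 0.82 * J_max))   # two real radii: horizons exist, singularity hidden
print(kerr_horizons(M, 1.2 * J_max))    # None: over-extremal spin, a "naked" case

On these assumptions, a spin parameter of about 0.82 (as quoted above for GRS 1915+105) still yields two real horizon radii, so the singularity stays hidden; only an over-extremal spin or charge removes the horizons.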
See Kerr–Newman metric for a spinning, charged ring singularity.
A naked singularity could allow scientists to observe an infinitely dense material, which would under normal circumstances be impossible according to the cosmic censorship hypothesis. Without an event horizon of any kind, some speculate that naked singularities could actually emit light.
The cosmic censorship hypothesis says that a gravitational singularity would remain hidden by the event horizon. LIGO events, including GW150914, are consistent with these predictions. Although data anomalies would have resulted in the case of a singularity, the nature of those anomalies remains unknown.
Some research has suggested that if loop quantum gravity is correct, then naked singularities could exist in nature, implying that the cosmic censorship hypothesis does not hold. Numerical calculations and some other arguments have also hinted at this possibility. | https://en.wikipedia.org/wiki?curid=21926 |
National Party of Australia
The National Party of Australia, also known as The Nationals or The Nats, is an Australian political party. Traditionally representing graziers, farmers, and rural voters generally, it began as the Australian Country Party in 1920 at a federal level. It later adopted the name National Country Party in 1975, before taking its current name in 1982.
Federally, in New South Wales, and to an extent in Victoria and historically in Western Australia, it has in government been the minor party in a centre-right Coalition with the Liberal Party of Australia, and its leader has usually served as Deputy Prime Minister. In Opposition the Coalition was usually maintained, but even when it was not, the party still generally continued to work in co-operation with the Liberal Party of Australia (as had their predecessors the Nationalist Party of Australia and United Australia Party). In Queensland, however, the Country Party (later National Party) was the senior coalition party between 1925 and 2008, after which it merged in that state with the junior Liberal Party of Australia to form the Liberal National Party (LNP). Despite taking a conservative position politically, the National Party has long pursued agrarian socialist economic policies. Ensuring support for farmers, either through government grants and subsidies or through community appeals, is a major focus of National Party policy. The unusual hybrid of welfare and free market attitudes that characterises the party has led to it often being accused of seeking to privatise profits and socialise costs for the agricultural and mining sectors.
The current leader of the National Party is Michael McCormack, who won a leadership spill following Barnaby Joyce's resignation in February 2018. The most recent deputy leader of the Nationals was Bridget McKenzie, who served from 7 December 2017 until her resignation on 2 February 2020 over a conflict of interest.
The Country Party was formally founded in 1913 in Western Australia, and nationally in 1920, from a number of state-based parties such as the Victorian Farmers' Union (VFU) and the Farmers and Settlers Party of New South Wales. Australia's first Country Party was founded in 1912 by Harry J. Stephens, editor of "The Farmer & Settler", but, under fierce opposition from rival newspapers, failed to gain momentum.
The VFU won a seat in the House of Representatives at the Corangamite by-election held in December 1918, with the help of the newly introduced preferential voting system. At the 1919 federal election the state-based Country Parties won federal seats in New South Wales, Victoria and Western Australia. They also began to win seats in state parliaments. In 1920 the Country Party was established as a national party led by William McWilliams from Tasmania. In his first speech as leader, McWilliams laid out the principles of the new party, stating "we crave no alliance, we spurn no support but we intend drastic action to secure closer attention to the needs of primary producers". McWilliams was deposed as party leader in favour of Earle Page in April 1921, following instances where McWilliams voted against the party line. McWilliams later left the Country Party to sit as an Independent.
According to historian B. D. Graham (1959), the graziers who operated the sheep stations were politically conservative. They disliked the Labor Party, which represented their workers, and feared that Labor governments would pass unfavourable legislation and listen to foreigners and communists. The graziers were satisfied with the marketing organisation of their industry, opposed any change in land tenure and labour relations, and advocated lower tariffs, low freight rates, and low taxes. On the other hand, Graham reports, the small farmers, not the graziers, founded the Country Party. The farmers advocated government intervention in the market through price support schemes and marketing pools. The graziers often politically and financially supported the Country Party, which in turn made the party more conservative.
The Country Party's first election as a united party, in 1922, saw it in an unexpected position of power. It won enough seats to deny the Nationalists an overall majority. It soon became apparent that the price for Country support would be a full-fledged coalition with the Nationalists. However, Page let it be known that his party would not serve under Hughes, and forced his resignation. Page then entered negotiations with the Nationalists' new leader, Stanley Bruce, for a coalition government. Page wanted five seats for his Country Party in a Cabinet of 11, including the Treasurer portfolio and the second rank in the ministry for himself. These terms were unusually stiff for a prospective junior coalition partner in a Westminster system, and especially so for such a new party. Nonetheless, with no other politically realistic coalition partner available, Bruce readily agreed, and the "Bruce-Page Ministry" was formed. This began the tradition of the Country Party leader ranking second in Coalition cabinets.
Page remained dominant in the party until 1939, and briefly served as caretaker Prime Minister between the death of Joseph Lyons and the election of Robert Menzies as his successor. However, Page gave up the leadership rather than serve under Menzies. The coalition was re-formed under Archie Cameron in 1940, and continued until October 1941 despite the election of Arthur Fadden as leader after the 1940 election. Fadden was well regarded within conservative circles and proved to be a loyal deputy to Menzies in the difficult circumstances of 1941. When Menzies was forced to resign as Prime Minister, the UAP was so bereft of leadership that Fadden briefly succeeded him (despite the Country Party being the junior partner in the governing coalition). However, the two independents who had been propping up the government rejected Fadden's budget and brought the government down. Fadden stood down in favour of Labor leader John Curtin.
The Fadden-led Coalition made almost no headway against Curtin, and was severely defeated in the 1943 election. After that loss, Fadden became deputy Leader of the Opposition under Menzies, a role that continued after Menzies folded the UAP into the Liberal Party of Australia in 1944. Fadden remained a loyal partner of Menzies, though he was still keen to assert the independence of his party. Indeed, in the lead up to the 1949 federal election, Fadden played a key role in the defeat of the Chifley Labor government, frequently making inflammatory claims about the "socialist" nature of the Labor Party, which Menzies could then "clarify" or repudiate as he saw fit, thus appearing more "moderate". In 1949, Fadden became Treasurer in the second Menzies government and remained so until his retirement in 1958. His successful partnership with Menzies was one of the elements that sustained the coalition, which remained in office until 1972 (Menzies himself retired in 1966).
Fadden's successor, Trade Minister John McEwen, took the then unusual step of declining to serve as Treasurer, believing he could better ensure that the interests of Australian primary producers were safeguarded. Accordingly, McEwen personally supervised the signing of the first post-war trade treaty with Japan, new trade agreements with New Zealand and Britain, and Australia's first trade agreement with the USSR (1965). In addition to this, he insisted on developing an all-encompassing system of tariff protection that would encourage the development of those secondary industries that would "value add" Australia's primary produce. His success in this endeavour is sometimes dubbed "McEwenism". This was the period of the Country Party's greatest power, as was demonstrated in 1962 when McEwen was able to insist that Menzies sack a Liberal Minister who claimed that Britain's entry into the European Economic Community was unlikely to severely impact on the Australian economy as a whole.
Menzies retired in 1966 and was succeeded by Harold Holt. McEwen thus became the longest-tenured member of the government, with the informal right to veto government policy. The most significant instance in which McEwen exercised this right came when Holt disappeared in December 1967. McEwen was sworn in as interim Prime Minister pending the election of the new Liberal leader; John Gorton became the new Liberal Prime Minister in January 1968. Logically, the Liberals' deputy leader, William McMahon, should have succeeded Holt. However, McMahon was a staunch free-trader, and there were also rumors that he was homosexual. As a result, McEwen told the Liberals that he and his party would not serve under McMahon. McMahon stood down in favour of John Gorton. It was only after McEwen announced his retirement that McMahon was able to successfully challenge Gorton for the Liberal leadership. McEwen's reputation for political toughness led to him being nicknamed "Black Jack" by his allies and enemies alike.
At the state level, from 1957 to 1989, the Country Party under Frank Nicklin and Joh Bjelke-Petersen dominated governments in Queensland—for the last six of those years ruling in its own right, without the Liberals. It also took part in governments in New South Wales, Victoria, and Western Australia.
However, successive electoral redistributions after 1964 indicated that the Country Party was losing ground electorally to the Liberals as the rural population declined, and the nature of some parliamentary seats on the urban/rural fringe changed. A proposed merger with the Democratic Labor Party (DLP) under the banner of "National Alliance" was rejected when it failed to find favour with voters at the 1974 state election.
Also in 1974, the Northern Territory members of the party joined with its Liberal party members to form the independent Country Liberal Party. This party continues to represent both parent parties in that territory. A separate party, the Joh-inspired NT Nationals, competed in the 1987 election with former Chief Minister Ian Tuxworth winning his seat of Barkly by a small margin. However, this splinter group were not endorsed by the national executive and soon disappeared from the political scene.
The National Party was confronted by the impact of demographic shifts from the 1970s: between 1971 and 1996, the population of Sydney and surrounds grew by 34%, with even larger growth in coastal New South Wales, while more remote rural areas grew by a mere 13%, further diminishing the National Party's base. On 2 May 1975 at the federal convention in Canberra, the Country Party changed its name to the National Country Party of Australia as part of a strategy to expand into urban areas. This had some success in Queensland under Joh Bjelke-Petersen, but nowhere else. The party briefly walked out of the coalition agreement in Western Australia in May 1975, returning within the month. However, the party split in two over the decision and other factors in late 1978, with a new National Party forming and becoming independent, holding three seats in the Western Australian lower house, while the National Country Party remained in coalition and also held three seats. They reconciled after the Burke Labor government came to power in 1983.
The 1980s were dominated by the feud between Bjelke-Petersen and the federal party leadership. Bjelke-Petersen briefly triumphed in 1987, forcing the Nationals to tear up the Coalition agreement and support his bid to become Prime Minister. The "Joh for Canberra" campaign backfired spectacularly when a large number of three-cornered contests allowed Labor to win a third term under Bob Hawke; however, in 1987 the National Party won a bump in votes and recorded its highest vote in more than four decades, but it also recorded a new low in the proportion of seats won. The collapse of Joh for Canberra also proved to be the Queensland Nationals' last hurrah; Bjelke-Petersen was forced into retirement a few months after the federal election, and his party was heavily defeated in 1989. The federal National Party were badly defeated at the 1990 election, with leader Charles Blunt one of five MPs to lose his seat.
Blunt's successor as leader, Tim Fischer, recovered two seats at the 1993 election, but lost an additional 1.2% of the vote from its 1990 result. In 1996, as the Coalition won a significant victory over the Keating Labor government, the National Party recovered another two seats, and Fischer became Deputy Prime Minister under John Howard.
The Nationals experienced difficulties in the late 1990s on two fronts: from the Liberal Party, which was winning seats on the basis that the Nationals were not seen to be a sufficiently separate party, and from the One Nation Party, which rode a swell of rural discontent with policies such as multiculturalism and gun control embraced by all of the major parties. The rise of Labor in formerly safe National-held areas in rural Queensland, particularly on the coast, has been the biggest threat to the Queensland Nationals.
At the 1998 federal election, the National Party recorded only 5.3% of the vote in the House of Representatives, its lowest ever, and won only 16 seats (10.8%), its second-lowest proportion of seats.
The National Party under Fischer and his successor, John Anderson, rarely engaged in public disagreements with the Liberal Party, which weakened the party's ability to present a separate image to rural and regional Australia. In 2001 the National Party recorded its second-worst result, 5.6%, winning 13 seats, and its third-lowest, 5.9%, at the 2004 election, winning only 12 seats.
Australian psephologist Antony Green argues that two important trends have driven the National Party's decline at a federal level: "the importance of the rural sector to the health of the nation's economy" and "the growing chasm between the values and attitudes of rural and urban Australia". Green has suggested that "Both have resulted in rural and regional voters demanding more of the National Party, at exactly the time when its political influence has declined. While the National Party has never been the sole representative of rural Australia, it is the only party that has attempted to paint itself as representing rural voters above all else".
In June 2005 party leader John Anderson announced that he would resign from the ministry and as Leader of the Nationals due to a benign prostate condition; he was succeeded by Mark Vaile. At the following election the Nationals' vote declined further, with the party winning a mere 5.4% of the vote and securing only 10 seats.
In 2010, under the leadership of Warren Truss, the party received its lowest vote to date, at only 3.4%; however, it secured a slight increase in seats from 10 to 12. At the following election in 2013 the National Party's fortunes improved slightly, with a vote of 4.2% and an increase in seats from 12 to 15.
At the 2016 double dissolution election, under the leadership of Barnaby Joyce, the party secured 4.6% of the vote and 16 seats. In 2018, reports emerged that the National Party leader and Deputy Prime Minister, Barnaby Joyce, was expecting a child with his former communications staffer Vikki Campion. Joyce resigned after revelations that he had been engaged in an extramarital affair. Later in the same year it was revealed that the NSW National Party and its youth wing, the Young Nationals, had been infiltrated by neo-Nazis, with more than 30 members being investigated for alleged links to neo-Nazism. Leader McCormack denounced the infiltration, and several suspected neo-Nazis were expelled from the party and its youth wing. | https://en.wikipedia.org/wiki?curid=21927 |
Northern blot
The northern blot, or RNA blot, is a technique used in molecular biology research to study gene expression by detection of RNA (or isolated mRNA) in a sample.
With northern blotting it is possible to observe cellular control over structure and function by determining the particular gene expression rates during differentiation and morphogenesis, as well as in abnormal or diseased conditions. Northern blotting involves the use of electrophoresis to separate RNA samples by size, and detection with a hybridization probe complementary to part of or the entire target sequence. The term 'northern blot' actually refers specifically to the capillary transfer of RNA from the electrophoresis gel to the blotting membrane. However, the entire process is commonly referred to as northern blotting. The northern blot technique was developed in 1977 by James Alwine, David Kemp, and George Stark at Stanford University, with contributions from Gerhard Heinrich. Northern blotting takes its name from its similarity to the first blotting technique, the Southern blot, named for biologist Edwin Southern. The major difference is that RNA, rather than DNA, is analyzed in the northern blot.
A general blotting procedure starts with extraction of total RNA from a homogenized tissue sample or from cells. Eukaryotic mRNA can then be isolated through the use of oligo(dT) cellulose chromatography, which selects only those RNAs with a poly(A) tail. RNA samples are then separated by gel electrophoresis. Since the gels are fragile and the probes are unable to enter the matrix, the RNA samples, now separated by size, are transferred to a nylon membrane through a capillary or vacuum blotting system. A nylon membrane with a positive charge is the most effective for use in northern blotting since the negatively charged nucleic acids have a high affinity for it. The transfer buffer used for the blotting usually contains formamide because it lowers the annealing temperature of the probe-RNA interaction, thus eliminating the need for high temperatures, which could cause RNA degradation. Once the RNA has been transferred to the membrane, it is immobilized through covalent linkage to the membrane by UV light or heat. After a probe has been labeled, it is hybridized to the RNA on the membrane. Experimental conditions that can affect the efficiency and specificity of hybridization include ionic strength, viscosity, duplex length, mismatched base pairs, and base composition. The membrane is washed to ensure that the probe has bound specifically and to prevent background signals from arising. The hybrid signals are then detected by X-ray film and can be quantified by densitometry. To create controls for comparison in a northern blot, samples not displaying the gene product of interest can be used after determination by microarrays or RT-PCR.
The RNA samples are most commonly separated on agarose gels containing formaldehyde as a denaturing agent for the RNA to limit secondary structure. The gels can be stained with ethidium bromide (EtBr) and viewed under UV light to observe the quality and quantity of RNA before blotting. Polyacrylamide gel electrophoresis with urea can also be used in RNA separation but it is most commonly used for fragmented RNA or microRNAs. An RNA ladder is often run alongside the samples on an electrophoresis gel to observe the size of fragments obtained but in total RNA samples the ribosomal subunits can act as size markers. Since the large ribosomal subunit is 28S (approximately 5kb) and the small ribosomal subunit is 18S (approximately 2kb) two prominent bands appear on the gel, the larger at close to twice the intensity of the smaller.
Probes for northern blotting are composed of nucleic acids with a sequence complementary to all or part of the RNA of interest; they can be DNA, RNA, or oligonucleotides with a minimum of 25 complementary bases to the target sequence. RNA probes (riboprobes) that are transcribed in vitro are able to withstand more rigorous washing steps, preventing some of the background noise. Commonly, cDNA is created with labelled primers for the RNA sequence of interest to act as the probe in the northern blot. The probes must be labelled either with radioactive isotopes (32P) or with chemiluminescence, in which alkaline phosphatase or horseradish peroxidase (HRP) break down chemiluminescent substrates, producing a detectable emission of light. The chemiluminescent labelling can occur in two ways: either the probe is attached to the enzyme directly, or the probe is labelled with a ligand (e.g. biotin) whose binding partner (e.g. avidin or streptavidin) is attached to the enzyme (e.g. HRP). X-ray film can detect both the radioactive and chemiluminescent signals, and many researchers prefer the chemiluminescent signals because they are faster, more sensitive, and reduce the health hazards that go along with radioactive labels. The same membrane can be probed up to five times without a significant loss of the target RNA.
Northern blotting allows one to observe a particular gene's expression pattern between tissues, organs, developmental stages, environmental stress levels, pathogen infection, and over the course of treatment. The technique has been used to show overexpression of oncogenes and downregulation of tumor-suppressor genes in cancerous cells when compared to 'normal' tissue, as well as the gene expression in the rejection of transplanted organs. If an upregulated gene is observed by an abundance of mRNA on the northern blot, the sample can then be sequenced to determine whether the gene is known to researchers or is a novel finding. The expression patterns obtained under given conditions can provide insight into the function of that gene. Since the RNA is first separated by size, if only one probe type is used, variance in the level of each band on the membrane can provide insight into the size of the product, suggesting alternative splice products of the same gene or repetitive sequence motifs. The variance in size of a gene product can also indicate deletions or errors in transcript processing. By altering the probe target used along the known sequence it is possible to determine which region of the RNA is missing.
Analysis of gene expression can be done by several different methods including RT-PCR, RNase protection assays, microarrays, RNA-Seq, serial analysis of gene expression (SAGE), as well as northern blotting. Microarrays are quite commonly used and are usually consistent with data obtained from northern blots; however, at times northern blotting is able to detect small changes in gene expression that microarrays cannot. The advantage that microarrays have over northern blots is that thousands of genes can be visualized at a time, while northern blotting is usually looking at one or a small number of genes.
A problem in northern blotting is often sample degradation by RNases (both endogenous to the sample and through environmental contamination), which can be avoided by proper sterilization of glassware and the use of RNase inhibitors such as DEPC (diethylpyrocarbonate). The chemicals used in most northern blots can be a risk to the researcher, since formaldehyde, radioactive material, ethidium bromide, DEPC, and UV light are all harmful under certain exposures. Compared to RT-PCR, northern blotting has a low sensitivity, but it also has a high specificity, which is important to reduce false positive results.
The advantages of using northern blotting include the detection of RNA size, the observation of alternate splice products, the use of probes with partial homology, the ability to assess RNA quality and quantity on the gel prior to blotting, and the fact that membranes can be stored and reprobed for years after blotting.
In northern blotting for the detection of acetylcholinesterase mRNA, a nonradioactive technique was compared with a radioactive technique and found to be as sensitive as the radioactive one, while requiring no protection against radiation and being less time-consuming.
Researchers occasionally use a variant of the procedure known as the reverse northern blot. In this procedure, the substrate nucleic acid (that is affixed to the membrane) is a collection of isolated DNA fragments, and the probe is RNA extracted from a tissue and radioactively labelled.
DNA microarrays, which came into widespread use in the late 1990s and early 2000s, are more akin to the reverse procedure, in that they involve the use of isolated DNA fragments affixed to a substrate, and hybridization with a probe made from cellular RNA. Thus the reverse procedure, though originally uncommon, enabled northern analysis to evolve into gene expression profiling, in which many (possibly all) of the genes in an organism may have their expression monitored. | https://en.wikipedia.org/wiki?curid=21930 |
Narrow-gauge railway
A narrow-gauge railway (narrow-gauge railroad in the US) is a railway with a track gauge narrower than the standard of 1,435 mm (4 ft 8 1/2 in). Most narrow-gauge railways are between 600 mm (1 ft 11 5/8 in) and 1,067 mm (3 ft 6 in).
Since narrow-gauge railways are usually built with tighter curves, smaller structure gauges, and lighter rails, they can be less costly to build, equip, and operate than standard- or broad-gauge railways (particularly in mountainous or difficult terrain). Lower-cost narrow-gauge railways are often built to serve industries and communities where the traffic potential would not justify the cost of a standard- or broad-gauge line. Narrow-gauge railways have specialized use in mines and other environments where a small structure gauge necessitates a small loading gauge. They also have more general applications. Non-industrial, narrow-gauge mountain railways are (or were) common in the Rocky Mountains of the United States and the Pacific Cordillera of Canada, Mexico, Switzerland, Bulgaria, the former Yugoslavia, Greece, and Costa Rica.
In some countries, narrow gauge is the standard; Japan, Indonesia, Taiwan, New Zealand, South Africa, and the Australian states of Queensland, Western Australia and Tasmania have a 1,067 mm (3 ft 6 in) gauge, and Malaysia and Thailand have metre-gauge railways. Narrow-gauge trams, particularly metre-gauge, are common in Europe.
A narrow-gauge railway is one where the distance between the inside edges of the rails is less than 1,435 mm (4 ft 8 1/2 in). Historically, the term was sometimes used to refer to standard-gauge railways, to distinguish them from broad-gauge railways, but this use no longer applies.
The earliest recorded railway appears in Georgius Agricola's 1556 "De re metallica", which shows a mine in Bohemia with a railway of about gauge. During the 16th century, railways were primarily restricted to hand-pushed, narrow-gauge lines in mines throughout Europe. In the 17th century, mine railways were extended to provide transportation above ground. These lines were industrial, connecting mines with nearby transportation points (usually canals or other waterways). These railways were usually built to the same narrow gauge as the mine railways from which they developed.
The world's first steam locomotive, built in 1802 by Richard Trevithick for the Coalbrookdale Company, ran on a plateway. The first commercially successful steam locomotive was Matthew Murray's Salamanca built in 1812 for the Middleton Railway in Leeds. Salamanca was also the first rack-and-pinion locomotive. During the 1820s and 1830s, a number of industrial narrow-gauge railways in the United Kingdom used steam locomotives. In 1842, the first narrow-gauge steam locomotive outside the UK was built for the -gauge Antwerp-Ghent Railway in Belgium. The first use of steam locomotives on a public, passenger-carrying narrow-gauge railway was in 1865, when the Ffestiniog Railway introduced passenger service after receiving its first locomotives two years earlier.
Many narrow-gauge railways were part of industrial enterprises and served primarily as industrial railways, rather than general carriers. Common uses for these industrial narrow-gauge railways included mining, logging, construction, tunnelling, quarrying, and conveying agricultural products. Extensive narrow-gauge networks were constructed in many parts of the world; 19th-century mountain logging operations often used narrow-gauge railways to transport logs from mill to market. Significant sugarcane railways still operate in Cuba, Fiji, Java, the Philippines, and Queensland, and narrow-gauge railway equipment remains in common use for building tunnels.
The first use of an internal combustion engine to power a narrow-gauge locomotive was in 1902. F. C. Blake built a 7 hp petrol locomotive for the Richmond Main Sewerage Board sewage plant at Mortlake. This gauge locomotive was probably the third petrol-engined locomotive built.
Extensive narrow-gauge rail systems served the front-line trenches of both sides in World War I. They were a short-lived military application, and after the war the surplus equipment created a small boom in European narrow-gauge railway building.
Narrow-gauge railways usually cost less to build because they are usually lighter in construction, using smaller cars and locomotives (a smaller loading gauge), smaller bridges and tunnels (a smaller structure gauge), and tighter curves. Narrow gauge is often used in mountainous terrain, where engineering savings can be substantial. It is also used in sparsely populated areas where the potential demand is too low for broad-gauge railways to be economically viable. This is the case in parts of Australia and most of Southern Africa, where poor soils have led to population densities too low for standard gauge to be viable.
For temporary railways which will be removed after short-term use, such as logging, mining or large-scale construction projects (especially in confined spaces, such as the Channel Tunnel), a narrow-gauge railway is substantially cheaper and easier to install and remove. Such railways have almost vanished, however, due to the capabilities of modern trucks.
In many countries, narrow-gauge railways were built as branch lines to feed traffic to standard-gauge lines due to lower construction costs. The choice was often not between a narrow- and standard-gauge railway, but between a narrow-gauge railway and none at all.
Narrow-gauge railways cannot freely interchange rolling stock (such as freight and passenger cars) with the standard- or broad-gauge railways with which they link, and the transfer of passengers and freight requires time-consuming manual labour or substantial capital expenditure. Some bulk commodities, such as coal, ore, and gravel, can be mechanically transshipped, but this is time-consuming, and the equipment required for the transfer is often complex to maintain.
If rail lines with other gauges coexist in a network, in times of peak demand it is difficult to move rolling stock to where it is needed when a break of gauge exists. Sufficient rolling stock must be available to meet a narrow-gauge railway's peak demand (which might be greater in comparison to a single-gauge network), and the surplus equipment generates no cash flow during periods of low demand. In regions where narrow gauge forms a small part of the rail network (as was the case on Russia's Sakhalin Railway), extra money is needed to design, produce or import narrow-gauge equipment.
Solutions to interchangeability problems include bogie exchanges, a rollbock system, variable gauge, dual gauge or gauge conversion.
Historically, in many places narrow-gauge railways were built to lower standards to prioritize cheap and fast construction. As a result, many narrow-gauge railways have limited scope for increases in maximum load or speed. An example is the use of small curve radii, which simplifies construction but limits the maximum allowed speed.
In Japan, a few narrow-gauge lines have been upgraded to standard-gauge mini-shinkansen to allow through service by standard-gauge high-speed trains. Due to the alignment and minimum curve radius of those lines, however, the maximum speed of the through service is the same as the original narrow-gauge line. If a narrow-gauge line is built to a higher standard, like Japan's proposed Super Tokkyū, this problem can be minimized.
If narrow-gauge rails are designed with potential growth in mind (or at the same standard as standard-gauge rails), obstacles to future growth would be similar to other rail gauges. For lines constructed to a lower standard, speed can be increased by realigning rail lines to increase the minimum curve radius, reducing the number of intersections or introducing tilting trains.
The heavy-duty narrow-gauge railways in Queensland, South Africa, and New Zealand demonstrate that if track is built to a heavy-duty standard, performance almost as good as a standard-gauge line is possible. Two-hundred-car trains operate on the Sishen–Saldanha railway line in South Africa, and high-speed Tilt Trains run in Queensland. Another example of a heavy-duty narrow-gauge line is Brazil's EFVM. Built to metre gauge, it has over-100-pound rail and a loading gauge almost as large as US non-excess-height lines. The line has a number of locomotives and 200-plus-car trains. In South Africa and New Zealand, the loading gauge is similar to the restricted British loading gauge; in New Zealand, some British Rail Mark 2 carriages have been rebuilt with new bogies for use by Tranz Scenic (Wellington-Palmerston North service), Tranz Metro (Wellington-Masterton service), and Transdev Auckland (Auckland suburban services).
Narrow gauge's reduced stability means that its trains cannot run at speeds as high as on broader gauges. For example, if a curve with standard-gauge rail can allow speed up to , the same curve with narrow-gauge rail can only allow speed up to .
In Japan and Queensland, recent permanent-way improvements have allowed trains on gauge tracks to exceed . Queensland Rail's Electric Tilt Train, the fastest train in Australia and the fastest gauge train in the world, set a record of . The speed record for narrow-gauge rail is , set in South Africa in 1978.
A special gauge railcar was built for the Otavi Mining and Railway Company with a design speed of 137 km/h.
Curve radius is also important for high speeds: narrow-gauge railways allow sharper curves, but these limit a vehicle's safe speed.
Many narrow gauges are in present or former use. They fall into several broad categories:
track gauge (also known as Scotch gauge) was adopted by early 19th-century railways, primarily in the Lanarkshire area of Scotland. lines were also constructed, and both were eventually converted to standard gauge.
Measured between the inside of the rail heads, its name and classification vary worldwide, and it has about of track.
As its name implies, metre gauge is a track gauge of . It has about of track.
According to Italian law, track gauges in Italy were defined from the centre of each rail rather than the inside edges of the rails. This gauge, measured between the edges of the rails, is known as Italian metre gauge.
There were a number of large railroad systems in North America; notable examples include the Denver & Rio Grande and Rio Grande Southern in Colorado and the South Pacific Coast and West Side Lumber Co of California. was also a common track gauge in South America, Ireland and on the Isle of Man. was a common gauge in Europe. Swedish three-foot-gauge railways () are unique to that country.
A few railways and tramways were built to gauge, including Nankai Main Line (later converted to ), Ocean Pier Railway at Atlantic City, Seaton Tramway (converted from ) and Waiorongomai Tramway.
gauge railways are commonly used for rack railways. Imperial gauge railways were generally constructed in the former British colonies. Bosnian gauge and railways are predominantly found in Russia and Eastern Europe.
Gauges such as , and were used in parts of the UK, particularly for railways in Wales and the borders, with some industrial use in the coal industry. Some sugar cane lines in Cuba were .
gauge railways were generally constructed in the former British colonies. , and were used in Europe.
Gauges below were rare. Arthur Percival Heywood developed gauge estate railways in Britain and Decauville produced a range of industrial railways running on and tracks, most commonly in restricted environments such as underground mine railways, parks and farms, in France. Several gauge railways were built in Britain to serve ammunition depots and other military facilities, particularly during World War I. | https://en.wikipedia.org/wiki?curid=21932 |
Surfing
Surfing is a surface water pastime in which the wave rider, referred to as a surfer, rides on the forward part, or face, of a moving wave, which usually carries the surfer towards the shore. Waves suitable for surfing are primarily found in the ocean, but can also be found in lakes or rivers in the form of a standing wave or tidal bore. However, surfers can also utilize artificial waves such as those from boat wakes and the waves created in artificial wave pools.
The term "surfing" usually refers to the act of riding a wave using a board, regardless of the stance. There are several types of boards. The native peoples of the Pacific, for instance, surfed waves on alaia, paipo, and other such craft, and did so on their belly and knees. The modern-day definition of surfing, however, most often refers to a surfer riding a wave standing on a surfboard; this is also referred to as stand-up surfing.
Another prominent form of surfing is body boarding, when a surfer rides the wave on a bodyboard, either lying on their belly, drop knee (one foot and one knee on the board), or sometimes even standing up on a body board. Other types of surfing include knee boarding, surf matting (riding inflatable mats), and using foils. Body surfing, where the wave is surfed without a board, using the surfer's own body to catch and ride the wave, is very common and is considered by some to be the purest form of surfing. The closest form of body surfing using a board is a handboard which normally has one strap over it to fit one hand in.
Three major subdivisions within stand-up surfing are stand-up paddling, long boarding and short boarding with several major differences including the board design and length, the riding style, and the kind of wave that is ridden.
In tow-in surfing (most often, but not exclusively, associated with big wave surfing), a motorized water vehicle, such as a personal watercraft, tows the surfer into the wave front, helping the surfer match a large wave's speed, which is generally higher than a self-propelled surfer can achieve. Surfing-related sports such as paddle boarding and sea kayaking do not require waves, and other derivative sports such as kite surfing and windsurfing rely primarily on wind for power, yet all of these platforms may also be used to ride waves. More recently, with the use of V-drive boats, wakesurfing, in which one surfs on the wake of a boat, has emerged. The Guinness Book of World Records recognized a wave ride by Garrett McNamara at Nazaré, Portugal as the largest wave ever surfed.
For hundreds of years, surfing was a central part of Polynesian culture. Surfing may have been observed by British explorers at Tahiti in 1767. Samuel Wallis and the crew members of were the first Britons to visit the island in June of that year. Another candidate is the botanist Joseph Banks, part of the first voyage of James Cook on , who arrived at Tahiti on 10 April 1769. Lieutenant James King was the first person to write about the art of surfing on Hawaii when he was completing the journals of Captain James Cook upon Cook's death in 1779.
When Mark Twain visited Hawaii in 1866 he wrote,
In one place we came upon a large company of naked natives, of both sexes and all ages, amusing themselves with the national pastime of surf-bathing.
References to surf riding on planks and single canoe hulls are also verified for pre-contact Samoa, where surfing was called "fa'ase'e" or "se'egalu" (see Augustin Krämer, "The Samoa Islands"), and Tonga, far pre-dating the practice of surfing by Hawaiians and eastern Polynesians by over a thousand years.
In July 1885, three teenage Hawaiian princes took a break from their boarding school, St. Matthew's Hall in San Mateo, and came to cool off in Santa Cruz, California. There, David Kawānanakoa, Edward Keliʻiahonui and Jonah Kūhiō Kalanianaʻole surfed the mouth of the San Lorenzo River on custom-shaped redwood boards, according to surf historians Kim Stoner and Geoff Dunn. In 1890, the pioneer in agricultural education John Wrightson reputedly became the first British surfer when instructed by two Hawaiian students at his college.
George Freeth (8 November 1883 – 7 April 1919) is often credited as being the "Father of Modern Surfing". He is thought to have been the first modern surfer.
In 1907, the eclectic interests of the land baron Henry E. Huntington brought the ancient art of surfing to the California coast. While on vacation, Huntington had seen Hawaiian boys surfing the island waves. Looking for a way to entice visitors to the area of Redondo Beach, where he had heavily invested in real estate, he hired a young Hawaiian to ride surfboards. George Freeth decided to revive the art of surfing, but had little success with the huge hardwood boards that were popular at that time. When he cut them in half to make them more manageable, he created the original "Long board", which made him the talk of the islands. To the delight of visitors, Freeth exhibited his surfing skills twice a day in front of the Hotel Redondo. Another native Hawaiian, Duke Kahanamoku, spread surfing to both the U.S. and Australia, riding the waves after displaying the swimming prowess that won him Olympic gold medals in 1912 and 1920.
In 1975, a professional tour started. That year Margo Oberg became the first female professional surfer.
Swell is generated when the wind blows consistently over a large area of open water, called the wind's fetch. The size of a swell is determined by the strength of the wind and the length of its fetch and duration. Because of this, the surf tends to be larger and more prevalent on coastlines exposed to large expanses of ocean traversed by intense low pressure systems.
Local wind conditions affect wave quality, since the surface of a wave can become choppy in blustery conditions. Ideal conditions include a light to moderate "offshore" wind, because it blows into the front of the wave, making it a "barrel" or "tube" wave. Waves are left-handed or right-handed depending upon the breaking formation of the wave.
Waves are generally recognized by the surfaces over which they break. For example, there are beach breaks, reef breaks and point breaks.
The most important influence on wave shape is the topography of the seabed directly behind and immediately beneath the breaking wave. The contours of the reef or bar front become stretched by diffraction. Each break is different since each location's underwater topography is unique. At beach breaks, sandbanks change shape from week to week. Surf forecasting is aided by advances in information technology. Mathematical modeling graphically depicts the size and direction of swells around the globe.
Swell regularity varies across the globe and throughout the year. During winter, heavy swells are generated in the mid-latitudes, when the North and South polar fronts shift toward the Equator. The predominantly westerly winds generate swells that advance eastward, so waves tend to be largest on west coasts during winter months. However, an endless train of mid-latitude cyclones causes the isobars to become undulated, redirecting swells at regular intervals toward the tropics.
East coasts also receive heavy winter swells when low-pressure cells form in the sub-tropics, where slow-moving highs inhibit their movement. These lows produce a shorter fetch than polar fronts; however, they can still generate heavy swells, since their slower movement increases the duration of a particular wind direction. The variables of fetch and duration both influence how long wind acts over a wave as it travels, since a wave reaching the end of a fetch behaves as if the wind had died.
During summer, heavy swells are generated when cyclones form in the tropics. Tropical cyclones form over warm seas, so their occurrence is influenced by El Niño & La Niña cycles. Their movements are unpredictable.
Surf travel and some surf camps offer surfers access to remote, tropical locations, where tradewinds ensure offshore conditions. Since winter swells are generated by mid-latitude cyclones, their regularity coincides with the passage of these lows. Swells arrive in pulses, each lasting for a couple of days, with a few days between each swell.
The availability of free model data from the NOAA has allowed the creation of several surf forecasting websites.
Tube shape is defined by the length-to-width ratio. A perfectly cylindrical vortex has a ratio of 1:1; other shapes also occur.
Tube speed is defined by the angle of the peel line.
The value of good surf in attracting surf tourism has prompted the construction of artificial reefs and sand bars. Artificial surfing reefs can be built with durable sandbags or concrete, and resemble a submerged breakwater. These artificial reefs not only provide a surfing location, but also dissipate wave energy and shelter the coastline from erosion. Ships such as the Seli 1 that have accidentally stranded on sandy bottoms can create sandbanks that give rise to good waves.
An artificial reef known as Chevron Reef was constructed in El Segundo, California in hopes of creating a new surfing area. However, the reef failed to produce any quality waves and was removed in 2008. In Kovalam, South West India, an artificial reef has, however, successfully provided the local community with a quality lefthander, stabilized coastal soil erosion, and provided good habitat for marine life. ASR Ltd., a New Zealand-based company, constructed the Kovalam reef and is working on another reef in Boscombe, England.
Even with artificial reefs in place, a tourist's vacation time may coincide with a "flat spell", when no waves are available. Completely artificial wave pools aim to solve that problem by controlling all the elements that go into creating perfect surf; however, only a handful of wave pools can simulate good surfing waves, owing primarily to construction and operation costs and potential liability. Most wave pools generate waves that are too small and lack the power necessary to surf. The Seagaia Ocean Dome, located in Miyazaki, Japan, was an example of a surfable wave pool. Able to generate waves with up to faces, the specialized pump held water in 20 vertical tanks positioned along the back edge of the pool. This allowed the waves to be directed as they approached the artificial sea floor. Lefts, rights, and A-frames could be directed from this pump design, providing for rippable surf and barrel rides. The Ocean Dome cost about $2 billion to build and was expensive to maintain; it was closed in 2007. In England, construction is nearing completion on the Wave, situated near Bristol, which will enable people unable to get to the coast to enjoy the waves in a controlled environment, set in the heart of nature.
There are two main types of artificial waves in use today. One is the artificial or stationary wave, which simulates a moving, breaking wave by pumping a layer of water against a smooth structure mimicking the shape of a breaking wave. Because of the velocity of the rushing water, the wave and the surfer can remain stationary while the water rushes by under the surfboard. Artificial waves of this kind provide the opportunity to try surfing and learn its basics in a moderately small and controlled environment, near or far from locations with natural surf.
Another artificial wave can be made through use of a wave pool. These wave pools strive to make a wave that replicates a real ocean wave more than the stationary wave does. In 2018, the first professional surfing tournament in a wave pool was held.
Surfers represent a diverse culture based on riding the waves. Some people practice surfing as a recreational activity while others make it the central focus of their lives. Surfing culture is most dominant in Hawaii and California because these two states offer the best surfing conditions. However, waves can be found wherever there is coastline, and a tight-knit yet far-reaching subculture of surfers has emerged throughout America. Some historical markers of the culture included the woodie, the station wagon used to carry surfers' boards, as well as boardshorts, the long swim shorts typically worn while surfing. Surfers also wear wetsuits in colder regions.
The sport is also a significant part of Australia's eastern coast sub-cultural life, especially in New South Wales, where the weather and water conditions are most favourable for surfing.
During the 1960s, as surfing caught on in California, its popularity spread through American pop culture. Several teen movies, starting with the Gidget series in 1959, transformed surfing into a dream life for American youth. Later movies, including Beach Party (1963), Ride the Wild Surf (1964), and Beach Blanket Bingo (1965) promoted the California dream of sun and surf. Surf culture also fueled the early records of the Beach Boys.
The sport of surfing now represents a multibillion-dollar industry, especially in clothing and fashion markets. The World Surf League (WSL) runs the championship tour, hosting top competitors in some of the best surf spots around the globe. A small number of people make a career out of surfing by receiving corporate sponsorships and performing for photographers and videographers in far-flung destinations; they are typically referred to as freesurfers. Sixty-six surfboarders riding a single long surfboard set a record in Huntington Beach, California, for the most people on a surfboard at one time. Dale Webster surfed for 14,641 consecutive days, making it his main life focus.
When the waves were flat, surfers persevered with sidewalk surfing, which is now called skateboarding. Sidewalk surfing has a similar feel to surfing and requires only a paved road or sidewalk. To create the feel of the wave, surfers even sneak into empty backyard swimming pools to ride in, known as pool skating. Eventually, surfing made its way to the slopes with the invention of the Snurfer, later credited as the first snowboard. Many other board sports have been invented over the years, but all can trace their heritage back to surfing.
Many surfers claim to have a spiritual connection with the ocean, describing the surfing experience, both in and out of the water, as a type of spiritual experience or a religion.
Standup surfing begins when the surfer paddles toward shore in an attempt to match the speed of the wave (the same applies whether the surfer is standup paddling, bodysurfing, boogie-boarding or using some other type of watercraft, such as a waveski or kayak). Once the wave begins to carry the surfer forward, the surfer stands up and proceeds to ride the wave. The basic idea is to position the surfboard so it is just ahead of the breaking part (whitewash) of the wave. A common problem for beginners is being able to catch the wave at all.
Surfers' skills are tested by their ability to control their board in difficult conditions, riding challenging waves, and executing maneuvers such as strong turns and cutbacks (turning board back to the breaking wave) and "carving" (a series of strong back-to-back maneuvers). More advanced skills include the "floater" (riding on top of the breaking curl of the wave), and "off the lip" (banking off the breaking wave). A newer addition to surfing is the progression of the "air" whereby a surfer propels off the wave entirely up into the air, and then successfully lands the board back on the wave.
The tube ride is considered to be the ultimate maneuver in surfing. As a wave breaks, if the conditions are ideal, the wave will break in an orderly line from the middle to the shoulder, enabling the experienced surfer to position themselves inside the wave as it is breaking. This is known as a tube ride. Viewed from the shore, the tube rider may disappear from view as the wave breaks over the rider's head. The longer the surfer remains in the tube, the more successful the ride. This is referred to as getting tubed, barrelled, shacked or pitted. Some of the world's best known waves for tube riding include Pipeline on the North shore of Oahu, Teahupoo in Tahiti and G-Land in Java. Other names for the tube include "the barrel", and "the pit".
Hanging ten and hanging five are moves usually specific to long boarding. Hanging Ten refers to having both feet on the front end of the board with all of the surfer's toes off the edge, also known as nose-riding. Hanging Five is having just one foot near the front, with five toes off the edge.
Cutback: Generating speed down the line and then turning back to reverse direction.
Floater: Suspending the board atop the wave. Very popular on small waves.
Top-Turn: Turn off the top of the wave. Sometimes used to generate speed and sometimes to shoot spray.
Airs/Aerials: These maneuvers have been becoming more and more prevalent in the sport in both competition and free surfing. An air is when the surfer can achieve enough speed and approach a certain type of section of a wave that is supposed to act as a ramp and launch the surfer above the lip line of the wave, “catching air”, and landing either in the transition of the wave or the whitewash when hitting a close-out section.
Airs can either be straight airs or rotational airs. Straight airs have minimal rotation if any, but definitely no more rotation than 90 degrees. Rotational airs require a rotation of 90 degrees or more depending on the level of the surfer.
Types of rotations:
The Glossary of surfing includes some of the extensive vocabulary used to describe various aspects of the sport of surfing as described in literature on the subject. In some cases terms have spread to a wider cultural use. These terms were originally coined by people who were directly involved in the sport of surfing.
Many popular surfing destinations have surf schools and surf camps that offer lessons. Surf camps for beginners and intermediates are multi-day lessons that focus on surfing fundamentals. They are designed to take new surfers and help them become proficient riders. All-inclusive surf camps offer overnight accommodations, meals, lessons and surfboards. Most surf lessons begin with instruction and a safety briefing on land, followed by instructors helping students into waves on longboards or "softboards". The softboard is considered the ideal surfboard for learning because it is safer and has more paddling speed and stability than shorter boards. Funboards are also a popular shape for beginners, as they combine the volume and stability of the longboard with the manageable size of a smaller surfboard. New and inexperienced surfers typically learn to catch waves on softboards around the funboard size. Because of the surfboard's softness, the chance of injury is substantially reduced.
Typical surfing instruction is best performed one-on-one, but can also be done in a group setting. The most popular surf locations offer perfect surfing conditions for beginners, as well as challenging breaks for advanced students. The ideal conditions for learning would be small waves that crumble and break softly, as opposed to the steep, fast-peeling waves desired by more experienced surfers. When available, a sandy seabed is generally safer.
Surfing can be broken into several skills: paddling strength, positioning to catch the wave, timing, and balance. Paddling out requires strength, but also mastery of techniques to break through oncoming waves ("duck diving", "eskimo roll"). Take-off positioning requires experience at predicting the wave set and where it will break. The surfer must pop up quickly as soon as the wave starts pushing the board forward. Preferred positioning on the wave is determined by experience at reading wave features, including where the wave is breaking. Balance plays a crucial role in standing on a surfboard, so balance-training exercises are good preparation. Practicing with a balance board or swing boarding helps novices master the art.
The repetitive cycle of paddling, popping up, and balancing requires stamina, explosivity, and near-constant core stabilization. Having a proper warm up routine can help prevent injuries.
Surfing can be done on various equipment, including surfboards, longboards, stand up paddle boards (SUPs), bodyboards, wave skis, skimboards, kneeboards, surf mats and macca's trays. Surfboards were originally made of solid wood and were large and heavy (often up to long and having a mass of ). Lighter balsa wood surfboards (first made in the late 1940s and early 1950s) were a significant improvement, not only in portability, but also in increasing maneuverability.
Most modern surfboards are made of polyurethane foam (PU), with one or more wooden strips or "stringers", fiberglass cloth, and polyester resin (PE). An emerging construction uses epoxy resin and expanded polystyrene foam (EPS), which is stronger and lighter than traditional PU/PE construction. Even newer designs incorporate materials such as carbon fiber and variable-flex composites in conjunction with fiberglass and epoxy or polyester resins. Since epoxy/EPS surfboards are generally lighter, they float better than a traditional PU/PE board of similar size, shape and thickness. This makes them easier to paddle and faster in the water. However, a common complaint about EPS boards is that they do not provide as much feedback as a traditional PU/PE board. For this reason, many advanced surfers prefer that their surfboards be made from traditional materials.
Other equipment includes a leash (to stop the board from drifting away after a wipeout, and to prevent it from hitting other surfers), surf wax, traction pads (to keep a surfer's feet from slipping off the deck of the board), and fins (also known as "skegs") which can either be permanently attached ("glassed-on") or interchangeable. Sportswear designed or particularly suitable for surfing may be sold as "boardwear" (the term is also used in snowboarding). In warmer climates, swimsuits, surf trunks or boardshorts are worn, and occasionally rash guards; in cold water surfers can opt to wear wetsuits, boots, hoods, and gloves to protect them against lower water temperatures. A newer introduction is a rash vest with a thin layer of titanium to provide maximum warmth without compromising mobility. In recent years, there have been advancements in technology that have allowed surfers to pursue even bigger waves with added elements of safety. Big wave surfers are now experimenting with inflatable vests or colored dye packs to help decrease their odds of drowning.
There are many different surfboard sizes, shapes, and designs in use today. Modern longboards, generally in length, are reminiscent of the earliest surfboards, but now benefit from modern innovations in surfboard shaping and fin design. Competitive longboard surfers need to be competent at traditional "walking" manoeuvres, as well as the short-radius turns normally associated with shortboard surfing. The modern shortboard began life in the late 1960s and has evolved into today's common "thruster" style, defined by its three fins, usually around in length. The thruster was invented by Australian shaper Simon Anderson.
Midsize boards, often called funboards, provide more maneuverability than a longboard, with more flotation than a shortboard. While many surfers find that funboards live up to their name, providing the best of both surfing modes, others are critical.
There are also various niche styles, such as the "Egg", a longboard-style short board targeted at people who want to ride a shortboard but need more paddle power. The "Fish" is a board that is typically shorter, flatter, and wider than a normal shortboard, often with a split tail (known as a "swallow tail"); it often has two or four fins and is specifically designed for surfing smaller waves. For big waves there is the "Gun", a long, thick board with a pointed nose and tail (known as a pin tail) specifically designed for big waves.
The physics of surfing involves the physical oceanographic properties of wave creation in the surf zone, the characteristics of the surfboard, and the surfer's interaction with the water and the board.
Ocean waves are defined as a collection of displaced water parcels that undergo a cycle of being forced past their normal position and then restored to it. Wind-caused ripples and eddies form waves that gradually gain speed and distance (fetch). Waves increase in energy and speed, and then become longer and stronger. A fully developed sea has the strongest wave action; storms lasting around 10 hours can create 15-meter wave heights in the open ocean.
The waves created in the open ocean are classified as deep-water waves. Deep-water waves have no bottom interaction, and the orbits of their water molecules are circular; their wavelength is short relative to the water depth, and the orbital velocity decays before reaching the bottom of the water basin. Deep-water waves occur where the depth is greater than half the wavelength. Wind forces waves to break in the deep sea.
Deep-water waves travel to shore and become shallow-water waves. Shallow-water waves occur where the depth is less than half the wavelength; their wavelengths are long relative to the water depth, and their orbitals are elliptical. The wave velocity affects the entire water basin. The water interacts with the bottom as it approaches shore, producing a drag interaction. The drag pulls on the bottom of the wave, causes refraction, increases the wave height, and decreases the celerity (the speed of the wave form) until the top (crest) falls over. This happens because the velocity of the top of the wave is greater than the velocity of the bottom of the wave.
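As a worked illustration of the deep- and shallow-water regimes described above, the following minimal sketch uses standard linear wave theory (not taken from the sources behind this article, and ignoring the intermediate-depth regime) to estimate wave speed from period and depth:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def wave_speed(period_s: float, depth_m: float) -> float:
    """Approximate phase speed: deep-water c = g*T/(2*pi) where the depth
    exceeds half the wavelength; shallow-water c = sqrt(g*d) otherwise.
    The transitional regime between the two is ignored in this sketch."""
    deep_c = G * period_s / (2 * math.pi)
    deep_wavelength = deep_c * period_s
    if depth_m > deep_wavelength / 2:
        return deep_c
    return math.sqrt(G * depth_m)

print(round(wave_speed(10.0, 1000.0), 1))  # open-ocean swell: ~15.6 m/s
print(round(wave_speed(10.0, 2.0), 1))     # near the beach:   ~4.4 m/s
```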
The surf zone is a place of convergence of multiple wave types, creating complex wave patterns. A wave suitable for surfing results from maximum speeds of about 5 meters per second. This speed is relative, because local onshore winds can cause waves to break. In the surf zone, shallow-water waves are carried by global winds to the beach and interact with local winds to make surfing waves.
Different onshore and off-shore wind patterns in the surf zone create different types of waves. Onshore winds cause random wave breaking patterns and are more suitable for experienced surfers. Light offshore winds create smoother waves, while strong direct offshore winds cause plunging or large barrel waves. Barrel waves are large because the water depth is small when the wave breaks. Thus, the breaker intensity (or force) increases, and the wave speed and height increase. Off-shore winds produce non-surfable conditions by flattening a weak swell. Weak swell is made from surface gravity forces and has long wavelengths.
Surfing waves can be analyzed using the following parameters: breaking wave height, wave peel angle (α), wave breaking intensity, and wave section length. The breaking wave height has two measurements, the relative heights estimated by surfers and the exact measurements done by physical oceanographers. Measurements done by surfers were 1.36 to 2.58 times higher than the measurements done by scientists. The scientifically concluded wave heights that are physically possible to surf are 1 to 20 meters.
The wave peel angle is one of the main determinants of a potential surfing wave. The wave peel angle (α) is the angle between the peel line and the line tangent to the breaking crest line, and it controls the speed of the wave crest. The overall velocity of the wave (Vs) is the vector sum of the propagation velocity (Vw) and the peel velocity (Vp).
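As a rough illustration of this vector relationship, the sketch below computes the overall wave speed from the propagation speed and peel angle, assuming the peel velocity follows Vp = Vw / tan(α) from the geometry outlined above (an assumption made for this example, not a result quoted from the article's sources):

```python
import math

def overall_wave_speed(v_w: float, peel_angle_deg: float) -> float:
    """Treat Vs as the vector sum of the propagation velocity Vw and the
    peel velocity Vp, with Vp = Vw / tan(alpha); then |Vs| = Vw / sin(alpha)."""
    alpha = math.radians(peel_angle_deg)
    v_p = v_w / math.tan(alpha)   # peel velocity along the breaking crest
    return math.hypot(v_w, v_p)   # equivalently v_w / math.sin(alpha)

# Example: a 5 m/s wave peeling at a 30-degree angle
print(round(overall_wave_speed(5.0, 30.0), 1))  # -> 10.0 m/s
```

Note that a smaller peel angle gives a faster overall wave, which is consistent with the skill-level relationship described below.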
Wave breaking intensity measures the force of the wave as it breaks, spills, or plunges (a plunging wave is termed by surfers as a "barrel wave"). Wave section length is the distance between two breaking crests in a wave set. Wave section length can be hard to measure because local winds, non-linear wave interactions, island sheltering, and swell interactions can cause multifarious wave configurations in the surf zone.
The parameters breaking wave height, wave peel angle (α), wave breaking intensity, and wave section length are important because they were standardized by past oceanographers who researched surfing; these parameters have been used to create a guide that matches the type of wave formed to the skill level of the surfer.
Table 1 shows that smaller peel angles correlate with a higher surfer skill level. Smaller wave peel angles increase the velocity of the wave, so a surfer must know how to react and paddle quickly to match the speed of the wave and catch it. Therefore, more experience is required to catch low-peel-angle waves. More experienced surfers can handle longer section lengths, increased velocities, and higher wave heights. Different locations offer different types of surfing conditions for each skill level.
A surf break is an area with an obstruction or an object that causes a wave to break. Surf breaks entail multiple scale phenomena. Wave section creation has micro-scale factors of peel angle and wave breaking intensity. The micro-scale components influence wave height and variations on wave crests. The mesoscale components of surf breaks are the ramp, platform, wedge, or ledge that may be present at a surf break. Macro-scale processes are the global winds that initially produce offshore waves. Types of surf breaks are headlands (point break), beach break, river/estuary entrance bar, reef breaks, and ledge breaks.
A headland or point break interacts with the water by causing refraction around the point or headland. The point absorbs the high frequency waves and long period waves persist, which are easier to surf. Examples of locations that have headland or point break induced surf breaks are Dunedin (New Zealand), Raglan, Malibu (California), Rincon (California), and Kirra (Australia).
A beach break happens where waves break from offshore waves, and onshore sandbars and rips. Wave breaks happen successively at beach breaks. Example locations are Tairua and Aramoana Beach (New Zealand) and the Gold Coast (Australia).
A river or estuary entrance bar creates waves from the ebb tidal delta, sediment outflow, and tidal currents. An ideal estuary entrance bar exists in Whangamata Bar, New Zealand.
A reef break is conducive to surfing because large waves consistently break over the reef. The reef is usually made of coral, and because of this, many injuries occur while surfing reef breaks. However, the waves that are produced by reef breaks are some of the best in the world. Famous reef breaks are present in Padang Padang (Indonesia), Pipeline (Hawaii), Uluwatu (Bali), and Teahupo'o (Tahiti). When surfing a reef break, the depth of the water needs to be considered as surfboards have fins on the bottom of the board.
A ledge break is formed by steep rock ledges that make intense waves, because the waves travel through deeper water and then abruptly reach shallower water at the ledge. Shark Island, Australia, is a location with a ledge break. Ledge breaks create difficult surfing conditions, sometimes leaving body surfing as the only feasible way to confront the waves.
Jetties are added to bodies of water to regulate erosion, preserve navigation channels, and make harbors. Jetties are classified into four different types and have two main controlling variables: the type of delta and the size of the jetty.
The first classification is a type 1 jetty. This type of jetty is significantly longer than the surf zone width and the waves break at the shore end of the jetty. The effect of a Type 1 jetty is sediment accumulation in a wedge formation on the jetty. These waves are large and increase in size as they pass over the sediment wedge formation. An example of a Type 1 jetty is Mission Beach, San Diego, California. This 1000-meter jetty was installed in 1950 at the mouth of Mission Bay. The surf waves happen north of the jetty, are longer waves, and are powerful. The bathymetry of the sea bottom in Mission Bay has a wedge shape formation that causes the waves to refract as they become closer to the jetty. The waves converge constructively after they refract and increase the sizes of the waves.
A type 2 jetty occurs in an ebb tidal delta, a delta transitioning between high and low tide. This area has shallow water, refraction, and distinctive seabed shapes that create large wave heights.
An example of a type 2 jetty is "The Poles" in Atlantic Beach, Florida. Atlantic Beach is known to have flat waves, with exceptions during major storms. However, "The Poles" has larger-than-normal waves due to a 500-meter jetty that was installed on the south side of the St. Johns River to make a deep channel in the river. The jetty formed a delta at "The Poles". This is a special area because the jetty increases wave size for surfing, when comparing pre- and post-installation conditions at the southern St. Johns River mouth.
The wave size at "The Poles" depends on the direction of the incoming water. When easterly waters (from 55°) interact with the jetty, they create waves larger than southern waters (from 100°). When southern waves (from 100°) move toward "The Poles", one of the waves breaks north of the southern jetty and the other breaks south of the jetty. This does not allow for merging to make larger waves. Easterly waves, from 55°, converge north of the jetty and unite to make bigger waves.
A type 3 jetty is in an ebb tidal area with an unchanging seabed that has naturally created waves. An example of a type 3 jetty occurs at "Southside" Tamarack, in Carlsbad, California.
A type 4 jetty is one that no longer functions nor traps sediment. The waves are created from reefs in the surf zone. A type 4 jetty can be found in Tamarack, Carlsbad, California.
Rip currents are fast, narrow currents that are caused by onshore transport within the surf zone and the successive return of the water seaward. The wedge bathymetry makes a convenient and consistent rip current of 5–10 meters that brings the surfers to the “take off point” then out to the beach.
Oceanographers have two theories on rip current formation. The wave interaction model assumes that two edges of waves interact, create differing wave heights, and cause longshore transport of nearshore currents. The Boundary Interaction Model assumes that the topography of the sea bottom causes nearshore circulation and longshore transport; the result of both models is a rip current.
Rip currents can be extremely strong and narrow as they extend out of the surf zone into deeper water, reaching speeds from and up to , which is faster than any human can swim. The water in the jet is sediment rich, bubble rich, and moves rapidly. The rip head of the rip current has long shore movement. Rip currents are common on beaches with mild slopes that experience sizeable and frequent oceanic swell.
The vorticity and inertia of rip currents have been studied. A model of rip-current vorticity developed at the Scripps Institution of Oceanography found that as a fast rip current extends away from shallow water, the vorticity of the current increases and the width of the current decreases. The model also acknowledges that friction plays a role and that waves are irregular in nature. Data from sector-scanning Doppler sonar at the Scripps Institution of Oceanography showed that rip currents in La Jolla, California, lasted several minutes, recurred one to four times per hour, and created a wedge with a 45° arc and a radius of 200–400 meters.
A longer surfboard of causes more friction with the water; therefore, it will be slower than a smaller and lighter board with a length of . Longer boards are good for beginners who need help balancing. Smaller boards are good for more experienced surfers who want to have more control and maneuverability.
When practicing the sport of surfing, the surfer paddles out past the wave break to wait for a wave. When a surfable wave arrives, the surfer must paddle extremely fast to match the velocity of the wave so the wave can accelerate him or her.
When the surfer is at wave speed, the surfer must quickly pop up, stay low, and stay toward the front of the wave to remain stable and avoid falling as the wave steepens. The acceleration is less toward the front of the wave than toward the back. The physics behind riding the wave involves the horizontal accelerating force (F·sinθ) and the vertical force balance (F·cosθ = mg). The surfer should therefore lean forward to gain speed and lean on the back foot to brake. Also, to increase the length of the ride, the surfer should travel parallel to the wave crest.
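Taking the force balance stated above at face value (a sketch for illustration, not a full treatment of drag or buoyancy), the forward acceleration depends only on the slope of the wave face:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def forward_acceleration(wave_face_angle_deg: float) -> float:
    """If the board's normal force F satisfies F*cos(theta) = m*g vertically,
    the horizontal component F*sin(theta) yields a = g*tan(theta),
    independent of the surfer's mass."""
    theta = math.radians(wave_face_angle_deg)
    return G * math.tan(theta)

# Example: a wave face sloping at 30 degrees
print(round(forward_acceleration(30.0), 2))  # ~5.66 m/s^2
```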
Surfing, like all water sports, carries the inherent risk of drowning. Anyone at any age can learn to surf, but should have at least intermediate swimming skills. Although the board assists a surfer in staying buoyant, it can become separated from the user. A leash, attached to the ankle or knee, can keep a board from being swept away, but does not keep a rider on the board or above water. In some cases, possibly including the drowning of professional surfer Mark Foo, a leash can even be a cause of drowning by snagging on a reef or other object and holding the surfer underwater. By keeping the surfboard close to the surfer during a wipeout, a leash also increases the chances that the board may strike the rider, which could knock him or her unconscious and lead to drowning. A fallen rider's board can become trapped in larger waves, and if the rider is attached by a leash, he or she can be dragged for long distances underwater. Surfers should be careful to remain in smaller surf until they have acquired the advanced skills and experience necessary to handle bigger waves and more challenging conditions. However, even world-class surfers have drowned in extremely challenging conditions.
Under the wrong set of conditions, anything that a surfer's body can come in contact with is a potential hazard, including sand bars, rocks, small ice, reefs, surfboards, and other surfers. Collisions with these objects can sometimes cause injuries such as cuts and scrapes and in rare instances, death.
A large number of injuries, up to 66%, are caused by collision with a surfboard (nose or fins). Fins can cause deep lacerations and cuts, as well as bruising. While these injuries can be minor, they can open the skin to infection from the sea; groups like Surfers Against Sewage campaign for cleaner waters to reduce the risk of infections. Local bugs and disease can be risk factors when surfing around the globe.
Falling off a surfboard or colliding with others is commonly referred to as a "wipeout".
Sea life can sometimes cause injuries and even fatalities. Animals such as sharks, stingrays, Weever fish, seals and jellyfish can sometimes present a danger. Warmer-water surfers often do the "stingray shuffle" as they walk out through the shallows, shuffling their feet in the sand to scare away stingrays that may be resting on the bottom.
Rip currents are water channels that flow away from the shore. Under the wrong circumstances these currents can endanger both experienced and inexperienced surfers. Since a rip current appears to be an area of flat water, tired or inexperienced swimmers or surfers may enter one and be carried out beyond the breaking waves. Although many rip currents are much smaller, the largest have a width of forty or fifty feet. The flow of water moving out to sea in a rip is stronger than most swimmers can overcome, making swimming back to shore difficult; however, by paddling parallel to the shore, a surfer can easily exit a rip current. Alternatively, some surfers actually ride a rip current because it is a fast and effortless way to get out beyond the zone of breaking waves.
The seabed can pose a risk for surfers. If a surfer falls while riding a wave, the wave tosses and tumbles the surfer around, often in a downwards direction. At reef breaks and beach breaks, surfers have been seriously injured and even killed, because of a violent collision with the sea bed, the water above which can sometimes be very shallow, especially at beach breaks or reef breaks during low tide. Cyclops, Western Australia, for example, is one of the biggest and thickest reef breaks in the world, with waves measuring up to high, but the reef below is only about below the surface of the water.
A January 2018 study by the University of Exeter called the "Beach Bum Survey" found surfers and bodyboarders to be three times as likely as non-surfers to harbor antibiotic-resistant "E. coli" and four times as likely to harbor other bacteria capable of easily becoming antibiotic resistant. The researchers attributed this to the fact that surfers swallow roughly ten times as much seawater as swimmers.
Surfers should use ear protection such as ear plugs to avoid surfer's ear, inflammation of the ear, or other damage. Surfer's ear is a condition in which the bone near the ear canal grows after repeated exposure to cold water, making the ear canal narrower. The narrowed canal makes it harder for water to drain from the ear, which can result in pain, infection and sometimes ringing of the ear. Surfer's ear develops only after repeated surfing sessions, yet damage such as inflammation of the ear can occur after surfing only once. This can be caused by repeatedly falling off the surfboard into the water and having cold water rush into the ears, which can exert a damaging amount of pressure. Those with sensitive ears should therefore wear ear protection, even if they are not planning to surf very often.
Ear plugs designed for surfers, swimmers and other water athletes are primarily made to keep water out of the ear, thereby letting a protective pocket of air stay inside the ear canal. They can also block cold air, dirt and bacteria. Many designs are made to let sound through, and either float and/or have a leash in case the plug accidentally gets bumped out.
Surfer's eye ("Pterygium (conjunctiva)") is a gradual tissue growth on the cornea of the eye which ultimately can lead to vision loss. The cause of the condition is unclear, but appears to be partly related to long term exposure to UV light, dust and wind exposure. Prevention may include wearing sunglasses and a hat if in an area with strong sunlight. Surfers and other water-sport athletes should therefore wear eye protection that blocks 100% of the UV rays from the water, as is often used by snow-sport athletes. Surf goggles often have a head strap and ventilation to avoid fogging
Users of contact lenses should take extra care, and may consider wearing surfing goggles. Some risks of exposing contact lenses to the elements that can cause eye damage or infections are sand or organisms in the sea water getting between the eye and contact lens, or that lenses might fold.
Surfer's myelopathy is a rare spinal cord injury causing paralysis of the lower extremities, caused by hyperextension of the back. This occurs when one of the main blood vessels of the spine becomes kinked, depriving the spinal cord of oxygen. In some cases the paralysis is permanent. Although any activity in which the back is arched can cause this condition (e.g. yoga or pilates), this rare phenomenon has most often been seen in those surfing for the first time. According to DPT Sergio Florian, recommendations for preventing myelopathy include a proper warm-up, limiting session length, and sitting on the board while waiting for waves rather than lying on it. | https://en.wikipedia.org/wiki?curid=28198 |
SMS
SMS (short message service) is a text messaging service component of most telephone, Internet, and mobile device systems. It uses standardized communication protocols to enable mobile devices to exchange short text messages. An intermediary service can facilitate a text-to-voice conversion to be sent to landlines.
SMS, as used on modern devices, originated from radio telegraphy in radio memo pagers that used standardized phone protocols. These were defined in 1985 as part of the Global System for Mobile Communications (GSM) series of standards. The first test SMS message was sent in 1992, and the service was commercially rolled out on many cellular networks over that decade. SMS became hugely popular worldwide as a method of text communication. By the end of 2010, SMS was the most widely used data application, with an estimated 3.5 billion active users, or about 80% of all mobile phone subscribers.
The protocols allowed users to send and receive messages of up to 160 characters (when entirely alpha-numeric) to and from GSM mobiles. Although most SMS messages are sent from one mobile phone to another, support for the service has expanded to include other mobile technologies, such as ANSI CDMA networks and Digital AMPS.
Mobile marketing, a type of direct marketing, uses SMS. According to a 2014 market research report the global SMS messaging business was estimated to be worth over US$100 billion, accounting for almost 50 percent of all the revenue generated by mobile messaging.
Adding text messaging functionality to mobile devices began in the early 1980s. The first action plan of the CEPT Group GSM was approved in December 1982, requesting that "The services and facilities offered in the public switched telephone networks and public data networks ... should be available in the mobile system." This plan included the exchange of text messages either directly between mobile stations, or transmitted via message handling systems in use at that time.
The SMS concept was developed in the Franco-German GSM cooperation in 1984 by Friedhelm Hillebrand and Bernard Ghillebaert. GSM was optimized for telephony, since this was identified as its main application. The key idea for SMS was to use this telephone-optimized system and to transport messages on the signalling paths needed to control the telephone traffic during periods when no signalling traffic existed. In this way, unused resources in the system could be used to transport messages at minimal cost. However, it was necessary to limit the length of the messages to 128 bytes (later improved to 160 seven-bit characters) so that the messages could fit into the existing signalling formats. Based on his personal observations and on analysis of the typical lengths of postcard and Telex messages, Hillebrand argued that 160 characters was sufficient for most brief communications.
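The 160-character figure follows directly from the payload arithmetic: the eventual signalling payload of 140 bytes (discussed further near the end of this article) carries exactly 160 seven-bit characters. A quick check, for illustration:

```python
# Worked check of the SMS payload arithmetic (illustrative only)
payload_bytes = 140            # the per-message payload in the signalling format
bits = payload_bytes * 8       # 1120 bits
gsm7_chars = bits // 7         # 160 seven-bit (GSM 7-bit alphabet) characters
print(bits, gsm7_chars)        # -> 1120 160
```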
SMS could be implemented in every mobile station by updating its software. Hence, a large base of SMS-capable terminals and networks existed when people began to use SMS. A new network element required was a specialized short message service centre, and enhancements were required to the radio capacity and network transport infrastructure to accommodate growing SMS traffic.
The technical development of SMS was a multinational collaboration carried out within the framework of standards bodies. Through these organizations the technology was made freely available to the whole world.
The first proposal which initiated the development of SMS was made by a contribution of Germany and France in the GSM group meeting in February 1985 in Oslo. This proposal was further elaborated in GSM subgroup WP1 Services (Chairman Martine Alvernhe, France Telecom) based on a contribution from Germany. There were also initial discussions in the subgroup WP3 network aspects chaired by Jan Audestad (Telenor). The result was approved by the main GSM group in a June 1985 document which was distributed to industry. The input documents on SMS had been prepared by Friedhelm Hillebrand of Deutsche Telekom, with contributions from Bernard Ghillebaert of France Télécom. The definition that Friedhelm Hillebrand and Bernard Ghillebaert brought into GSM called for the provision of a message transmission service of alphanumeric messages to mobile users "with acknowledgement capabilities". The last three words transformed SMS into something much more useful than the electronic paging services used at the time that some in GSM might have had in mind.
SMS was considered in the main GSM group as a possible service for the new digital cellular system. In the GSM document "Services and Facilities to be provided in the GSM System", both mobile-originated and mobile-terminated short messages appear on the table of GSM teleservices.
The discussions on the GSM services were concluded in the recommendation GSM 02.03, "TeleServices supported by a GSM PLMN", which gave a rudimentary description of the three services.
The material elaborated in GSM and its WP1 subgroup was handed over in Spring 1987 to a new GSM body called IDEG (the Implementation of Data and Telematic Services Experts Group), which had its kickoff in May 1987 under the chairmanship of Friedhelm Hillebrand (German Telecom). The technical standard known today was largely created by IDEG (later WP4) as the two recommendations GSM 03.40 (the two point-to-point services merged) and GSM 03.41 (cell broadcast).
WP4 created a Drafting Group Message Handling (DGMH), which was responsible for the specification of SMS. Finn Trosby of Telenor chaired the draft group through its first 3 years, in which the design of SMS was established. DGMH had five to eight participants, and Finn Trosby mentions as major contributors Kevin Holley, Eija Altonen, Didier Luizard and Alan Cox. The first action plan mentions for the first time the Technical Specification 03.40 "Technical Realisation of the Short Message Service". Responsible editor was Finn Trosby. The first and very rudimentary draft of the technical specification was completed in November 1987. However, drafts useful for the manufacturers followed at a later stage in the period. A comprehensive description of the work in this period is given in.
The work on the draft specification continued in the following few years, where Kevin Holley of Cellnet (now Telefónica O2 UK) played a leading role. Besides the completion of the main specification GSM 03.40, the detailed protocol specifications on the system interfaces also needed to be completed.
The Mobile Application Part (MAP) of the SS7 protocol included support for the transport of Short Messages through the Core Network from its inception. MAP Phase 2 expanded support for SMS by introducing a separate operation code for Mobile Terminated Short Message transport. Since Phase 2, there have been no changes to the Short Message operation packages in MAP, although other operation packages have been enhanced to support CAMEL SMS control.
From 3GPP Releases 99 and 4 onwards, CAMEL Phase 3 introduced the ability for the Intelligent Network (IN) to control aspects of the Mobile Originated Short Message Service, while CAMEL Phase 4, as part of 3GPP Release 5 and onwards, provides the IN with the ability to control the Mobile Terminated service. CAMEL allows the gsmSCP to block the submission (MO) or delivery (MT) of Short Messages, route messages to destinations other than that specified by the user, and perform real-time billing for the use of the service. Prior to standardized CAMEL control of the Short Message Service, IN control relied on switch vendor specific extensions to the Intelligent Network Application Part (INAP) of SS7.
The first SMS message was sent over the Vodafone GSM network in the United Kingdom on 3 December 1992, from Neil Papworth of Sema Group (now Mavenir Systems) using a personal computer to Richard Jarvis of Vodafone using an Orbitel 901 handset. The text of the message was "Merry Christmas."
The first commercial deployment of a short message service center (SMSC) was by Aldiscon part of Logica (now part of CGI) with Telia (now TeliaSonera) in Sweden in 1993, followed by Fleet Call (now Nextel) in the US, Telenor in Norway and BT Cellnet (now O2 UK) later in 1993. All first installations of SMS gateways were for network notifications sent to mobile phones, usually to inform of voice mail messages.
The first commercially sold SMS service was offered to consumers, as a person-to-person text messaging service by Radiolinja (now part of Elisa) in Finland in 1993. Most early GSM mobile phone handsets did not support the ability to send SMS text messages, and Nokia was the only handset manufacturer whose total GSM phone line in 1993 supported user-sending of SMS text messages. According to Matti Makkonen, an engineer at Nokia at the time, the Nokia 2010, which was released in January 1994, was the first mobile phone to support composing SMSes easily.
Initial growth was slow, with customers in 1995 sending on average only 0.4 messages per GSM customer per month. One factor in the slow takeup of SMS was that operators were slow to set up charging systems, especially for prepaid subscribers, and to eliminate billing fraud, which was possible by changing SMSC settings on individual handsets to use the SMSCs of other operators. Initially, networks in the UK only allowed customers to send messages to other users on the same network, limiting the usefulness of the service. This restriction was lifted in 1999.
Over time, this issue was eliminated by switch billing instead of billing at the SMSC and by new features within SMSCs to allow blocking of foreign mobile users sending messages through it. By the end of 2000, the average number of messages reached 35 per user per month, and on Christmas Day 2006, over 205 million messages were sent in the UK alone.
SMS was originally designed as part of GSM, but is now available on a wide range of networks, including 3G networks. However, not all text messaging systems use SMS, and some notable alternative implementations of the concept include J-Phone's "SkyMail" and NTT Docomo's "Short Mail", both in Japan. Email messaging from phones, as popularized by NTT Docomo's i-mode and the RIM BlackBerry, also typically uses standard mail protocols such as SMTP over TCP/IP.
In 2010, 6.1 trillion (6.1 × 10¹²) SMS text messages were sent, an average of 193,000 SMS per second. SMS has become a large commercial industry, earning $114.6 billion globally in 2010. The global average price for an SMS message is US$0.11, while mobile networks charge each other interconnect fees of at least US$0.04 when connecting between different phone networks.
In 2015, the actual cost of sending an SMS in Australia was found to be $0.00016 per SMS.
In 2014, Caktus Group developed the world's first SMS-based voter registration system in Libya. So far, more than 1.5 million people have registered using that system, providing Libyan voters with unprecedented access to the democratic process.
While SMS is still a growing market, it is being increasingly challenged by Internet Protocol-based messaging services such as Apple's iMessage, Facebook Messenger, WhatsApp, Viber, WeChat (in China) and Line (in Japan), available on smart phones with data connections. It has been reported that over 97% of smart phone owners use alternative messaging services at least once a day. However, in the U.S. these Internet-based services have not caught on as much, and SMS continues to be highly popular there.
SMS enablement allows individuals to send an SMS message to a business phone number (traditional landline) and receive an SMS in return. Providing customers with the ability to text a phone number allows organizations to offer new services that deliver value. Examples include chat bots and text-enabled customer service and call centers.
The "Short Message Service—Point to Point (SMS-PP)"—was originally defined in GSM recommendation 03.40, which is now maintained in 3GPP as TS 23.040. GSM 03.41 (now 3GPP TS 23.041) defines the "Short Message Service—Cell Broadcast (SMS-CB)", which allows messages (advertising, public information, etc.) to be broadcast to all mobile users in a specified geographical area.
Messages are sent to a short message service center (SMSC), which provides a "store and forward" mechanism. The SMSC attempts to deliver the message to the recipient. If a recipient is not reachable, the SMSC queues the message for later retry. Some SMSCs also provide a "forward and forget" option where transmission is tried only once. Both mobile terminated (MT, for messages sent "to" a mobile handset) and mobile originating (MO, for those sent "from" the mobile handset) operations are supported. Message delivery is "best effort", so there are no guarantees that a message will actually be delivered to its recipient, but delay or complete loss of a message is uncommon, typically affecting less than 5 percent of messages. Some providers allow users to request delivery reports, either via the SMS settings of most modern phones, or by prefixing each message with *0# or *N#. However, the exact meaning of confirmations varies from reaching the network, to being queued for sending, to being sent, to receiving a confirmation of receipt from the target device, and users are often not informed of the specific type of success being reported.
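A rough sketch of this store-and-forward behaviour is shown below. It is a minimal Python illustration with invented names and an artificial success rate, not actual SMSC code:

    import random
    from collections import deque

    MAX_ATTEMPTS = 5     # a "forward and forget" SMSC would effectively use 1

    def try_deliver(msg):
        # Stand-in for a real MT delivery attempt; here it succeeds 80% of the time.
        return random.random() < 0.8

    def store_and_forward(bodies):
        queue = deque({"body": b, "attempts": 0} for b in bodies)
        dropped = []
        while queue:
            msg = queue.popleft()
            if try_deliver(msg):
                continue                      # delivered
            msg["attempts"] += 1
            if msg["attempts"] < MAX_ATTEMPTS:
                queue.append(msg)             # recipient unreachable: keep it and retry later
            else:
                dropped.append(msg)           # best effort: message is eventually discarded
        return dropped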
SMS is a stateless communication protocol in which every SMS message is considered entirely independent of other messages. Enterprise applications using SMS as a communication channel for stateful dialogue (where an MO reply message is paired to a specific MT message) require that session management be maintained external to the protocol.
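For illustration only, a minimal hypothetical session store that pairs an outbound (MT) question with the inbound (MO) reply expected from the same subscriber might look like the sketch below; the function names and data structure are assumptions, not part of any standard:

    # msisdn -> identifier of the pending MT question sent to that subscriber
    sessions = {}

    def send_question(msisdn, text, message_id):
        sessions[msisdn] = message_id          # remember what we asked this subscriber
        # ... submit the MT message to the SMSC here ...

    def on_mo_reply(msisdn, text):
        pending = sessions.pop(msisdn, None)   # SMS itself carries no session identifier,
        if pending is None:                    # so the pairing has to be done here
            return None
        return pending, text                   # reply is now matched to the original MT message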
Transmission of short messages between the SMSC and the handset is done using the Mobile Application Part (MAP) of the SS7 protocol. Messages are sent with the MAP MO- and MT-ForwardSM operations, whose payload length is limited by the constraints of the signaling protocol to precisely 140 bytes (140 bytes × 8 bits/byte = 1120 bits).
Short messages can be encoded using a variety of alphabets: the default GSM 7-bit alphabet, the 8-bit data alphabet, and the 16-bit UCS-2 alphabet. Depending on which alphabet the subscriber has configured in the handset, this leads to the maximum individual short message sizes of 160 7-bit characters, 140 8-bit characters, or 70 16-bit characters. GSM 7-bit alphabet support is mandatory for GSM handsets and network elements, but characters in languages such as Hindi, Arabic, Chinese, Korean, Japanese, or Cyrillic alphabet languages (e.g., Russian, Ukrainian, Serbian, Bulgarian, etc.) must be encoded using the 16-bit UCS-2 character encoding (see Unicode). Routing data and other metadata is additional to the payload size.
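These limits follow directly from the 140-byte payload. The following sketch illustrates the arithmetic, approximating the GSM 7-bit repertoire with plain ASCII for brevity (the real alphabets differ in detail):

    PAYLOAD_BITS = 140 * 8          # 1120 bits available in a single SMS

    def single_sms_limit(text):
        # Approximation: treat plain ASCII as GSM 7-bit; anything else as UCS-2.
        if all(ord(c) < 128 for c in text):
            return PAYLOAD_BITS // 7    # 160 characters at 7 bits each
        return PAYLOAD_BITS // 16       # 70 characters at 16 bits each (UCS-2)

    print(single_sms_limit("Merry Christmas"))   # -> 160
    print(single_sms_limit("Привет"))            # -> 70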
Larger content (concatenated SMS, multipart or segmented SMS, or "long SMS") can be sent using multiple messages, in which case each message will start with a User Data Header (UDH) containing segmentation information. Since UDH is part of the payload, the number of available characters per segment is lower: 153 for 7-bit encoding, 134 for 8-bit encoding and 67 for 16-bit encoding. The receiving handset is then responsible for reassembling the message and presenting it to the user as one long message. While the standard theoretically permits up to 255 segments, 10 segments is the practical maximum with some carriers, and long messages are often billed as equivalent to multiple SMS messages. Some providers have offered length-oriented pricing schemes for messages, although that type of pricing structure is rapidly disappearing.
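The segment arithmetic can be illustrated with a short sketch using the per-segment limits quoted above (illustrative only; real encoders must also handle characters that straddle segment boundaries):

    import math

    # (single-message limit, per-segment limit once the UDH is included)
    LIMITS = {"gsm7": (160, 153), "8bit": (140, 134), "ucs2": (70, 67)}

    def segments_needed(length, encoding):
        single, per_segment = LIMITS[encoding]
        if length <= single:
            return 1                                  # fits without a UDH
        return math.ceil(length / per_segment)        # concatenated ("long") SMS

    print(segments_needed(160, "gsm7"))   # -> 1
    print(segments_needed(161, "gsm7"))   # -> 2 (2 x 153 = 306 characters available)
    print(segments_needed(200, "ucs2"))   # -> 3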
SMS gateway providers facilitate SMS traffic between businesses and mobile subscribers, including SMS for enterprises, content delivery, and entertainment services involving SMS, e.g. TV voting. Considering SMS messaging performance and cost, as well as the level of messaging services, SMS gateway providers can be classified as aggregators or SS7 providers.
The aggregator model is based on multiple agreements with mobile carriers to exchange two-way SMS traffic into and out of the operator's SMSC, also known as the "local termination model". Aggregators lack direct access to the SS7 network, over which SMS messages are exchanged. SMS messages are delivered to the operator's SMSC, but not the subscriber's handset; the SMSC takes care of further handling of the message through the SS7 network.
Another type of SMS gateway provider is based on SS7 connectivity to route SMS messages, also known as the "international termination model". The advantage of this model is the ability to route data directly through SS7, which gives the provider total control and visibility of the complete path during SMS routing. This means SMS messages can be sent directly to and from recipients without having to go through the SMSCs of other mobile operators. It is therefore possible to avoid delays and message losses, offering full delivery guarantees of messages and optimized routing. This model is particularly efficient when used in mission-critical messaging and SMS used in corporate communications. These SMS gateway providers also offer branded SMS services with sender-ID masking, but after misuse of such gateways, governments in most countries have taken steps to block them.
Message Service Centers communicate with the Public Land Mobile Network (PLMN) or PSTN via Interworking and Gateway MSCs.
Subscriber-originated messages are transported from a handset to a service center, and may be destined for mobile users, subscribers on a fixed network, or Value-Added Service Providers (VASPs), also known as application-terminated. Subscriber-terminated messages are transported from the service center to the destination handset, and may originate from mobile users, from fixed network subscribers, or from other sources such as VASPs.
On some carriers nonsubscribers can send messages to a subscriber's phone using an Email-to-SMS gateway. Additionally, many carriers, including AT&T Mobility, T-Mobile USA, Sprint, and Verizon Wireless, offer the ability to do this through their respective web sites.
For example, an AT&T subscriber whose phone number was 555-555-5555 would receive e-mails addressed to 5555555555@txt.att.net as text messages. Subscribers can easily reply to these SMS messages, and the SMS reply is sent back to the original email address. Sending email to SMS is free for the sender, but the recipient is subject to the standard delivery charges. Only the first 160 characters of an email message can be delivered to a phone, and only 160 characters can be sent from a phone. However, longer messages may be broken up into multiple texts, depending upon the telephone service provider.
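As an illustration only, sending such an email-to-SMS message from a script might look like the sketch below; the phone number, the outgoing mail server and the sender address are placeholders, and the gateway domain depends on the recipient's carrier:

    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "sender@example.com"
    msg["To"] = "5555555555@txt.att.net"        # number@carrier-gateway, as described above
    msg["Subject"] = ""                          # subjects count against the 160-character limit
    msg.set_content("Meeting moved to 3pm.")     # keep the body short to avoid splitting

    with smtplib.SMTP("smtp.example.com") as server:   # assumed outgoing mail server
        server.send_message(msg)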
Text-enabled fixed-line handsets are required to receive messages in text format. However, messages can be delivered to nonenabled phones using text-to-speech conversion.
Short messages can send binary content such as ringtones or logos, as well as Over-the-air programming (OTA) or configuration data. Such uses are a vendor-specific extension of the GSM specification and there are multiple competing standards, although Nokia's Smart Messaging is common. An alternative way for sending such binary content is EMS messaging, which is standardized and not dependent on vendors.
SMS is used for M2M (machine-to-machine) communication. For instance, some LED display boards are controlled by SMS, and some vehicle-tracking companies use SMS for their data transport or telemetry needs. SMS usage for these purposes is slowly being superseded by GPRS services owing to their lower overall cost. GPRS is offered by smaller telco players as a route for sending SMS text to reduce the cost of texting internationally.
Many mobile and satellite transceiver units support the sending and receiving of SMS using an extended version of the Hayes command set. The extensions were standardised as part of the GSM Standards and extended as part of the 3GPP standards process.
The connection between the terminal equipment and the transceiver can be realized with a serial cable (e.g., USB), a Bluetooth link, an infrared link, etc. Common AT commands include AT+CMGS (send message), AT+CMSS (send message from storage), AT+CMGL (list messages) and AT+CMGR (read message).
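A minimal sketch of sending one message in text mode through a serial-attached GSM modem is shown below; it assumes the pyserial package, and the port name, baud rate and phone number are placeholders:

    import serial   # pyserial, assumed to be installed

    with serial.Serial("/dev/ttyUSB0", 115200, timeout=5) as modem:
        modem.write(b"AT+CMGF=1\r")                    # select SMS text mode
        modem.read_until(b"OK")
        modem.write(b'AT+CMGS="+15555555555"\r')       # AT+CMGS: send message
        modem.read_until(b"> ")                        # modem prompts for the body
        modem.write(b"Hello from AT commands\x1a")     # Ctrl+Z (0x1A) terminates the message
        print(modem.read_until(b"OK"))                 # e.g. +CMGS: <message reference>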
However, not all modern devices support receiving of messages if the message storage (for instance the device's internal memory) is not accessible using AT commands.
Short messages are commonly used to provide premium-rate services to subscribers of a telephone network.
Mobile-terminated short messages can be used to deliver digital content such as news alerts, financial information, logos, and ring tones. The first premium-rate media content delivered via the SMS system was the world's first paid downloadable ringing tones, as commercially launched by Saunalahti (later Jippii Group, now part of Elisa Group), in 1998. Initially, only Nokia-branded phones could handle them. By 2002, the ringtone business globally had exceeded $1 billion of service revenues, and nearly US$5 billion by 2008. Today, they are also used to make smaller payments online—for example, for file-sharing services, in mobile application stores, or for VIP section entrance. Outside the online world, one can buy a bus ticket or beverages from an ATM, pay a parking ticket, order a store catalog or some goods (e.g., discount movie DVDs), make a donation to charity, and much more.
Premium-rated messages are also used in the Donors Message Service to collect money for charities and foundations. DMS was first launched on April 1, 2004, and is very popular in the Czech Republic. For example, Czech people sent over 1.5 million messages to help South Asia recover from the 2004 Indian Ocean earthquake and tsunami.
The value-added service provider (VASP) providing the content submits the message to the mobile operator's SMSC(s) using a TCP/IP protocol such as the Short Message Peer-to-Peer protocol (SMPP) or the External Machine Interface (EMI). The SMSC delivers the text using the normal Mobile Terminated delivery procedure. The subscribers are charged extra for receiving this premium content; the revenue is typically divided between the mobile network operator and the VASP, either through revenue share or a fixed transport fee. Submission to the SMSC is usually handled by a third party.
Mobile-originated short messages may also be used in a premium-rated manner for services such as televoting. In this case, the VASP providing the service obtains a short code from the telephone network operator, and subscribers send texts to that number. The payouts to the carriers vary by carrier; percentages paid are greatest on the lowest-priced premium SMS services. Most information providers should expect to pay about 45 percent of the cost of the premium SMS up front to the carrier. The submission of the text to the SMSC is identical to a standard MO Short Message submission, but once the text is at the SMSC, the Service Center (SC) identifies the Short Code as a premium service. The SC will then direct the content of the text message to the VASP, typically using an IP protocol such as SMPP or EMI. Subscribers are charged a premium for the sending of such messages, with the revenue typically shared between the network operator and the VASP. Short codes only work within one country; they are not international.
An alternative to inbound SMS is based on long numbers (international number format, such as "+44 762 480 5000"), which can be used in place of short codes for SMS reception in several applications, such as TV voting, product promotions and campaigns. Long numbers work internationally and allow businesses to use their own numbers rather than short codes, which are usually shared across many brands. Additionally, long numbers are nonpremium inbound numbers.
Threaded SMS is a visual styling orientation of SMS message history that arranges messages to and from a contact in chronological order on a single screen.
It was first invented by a developer working to implement the SMS client for the BlackBerry, who was looking to make use of the blank screen left below the message on a device with a larger screen capable of displaying far more than the usual 160 characters, and was inspired by threaded Reply conversations in email.
Visually, this style of representation provides a back-and-forth chat-like history for each individual contact. Hierarchical-threading at the conversation-level (as typical in blogs and on-line messaging boards) is not widely supported by SMS messaging clients. This limitation is due to the fact that there is no session identifier or subject-line passed back and forth between sent and received messages in the header data (as specified by SMS protocol) from which the client device can properly thread an incoming message to a specific dialogue, or even to a specific message within a dialogue.
Most smart phone text-messaging-clients are able to create some contextual threading of "group messages" which narrows the context of the thread around the common interests shared by group members. On the other hand, advanced enterprise messaging applications which push messages from a remote server often display a dynamically changing reply number (multiple numbers used by the same sender), which is used along with the sender's phone number to create session-tracking capabilities analogous to the functionality that cookies provide for web-browsing. As one pervasive example, this technique is used to extend the functionality of many Instant Messenger (IM) applications such that they are able to communicate over two-way dialogues with the much larger SMS user-base. In cases where multiple reply numbers are used by the enterprise server to maintain the dialogue, the visual conversation threading on the client may be separated into multiple threads.
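A hypothetical sketch of this dynamic reply-number technique is shown below; the numbers and data structures are invented for illustration and do not reflect any particular product. The pair (subscriber number, reply number used) acts like a cookie identifying a dialogue:

    REPLY_NUMBERS = ["+15550001", "+15550002", "+15550003"]   # pool owned by the server

    active = {}    # (subscriber, reply_number) -> dialogue identifier

    def start_dialogue(subscriber, dialogue_id):
        for number in REPLY_NUMBERS:
            if (subscriber, number) not in active:      # pick a free number for this user
                active[(subscriber, number)] = dialogue_id
                return number                            # send the MT message *from* this number
        raise RuntimeError("no free reply number for this subscriber")

    def route_mo_reply(subscriber, replied_to):
        # The subscriber replies to whichever number the MT message came from,
        # which tells the server which dialogue the reply belongs to.
        return active.get((subscriber, replied_to))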
While SMS reached its popularity as a person-to-person messaging, another type of SMS is growing fast: application-to-person (A2P) messaging. A2P is a type of SMS sent from a subscriber to an application or sent from an application to a subscriber. It is commonly used by businesses, such as banks, to send SMS messages from their systems to their customers.
In the US, carriers have traditionally preferred that A2P messages be sent using a short code rather than a standard long code. However, multiple US carriers, including Verizon, have recently announced plans to officially support A2P messages over long codes. In the United Kingdom, A2P messages can be sent with a dynamic 11-character sender ID; however, short codes are used for OPTOUT commands. There are specialist companies such as MMG Mobile Marketing Group which provide these services to businesses and enterprises.
All commercial satellite phone networks except ACeS and OptusSat support SMS. While early Iridium handsets only support incoming SMS, later models can also send messages. The price per message varies for different networks. Unlike some mobile phone networks, there is no extra charge for sending international SMS or to send one to a different satellite phone network. SMS can sometimes be sent from areas where the signal is too poor to make a voice call.
Satellite phone networks usually have web-based or email-based SMS portals where one can send free SMS to phones on that particular network.
Unlike dedicated texting systems like the Simple Network Paging Protocol and Motorola's ReFLEX protocol, SMS message delivery is not guaranteed, and many implementations provide no mechanism through which a sender can determine whether an SMS message has been delivered in a timely manner. SMS messages are generally treated as lower-priority traffic than voice, and various studies have shown that around 1% to 5% of messages are lost entirely, even during normal operation conditions, and others may not be delivered until long after their relevance has passed. The use of SMS as an emergency notification service in particular has been questioned.
The Global System for Mobile Communications (GSM), with the greatest worldwide number of users, is subject to several security vulnerabilities. In GSM, only the airway traffic between the Mobile Station (MS) and the Base Transceiver Station (BTS) is optionally encrypted, with a weak and broken stream cipher (A5/1 or A5/2). The authentication is unilateral and also vulnerable. There are many other security vulnerabilities and shortcomings as well. Such vulnerabilities are inherent to SMS as one of the long-established services with global availability in GSM networks. SMS messaging has some extra security vulnerabilities due to its store-and-forward feature, and the problem of fake SMS that can be conducted via the Internet. When a user is roaming, SMS content passes through different networks, perhaps including the Internet, and is exposed to various vulnerabilities and attacks. Another concern arises when an adversary gains access to a phone and reads the previous unprotected messages.
In October 2005, researchers from Pennsylvania State University published an analysis of vulnerabilities in SMS-capable cellular networks. The researchers speculated that attackers might exploit the open functionality of these networks to disrupt them or cause them to fail, possibly on a nationwide scale.
The GSM industry has identified a number of potential fraud attacks on mobile operators that can be delivered via abuse of SMS messaging services. The most serious threat is SMS Spoofing, which occurs when a fraudster manipulates address information in order to impersonate a user that has roamed onto a foreign network and is submitting messages to the home network. Frequently, these messages are addressed to destinations outside the home network—with the home SMSC essentially being "hijacked" to send messages into other networks.
The only sure way of detecting and blocking spoofed messages is to screen incoming mobile-originated messages to verify that the sender is a valid subscriber and that the message is coming from a valid and correct location. This can be implemented by adding an intelligent routing function to the network that can query originating subscriber details from the home location register (HLR) before the message is submitted for delivery. This kind of intelligent routing function is beyond the capabilities of legacy messaging infrastructure.
In an effort to limit telemarketers who had taken to bombarding users with hordes of unsolicited messages, India introduced new regulations in September 2011, including a cap of 3,000 SMS messages per subscriber per month, or an average of 100 per subscriber per day. Following representations received from some service providers and consumers, TRAI (Telecom Regulatory Authority of India) raised this limit to 200 SMS messages per SIM per day for prepaid services, and up to 6,000 SMS messages per SIM per month for postpaid services, with effect from 1 November 2011. However, the cap was later ruled unconstitutional by the Delhi High Court, although some limitations remain.
A Flash SMS is a type of SMS that appears directly on the main screen without user interaction and is not automatically stored in the inbox. It can be useful in emergencies, such as a fire alarm or cases of confidentiality, as in delivering one-time passwords.
In Germany in 2010 almost half a million "silent SMS" messages were sent by the federal police, customs and the secret service "Verfassungsschutz" (offices for protection of the constitution). These silent messages, also known as "silent TMS", "stealth SMS", "stealth ping" or "Short Message Type 0", are used to locate a person and thus to create a complete movement profile. They do not show up on a display, nor trigger any acoustical signal when received. Their primary purpose was to deliver special services of the network operator to any cell phone. | https://en.wikipedia.org/wiki?curid=28207 |
Santa Monica, California
Santa Monica () is a beachfront city in western Los Angeles County, California, United States. Situated on Santa Monica Bay, it is bordered on three sides by different neighborhoods of the city of Los Angeles: Pacific Palisades to the north, Brentwood on the northeast, West Los Angeles on the east, Mar Vista on the southeast, and Venice on the south. The 2010 U.S. Census population was 89,736. Due in part to a favorable climate, Santa Monica became a famed resort town by the early 20th century. The city has experienced a boom since the late 1980s through the revitalization of its downtown core, significant job growth and increased tourism. Popular tourist sites include the Santa Monica Pier and Pacific Park.
Santa Monica was inhabited by the Tongva people. Santa Monica was called Kecheek in the Tongva language. The first non-indigenous group to set foot in the area was the party of explorer Gaspar de Portolà, who camped near the present-day intersection of Barrington and Ohio Avenues on August 3, 1769. The city is named after the Christian saint Monica, and there are two different accounts of how the name came to be. One says it was named in honor of the feast day of Saint Monica (mother of Saint Augustine), but her feast day is May 4. Another version says it was named by Juan Crespí on account of a pair of springs, the Kuruvungna Springs (Serra Springs), that were reminiscent of the tears Saint Monica shed over her son's early impiety.
Following the Mexican–American War, Mexico signed the Treaty of Guadalupe Hidalgo, which gave Mexicans and Californios living in the state certain unalienable rights. US government sovereignty in California began on February 2, 1848.
In the 1870s, the Los Angeles and Independence Railroad connected Santa Monica with Los Angeles, terminating at a wharf out in the bay. The first town hall was an 1873 brick building, later a beer hall, and now part of the Santa Monica Hostel. It is Santa Monica's oldest extant structure. The town's first hotel, the Santa Monica Hotel, had opened by 1885.
Amusement piers became popular in the first decades of the 20th century and the extensive Pacific Electric Railroad brought people to the city's beaches from across the Greater Los Angeles Area.
Around the start of the 20th century, a growing population of Asian Americans lived in and around Santa Monica and Venice. A Japanese fishing village was near the Long Wharf while small numbers of Chinese lived or worked in Santa Monica and Venice. The two ethnic minorities were often viewed differently by White Americans who were often well-disposed towards the Japanese but condescending towards the Chinese. The Japanese village fishermen were an integral economic part of the Santa Monica Bay community.
Donald Wills Douglas, Sr. built a plant in 1922 at Clover Field (Santa Monica Airport) for the Douglas Aircraft Company. In 1924, four Douglas-built planes took off from Clover Field to attempt the first aerial circumnavigation of the world. Two planes returned after completing the journey in 175 days, and were greeted on their return September 23, 1924, by a crowd of 200,000. The Douglas Company (later McDonnell Douglas) kept facilities in the city until the 1960s.
The Great Depression hit Santa Monica deeply. One report gives citywide employment in 1933 of just 1,000. Hotels and office building owners went bankrupt. In the 1930s, corruption infected Santa Monica (along with neighboring Los Angeles). The federal Works Project Administration helped build several buildings, most notably City Hall. The main Post Office and Barnum Hall (Santa Monica High School auditorium) were also among other WPA projects.
Douglas's business grew with the onset of World War II, employing as many as 44,000 people in 1943. To defend against air attack, set designers from the Warner Brothers Studios prepared elaborate camouflage that disguised the factory and airfield. The RAND Corporation began as a project of the Douglas Company in 1945, and spun off into an independent think tank on May 14, 1948. RAND acquired a 15-acre (61,000 m2) campus between the Civic Center and the pier entrance.
The completion of the Santa Monica Freeway in 1966 decimated the Pico neighborhood that had been a leading African American enclave on the Westside.
Beach volleyball is believed to have been developed by Duke Kahanamoku in Santa Monica during the 1920s.
The Santa Monica Looff Hippodrome (carousel) is a National Historic Landmark. It sits on the Santa Monica Pier, which was built in 1909. The La Monica Ballroom on the pier was once the largest ballroom in the US and the source for many New Year's Eve national network broadcasts. The Santa Monica Civic Auditorium was an important music venue for several decades and hosted the Academy Awards in the 1960s. McCabe's Guitar Shop is a leading acoustic performance space as well as retail outlet. Bergamot Station is a city-owned art gallery compound that includes the Santa Monica Museum of Art. The city is also home to the California Heritage Museum and the Angels Attic dollhouse and toy museum. The New West Symphony is the resident orchestra of Barnum Hall. They are also resident orchestra of the Oxnard Performing Arts Center and the Thousand Oaks Civic Arts Plaza.
Santa Monica has three main shopping districts: Montana Avenue on the north side, the Downtown District in the city's core, and Main Street on the south end. Each has its own unique feel and personality. Montana Avenue is a stretch of luxury boutique stores, restaurants, and small offices that generally features more upscale shopping. The Main Street district offers an eclectic mix of clothing, restaurants, and other specialty retail.
The Downtown District is the home of the Third Street Promenade, a major outdoor pedestrian-only shopping district that stretches for three blocks between Wilshire Blvd. and Broadway (not the same Broadway in downtown and south Los Angeles). Third Street is closed to vehicles for those three blocks to allow people to stroll, congregate, shop and enjoy street performers. Santa Monica Place, featuring Bloomingdale's and Nordstrom in a three-level outdoor environment, is at the Promenade's southern end. After a period of redevelopment, the mall reopened in the fall of 2010 as a modern shopping, entertainment and dining complex with more outdoor space.
Santa Monica hosts the annual Santa Monica Film Festival.
The city's oldest movie theater is the Majestic. Opened in 1912 and also known as the Mayfair Theatre, it has been closed since the 1994 Northridge earthquake. The Aero Theater (now operated by the American Cinematheque) and Criterion Theater were built in the 1930s and still show movies. The Santa Monica Promenade alone supports more than a dozen movie screens.
Palisades Park stretches out along the crumbling bluffs overlooking the Pacific and is a favorite walking area to view the ocean. It includes a totem pole, camera obscura, artwork, benches, picnic areas, pétanque courts, and restrooms.
Tongva Park occupies 6 acres between Ocean Avenue and Main Street, just south of Colorado Avenue. The park includes an overlook, amphitheater, playground, garden, fountains, picnic areas, and restrooms.
The Santa Monica Stairs, a long, steep staircase that leads from north of San Vicente down into Santa Monica Canyon, is a popular spot for outdoor workouts. Some area residents have complained that the stairs have become too popular, and attract too many exercisers to the wealthy neighborhood of multimillion-dollar properties.
Santa Monica has two hospitals: Saint John's Health Center and Santa Monica-UCLA Medical Center. Its cemetery is Woodlawn Memorial.
Santa Monica has several local newspapers including "Santa Monica Daily Press", "Santa Monica Mirror", "Santa Monica Star", and "Santa Monica Observer".
The "Santa Monica Playhouse" is a popular theater in the city.
The city of Santa Monica rests on a mostly flat slope that angles down towards Ocean Avenue and towards the south. High bluffs separate the north side of the city from the beaches. Santa Monica borders the L.A. neighborhoods of Pacific Palisades to the north and Venice to the south. To the west, Santa Monica has a 3-mile coastline fronting Santa Monica Bay, and to the east of the city borders are the Los Angeles communities of West Los Angeles and Brentwood.
Santa Monica has a coastal Mediterranean climate (Köppen "Csb"). Santa Monica enjoys an average of 310 days of sunshine a year. It is in USDA plant hardiness zone 10a. Because of its location, nestled on the vast and open Santa Monica Bay, morning fog is a common phenomenon in May, June, July and early August (caused by ocean temperature variations and currents). Like other inhabitants of the greater Los Angeles area, residents have a particular terminology for this phenomenon: the "May Gray", the "June Gloom" and even "Fogust". Overcast skies are common during June mornings, but usually the strong sun burns the fog off by noon. Daily fog also occurs in late winter and early summer; it can arrive suddenly and may last several hours or persist past sunset. Nonetheless, it will sometimes stay cloudy and cool all day during June, even as other parts of the Los Angeles area enjoy sunny skies and warmer temperatures. At times, the sun can be shining east of 20th Street, while the beach area is overcast. As a general rule, the beach temperature is from 5 to 10 degrees Fahrenheit (3 to 6 degrees Celsius) cooler than it is inland during summer days, and 5–10 degrees warmer during winter nights.
The highest temperatures also tend to be reached in September. It is winter, however, when the hot, dry Santa Ana winds are most common. In contrast, temperatures more than 10 degrees below average are rare.
The rainy season is from late October through late March. Winter storms usually approach from the northwest and pass quickly through the Southland. There is very little rain during the rest of the year. Yearly rainfall totals are unpredictable as rainy years are occasionally followed by droughts. There has never been any snow or frost, but there has been hail.
Santa Monica usually enjoys cool breezes blowing in from the ocean, which tend to keep the air fresh and clean. Therefore, smog is less of a problem for Santa Monica than elsewhere around Los Angeles. However, in the autumn months of September through November, the Santa Ana winds will sometimes blow from the east, bringing smoggy and hot inland air to the beaches.
The city first proposed its Sustainable City Plan in 1992 and in 1994, was one of the first cities in the nation to formally adopt a comprehensive sustainability plan, setting waste reduction and water conservation policies for both public and private sector through its Office of Sustainability and the Environment. Eighty-two percent of the city's public works vehicles run on alternative fuels, including most of the municipal bus system, making it among the largest such fleets in the country. Santa Monica fleet vehicles and buses source their natural gas from Redeem, a Southern California-based supplier of renewable and sustainable natural gas obtained from non-fracked methane biogas generated from organic landfill waste.
Santa Monica adopted a Community Energy Independence Initiative, with a goal of achieving complete energy independence by 2020 (vs. California's already ambitious 33% renewables goal). The city exceeded that aspiration when, in February 2019, it switched over to electricity from the Clean Power Alliance, with a citywide default of 100% renewably sourced energy. That same year, the Santa Monica City Council adopted a Climate Action and Adaptation Plan aimed at achieving an 80% cut in carbon emissions by 2030, and reaching community-wide carbon neutrality by 2050 or sooner.
An urban runoff facility (SMURFF), the first of its kind in the US, catches and treats runoff water that would otherwise flow into the bay via storm drains, and sells it back to end-users within the city for reuse as gray water, while bioswales throughout the city allow rainwater to percolate into and replenish the groundwater. The groundwater supply plays an important role in the city's Sustainable Water Master Plan, whereby Santa Monica has set a goal of attaining 100% water independence by 2020. The city has numerous programs designed to promote water conservation among residents, including a rebate for those who convert lawns to drought-tolerant gardens that require less water.
Santa Monica has also instituted a green building-code whereby merely constructing to code automatically renders a building equivalent to the US Green Building Council's LEED Silver standards. The city's Main Library is one of many LEED certified or LEED equivalent buildings in the city. It is built over a 200,000 gallon cistern that collects filtered stormwater from the roof. The water is used for landscape irrigation.
Since 2009, Santa Monica has been developing the Zero Waste Strategic Operations Plan by which the city will set a goal of diverting at least 95% of all waste away from landfills, and toward recycling and composting, by 2030. The plan includes a food waste composting program, which diverts 3 million pounds of restaurant food waste away from landfills annually. Currently, 77% of all solid waste produced citywide is diverted from landfills.
The city is also in the process of implementing a 5-year and 20 year Bike Action Plan with a goal of attaining 14 to 35% bicycle transportation mode share by 2030 through the installation of enhanced bicycle infrastructure throughout the city. Other environmentally focused initiatives include curbside recycling, curbside composting bins (in addition to trash, yard-waste, and recycle bins), farmers' markets, community gardens, garden-share, an urban forest initiative, a hazardous materials home-collection service, and a green business certification.
Santa Monica's population has grown from 417 in 1880 to 89,736 in 2010.
The 2010 United States Census reported Santa Monica had a population of 89,736. The population density was 10,662.6 people per square mile (4,116.9/km2). The racial makeup of Santa Monica was 69,663 (77.6%) White (70.1% Non-Hispanic White), 3,526 (3.9%) African American, 338 (0.4%) Native American, 8,053 (9.0%) Asian, 124 (0.1%) Pacific Islander, 4,047 (4.5%) from other races, and 3,985 (4.4%) from two or more races. Hispanic or Latino of any race were 11,716 persons (13.1%), with Mexican Americans, Spanish Americans, and Argentine Americans making up 64.2%, 6.4%, and 4.7% of the Hispanic population respectively.
The Census reported 87,610 people (97.6% of the population) lived in households, 1,299 (1.4%) lived in non-institutionalized group quarters, and 827 (0.9%) were institutionalized.
There were 46,917 households, out of which 7,835 (16.7%) had children under the age of 18 living in them, 13,092 (27.9%) were opposite-sex married couples living together, 3,510 (7.5%) had a female householder with no husband present, 1,327 (2.8%) had a male householder with no wife present. There were 2,867 (6.1%) unmarried opposite-sex partnerships, and 416 (0.9%) same-sex married couples or partnerships. 22,716 households (48.4%) were made up of individuals, and 5,551 (11.8%) had someone living alone who was 65 years of age or older. The average household size was 1.87. There were 17,929 families (38.2% of all households); the average family size was 2.79.
The population was spread out, with 12,580 people (14.0%) under the age of 18, 6,442 people (7.2%) aged 18 to 24, 32,552 people (36.3%) aged 25 to 44, 24,746 people (27.6%) aged 45 to 64, and 13,416 people (15.0%) who were 65 years of age or older. The median age was 40.4 years. For every 100 females, there were 93.2 males. For every 100 females age 18 and over, there were 91.2 males.
There were 50,912 housing units at an average density of 6,049.5 per square mile (2,335.7/km2), of which 13,315 (28.4%) were owner-occupied, and 33,602 (71.6%) were occupied by renters. The homeowner vacancy rate was 1.1%; the rental vacancy rate was 5.1%. 30,067 people (33.5% of the population) lived in owner-occupied housing units and 57,543 people (64.1%) lived in rental housing units.
According to the 2010 United States Census, Santa Monica had a median household income of $73,649, with 11.2% of the population living below the federal poverty line.
As of the census of 2000, there were 84,084 people, 44,497 households, and 16,775 families in the city. The population density was 10,178.7 inhabitants per square mile (3,930.4/km2). There were 47,863 housing units at an average density of 5,794.0 per square mile (2,237.3/km2). The racial makeup of the city was 78.29% White, 7.25% Asian, 3.78% African American, 0.47% Native American, 0.10% Pacific Islander, 5.97% from other races, and 4.13% from two or more races. 13.44% of the population were Hispanic or Latino of any race.
There were 44,497 households, out of which 15.8% had children under the age of 18, 27.5% were married couples living together, 7.5% had a female householder with no husband present, and 62.3% were non-families. 51.2% of all households were made up of individuals, and 10.6% had someone living alone who was 65 years of age or older. The average household size was 1.83 and the average family size was 2.80.
The city of Santa Monica is consistently among the most educated cities in the United States, with 23.8 percent of all residents holding graduate degrees.
The population was diverse in age, with 14.6% under 18, 6.1% from 18 to 24, 40.1% from 25 to 44, 24.8% from 45 to 64, and 14.4% 65 years or older. The median age was 39 years. For every 100 females, there were 93.0 males. For every 100 females age 18 and over, there were 91.3 males.
According to a 2009 estimate, the median income for a household in the city was $71,095, and the median income for a family was $109,410. Males had a median income of $55,689 versus $42,948 for females. The per capita income for the city was $42,874. 10.4% of the population and 5.4% of families were below the poverty line. Out of the total population, 9.9% of those under the age of 18 and 10.2% of those 65 and older were living below the poverty line.
In 2006, crime in Santa Monica affected 4.41% of the population, slightly lower than the national average crime rate that year of 4.48%. The majority of this was property crime, which affected 3.74% of Santa Monica's population in 2006; this was higher than the rates for Los Angeles County (2.76%) and California (3.17%), but lower than the national average (3.91%). These per-capita crime rates are computed based on Santa Monica's full-time population of about 85,000. However, the Santa Monica Police Department has suggested the actual per-capita crime rate is much lower, as tourists, workers, and beachgoers can increase the city's daytime population to between 250,000 and 450,000 people.
Violent crimes affected 0.67% of the population in Santa Monica in 2006, in line with Los Angeles County (0.65%), but higher than the averages for California (0.53%) and the nation (0.55%).
Hate crime has typically been minimal in Santa Monica, with only one reported incident in 2007.
However, the city experienced a spike of anti-Islamic hate crime in 2001, following the attacks of September 11. Hate crime levels returned to their minimal 2000 levels by 2002.
In 2006, Santa Monica voters passed "Measure Y" with a 65% majority, which moved the issuance of citations for marijuana smoking to the bottom of the police priority list. A 2009 study by the Santa Monica Daily Press showed since the law took effect in 2007, the Santa Monica Police had "not issued any citations for offenses involving the adult, personal use of marijuana inside private residences."
In June 2011, the Boston gangster Whitey Bulger was arrested in Santa Monica after being a fugitive for 16 years. He had been living in the area for 15 years.
A shooting in Santa Monica in 2013 left six (including the perpetrator) dead and five more injured.
The Pico neighborhood of Santa Monica (south of the Santa Monica Freeway) experiences some gang activity. The city estimates there are about 50 gang members based in Santa Monica, although some community organizers dispute this claim. Gang activity has been prevalent for decades in the Pico neighborhood.
In October 1998, alleged Culver City 13 gang member Omar Sevilla, 21, of Culver City was killed. A couple of hours after the shooting of Sevilla, German tourist Horst Fietze was killed. Several days later Juan Martin Campos, age 23, a Santa Monica city employee, was shot and killed. Police believe this was a retaliatory killing in response to the death of Omar Sevilla. Less than twenty-four hours later, Javier Cruz was wounded in a drive-by shooting outside his home on 17th and Michigan.
In 1999, there was a double homicide in the Westside Clothing store on Lincoln Boulevard. During the incident, Culver City gang members David "Puppet" Robles and Jesse "Psycho" Garcia entered the store masked and began opening fire, killing Anthony and Michael Juarez. They then ran outside to a getaway vehicle driven by a third Culver City gang member, who is now also in custody. The clothing store was believed to be a local hang out for Santa Monica gang members. The dead included two men from Northern California who had merely been visiting the store's owner, their cousin, to see if they could open a similar store in their area. Police say the incident was in retaliation for a shooting committed by the Santa Monica 13 gang days before the Juarez brothers were gunned down.
Aside from the rivalry with the Culver City gang, gang members also feud with the Venice and West Los Angeles gangs. The main rivals in these regions include Venice 13, Graveyard Gangster Crips, Hell's Bandidos and Venice Shoreline Crips gangs in the Oakwood area of Venice, California.
The Santa Monica-Malibu Unified School District provides public education at the elementary and secondary levels. In addition to the traditional model of early education school houses, SMASH (Santa Monica Alternative School House) is "a K-8 public school of choice with team teachers and multi-aged classrooms".
The district maintains eight public elementary schools in Santa Monica:
The district maintains three public middle schools in Santa Monica: John Adams Middle School, Lincoln Middle School and SMASH.
The district maintains three high schools in Santa Monica: Olympic High School, Malibu High School and Santa Monica High School.
Private schools in the city include:
Asahi Gakuen, a weekend Japanese supplementary school system, operates its Santa Monica campus (サンタモニカ校・高等部 "Santamonika-kō kōtōbu") at Webster Middle in the Sawtelle neighborhood of Los Angeles. All high school classes in the Asahi Gakuen system are held at the Santa Monica campus. As of 1986, students take buses from as far away as Orange County to go to the high school classes of the Santa Monica campus.
Santa Monica College is a community college founded in 1929. Many SMC graduates transfer to the University of California system. It occupies 35 acres (14 hectares) and enrolls 30,000 students annually. The Frederick S. Pardee RAND Graduate School, associated with the RAND Corporation, is the U.S.'s largest producer of public policy PhDs. The Art Institute of California – Los Angeles is also in Santa Monica near the Santa Monica Airport.
Universities and colleges near Santa Monica include Santa Monica College, Antioch University Los Angeles, Loyola Marymount University, Mount St. Mary's University, Pepperdine University, California State University, Northridge, California State University, Los Angeles, UCLA, USC, West Los Angeles College, California Institute of Technology (Caltech), Occidental College (Oxy), Los Angeles City College, Los Angeles Southwest College, Los Angeles Valley College, and Emperor's College of Traditional Oriental Medicine.
The Santa Monica Public Library consists of a Main Library in the downtown area, plus four neighborhood branches: Fairview, Montana Avenue, Ocean Park, and Pico Boulevard.
Santa Monica has a bike action plan and launched a bicycle sharing system in November 2015. The city is traversed by the Marvin Braude Bike Trail. Santa Monica has received the Bicycle Friendly Community Award (Bronze in 2009, Silver in 2013) by the League of American Bicyclists. Local bicycle advocacy organizations include Santa Monica Spoke, a local chapter of the Los Angeles County Bicycle Coalition. Santa Monica is thought to be one of the leaders for bicycle infrastructure and programming in Los Angeles County although cycling infrastructure in Los Angeles County in general remains very poor compared to other major cities.
In terms of number of bicycle accidents, Santa Monica ranks as one of the worst (#2) out of 102 California cities with population 50,000–100,000, a ranking consistent with the city's composite ranking.
In 2007 and 2008, local police cracked down on Santa Monica Critical Mass rides that had become controversial, putting a damper on the tradition.
In August 2018, Santa Monica issued permits to Bird, Lime, Lyft, and Jump Bikes to operate dockless scooter-sharing systems in the city.
The Santa Monica Freeway (Interstate 10) begins in Santa Monica near the Pacific Ocean and heads east. The Santa Monica Freeway between Santa Monica and downtown Los Angeles has the distinction of being one of the busiest highways in all of North America. After traversing the Greater Los Angeles area, I-10 crosses seven more states, terminating at Jacksonville, Florida. In Santa Monica, there is a road sign designating this route as the Christopher Columbus Transcontinental Highway. State Route 2 (Santa Monica Boulevard) begins in Santa Monica, barely grazing State Route 1 at Lincoln Boulevard, and continues northeast across Los Angeles County, through the Angeles National Forest, crossing the San Gabriel Mountains as the Angeles Crest Highway, ending in Wrightwood. Santa Monica is also the western terminus of Historic U.S. Route 66. Close to the eastern boundary of Santa Monica, Sepulveda Boulevard reaches from Long Beach at the south, to the northern end of the San Fernando Valley. Just east of Santa Monica is Interstate 405, the San Diego Freeway, a major north–south route in Los Angeles and Orange counties.
The City of Santa Monica has purchased the first ZeroTruck all-electric medium-duty truck. The vehicle is equipped with a Scelzi utility body, is based on the Isuzu N series chassis with a UQM PowerPhase 100 advanced electric motor, and was the only US-built electric truck offered for sale in the United States in 2009.
The city of Santa Monica runs its own bus service, the Big Blue Bus, which also serves much of West Los Angeles and the University of California, Los Angeles (UCLA). A Big Blue Bus was featured prominently in the action movie "Speed".
The city of Santa Monica is also served by the Los Angeles County Metropolitan Transportation Authority's (Metro) bus lines. Metro also complements Big Blue Bus service: when Big Blue routes are not operational overnight, Metro buses make many Big Blue Bus stops in addition to MTA stops.
Design and construction of the Expo Line extension from Culver City to Santa Monica started in September 2011, with service beginning on May 20, 2016. Santa Monica Metro stations include 26th Street/Bergamot, 17th Street/Santa Monica College, and Downtown Santa Monica. Travel time between the downtown Santa Monica and the downtown Los Angeles termini is approximately 47 minutes.
Historical aspects of the Expo line route are noteworthy. It uses the former Los Angeles region's electric interurban Pacific Electric Railway's right-of-way that ran from the Exposition Park area of Los Angeles to Santa Monica. This route was called the Santa Monica Air Line and provided electric-powered freight and passenger service between Los Angeles and Santa Monica beginning in the 1920s. Passenger service was discontinued in 1953, but diesel-powered freight deliveries to warehouses along the route continued until March 11, 1988. The abandonment of the line spurred future transportation considerations and concerns within the community, and the entire right-of-way was purchased from Southern Pacific by Los Angeles Metropolitan Transportation Authority. The line was built in 1875 as the steam-powered Los Angeles and Independence Railroad to bring mining ore to ships in Santa Monica harbor and as a passenger excursion train to the beach.
Since the mid-1980s, various proposals have been made to extend the Purple Line subway to Santa Monica under Wilshire Boulevard. There are no current plans to complete the "subway to the sea," an estimated $5 billion project.
The city owns and operates a general aviation airport, Santa Monica Airport, which has been the site of several important aviation achievements. Commercial flights are available for residents at LAX, a few miles south of Santa Monica.
Like other cities in Los Angeles County, Santa Monica is dependent upon the Port of Long Beach and the Port of Los Angeles for international ship cargo. In the 1890s, Santa Monica was once in competition with Wilmington, California, and San Pedro for recognition as the "Port of Los Angeles" (see History of Santa Monica, California).
Two major hospitals are within the Santa Monica city limits, UCLA Santa Monica Hospital and St. John's Hospital. There are four fire stations providing medical and fire response within the city staffed with 6 Paramedic Engines, 1 Truck company, 1 Hazardous Materials team and 1 Urban Search & Rescue team. Santa Monica Fire Department has its own Dispatch Center. Ambulance transportation is provided by McCormick Ambulance Services.
Law enforcement services are provided by the Santa Monica Police Department.
The Los Angeles County Department of Health Services operates the Simms/Mann Health and Wellness Center in Santa Monica. The Department's West Area Health Office is in the Simms/Mann Center.
Santa Monica has a municipal wireless network which provides several free city Wi-Fi hotspots distributed around the City.
Santa Monica is governed by the Santa Monica City Council, a Council-Manager governing body with seven members elected at-large. The mayor is Kevin McKeown, and the Mayor Pro Tempore is Terry O'Day. The other five current council members are Sue Himmelrich, Ted Winterer, Ana Maria Jara, Gleam Davis, and Greg Morena.
In the California State Legislature, Santa Monica is in , and in .
In the United States House of Representatives, Santa Monica is in .
In recent years, Santa Monica has voted Democratic in presidential elections, with the Democrats winning over 70% of the vote in all five presidential elections since 2000. The Republican party, by contrast, failed to reach 25% of the vote in any of those elections, with both John McCain in 2008 and Donald Trump in 2016 failing to reach 20% of the vote. The Libertarian Party has increased its share of the vote in each of the last four presidential elections, earning over 2% in the 2012 and 2016 elections after failing to reach one percent in any of the three prior elections.
Santa Monica is home to the headquarters of many notable businesses, such as Beachbody, Fatburger, Hulu, Illumination Entertainment, Lionsgate Films, Macerich, Miramax, the RAND Corporation, Saban Capital Group, The Recording Academy (which presents the annual Grammy Awards), TOMS Shoes, and Universal Music Group. Atlantic Aviation is at the Santa Monica Airport. The National Public Radio member station KCRW is at the Santa Monica College campus. VCA Animal Hospitals is just outside the eastern city limit.
A number of game development studios are based in Santa Monica, making it a major location for the industry. These include:
Recently, Santa Monica has emerged as the center of the Los Angeles region called Silicon Beach, and serves as the home of hundreds of venture capital funded startup companies.
Former Santa Monica businesses include Douglas Aircraft (now merged with Boeing), GeoCities (which in December 1996 was headquartered on the third floor of 1918 Main Street in Santa Monica), Metro-Goldwyn-Mayer, and MySpace (now headquartered in Beverly Hills).
According to the City's 2012–2013 Comprehensive Annual Financial Report, the top employers in the city are:
The men's and women's marathon ran through parts of Santa Monica during the 1984 Summer Olympics. The Santa Monica Track Club has many prominent track athletes, including many Olympic gold medalists. Santa Monica is the home to Southern California Aquatics, which was founded by Olympic swimmer Clay Evans and Bonnie Adair. Santa Monica is also home to the Santa Monica Rugby Club, a semi-professional team that competes in the Pacific Rugby Premiership, the highest-level rugby union club competition in the United States.
During the 2028 Summer Olympics, Santa Monica will host beach volleyball and surfing.
Hundreds of moving pictures have been shot or set in part within the city of Santa Monica.
One of the oldest exterior shots in Santa Monica is Buster Keaton's "Spite Marriage" (1929) which shows much of 2nd Street. The comedy "It's a Mad, Mad, Mad, Mad World" (1963) included several scenes shot in Santa Monica, including those along the California Incline, which led to the movie's treasure spot, "The Big W". The Sylvester Stallone film "Rocky III" (1982) shows Rocky Balboa and Apollo Creed training to fight Clubber Lang by running on the Santa Monica Beach, and Stallone's "Demolition Man" (1993) includes Santa Monica settings. In "Pee-wee's Big Adventure" (1985), the theft of Pee-wee's bike occurs on the Third Street Promenade. Henry Jaglom's indie "Someone to Love" (1987), the last film in which Orson Welles appeared, takes place in Santa Monica's venerable Mayfair Theatre. "Heathers" (1989) used Santa Monica's John Adams Middle School for many exterior shots. "The Truth About Cats & Dogs" (1996) is set entirely in Santa Monica, particularly the Palisades Park area, and features a radio station that resembles KCRW at Santa Monica College. "17 Again" (2009) was shot at Samohi. Other films that show significant exterior shots of Santa Monica include "Fletch" (1985), "Species" (1995), "Get Shorty" (1995), and "Ocean's Eleven" (2001). Richard Rossi's biopic "Aimee Semple McPherson" opens and closes at the beach in Santa Monica. "Iron Man" features the Santa Monica pier and surrounding communities as Tony Stark tests his experimental flight suit.
The documentary "Dogtown and Z-Boys" (2001) and the related dramatic film "Lords of Dogtown" (2005) are both about the influential skateboarding culture of Santa Monica's Ocean Park neighborhood in the 1970s.
The Santa Monica Pier is shown in many films, including "They Shoot Horses, Don't They?" (1969), "The Sting" (1973), "Ruthless People" (1986), "Beverly Hills Cop III" (1994), "Clean Slate" (1994), "Forrest Gump" (1994), "The Net" (1995), "Love Stinks" (1999), "Cellular" (2004), "" (2006), "Iron Man" (2008) and "" (2009).
The films "The Doors" (1991) and "Speed" (1994) featured vehicles from Santa Monica's Big Blue Bus line, relative to the eras depicted in the films.
The city of Santa Monica (and in particular the Santa Monica Airport) was featured in Roland Emmerich's disaster film "2012" (2009). A magnitude 10.9 earthquake destroys the airport and the surrounding area as a group of survivors escape in a personal plane. The Santa Monica Pier and the whole city sink into the Pacific Ocean after the earthquake.
A number of television series have been set in Santa Monica, including "Baywatch", "Goliath", "Pacific Blue", "Private Practice", and "Three's Company". The Santa Monica pier is shown in the main theme of CBS series "". In "Buffy the Vampire Slayer", the main exterior set of the town of Sunnydale, including the infamous "sun sign", was in Santa Monica in a lot on Olympic Boulevard.
The main character from Edgar Rice Burroughs' fantasy novel, "The Land That Time Forgot" (serialized in 1918 and published in book form in 1924) was a shipbuilder from Santa Monica.
Horace McCoy's 1935 novel "They Shoot Horses, Don't They?" is set at a dance marathon held in a ballroom on the Santa Monica Pier.
Raymond Chandler's most famous character, private detective Philip Marlowe, frequently has a portion of his adventures in a place called "Bay City", which is modeled on Depression-era Santa Monica. In Marlowe's world, Bay City is "a wide-open town", where gambling and other crimes thrive due to a massively corrupt and ineffective police force.
In Gennifer Choldenko's historical fiction novel for young adults, "Al Capone Does My Shirts" (2006), the Flanagans move to Alcatraz from Santa Monica.
Tennessee Williams lived (while working at MGM Studios) in a hotel on Ocean Avenue in the 1940s. At that location he wrote the play "The Glass Menagerie" (that premiered in 1944). His short story, "" (1954), was set near Santa Monica Beach and mentions the clock visible in much of the city, high up on The Broadway Building, on Broadway near Second Street.
Santa Monica is featured in the video games:
"Driver" (1999), "" (2003), " Destroy All Humans! " (2004), "Grand Theft Auto San Andreas" (2004) as a fictional district – Santa Maria Beach, "" (2004), "L.A. Rush" (2005), "Tony Hawk's American Wasteland" (2005), "" (2008), "Cars Race-O-Rama" (2009) as a fictional city – Santa Carburera, "" (2013) as a fictional U.S. military base – Fort Santa Monica, "Grand Theft Auto V" (2013) as a fictional district – Del Perro, "The Crew" (2014), and "Need for Speed" (2015). | https://en.wikipedia.org/wiki?curid=28208 |
Shot put
The shot put is a track and field event involving "putting" (pushing rather than throwing) a heavy spherical ball—the "shot"—as far as possible. The shot put competition for men has been a part of the modern Olympics since their revival in 1896, and women's competition began in 1948.
Homer mentions competitions of rock throwing by soldiers during the Siege of Troy, but there is no record of any dead weights being thrown in Greek competitions. The first evidence for stone- or weight-throwing events was found in the Scottish Highlands, and dates back to approximately the first century. In the 16th century, King Henry VIII was noted for his prowess in court competitions of weight and hammer throwing.
The first events resembling the modern shot put likely occurred in the Middle Ages when soldiers held competitions in which they hurled cannonballs. Shot put competitions were first recorded in early 19th century Scotland, and were a part of the British Amateur Championships beginning in 1866.
Competitors take their throw from inside a marked circle 2.135 m (7 ft) in diameter, with a stopboard about 10 cm (4 in) high at the front of the circle. The distance thrown is measured from the inside of the circumference of the circle to the nearest mark made on the ground by the falling shot, with distances rounded down to the nearest centimetre under IAAF and WMA rules.
The following rules (indoor and outdoor) must be adhered to for a legal throw:
Foul throws occur when an athlete:
If at any time the shot loses contact with the neck, then it is technically an illegal put.
The following are either obsolete or non-existent, but commonly believed rules within professional competition:
Shot put competitions have been held at the modern Summer Olympic Games since their inception in 1896, and it is also included as an event in the World Athletics Championships.
Each of these competitions in the modern era has a set number of rounds of throws. Typically there are three qualification rounds to determine qualification for the final. There are then three preliminary rounds in the final with the top eight competitors receiving a further three throws. Each competitor in the final is credited with their longest throw, regardless of whether it was achieved in the preliminary or final three rounds. The competitor with the longest legal put is declared the winner.
In open competitions the men's shot weighs 7.26 kg (16 lb), and the women's shot weighs 4 kg (8.8 lb). Junior, school, and masters competitions often use different weights of shots, typically below the weights of those used in open competitions; the individual rules for each competition should be consulted in order to determine the correct weights to be used.
Two putting styles are in current general use by shot put competitors: the "glide" and the "spin". With all putting styles, the goal is to release the shot with maximum forward velocity at an angle of approximately forty-five degrees.
The glide technique dates to 1951, when Parry O'Brien of the United States invented a technique that involved the putter facing backwards, rotating 180 degrees across the circle, and then tossing the shot. Unlike the spin, this technique is a linear movement.
With this technique, a right-hand thrower would begin facing the rear of the circle. They would typically adopt a specific type of crouch, involving their bent right leg, in order to begin the throw from a more beneficial posture whilst also isometrically preloading their muscles. The positioning of their bodyweight over their bent leg, which pushes upwards with equal force, generates a preparatory isometric press. The force generated by this press will be channelled into the subsequent throw making it more powerful. To initiate the throw they kick to the front with the left leg, while pushing off forcefully with the right. As the thrower crosses the circle, the hips twist toward the front, the left arm is swung out then pulled back tight, followed by the shoulders, and they then strike in a putting motion with their right arm. The key is to move quickly across the circle with as little air under the feet as possible, hence the name 'glide'.
Also known as the rotational technique, the spin was first practiced in Europe in the 1950s but did not receive much attention until the 1970s. In 1972, Aleksandr Baryshnikov set his first USSR record using a new putting style, the spin ("круговой мах", roughly "circular swing", in Russian), invented by his coach Viktor Alexeyev. The spin involves rotating like a discus thrower and using rotational momentum for power. In 1976, Baryshnikov went on to set a world record of 22.00 m with his spin style, and was the first shot putter to cross the 22-meter mark.
With this technique, a right-hand thrower faces the rear, and begins to spin on the ball of the left foot. The thrower comes around and faces the front of the circle and drives the right foot into the center of the circle. Finally, the thrower reaches for the front of the circle with the left foot, twisting the hips and shoulders like in the glide, and puts the shot.
When the athlete executes the spin, the upper body is twisted hard to the right, so the imaginary lines created by the shoulders and hips are no longer parallel. This action builds up torque, and stretches the muscles, creating an involuntary elasticity in the muscles, providing extra power and momentum. When the athlete prepares to release, the left foot is firmly planted, causing the momentum and energy generated to be conserved, pushing the shot in an upward and outward direction.
Another purpose of the spin is to build up a high rotational speed, by swinging the right leg initially, then to bring all the limbs in tightly, similar to a figure skater bringing in their arms while spinning to increase their speed. Once this fast speed is achieved the shot is released, transferring the energy into the shot put.
Until 2016, no woman had made an Olympic final (top 8) using the spin technique. The first woman to enter a final and win a medal at the Olympics using the spin was Anita Márton.
Currently, most top male shot putters use the spin. However, the glide remains popular, since the technique leads to greater consistency compared to the rotational technique. Almost all throwers start by using the glide. Tomasz Majewski notes that although most athletes use the spin, he and some other top shot putters achieved success using this classic method (for example, he became the first to defend the Olympic title in 56 years).
The men's world record of 23.12 m by Randy Barnes was set with the spin technique, while the second-best all-time put of 23.06 m by Ulf Timmermann was achieved with the glide technique.
The decision to glide or spin may need to be decided on an individual basis, determined by the thrower's size and power. Short throwers may benefit from the spin and taller throwers may benefit from the glide, but many throwers do not follow this guideline.
The shot is made of different kinds of materials depending on its intended use. Materials used include sand, iron, cast iron, solid steel, stainless steel, brass, and synthetic materials like polyvinyl. Some metals are denser than others, making the size of the shot vary. For example, different materials are used to make indoor and outdoor shots – because damage to surroundings must be taken into account – so the latter are smaller. There are various size and weight standards for the implement that depend on the age and gender of the competitors as well as the national customs of the governing body.
The current world record holders are:
The current records held on each continent are:
Below is a list of all other throws equal or superior to 22.43 m:
The best women's throw using the spin technique is 19.87 m, by Anita Márton and Jillian Camarena-Williams.
Below is a list of all other throws equal or superior to 21.49 m:
The following athletes had their performance (inside 21.49 m) annulled due to doping offenses: | https://en.wikipedia.org/wiki?curid=28209 |
Stan Kelly-Bootle
Stanley Bootle, known as Stan Kelly-Bootle (15 September 1929 – 16 April 2014), was a British author, academic, singer-songwriter and computer scientist.
He took his stage name Stan Kelly (he was not known as Stan Kelly-Bootle in folk music circles) from the Irish folk song "Kelly, the boy from Killane". His best-known song is the "Liverpool Lullaby" or "The Mucky Kid" which was recorded in 1965 on the "Three City Four" LP and sung by Marian McKenzie. It was also sung by the Ian Campbell Folk Group on the "Contemporary Campbells" LP. It was later recorded by Judy Collins in 1966 for her album "In My Life". Cilla Black recorded it three years later as the B-side to her pop hit "Conversations". Kelly-Bootle achieved the first postgraduate degree in computer science in 1954, from the University of Cambridge.
Stan Kelly-Bootle was born Stanley Bootle in Liverpool, Lancashire, on 15 September 1929 and grew up in the Wavertree area of the city. His parents were Arthur Bootle and Ada Gallagher.
Kelly-Bootle was schooled at the Liverpool Institute. He spent 1948–1950 doing his national service in the British Army, achieving the rank of Sgt. Instructor in RADAR. He attended Downing College, Cambridge, graduating with a first class degree in Numerical Analysis and Automatic Computing in 1954, the first postgraduate degree in computer science.
In 1950, Kelly-Bootle helped found the St. Lawrence Folk Song Society at Cambridge University. As a folk singer-songwriter, he performed under the name Stan Kelly. He wrote some of his own tunes and also wrote lyrics set to traditional tunes. In the course of his musical career, he made over 200 radio and television appearances, and released several recordings, as well as having his songs recorded by others.
Solo releases include:
Other audio recordings include:
He started his computing career programming the pioneering EDSAC computer, designed and built at Cambridge University. He worked for IBM in the United States and the UK from 1955 to 1970. From 1970 to 1973, he worked as Manager for University Systems for Sperry-UNIVAC. He also lectured at the University of Warwick.
In 1973, Kelly-Bootle left Sperry-UNIVAC and became a freelance consultant, writer and programmer. He was known in the computer community for "The Devil's DP Dictionary" and its second edition, "The Computer Contradictionary" (1995), which he authored. These works are cynical lexicographies in the vein of Ambrose Bierce's "The Devil's Dictionary". Kelly-Bootle authored or co-authored several serious textbooks and tutorials on subjects such as the Motorola 68000 family of CPUs, programming languages including various C compilers, and the Unix operating system. He authored the "Devil's Advocate" column in "UNIX Review" from 1984 to 2000, and had columns in "Computer Language" ("Bit by Bit", 1989–1994), "OS/2 Magazine" ("End Notes", 1994–97) and "Software Development" ("Seamless Quanta", October 1995 – May 1997). He contributed columns and articles to several other computer industry magazines, as well.
Kelly-Bootle's articles for magazines such as "ACM Queue", "AI/Expert", and "UNIX Review" contain examples of word-play, criticism of silly marketing and usage (he refers often to the computer "laxicon"), and commentary on the industry in general. He wrote an online monthly column posted on the Internet. While most of his writing was oriented towards the computer industry, he wrote a few books relating to his other interests, including
Stan Kelly-Bootle died on 16 April 2014, aged 84, in hospital in Oswestry, Shropshire. | https://en.wikipedia.org/wiki?curid=28211 |
Skewness
In probability theory and statistics, skewness is a measure of the asymmetry of the probability distribution of a real-valued random variable about its mean. The skewness value can be positive, zero, negative, or undefined.
For a unimodal distribution, negative skew commonly indicates that the "tail" is on the left side of the distribution, and positive skew indicates that the tail is on the right. In cases where one tail is long but the other tail is fat, skewness does not obey a simple rule. For example, a zero value means that the tails on both sides of the mean balance out overall; this is the case for a symmetric distribution, but can also be true for an asymmetric distribution where one tail is long and thin, and the other is short but fat.
Consider the two distributions in the figure just below. Within each graph, the values on the right side of the distribution taper differently from the values on the left side. These tapering sides are called "tails", and they provide a visual means to determine which of the two kinds of skewness a distribution has:
Skewness in a data series may sometimes be observed not only graphically but by simple inspection of the values. For instance, consider the numeric sequence (49, 50, 51), whose values are evenly distributed around a central value of 50. We can transform this sequence into a negatively skewed distribution by adding a value far below the mean, which is probably a negative outlier, e.g. (40, 49, 50, 51). With this addition, the mean of the sequence becomes 47.5, and the median is 49.5. Based on the formula for nonparametric skew, ("μ" − "ν")/"σ", the skew is negative. Similarly, we can make the sequence positively skewed by adding a value far above the mean, which is probably a positive outlier, e.g. (49, 50, 51, 60), where the mean is 52.5, and the median is 50.5.
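This arithmetic can be checked with a short Python sketch (purely illustrative; the helper name is arbitrary):

import statistics

def nonparametric_skew(values):
    # (mean - median) / standard deviation; the sign indicates the direction of skew
    return (statistics.mean(values) - statistics.median(values)) / statistics.stdev(values)

print(nonparametric_skew([40, 49, 50, 51]))  # negative: mean 47.5 is below the median 49.5
print(nonparametric_skew([49, 50, 51, 60]))  # positive: mean 52.5 is above the median 50.5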
As mentioned earlier, a zero value of skewness for a unimodal distribution does not necessarily imply that the distribution is symmetric. However, a symmetric unimodal or multimodal distribution always has zero skewness.
The skewness is not directly related to the relationship between the mean and median: a distribution with negative skew can have its mean greater than or less than the median, and likewise for positive skew.
In the older notion of nonparametric skew, defined as ("μ" − "ν")/"σ", where "μ" is the mean, "ν" is the median, and "σ" is the standard deviation, the skewness is defined in terms of this relationship: positive/right nonparametric skew means the mean is greater than (to the right of) the median, while negative/left nonparametric skew means the mean is less than (to the left of) the median. However, the modern definition of skewness and the traditional nonparametric definition do not always have the same sign: while they agree for some families of distributions, they differ in some of the cases, and conflating them is misleading.
If the distribution is symmetric, then the mean is equal to the median, and the distribution has zero skewness. If the distribution is both symmetric and unimodal, then the mean = median = mode. This is the case of a coin toss or the series 1,2,3,4... Note, however, that the converse is not true in general, i.e. zero skewness does not imply that the mean is equal to the median.
A 2005 journal article points out: "Many textbooks teach a rule of thumb stating that the mean is right of the median under right skew, and left of the median under left skew. This rule fails with surprising frequency. It can fail in multimodal distributions, or in distributions where one tail is long but the other is heavy. Most commonly, though, the rule fails in discrete distributions where the areas to the left and right of the median are not equal. Such distributions not only contradict the textbook relationship between mean, median, and skew, they also contradict the textbook interpretation of the median."
For example, in the distribution of adult residents across US households, the skew is to the right. However, because the majority of cases are less than or equal to the mode, which is also the median, the mean sits in the heavier left tail. As a result, the rule of thumb that the mean is to the right of the median under right skew fails.
The skewness of a random variable "X" is the third standardized moment, often written "γ"1, defined as:
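In conventional notation (reconstructed here; the symbols are defined in the next sentence), this reads:

$$\gamma_1 = \operatorname{E}\!\left[\left(\frac{X-\mu}{\sigma}\right)^{3}\right] = \frac{\mu_3}{\sigma^{3}} = \frac{\operatorname{E}\!\left[(X-\mu)^{3}\right]}{\left(\operatorname{E}\!\left[(X-\mu)^{2}\right]\right)^{3/2}} = \frac{\kappa_3}{\kappa_2^{3/2}}$$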
where "μ" is the mean, "σ" is the standard deviation, E is the expectation operator, "μ"3 is the third central moment, and "κ""t" are the "t"-th cumulants. It is sometimes referred to as Pearson's moment coefficient of skewness, or simply the moment coefficient of skewness, but should not be confused with Pearson's other skewness statistics (see below). The last equality expresses skewness in terms of the ratio of the third cumulant "κ"3 to the 1.5th power of the second cumulant "κ"2. This is analogous to the definition of kurtosis as the fourth cumulant normalized by the square of the second cumulant.
The skewness is also sometimes denoted Skew["X"].
If "σ" is finite, "μ" is finite too and skewness can be expressed in terms of the non-central moment E["X"3] by expanding the previous formula,
Skewness can be infinite, as when
where the third cumulants are infinite, or as when
where the third cumulant is undefined.
Examples of distributions with finite skewness include the following.
For a sample of "n" values, a natural method of moments estimator of the population skewness is
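One standard way to write this estimator, using the sample quantities defined in the next sentence, is:

$$b_1 = \frac{m_3}{s^{3}} = \frac{\tfrac{1}{n}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^{3}}{\left[\tfrac{1}{n-1}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^{2}\right]^{3/2}}$$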
where "x̄" is the sample mean, "s" is the sample standard deviation, and the numerator "m"3 is the sample third central moment. This formula can be thought of as the average cubed deviation in the sample divided by the cubed sample standard deviation.
Another common definition of the "sample skewness" is
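In terms of the unbiased cumulant estimators named in the next sentence, this can be written as:

$$G_1 = \frac{k_3}{k_2^{3/2}} = \frac{n^{2}}{(n-1)(n-2)}\;\frac{m_3}{s^{3}}$$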
where "k"3 is the unique symmetric unbiased estimator of the third cumulant and "k"2 is the symmetric unbiased estimator of the second cumulant (i.e. the variance).
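Both estimators can be computed with a short NumPy sketch (illustrative only; the function name is arbitrary, and the "G"1 line assumes the standard unbiased-cumulant relations "k"2 = "s"2 and "k"3 = "n"2"m"3/(("n" − 1)("n" − 2))):

import numpy as np

def sample_skewness(x):
    x = np.asarray(x, dtype=float)
    n = x.size
    dev = x - x.mean()
    m3 = np.mean(dev ** 3)                       # biased third central moment
    s = np.sqrt(np.sum(dev ** 2) / (n - 1))      # sample standard deviation
    b1 = m3 / s ** 3                             # method-of-moments estimator
    G1 = (n ** 2 / ((n - 1) * (n - 2))) * m3 / s ** 3  # cumulant-based estimator
    return b1, G1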
In general, the ratios formula_16 and formula_17 are both biased estimators of the population skewness formula_18; their expected values can even have the opposite sign from the true skewness. (For instance, a mixed distribution consisting of very thin Gaussians centred at −99, 0.5, and 2 with weights 0.01, 0.66, and 0.33 has a skewness of about −9.77, but in a sample of 3, formula_17 has an expected value of about 0.32, since usually all three samples are in the positive-valued part of the distribution, which is skewed the other way.) Nevertheless, formula_16 and formula_17 each have obviously the correct expected value of zero for any symmetric distribution with a finite third moment, including a normal distribution.
Under the assumption that the underlying random variable formula_22 is normally distributed, it can be shown that formula_23, i.e., its distribution converges to a normal distribution with mean 0 and variance 6. The variance of the skewness of a random sample of size "n" from a normal distribution is
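For the cumulant-based estimator this variance is usually quoted as (a standard result, stated here without derivation):

$$\operatorname{var}(G_1) = \frac{6n(n-1)}{(n-2)(n+1)(n+3)}$$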
An approximate alternative is 6/"n", but this is inaccurate for small samples.
In normal samples, formula_16 has the smaller variance of the two estimators, with
where "m"2 in the denominator is the (biased) sample second central moment.
The adjusted Fisher–Pearson standardized moment coefficient formula_27 is the version found in Excel and several statistical packages including Minitab, SAS and SPSS.
Skewness is a descriptive statistic that can be used in conjunction with the histogram and the normal quantile plot to characterize the data or distribution.
Skewness indicates the direction and relative magnitude of a distribution's deviation from the normal distribution.
With pronounced skewness, standard statistical inference procedures such as a confidence interval for a mean will be not only incorrect, in the sense that the true coverage level will differ from the nominal (e.g., 95%) level, but they will also result in unequal error probabilities on each side.
Skewness can be used to obtain approximate probabilities and quantiles of distributions (such as value at risk in finance) via the Cornish-Fisher expansion.
Many models assume normal distribution; i.e., data are symmetric about the mean. The normal distribution has a skewness of zero. But in reality, data points may not be perfectly symmetric. So, an understanding of the skewness of the dataset indicates whether deviations from the mean are going to be positive or negative.
D'Agostino's K-squared test is a goodness-of-fit normality test based on sample skewness and sample kurtosis.
Other measures of skewness have been used, including simpler calculations suggested by Karl Pearson (not to be confused with Pearson's moment coefficient of skewness, see above). These other measures are:
The Pearson mode skewness, or first skewness coefficient, is defined as
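With the mean, mode, and standard deviation written as "μ", mode, and "σ", this is:

$$\frac{\mu - \operatorname{mode}}{\sigma}$$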
The Pearson median skewness, or second skewness coefficient, is defined as
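In the same notation, with "ν" the median, this is:

$$\frac{3(\mu - \nu)}{\sigma}$$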
This is a simple multiple of the nonparametric skew.
Bowley's measure of skewness (from 1901), also called Yule's coefficient (from 1912) is defined as:
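Writing "Q"1, "Q"2, and "Q"3 for the lower quartile, median, and upper quartile, it reads:

$$\frac{Q_3 + Q_1 - 2Q_2}{Q_3 - Q_1} \;=\; \frac{\tfrac{Q_3 + Q_1}{2} - Q_2}{\tfrac{Q_3 - Q_1}{2}}$$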
When it is written in the second form above, with the numerator and denominator each divided by 2, it is easier to see that the numerator is the difference between the average of the upper and lower quartiles (a measure of location) and the median (another measure of location), while the denominator is the semi-interquartile range (Q3 − Q1)/2, which for symmetric distributions is the MAD measure of dispersion.
Other names for this measure are Galton's measure of skewness, the Yule–Kendall index, and the quartile skewness.
A more general formulation of a skewness function was described by Groeneveld, R. A. and Meeden, G. (1984):
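A common statement of this function (reconstructed here; Bowley's measure corresponds to "u" = 3/4) is:

$$\gamma(u) = \frac{F^{-1}(u) + F^{-1}(1-u) - 2F^{-1}\!\left(\tfrac{1}{2}\right)}{F^{-1}(u) - F^{-1}(1-u)}$$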
where "F" is the cumulative distribution function. This leads to a corresponding overall measure of skewness defined as the supremum of this over the range 1/2 ≤ "u"
Groeneveld & Meeden have suggested, as an alternative measure of skewness,
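In the notation defined in the next sentence, this measure is usually given as:

$$\frac{\mu - \nu}{\operatorname{E}\bigl[\,|X - \nu|\,\bigr]}$$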
where "μ" is the mean, "ν" is the median, |...| is the absolute value, and "E"() is the expectation operator. This is closely related in form to Pearson's second skewness coefficient.
Use of L-moments in place of moments provides a measure of skewness known as the L-skewness.
A value of skewness equal to zero does not imply that the probability distribution is symmetric. Thus there is a need for another measure of asymmetry that has this property: such a measure was introduced in 2000. It is called distance skewness and denoted by dSkew. If "X" is a random variable taking values in the "d"-dimensional Euclidean space, "X" has finite expectation, "X" is an independent identically distributed copy of "X", and formula_32 denotes the norm in the Euclidean space, then a simple "measure of asymmetry" with respect to location parameter θ is
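Writing "X"′ for the independent copy, a commonly used form is:

$$\operatorname{dSkew}(X) := 1 - \frac{\operatorname{E}\lVert X - X'\rVert}{\operatorname{E}\lVert X + X' - 2\theta\rVert}\qquad \text{when } X \text{ is not equal to } \theta \text{ with probability 1,}$$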
and dSkew("X") := 0 for "X" = θ (with probability 1). Distance skewness is always between 0 and 1, equals 0 if and only if "X" is diagonally symmetric with respect to θ ("X" and 2θ−"X" have the same probability distribution) and equals 1 if and only if X is a constant "c" (formula_34) with probability one. Thus there is a simple consistent statistical test of diagonal symmetry based on the sample distance skewness:
The medcouple is a scale-invariant robust measure of skewness, with a breakdown point of 25%. It is the median of the values of the kernel function
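Writing "x"m for the sample median, the kernel is typically given as:

$$h(x_i, x_j) = \frac{(x_j - x_m) - (x_m - x_i)}{x_j - x_i}$$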
taken over all couples ("x"i, "x"j) such that "x"i ≤ "x"m ≤ "x"j, where "x"m is the median of the sample "x"1, ..., "x"n. It can be seen as the median of all possible quantile skewness measures. | https://en.wikipedia.org/wiki?curid=28212
Serial Experiments Lain
The series focuses on Lain Iwakura, an adolescent middle school girl living in suburban Japan, and her introduction to the Wired, a global communications network which is similar to the Internet. Lain lives with her middle-class family, which consists of her inexpressive older sister Mika, her emotionally distant mother, and her computer-obsessed father; while Lain herself is somewhat awkward, introverted, and socially isolated from most of her school peers. However, the status-quo of her life becomes upturned by a series of bizarre incidents that start to take place after she learns that girls from her school have received an e-mail from a dead student, Chisa Yomoda, and she pulls out her old computer in order to check for the same message. Lain finds Chisa telling her that she is not dead, but has merely "abandoned her physical body and flesh" and is alive deep within the virtual reality-world of the Wired itself, where she has found the almighty and divine "God". From this point, Lain is caught up in a series of cryptic and surreal events that see her delving deeper into the mystery of the network in a narrative that explores themes of consciousness, perception, and the nature of reality.
The "Wired" is a virtual reality-world that contains and supports the very sum of "all" human communication and networks, created with the telegraph, televisions, and telephone services, and expanded with the Internet, cyberspace, and subsequent networks. The series assumes that the Wired could be linked to a system that enables unconscious communication between people and machines without physical interface. The storyline introduces such a system with the Schumann resonances, a property of the Earth's magnetic field that theoretically allows for unhindered long distance communications. If such a link were created, the network would become equivalent to Reality as the general consensus of all perceptions and knowledge. The increasingly thin invisible line between what is real and what is virtual/digital begins to slowly shatter.
Masami Eiri is introduced as the project director on Protocol Seven (the next-generation Internet protocol in the series' time-frame) for major computer company Tachibana General Laboratories. He had secretly included code of his very own creation to give himself absolute control of the Wired through the wireless system described above. He then "uploaded" his own brain, conscience, consciousness, memory, feelings, emotions – his very self – into the Wired and "died" a few days after, leaving only his physical, living body behind. These details are unveiled around the middle of the series, but this is the point where the story of "Serial Experiments Lain" begins. Masami later explains that Lain is the artifact by which the wall between the virtual and material worlds is to fall, and that he needs her to get to the Wired and "abandon the flesh", as he did, to achieve his plan. The series sees him trying to convince her through interventions, using the promise of unconditional love, romantic seduction and charm, and even, when all else fails, threats and force.
In the meantime, the anime follows a complex game of hide-and-seek between the "Knights of the Eastern Calculus", hackers whom Masami claims are "believers that enable him to be a God in the Wired", and Tachibana General Laboratories, who try to regain control of Protocol Seven. In the end, the viewer sees Lain realizing, after much introspection, that she has absolute control over everyone's mind and over reality itself. Her dialogue with different versions of herself shows how she feels shunned from the material world, and how she is afraid to live in the Wired, where she has the possibilities and responsibilities of an almighty goddess. The last scenes feature her erasing everything connected to herself from everyone's memories. She is last seen, unchanged, encountering her oldest and closest friend Alice once again, who is now married. Lain promises herself that she and Alice will surely meet again anytime as Lain can literally go and be anywhere she desires between both worlds.
"Serial Experiments Lain" was conceived, as a series, to be original to the point of it being considered "an enormous risk" by its producer Yasuyuki Ueda.
Producer Ueda had to answer repeated queries about a statement made in an "Animerica" interview. The controversial statement said "Lain" was "a sort of cultural war against American culture and the American sense of values we [Japan] adopted after World War II". He later explained in numerous interviews that he created "Lain" with a set of values he took as distinctly Japanese; he hoped Americans would not understand the series as the Japanese would. This would lead to a "war of ideas" over the meaning of the anime, hopefully culminating in new communication between the two cultures. When he discovered that the American audience held the same views on the series as the Japanese, he was disappointed.
The "Lain" franchise was originally conceived to connect across forms of media (anime, video games, manga). Producer Yasuyuki Ueda said in an interview, "the approach I took for this project was to communicate the essence of the work by the total sum of many media products". The scenario for the video game was written first, and the video game was produced at the same time as the anime series, though the series was released first. A dōjinshi titled "The Nightmare of Fabrication" was produced by Yoshitoshi ABe and released in Japanese in the artbook "Omnipresence in the Wired". Ueda and Konaka declared in an interview that the idea of a multimedia project was not unusual in Japan, as opposed to the contents of "Lain", and the way they are exposed.
The authors were asked in interviews if they had been influenced by "Neon Genesis Evangelion", in the themes and graphic design. This was strictly denied by writer Chiaki J. Konaka in an interview, arguing that he had not seen "Evangelion" until he finished the fourth episode of "Lain". Being primarily a horror movies writer, his stated influences are Godard (especially for using typography on screen), "The Exorcist", "Hell House", and Dan Curtis's "House of Dark Shadows". Alice's name, like the names of her two friends Julie and Reika, came from a previous production from Konaka, "Alice in Cyberland", which in turn was largely influenced by "Alice in Wonderland". As the series developed, Konaka was "surprised" by how close Alice's character became to the original "Wonderland" character.
Vannevar Bush (and memex), John C. Lilly, Timothy Leary and his eight-circuit model of consciousness, Ted Nelson and Project Xanadu are cited as precursors to the Wired. Douglas Rushkoff and his book "Cyberia" were originally to be cited as such, and in "Lain" Cyberia became the name of a nightclub populated with hackers and techno-punk teenagers. Likewise, the series' "deus ex machina" lies in the conjunction of the Schumann resonances and Jung's collective unconscious (the authors chose this term over Kabbalah and Akashic Record). Majestic 12 and the Roswell UFO incident are used as examples of how a hoax might still affect history, even after having been exposed as such, by creating sub-cultures. This links again to Vannevar Bush, the alleged "brains" of MJ12. Two of the literary references in "Lain" are quoted through Lain's father: he first logs onto a website with the password "Think Bule Count One Tow" ("Think Blue, Count Two" is an Instrumentality of Man story featuring virtual persons projected as real ones in people's minds); and his saying that "madeleines would be good with the tea" in the last episode makes "Lain" "perhaps the only cartoon to allude to Proust".
Yoshitoshi ABe confesses to have never read manga as a child, as it was "off-limits" in his household. His major influences are "nature and everything around him". Specifically speaking about Lain's character, ABe was inspired by Kenji Tsuruta, Akihiro Yamada, Range Murata and Yukinobu Hoshino. In a broader view, he has been influenced in his style and technique by Japanese artists Chinai-san and Tabuchi-san.
The character design of Lain was not ABe's sole responsibility. Her distinctive left forelock for instance was a demand from Yasuyuki Ueda. The goal was to produce asymmetry to reflect Lain's unstable and disconcerting nature. It was designed as a mystical symbol, as it is supposed to prevent voices and spirits from being heard by the left ear. The bear pajamas she wears were a demand from character animation director Takahiro Kishida. Though bears are a trademark of the Konaka brothers, Chiaki Konaka first opposed the idea. Director Nakamura then explained how the bear motif could be used as a shield for confrontations with her family. It is a key element of the design of the shy "real world" Lain (see "mental illness" under Themes). When she first goes to the Cyberia nightclub, she wears a bear hat for similar reasons. Retrospectively, Konaka said that Lain's pajamas became a major factor in drawing fans of "moe" characterization to the series, and remarked that "such items may also be important when making anime".
ABe's original design was generally more complicated than what finally appeared on screen. As an example, the X-shaped hairclip was to be an interlocking pattern of gold links. The links would open with a snap, or rotate around an axis until the moment the "X" became an "=". This was not used as there is no scene where Lain takes her hairclip off.
"Serial Experiments Lain" is not a conventionally linear story, but "an alternative anime, with modern themes and realization". Themes range from theological to psychological and are dealt with in a number of ways: from classical dialogue to image-only introspection, passing by direct interrogation of imaginary characters.
Communication, in its wider sense, is one of the main themes of the series, not only as opposed to loneliness, but also as a subject in itself. Writer Konaka said he wanted to directly "communicate human feelings". Director Nakamura wanted to show the audience — and particularly viewers between 14 and 15—"the multidimensional wavelength of the existential self: the relationship between self and the world".
Loneliness, if only as representing a lack of communication, is recurrent through "Lain". Lain herself (according to Anime Jump) is "almost painfully introverted with no friends to speak of at school, a snotty, condescending sister, a strangely apathetic mother, and a father who seems to want to care but is just too damn busy to give her much of his time". Friendships turn on the first rumor; and the only insert song of the series is named "Kodoku no shigunaru", literally "signal of loneliness".
Mental illness, especially dissociative identity disorder, is a significant theme in "Lain": the main character is constantly confronted with alter-egos, to the point where writer Chiaki Konaka and Lain's voice actress Kaori Shimizu had to agree on subdividing the character's dialogues between three different orthographs. The three names designate distinct "versions" of Lain: the real-world, "childish" Lain has a shy attitude and bear pajamas. The "advanced" Lain, her Wired personality, is bold and questioning. Finally, the "evil" Lain is sly and devious, and does everything she can to harm Lain or the ones close to her. As a writing convention, the authors spelled their respective names in kanji, katakana, and roman characters (see picture).
Reality never has the pretense of objectivity in "Lain". Acceptations of the term are battling throughout the series, such as the "natural" reality, defined through normal dialogue between individuals; the material reality; and the tyrannic reality, enforced by one person onto the minds of others. A key debate to all interpretations of the series is to decide whether matter flows from thought, or the opposite. The production staff carefully avoided "the so-called God's Eye Viewpoint" to make clear the "limited field of vision" of the world of "Lain".
Theology plays its part in the development of the story too. "Lain" has been viewed as a questioning of the possibility of an infinite spirit in a finite body. From self-realization as a goddess to deicide, religion (the title of a layer) is an inherent part of "Lain" background.
"Lain" contains extensive references to Apple computers, as the brand was used at the time by most of the creative staff, such as writers, producers, and the graphical team. As an example, the title at the beginning of each episode is announced by the Apple computer speech synthesis program PlainTalk, using the voice ""Whisper"", e.g. codice_1. Tachibana Industries, the company that creates the NAVI computers, is a reference to Apple computers: "tachibana" means "Mandarin orange" in Japanese. NAVI is the abbreviation of Knowledge Navigator, and the HandiNAVI is based on the Apple Newton, one of the world's first PDAs. The NAVIs are seen to run "Copland OS Enterprise" (this reference to Copland was an initiative of Konaka, a declared Apple fan), and Lain's and Alice's NAVIs closely resembles the Twentieth Anniversary Macintosh and the iMac respectively. The HandiNAVI programming language, as seen on the seventh episode, is a dialect of Lisp. Notice that the Newton also used a Lisp dialect (NewtonScript). The program being typed by Lain can be found in the CMU AI repository; it is a simple implementation of Conway's Game of Life in Common Lisp.
During a series of disconnected images, an iMac and the Think Different advertising slogan appears for a short time, while the "Whisper" voice says it. This was an unsolicited insertion from the graphic team, also Mac-enthusiasts. Other subtle allusions can be found: "Close the world, Open the nExt" is the slogan for the "Serial Experiments Lain" video game. NeXT was the company that produced NeXTSTEP, which later evolved into Mac OS X after Apple bought NeXT. Another example is "To Be Continued." at the end of episodes 1–12, with a blue "B" and a red "e" on "Be": "this" "Be" is the original logo of Be Inc., a company founded by ex-Apple employees and NeXT's main competitor in its time.
"Serial Experiments Lain" was first aired on TV Tokyo on July 6, 1998 and concluded on September 28, 1998 with the thirteenth and final episode. The series consists of 13 episodes (referred to in the series as "Layers") of 24 minutes each, except for the sixth episode, "Kids" (23 minutes 14 seconds). In Japan, the episodes were released in LD, VHS, and DVD with a total of five volumes. A DVD compilation named ""Serial Experiments Lain DVD-BOX Яesurrection"" was released along with a promo DVD called ""LPR-309"" in 2000. As this box set is now discontinued, a rerelease was made in 2005 called ""Serial Experiments Lain TV-BOX"". A 4-volume DVD box set was released in the US by Pioneer/Geneon. A Blu-ray release of the anime was made in December 2009 called ""Serial Experiments Lain Blu-ray Box | RESTORE"". The anime series returned to US television on October 15, 2012 on the Funimation Channel.
The series' opening theme, "Duvet", was written and performed by Jasmine Rodgers and the British band Bôa. The ending theme, , was written and composed by Reichi Nakaido.
The anime series was licensed in North America by Pioneer Entertainment (later Geneon USA) on VHS, DVD and LaserDisc in 1999. However, the company closed its USA division in December 2007 and the series went out-of-print as a result. However, at Anime Expo 2010, North American distributor Funimation announced that it had obtained the license to the series and re-released it in 2012. It was also released in Singapore by Odex.
The first original soundtrack, "Serial Experiments Lain Soundtrack", features music by Reichi Nakaido: the ending theme and part of the television series' score, alongside other songs inspired by the series. The second, "Serial Experiments Lain Soundtrack: Cyberia Mix", features electronica songs inspired by the television series, including a remix of the opening theme "Duvet" by DJ Wasei. The third, "lain BOOTLEG", consists of the ambient score of the series across forty-five tracks. "BOOTLEG" also contains a second mixed-mode data and audio disc, containing a clock program and a game, as well as an extended version of the first disc – nearly double the length – across 57 tracks in 128 kbit/s MP3 format, and sound effects from the series in WAV format. Because the word "bootleg" appears in its title, it is easily confused with the Sonmay counterfeit edition of itself, which only contains the first disc in an edited format. All three soundtrack albums were released by Pioneer Records.
The series' opening theme, "Duvet", was written and performed in English by the British rock band Bôa. The band released the song as a single and as part of the EP "Tall Snake", which features both an acoustic version and DJ Wasei's remix from "Cyberia Mix".
On November 26, 1998, Pioneer LDC released a video game with the same name as the anime for the PlayStation. It was designed by Konaka and Yasuyuki, and made to be a "network simulator" in which the player would navigate to explore Lain's story. The creators themselves did not call it a game, but "Psycho-Stretch-Ware", and it has been described as being a kind of graphic novel: the gameplay is limited to unlocking pieces of information, and then reading/viewing/listening to them, with little or no puzzle needed to unlock. Lain distances itself even more from classical games by the random order in which information is collected. The aim of the authors was to let the player get the feeling that there are myriads of informations that they would have to sort through, and that they would have to do with less than what exists to understand. As with the anime, the creative team's main goal was to let the player "feel" Lain, and "to understand her problems, and to love her". A guidebook to the game called "Serial Experiments Lain Official Guide" () was released the same month by MediaWorks.
"Serial Experiments Lain" was first broadcast in Tokyo at 1:15 a.m. JST. The word "weird" appears almost systematically in English language reviews of the series, or the alternatives "bizarre", and "atypical", due mostly to the freedoms taken with the animation and its unusual science fiction themes, and due to its philosophical and psychological context. Critics responded positively to these thematic and stylistic characteristics, and it was awarded an Excellence Prize by the 1998 Japan Media Arts Festival for "its willingness to question the meaning of contemporary life" and the "extraordinarily philosophical and deep questions" it asks.
According to Christian Nutt from "Newtype USA", the main attraction to the series is its keen view on "the interlocking problems of identity and technology". Nutt saluted Abe's "crisp, clean character design" and the "perfect soundtrack" in his 2005 review of series, saying that ""Serial Experiments Lain" might not yet be considered a true classic, but it's a fascinating evolutionary leap that helped change the future of anime." "Anime Jump" gave it 4.5/5, and Anime on DVD gave it A+ on all criteria for volume 1 and 2, and a mix of A and A+ for volume 3 and 4.
"Lain" was subject to commentary in the literary and academic worlds. The "Asian Horror Encyclopedia" calls it "an outstanding psycho-horror anime about the psychic and spiritual influence of the Internet". It notes that the red spots present in all the shadows look like blood pools (see picture). It notes the death of a girl in a train accident is "a source of much ghost lore in the twentieth century", more so in Tokyo.
The "Anime Essentials" anthology by Gilles Poitras describes it as a "complex and somehow existential" anime that "pushed the envelope" of anime diversity in the 1990s, alongside the much better known "Neon Genesis Evangelion" and "Cowboy Bebop". Professor Susan J. Napier, in her 2003 reading to the American Philosophical Society called "The Problem of Existence in Japanese Animation" (published 2005), compared "Serial Experiments Lain" to "Ghost in the Shell" and Hayao Miyazaki's "Spirited Away". According to her, the main characters of the two other works cross barriers; they can cross back to our world, but Lain cannot. Napier asks whether there is something to which Lain should return, "between an empty 'real' and a dark 'virtual'". Mike Toole of Anime News Network named "Serial Experiments Lain" as one of the most important anime of the 1990s.
Unlike the anime, the video game drew little attention from the public. Criticized for its (lack of) gameplay, as well as for its "clunky interface", interminable dialogues, absence of music and very long loading times, it was nonetheless praised for its (at the time) remarkable CG graphics and its beautiful backgrounds.
Despite the positive feedback the television series had received, Anime Academy gave the series a 75%, partly due to the "lifeless" setting it had. Michael Poirier of "EX" magazine stated that the last three episodes fail to resolve the questions in other DVD volumes. Justin Sevakis of Anime News Network noted that the English dub was decent, but that the show relied so little on dialogue that it hardly mattered. | https://en.wikipedia.org/wiki?curid=28217 |
Spontaneous emission
Spontaneous emission is the process in which a quantum mechanical system (such as a molecule, an atom or a subatomic particle) transits from an excited energy state to a lower energy state (e.g., its ground state) and emits a quantized amount of energy in the form of a photon. Spontaneous emission is ultimately responsible for most of the light we see all around us; it is so ubiquitous that there are many names given to what is essentially the same process. If atoms (or molecules) are excited by some means other than heating, the spontaneous emission is called luminescence. For example, fireflies are luminescent. And there are different forms of luminescence depending on how excited atoms are produced (electroluminescence, chemiluminescence etc.). If the excitation is affected by the absorption of radiation the spontaneous emission is called fluorescence. Sometimes molecules have a metastable level and continue to fluoresce long after the exciting radiation is turned off; this is called phosphorescence. Figurines that glow in the dark are phosphorescent. Lasers start via spontaneous emission, then during continuous operation work by stimulated emission.
Spontaneous emission cannot be explained by classical electromagnetic theory and is fundamentally a quantum process. The first person to derive the rate of spontaneous emission accurately from first principles was Dirac in his quantum theory of radiation, the precursor to the theory which he later called quantum electrodynamics. Contemporary physicists, when asked to give a physical explanation for spontaneous emission, generally invoke the zero-point energy of the electromagnetic field. In 1963, the Jaynes–Cummings model was developed describing the system of a two-level atom interacting with a quantized field mode (i.e. the vacuum) within an optical cavity. It gave the nonintuitive prediction that the rate of spontaneous emission could be controlled depending on the boundary conditions of the surrounding vacuum field. These experiments gave rise to cavity quantum electrodynamics (CQED), the study of effects of mirrors and cavities on radiative corrections.
If a light source ('the atom') is in an excited state with energy "E"2, it may spontaneously decay to a lower lying level (e.g., the ground state) with energy "E"1, releasing the difference in energy between the two states as a photon. The photon will have angular frequency "ω" and an energy "ħω":
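That is, the photon energy matches the gap between the two levels:

$$\hbar\omega = E_2 - E_1$$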
where "ħ" is the reduced Planck constant. Note: "ħω" = "hν", where "h" is the Planck constant and "ν" is the linear frequency. The phase of the photon in spontaneous emission is random, as is the direction in which the photon propagates. This is not true for stimulated emission. An energy level diagram illustrating the process of spontaneous emission is shown below:
If the number of light sources in the excited state at time "t" is given by "N"("t"), the rate at which "N" decays is:
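With the Einstein coefficient written as "A"21 (as in the next sentence), the rate equation takes the standard form:

$$\frac{\partial N(t)}{\partial t} = -A_{21}\, N(t)$$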
where "A"21 is the rate of spontaneous emission. In the rate equation, "A"21 is a proportionality constant for this particular transition in this particular light source. The constant is referred to as the "Einstein A coefficient", and has units of s−1.
The above equation can be solved to give:
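The solution is the standard exponential decay:

$$N(t) = N(0)\, e^{-\Gamma_{\mathrm{rad}}\, t}$$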
where "N"(0) is the initial number of light sources in the excited state, "t" is the time, and "Γ"rad is the radiative decay rate of the transition. The number of excited states "N" thus decays exponentially with time, similar to radioactive decay. After one lifetime, the number of excited states decays to 36.8% of its original value (the 1/"e" time). The radiative decay rate "Γ"rad is inversely proportional to the lifetime "τ"21:
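In symbols:

$$\Gamma_{\mathrm{rad}} = \frac{1}{\tau_{21}}$$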
Spontaneous transitions were not explainable within the framework of the Schrödinger equation, in which the electronic energy levels were quantized, but the electromagnetic field was not. Given that the eigenstates of an atom are properly diagonalized, the overlap of the wavefunctions between the excited state and the ground state of the atom is zero. Thus, in the absence of a quantized electromagnetic field, the excited state atom cannot decay to the ground state. In order to explain spontaneous transitions, quantum mechanics must be extended to a quantum field theory, wherein the electromagnetic field is quantized at every point in space. The quantum field theory of electrons and electromagnetic fields is known as quantum electrodynamics.
In quantum electrodynamics (or QED), the electromagnetic field has a ground state, the QED vacuum, which can mix with the excited stationary states of the atom. As a result of this interaction, the "stationary state" of the atom is no longer a true eigenstate of the combined system of the atom plus electromagnetic field. In particular, the electron transition from the excited state to the electronic ground state mixes with the transition of the electromagnetic field from the ground state to an excited state, a field state with one photon in it. Spontaneous emission in free space depends upon vacuum fluctuations to get started.
Although there is only one electronic transition from the excited state to ground state, there are many ways in which the electromagnetic field may go from the ground state to a one-photon state. That is, the electromagnetic field has infinitely more degrees of freedom, corresponding to the different directions in which the photon can be emitted. Equivalently, one might say that the phase space offered by the electromagnetic field is infinitely larger than that offered by the atom. This infinite degree of freedom for the emission of the photon results in the apparent irreversible decay, i.e., spontaneous emission.
In the presence of electromagnetic vacuum modes, the combined atom-vacuum system is explained by the superposition of the wavefunctions of the excited state atom with no photon and the ground state atom with a single emitted photon:
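One way to write this superposition, in the Weisskopf–Wigner notation sketched here (with "a"("t") and "b"k,s("t") the probability amplitudes, "ω"0 the atomic transition frequency, "ω"k the photon frequency, and the sum running over photon wavevectors "k" and polarizations "s"), is:

$$|\psi(t)\rangle = a(t)\, e^{-i\omega_0 t}\, |e; 0\rangle \;+\; \sum_{\mathbf{k},s} b_{\mathbf{k},s}(t)\, e^{-i\omega_k t}\, |g; 1_{\mathbf{k},s}\rangle$$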
where formula_27 and formula_28 are the atomic excited state-electromagnetic vacuum wavefunction and its probability amplitude, formula_29 and formula_30 are the ground state atom with a single photon (of mode formula_31) wavefunction and its probability amplitude, formula_32 is the atomic transition frequency, and formula_33 is the frequency of the photon. The sum is over formula_34 and formula_35, which are the wavenumber and polarization of the emitted photon, respectively. As mentioned above, the emitted photon has a chance to be emitted with different wavenumbers and polarizations, and the resulting wavefunction is a superposition of these possibilities. To calculate the probability of the atom at the ground state (formula_36), one needs to solve the time evolution of the wavefunction with an appropriate Hamiltonian. To solve for the transition amplitude, one needs to average over (integrate over) all the vacuum modes, since one must consider the probabilities that the emitted photon occupies various parts of phase space equally. The "spontaneously" emitted photon has infinite different modes to propagate into, thus the probability of the atom re-absorbing the photon and returning to the original state is negligible, making the atomic decay practically irreversible. Such irreversible time evolution of the atom-vacuum system is responsible for the apparent spontaneous decay of an excited atom. If one were to keep track of all the vacuum modes, the combined atom-vacuum system would undergo unitary time evolution, making the decay process reversible. Cavity quantum electrodynamics is one such system where the vacuum modes are modified resulting in the reversible decay process, see also Quantum revival. The theory of the spontaneous emission under the QED framework was first calculated by Weisskopf and Wigner.
In spectroscopy one can frequently find that atoms or molecules in the excited states dissipate their energy in the absence of any external source of photons. This is not spontaneous emission, but is actually nonradiative relaxation of the atoms or molecules caused by the fluctuation of the surrounding molecules present inside the bulk.
The rate of spontaneous emission (i.e., the radiative rate) can be described by Fermi's golden rule. The rate of emission depends on two factors: an 'atomic part', which describes
the internal structure of the light source and a 'field part', which describes the density of electromagnetic modes of the environment. The atomic part describes the strength of a transition between two states in terms of transition moments. In a homogeneous medium, such as free space, the rate of spontaneous emission in the dipole approximation is given by:
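A commonly quoted form of this rate, reconstructed here with the symbols defined in the next sentence, is:

$$\Gamma_{\mathrm{rad}}(\omega) = \frac{\omega^{3}\, n\, |\mu_{12}|^{2}}{3\pi \varepsilon_0 \hbar c^{3}} = \frac{4\alpha\, \omega^{3}\, n}{3 c^{2}}\,\bigl|\langle 1|\mathbf{r}|2\rangle\bigr|^{2}$$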
where "ω" is the emission frequency, "n" is the index of refraction, "μ"12 is the transition dipole moment, "ε"0 is the vacuum permittivity, "ħ" is the reduced Planck constant, "c" is the vacuum speed of light, and "α" is the fine structure constant. The expression "μ"12 = ⟨1|"μ"|2⟩ stands for the definition of the transition dipole moment "μ"12 for the dipole moment operator "μ" = "e""r", where "e" is the elementary charge and "r" stands for the position operator. (This approximation breaks down in the case of inner shell electrons in high-Z atoms.) The above equation clearly shows that the rate of spontaneous emission in free space increases proportionally to "ω"3.
In contrast with atoms, which have a discrete emission spectrum, quantum dots can be tuned continuously by changing their size. This property has been used to check the "ω"3-frequency dependence of the spontaneous emission rate as described by Fermi's golden rule.
In the rate equation above, it is assumed that decay of the number of excited states "N" only occurs under emission of light. In this case one speaks of full radiative decay, and this means that the quantum efficiency is 100%. Besides radiative decay, which occurs under the emission of light, there is a second decay mechanism: nonradiative decay. To determine the total decay rate "Γ"tot, radiative and nonradiative rates should be summed:
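That is:

$$\Gamma_{\mathrm{tot}} = \Gamma_{\mathrm{rad}} + \Gamma_{\mathrm{nr}}$$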
where formula_54 is the total decay rate, formula_20 is the radiative decay rate and formula_58 is the nonradiative decay rate. The quantum efficiency (QE) is defined as the fraction of decay processes in which light is emitted:
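In the usual notation (a sketch; the rates written here as Γ_rad and Γ_nr stand in for the article's formula_20 and formula_58), these two relations read:

```latex
% Total decay rate and quantum efficiency (assumed standard notation):
\Gamma_{\mathrm{tot}} = \Gamma_{\mathrm{rad}} + \Gamma_{\mathrm{nr}},
\qquad
QE = \frac{\Gamma_{\mathrm{rad}}}{\Gamma_{\mathrm{rad}} + \Gamma_{\mathrm{nr}}}
```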
In nonradiative relaxation, the energy is released as phonons, more commonly known as heat. Nonradiative relaxation occurs when the energy difference between the levels is very small, and such transitions typically occur on a much faster time scale than radiative transitions. For many materials (for instance, semiconductors), electrons move quickly from a high energy level to a meta-stable level via small nonradiative transitions and then make the final move down to the bottom level via an optical or radiative transition. This final transition is the transition over the bandgap in semiconductors. Large nonradiative transitions do not occur frequently because the crystal structure generally cannot support large vibrations without destroying bonds (which generally does not happen during relaxation). Meta-stable states form a very important feature that is exploited in the construction of lasers: since electrons decay slowly from them, they can be deliberately accumulated in these states without too much loss, and stimulated emission can then be used to amplify an optical signal. | https://en.wikipedia.org/wiki?curid=28219
Nicolas Léonard Sadi Carnot
"Sous-lieutenant" Nicolas Léonard Sadi Carnot (; 1 June 1796 – 24 August 1832) was a French mechanical engineer in the French Army, military scientist and physicist, often described as the "father of thermodynamics." Like Copernicus, he published only one book, the "Reflections on the Motive Power of Fire" (Paris, 1824), in which he expressed, at the age of 27 years, the first successful theory of the maximum efficiency of heat engines. In this work he laid the foundations of an entirely new discipline, thermodynamics. Carnot's work attracted little attention during his lifetime, but it was later used by Rudolf Clausius and Lord Kelvin to formalize the second law of thermodynamics and define the concept of entropy.
Nicolas Léonard Sadi Carnot was born in Paris into a family that was distinguished in both science and politics. He was the first son of Lazare Carnot, an eminent mathematician, military engineer and leader of the French Revolutionary Army. Lazare chose his son's third given name (by which he would always be known) after the Persian poet Sadi of Shiraz. Sadi was the elder brother of statesman Hippolyte Carnot and the uncle of Marie François Sadi Carnot, who would serve as President of France from 1887 to 1894.
At the age of 16, Sadi Carnot became a cadet in the École Polytechnique in Paris, where his classmates included Michel Chasles and Gaspard-Gustave Coriolis. The École Polytechnique was intended to train engineers for military service, but its professors included such eminent scientists as André-Marie Ampère, François Arago, Joseph Louis Gay-Lussac, Louis Jacques Thénard and Siméon Denis Poisson, and the school had become renowned for its mathematical instruction. After graduating in 1814, Sadi became an officer in the French army's corps of engineers. His father Lazare had served as Napoleon's minister of the interior during the "Hundred Days", and after Napoleon's final defeat in 1815 Lazare was forced into exile. Sadi's position in the army, under the restored Bourbon monarchy of Louis XVIII, became increasingly difficult.
Sadi Carnot was posted to different locations, where he inspected fortifications, tracked plans and wrote many reports. It appears his recommendations were ignored and his career was stagnating. On 15 September 1818 he took a six-month leave to prepare for the entrance examination of the Royal Corps of Staff and School of Application for the Service of the General Staff.
In 1819, Sadi transferred to the newly formed General Staff, in Paris. He remained on call for military duty, but from then on he dedicated most of his attention to private intellectual pursuits and received only two-thirds pay. Carnot befriended the scientist Nicolas Clément and attended lectures on physics and chemistry. He became interested in understanding the limitation to improving the performance of steam engines, which led him to the investigations that became his "Reflections on the Motive Power of Fire", published in 1824.
Carnot retired from the army in 1828, without a pension. He was interned in a private asylum in 1832 as suffering from "mania" and "general delirium", and he died of cholera shortly thereafter, aged 36, at the hospital in Ivry-sur-Seine.
When Carnot began working on his book, steam engines had achieved widely recognized economic and industrial importance, but there had been no real scientific study of them. Newcomen had invented the first piston-operated steam engine over a century before, in 1712; some 50 years after that, James Watt made his celebrated improvements, which were responsible for greatly increasing the efficiency and practicality of steam engines. Compound engines (engines with more than one stage of expansion) had already been invented, and there was even a crude form of internal-combustion engine, with which Carnot was familiar and which he described in some detail in his book. Although there existed some intuitive understanding of the workings of engines, scientific theory for their operation was almost nonexistent. In 1824 the principle of conservation of energy was still poorly developed and controversial, and an exact formulation of the first law of thermodynamics was still more than a decade away; the mechanical equivalence of heat would not be formulated for another two decades. The prevalent theory of heat was the caloric theory, which regarded heat as a sort of weightless and invisible fluid that flowed when out of equilibrium.
Engineers in Carnot's time had tried, by means such as highly pressurized steam and the use of fluids, to improve the efficiency of engines. In these early stages of engine development, the efficiency of a typical engine—the useful work it was able to do when a given quantity of fuel was burned—was only 3%.
Carnot wanted to answer two questions about the operation of heat engines: "Is the work available from a heat source potentially unbounded?" and "Can heat engines in principle be improved by replacing the steam with some other working fluid or gas?" He attempted to answer these in a memoir, published as a popular work in 1824 when he was only 27 years old. It was entitled "Réflexions sur la Puissance Motrice du Feu" ("Reflections on the Motive Power of Fire"). The book was plainly intended to cover a rather wide range of topics about heat engines in a rather popular fashion; equations were kept to a minimum and called for little more than simple algebra and arithmetic, except occasionally in the footnotes, where he indulged in a few arguments involving some calculus. He discussed the relative merits of air and steam as working fluids, the merits of various aspects of steam engine design, and even included some ideas of his own regarding possible practical improvements. The most important part of the book was devoted to an abstract presentation of an idealized engine that could be used to understand and clarify the fundamental principles that are generally applied to all heat engines, independent of their design.
Perhaps the most important contribution Carnot made to thermodynamics was his abstraction of the essential features of the steam engine, as they were known in his day, into a more general and idealized heat engine. This resulted in a model thermodynamic system upon which exact calculations could be made, and avoided the complications introduced by many of the crude features of the contemporary steam engine. By idealizing the engine, he could arrive at clear and indisputable answers to his original two questions.
He showed that the efficiency of this idealized engine is a function only of the two temperatures of the reservoirs between which it operates. He did not, however, give the exact form of the function, which was later shown to be (T1−T2)/T1, where T1 is the absolute temperature of the hotter reservoir and T2 that of the colder one. (This explicit formula probably came from Kelvin.) No thermal engine operating on any other cycle can be more efficient, given the same operating temperatures.
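Written out in modern notation (a sketch; the numerical temperatures below are illustrative choices, not figures from Carnot), the maximum efficiency is:

```latex
% Carnot efficiency between a hot reservoir at T_1 and a cold reservoir at T_2
% (absolute temperatures). The example values are illustrative only.
\eta_{\max} = \frac{T_{1} - T_{2}}{T_{1}} = 1 - \frac{T_{2}}{T_{1}},
\qquad
\text{e.g. } T_{1} = 373\,\mathrm{K},\; T_{2} = 293\,\mathrm{K}
\;\Rightarrow\; \eta_{\max} \approx 0.21
```

Even such an idealized bound helps put the roughly 3% efficiency of the engines of Carnot's day, mentioned above, in perspective.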
The Carnot cycle is the most efficient possible engine, not only because of the (trivial) absence of friction and other incidental wasteful processes; the main reason is that it assumes no conduction of heat between parts of the engine at different temperatures. Carnot knew that the conduction of heat between bodies at different temperatures is a wasteful and irreversible process, which must be eliminated if the heat engine is to achieve maximum efficiency.
Regarding the second point, he also was quite certain that the maximum efficiency attainable did not depend upon the exact nature of the working fluid. He stated this for emphasis as a general proposition:
For his "motive power of heat", we would today say "the efficiency of a reversible heat engine", and rather than "transfer of caloric" we would say "the reversible transfer of entropy ∆S" or "the reversible transfer of heat at a given temperature Q/T". He knew intuitively that his engine would have the maximum efficiency, but was unable to state what that efficiency would be.
He concluded:
and
In an idealized model, the caloric that a frictionless heat engine, free of conductive heat flow and driven by a difference of temperature, transports from a hot to a cold body while yielding work could also be used to transport the caloric back to the hot body by reversing the motion of the engine and consuming the same amount of work, a concept subsequently known as thermodynamic reversibility. Carnot further postulated that no caloric is lost during the operation of his idealized engine. Being completely reversible, the process executed by this kind of heat engine is the most efficient possible process. The assumption that no heat conduction driven by a temperature difference occurs, so that no caloric is lost by the engine, guided him in designing the Carnot cycle to be operated by his idealized engine. The cycle is consequently composed of adiabatic processes, in which no heat/caloric flows (∆S = 0), and isothermal processes, in which heat is transferred (∆S > 0) but no temperature difference exists (∆T = 0). The proof of the existence of a maximum efficiency for heat engines is as follows:
As the cycle named after him does not waste caloric, the reversible engine has to use this cycle. Imagine now two large bodies, a hot and a cold one, and postulate the existence of a heat machine with a greater efficiency. We couple two idealized machines of different efficiencies and connect them to the same hot and the same cold body. The first and less efficient one lets a constant amount of entropy ∆S = Q/T flow from hot to cold during each cycle, yielding an amount of work denoted W. If this work is used to power the other, more efficient machine, it would, using the amount of work W gained during each cycle by the first machine, make an amount of entropy ∆S' > ∆S flow from the cold to the hot body. The net effect is a flow of ∆S' − ∆S ≠ 0 of entropy from the cold to the hot body, while no net work is done. Consequently, the cold body is cooled down and the hot body rises in temperature. As the difference in temperature grows, the work yielded by the first machine in successive cycles increases, and through the second machine the temperature difference between the two bodies widens with every cycle. In the end this set of machines would be a perpetuum mobile, which cannot exist. This proves that the assumption of the existence of a more efficient engine was wrong, so a heat engine that operates the Carnot cycle must be the most efficient one. This means that a frictionless heat engine, in which no conductive heat flow driven by a difference of temperature occurs, shows the maximum possible efficiency.
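A compact modern restatement of this style of argument (a sketch in terms of efficiencies, not Carnot's own caloric-based wording) is the following. Suppose an engine X had efficiency η_X greater than the efficiency η_R of a reversible engine R working between the same two reservoirs; run X forward and use its work output to drive R in reverse as a heat pump:

```latex
% Assumed setup: X absorbs Q_H = W / eta_X from the hot body and produces work W;
% the reversible engine R, driven in reverse by W, returns W / eta_R to the hot body.
% If eta_X > eta_R, the hot body gains heat every combined cycle:
\frac{W}{\eta_{R}} - \frac{W}{\eta_{X}}
  = W\left(\frac{1}{\eta_{R}} - \frac{1}{\eta_{X}}\right) > 0,
\qquad \text{with zero net work.}
```

Heat would then flow unaided from the cold body to the hot one, which is impossible; hence no engine can exceed the reversible efficiency, η_X ≤ η_R.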
He concluded further that the choice of the working fluid, its density or the volume it occupies cannot change this maximum efficiency. Using the equivalence of all working gases used in heat engines, he deduced that the difference between the specific heat of a gas measured at constant pressure and at constant volume must be the same for all gases.
By comparing the operation of his hypothetical heat engines for two different volumes occupied by the same amount of working gas, he correctly deduced the relation between entropy and volume for an isothermal process:
formula_1
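The placeholder formula_1 is not reproduced in this text. For illustration, the isothermal entropy-volume relation in modern notation for an ideal gas (an assumed standard form, not a reconstruction of the article's formula) is:

```latex
% Entropy change of n moles of ideal gas in an isothermal expansion
% from volume V_1 to V_2 (R is the gas constant):
\Delta S = n R \,\ln\!\frac{V_{2}}{V_{1}}
```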
Carnot's book received very little attention from his contemporaries. The only reference to it within a few years after its publication was in a review in the periodical "Revue Encyclopédique", which was a journal that covered a wide range of topics in literature. The impact of the work had only become apparent once it was modernized by Émile Clapeyron in 1834 and then further elaborated upon by Clausius and Kelvin, who together derived from it the concept of entropy and the second law of thermodynamics.
On Carnot's religious views, he was a Philosophical theist. As a deist, he believed in divine causality, stating that "what to an ignorant man is chance, cannot be chance to one better instructed," but he did not believe in divine punishment. He criticized established religion, though at the same time spoke in favor of "the belief in an all-powerful Being, who loves us and watches over us."
He was a reader of Blaise Pascal, Molière and Jean de La Fontaine.
Carnot died during a cholera epidemic in 1832, at the age of 36.
Because of the contagious nature of cholera, many of Carnot's belongings and writings were buried together with him after his death. As a consequence, only a handful of his scientific writings survived.
After the publication of "Reflections on the Motive Power of Fire", the book quickly went out of print and for some time was very difficult to obtain. Kelvin, for one, had a difficult time getting a copy of Carnot's book. In 1890 an English translation of the book was published by R. H. Thurston; this version has been reprinted in recent decades by Dover and by Peter Smith, most recently by Dover in 2005. Some of Carnot's posthumous manuscripts have also been translated into English.
Carnot published his book in the heyday of steam engines. His theory explained why steam engines using superheated steam were better because of the higher temperature of the consequent hot reservoir. Carnot's theories and efforts did not immediately help improve the efficiency of steam engines; his theories only helped to explain why one existing practice was superior to others. It was only towards the end of the nineteenth century that Carnot's ideas, namely that a heat engine can be made more efficient if the temperature of its hot reservoir is increased, were put into practice. Carnot's book did, however, eventually have a real impact on the design of practical engines. Rudolf Diesel, for example, used Carnot's theories to design the diesel engine, in which the temperature of the hot reservoir is much higher than that of a steam engine, resulting in an engine which is more efficient. | https://en.wikipedia.org/wiki?curid=28220 |
Sydney Opera House
The Sydney Opera House is a multi-venue performing arts centre at Sydney Harbour in Sydney, New South Wales, Australia. It is one of the 20th century's most famous and distinctive buildings.
Designed by Danish architect Jørn Utzon, but completed by an Australian architectural team headed up by Peter Hall, the building was formally opened on 20 October 1973 after a gestation beginning with Utzon's 1957 selection as winner of an international design competition. The Government of New South Wales, led by the premier, Joseph Cahill, authorised work to begin in 1958 with Utzon directing construction. The government's decision to build Utzon's design is often overshadowed by circumstances that followed, including cost and scheduling overruns as well as the architect's ultimate resignation.
The building and its surrounds occupy the whole of Bennelong Point on Sydney Harbour, between Sydney Cove and Farm Cove, adjacent to the Sydney central business district and the Royal Botanic Gardens, and close by the Sydney Harbour Bridge.
The building comprises multiple performance venues, which together host well over 1,500 performances annually, attended by more than 1.2 million people. Performances are presented by numerous performing artists, including three resident companies: Opera Australia, the Sydney Theatre Company and the Sydney Symphony Orchestra. As one of the most popular visitor attractions in Australia, the site is visited by more than eight million people annually, and approximately 350,000 visitors take a guided tour of the building each year. The building is managed by the Sydney Opera House Trust, an agency of the New South Wales State Government.
On 28 June 2007, the Sydney Opera House became a UNESCO World Heritage Site, having been listed on the (now defunct) Register of the National Estate since 1980, the National Trust of Australia register since 1983, the City of Sydney Heritage Inventory since 2000, the New South Wales State Heritage Register since 2003, and the Australian National Heritage List since 2005. Furthermore, the Opera House was a finalist in the "New7Wonders of the World" campaign list.
The facility features a modern expressionist design, with a series of large precast concrete "shells", each composed of sections of a sphere of radius, forming the roofs of the structure, set on a monumental podium. The building covers of land and is long and wide at its widest point. It is supported on 588 concrete piers sunk as much as below sea level. The highest roof point is 67 metres above sea-level which is the same height as that of a 22-storey building. The roof is made of 2,194 pre-cast concrete sections, which weigh up to 15 tonnes each.
Although the roof structures are commonly referred to as "shells" (as in this article), they are precast concrete panels supported by precast concrete ribs, not shells in a strictly structural sense. Though the shells appear uniformly white from a distance, they actually feature a subtle chevron pattern composed of 1,056,006 tiles in two colours: glossy white and matte cream. The tiles were manufactured by the Swedish company Höganäs AB which generally produced stoneware tiles for the paper-mill industry.
Apart from the tile of the shells and the glass curtain walls of the foyer spaces, the building's exterior is largely clad with aggregate panels composed of pink granite quarried at Tarana. Significant interior surface treatments also include off-form concrete, Australian white birch plywood supplied from Wauchope in northern New South Wales, and brush box glulam.
Of the two larger spaces, the Concert Hall is in the western group of shells, the Joan Sutherland Theatre in the eastern group. The scale of the shells was chosen to reflect the internal height requirements, with low entrance spaces, rising over the seating areas up to the high stage towers. The smaller venues (the Drama Theatre, the Playhouse and the Studio) are within the podium, beneath the Concert Hall. A smaller group of shells set to the western side of the Monumental Steps houses the Bennelong Restaurant. The podium is surrounded by substantial open public spaces, and the large stone-paved forecourt area with the adjacent monumental steps is regularly used as a performance space.
The Sydney Opera House includes a number of performance venues:
Other areas (for example the northern and western foyers) are also used for performances on an occasional basis. Venues are also used for conferences, ceremonies and social functions.
The building also houses a recording studio, cafes, restaurants, bars and retail outlets. Guided tours are available, including a frequent tour of the front-of-house spaces, and a daily backstage tour that takes visitors backstage to see areas normally reserved for performers and crew members.
Planning began in the late 1940s, when Eugene Goossens, the Director of the NSW State Conservatorium of Music, lobbied for a suitable venue for large theatrical productions. The normal venue for such productions, the Sydney Town Hall, was not considered large enough. By 1954, Goossens succeeded in gaining the support of NSW Premier Joseph Cahill, who called for designs for a dedicated opera house. It was also Goossens who insisted that Bennelong Point be the site: Cahill had wanted it to be on or near Wynyard Railway Station in the northwest of the CBD.
An international design competition was launched by Cahill on 13 September 1955 and received 233 entries, representing architects from 32 countries. The criteria specified a large hall seating 3,000 and a small hall for 1,200 people, each to be designed for different uses, including full-scale operas, orchestral and choral concerts, mass meetings, lectures, ballet performances, and other presentations.
The winner, announced in 1957, was Jørn Utzon, a Danish architect. According to legend the Utzon design was rescued by noted Finnish-American architect Eero Saarinen from a final cut of 30 "rejects". The runner-up was a Philadelphia-based team assembled by Robert Geddes and George Qualls, both teaching at the University of Pennsylvania School of Design. They brought together a band of Penn faculty and friends from Philadelphia architectural offices, including Melvin Brecher, Warren Cunningham, Joseph Marzella, Walter Wiseman, and Leon Loschetter. Geddes, Brecher, Qualls, and Cunningham went on to found the firm GBQC Architects. The grand prize was 5,000 Australian pounds. Utzon visited Sydney in 1957 to help supervise the project. His office moved to Palm Beach, Sydney in February 1963.
Utzon received the Pritzker Architecture Prize, architecture's highest honour, in 2003. The Pritzker Prize citation read:
The Fort Macquarie Tram Depot, occupying the site at the time of these plans, was demolished in 1958 and construction began in March 1959. It was built in three stages: stage I (1959–1963) consisted of building the upper podium; stage II (1963–1967) the construction of the outer shells; stage III (1967–1973) interior design and construction.
Stage I commenced on 2 March 1959 with the construction firm Civil & Civic, monitored by the engineers Ove Arup and Partners. The government had pushed for work to begin early, fearing that funding, or public opinion, might turn against them. However, Utzon had still not completed the final designs. Major structural issues still remained unresolved. By 23 January 1961, work was running 47 weeks behind, mainly because of unexpected difficulties (inclement weather, unexpected difficulty diverting stormwater, construction beginning before proper construction drawings had been prepared, changes of original contract documents). Work on the podium was finally completed in February 1963. The forced early start led to significant later problems, not least of which was the fact that the podium columns were not strong enough to support the roof structure, and had to be re-built.
The shells of the competition entry were originally of undefined geometry, but, early in the design process, the "shells" were perceived as a series of parabolas supported by precast concrete ribs. However, engineers Ove Arup and Partners were unable to find an acceptable solution to constructing them. The formwork for using "in-situ" concrete would have been prohibitively expensive, and, because there was no repetition in any of the roof forms, the construction of precast concrete for each individual section would possibly have been even more expensive.
From 1957 to 1963, the design team went through at least 12 iterations of the form of the shells trying to find an economically acceptable form (including schemes with parabolas, circular ribs and ellipsoids) before a workable solution was completed. The design work on the shells involved one of the earliest uses of computers in structural analysis, to understand the complex forces to which the shells would be subjected. The computer system was also used in the assembly of the arches. The pins in the arches were surveyed at the end of each day, and the information was entered into the computer so the next arch could be properly placed the following day. In mid-1961, the design team found a solution to the problem: the shells all being created as sections from a sphere. This solution allows arches of varying length to be cast in a common mould, and a number of arch segments of common length to be placed adjacent to one another, to form a spherical section. With whom exactly this solution originated has been the subject of some controversy. It was originally credited to Utzon. Ove Arup's letter to Ashworth, a member of the Sydney Opera House Executive Committee, states: "Utzon came up with an idea of making all the shells of uniform curvature throughout in both directions." Peter Jones, the author of Ove Arup's biography, states that "the architect and his supporters alike claimed to recall the precise "eureka" moment ... ; the engineers and some of their associates, with equal conviction, recall discussion in both central London and at Ove's house."
He goes on to claim that "the existing evidence shows that Arup's canvassed several possibilities for the geometry of the shells, from parabolas to ellipsoids and spheres." Yuzo Mikami, a member of the design team, presents an opposite view in his book on the project, "Utzon's Sphere". It is unlikely that the truth will ever be categorically known, but there is a clear consensus that the design team worked very well indeed for the first part of the project and that Utzon, Arup, and Ronald Jenkins (partner of Ove Arup and Partners responsible for the Opera House project) all played a very significant part in the design development.
As Peter Murray states in "The Saga of the Sydney Opera House":
The design of the roof was tested on scale models in wind tunnels at University of Southampton and later NPL in order to establish the wind-pressure distribution around the roof shape in very high winds, which helped in the design of the roof tiles and their fixtures.
The shells were constructed by Hornibrook Group Pty Ltd, who were also responsible for construction in Stage III. Hornibrook manufactured the 2400 precast ribs and 4000 roof panels in an on-site factory and also developed the construction processes. The achievement of this solution avoided the need for expensive formwork construction by allowing the use of precast units and it also allowed the roof tiles to be prefabricated in sheets on the ground, instead of being stuck on individually at height.
The tiles themselves were manufactured by the Swedish company Höganäs Keramik. It took three years of development to produce the effect Utzon wanted in what became known as the Sydney Tile, which is 120 mm square. It is made from clay with a small percentage of crushed stone.
Ove Arup and Partners' site engineer supervised the construction of the shells, which used an innovative adjustable steel-trussed "erection arch" (developed by Hornibrook's engineer Joe Bertony) to support the different roofs before completion. On 6 April 1962, it was estimated that the Opera House would be completed between August 1964 and March 1965.
Stage III, the interiors, started with Utzon moving his entire office to Sydney in February 1963. However, there was a change of government in 1965, and the new Robert Askin government declared the project under the jurisdiction of the Ministry of Public Works. Due to the Ministry's criticism of the project's costs and time, along with their impression of Utzon's designs being impractical, this ultimately led to his resignation in 1966 (see below).
The cost of the project so far, even in October 1966, was still only A$22.9 million, less than a quarter of the final $102 million cost. However, the projected costs for the design were at this stage much more significant.
The second stage of construction was progressing toward completion when Utzon resigned. His position was principally taken over by Peter Hall, who became largely responsible for the interior design. Other persons appointed that same year to replace Utzon were E. H. Farmer as government architect, D. S. Littlemore and Lionel Todd.
Following Utzon's resignation, the acoustic advisor, Lothar Cremer, confirmed to the Sydney Opera House Executive Committee (SOHEC) that Utzon's original acoustic design allowed for only 2,000 seats in the main hall and further stated that increasing the number of seats to 3,000 as specified in the brief would be disastrous for the acoustics. According to Peter Jones, the stage designer, Martin Carr, criticised the "shape, height and width of the stage, the physical facilities for artists, the location of the dressing rooms, the widths of doors and lifts, and the location of lighting switchboards."
The Opera House was formally completed in 1973, having cost $102 million. H.R. "Sam" Hoare, the Hornibrook director in charge of the project, provided the following approximations in 1973:
Stage I: podium Civil & Civic Pty Ltd approximately $5.5m.
Stage II: roof shells M.R. Hornibrook (NSW) Pty Ltd approximately $12.5m.
Stage III: completion The Hornibrook Group $56.5m.
Separate contracts: stage equipment, stage lighting and organ $9.0m. Fees and other costs: $16.5m.
The original cost and scheduling estimates in 1957 projected a cost of £3,500,000 ($7 million) and completion date of 26 January 1963 (Australia Day). In reality, the project was completed ten years late and 1,357% over budget in real terms.
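As a quick arithmetic check of the quoted figures (using the original $7 million estimate and the $102 million final cost stated above, in nominal terms), the 1,357% overrun corresponds to:

```latex
% Overrun relative to the original estimate (nominal dollars):
\frac{102 - 7}{7} \approx 13.57 \;\approx\; 1357\%\ \text{over budget}
```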
In 1972, a construction worker was fired, leading the BLF-affiliated workers to demand his rehiring and a 25% wage increase. In response, all the workers were fired, and in retaliation the workers broke into the construction site with a crowbar and brought their own toolboxes. Workers' control was applied to the site for five weeks as the construction workers worked 35 hours a week with improved morale, more efficient organization and fewer people skipping work. The workers agreed to end their work-in when management agreed to give them a 25% wage increase, the right to elect their foremen, four weeks' annual leave and a large payment for their troubles.
Before the Sydney Opera House competition, Jørn Utzon had won seven of the 18 competitions he had entered but had never seen any of his designs built. Utzon's submitted concept for the Sydney Opera House was almost universally admired and considered groundbreaking. The Assessors Report of January 1957, stated:
For the first stage, Utzon worked successfully with the rest of the design team and the client, but, as the project progressed, the Cahill government insisted on progressive revisions. They also did not fully appreciate the costs or work involved in design and construction. Tensions between the client and the design team grew further when an early start to construction was demanded despite an incomplete design. This resulted in a continuing series of delays and setbacks while various technical engineering issues were being refined. The building was unique, and the problems with the design issues and cost increases were exacerbated by commencement of work before the completion of the final plans.
After the 1965 election of the Liberal Party, with Robert Askin becoming Premier of New South Wales, the relationship of client, architect, engineers and contractors became increasingly tense. Askin had been a "vocal critic of the project prior to gaining office." His new Minister for Public Works, Davis Hughes, was even less sympathetic. Elizabeth Farrelly, an Australian architecture critic, wrote that:
Differences ensued. One of the first was that Utzon believed the clients should receive information on all aspects of the design and construction through his practice, while the clients wanted a system (notably drawn in sketch form by Davis Hughes) where architect, contractors, and engineers each reported to the client directly and separately. This had great implications for procurement methods and cost control, with Utzon wishing to negotiate contracts with chosen suppliers (such as Ralph Symonds for the plywood interiors) and the New South Wales government insisting contracts be put out to tender.
Utzon was highly reluctant to respond to questions or criticism from the client's Sydney Opera House Executive Committee (SOHEC). However, he was greatly supported throughout by a member of the committee and one of the original competition judges, Harry Ingham Ashworth. Utzon was unwilling to compromise on some aspects of his designs that the clients wanted to change.
Utzon's ability was never in doubt, despite questions raised by Davis Hughes, who attempted to portray Utzon as an impractical dreamer. Ove Arup actually stated that Utzon was "probably the best of any I have come across in my long experience of working with architects" and: "The Opera House could become the world's foremost contemporary masterpiece if Utzon is given his head."
In October 1965, Utzon gave Hughes a schedule setting out the completion dates of parts of his work for stage III. Utzon was at this time working closely with Ralph Symonds, a manufacturer of plywood based in Sydney and highly regarded by many, despite an Arup engineer warning that Ralph Symonds's "knowledge of the design stresses of plywood, was extremely sketchy" and that the technical advice was "elementary to say the least and completely useless for our purposes." Australian architecture critic Elizabeth Farrelly has referred to Ove Arup's project engineer Michael Lewis as having "other agendas". In any case, Hughes shortly after withheld permission for the construction of plywood prototypes for the interiors, and the relationship between Utzon and the client never recovered. By February 1966, Utzon was owed more than $100,000 in fees. Hughes then withheld funding so that Utzon could not even pay his own staff. The government minutes record that following several threats of resignation, Utzon finally stated to Davis Hughes: "If you don't do it, I resign." Hughes replied: "I accept your resignation. Thank you very much. Goodbye."
Utzon left the project on 28 February 1966. He said that Hughes's refusal to pay him any fees and the lack of collaboration caused his resignation and later famously described the situation as "Malice in Blunderland". In March 1966, Hughes offered him a subordinate role as "design architect" under a panel of executive architects, without any supervisory powers over the House's construction, but Utzon rejected this. Utzon left the country never to return.
Following the resignation, there was great controversy about who was in the right and who was in the wrong. "The Sydney Morning Herald" initially opined: "No architect in the world has enjoyed greater freedom than Mr Utzon. Few clients have been more patient or more generous than the people and the Government of NSW. One would not like history to record that this partnership was brought to an end by a fit of temper on the one side or by a fit of meanness on the other." On 17 March 1966, the "Herald" offered the view that: "It was not his [Utzon's] fault that a succession of Governments and the Opera House Trust should so signally have failed to impose any control or order on the project ... his concept was so daring that he himself could solve its problems only step by step ... his insistence on perfection led him to alter his design as he went along."
The Sydney Opera House opened the way for the immensely complex geometries of some modern architecture. The design was one of the first examples of the use of computer-aided design to design complex shapes. The design techniques developed by Utzon and Arup for the Sydney Opera House have been further developed and are now used for architecture, such as works of Gehry and blobitecture, as well as most reinforced concrete structures. The design is also one of the first in the world to use araldite to glue the precast structural elements together and proved the concept for future use.
It was also a first in mechanical engineering. Another Danish firm, Steensen Varming, was responsible for designing the new air-conditioning plant, the largest in Australia at the time, supplying over of air per minute, using the innovative idea of harnessing the harbour water to create a water-cooled heat pump system that is still in operation today.
After the resignation of Utzon, the Minister for Public Works, Davis Hughes, and the Government Architect, Ted Farmer, organised a team to bring the Sydney Opera House to completion. The architectural work was divided between three appointees who became the Hall, Todd, Littlemore partnership. David Littlemore would manage construction supervision, Lionel Todd contract documentation, while the crucial role of design became the responsibility of Peter Hall.
Peter Hall (1931–1995) completed a combined arts and architecture degree at Sydney University. Upon graduation a travel scholarship enabled him to spend twelve months in Europe during which time he visited Utzon in Hellebæk. Returning to Sydney, Hall worked for the Government Architect, a branch of the NSW Public Works Department. While there he established himself as a talented design architect with a number of court and university buildings, including the Goldstein Hall at the University of New South Wales, which won the Sir John Sulman Medal in 1964.
Hall resigned from the Government Architects office in early 1966 to pursue his own practice. When approached to take on the design role, (after at least two prominent Sydney architects had declined), Hall spoke with Utzon by phone before accepting the position. Utzon reportedly told Hall: he (Hall) would not be able to finish the job and the Government would have to invite him back. Hall also sought the advice of others, including architect Don Gazzard who warned him acceptance would be a bad career move as the project would "never be his own".
Hall agreed to accept the role on the condition there was no possibility of Utzon returning. Even so, his appointment did not go down well with many of his fellow architects, who considered that no one but Utzon should complete the Sydney Opera House. Upon Utzon's dismissal, a rally of protest had marched to Bennelong Point. A petition was also circulated, including in the Government Architect's office. Peter Hall was one of the many who had signed the petition that called for Utzon's reinstatement.
When Hall agreed to the design role and was appointed in April 1966, he imagined he would find the design and documentation for the Stage III well advanced. What he found was an enormous amount of work ahead of him with many aspects completely unresolved by Utzon in relation to seating capacity, acoustics and structure. In addition Hall found the project had proceeded for nine years without the development of a concise client brief. To bring himself up to speed, Hall investigated concert and opera venues overseas and engaged stage consultant Ben Schlange and acoustic consultant Wilhelm Jordan, while establishing his team. In consultation with all the potential building users the first Review of Program was completed in January 1967. The most significant conclusion reached by Hall was that concert and opera were incompatible in the same hall. Although Utzon had sketched ideas using plywood for the great enclosing glass walls, their structural viability was unresolved when Hall took on the design role. With the ability to delegate tasks and effectively coordinate the work of consultants, Hall guided the project for over five years until the opening day in 1973.
A former Government Architect, Peter Webber, in his book "Peter Hall: the Phantom of the Opera House", concludes: when Utzon resigned no one was better qualified (than Hall) to rise to the challenge of completing the design of the Opera House.
The Sydney Opera House was formally opened by Queen Elizabeth II, Queen of Australia on 20 October 1973. A large crowd attended. Utzon was not invited to the ceremony, nor was his name mentioned. The opening was televised and included fireworks and a performance of Beethoven's Symphony No. 9.
During the construction phase, lunchtime performances were often arranged for the workers, with American vocalist Paul Robeson the first artist to perform, in 1960.
Various performances were presented prior to the official opening:
After the opening:
In the late 1990s, the Sydney Opera House Trust resumed communication with Utzon in an attempt to effect a reconciliation and to secure his involvement in future changes to the building. In 1999, he was appointed by the Trust as a design consultant for future work.
In 2004, the first interior space rebuilt to an Utzon design was opened, and renamed "The Utzon Room" in his honour. It contains an original Utzon tapestry (14.00 x 3.70 metres) called "Homage to Carl Philipp Emmanuel Bach". In April 2007, he proposed a major reconstruction of the Opera Theatre, as it was then known. Utzon died on 29 November 2008.
A state memorial service, attended by Utzon's son Jan and daughter Lin, celebrating his creative genius, was held in the Concert Hall on 25 March 2009 featuring performances, readings and recollections from prominent figures in the Australian performing arts scene.
Refurbished Western Foyer and Accessibility improvements were commissioned on 17 November 2009, the largest building project completed since Utzon was re-engaged in 1999. Designed by Utzon and his son Jan, the project provided improved ticketing, toilet and cloaking facilities. New escalators and a public lift enabled enhanced access for the disabled and families with prams. The prominent paralympian athlete Louise Sauvage was announced as the building's "accessibility ambassador" to advise on further improvements to aid people with disabilities.
On 29 March 2016, an original 1959 tapestry by Le Corbusier (2.18 x 3.55 metres), commissioned by Utzon to be hung in the Sydney Opera House and called "Les Dés Sont Jetés" (The Dice Are Cast), was finally unveiled "in situ" after being owned by the Utzon family and held at their home in Denmark for over 50 years. The tapestry was bought at auction by the Sydney Opera House in June 2015. It now hangs in the building's Western Foyer and is accessible to the public.
In the second half of 2017, the Joan Sutherland Theatre was closed to replace the stage machinery and for other works. The Concert Hall is scheduled for work in 2020–2021.
In 1993, Constantine Koukias was commissioned by the Sydney Opera House Trust in association with REM Theatre to compose "Icon", a large-scale music theatre piece for the 20th anniversary of the Sydney Opera House.
During the 2000 Summer Olympics, the venue served as the focal point for the triathlon events. The event had a swimming loop at Farm Cove, along with competitions in the neighbouring Royal Botanical Gardens for the cycling and running portions of the event.
Since 2013, a group of residents from the nearby Bennelong Apartments (better known as 'The Toaster'), calling themselves the Sydney Opera House Concerned Citizens Group, have been campaigning against Forecourt Concerts on the grounds that they exceed noise levels outlined in the development approval (DA). In February 2017 the NSW Department of Planning and the Environment handed down a $15,000 fine to the Sydney Opera House for breach of allowed noise levels at a concert held in November 2015. However the DA was amended in 2016 to allow an increase in noise levels in the forecourt by 5 decibels. The residents opposing the concerts contend that a new DA should have been filed rather than an amendment.
The Sydney Opera House sails formed a graphic projection-screen in a lightshow mounted in connection with the International Fleet Review in Sydney Harbour on 5 October 2013.
On 31 December 2013, the venue's 40th anniversary year, a New Year firework display was mounted for the first time in a decade. The Sydney Opera House hosted an event, 'the biggest blind date' on Friday 21 February 2014 that won an historic Guinness World Record. The longest continuous serving employee was commemorated on 27 June 2018, for 50 years of service.
On 14 June 2019, a state memorial service for former Australian Prime Minister Bob Hawke was held at the Sydney Opera House.
On 5 October 2018 the Opera House chief executive Louise Herron clashed with Sydney radio commentator Alan Jones, who called for her sacking for refusing to allow Racing NSW to use the Opera House sails to advertise The Everest horse race. Within hours, NSW Premier Gladys Berejiklian overruled Herron. Two days later, Prime Minister Scott Morrison supported the decision, calling the Opera House "the biggest billboard Sydney has". The NSW Labor Party leader, Luke Foley, and senior federal Labor frontbencher Anthony Albanese had supported the proposal. The political view was not supported by significant public opinion, with a petition against the advertising collecting over 298,000 names by 9 October 2018. 235,000 printed petition documents were presented to the NSW Parliament in the morning. A survey conducted on 8 October by market research firm Micromex found that 81% of those surveyed were not supportive of the premier's direction.
The opera house, along with the harbour bridge, frequently features in establishing shots in film and television to represent Sydney and the Australian nation.
This Wikipedia article contains material from "Sydney Opera House", listed on the "New South Wales State Heritage Register" published by the Government of New South Wales under CC-BY 3.0 AU licence (accessed on 3 September 2017). | https://en.wikipedia.org/wiki?curid=28222 |
Selim II
Selim II (Ottoman Turkish: سليم ثانى "Selīm-i sānī", Turkish: "II.Selim"; 28 May 1524 – 15 December 1574), also known as "Sarı Selim" ("Selim the Blond") or "Sarhoş Selim" ("Selim the Drunk"), was the Sultan of the Ottoman Empire from 1566 until his death in 1574. He was a son of Suleiman the Magnificent and his wife Hurrem Sultan. Selim had been an unlikely candidate for the throne until his brother Mehmed died of smallpox, his half-brother Mustafa was strangled to death by the order of his father, his brother Cihangir died of grief at the news of this latter execution, and his brother Bayezid was killed on the order of his father after a rebellion against Selim.
Selim died on 15 December 1574 and was buried in Hagia Sophia.
Selim was born in Constantinople (Istanbul) on 30 May 1524, during the reign of his father Suleiman the Magnificent. His mother was Hurrem Sultan, a slave and concubine who was born an Orthodox priest's daughter, and who was later freed and became Suleiman's legal wife.
In 1545, at Konya, Selim married Nurbanu Sultan, whose background is disputed. It is said that she was originally named Cecelia Venier Baffo, or Rachel, or Kale Katenou. She was the mother of Murad III, Selim's successor.
Hubbi Hatun, a famous poet of the sixteenth century, was a lady-in-waiting to him.
Selim II gained the throne after palace intrigue and fraternal dispute, succeeding as sultan on the 7th of September 1566. Selim's Grand Vizier, Mehmed Sokollu and wife, Nurbanu Sultan, a native of what is now Bosnia and Herzegovina, controlled much of state affairs, and two years after Selim's accession succeeded in concluding at Constantinople a treaty (17 February 1568) with the Habsburg Holy Roman Emperor, Maximilian II, whereby the Emperor agreed to pay an annual "present" of 30,000 ducats and granted the Ottomans authority in Moldavia and Walachia.
A plan had been prepared in Constantinople for uniting the Volga and Don by a canal in order to counter Russian expansion toward the Ottomans' northern frontier. In the summer of 1569 a large force of Janissaries and cavalry were sent to lay siege to Astrakhan and begin the canal works, while an Ottoman fleet besieged Azov. However, a sortie from the Astrakhan garrison drove back the besiegers. A Russian relief army of 15,000 attacked and scattered the workmen and the Tatar force sent for their protection. The Ottoman fleet was then destroyed by a storm. Early in 1570 the ambassadors of Ivan IV of Russia concluded at Istanbul a treaty that restored friendly relations between the Sultan and the Tsar.
Expeditions in the Hejaz and Yemen were more successful, but the conquest of Cyprus in 1571 led to the naval defeat against Spain and the Italian states at the Battle of Lepanto in the same year.
The Empire's shattered fleets were soon restored (in just six months, it consisted of about 150 galleys and eight galleasses), and the Ottomans maintained control of the eastern Mediterranean (1573). In August 1574, months before Selim's death, the Ottomans regained control of Tunis from Spain, which had captured it in 1572.
Selim is known for restoring to Mahidevran Sultan her status and her wealth. He also built the tomb of his eldest brother, Şehzade Mustafa, who was executed in 1553.
During the famine of 1573, when severe cold had ruined the farmers' harvests, Selim had food and vegetables distributed to the people through the public kitchens. In April 1574 a fire broke out in the kitchens of Topkapı Palace, burning many rooms, including the quarters of the cooks and servants, and the kitchen itself. A few days later the kapudan, the Janissary agha, the governor of Istanbul and the chief architect Mimar Sinan came to the site of the fire and determined the location and size of the new kitchens. Mimar Sinan Ağa cleared the fire-damaged site and rebuilt the structure in a different style (üslûb-ı âhar). The new imperial kitchens (Matbah-ı Âmire), broader and longer than the previous ones, took in ground from the square of the Imperial Council (Divan-ı Âli).
The sources of the period present him as a generous monarch, fond of pleasure, entertainment and drinking parties, who enjoyed having scholars, poets, musicians and wrestlers around him and did not want to hurt anyone's feelings. However, it is also stated that he rarely appeared in public: whereas his father had regularly gone out to the Friday prayers and shown himself to the people, Selim neglected this and spent his time in the palace.
Selim's first and only wife, Nurbanu Sultan, was a Venetian who was the mother of his successor Murad III and three of his daughters. As a Haseki Sultan she received 1,000 aspers a day, while lower-ranking concubines who were the mothers of princes received 40 aspers a day. Selim bestowed upon Nurbanu 110,000 ducats as a dowry, surpassing the 100,000 ducats that his father bestowed upon his mother Hürrem Sultan. According to a privy purse register cited by Leslie Peirce, Selim had four other women, and each of them was the mother of a prince. Augusta Hamilton records that he had two thousand concubines.
Selim had seven sons:
Selim had at least four daughters:
| https://en.wikipedia.org/wiki?curid=28223
Speaker for the Dead
Speaker for the Dead is a 1986 science fiction novel by American writer Orson Scott Card, an indirect sequel to the novel "Ender's Game". The book takes place around the year 5270, some 3,000 years after the events in "Ender's Game". However, because of relativistic space travel at near-light speed, Ender himself is only about 35 years old.
This is the first book to discuss the Starways Congress, the legislative body governing the human space colonies. It is also the first to describe the Hundred Worlds, the planets with human colonies that are tightly intertwined by ansible technology, which enables instantaneous communication even across light years of distance.
Like "Ender's Game", the book won the Nebula Award in 1986 and the Hugo Award in 1987. "Speaker for the Dead" was published in a slightly revised edition in 1991. It was followed by "Xenocide" and "Children of the Mind".
Some years after the xenocide of the Formic species (in "Ender's Game"), Ender Wiggin writes a book called "The Hive Queen", describing the life of the Formics as described to him by the dormant Formic Queen whom he secretly carries. As humanity uses light-speed travel to establish new colonies, Ender and his sister Valentine age slowly through relativistic travel. Ender's older brother, the now-aged Hegemon Peter Wiggin, recognizes Ender's writings in "The Hive Queen", and requests Ender write for him once he dies. Ender agrees and authors "The Hegemon". These two books, written under the pseudonym "Speaker for the Dead", launch a new religious movement of Speakers, who have authority to investigate and eulogize a person and their work after their death.
Three thousand years after the Formic xenocide, humans have spread across the Hundred Worlds, ruled by Starways Congress. A Brazilian Catholic human colony called Milagre is established on the planet Lusitania (1886 S.C.). The planet is home to a sentient species of symbiotic forest dwellers. The colonists (who primarily speak Portuguese) dub them "Pequeninos" (Little Ones) but they are often referred to as "piggies" due to their porcine snouts. Their society is matriarchal and gender-segregated, and their belief system centers around the trees of the forests. The Pequeninos prove to be of great interest to scientists. Since humans had destroyed the only sentient species they had encountered (the Formics), special care is taken to ensure no similar mistakes are made with the Pequeninos. The colony is fenced in, strictly regulated to limit contact with the Pequeninos to a handful of scientists, and forbidden to share human technology with them. Shortly after the colony's founding, many of the colonists die from the "Descolada" virus (Portuguese for "uncoiled"), which causes terrible pain, rampant cancerous growth of fungus and even extra limbs, decay of healthy tissue, and death. The xenobiologists Gusto and Cida von Hesse manage to create a treatment for the virus before succumbing to it themselves (1936 S.C.), leaving behind their young daughter Novinha.
"Further information:" List of Ender's Game characters
Eight years after the Descolada virus is cured, Xenologer Pipo and his thirteen-year-old son and apprentice Libo have developed a friendship with the Pequeninos. They allow Novinha to join their science team as the colony's only xenobiologist, after she manages to pass the test at age thirteen. After accidentally sharing information about human genders with a male Pequenino named Rooter, the scientists find Rooter's body eviscerated, a sapling planted within it, and guess this may be a torturous sacrificial ritual.
A few years later, Novinha discovers that every lifeform on Lusitania carries the "Descolada" virus which, though lethal to humans, appears to serve a beneficial purpose to native lifeforms. When Pipo learns of this, he suddenly has an insight, and before he tells the others, races off to talk to the Pequeninos. Hours later, Libo and Novinha find Pipo's body cut open just as Rooter's had been, but with no sapling planted. As Pipo's death appears unprovoked, the Pequeninos are now considered a threat by the Starways Congress and restrictions on studying them are tightened. Novinha, having fallen in love with Libo but fearing that he will find out from her files what led to Pipo's death, marries another colonist, Marcos Ribeira, so as to lock her files from being opened, under colony law. Emotionally distraught, she then makes a call for a Speaker for the Dead for Pipo.
Andrew "Ender" Wiggin, living innocuously on the planet Trondheim, responds to Novinha's call. Though she has traveled with him for thousands of years, his sister Valentine is now pregnant and settled. He travels on alone, save for an artificial intelligence named Jane who communicates with Ender through a jewel in his ear and appears to live in the ansible network that enables faster-than-light communications. After relativistic travel, Ender arrives at Lusitania 22 years later (1970 S.C.), finding that Novinha had canceled her request for a Speaker five days after sending it. In the intervening time, Libo had died in a similar manner to Pipo, and Marcos had succumbed to a chronic illness. Novinha's eldest children, Ela and Miro, have requested a Speaker for Libo and Marcos. Ender, gaining access to all of the appropriate files, learns of tension since Pipo's death: Novinha has turned away from xenobiology to study crop growth, which created a loveless relationship with Marcos; Miro has secretly worked with Ouanda to continue to study the Pequeninos, while sharing human technology and knowledge with them. Over the course of time, Miro and Ouanda have fallen in love. With Ender's arrival, Miro tells him that one of the Pequeninos, Human, has taken a great interest in Ender, and Ender becomes aware that Human can hear messages from the Formic Hive Queen. Ender and Jane discover that Marcos was infertile: all six of Novinha's children, including Miro, were fathered by Libo, Ender also learns what Pipo had seen in Novinha's data.
As word of Miro's and Ouanda's illegal sharing of human technology with the Pequeninos is reported to Congress, Ender secretly goes to meet with the Pequeninos. They know his true identity, and they implore him to help them be part of civilization, while the Formic Queen tells Ender that Lusitania would be an ideal place to restart the hive, as her race can help guide the Pequeninos. By the time Ender returns to the colony, Congress has ordered Miro and Ouanda to be sent off-planet for penal action and the colony to be disbanded. Ender delivers his eulogy for Marcos, revealing Novinha's infidelity. Miro, distraught to learn that he is Ouanda's half-brother, attempts to escape to the Pequeninos, but he suffers neurological damage as he tries to cross the electrified fence. Ender reveals to the colony what he discovered that Pipo had learned: that every life form on Lusitania is paired with another through the "Descolada" virus, so that the death of one births the other, as in the case of the Pequeninos, who become trees when they die. The colony leaders recognize the truth of Ender's words and agree to rebel against Congress, severing their ansible connection and deactivating the fence, allowing Ender, Ouanda, and Ela to go with Human to speak to the Pequenino wives, to help establish a case to present to Congress.
The Pequenino wives help Ender corroborate the complex life cycle of the Pequeninos, affirming that the death ritual Pipo observed was to help create "fathertrees" who fertilize the Pequenino females to continue their race. The Pequeninos believed they were honoring Pipo, and later Libo, by helping them become fathertrees, but Ender explains that humans lack this "third life", and if the Pequeninos are to cohabitate with humans, they must respect this difference. To affirm their understanding, Ender is allowed to perform the ritual of giving Human "third life" as a fathertree, providing Ouanda with the confirmation needed to present to Congress.
Miro recovers from most of the physical damage from his encounter with the fence, but he is still paralyzed. Valentine and her family inform Ender they plan to help Lusitania with the revolt, and they are traveling from Trondheim to help; Ender has Miro meet them halfway. Novinha, having gained understanding into the death of Pipo and Libo, finally absolves herself of her guilt, and she and Ender marry. Ender plants the Hive Queen as per her request, and he writes his third book, a biography of the life of the Pequenino, Human.
At the Los Angeles Times Book Festival (April 20, 2013), Card stated why he does not want "Speaker for the Dead" made into a film:
""Speaker for the Dead" is unfilmable," Card said in response to a question from the audience. "It consists of talking heads, interrupted by moments of excruciating and unwatchable violence. Now, I admit, there's plenty of unwatchable violence in film, but never attached to my name. "Speaker for the Dead", I don't want it to be filmed. I can't imagine it being filmed."
Card writes in his introduction to the 1991 edition that he has received letters from readers who have conducted "Speakings" at funerals. | https://en.wikipedia.org/wiki?curid=28230 |
Star catalogue
A star catalogue (Commonwealth English) or star catalog (American English) is an astronomical catalogue that lists stars. In astronomy, many stars are referred to simply by catalogue numbers. There are a great many different star catalogues which have been produced for different purposes over the years, and this article covers only some of the more frequently quoted ones. Star catalogues were compiled by many different ancient peoples, including the Babylonians, Greeks, Chinese, Persians, and Arabs. They were sometimes accompanied by a star chart for illustration. Most modern catalogues are available in electronic format and can be freely downloaded from space agencies' data centres. The largest is being compiled from data gathered by the Gaia spacecraft, and thus far it includes over a billion stars.
The completeness and accuracy of a catalogue are described by its faintest limiting magnitude V (the largest magnitude number reached) and by the accuracy of its positions.
From their existing records, it is known that the ancient Egyptians recorded the names of only a few identifiable constellations and a list of thirty-six decans that were used as a star clock. The Egyptians called the circumpolar star "the star that cannot perish" and, although they made no known formal star catalogues, they nonetheless created extensive star charts of the night sky which adorn the coffins and ceilings of tomb chambers.
Although the ancient Sumerians were the first to record the names of constellations on clay tablets, the earliest known star catalogues were compiled by the ancient Babylonians of Mesopotamia in the late 2nd millennium BC, during the Kassite Period (c. 1531 BC to c. 1155 BC). They are better known by their Assyrian-era name 'Three Stars Each'. These star catalogues, written on clay tablets, listed thirty-six stars: twelve for "Anu" along the celestial equator, twelve for "Ea" south of that, and twelve for "Enlil" to the north. The Mul.Apin lists, dated to sometime before the Neo-Babylonian Empire (626–539 BC), are direct textual descendants of the "Three Stars Each" lists and their constellation patterns show similarities to those of later Greek civilization.
In Ancient Greece, the astronomer and mathematician Eudoxus laid down a full set of the classical constellations around 370 BC. His catalogue "Phaenomena", rewritten by Aratus of Soli between 275 and 250 BC as a didactic poem, became one of the most consulted astronomical texts in antiquity and beyond. It contained descriptions of the positions of the stars and the shapes of the constellations, and provided information on their relative times of rising and setting.
Approximately in the 3rd century BC, the Greek astronomers Timocharis of Alexandria and Aristillus created another star catalogue. Hipparchus (c. 190 – c. 120 BC) completed his star catalogue in 129 BC, which he compared to Timocharis' and discovered that the longitude of the stars had changed over time. This led him to determine the first value of the precession of the equinoxes. In the 2nd century, Ptolemy (c. 90 – c. 186 AD) of Roman Egypt published a star catalogue as part of his "Almagest", which listed 1,022 stars visible from Alexandria. Ptolemy's catalogue was based almost entirely on an earlier one by Hipparchus. It remained the standard star catalogue in the Western and Arab worlds for over eight centuries. The Islamic astronomer al-Sufi updated it in 964, and the star positions were redetermined by Ulugh Beg in 1437, but it was not fully superseded until the appearance of the thousand-star catalogue of Tycho Brahe in 1598.
Although the ancient Vedas of India specified how the ecliptic was to be divided into twenty-eight "nakshatra", Indian constellation patterns were ultimately borrowed from Greek ones sometime after Alexander's conquests in Asia in the 4th century BC.
The earliest known inscriptions for Chinese star names were written on oracle bones and date to the Shang Dynasty (c. 1600 – c. 1050 BC). Sources dating from the Zhou Dynasty (c. 1050 – 256 BC) which provide star names include the "Zuo Zhuan", the "Shi Jing", and the "Canon of Yao" (堯典) in the "Book of Documents". The "Lüshi Chunqiu" written by the Qin statesman Lü Buwei (d. 235 BC) provides most of the names for the twenty-eight mansions (i.e. asterisms across the ecliptic belt of the celestial sphere used for constructing the calendar). An earlier lacquerware chest found in the Tomb of Marquis Yi of Zeng (interred in 433 BC) contains a complete list of the names of the twenty-eight mansions. Star catalogues are traditionally attributed to Shi Shen and Gan De, two rather obscure Chinese astronomers who may have been active in the 4th century BC of the Warring States period (403–221 BC). The "Shi Shen astronomy" (石申天文, Shi Shen tianwen) is attributed to Shi Shen, and the "Astronomic star observation" (天文星占, Tianwen xingzhan) to Gan De.
It was not until the Han Dynasty (202 BC – 220 AD) that astronomers started to observe and record names for all the stars that were apparent (to the naked eye) in the night sky, not just those around the ecliptic. A star catalogue is featured in one of the chapters of the late 2nd-century-BC history work "Records of the Grand Historian" by Sima Qian (145–86 BC) and contains the "schools" of Shi Shen and Gan De's work (i.e. the different constellations they allegedly focused on for astrological purposes). Sima's catalogue—the "Book of Celestial Offices" (天官書 Tianguan shu)—includes some 90 constellations, the stars therein named after temples, ideas in philosophy, locations such as markets and shops, and different people such as farmers and soldiers. For his "Spiritual Constitution of the Universe" (靈憲, Ling Xian) of 120 AD, the astronomer Zhang Heng (78–139 AD) compiled a star catalogue comprising 124 constellations. Chinese constellation names were later adopted by the Koreans and Japanese.
A large number of star catalogues were published by Muslim astronomers in the medieval Islamic world. These were mainly "Zij" treatises, including Arzachel's "Tables of Toledo" (1087), the Maragheh observatory's "Zij-i Ilkhani" (1272) and Ulugh Beg's "Zij-i-Sultani" (1437). Other famous Arabic star catalogues include Alfraganus' "A compendium of the science of stars" (850) which corrected Ptolemy's "Almagest"; and Azophi's "Book of Fixed Stars" (964) which described observations of the stars, their positions, magnitudes, brightness and colour, drawings for each constellation, and the first descriptions of Andromeda Galaxy and the Large Magellanic Cloud. Many stars are still known by their Arabic names (see List of Arabic star names).
The "Motul Dictionary", compiled in the 16th century by an anonymous author (although attributed to Fray Antonio de Ciudad Real), contains a list of stars originally observed by the ancient Mayas. The Maya Paris Codex also contain symbols for different constellations which were represented by mythological beings.
Two systems introduced in historical catalogues remain in use to the present day. The first system comes from the German astronomer Johann Bayer's "Uranometria", published in 1603 and regarding bright stars. These are given a Greek letter followed by the genitive case of the constellation in which they are located; examples are Alpha Centauri or Gamma Cygni. The major problem with Bayer's naming system was the number of letters in the Greek alphabet (24). It was easy to run out of letters before running out of stars needing names, particularly for large constellations such as Argo Navis. Bayer extended his lists up to 67 stars by using lower-case Roman letters ("a" through "z") then upper-case ones ("A" through "Q"). Few of those designations have survived. It is worth mentioning, however, as it served as the starting point for variable star designations, which start with "R" through "Z", then "RR", "RS", "RT"..."RZ", "SS", "ST"..."ZZ" and beyond.
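The variable-star lettering that grew out of Bayer's leftover letters follows a purely mechanical sequence, so it can be generated programmatically. The following Python sketch is only an illustration of the sequence described above (single letters R through Z, then two-letter combinations in which the second letter never precedes the first); it is not taken from any catalogue software, and the historical scheme continues beyond ZZ.

```python
# Sketch: generate the opening portion of the classical variable-star
# letter sequence described above (R..Z, then RR..RZ, SS..SZ, ..., ZZ).
def variable_star_letters():
    letters = [chr(c) for c in range(ord("R"), ord("Z") + 1)]  # R..Z
    sequence = list(letters)                                   # nine single letters
    for i, first in enumerate(letters):
        for second in letters[i:]:                             # second letter never precedes the first
            sequence.append(first + second)                    # RR, RS, ..., RZ, SS, ..., ZZ
    return sequence

def designation(index, constellation_genitive):
    """Return the index-th variable-star designation in a constellation."""
    return f"{variable_star_letters()[index]} {constellation_genitive}"

print(designation(0, "Cygni"))   # R Cygni
print(designation(9, "Lyrae"))   # RR Lyrae, the tenth designation in the sequence
```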
The second system comes from the English astronomer John Flamsteed's "Historia coelestis Britannica" (1725). It kept the genitive-of-the-constellation rule for the back end of his catalogue names, but used numbers instead of the Greek alphabet for the front half. Examples include 61 Cygni and 47 Ursae Majoris.
Bayer and Flamsteed covered only a few thousand stars between them. In theory, full-sky catalogues try to list every star in the sky. There are, however, billions of stars resolvable by telescopes, so this is an impossible goal; in practice, such catalogues attempt to include every star brighter than a given limiting magnitude.
Jérôme Lalande published the "Histoire Céleste Française" in 1801, which contained an extensive star catalog, among other things. The observations were made from the Paris Observatory, and so the catalogue describes mostly northern stars. It contained the positions and magnitudes of 47,390 stars, out to magnitude 9, and was the most complete catalogue up to that time. A significant reworking of this catalogue in 1846 added reference numbers that are still used to refer to some of these stars to this day. The decent accuracy of this catalogue kept it in common use as a reference by observatories around the world throughout the 19th century.
The "Bonner Durchmusterung" ("German": Bonn sampling) and follow-ups were the most complete of the pre-photographic star catalogues.
The "Bonner Durchmusterung" itself was published by Friedrich Wilhelm Argelander, Adalbert Krüger, and Eduard Schönfeld between 1852 and 1859. It covered 320,000 stars in epoch 1855.0.
As it covered only the northern sky and some of the south (being compiled from the Bonn observatory), it was then supplemented by the "Südliche Durchmusterung" (SD), which covers stars between declinations −1 and −23 degrees (1886, 120,000 stars). It was further supplemented by the "Cordoba Durchmusterung" (580,000 stars), which began to be compiled at Córdoba, Argentina, in 1892 under the initiative of John M. Thome and covers declinations −22 to −90. Lastly, the "Cape Photographic Durchmusterung" (450,000 stars, 1896), compiled at the Cape, South Africa, covers declinations −18 to −90.
Astronomers preferentially use the HD designation (see next entry) of a star, as that catalogue also gives spectroscopic information, but as the Durchmusterungs cover more stars they occasionally fall back on the older designations when dealing with one not found in Draper. Unfortunately, many catalogues cross-reference the Durchmusterungs without specifying which one is used in the zones of overlap, so some confusion often remains.
Star names from these catalogues include the initials of which of the four catalogues they are from (though the "Southern" follows the example of the "Bonner" and uses BD; CPD is often shortened to CP), followed by the angle of declination of the star (rounded towards zero, and thus ranging from +00 to +89 and −00 to −89), followed by an arbitrary number as there are always thousands of stars at each angle. Examples include BD+50°1725 or CD−45°13677.
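Because the declination zone is "rounded towards zero", a designation is produced by truncating the declination and keeping its sign, so that zones "+00" and "−00" are distinct. The sketch below is a minimal illustration of that convention only; the running number within a zone is assigned by the catalogue itself and appears here purely as a placeholder.

```python
import math

def dm_zone(declination_deg):
    """Durchmusterung declination zone: truncate toward zero, keep the sign.
    A star at -0.4 deg falls in zone '-00'; one at +50.7 deg in zone '+50'."""
    zone = abs(math.trunc(declination_deg))      # round toward zero
    sign = "-" if declination_deg < 0 else "+"   # sign is kept even for zone 0
    return f"{sign}{zone:02d}"

def dm_designation(catalogue, declination_deg, running_number):
    # running_number is the catalogue's own sequential number within the zone;
    # it cannot be derived from the coordinates.
    return f"{catalogue}{dm_zone(declination_deg)}°{running_number}"

print(dm_designation("BD", 50.7, 1725))    # BD+50°1725
print(dm_designation("CD", -45.2, 13677))  # CD-45°13677
```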
The Henry Draper Catalogue was published in the period 1918–1924. It covers the whole sky down to about ninth or tenth magnitude, and is notable as the first large-scale attempt to catalogue spectral types of stars.
The catalogue was compiled by Annie Jump Cannon and her co-workers at Harvard College Observatory under the supervision of Edward Charles Pickering, and was named in honour of Henry Draper, whose widow donated the money required to finance it.
HD numbers are widely used today for stars which have no Bayer or Flamsteed designation. Stars numbered 1–225300 are from the original catalogue and are numbered in order of right ascension for the 1900.0 epoch. Stars in the range 225301–359083 are from the 1949 extension of the catalogue. The notation HDE can be used for stars in this extension, but they are usually denoted HD as the numbering ensures that there can be no ambiguity.
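As a small illustration of those ranges (based only on the numbers quoted above), the following sketch reports whether an HD number comes from the original catalogue or from the 1949 extension:

```python
def hd_source(hd_number):
    """Classify an HD number using the ranges given above."""
    if 1 <= hd_number <= 225300:
        return "original Henry Draper Catalogue (numbered by 1900.0 right ascension)"
    if 225301 <= hd_number <= 359083:
        return "1949 extension (sometimes written HDE)"
    raise ValueError("outside the HD/HDE ranges quoted here")

print(hd_source(48915))   # original catalogue (HD 48915 is Sirius)
print(hd_source(226868))  # 1949 extension
```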
The "Catalogue astrographique" (Astrographic Catalogue) was part of the international "Carte du Ciel" programme designed to photograph and measure the positions of all stars brighter than magnitude 11.0. In total, over 4.6 million stars were observed, many as faint as 13th magnitude. This project was started in the late 19th century. The observations were made between 1891 and 1950. To observe the entire celestial sphere without burdening too many institutions, the sky was divided among 20 observatories, by declination zones. Each observatory exposed and measured the plates of its zone, using a standardized telescope (a "normal astrograph") so each plate photographed had a similar scale of approximately 60 arcsecs/mm. The U.S. Naval Observatory took over custody of the catalogue, now in its 2000.2 edition.
First published in 1930 as the "Yale Catalog of Bright Stars", this catalogue contained information on all stars brighter than visual magnitude 6.5 in the "Harvard Revised Photometry Catalogue". The list was revised in 1983 with the publication of a supplement that listed additional stars down to magnitude 7.1. The catalogue detailed each star's coordinates, proper motions, photometric data, spectral types, and other useful information.
The last printed version of the Bright Star Catalogue was the 4th revised edition, released in 1982. The 5th edition is in electronic form and is available online.
The Smithsonian Astrophysical Observatory catalogue was compiled in 1966 from various previous astrometric catalogues, and contains only the stars to about ninth magnitude for which accurate proper motions were known. There is considerable overlap with the Henry Draper catalogue, but any star lacking motion data is omitted. The epoch for the position measurements in the latest edition is J2000.0. The SAO catalogue's one major piece of information not in Draper is the proper motion of the stars, so it is often used when proper motions are of importance. The cross-references with the Draper and Durchmusterung catalogue numbers in the latest edition are also useful.
Names in the SAO catalogue start with the letters SAO, followed by a number. The numbers are assigned following 18 ten-degree bands in the sky, with stars sorted by right ascension within each band.
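A minimal sketch of that banding follows. It assumes the eighteen ten-degree bands are counted from the north celestial pole southward (band 1 covering +90° to +80°, band 18 covering −80° to −90°), which is how the scheme is usually described but is not spelled out above; the running number within a band comes from the catalogue's own right-ascension ordering and is not computed here.

```python
def sao_band(declination_deg):
    """Return the 1-based ten-degree SAO declination band.
    Assumption (see text above): bands counted from the north pole southward."""
    if not -90.0 <= declination_deg <= 90.0:
        raise ValueError("declination must lie between -90 and +90 degrees")
    band = int((90.0 - declination_deg) // 10) + 1
    return min(band, 18)   # place the south celestial pole itself in band 18

print(sao_band(89.3))   # 1  (near the north celestial pole)
print(sao_band(-0.5))   # 10 (just south of the celestial equator)
```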
USNO-B1.0 is an all-sky catalogue created by research and operations astrophysicists at the U.S. Naval Observatory (as developed at the United States Naval Observatory Flagstaff Station), that presents positions, proper motions, magnitudes in various optical passbands, and star/galaxy estimators for 1,042,618,261 objects derived from 3,643,201,733 separate observations. The data was obtained from scans of 7,435 Schmidt plates taken for the various sky surveys during the last 50 years. USNO-B1.0 is believed to provide all-sky coverage, completeness down to V = 21, 0.2 arcsecond astrometric accuracy at J2000.0, 0.3 magnitude photometric accuracy in up to five colors, and 85% accuracy for distinguishing stars from non-stellar objects. USNO-B is now followed by NOMAD; both can be found on the Naval Observatory server. The Naval Observatory is currently working on B2 and C variants of the USNO catalogue series.
The "Guide Star Catalog" is an online catalogue of stars produced for the purpose of accurately positioning and identifying stars satisfactory for use as guide stars by the Hubble Space Telescope program. The first version of the catalogue was produced in the late 1980s by digitizing photographic plates and contained about 20 million stars, out to about magnitude 15. The latest version of this catalogue contains information for 945,592,683 stars, out to magnitude 21. The latest version continues to be used to accurately position the Hubble Space Telescope.
The PPM Star Catalogue (1991) was one of the best sources for both proper motions and star positions until 1999. It is not as precise as the Hipparcos catalogue, but it contains many more stars. The PPM was built from the BD, SAO, HD and other catalogues using a sophisticated reduction algorithm, and it serves as an extension of the Fifth Fundamental Catalogue (FK5), "Catalogues of Fundamental Stars".
The Hipparcos catalogue was compiled from the data gathered by the European Space Agency's astrometric satellite "Hipparcos", which was operational from 1989 to 1993. The catalogue was published in June 1997 and contains 118,218 stars; an updated version with re-processed data was published in 2007. It is particularly notable for its parallax measurements, which are considerably more accurate than those produced by ground-based observations.
The Gaia catalogue is released in stages that will contain increasing amounts of information; the early releases will also miss some stars, especially fainter stars located in dense star fields. Data from every data release can be accessed at the "Gaia" archive. Gaia DR1, the first data release of the spacecraft "Gaia" mission, based on 14 months of observations made through September 2015, took place on 13 September 2016. The data release includes positions and magnitudes in a single photometric band for 1.1 billion stars using only "Gaia" data, positions, parallaxes and proper motions for more than 2 million stars based on a combination of "Gaia" and Tycho-2 data for those objects in both catalogues, light curves and characteristics for about 3000 variable stars, and positions and magnitudes for more than 2000 extragalactic sources used to define the celestial reference frame. The second data release (DR2), which occurred on 25 April 2018, is based on 22 months of observations made between 25 July 2014 and 23 May 2016. It includes positions, parallaxes and proper motions for about 1.3 billion stars and positions of an additional 300 million stars, red and blue photometric data for about 1.1 billion stars and single colour photometry for an additional 400 million stars, and median radial velocities for about 7 million stars between magnitude 4 and 13. It also contains data for over 14,000 selected Solar System objects. The full "Gaia" catalogue will be released in 2022.
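The Gaia archive mentioned above can be queried with ADQL through its public interface; from Python this is commonly done with the astroquery package. The sketch below is only a hedged illustration: the astroquery call pattern and the gaiadr2.gaia_source table name should be checked against the current archive documentation, since they are not stated in this article.

```python
# Illustrative only: requires the astroquery package and network access to the
# ESA Gaia archive; table and column names refer to Gaia DR2 and should be
# verified against the archive documentation for later releases.
from astroquery.gaia import Gaia

query = """
SELECT TOP 10 source_id, ra, dec, parallax, phot_g_mean_mag
FROM gaiadr2.gaia_source
WHERE parallax > 50
ORDER BY phot_g_mean_mag
"""
# parallax is in milliarcseconds, so parallax > 50 selects stars closer
# than roughly 20 parsecs.

job = Gaia.launch_job(query)   # synchronous query against the public archive
results = job.get_results()    # returned as an astropy Table
print(results)
```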
Specialized catalogues make no effort to list all the stars in the sky, working instead to highlight a particular type of star, such as variables or nearby stars.
Aitken's double star catalogue (1932) lists 17,180 double stars north of declination −30 degrees.
Stephenson's General Catalogue of galactic Carbon stars is a catalogue of 7000+ carbon stars.
The Gliese (later Gliese–Jahreiß) catalogue attempts to list all star systems within a fixed distance of Earth, ordered by right ascension (see the List of nearest stars); later editions expanded that distance limit. Numbers in the range 1.0–915.0 (Gl numbers) are from the second edition. The integers up to 915 represent systems which were already in the first edition, while numbers with a decimal point were used to insert new star systems for the second edition without destroying the desired order by right ascension. This second edition is referred to as CNS2, although that name is never used in catalogue numbers.
Numbers in the range 9001–9850 (Wo numbers) are from a later supplement.
Numbers in the ranges 1000–1294 and 2001–2159 (GJ numbers) are from a further supplement.
The range 1000–1294 represents nearby stars, while 2001–2159 represents suspected nearby stars. In the literature, the GJ numbers are sometimes retroactively extended to the Gl numbers (since there is no overlap). For example, Gliese 436 can be interchangeably referred to as either Gl 436 or GJ 436.
Numbers in the range 3001–4388 are from the preliminary third edition of the catalogue. Although this version was termed "preliminary", it is still the current one and is referred to as CNS3. It lists a total of 3,803 stars. Most of these stars already had GJ numbers, but there were also 1,388 which were not numbered. The need to give these 1,388 stars "some" name resulted in them being numbered 3001–4388 (NN numbers, for "no name"), and data files of this catalogue now usually include these numbers. An example of a star which is often referred to by one of these unofficial GJ numbers is GJ 3021.
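Putting these ranges together, the following sketch (based only on the ranges quoted in this section) labels which edition or supplement a Gliese–Jahreiß number belongs to:

```python
def gliese_component(number):
    """Classify a Gliese/GJ catalogue number using the ranges quoted above."""
    if 1.0 <= number <= 915.0:
        # Integers were already in the first edition; decimal numbers were
        # inserted for the second edition (CNS2) to keep right-ascension order.
        return "Gl number (second edition, CNS2)"
    if 9001 <= number <= 9850:
        return "Wo number (supplement)"
    if 1000 <= number <= 1294 or 2001 <= number <= 2159:
        return "GJ number (supplement; 2001-2159 are suspected nearby stars)"
    if 3001 <= number <= 4388:
        return "NN ('no name') number from the third edition, CNS3"
    return "outside the ranges listed here"

print(gliese_component(436))    # Gl number; also written GJ 436
print(gliese_component(3021))   # NN number, usually cited as GJ 3021
```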
The General Catalogue of Trigonometric Parallaxes, first published in 1952 and later superseded by the New GCTP (now in its fourth edition), covers nearly 9,000 stars. Unlike the Gliese, it does not cut off at a given distance from the Sun; rather it attempts to catalogue all known measured parallaxes. It gives the co-ordinates in 1900 epoch, the secular variation, the proper motion, the weighted average absolute parallax and its standard error, the number of parallax observations, quality of interagreement of the different values, the visual magnitude and various cross-identifications with other catalogues. Auxiliary information, including UBV photometry, MK spectral types, data on the variability and binary nature of the stars, orbits when available, and miscellaneous information to aid in determining the reliability of the data are also listed.
A common way of detecting nearby stars is to look for relatively high proper motions. Several such catalogues exist; a few are mentioned here. The Ross and Wolf catalogues pioneered the field.
Willem Jacob Luyten later produced a series of catalogues:
L – Luyten, Proper motion stars and White dwarfs
LFT – Luyten Five-Tenths catalogue
LHS – Luyten Half-Second catalogue
LTT – Luyten Two-Tenths catalogue
NLTT – New Luyten Two-Tenths catalogue
LPM – Luyten Proper-Motion catalogue
Around the same time, Henry Lee Giclas worked on a similar series of catalogues.
The "ubvyβ Photoelectric Photometric Catalogue" is a compilation of previously published photometric data. Published in 1998, the catalogue includes 63,316 stars surveyed through 1996.
The Robertson's "Zodiacal Catalogue", collected by the astronomer James Robertson, is a catalogue of 3539 zodiacal stars brighter than 9th magnitude. It is mainly used for Star Occultations by the Moon.
Stars evolve and move over time, making catalogues evolving, impermanent databases even at the most rigorous levels of production. The USNO catalogues are the most current and widely used astrometric catalogues available at present, and include USNO products such as USNO-B (the successor to USNO-A), NOMAD, UCAC and others in production or narrowly released. Some users may see specialized catalogues (more recent versions of the above), tailored catalogues, interferometrically produced catalogues, dynamic catalogues, and those with updated positions, motions, colors, and improved errors. Catalogue data is continually collected at the Naval Observatory dark-sky facility, NOFS, and the latest refined, updated catalogues are reduced and produced by NOFS and the USNO. See the USNO Catalog and Image Servers for more information and access.
Stellar designations and names
In astronomy, stars have a variety of different stellar designations and names, including catalogue designations, current and historical proper names, and foreign language names.
Only a tiny minority of known stars have proper names; all others have only designations from various catalogues or lists, or no identifier at all. Hipparchus in the 2nd century BC enumerated about 850 naked-eye stars. Johann Bayer in 1603 listed about twice this number. Only in the 19th century did star catalogues list the naked-eye stars exhaustively. The Bright Star Catalogue, which is a star catalogue listing all stars of apparent magnitude 6.5 or brighter, or roughly every star visible to the naked eye from Earth, contains 9,096 stars. The most voluminous modern catalogues list on the order of a billion stars, out of an estimated total of 200 to 400 billion in the Milky Way.
Proper names may be historical, often transliterated from Arabic or Chinese names. Such transliterations can vary so there may be multiple spellings. A smaller number of names have been introduced since the Middle Ages, and a few in modern times as nicknames have come into popular use, for example "Sualocin" for α Delphini and "Navi" for γ Cassiopeiae.
The International Astronomical Union (IAU) has begun a process to select and formalise unique proper names for the brighter naked-eye stars and for other stars of popular interest. To the IAU, "name" refers to the (usually colloquial) term used for a star in everyday speech, while a "designation" is solely alphanumerical and is used almost exclusively in official catalogues and for professional astronomy. Many of the names and some of the designations in use today were inherited from the time before the IAU existed. Other designations are being added all the time. As of the start of 2019, the IAU had decided on a little over 300 proper names, mostly for the brighter naked-eye stars.
Several hundred of the brightest stars had traditional names, most of which derived from Arabic, but a few from Latin. There were, however, a number of problems with these names.
In 2016, the IAU organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin dated July 2016 included a table of 125 stars comprising the first two batches of names approved by the WGSN (on 30 June and 20 July 2016) together with names of stars (including four traditional star names: Ain, Edasich, Errai and Fomalhaut) reviewed and adopted by the IAU Executive Committee Working Group on Public Naming of Planets and Planetary Satellites during the 2015 NameExoWorlds campaign and recognized by the WGSN. Further batches of names were approved on 21 August, 12 September, 5 October and 6 November 2016. These were listed in a table of 102 stars included in the WGSN's second bulletin dated November 2016. The next additions were done on 1 February 2017 (13 new star names), 30 June 2017 (29), 5 September 2017 (41), 17 November 2017 (3) and 1 June 2018 (17). All 330 names are included in the current List of IAU-approved Star Names, last updated on 1 June 2018 (with a minor correction posted on 11 June 2018).
In practice, names are only universally used for the very brightest stars (Sirius, Arcturus, Vega, etc.) and for a small number of slightly less bright but "interesting" stars (Algol, Polaris, Mira, etc.). For other naked eye stars, the Bayer or Flamsteed designation is often preferred.
In addition to the traditional names, a small number of stars that are "interesting" can have modern English names. For instance, two second-magnitude stars, Alpha Pavonis and Epsilon Carinae, were assigned the proper names Peacock and Avior respectively in 1937 by Her Majesty's Nautical Almanac Office during the creation of "The Air Almanac", a navigational almanac for the Royal Air Force. Of the fifty-seven stars included in the new almanac, these two had no traditional names. The RAF insisted that all of the stars must have names, so new names were invented for them. These names have been approved by the IAU WGSN.
The book "" by R. H. Allen (1899) has had effects on star names:
A few stars are named for individuals. These are mostly names in common use that were taken up by the scientific community at some juncture. The first such case (discounting characters from mythology) was Cor Caroli (α CVn), named in the 17th century for Charles I of England. The remaining examples are mostly named after astronomers; the best known are probably Barnard's Star (which has the highest known proper motion of any star and is thus notable even though it is far too faint to be seen with the naked eye), Kapteyn's Star, and more recently Tabby's Star.
In July 2014 the IAU launched a process for giving proper names to exoplanets and their host stars. As a result, the IAU approved the names Cervantes for Mu Arae and Copernicus for 55 Cancri A.
In the absence of any better means of designating a star, catalogue designations are generally used. Many star catalogues are used for this purpose; see star catalogues.
The first modern schemes for designating stars systematically labelled them within their constellation.
Full-sky star catalogues detach the star designation from the star's constellation and aim at enumerating all stars with apparent magnitude greater than a given cut-off value.
Variable stars that do not have Bayer designations are assigned designations in a variable star scheme that superficially extends the Bayer scheme with uppercase Latin letters followed by constellation names, starting with single letters R to Z, and proceeding to pairs of letters. Such designations mark them as variable stars. Examples include R Cygni, RR Lyrae, and GN Andromedae. (Many variable stars also have designations in other catalogues.)
When a planet is detected around a star, the star is often given a designation based on the name of the telescope or survey mission that discovered it, followed by a number reflecting the order of discoveries by that mission, e.g. HAT-P-9, WASP-1, COROT-1, Kepler-4, TRAPPIST-1.
Star naming rights are not available for sale via the IAU. Rather, star names are selected on a non-commercial basis by a small number of international organizations of astronomers, scientists, and registration bodies, who assign names usually consisting of a Greek letter followed by the star's constellation name or, less frequently, based on the star's ancient traditional name.
However, there are a number of non-scientific "star-naming" companies that offer to assign personalized names to stars within their own private catalogs. These names are used only within that company (and usually available for viewing on their web site), and are not recognized by the astronomical community, or by competing star-naming companies. A survey conducted by amateur astronomers discovered that 54% of consumers would still want to "name a star" with a non-scientific star-naming company even though they have been warned or informed such naming is not recognized by the astronomical community. | https://en.wikipedia.org/wiki?curid=28233 |
Space Shuttle Challenger
Space Shuttle "Challenger" (Orbiter Vehicle Designation: OV-099) was the second orbiter of NASA's Space Shuttle program to be put into service, after "Columbia". "Challenger" was built by Rockwell International's Space Transportation Systems Division, in Downey, California. Its maiden flight, STS-6, began on April 4, 1983. The orbiter was launched and landed nine times before disintegrating 73 seconds into its tenth mission, STS-51-L, on January 28, 1986, resulting in the deaths of all seven crew members including a civilian school teacher.
"Challenger" was the first of two orbiters that were destroyed in flight, the other being "Columbia" in 2003. The "Challenger" accident led to a two-and-a-half-year grounding of the shuttle fleet; flights resumed in 1988, with STS-26 flown by "Discovery". "Challenger" was replaced by "Endeavour", which was built from structural spares ordered by NASA in the construction contracts for "Discovery" and "Atlantis".
"Challenger" was named after HMS "Challenger", a British corvette that was the command ship for the "Challenger" Expedition, a pioneering global marine research expedition undertaken from 1872 through 1876. The Apollo 17 Lunar Module, which landed on the Moon in 1972, was also named "Challenger".
Because of the low production volume of orbiters, the Space Shuttle program decided to build a vehicle as a Structural Test Article, STA-099, that could later be converted to a flight vehicle. The contract for STA-099 was awarded to North American Rockwell on July 26, 1972, and construction was completed in February 1978. After STA-099's rollout, it was sent to a Lockheed test site in Palmdale, where it spent over 11 months in vibration tests designed to simulate entire shuttle flights, from launch to landing. To prevent damage during structural testing, qualification tests were performed to a safety factor of 1.2 times the design limit loads. The qualification tests were used to validate computational models, and compliance with the required 1.4 factor of safety was shown by analysis. STA-099 was essentially a complete airframe of a Space Shuttle orbiter, with only a mockup crew module installed and thermal insulation placed on its forward fuselage.
NASA planned to refit the prototype orbiter "Enterprise" (OV-101), used for flight testing, as the second operational orbiter; but "Enterprise" lacked most of the systems needed for flight, including a functional propulsion system, thermal insulation, a life support system, and most of the cockpit instrumentation. Modifying it for spaceflight was considered to be too difficult, expensive, and time-consuming. Since STA-099 was not as far along in the construction of its airframe, it would be easier to upgrade to a flight article. Because STA-099's qualification testing prevented damage, NASA found that rebuilding STA-099 as a flight worthy orbiter would be less expensive than refitting "Enterprise". Work on converting STA-099 to operational status began in January 1979, starting with the crew module (the pressurized portion of the vehicle), as the rest of the vehicle was still being used for testing by Lockheed. STA-099 returned to the Rockwell plant in November 1979, and the original, unfinished crew module was replaced with the newly constructed model. Major parts of STA-099, including the payload bay doors, body flap, wings, and vertical stabilizer, also had to be returned to their individual subcontractors for rework. By early 1981, most of these components had returned to Palmdale to be reinstalled. Work continued on the conversion until July 1982, when the new orbiter was rolled out as "Challenger".
"Challenger", as did the orbiters built after it, had fewer tiles in its Thermal Protection System than "Columbia", though it still made heavier use of the white LRSI tiles on the cabin and main fuselage than did the later orbiters. Most of the tiles on the payload bay doors, upper wing surfaces, and rear fuselage surfaces were replaced with DuPont white Nomex felt insulation. These modifications and an overall lighter structure allowed "Challenger" to carry 2,500 lb (1,100 kg) more payload than "Columbia." "Challenger's" fuselage and wings were also stronger than "Columbia's" despite being lighter. The hatch and vertical-stabilizer tile patterns were also different from those of the other orbiters. "Challenger" was also the first orbiter to have a head-up display system for use in the descent phase of a mission, and the first to feature Phase I main engines rated for 104% maximum thrust.
After its first flight in April 1983, "Challenger" flew on 85% of all Space Shuttle missions. Even when the orbiters "Discovery" and "Atlantis" joined the fleet, "Challenger" flew three missions a year from 1983 to 1985. "Challenger", along with "Discovery", was modified at Kennedy Space Center to be able to carry the Centaur-G upper stage in its payload bay. If flight STS-51-L had been successful, "Challenger"'s next mission would have been the deployment of the "Ulysses" probe with the Centaur to study the polar regions of the Sun.
"Challenger" flew the first American woman, African-American, Dutchman and Canadian into space; carried three Spacelab missions; and performed the first night launch and night landing of a Space Shuttle.
"Challenger" was also the first space shuttle to be destroyed in an accident during a mission. The collected debris of the vessel is currently buried in decommissioned missile silos at Launch Complex 31, Cape Canaveral Air Force Station. A section of the fuselage recovered from Space Shuttle "Challenger" can also be found at the "Forever Remembered" memorial at the Kennedy Space Center Visitor Complex in Florida. From time to time, further pieces of debris from the orbiter wash up on the Florida coast. When this happens, they are collected and transported to the silos for storage. Because of its early loss, "Challenger" was the only space shuttle that never wore the NASA "meatball" logo, and was never modified with the MEDS "glass cockpit". The tail was never fitted with a drag chute – it was fitted to the remaining orbiters in 1992. Also because of its early demise "Challenger" was also one of only two shuttles that never visited the Mir Space Station or the International Space Station – the other one being its sister ship "Columbia". | https://en.wikipedia.org/wiki?curid=28235 |
Space Shuttle Enterprise
Space Shuttle "Enterprise" (Orbiter Vehicle Designation: OV-101) was the first orbiter of the Space Shuttle system. Rolled out on September 17, 1976, it was built for NASA as part of the Space Shuttle program to perform atmospheric test flights after being launched from a modified Boeing 747. It was constructed without engines or a functional heat shield. As a result, it was not capable of spaceflight.
Originally, "Enterprise" had been intended to be refitted for orbital flight to become the second space-rated orbiter in service. However, during the construction of , details of the final design changed, making it simpler and less costly to build around a body frame that had been built as a test article. Similarly, "Enterprise" was considered for refit to replace "Challenger" after the latter was destroyed, but was built from structural spares instead.
"Enterprise" was restored and placed on display in 2003 at the Smithsonian's new Steven F. Udvar-Hazy Center in Virginia. Following the retirement of the Space Shuttle fleet, replaced "Enterprise" at the Udvar-Hazy Center, and "Enterprise" was transferred to the Intrepid Sea, Air & Space Museum in New York City, where it has been on display since July 2012.
The design of "Enterprise" was not the same as that planned for , the first flight model; the aft fuselage was constructed differently, and it did not have the interfaces to mount OMS pods. A large number of subsystems—ranging from main engines to radar equipment—were not installed on "Enterprise", but the capacity to add them in the future was retained, as NASA originally intended to refit the orbiter for spaceflight at the conclusion of its testing. Instead of a thermal protection system, its surface was primarily covered with simulated tiles made from polyurethane foam. Fiberglass was used for the leading edge panels in place of the reinforced carbon–carbon ones of spaceflight-worthy orbiters. Only a few sample thermal tiles and some Nomex blankets were real. "Enterprise" used fuel cells to generate its electrical power, but these were not sufficient to power the orbiter for spaceflight.
"Enterprise" also lacked reaction control system thrusters and hydraulic mechanisms for the landing gear; the landing gear doors were simply opened through the use of explosive bolts and the gear dropped down solely by gravity. As it was only used for atmospheric testing, "Enterprise" featured a large nose probe mounted on its nose cap, common on test aircraft because the location provides the most accurate readings for the test instruments, being mounted out in front of the disturbed airflow.
"Enterprise" was equipped with Lockheed-manufactured zero-zero ejection seats like those its sister carried on its first four missions.
Construction began on "Enterprise" on June 4, 1974. Designated OV-101, it was originally planned to be named "Constitution" and unveiled on Constitution Day, September 17, 1976. Fans of asked US President Gerald Ford, through a letter-writing campaign, to name the orbiter after the television show's fictional starship, USS "Enterprise". White House advisors cited "hundreds of thousands of letters" from Trekkies, "one of the most dedicated constituencies in the country", as a reason for giving the shuttle the name. Although Ford did not publicly mention the campaign, the president said that he was "partial to the name" "Enterprise", and directed NASA officials to change the name.
In mid-1976 the orbiter was used for ground vibration tests, allowing engineers to compare data from an actual flight vehicle with theoretical models.
On September 17, 1976, "Enterprise" was rolled out of Rockwell's plant at Palmdale, California. In recognition of its fictional namesake, "Star Trek" creator Gene Roddenberry and most of the principal cast of the original series of "Star Trek" were on hand at the dedication ceremony.
On January 31, 1977, "Enterprise" was taken by road to Dryden Flight Research Center at Edwards Air Force Base to begin operational testing.
While at NASA Dryden "Enterprise" was used by NASA for a variety of ground and flight tests intended to validate aspects of the shuttle program. The initial nine-month testing period was referred to by the acronym ALT, for "Approach and Landing Test". These tests included a maiden "flight" on February 18, 1977, atop a Boeing 747 Shuttle Carrier Aircraft (SCA) to measure structural loads and ground handling and braking characteristics of the mated system. Ground tests of all orbiter subsystems were carried out to verify functionality prior to atmospheric flight.
The mated "Enterprise"/SCA combination was then subjected to five test flights with "Enterprise" uncrewed and unactivated. The purpose of these test flights was to measure the flight characteristics of the mated combination. These tests were followed with three test flights with "Enterprise" crewed to test the shuttle flight control systems.
On August 12, 1977, "Enterprise" flew on its own for the first time. "Enterprise" underwent four more free flights where the craft separated from the SCA and was landed under astronaut control. These tests verified the flight characteristics of the orbiter design and were carried out under several aerodynamic and weight configurations. The first three flights were flown with a tailcone placed at the end of "Enterprise" aft fuselage, which reduced drag and turbulence when mated to the SCA. The final two flights saw the tailcone removed and mockup main engines installed. On the fifth and final glider flight, pilot-induced oscillation problems were revealed, which had to be addressed before the first orbital launch occurred.
Following the conclusion of the ALT test flight program, on March 13, 1978, "Enterprise" was flown once again, but this time halfway across the country to NASA's Marshall Space Flight Center (MSFC) in Alabama for the Mated Vertical Ground Vibration Testing (MGVT). The orbiter was lifted up on a sling very similar to the one used at Kennedy Space Center and placed inside the Dynamic Test Stand building, and there mated to the Vertical Mate Ground Vibration Test tank (VMGVT-ET), which in turn was attached to a set of inert Solid Rocket Boosters (SRB) to form a complete shuttle launch stack, and marked the first time in the program's history that all Space Shuttle elements, an Orbiter, an External Tank (ET), and two SRBs, were mated together. During the course of the program, "Enterprise" and the rest of the launch stack would be exposed to a punishing series of vibration tests simulating as closely as possible those expected during various phases of launch, some tests with and others without the SRBs in place.
At the conclusion of this testing, "Enterprise" was due to be taken back to Palmdale for retrofitting as a fully spaceflight-capable vehicle. Under this arrangement, "Enterprise" would be launched on its maiden spaceflight in July 1981 to launch a communications satellite and retrieve the Long Duration Exposure Facility, then planned for a 1980 release on the first operational orbiter, "Columbia". Afterwards, "Enterprise" would conduct two Spacelab missions. However, in the period between the rollout of "Enterprise" and the rollout of "Columbia", a number of significant design changes had taken place, particularly with regard to the weight of the fuselage and wings. This meant that retrofitting the prototype would have been a much more expensive process than previously realized, involving the dismantling of the orbiter and the return of various structural sections to subcontractors across the country. As a consequence, NASA made the decision to convert an incomplete Structural Test Article, numbered STA-099, which had been built to undergo a variety of stress tests, into a fully flight-worthy orbiter, which became "Challenger".
Following the MGVT program and with the decision to not use "Enterprise" for orbital missions, it was ferried to Kennedy Space Center on April 10, 1979. By June 1979, it was again mated with an external tank and solid rocket boosters (known as a boilerplate configuration) and tested in a launch configuration at KSC Launch Complex 39A for a series of fit checks of the facilities there.
With the completion of critical testing, "Enterprise" was returned to Rockwell's plant in Palmdale in October 1979 and was partially disassembled to allow certain components to be reused in other shuttles. After this period, "Enterprise" was returned to NASA's Dryden Flight Research Facility in September 1981. In 1983 and 1984, "Enterprise" underwent an international tour visiting France, West Germany, Italy, the United Kingdom, and Canada. "Enterprise" also visited California, Alabama, and Louisiana (during the 1984 Louisiana World Exposition). It was also used to fit-check the never-used shuttle launch pad at Vandenberg AFB, California. On November 18, 1985, "Enterprise" was ferried to Washington, D.C., where it became property of the Smithsonian Institution and was stored in the National Air and Space Museum's hangar at Dulles International Airport.
After the "Challenger" disaster, NASA considered using "Enterprise" as a replacement. Refitting the shuttle with all of the necessary equipment for it to be used in space was considered, but NASA decided to use spares constructed at the same time as and to build .
In 2003, after the breakup of "Columbia" during re-entry, the "Columbia" Accident Investigation Board conducted tests at Southwest Research Institute, which used an air cannon to shoot foam blocks of similar size, mass and speed to that which struck "Columbia" at a test structure that mechanically replicated the orbiter wing leading edge. They removed a section of fiberglass leading edge from "Enterprise's" wing to perform analysis of the material and attached it to the test structure, then shot a foam block at it. While the leading edge was not broken as a result of the test, which took place on May 29, 2003, the impact was enough to permanently deform a seal and leave a thin gap. Since the reinforced carbon–carbon (RCC) on "Columbia" was "substantially weaker and less flexible" than the fiberglass test section from "Enterprise", this result suggested that the RCC would have been shattered by a similar impact. A section of RCC leading edge from "Discovery" was tested on June 6 to determine the effects of the foam on a similarly aged leading edge, resulting in a crack on panel 6 and cracking on a "T"-shaped seal between panels 6 and 7. On July 7, using a leading edge from "Atlantis" and focused on panel 8 with refined parameters stemming from the "Columbia" accident investigation, a second test created a ragged hole in the RCC structure. The tests clearly demonstrated that a foam impact of the type "Columbia" sustained could seriously breach the protective RCC panels on the wing leading edge.
The board determined that the probable cause of the accident was that the foam impact caused a breach of a reinforced carbon-carbon panel along the leading edge of "Columbia" left wing, allowing hot gases generated during re-entry to enter the wing and cause structural collapse. This caused "Columbia" to tumble out of control, breaking up with the loss of the entire crew.
From 1985 to 2003, "Enterprise" was stored at the Smithsonian's hangar at Washington Dulles International Airport before it was restored and moved to the Smithsonian's newly built National Air and Space Museum Steven F. Udvar-Hazy Center at Washington Dulles, where it was the centerpiece of the space collection. On April 12, 2011, NASA announced that , the most traveled orbiter in the fleet, would replace "Enterprise" in the Smithsonian's collection once the Shuttle fleet was retired, with ownership of "Enterprise" transferred to the Intrepid Sea, Air & Space Museum in New York City. On April 17, 2012, "Discovery" was transported by Shuttle Carrier Aircraft to Dulles from Kennedy Space Center, where it made several passes over the Washington D.C. metro area. After "Discovery" had been removed from the Shuttle Carrier Aircraft, both orbiters were displayed nose-to-nose outside the Steven F. Udvar-Hazy Center before "Enterprise" was made ready for its flight to New York.
On December 12, 2011, ownership of "Enterprise" was officially transferred to the Intrepid Sea, Air & Space Museum in New York City. In preparation for the anticipated relocation, engineers evaluated the vehicle in early 2010 and determined that it was safe to fly on the Shuttle Carrier Aircraft once again. At approximately 13:40 UTC on April 27, 2012, "Enterprise" took off from Dulles International Airport en route to a fly-by over the Hudson River, New York's JFK International Airport, the Statue of Liberty, the George Washington and Verrazano-Narrows Bridges, and several other landmarks in the city, in an approximately 45-minute "final tour". At 15:23 UTC, "Enterprise" touched down at JFK International Airport.
The mobile Mate-Demate Device and cranes were transported from Dulles to the ramp at JFK, and the shuttle was removed from the SCA overnight on May 12, 2012, placed on a specially designed flatbed trailer and returned to Hangar 12. On June 3 a Weeks Marine barge took "Enterprise" to Jersey City. The Shuttle sustained cosmetic damage to a wingtip when a gust of wind blew the barge towards a piling. On June 6 it was hoisted onto the flight deck of the "Intrepid" in Manhattan.
"Enterprise" went on public display on July 19, 2012, at the Intrepid Museum's new Space Shuttle Pavilion, a temporary shelter consisting of a pressurized, air-supported fabric bubble constructed on the aft end of the carrier's flight deck.
On October 29, 2012, storm surges from Hurricane Sandy caused Pier 86, including the Intrepid Museum's visitor center, to flood, and knocked out the museum's electrical power and both backup generators. The loss of power caused the Space Shuttle Pavilion to deflate, and high winds from the hurricane caused the fabric of the Pavilion to tear and collapse around the orbiter. Minor damage was spotted on the vertical stabilizer of the orbiter, as a portion of the tail fin above the rudder/speedbrake had broken off. The broken section was recovered by museum staff. While the pavilion itself could not be replaced for some time in 2013, the museum erected scaffolding and sheeting around "Enterprise" to protect it from the environment.
By April 2013, the damage sustained to "Enterprise's" vertical stabilizer had been fully repaired, and construction work on the structure for a new pavilion was under way. The pavilion and exhibit reopened on July 10, 2013.
"Enterprise" was listed on the National Register of Historic Places on March 13, 2013, reference number 13000071, in recognition of its role in the development of the Space Shuttle Program. The historic significance criteria are in space exploration, transportation, and engineering. | https://en.wikipedia.org/wiki?curid=28236 |
Space Shuttle Columbia
The Space Shuttle "Columbia" (Orbiter Vehicle Designation: OV-102) was the first space-rated orbiter in NASA's Space Shuttle fleet. It launched for the first time on mission STS-1 on April 12, 1981, the first flight of the Space Shuttle program. Serving for over 22 years, it completed 27 missions before disintegrating during re-entry near the end of its 28th mission, STS-107 on February 1, 2003, resulting in the deaths of all seven crew members.
Construction began on "Columbia" in 1975 at Rockwell International's (formerly North American Aviation/North American Rockwell) principal assembly facility in Palmdale, California, a suburb of Los Angeles. "Columbia" was named after the American sloop "Columbia Rediviva" which, from 1787 to 1793, under the command of Captain Robert Gray, explored the US Pacific Northwest and became the first American vessel to circumnavigate the globe. It is also named after the command module of Apollo 11, the first crewed landing on another celestial body. "Columbia" was also the female symbol of the United States. After construction, the orbiter arrived at Kennedy Space Center on March 25, 1979, to prepare for its first launch. "Columbia" was originally scheduled to lift off in late 1979, however the launch date was delayed by problems with both the RS-25 engine, as well as the thermal protection system (TPS). On March 19, 1981, during preparations for a ground test, workers were asphyxiated while working in Columbia's nitrogen-purged aft engine compartment, resulting in (variously reported) two or three fatalities.
The first flight of "Columbia" (STS-1) was commanded by John Young, a veteran from the Gemini and Apollo programs who was the ninth person to walk on the Moon in 1972, and piloted by Robert Crippen, a rookie astronaut originally selected to fly on the military's Manned Orbital Laboratory (MOL) spacecraft, but transferred to NASA after its cancellation, and served as a support crew member for the Skylab and Apollo-Soyuz missions.
"Columbia" spent 610 days in the Orbiter Processing Facility (OPF), another 35 days in the Vehicle Assembly Building (VAB), and 105 days on Pad 39A before finally lifting off. "Columbia" was successfully launched on April 12, 1981, the 20th anniversary of the first human spaceflight (Vostok 1), and returned on April 14, 1981, after orbiting the Earth 36 times, landing on the dry lakebed runway at Edwards Air Force Base in California. "Columbia" then undertook three further research missions to test its technical characteristics and performance. Its first operational mission, with a four-man crew, was STS-5, which launched on November 11, 1982. At this point "Columbia" was joined by "Challenger", which flew the next three shuttle missions, while "Columbia" underwent modifications for the first Spacelab mission.
In 1983, "Columbia", under the command of John Young on what was his sixth spaceflight, undertook its second operational mission (STS-9), which carried the Spacelab science laboratory and a six-person crew, including the first non-American astronaut on a space shuttle, Ulf Merbold. After the flight, "Columbia" spent 18 months at the Rockwell Palmdale facility beginning in January 1984, undergoing modifications that removed the Orbiter Flight Test hardware and brought it up to specifications similar to those of its sister orbiters. At that time the shuttle fleet was expanded to include "Discovery" and "Atlantis".
"Columbia" returned to space on January 12, 1986, with the launch of STS-61-C. The mission's crew included Dr. Franklin Chang-Diaz, as well as the first sitting member of the House of Representatives to venture into space, Bill Nelson.
The next shuttle mission, STS-51-L, was undertaken by "Challenger". It was launched on January 28, 1986, ten days after STS-61-C had landed, and ended in disaster 73 seconds after launch. Prior to the accident, "Columbia" had been slated to be ferried to Vandenberg Air Force Base to conduct fueling tests and to perform a flight readiness firing at SLC-6 to validate the west coast launch site. In the aftermath NASA's shuttle timetable was disrupted, and the Vandenberg tests, which would have cost $60 million, were canceled. "Columbia" was not flown again until 1989 (on STS-28), after which it resumed normal service as part of the shuttle fleet.
STS-93, launched on July 23, 1999, was the first U.S. space mission with a female commander, Lt. Col. Eileen Collins. This mission deployed the Chandra X-ray Observatory.
"Columbia"'s final complete mission was STS-109, the fourth servicing mission for the Hubble Space Telescope. Its next mission, STS-107, culminated in the orbiter's loss when it disintegrated during reentry, killing all seven of its crew.
Consequently, President George W. Bush decided to retire the Shuttle orbiter fleet by 2010 in favor of the Constellation program and its crewed Orion spacecraft. The Constellation program was later cancelled with the NASA Authorization Act of 2010, signed by President Barack Obama on October 11, 2010.
As the second orbiter to be constructed, and the first able to fly into space, "Columbia" was heavier than subsequent orbiters such as "Endeavour", which were of a slightly different design and had benefited from advances in materials technology. In part, this was due to heavier wing and fuselage spars, the weight of early test instrumentation that remained fitted to the avionics suite, and an internal airlock that was originally fitted in the other orbiters as well, but was later removed from them in favor of an external airlock to facilitate Shuttle/Mir and Shuttle/International Space Station dockings. Due to its weight, "Columbia" could not have used the planned Centaur-G booster (cancelled after the loss of "Challenger"). The retention of the internal airlock allowed NASA to use "Columbia" for the STS-109 Hubble Space Telescope servicing mission, along with the Spacehab double module used on STS-107. Due to "Columbia's" higher weight, it was less ideal for NASA to use it for missions to the International Space Station, though modifications were made to the Shuttle during its last refit in case the spacecraft was needed for such tasks.
Externally, "Columbia" was the first orbiter in the fleet whose surface was mostly covered with High & Low Temperature Reusable Surface Insulation (HRSI/LRSI) tiles as its main thermal protection system (TPS), with white silicone rubber-painted Nomex – known as Felt Reusable Surface Insulation (FRSI) blankets – in some areas on the wings, fuselage and payload bay doors. FRSI once covered almost 25% of the orbiter; the first upgrade resulted in its removal from many areas, and in later flights it was only used on the upper section of the payload bay doors and inboard sections of the upper wing surfaces. The upgrade also involved replacing many of the white LRSI tiles on the upper surfaces with Advanced Flexible Reusable Surface Insulation (AFRSI) blankets (also known as Fibrous Insulation Blankets, or FIBs) that had been used on "Discovery" and "Atlantis". Originally, "Columbia" had 32,000 tiles – the upgrade reduced this to 24,300. The AFRSI blankets consisted of layers of pure silica felt sandwiched between a layer of silica fabric on the outside and S-Glass fabric on the inside, stitched together using pure silica thread in a 1-inch grid, then coated with a high-purity silica coating. The blankets were semi-rigid and could be made as large as 30" by 30". Each blanket replaced as many as 25 tiles and was bonded directly to the orbiter. The direct application of the blankets to the orbiter resulted in weight reduction, improved durability, reduced fabrication and installation cost, and reduced installation schedule time. All of this work was performed during "Columbia's" first retrofitting and the post-"Challenger" stand-down.
Despite refinements to the orbiter's thermal protection system and other enhancements, "Columbia" would never weigh as little unloaded as the other orbiters in the fleet. The next-oldest shuttle, "Challenger", was also relatively heavy, although lighter than "Columbia".
Until its last refit, "Columbia" was the only operational orbiter with wing markings consisting of an American flag on the port (left) wing and the letters "USA" on the starboard (right) wing. "Challenger", "Discovery", "Atlantis" and "Endeavour" all, until 1998, bore markings consisting of the letters "USA" above an American flag on the left wing, and the pre-1998 NASA "worm" logo afore the respective orbiter's name on the right wing. ("Enterprise", the test vehicle which was the prototype for "Columbia", originally had the same wing markings as "Columbia" but with the letters "USA" on the right wing spaced closer together; "Enterprise"'s markings were modified to match "Challenger" in 1983.) The name of the orbiter was originally placed on the payload bay doors much like "Enterprise" but was placed on the crew cabin after the "Challenger" disaster so that the orbiter could be easily identified while in orbit. From its last refit to its destruction, "Columbia" bore markings identical to those of its operational sister orbiters – the NASA "meatball" logo on the left wing and the American flag afore the orbiter's name on the right; only "Columbia's" distinctive wing "chines" remained. These black areas on the upper surfaces of the shuttle's forward wing were added because, at first, shuttle designers did not know how reentry heating would affect the craft's upper wing surfaces. The "chines" allowed "Columbia" to be easily recognized at a distance, as opposed to the subsequent orbiters. The "chines" were added after "Columbia" arrived at KSC in 1979.
Another unique external feature, termed the "SILTS" pod (Shuttle Infrared Leeside Temperature Sensing), was located on the top of "Columbia's" vertical stabilizer, and was installed after STS-9 to acquire infrared and other thermal data. Though the pod's equipment was removed after initial tests, NASA decided to leave it in place, mainly to save costs, along with the agency's plans to use it for future experiments. The vertical stabilizer was later modified to incorporate the drag chute first used on "Endeavour" in 1992.
"Columbia" was also originally fitted with Lockheed-built ejection seats identical to those found on the SR-71 Blackbird. These were active for the four orbital test flights, but deactivated after STS-4, and removed entirely after STS-9. "Columbia" was also the only spaceworthy orbiter not delivered with head-up displays for the Commander and Pilot, although these were incorporated after STS-9. Like its sister ships, "Columbia" was eventually retrofitted with the new MEDS "glass cockpit" display and lightweight seats.
Had "Columbia" not been destroyed, it would have been fitted with the external airlock/docking adapter for STS-118, an International Space Station assembly mission, originally planned for November 2003. "Columbia" was scheduled for this mission due to "Discovery" being out of service for its Orbital Maintenance Down Period, and because the ISS assembly schedule could not be adhered to with only "Endeavour" and "Atlantis".
"Columbia"'s 'career' would have started to wind down after STS-118. It was to service the Hubble Space Telescope two more times between 2004 and 2005, but no more missions were planned for it again except for a mission designated STS-144 where it would retrieve the Hubble Space Telescope from orbit and bring it back to Earth. Following the "Columbia" accident, NASA flew the STS-125 mission using "Atlantis", combining the planned fourth and fifth servicing missions into one final mission to Hubble. Because of the retirement of the Space Shuttle fleet, the batteries and gyroscopes that keep the telescope pointed will eventually fail also because of the magnifier screen, which would result in its reentry and break-up in Earth's atmosphere. A "Soft Capture Docking Mechanism", based on the docking adapter that was to be used on the Orion spacecraft, was installed during the last servicing mission in anticipation of this event.
"Columbia" was also scheduled to launch the X-38 V-201 Crew Return Vehicle prototype as the next mission after STS-118, until the cancellation of the project in 2002.
"Columbia" flew 28 missions, gathering 300.74 days spent in space with 4,808 orbits and a total distance of up until STS-107.
Despite being in service during the Shuttle-Mir and International Space Station programs, "Columbia" did not fly any missions that visited a space station. The other three active orbiters at the time had visited both "Mir" and the ISS at least once. "Columbia" was not suited for high-inclination missions.
* Mission cancelled following suspension of shuttle flights following the "Challenger" disaster.
** Mission flown by "Endeavour" due to loss of "Columbia" on STS-107.
*** Mission flown by "Discovery" due to loss of "Columbia" on STS-107.
"Columbia" was destroyed at about 09:00 EST on February 1, 2003, while re-entering the atmosphere after a 16-day scientific mission. The Columbia Accident Investigation Board determined that a hole was punctured in the leading edge on one of "Columbia's" wings, which was made of a carbon composite. The hole had formed when a piece of insulating foam from the external fuel tank peeled off during the launch 16 days earlier and struck the shuttle's left wing. During the intense heat of re-entry, hot gases penetrated the interior of the wing, likely compromising the hydraulic system and leading to control failure of the control surfaces. The resulting loss of control exposed minimally protected areas of the orbiter to full-entry heating and dynamic pressures that eventually led to vehicle break up.
The report delved deeply into the underlying organizational and cultural issues that the Board believed contributed to the accident, and was highly critical of NASA's decision-making and risk-assessment processes. Further, the Board determined that, contrary to NASA's early claims, a rescue mission would have been possible using the Shuttle "Atlantis", which was essentially ready for launch, and might have saved the "Columbia" crew members. The nearly 84,000 pieces of collected debris of the vessel are stored in a 16th-floor office suite in the Vehicle Assembly Building at the Kennedy Space Center. The collection was opened to the media once and has since been open only to researchers. Unlike "Challenger", which had a replacement orbiter built, "Columbia" did not.
The seven crew members who died aboard this final mission were: Rick Husband, Commander; William C. McCool, Pilot; Michael P. Anderson, Payload Commander/Mission Specialist 3; David M. Brown, Mission Specialist 1; Kalpana Chawla, Mission Specialist 2; Laurel Clark, Mission Specialist 4; and Ilan Ramon, Payload Specialist 1.
The debris field encompassed hundreds of miles across Texas and into Louisiana and Arkansas. The nose cap and remains of all seven crew members were found in Sabine County, East Texas.
To honor those who lost their lives aboard the shuttle and during the recovery efforts, the Patricia Huffman Smith NASA Museum "Remembering Columbia" was opened in Hemphill, Sabine County, Texas. The museum tells the story of Space Shuttle "Columbia" explorations throughout all its missions, including the final STS-107. Its exhibits also show the efforts of local citizens during the recovery period of the "Columbia" shuttle debris and its crew's remains. An area is dedicated to each STS-107 crew member, and also to the Texas Forest Service helicopter pilot who died in the recovery effort. The museum houses many objects and artifacts from: NASA and its contractors; the families of the STS-107 crew; and other individuals. The crew's families contributed personal items of the crew members to be on permanent display. The museum features two interactive simulator displays that emulate activities of the shuttle and orbiter. The digital learning center and its classroom provide educational opportunities for all ages.
The Columbia Memorial Space Center is the U.S. national memorial for the Space Shuttle "Columbia"s seven crew members. It is located in Downey on the site of the Space Shuttle's origins and production, the former North American Aviation plant in Los Angeles County, California. The facility is also a hands-on learning center with interactive exhibits, workshops, and classes about space science, astronautics, and the Space Shuttle program's legacy — providing educational opportunities for all ages.
The Shuttle's final crew was honored in 2003 when the United States Board on Geographic Names approved the name Columbia Point for a mountain in Colorado's Sangre de Cristo Mountains, less than a half-mile from Challenger Point, a peak named after America's other lost Space Shuttle. The Columbia Hills on Mars were also named in honor of the crew, and a host of other memorials were dedicated in various forms.
The Columbia supercomputer at the NASA Advanced Supercomputing (NAS) Division located at Ames Research Center in California was named in honor of the crew lost in the 2003 disaster. Built as a joint effort between NASA and technical partners SGI and Intel in 2004, the supercomputer was used in scientific research of space, the Earth's climate, and aerodynamic design of space launch vehicles and aircraft. The first part of the system, built in 2003, was dedicated to STS-107 astronaut and engineer Kalpana Chawla, who prior to joining the Space Shuttle program worked at Ames Research Center.
A female bald eagle at the National Eagle Center in Wabasha, Minnesota is named in tribute to the victims of the disaster.
Guitarist Steve Morse of the rock band Deep Purple wrote the instrumental "Contact Lost" in response to the tragedy, recorded by Deep Purple and featured as the closing track on their 2003 album "Bananas". It was dedicated to the astronauts whose lives were lost in the disaster. Morse donated songwriting royalties to the families of lost astronauts. Astronaut and mission specialist engineer Kalpana Chawla, one of the victims of the accident, was a fan of Deep Purple and had exchanged e-mails with the band during the flight, making the tragedy even more personal for the group. She took three CDs into space with her, two of which were Deep Purple albums ("Machine Head" and "Purpendicular"). Both CDs survived the destruction of the shuttle and the 39-mile plunge.
The musical group Echo's Children included singer-songwriter Cat Faber's "Columbia" on their final album "From the Hazel Tree".
The Long Winters band's 2005 album "Ultimatum" features the song "The Commander Thinks Aloud", a tribute to the final "Columbia" crew.
The Eric Johnson instrumental "Columbia" from his 2005 album "Bloom" was written as a commemoration and tribute to the lives that were lost. Johnson said "I wanted to make it more of a positive message, a salute, a celebration rather than just concentrating on a few moments of tragedy, but instead the bigger picture of these brave people's lives."
The graphic novel "Orbiter" by Warren Ellis and Colleen Doran was dedicated to the "lives, memories and legacies of the seven astronauts lost on space shuttle "Columbia" during mission STS-107."
The Scottish band Runrig pays tribute to Clark on the 2016 album "The Story". The final track, "Somewhere", ends with a recording of her voice. Clark was a Runrig fan and had used Runrig's "Running to the Light" as a wake-up call during the mission. She took "The Stamping Ground" CD into space with her. When the shuttle broke apart, the CD was recovered on Earth and was presented to the band by her family. | https://en.wikipedia.org/wiki?curid=28237 |
Space Shuttle Discovery
Space Shuttle "Discovery" (Orbiter Vehicle Designation: OV-103) is one of the orbiters from NASA's Space Shuttle program and the third of five fully operational orbiters to be built. Its first mission, STS-41-D, flew from August 30 to September 5, 1984. Over 27 years of service it launched and landed 39 times, gathering more spaceflights than any other spacecraft to date. Like other shuttles, the shuttle has three main components: the Space Shuttle orbiter, a central fuel tank, and two rocket boosters. Nearly 25,000 heat-resistant tiles cover the orbiter to protect it from high temperatures on re-entry.
"Discovery" became the third operational orbiter to enter service, preceded by "Columbia" and "Challenger". It embarked on its last mission, STS-133, on February 24, 2011 and touched down for the final time at Kennedy Space Center on March 9, having spent a cumulative total of almost a full year in space. "Discovery" performed both research and International Space Station (ISS) assembly missions, and also carried the Hubble Space Telescope into orbit.
"Discovery" was the first operational shuttle to be retired, followed by "Endeavour" and then "Atlantis". The shuttle is now on display at the Steven F. Udvar-Hazy Center of the Smithsonian National Air and Space Museum.
The name "Discovery" was chosen to carry on a tradition based on ships of exploration, primarily , one of the ships commanded by Captain James Cook during his third and final major voyage from 1776 to 1779, and Henry Hudson's , which was used in 1610–1611 to explore Hudson Bay and search for a Northwest Passage. Other ships bearing the name have included of the 1875–1876 British Arctic Expedition to the North Pole and , which led the 1901–1904 "Discovery Expedition" to Antarctica.
"Discovery" launched the Hubble Space Telescope and conducted the second and third Hubble service missions. It also launched the "Ulysses" probe and three TDRS satellites. Twice "Discovery" was chosen as the "Return To Flight" Orbiter, first in 1988 after the loss of "Challenger" in 1986, and then again for the twin "Return To Flight" missions in July 2005 and July 2006 after the "Columbia" disaster in 2003. Project Mercury astronaut John Glenn, who was 77 at the time, flew with "Discovery" on STS-95 in 1998, making him the oldest person to go into space.
Had plans to launch United States Department of Defense payloads from Vandenberg Air Force Base gone ahead, "Discovery" would have become the dedicated US Air Force shuttle. Its first West Coast mission, STS-62-A, was scheduled for 1986, but canceled in the aftermath of "Challenger".
"Discovery" was retired after completing its final mission, STS-133 on March 9, 2011. The spacecraft is now on display in Virginia at the Steven F. Udvar-Hazy Center, an annex of the Smithsonian Institution's National Air and Space Museum.
"Discovery" weighed roughly 3600 kg (3.6t) less than "Columbia" when it was brought into service due to optimizations determined during the construction and testing of "Enterprise", "Columbia" and "Challenger". "Discovery" weighs heavier than "Atlantis" and heavier than "Endeavour".
Part of the "Discovery" weight optimizations included the greater use of quilted AFRSI blankets rather than the white LRSI tiles on the fuselage, and the use of graphite epoxy instead of aluminum for the payload bay doors and some of the wing spars and beams.
Upon its delivery to the Kennedy Space Center in 1983, "Discovery" was modified alongside "Challenger" to accommodate the liquid-fueled Centaur-G booster, which had been planned for use beginning in 1986 but was cancelled in the wake of the "Challenger" disaster.
Beginning in late 1995, the orbiter underwent a nine-month Orbiter Maintenance Down Period (OMDP) in Palmdale, California. This included outfitting the vehicle with a fifth set of cryogenic tanks and an external airlock to support missions to the International Space Station. As with all the orbiters, it could be carried atop a specialized aircraft, and was ferried this way in June 1996 when it returned to the Kennedy Space Center, and again in April 2012 when it was sent to the Udvar-Hazy Center, riding piggyback on a modified Boeing 747.
After STS-105, "Discovery" became the first of the orbiter fleet to undergo Orbiter Major Modification (OMM) period at the Kennedy Space Center. Work began in September 2002 to prepare the vehicle for Return to Flight. The work included scheduled upgrades and additional safety modifications.
"Discovery" was decommissioned on March 9, 2011.
NASA offered "Discovery" to the Smithsonian Institution's National Air and Space Museum for public display and preservation, after a month-long decontamination process, as part of the national collection. "Discovery" replaced in the Smithsonian's display at the Steven F. Udvar-Hazy Center in Virginia. "Discovery" was transported to Washington Dulles International Airport on April 17, 2012, and was transferred to the Udvar-Hazy on April 19 where a welcome ceremony was held. Afterwards, at around 5:30 pm, "Discovery" was rolled to its "final wheels stop" in the Udvar Hazy Center.
By its last mission, "Discovery" had flown 149 million miles (238 million km) in 39 missions, completed 5,830 orbits, and spent 365 days in orbit over 27 years. "Discovery" flew more flights than any other orbiter, including four in 1985 alone. "Discovery" flew all three "return to flight" missions after the "Challenger" and "Columbia" disasters: STS-26 in 1988, STS-114 in 2005, and STS-121 in 2006. "Discovery" flew the antepenultimate mission of the Space Shuttle program, STS-133, having launched on February 24, 2011. "Endeavour" flew STS-134 and "Atlantis" performed STS-135, NASA's last Space Shuttle mission. On February 24, 2011, Space Shuttle "Discovery" launched from Kennedy Space Center's Launch Complex 39-A to begin its final orbital flight.
‡ Longest shuttle mission for "Discovery"
– shortest shuttle mission for "Discovery"
The Flow Director was responsible for the overall preparation of the shuttle for launch and processing it after landing, and remained permanently assigned to head the spacecraft's ground crew while the astronaut flight crews changed for every mission. Each shuttle's Flow Director was supported by a Vehicle Manager for the same spacecraft. Space Shuttle "Discovery"'s Flow Directors were: | https://en.wikipedia.org/wiki?curid=28238 |
Space Shuttle Atlantis
Space Shuttle "Atlantis" (Orbiter Vehicle Designation: OV‑104) is a Space Shuttle orbiter vehicle belonging to the National Aeronautics and Space Administration (NASA), the spaceflight and space exploration agency of the United States. Manufactured by the Rockwell International company in Southern California and delivered to the Kennedy Space Center in Eastern Florida in April 1985, "Atlantis" is the fourth operational and the second-to-last Space Shuttle built. Its maiden flight was STS-51-J from 3 to 7 October 1985.
"Atlantis" embarked on its 33rd and final mission, also the final mission of a space shuttle, STS-135, on 8 July 2011. STS-134 by "Endeavour" was expected to be the final flight before STS-135 was authorized in October 2010. STS-135 took advantage of the processing for the STS-335 Launch on Need mission that would have been necessary if STS-134's crew became stranded in orbit.
"Atlantis" landed for the final time at the Kennedy Space Center on 21 July 2011.
By the end of its final mission, "Atlantis" had orbited the Earth a total of 4,848 times, traveling more than 525 times the distance from the Earth to the Moon.
"Atlantis" is named after RV "Atlantis", a two-masted sailing ship that operated as the primary research vessel for the Woods Hole Oceanographic Institution from 1930 to 1966.
Space Shuttle "Atlantis" lifted off on its maiden voyage on 3 October 1985, on mission STS-51-J, the second dedicated Department of Defense flight. It flew one other mission, STS-61-B, the second night launch in the shuttle program, before the Space Shuttle "Challenger" disaster temporarily grounded the Shuttle fleet in 1986. Among the five Space Shuttles flown into space, "Atlantis" conducted a subsequent mission in the shortest time after the previous mission (turnaround time) when it launched in November 1985 on STS-61-B, only 50 days after its previous mission, STS-51-J in October 1985. "Atlantis" was then used for ten flights between 1988 and 1992. Two of these, both flown in 1989, deployed the planetary probes "Magellan" to Venus (on STS-30) and "Galileo" to Jupiter (on STS-34). With STS-30 "Atlantis" became the first Space Shuttle to launch an interplanetary probe.
During STS-27, NASA's 27th shuttle launch, "Atlantis" lost part of its protective heat shield during lift-off, substantially damaging the underside of her right wing: over 700 tiles were damaged, which led to the melting of aluminum plating during reentry. The mission's payload, released on orbit, was eventually determined to be a Lacrosse surveillance satellite. Before the return to Earth, Commander Robert L. Gibson, aware of the extensive damage to the wing, privately thought, "We are going to die." Because of the secretive nature of "Atlantis's" payload, the crew had to transmit images of the damage over a more secure encrypted link, which was likely received at low quality; NASA engineers concluded that the apparent damage was merely light and shadows, which infuriated the crew. The shuttle nonetheless returned safely, with Guy Gardner serving as the mission's pilot. Upon inspection, the underside of the right wing was found to be severely damaged in critical areas. Similar damage to the thermal protection system ultimately destroyed Space Shuttle "Columbia" in 2003, when the orbiter broke apart on reentry. Had "Atlantis" been destroyed during her 1988 mission, the second loss of an orbiter would more than likely have set NASA back at least two years, forced a redesign of the external tank's foam covering and the fragile heat-shield plating, or forced NASA to close down the Shuttle program 30 years before it actually ended.
During another mission, STS-37 flown in 1991, "Atlantis" deployed the Compton Gamma Ray Observatory. Beginning in 1995 with STS-71, "Atlantis" made seven straight flights to the former Russian space station Mir as part of the Shuttle-Mir Program. STS-71 marked a number of firsts in human spaceflight: 100th U.S. crewed space flight; first U.S. Shuttle-Russian Space Station Mir docking and joint on-orbit operations; and first on-orbit change-out of shuttle crew. When linked, "Atlantis" and "Mir" together formed the largest spacecraft in orbit at the time.
Shuttle "Atlantis" also delivered several vital components for the construction of the International Space Station (ISS). During the February 2001 mission STS-98 to the ISS, "Atlantis" delivered the Destiny Module, the primary operating facility for U.S. research payloads aboard the ISS. The five-hour 25-minute third spacewalk performed by astronauts Robert Curbeam and Thomas Jones during STS-98 marked NASA's 100th extra vehicular activity in space. The Quest Joint Airlock, was flown and installed to the ISS by "Atlantis" during the mission STS-104 in July 2001. The successful installation of the airlock gave on-board space station crews the ability to stage repair and maintenance spacewalks outside the ISS using U.S. EMU or Russian Orlan space suits. The first mission flown by "Atlantis" after the Space Shuttle "Columbia" disaster was STS-115, conducted during September 2006. The mission carried the P3/P4 truss segments and solar arrays to the ISS. On ISS assembly flight STS-122 in February 2008, "Atlantis" delivered the Columbus laboratory to the ISS. Columbus laboratory is the largest single contribution to the ISS made by the European Space Agency (ESA).
In May 2009 "Atlantis" flew a seven-member crew to the Hubble Space Telescope for its Servicing Mission 4, STS-125. The mission was a success, with the crew completing five spacewalks totalling 37 hours to install new cameras, batteries, a gyroscope and other components to the telescope.
This was the final shuttle mission that did not visit the ISS.
The longest mission flown using "Atlantis" was STS-117 which lasted almost 14 days in June 2007. During STS-117, Atlantis' crew added a new starboard truss segment and solar array pair (the S3/S4 truss), folded the P6 array in preparation for its relocation and performed four spacewalks. "Atlantis" was not equipped to take advantage of the Station-to-Shuttle Power Transfer System so missions could not be extended by making use of power provided by ISS.
During the STS-129 post-flight interview on 16 November 2009, shuttle launch director Mike Leinbach said that "Atlantis" officially beat Space Shuttle "Discovery" for the record low amount of Interim Problem Reports, with a total of just 54 listed since returning from STS-125. He continued to add "It is due to the team and the hardware processing. They just did a great job. The record will probably never be broken again in the history of the Space Shuttle Program, so congratulations to them".
During the STS-132 post-launch interview on 14 May 2010, Shuttle launch director Mike Leinbach said that "Atlantis" beat its own previous record low amount of Interim Problem Reports, with a total of 46 listed between STS-129 and STS-132.
"Atlantis" went through two overhauls of scheduled Orbiter Maintenance Down Periods (OMDPs) during its operational history.
"Atlantis" arrived at Palmdale, California in October 1992 for OMDP-1. During that visit 165 modifications were made over the next 20 months. These included the installation of a drag chute, new plumbing lines to configure the orbiter for extended duration, improved nose wheel steering, more than 800 new heat tiles and blankets and new insulation for main landing gear and structural modifications to the airframe.
"Atlantis" after suffering severe damage to her right wing during take-off, was forced to undergo repair to her aluminum structure, and replacement to 700 of her tiles in 1988. The Shuttle was relaunched in 1989.
On 5 November 1997, "Atlantis" again arrived at Palmdale for OMDP-2 which was completed on 24 September 1998. The 130 modifications carried out during OMDP-2 included glass cockpit displays, replacement of TACAN navigation with GPS and ISS airlock and docking installation. Several weight reduction modifications were also performed on the orbiter including replacement of Advanced Flexible Reusable Surface Insulation (AFRSI) insulation blankets on upper surfaces with FRSI. Lightweight crew seats were installed and the Extended Duration Orbiter (EDO) package installed on OMDP-1 was removed to lighten "Atlantis" to better serve its prime mission of servicing the ISS.
During the stand down period post "Columbia" accident, "Atlantis" went through over 75 modifications to the orbiter ranging from very minor bolt change-outs to window change-outs and different fluid systems.
"Atlantis" was known among the Shuttle workforce as being more prone than the others in the fleet to problems that needed to be addressed while readying the vehicle for launch, leading to some nicknaming it "Britney".
NASA initially planned to withdraw "Atlantis" from service in 2008, as the orbiter would have been due to undergo its third scheduled OMDP; given the timescale of the final retirement of the shuttle fleet, having the orbiter undergo this work was deemed uneconomical. It was planned that "Atlantis" would be kept in near-flight condition to be used as a spares source for "Discovery" and "Endeavour". However, with the significant planned flight schedule up to 2010, the decision was taken to extend the time between OMDPs, allowing "Atlantis" to be retained for operations. "Atlantis" was subsequently swapped for one flight each of "Discovery" and "Endeavour" in the flight manifest. "Atlantis" had completed what was meant to be its last flight, STS-132, prior to the end of the shuttle program, but the extension of the Shuttle program into 2011 led to "Atlantis" being selected for STS-135, the final Space Shuttle mission, in July 2011.
"Atlantis" is currently displayed at the Kennedy Space Center Visitor Complex. NASA Administrator Charles Bolden announced the decision at an employee event held on 12 April 2011 to commemorate the 30th anniversary of the first shuttle flight: "First, here at the Kennedy Space Center where every shuttle mission and so many other historic human space flights have originated, we'll showcase my old friend, "Atlantis"."
The Visitor Complex displays "Atlantis" with its payload bay doors opened, mounted at an angle to give the appearance of being in orbit around the Earth. The 43.21-degree mount angle also pays tribute to the countdown that preceded every shuttle launch at KSC. A multi-story digital projection of Earth rotates behind the orbiter in an indoor facility. Groundbreaking for the facility occurred in 2012.
The exhibit opened on 29 June 2013.
A total of 156 individuals flew with Space Shuttle "Atlantis" over the course of its 33 missions. Because the shuttle sometimes flew crew members arriving and departing Mir and the ISS, not all of them launched and landed on "Atlantis".
Astronaut Clayton Anderson, ESA astronaut Leopold Eyharts and Russian cosmonauts Nikolai Budarin and Anatoly Solovyev only launched on "Atlantis". Similarly, astronauts Daniel Tani and Sunita Williams, as well as cosmonauts Vladimir Dezhurov and Gennady Strekalov only landed with "Atlantis". Only 146 men and women both launched and landed aboard "Atlantis".
Some of those people flew with "Atlantis" more than once. Taking them into account, 203 total seats were filled over "Atlantis's" 33 missions. Astronaut Jerry Ross holds the record for the most flights aboard "Atlantis" at five.
Astronaut Rodolfo Neri Vela who flew aboard "Atlantis" on STS-61-B mission in 1985 became the first and so far only Mexican to have traveled to space. ESA astronaut Dirk Frimout who flew on STS-45 as a payload specialist was the first Belgian in space. STS-46 mission specialist Claude Nicollier was the first astronaut from Switzerland. On the same flight, astronaut Franco Malerba became the first citizen of Italy to travel to space.
Astronaut Michael Massimino who flew on STS-125 mission became the first person to use Twitter in space in May 2009.
Having flown aboard "Atlantis" as part of the STS-132 crew in May 2010 and "Discovery" as part of the STS-133 crew in February/March 2011, Stephen Bowen became the first NASA astronaut to be launched on consecutive missions.
NASA announced in 2007 that 24 helium and nitrogen gas tanks in "Atlantis" were older than their designed lifetime. These composite overwrapped pressure vessels (COPVs) were designed for a 10-year life and later cleared for an additional 10 years; they exceeded this life in 2005. NASA said it could no longer guarantee that the vessels on "Atlantis" would not burst or explode under full pressure. Failure of these tanks could have damaged parts of the orbiter and even wounded or killed ground personnel. An in-flight failure of a pressure vessel could even have resulted in the loss of the orbiter and its crew. NASA analyses originally assumed that the vessels would leak before they burst, but new tests showed that they could in fact burst before leaking.
Because the original vendor was no longer in business, and a new manufacturer could not be qualified before 2010, when the shuttles were scheduled to be retired, NASA decided to continue operations with the existing tanks. Therefore, to reduce the risk of failure and the cumulative effects of load, the vessels were maintained at 80 percent of the operating pressure as late in the launch countdown as possible, and the launch pad was cleared of all but essential personnel when pressure was increased to 100 percent. The new launch procedure was employed during some of the remaining launches of "Atlantis", but was resolved when the two COPVs deemed to have the highest risk of failure were replaced.
After the STS-125 mission, a work light knob was discovered jammed in the space between one of "Atlantis's" front interior windows and the Orbiter dashboard structure. The knob was believed to have entered the space during flight, when the pressurized Orbiter was expanded to its maximum size. Then, once back on Earth, the Orbiter contracted, jamming the knob in place. Leaving it as-is was considered unsafe for flight, and some options for removal (including window replacement) would have entailed a six-month delay of "Atlantis's" next mission (planned to be STS-129). Had the removal of the knob been unsuccessful, the worst-case scenario was that "Atlantis" could have been retired from the fleet, leaving "Discovery" and "Endeavour" to complete the manifest alone. On 29 June 2009, "Atlantis" was pressurized to 3 psi above ambient, which forced the Orbiter to expand slightly. The knob was then frozen with dry ice and successfully removed. Small areas of damage to the window were discovered where the edges of the knob had been embedded into the pane. Subsequent investigation of the window damage found a maximum defect depth below the reportable threshold and not serious enough to warrant the pane's replacement. | https://en.wikipedia.org/wiki?curid=28239 |
Space Shuttle Endeavour
Space Shuttle "Endeavour" (Orbiter Vehicle Designation: OV-105) is a retired orbiter from NASA's Space Shuttle program and the fifth and final operational Shuttle built. It embarked on its first mission, STS-49, in May 1992 and its 25th and final mission, STS-134, in May 2011. STS-134 was expected to be the final mission of the Space Shuttle program, but with the authorization of STS-135, "Atlantis" became the last shuttle to fly.
The United States Congress approved the construction of "Endeavour" in 1987 to replace "Challenger", which was destroyed in 1986.
Structural spares built during the construction of "Discovery" and "Atlantis" were used in its assembly. NASA chose, on cost grounds, to build "Endeavour" from spares rather than refitting "Enterprise."
Following the loss of "Challenger", in 1987 NASA was authorized to begin the procurement process for a replacement orbiter. Again, a major refit of the prototype orbiter "Enterprise" was looked at and rejected on cost grounds, with instead the cache of structural spares that were produced as part of the construction of "Discovery" and "Atlantis" earmarked for assembly into the new orbiter. Assembly was completed in July 1990, and the new orbiter was rolled out in April 1991. As part of the process, NASA ran a national competition for schools to name the new orbiter - the criteria included a requirement that it be named after an exploratory or research vessel, with a name "easily understood in the context of space"; entries included an essay about the name, the story behind it and why it was appropriate for a NASA shuttle, and the project that supported the name. Amongst the entries, "Endeavour" was suggested by one-third of the participating schools, with President Bush eventually selecting it on the advice of the NASA Administrator, Richard Truly. The national winners were Senatobia Middle School in Senatobia, Mississippi, in the elementary division and Tallulah Falls School in Tallulah Falls, Georgia, in the upper school division. They were honored at several ceremonies in Washington, D.C., including a White House ceremony where President Bush presented awards to each school. "Endeavour" was delivered by Rockwell International Space Transportation Systems Division in May 1991 and first launched a year later, in May 1992, on STS-49. Rockwell International claimed that it had made no profit on Space Shuttle "Endeavour", despite construction costing US$2.2 billion.
The orbiter is named after the British HMS "Endeavour", the ship which took Captain James Cook on his first voyage of discovery (1768–1771). This is why the name is spelled in the British English manner, rather than the American English ("Endeavor"). This has caused confusion, including when NASA itself misspelled a sign on the launch pad in 2007. The Space Shuttle carried a piece of the original wood from Cook's ship inside the cockpit. The name also honored "Endeavour", the command module of Apollo 15, which was also named for Cook's ship.
On May 30, 2020, Dragon 2 capsule C206 was named "Endeavour" during the Crew Dragon Demo-2 mission by astronauts Doug Hurley and Bob Behnken in honor of the shuttle, on which both astronauts took their first flights (STS-127 and STS-123 respectively).
On its first mission, it captured and redeployed the stranded "INTELSAT VI" communications satellite. The first African-American woman astronaut, Mae Jemison, was launched into space on the mission STS-47 on September 12, 1992.
"Endeavour" flew the first servicing mission STS-61 for the Hubble Space Telescope in 1993. In 1997 it was withdrawn from service for eight months for a retrofit, including installation of a new airlock. In December 1998, it delivered the Unity Module to the International Space Station.
"Endeavour"s last Orbiter Major Modification period began in December 2003 and ended on October 6, 2005. During this time, "Endeavour" received major hardware upgrades, including a new, multi-functional, electronic display system, often referred to as a glass cockpit, and an advanced GPS receiver, along with safety upgrades recommended by the "Columbia" Accident Investigation Board (CAIB) for the shuttle's return to flight following the loss of "Columbia" during reentry on 1 February 2003.
The STS-118 mission, "Endeavour"s first since the refit, included astronaut Barbara Morgan, formerly assigned to the Teacher in Space project, and later a member of the Astronaut Corps from 1998 to 2008, as part of the crew. Morgan was the backup for Christa McAuliffe who was on the ill-fated mission STS-51-L in 1986.
As it was constructed later than its elder sisters, "Endeavour" was built with new hardware designed to improve and expand orbiter capabilities. Most of this equipment was later incorporated into the other three orbiters during out-of-service major inspection and modification programs. "Endeavour"'s upgrades include:
Modifications resulting from a 2005–2006 refit of "Endeavour" included:
"Endeavour" flew its final mission, STS-134, to the International Space Station (ISS) in May 2011. After the conclusion of STS-134, "Endeavour" was formally decommissioned.
STS-134 was intended to launch in late 2010, but on July 1 NASA released a statement saying the "Endeavour" mission was rescheduled for February 27, 2011.
"The target dates were adjusted because critical payload hardware for STS-133 will not be ready in time to support the previously planned 16 September launch," NASA said in a statement. With the "Discovery" launch moving to November, "Endeavour" mission "cannot fly as planned, so the next available launch window is in February 2011," NASA said, adding that the launch dates were subject to change.
The launch was further postponed until April to avoid a scheduling conflict with a Russian supply vehicle heading for the International Space Station. STS-134 did not launch until 16 May at 08:56 EDT.
"Endeavour" landed at the Kennedy Space Center at 06:34 UTC on June 1, 2011, completing its final mission. It was the 25th night landing of a shuttle. Over its flight career, "Endeavour" flew 122,883,151 miles and spent 299 days in space. During "Endeavour's" last mission, the Russian spacecraft Soyuz TMA-20 departed from the ISS and paused at a distance of 200 meters. Italian astronaut Paolo Nespoli took a series of photographs and videos of the ISS with "Endeavour" docked. This was the second time a shuttle was photographed docked and the first time since 1996. Commander Mark Kelly was the last astronaut off "Endeavour" after the landing, and the crew stayed on the landing strip to sign autographs and pose for pictures.
STS-134 was the penultimate Space Shuttle mission; STS-135 was added to the schedule in January 2011, and in July "Atlantis" flew for the final time.
After more than twenty organizations submitted proposals to NASA for the display of an orbiter, NASA announced that "Endeavour" would go to the California Science Center in Los Angeles.
After low level flyovers above NASA and civic landmarks across the country and in California, it was delivered to Los Angeles International Airport (LAX) on September 21, 2012. The orbiter was slowly and carefully transported through the streets of Los Angeles and Inglewood three weeks later, from October 11–14 along La Tijera, Manchester, Crenshaw, and Martin Luther King, Jr. Boulevards to its final destination at the California Science Center in Exposition Park.
"Endeavour"s route on the city streets between LAX and Exposition Park was meticulously measured and each move was carefully choreographed. In multiple locations, there were only inches of clearance for the Shuttle's wide wings between telephone poles, apartment buildings and other structures. Many street light standards and traffic signals were temporarily removed as the Shuttle passed through. It was necessary to remove over 400 street trees as well, some of which were fairly old, creating a small controversy. However, the removed trees were replaced two-for-one by the Science Center, using part of the $200 million funding for the move.
The power had to be turned off and power-carrying poles had to be removed temporarily as the orbiter crept along Manchester, to Prairie Avenue, then Crenshaw Boulevard. News crews lined the streets along the path, with news personalities visible in their trucks. Police escorts and other security personnel, including the LAPD, LASD, CHP, and NASA officials, controlled the large crowds gathered, with support from the LAFD and LACoFD to treat heat exhaustion victims as "Endeavour" made its way through the city. "Endeavour" was parked for a few hours at the Great Western Forum where it was available for viewing. The journey was famous for an unmodified Toyota Tundra pickup truck pulling the Space Shuttle across the Manchester Boulevard Bridge. The Space Shuttle was mainly carried by four self-propelled robotic dollies throughout the 12-mile journey. However, due to bridge weight restrictions, "Endeavour" was moved onto the dolly towed by the Tundra. After it had completely crossed the bridge, the Space Shuttle was returned to the robotic dollies. The footage was later used in a commercial for the 2013 Super Bowl. Having taken longer than expected, "Endeavour" finally reached the Science Center on October 14.
The exhibit was opened to the public on October 30, 2012 at the temporary Samuel Oschin Space Shuttle "Endeavour" Display Pavilion of the museum. A new addition to the Science Center, called the Samuel Oschin Air and Space Center, is under construction as "Endeavour"s permanent home. Before the opening, "Endeavour" will be mounted vertically with an external tank and a pair of solid rocket boosters in the Shuttle stack configuration. One payload door will be opened out to reveal a demonstration payload inside.
After its decommissioning, "Endeavour"s Canadarm (formally the 'Shuttle Remote Manipulator System') was removed in order to be sent to the Canadian Space Agency's John H. Chapman Space Centre in Longueuil, Quebec, a suburb of Montreal, where it was to be placed on display. In a Canadian poll on which science or aerospace museum should be selected to display the Canadarm, originally built by SPAR Aerospace, the Canadian Space Agency's headquarters placed third to last with only 35 out of 638 votes. "Endeavour"s Canadarm has since gone on permanent display at the Canada Aviation and Space Museum in Ottawa.
In August 2015 NASA engineers went to work on removing a few of the tanks from "Endeavour" for reuse as storage containers for potable water on the International Space Station.
Space Shuttle "Endeavour" is the namesake for SpaceX's Dragon 2 capsule 206, which first flew on Crew Dragon Demo-2 beginning May 30, 2020.
‡ Longest shuttle mission for "Endeavour"
The Flow Director was responsible for the overall preparation of the Shuttle for launch and processing it after landing, and remained permanently assigned to head the spacecraft's ground crew while the astronaut flight crews changed for every mission. Each Shuttle's Flow Director was supported by a Vehicle Manager for the same spacecraft. Space Shuttle "Endeavour"s Flow Directors were:
"Endeavour" is currently housed in the Samuel Oschin Pavilion at the California Science Center in Exposition Park in South Los Angeles about two miles south of Downtown Los Angeles. A companion exhibit, ""Endeavour": The California Story", features images and artifacts that relate the Space Shuttle program to California, where the orbiters were originally constructed. It has been planned for a new facility to be built with "Endeavour" attached to an external fuel tank (the last mission-ready one in existence as all others were destroyed during launch) and the two solid rocket boosters (SRBs) and raised in an upright position, as if "Endeavour" were to make one more flight. "Endeavour" is on display at the museum, the SRBs are in storage, and the external tank ET-94 is on display: ET-94 is currently undergoing restoration after being used to analyze the foam on its sister tank, which was a factor in the failure of STS-107.
Following their May 30, 2020 launch on board the SpaceX Crew Dragon Demo-2 vehicle, the crew announced in orbit that they had named their spacecraft "Capsule Endeavour". Astronauts Bob Behnken and Doug Hurley said the name has a dual meaning: first, after the "incredible endeavor" put forth by SpaceX and NASA after the retirement of the space shuttle fleet in 2011; and second, because both Hurley and Behnken each flew their first flight aboard the shuttle Endeavour (Behnken on STS-123, Hurley on STS-127) and wanted to name this new spacecraft after the one that took each of them into space. | https://en.wikipedia.org/wiki?curid=28240 |
Sports Car Club of America
The Sports Car Club of America (SCCA) is a non-profit American automobile club and sanctioning body supporting road racing, rallying, and autocross in the United States. Formed in 1944, it runs many programs for both amateur and professional racers.
The SCCA traces its roots to the Automobile Racing Club of America (not to be confused with the current stock car series of the same name). ARCA was founded in 1933 by brothers Miles and Sam Collier, and dissolved in 1941 at the outbreak of World War II. The SCCA was formed in 1944 as an enthusiast group. The SCCA began sanctioning road racing in 1948 with the inaugural Watkins Glen Grand Prix. Cameron Argetsinger, an SCCA member and local enthusiast who would later become Director of Pro Racing and Executive Director of the SCCA, helped organize the event for the SCCA.
In 1951, the SCCA National Sports Car Championship was formed from existing marquee events around the nation, including Watkins Glen, Pebble Beach, and Elkhart Lake. Many early SCCA events were held on disused air force bases, organized with the help of Air Force General Curtis LeMay, a renowned enthusiast of sports car racing. LeMay loaned out facilities of Strategic Air Command bases for the SCCA's use; the SCCA relied heavily on these venues during the early and mid-1950s during the transition from street racing to permanent circuits.
By 1962, the SCCA was tasked with managing the U.S. World Sportscar Championship rounds at Daytona, Sebring, Bridgehampton and Watkins Glen. The club was also involved in the Formula 1 U.S. Grand Prix. SCCA Executive Director John Bishop helped to create the United States Road Racing Championship series for Group 7 sports cars to recover races that had been taken by rival USAC Road Racing Championship. Bishop was also instrumental in founding the SCCA Trans-Am Series and the SCCA/CASC Can-Am series. In 1969, tension and infighting over Pro Racing's autonomy caused Bishop to resign and help form the International Motor Sports Association.
The SCCA dropped its amateur policy in 1962 and began sanctioning professional racing. In 1963, the United States Road Racing Championship was formed. In 1966 the Canadian-American Challenge Cup (Can-Am) was created for Group 7 open-top sportscars. The Trans-Am Series for pony cars also began in 1966. Today, Trans-Am uses GT-1 class regulations, giving amateur drivers a chance to race professionally. A professional series for open-wheel racing cars was introduced in 1967 as the SCCA Grand Prix Championship. This series was then held under various names through to the 1976 SCCA/USAC Formula 5000 Championship.
Current SCCA-sanctioned series include Trans Am, the Pirelli World Challenge for GT and touring cars, the Global MX-5 Cup, F2000 Championship Series, F1600 Championship Series and the Atlantic Championship Series. SCCA Pro Racing has also sanctioned professional series for some amateur classes such as Spec Racer Ford Pro and Formula Enterprises Pro. SCCA Pro Racing also sanctioned the Volkswagen Jetta TDI Cup during its time.
The Club Racing program is a road racing division where drivers race on either dedicated race tracks or on temporary street circuits. Competitors require either a regional or a national racing license. Both modified production cars (ranging from lightly modified cars with only extra safety equipment to heavily modified cars that retain only the basic shape of the original vehicle) and designed-from-scratch "formula" and "sports racer" cars can be used in Club Racing. Most of the participants in the Club Racing program are unpaid amateurs, but some go on to professional racing careers. The club is also the source for race workers in all specialties.
The annual national championship for Club Racing is called the SCCA National Championship Runoffs and has been held at Riverside International Raceway (1964, 1966, 1968), Daytona International Speedway (1965, 1967, 1969, 2015), Road Atlanta (1970–1993), Mid-Ohio Sports Car Course (1994–2005, 2016), Heartland Park Topeka (2006–2008), Road America (2009–2013, 2020), Mazda Raceway Laguna Seca (2014), and Indianapolis Motor Speedway (2017). In 2018, the Runoffs returned west to Sonoma Raceway. In 2019, the race was held at Virginia International Raceway, a track where the race had never before been held. It was announced on June 15, 2018 that the Runoffs would go back to Road America in 2020. On May 25, 2019, the weekend of the 2019 Indianapolis 500, the SCCA announced it would be returning to Indianapolis Motor Speedway in 2021. The current SCCA record holder is Jerry Hansen (former owner of Brainerd International Raceway), with twenty-seven national championships.
The seven national classes of the formula group are Formula Atlantic (FA), Formula Continental (FC), Formula SCCA (FE), Formula F (FF), Formula Vee (FV), Formula X (FX), and Formula 500 (F500).
The autocross program is branded as "Solo". Up to four cars at a time run on a course laid out with traffic cones on a large paved surface, such as a parking lot or airport runway, without interfering with one another.
Competitions are held at the regional, divisional, and national levels. A national champion in each class is determined at the national championship (usually referred to as "Nationals") held in September. In 2009, Solo Nationals moved to the Lincoln Airpark in Lincoln, Nebraska. Individual national-level events called "Championship Tours" and "Match Tours" are held throughout the racing season. The SCCA also holds national-level events in an alternate format called "ProSolo". In ProSolo, two cars compete at the same time on mirror-image courses with drag racing-style starts, complete with reaction and 60-foot times. Class winners and other qualifiers (based on time differential against the class winner) then compete in a handicapped elimination round called the "Challenge". Points are awarded in both class and Challenge competition, and an annual champion is crowned each September at the ProSolo Finale event in Lincoln, Nebraska.
The SCCA sanctions "RallyCross" events, similar to autocross, but on a non-paved course. SCCA ProRally was a national performance rally series similar to the World Rally Championship. At the end of the 2004 season SCCA dropped ProRally and ClubRally. A new organization, Rally America, picked up both series starting in 2005.
Road rallies are run on open, public roads. These are not races in the sense of speed, but of precision and navigation. The object is to drive on time, arriving at checkpoints with the proper amount of elapsed time from the previous checkpoint. Competitors do not know where the checkpoints are.
In recent years, the SCCA has expanded and re-organized some of the higher-speed events under the Time Trials banner. These include Performance Driving Experience ("PDX"), Club Trials, Track Trials, and Hill Climb events. PDX events are non-competition HPDE-type events and consist of driver-education and car control classroom learning combined with on-track instruction.
The SCCA is organized into six conferences, nine divisions and 115 regions, each organizing events in that area to make the events more accessible to people throughout the country. The number of divisions has increased since the SCCA's foundation. Northern Pacific and Southern Pacific started as a single Pacific Coast Division until dividing in 1966. Rocky Mountain Division is a relatively recent split. The Great Lakes Division was split from the Central Division at the end of 2006. | https://en.wikipedia.org/wiki?curid=28242 |
Star network
A star network is an implementation of a spoke–hub distribution paradigm in computer networks. In a star network, every host is connected to a central hub. In its simplest form, one central hub acts as a conduit to transmit messages. The star network is one of the most common computer network topologies.
The hub and hosts, and the transmission lines between them, form a graph with the topology of a star. Data on a star network passes through the hub before continuing to its destination. The hub manages and controls all functions of the network. It also acts as a repeater for the data flow.
The star topology reduces the impact of a transmission line failure by independently connecting each host to the hub. Each host may thus communicate with all others by transmitting to, and receiving from, the hub. The failure of a transmission line linking any host to the hub will result in the isolation of that host from all others, but the rest of the network will be unaffected.
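The failure behaviour described above can be illustrated with a minimal Python sketch; the "StarNetwork" class and its methods are purely illustrative assumptions, not part of any networking standard or library. Every host holds a single link to the hub, all messages are relayed through the hub, and the failure of one link isolates only the host attached to it.

class StarNetwork:
    """Toy model of a star topology: every host is linked only to the central hub."""

    def __init__(self):
        self.link_up = {}  # host name -> whether its line to the hub is working

    def connect(self, host):
        self.link_up[host] = True

    def fail_link(self, host):
        # A single transmission-line failure isolates only the host on that line.
        self.link_up[host] = False

    def send(self, src, dst, message):
        # All traffic passes through the hub, which relays (repeats) it to the destination.
        if self.link_up.get(src) and self.link_up.get(dst):
            return f"{src} -> hub -> {dst}: {message}"
        return None  # one endpoint is cut off from the hub

net = StarNetwork()
for host in ("A", "B", "C"):
    net.connect(host)
print(net.send("A", "B", "hello"))  # delivered via the hub
net.fail_link("C")
print(net.send("A", "C", "hello"))  # None: only host C is isolated
print(net.send("A", "B", "hello"))  # still delivered; the rest of the network is unaffected

In this sketch, failing C's link leaves traffic between A and B untouched, mirroring the property that a single transmission-line failure isolates only the host on that line.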
The star configuration is commonly used with twisted pair cable and optical fiber cable. However, it can also be used with coaxial cable. | https://en.wikipedia.org/wiki?curid=28244 |
Sufism
Sufism or Taṣawwuf, variously defined as "Islamic mysticism", "the inward dimension of Islam" or "the phenomenon of mysticism within Islam", is mysticism in Islam, "characterized ... [by particular] values, ritual practices, doctrines and institutions" which began very early in Islamic history and represents "the main manifestation and the most important and central crystallization of" mystical practice in Islam. Practitioners of Sufism have been referred to as "Sufis" ("ṣūfiyy" / "ṣūfī").
Historically, Sufis have often belonged to different "ṭuruq" or "orders" – congregations formed around a grand master referred to as a "wali" who traces a direct chain of successive teachers back to the Islamic prophet, Muhammad. These orders meet for spiritual sessions ("majalis") in meeting places known as "zawiyas", "khanqahs" or "tekke". They strive for "ihsan" (perfection of worship), as detailed in a "hadith": "Ihsan is to worship Allah as if you see Him; if you can't see Him, surely He sees you." Sufis regard Muhammad as "al-Insān al-Kāmil", the primary perfect man who exemplifies the morality of God, and see him as their leader and prime spiritual guide.
All Sufi orders trace most of their original precepts from Muhammad through his cousin and son-in-law Ali, with the notable exception of the Naqshbandi order, who trace their original precepts to Muhammad through his companion and father-in-law, Abu Bakr.
Sufism has historically been mistaken for a sect of Islam, when it is in fact a religious order open to members of any Islamic denomination.
Although the overwhelming majority of Sufis, both pre-modern and modern, were and are adherents of Sunni Islam, there also developed certain strands of Sufi practice within the ambit of Shia Islam during the late medieval period, particularly after the forced conversion of Iran from majority Sunni to Shia. Traditional Sufi orders during the first five centuries of Islam were all based in Sunni Islam. Although Sufis were opposed to dry legalism, they strictly observed Islamic law and belonged to various schools of Islamic jurisprudence and theology.
Sufis have been characterized by their asceticism, especially by their attachment to "dhikr", the practice of remembrance of God, often performed after prayers. They gained adherents among a number of Muslims as a reaction against the worldliness of the early Umayyad Caliphate (661–750)
and have spanned several continents and cultures over a millennium, initially expressing their beliefs in Arabic and later expanding into Persian, Turkish, Punjabi and Urdu, among others. Sufis played an important role in the formation of Muslim societies through their missionary and educational activities. According to William Chittick, "In a broad sense, Sufism can be described as the interiorization, and intensification of Islamic faith and practice."
Despite a relative decline of Sufi orders in the modern era and criticism of some aspects of Sufism by modernist thinkers and conservative Salafists, Sufism has continued to play an important role in the Islamic world, and has also influenced various forms of spirituality in the West. Association with Sufism was in fact widespread amongst common as well as learned Muslims before the advent of the 20th century.
The Arabic word "tasawwuf" (lit. being or becoming a Sufi), generally translated as Sufism, is commonly defined by Western authors as Islamic mysticism. The Arabic term "sufi" has been used in Islamic literature with a wide range of meanings, by both proponents and opponents of Sufism. Classical Sufi texts, which stressed certain teachings and practices of the Quran and the sunnah (exemplary teachings and practices of the Islamic prophet Muhammad), gave definitions of "tasawwuf" that described ethical and spiritual goals and functioned as teaching tools for their attainment. Many other terms that described particular spiritual qualities and roles were used instead in more practical contexts.
Some modern scholars have used other definitions of Sufism such as "intensification of Islamic faith and practice" and "process of realizing ethical and spiritual ideals".
The term Sufism was originally introduced into European languages in the 18th century by Orientalist scholars, who viewed it mainly as an intellectual doctrine and literary tradition at variance with what they saw as the sterile monotheism of Islam. In modern scholarly usage the term serves to describe a wide range of social, cultural, political and religious phenomena associated with Sufis.
The original meaning of "sufi" seems to have been "one who wears wool ("ṣūf")", and the Encyclopaedia of Islam calls other etymological hypotheses "untenable". Woollen clothes were traditionally associated with ascetics and mystics. Al-Qushayri and Ibn Khaldun both rejected all possibilities other than "ṣūf" on linguistic grounds.
Another explanation traces the lexical root of the word to "ṣafā", which in Arabic means "purity"; in this context another similar idea of "tasawwuf" as considered in Islam is "tazkiyah" (meaning: self-purification), which is also widely used in Sufism. These two explanations were combined by the Sufi al-Rudhabari (d. 322 AH), who said, "The Sufi is the one who wears wool on top of purity".
Others have suggested that the word comes from the term "ahl aṣ-ṣuffah" ("the people of the suffah or the bench"), who were a group of impoverished companions of Muhammad who held regular gatherings of "dhikr"; one of the most prominent companions among them was Abu Huraira. These men and women who sat at al-Masjid an-Nabawi are considered by some to be the first Sufis.
According to Carl W. Ernst, the earliest figures of Sufism are Muhammad himself and his companions ("Sahabah"). Sufi orders are based on the "bay‘ah" (بَيْعَة "bay‘ah", مُبَايَعَة "mubāya‘ah", "pledge, allegiance") that was given to Muhammad by his "Ṣahabah". By pledging allegiance to Muhammad, the Sahabah had committed themselves to the service of God.
Sufis believe that by giving "bayʿah" (pledging allegiance) to a legitimate Sufi shaykh, one is pledging allegiance to Muhammad; therefore, a spiritual connection between the seeker and Muhammad is established. It is through Muhammad that Sufis aim to learn about, understand and connect with God. Ali is regarded as one of the major figures amongst the "Sahaba" who have directly pledged allegiance to Muhammad, and Sufis maintain that through Ali, knowledge about Muhammad and a connection with Muhammad may be attained. Such a concept may be understood by the "hadith", which Sufis regard to be authentic, in which Muhammad said, "I am the city of knowledge and Ali is its gate". Eminent Sufis such as Ali Hujwiri refer to Ali as having a very high ranking in "Tasawwuf". Furthermore, Junayd of Baghdad regarded Ali as sheikh of the principals and practices of "Tasawwuf".
Historian Jonathan A.C. Brown notes that during the lifetime of Muhammad, some companions were more inclined than others to "intensive devotion, pious abstemiousness and pondering the divine mysteries" more than Islam required, such as Abu Dhar al-Ghifari. Hasan al-Basri, a tabi, is considered a "founding figure" in the "science of purifying the heart".
Practitioners of Sufism hold that in its early stages of development Sufism effectively referred to nothing more than the internalization of Islam. According to one perspective, it is directly from the Qur'an, constantly recited, meditated, and experienced, that Sufism proceeded, in its origin and its development. Other practitioners have held that Sufism is the strict emulation of the way of Muhammad, through which the heart's connection to the Divine is strengthened.
Modern academics and scholars have rejected early Orientalist theories asserting a non-Islamic origin of Sufism; the consensus is that it emerged in Western Asia. Many have asserted Sufism to be unique within the confines of the Islamic religion, and contend that Sufism developed from people like Bayazid Bastami, who, in his utmost reverence for the sunnah, refused to eat a watermelon because he did not find any proof that Muhammad ever ate one. According to the late medieval mystic and Persian poet Jami, Abd-Allah ibn Muhammad ibn al-Hanafiyyah (died c. 716) was the first person to be called a "Sufi".
Important contributions in writing are attributed to Uwais al-Qarani, Hasan of Basra, Harith al-Muhasibi, Abu Nasr as-Sarraj and Said ibn al-Musayyib. Ruwaym, from the second generation of Sufis in Baghdad, was also an influential early figure, as was Junayd of Baghdad; a number of early practitioners of Sufism were disciples of one of the two.
Sufism already had a long history before its teachings were institutionalized into devotional orders ("tarîqât") in the early Middle Ages. The Naqshbandi order is a notable exception to the general rule of orders tracing their spiritual lineage through Muhammad's grandsons, as it traces the origin of its teachings from Muhammad to the first Islamic Caliph, Abu Bakr.
Over the years, Sufi orders have influenced and been adopted by various Shi'i movements, especially Isma'ilism, which led to the Safaviyya order's conversion to Shia Islam from Sunni Islam and the spread of Twelverism throughout Iran. Sufi orders include Ba 'Alawiyya, Badawiyya, Bektashi, Burhaniyya, Chishti, Khalwati, Mevlevi, Naqshbandi, Ni'matullāhī, Uwaisi, Qadiriyya, Qalandariyya, Rifa'i, Sarwari Qadiri, Shadhiliyya, Suhrawardiyya, Tijaniyyah, Zinda Shah Madariya, and others.
Existing in both Sunni and Shia Islam, Sufism is not a distinct sect, as is sometimes erroneously assumed, but a method of approaching or a way of understanding the religion, which strives to take the regular practice of the religion to the "supererogatory level" through simultaneously "fulfilling ... [the obligatory] religious duties" and finding a "way and a means of striking a root through the 'narrow gate' in the depth of the soul out into the domain of the pure and unimprisonable Spirit which itself opens out on to the Divinity." Academic studies of Sufism confirm that the notion of Sufism as a tradition separate from Islam, set apart from so-called "pure Islam", is frequently a product of Western Orientalism and of modern Islamic fundamentalists.
As a mystical and ascetic aspect of Islam, it is considered the part of Islamic teaching that deals with the purification of the inner self. By focusing on the more spiritual aspects of religion, Sufis strive to obtain direct experience of God by making use of "intuitive and emotional faculties" that one must be trained to use. "Tasawwuf" is regarded as a science of the soul that has always been an integral part of orthodox Islam. In his "Al-Risala al-Safadiyya", ibn Taymiyyah describes the Sufis as those who belong to the path of the Sunna and represent it in their teachings and writings.
Ibn Taymiyya's Sufi inclinations and his reverence for Sufis like Abdul-Qadir Gilani can also be seen in his hundred-page commentary on "Futuh al-ghayb", covering only five of the seventy-eight sermons of the book, but showing that he considered "tasawwuf" essential within the life of the Islamic community.
In his commentary, Ibn Taymiyya stresses that the primacy of the "sharia" forms the soundest tradition in "tasawwuf", and to argue this point he lists over a dozen early masters, as well as more contemporary shaykhs like his fellow Hanbalis, al-Ansari al-Harawi and Abdul-Qadir, and the latter's own shaykh, Hammad al-Dabbas the upright. He cites the early shaykhs (shuyukh al-salaf) such as Al-Fuḍayl ibn ‘Iyāḍ, Ibrahim ibn Adham, Ma`ruf al-Karkhi, Sirri Saqti, Junayd of Baghdad, and others of the early teachers, as well as Abdul-Qadir Gilani, Hammad, Abu al-Bayan and others of the later masters, noting that they do not permit the followers of the Sufi path to depart from the divinely legislated command and prohibition.
Al-Ghazali narrates in "Al-Munqidh min al-dalal":
In the eleventh century, Sufism, which had previously been a less "codified" trend in Islamic piety, began to be "ordered and crystallized" into orders which have continued until the present day. Each of these orders was founded by a major Islamic scholar, and some of the largest and most widespread included the Suhrawardiyya (after Abu al-Najib Suhrawardi [d. 1168]), the Qadiriyya (after Abdul-Qadir Gilani [d. 1166]), the Rifa'iyya (after Ahmed al-Rifa'i [d. 1182]), the Chishtiyya (after Moinuddin Chishti [d. 1236]), the Shadiliyya (after Abul Hasan ash-Shadhili [d. 1258]), the Hamadaniyyah (after Sayyid Ali Hamadani [d. 1384]), and the Naqshbandiyya (after Baha-ud-Din Naqshband Bukhari [d. 1389]). Contrary to popular perception in the West, however, neither the founders of these orders nor their followers ever considered themselves to be anything other than orthodox Sunni Muslims, and in fact all of these orders were attached to one of the four orthodox legal schools of Sunni Islam. Thus, the Qadiriyya order was Hanbali, with its founder, Abdul-Qadir Gilani, being a renowned jurist; the Chishtiyya was Hanafi; the Shadiliyya order was Maliki; and the Naqshbandiyya order was Hanafi. Thus, it is precisely because it is historically proven that "many of the most eminent defenders of Islamic orthodoxy, such as Abdul-Qadir Gilani, Ghazali, and the Sultan Ṣalāḥ ad-Dīn (Saladin) were connected with Sufism" that the popular studies of writers like Idries Shah are continuously disregarded by scholars as conveying the fallacious image that "Sufism" is somehow distinct from "Islam."
Towards the end of the first millennium, a number of manuals began to be written summarizing the doctrines of Sufism and describing some typical Sufi practices. Two of the most famous of these are now available in English translation: the "Kashf al-Mahjûb" of Ali Hujwiri and the "Risâla" of Al-Qushayri.
Two of al-Ghazali's greatest treatises are the "Revival of Religious Sciences" and what he termed "its essence", the "Kimiya-yi sa'ādat". He argued that Sufism originated from the Qur'an and thus was compatible with mainstream Islamic thought and did not in any way contradict Islamic Law—being instead necessary to its complete fulfillment. Ongoing efforts by both traditionally trained Muslim scholars and Western academics are making al-Ghazali's works more widely available in English translation, allowing English-speaking readers to judge for themselves the compatibility of Islamic Law and Sufi doctrine. Several sections of the "Revival of Religious Sciences" have been published in translation by the Islamic Texts Society. An abridged translation (from an Urdu translation) of "The Alchemy of Happiness" was published by Claud Field in 1910. It has been translated in full by Muhammad Asim Bilal (2001).
Historically, Sufism became “an incredibly important part of Islam” and "one of the most widespread and omnipresent aspects of Muslim life" in Islamic civilization from the early medieval period onwards, when it began to permeate nearly all major aspects of Sunni Islamic life in regions stretching from India and Iraq to the Balkans and Senegal.
The rise of Islamic civilization coincides strongly with the spread of Sufi philosophy in Islam. The spread of Sufism has been considered a definitive factor in the spread of Islam, and in the creation of integrally Islamic cultures, especially in Africa and Asia. The Senussi tribes of Libya and the Sudan are among the strongest adherents of Sufism. Sufi poets and philosophers such as Khoja Akhmet Yassawi, Rumi, and Attar of Nishapur (c. 1145 – c. 1221) greatly enhanced the spread of Islamic culture in Anatolia, Central Asia, and South Asia. Sufism also played a role in creating and propagating the culture of the Ottoman world, and in resisting European imperialism in North Africa and South Asia.
Between the 13th and 16th centuries, Sufism produced a flourishing intellectual culture throughout the Islamic world, a “Golden Age” whose physical artifacts survive. In many places a person or group would endow a waqf to maintain a lodge (known variously as a "zawiya", "khanqah", or "tekke") to provide a gathering place for Sufi adepts, as well as lodging for itinerant seekers of knowledge. The same system of endowments could also pay for a complex of buildings, such as that surrounding the Süleymaniye Mosque in Istanbul, including a lodge for Sufi seekers, a hospice with kitchens where these seekers could serve the poor and/or complete a period of initiation, a library, and other structures. No important domain in the civilization of Islam remained unaffected by Sufism in this period.
Opposition to Sufi teachers and orders from more literalist and legalist strains of Islam existed in various forms throughout Islamic history. It took on a particularly violent form in the 18th century with the emergence of the Wahhabi movement.
Around the turn of the 20th century, Sufi rituals and doctrines also came under sustained criticism from modernist Islamic reformers, liberal nationalists, and, some decades later, socialist movements in the Muslim world. Sufi orders were accused of fostering popular superstitions, resisting modern intellectual attitudes, and standing in the way of progressive reforms. Ideological attacks on Sufism were reinforced by agrarian and educational reforms, as well as new forms of taxation, which were instituted by Westernizing national governments, undermining the economic foundations of Sufi orders. The extent to which Sufi orders declined in the first half of the 20th century varied from country to country, but by the middle of the century the very survival of the orders and traditional Sufi lifestyle appeared doubtful to many observers.
However, defying these predictions, Sufism and Sufi orders have continued to play a major role in the Muslim world, also expanding into Muslim-minority countries. Its ability to articulate an inclusive Islamic identity with greater emphasis on personal and small-group piety has made Sufism especially well-suited for contexts characterized by religious pluralism and secularist perspectives.
In the modern world, the classical interpretation of Sunni orthodoxy, which sees in Sufism an essential dimension of Islam alongside the disciplines of jurisprudence and theology, is represented by institutions such as Egypt's Al-Azhar University and Zaytuna College, with Al-Azhar's current Grand Imam Ahmed el-Tayeb recently defining "Sunni orthodoxy" as being a follower "of any of the four schools of [legal] thought (Hanafi, Shafi’i, Maliki or Hanbali) and ... [also] of the Sufism of Imam Junayd of Baghdad in doctrines, manners and [spiritual] purification."
Current Sufi orders include Alians, Bektashi Order, Mevlevi Order, Ba 'Alawiyya, Chishti Order, Jerrahi, Naqshbandi, Mujaddidi, Ni'matullāhī, Qadiriyya, Qalandariyya, Sarwari Qadiriyya, Shadhiliyya, Suhrawardiyya, Saifiah (Naqshbandiah), and Uwaisi. The relationship of Sufi orders to modern societies is usually defined by their relationship to governments.
Turkey and Persia together have been a center for many Sufi lineages and orders. The Bektashi were closely affiliated with the Ottoman Janissaries and are the heart of Turkey's large and mostly liberal Alevi population. They have spread westwards to Cyprus, Greece, Albania, Bulgaria, Republic of Macedonia, Bosnia and Herzegovina, Kosovo, and, more recently, to the United States, via Albania.
Sufism is popular in such African countries as Egypt, Tunisia, Algeria, Morocco, and Senegal, where it is seen as a mystical expression of Islam. Sufism is traditional in Morocco, but has seen a growing revival with the renewal of Sufism under contemporary spiritual teachers such as Hamza al Qadiri al Boutchichi. Mbacke suggests that one reason Sufism has taken hold in Senegal is because it can accommodate local beliefs and customs, which tend toward the mystical.
The life of the Algerian Sufi master Abdelkader El Djezairi is instructive in this regard. Notable as well are the lives of Amadou Bamba and El Hadj Umar Tall in West Africa, and Sheikh Mansur and Imam Shamil in the Caucasus. In the twentieth century, some Muslims have called Sufism a superstitious religion which holds back Islamic achievement in the fields of science and technology.
A number of Westerners have embarked with varying degrees of success on the path of Sufism. One of the first to return to Europe as an official representative of a Sufi order, and with the specific purpose to spread Sufism in Western Europe, was the Swedish-born wandering Sufi Ivan Aguéli. René Guénon, the French scholar, became a Sufi in the early twentieth century and was known as Sheikh Abdul Wahid Yahya. His manifold writings defined the practice of Sufism as the essence of Islam, but also pointed to the universality of its message. Other spiritualists, such as George Gurdjieff, may or may not conform to the tenets of Sufism as understood by orthodox Muslims.
Other noteworthy Sufi teachers who have been active in the West in recent years include Bawa Muhaiyaddeen, Inayat Khan, Nazim Al-Haqqani, Muhammad Alauddin Siddiqui, Javad Nurbakhsh, Bulent Rauf, Irina Tweedie, Idries Shah, Muzaffer Ozak, Nahid Angha, and Ali Kianfar.
Currently active Sufi academics and publishers include Llewellyn Vaughan-Lee, Nuh Ha Mim Keller, Abdullah Nooruddeen Durkee, Waheed Ashraf, Omer Tarin, Ahmed abdu r Rashid and Timothy Winter.
While all Muslims believe that they are on the pathway to Allah and hope to become close to God in Paradise—after death and after the Last Judgment—Sufis also believe that it is possible to draw closer to God and to more fully embrace the divine presence in this life. The chief aim of all Sufis is to seek the pleasing of God by working to restore within themselves the primordial state of "fitra".
To Sufis, the outer law consists of rules pertaining to worship, transactions, marriage, judicial rulings, and criminal law—what is often referred to, broadly, as "qanun". The inner law of Sufism consists of rules about repentance from sin, the purging of contemptible qualities and evil traits of character, and adornment with virtues and good character.
To the Sufi, it is the transmission of divine light from the teacher's heart to the heart of the student, rather than worldly knowledge, that allows the adept to progress. They further believe that the teacher should attempt inerrantly to follow the Divine Law.
According to Moojan Momen "one of the most important doctrines of Sufism is the concept of "al-Insan al-Kamil" "the Perfect Man". This doctrine states that there will always exist upon the earth a "Qutb" (Pole or Axis of the Universe)—a man who is the perfect channel of grace from God to man and in a state of wilayah (sanctity, being under the protection of Allah). The concept of the Sufi Qutb is similar to that of the Shi'i Imam. However, this belief puts Sufism in "direct conflict" with Shia Islam, since both the Qutb (who for most Sufi orders is the head of the order) and the Imam fulfill the role of "the purveyor of spiritual guidance and of Allah's grace to mankind". The vow of obedience to the Shaykh or Qutb which is taken by Sufis is considered incompatible with devotion to the Imam".
As a further example, the prospective adherent of the Mevlevi Order would have been ordered to serve in the kitchens of a hospice for the poor for 1001 days prior to being accepted for spiritual instruction, and a further 1,001 days in solitary retreat as a precondition of completing that instruction.
Some teachers, especially when addressing more general audiences, or mixed groups of Muslims and non-Muslims, make extensive use of parable, allegory, and metaphor. Although approaches to teaching vary among different Sufi orders, Sufism as a whole is primarily concerned with direct personal experience, and as such has sometimes been compared to other, non-Islamic forms of mysticism (e.g., as in the books of Hossein Nasr).
Many Sufis believe that to reach the highest levels of success in Sufism typically requires that the disciple live with and serve the teacher for a long period of time. An example is the folk story about Baha-ud-Din Naqshband Bukhari, who gave his name to the Naqshbandi Order. He is believed to have served his first teacher, Sayyid Muhammad Baba As-Samasi, for 20 years, until as-Samasi died. He is said to then have served several other teachers for lengthy periods of time. He is said to have helped the poorer members of the community for many years, and after this concluded, his teacher directed him to care for animals, cleaning their wounds and assisting them.
Devotion to Muhammad is an exceptionally strong practice within Sufism. Sufis have historically revered Muhammad as the prime personality of spiritual greatness. The Sufi poet Saadi Shirazi stated, "He who chooses a path contrary to that of the prophet, shall never reach the destination. O Saadi, do not think that one can treat that way of purity except in the wake of the chosen one." Rumi attributes his self-control and abstinence from worldly desires as qualities attained by him through the guidance of Muhammad. Rumi states, "I 'sewed' my two eyes shut from [desires for] this world and the next – this I learned from Muhammad." Ibn Arabi regards Muhammad as the greatest man and states, "Muhammad's wisdom is uniqueness ("fardiya") because he is the most perfect existent creature of this human species. For this reason, the command began with him and was sealed with him. He was a Prophet while Adam was between water and clay, and his elemental structure is the Seal of the Prophets." Attar of Nishapur claimed that he praised Muhammad in such a manner that was not done before by any poet, in his book the "Ilahi-nama". Fariduddin Attar stated, "Muhammad is the exemplar to both worlds, the guide of the descendants of Adam. He is the sun of creation, the moon of the celestial spheres, the all-seeing eye...The seven heavens and the eight gardens of paradise were created for him, he is both the eye and the light in the light of our eyes." Sufis have historically stressed the importance of Muhammad's perfection and his ability to intercede. The persona of Muhammad has historically been and remains an integral and critical aspect of Sufi belief and practice. Bayazid Bastami is recorded to have been so devoted to the "sunnah" of Muhammad that he refused to eat a watermelon because he could not establish that Muhammad ever ate one.
In the 13th century, a Sufi poet from Egypt, Al-Busiri, wrote the "al-Kawākib ad-Durrīya fī Madḥ Khayr al-Barīya" (The Celestial Lights in Praise of the Best of Creation) commonly referred to as "Qaṣīdat al-Burda" ("Poem of the Mantle"), in which he extensively praised Muhammad. This poem is still widely recited and sung amongst Sufi groups all over the world.
According to Ibn Arabi, Islam is the best religion because of Muhammad. Ibn Arabi regards that the first entity that was brought into existence is the reality or essence of Muhammad ("al-ḥaqīqa al-Muhammadiyya"). Ibn Arabi regards Muhammad as the supreme human being and master of all creatures. Muhammad is therefore the primary role model for human beings to aspire to emulate. Ibn Arabi believes that God's attributes and names are manifested in this world and that the most complete and perfect display of these divine attributes and names are seen in Muhammad. Ibn Arabi believes that one may see God in the mirror of Muhammad, meaning that the divine attributes of God are manifested through Muhammad. Ibn Arabi maintains that Muhammad is the best proof of God and by knowing Muhammad one knows God. Ibn Arabi also maintains that Muhammad is the master of all of humanity in both this world and the afterlife. In this view, Islam is the best religion, because Muhammad is Islam.
Sufis believe the "sharia" (exoteric "canon"), "tariqa" ("order") and "haqiqa" ("truth") are mutually interdependent. Sufism leads the adept, called "salik" or "wayfarer", in his "sulûk" or "road" through different stations ("maqaam") until he reaches his goal, the perfect "tawhid", the existential confession that God is One. Ibn Arabi says, "When we see someone in this Community who claims to be able to guide others to God, but is remiss in but one rule of the Sacred Law—even if he manifests miracles that stagger the mind—asserting that his shortcoming is a special dispensation for him, we do not even turn to look at him, for such a person is not a sheikh, nor is he speaking the truth, for no one is entrusted with the secrets of God Most High save one in whom the ordinances of the Sacred Law are preserved. ("Jamiʿ karamat al-awliyaʾ")".
The Amman Message, a detailed statement issued by 200 leading Islamic scholars in 2005 in Amman, specifically recognized the validity of Sufism as a part of Islam. This was adopted by the Islamic world's political and temporal leaderships at the Organisation of the Islamic Conference summit at Mecca in December 2005, and by six other international Islamic scholarly assemblies including the International Islamic Fiqh Academy of Jeddah, in July 2006. The definition of Sufism can vary drastically between different traditions (what may be intended is simple tazkiah as opposed to the various manifestations of Sufism around the Islamic world).
The literature of Sufism emphasizes highly subjective matters that resist outside observation, such as the subtle states of the heart. Often these resist direct reference or description, with the consequence that the authors of various Sufi treatises took recourse to allegorical language. For instance, much Sufi poetry refers to intoxication, which Islam expressly forbids. This usage of indirect language and the existence of interpretations by people who had no training in Islam or Sufism led to doubts being cast over the validity of Sufism as a part of Islam. Also, some groups emerged that considered themselves above the "sharia" and discussed Sufism as a method of bypassing the rules of Islam in order to attain salvation directly. This was disapproved of by traditional scholars.
For these and other reasons, the relationship between traditional Islamic scholars and Sufism is complex and a range of scholarly opinion on Sufism in Islam has been the norm. Some scholars, such as Al-Ghazali, helped its propagation while other scholars opposed it. William Chittick explains the position of Sufism and Sufis this way:
The term "neo-Sufism" was originally coined by Fazlur Rahman and used by other scholars to describe reformist currents among 18th century Sufi orders, whose goal was to remove some of the more ecstatic and pantheistic elements of the Sufi tradition and reassert the importance of Islamic law as the basis for inner spirituality and social activism. In recent times, it has been increasingly used by scholars like Mark Sedgwick in another sense, to describe various forms of Sufi-influenced spirituality in the West, in particular the deconfessionalized spiritual movements which emphasize universal elements of the Sufi tradition and de-emphasize its Islamic context. Such groups include The Sufi Order in the West, founded by Inayat Khan, which teaches the essential unity of all faiths, and accepts members of all creeds. Sufism Reoriented is an offshoot of it charted by the syncretistic teacher Meher Baba. The Golden Sufi Center exists in England, Switzerland and the United States. It was founded by Llewellyn Vaughan-Lee to continue the work of his teacher Irina Tweedie, herself a practitioner of both Hinduism and neo-Sufism. Other Western Sufi organisations include the Sufi Foundation of America and the International Association of Sufism.
Traditional Islamic scholars have recognized two major branches within the practice of Sufism, and use this as one key to differentiating among the approaches of different masters and devotional lineages.
On the one hand there is the order from the signs to the Signifier (or from the arts to the Artisan). In this branch, the seeker begins by purifying the lower self of every corrupting influence that stands in the way of recognizing all of creation as the work of God, as God's active Self-disclosure or theophany. This is the way of Imam Al-Ghazali and of the majority of the Sufi orders.
On the other hand, there is the order from the Signifier to His signs, from the Artisan to His works. In this branch the seeker experiences divine attraction ("jadhba"), and is able to enter the order with a glimpse of its endpoint, of direct apprehension of the Divine Presence towards which all spiritual striving is directed. This does not replace the striving to purify the heart, as in the other branch; it simply stems from a different point of entry into the path. This is the way primarily of the masters of the Naqshbandi and Shadhili orders.
Contemporary scholars may also recognize a third branch, attributed to the late Ottoman scholar Said Nursi and explicated in his vast Qur'an commentary called the Risale-i Nur. This approach entails strict adherence to the way of Muhammad, in the understanding that this wont, or "sunnah", proposes a complete devotional spirituality adequate to those without access to a master of the Sufi way.
Sufism has contributed significantly to the elaboration of theoretical perspectives in many domains of intellectual endeavor. For instance, the doctrine of "subtle centers" or centers of subtle cognition (known as "Lataif-e-sitta") addresses the matter of the awakening of spiritual intuition. In general, these subtle centers or "latâ'if" are thought of as faculties that are to be purified sequentially in order to bring the seeker's wayfaring to completion. A concise and useful summary of this system from a living exponent of this tradition has been published by Muhammad Emin Er.
Sufi psychology has influenced many areas of thinking both within and outside of Islam, drawing primarily upon three concepts. Ja'far al-Sadiq (both an imam in the Shia tradition and a respected scholar and link in chains of Sufi transmission in all Islamic sects) held that human beings are dominated by a lower self called the nafs (self, ego, person), a faculty of spiritual intuition called the qalb (heart), and ruh (soul). These interact in various ways, producing the spiritual types of the tyrant (dominated by "nafs"), the person of faith and moderation (dominated by the spiritual heart), and the person lost in love for God (dominated by the "ruh").
Of note with regard to the spread of Sufi psychology in the West is Robert Frager, a Sufi teacher authorized in the Khalwati Jerrahi order. Frager was a trained psychologist, born in the United States, who converted to Islam in the course of his practice of Sufism and wrote extensively on Sufism and psychology.
Sufi cosmology and Sufi metaphysics are also noteworthy areas of intellectual accomplishment.
The devotional practices of Sufis vary widely. This is because an acknowledged and authorized master of the Sufi path is in effect a physician of the heart, able to diagnose the seeker's impediments to knowledge and pure intention in serving God, and to prescribe to the seeker a course of treatment appropriate to his or her maladies. The consensus among Sufi scholars is that the seeker cannot self-diagnose, and that it can be extremely harmful to undertake any of these practices alone and without formal authorization.
Prerequisites to practice include rigorous adherence to Islamic norms (ritual prayer in its five prescribed times each day, the fast of Ramadan, and so forth). Additionally, the seeker ought to be firmly grounded in supererogatory practices known from the life of Muhammad (such as the "sunnah prayers"). This is in accordance with the words, attributed to God, of the following, a famous "Hadith Qudsi":
My servant draws near to Me through nothing I love more than that which I have made obligatory for him. My servant never ceases drawing near to Me through supererogatory works until I love him. Then, when I love him, I am his hearing through which he hears, his sight through which he sees, his hand through which he grasps, and his foot through which he walks.
It is also necessary for the seeker to have a correct creed ("aqidah"), and to embrace with certainty its tenets. The seeker must also, of necessity, turn away from sins, love of this world, the love of company and renown, obedience to satanic impulse, and the promptings of the lower self. (The way in which this purification of the heart is achieved is outlined in certain books, but must be prescribed in detail by a Sufi master.) The seeker must also be trained to prevent the corruption of those good deeds which have accrued to his or her credit by overcoming the traps of ostentation, pride, arrogance, envy, and long hopes (meaning the hope for a long life allowing us to mend our ways later, rather than immediately, here and now).
Sufi practices, while attractive to some, are not a "means" for gaining knowledge. The traditional scholars of Sufism hold it as absolutely axiomatic that knowledge of God is not a psychological state generated through breath control. Thus, practice of "techniques" is not the cause, but instead the "occasion" for such knowledge to be obtained (if at all), given proper prerequisites and proper guidance by a master of the way. Furthermore, the emphasis on practices may obscure a far more important fact: The seeker is, in a sense, to become a broken person, stripped of all habits through the practice of (in the words of Imam Al-Ghazali) solitude, silence, sleeplessness, and hunger.
"Dhikr" is the remembrance of Allah commanded in the Qur'an for all Muslims through a specific devotional act, such as the repetition of divine names, supplications and aphorisms from "hadith" literature and the Quran. More generally, "dhikr" takes a wide range and various layers of meaning. This includes "dhikr" as any activity in which the Muslim maintains awareness of Allah. To engage in "dhikr" is to practice consciousness of the Divine Presence and love, or "to seek a state of godwariness". The Quran refers to Muhammad as the very embodiment of "dhikr" of Allah (65:10–11). Some types of "dhikr" are prescribed for all Muslims and do not require Sufi initiation or the prescription of a Sufi master because they are deemed to be good for every seeker under every circumstance.
The "dhikr" may slightly vary among each order. Some Sufi orders engage in ritualized "dhikr" ceremonies, or "sema". "Sema" includes various forms of worship such as recitation, singing (the most well known being the Qawwali music of the Indian subcontinent), instrumental music, dance (most famously the Sufi whirling of the Mevlevi order), incense, meditation, ecstasy, and trance.
Some Sufi orders stress and place extensive reliance upon "dhikr". This practice of "dhikr" is called "Dhikr-e-Qulb" (invocation of Allah within the heartbeats). The basic idea in this practice is to visualize the name of Allah as having been written on the disciple's heart.
The practice of "muraqaba" can be likened to the practices of meditation attested in many faith communities.
While variation exists, one description of the practice within a Naqshbandi lineage reads as follows:
He is to collect all of his bodily senses in concentration, and to cut himself off from all preoccupation and notions that inflict themselves upon the heart. And thus he is to turn his full consciousness towards God Most High while saying three times: ""Ilahî anta maqsûdî wa-ridâka matlûbî"—my God, you are my Goal and Your good pleasure is what I seek". Then he brings to his heart the Name of the Essence—Allâh—and as it courses through his heart he remains attentive to its meaning, which is "Essence without likeness". The seeker remains aware that He is Present, Watchful, Encompassing of all, thereby exemplifying the meaning of his saying (may God bless him and grant him peace): "Worship God as though you see Him, for if you do not see Him, He sees you". And likewise the prophetic tradition: "The most favored level of faith is to know that God is witness over you, wherever you may be".
The traditional view of the more orthodox Sunni Sufi orders, such as the Qadiriyya and the Chisti, as well as Sunni Muslim scholars in general, is that dancing with intent during dhikr or whilst listening to Sema is prohibited.
Sufi whirling (or "Sufi spinning") however is a form of Sama or physically active meditation which originated among some Sufis, and which is still practised by the Sufi Dervishes of the Mevlevi order. It is a customary dance performed within the "sema", through which dervishes (also called "semazens", from Persian ) aim to reach the source of all perfection, or kemal. This is sought through abandoning one's nafs, egos or personal desires, by listening to the music, focusing on God, and spinning one's body in repetitive circles, which has been seen as a symbolic imitation of planets in the Solar System orbiting the sun.
As explained by Mevlevi practioners:
In the symbolism of the Sema ritual, the semazen's camel's hair hat (sikke) represents the tombstone of the ego; his wide, white skirt ("tennure") represents the ego's shroud. By removing his black cloak ("hırka"), he is spiritually reborn to the truth. At the beginning of the Sema, by holding his arms crosswise, the semazen appears to represent the number one, thus testifying to God's unity. While whirling, his arms are open: his right arm is directed to the sky, ready to receive God's beneficence; his left hand, upon which his eyes are fastened, is turned toward the earth. The semazen conveys God's spiritual gift to those who are witnessing the Sema. Revolving from right to left around the heart, the semazen embraces all humanity with love. The human being has been created with love in order to love. Mevlâna Jalâluddîn Rumi says, "All loves are a bridge to Divine love. Yet, those who have not had a taste of it do not know!"
Musical instruments (except the duff) have traditionally been considered as prohibited by the four orthodox Sunni schools, and the more orthodox Sufi tariqas also continued to prohibit their use. Throughout history Sufi saints have stressed that musical instruments are forbidden.
"Qawwali" was originally a form of Sufi devotional singing popular in South Asia, and is now usually performed at "dargahs". Sufi saint Amir Khusrau is said to have infused Persian, Arabic Turkish and Indian classical melodic styles to create the genre in the 13th century. The songs are classified into hamd, na'at, manqabat, marsiya or ghazal, among others. Historically, Sufi Saints permitted and encouraged it, whilst maintaining that musical instruments and female voices should not be introduced, although these are commonplace today.
Nowadays, the songs last for about 15 to 30 minutes, are performed by a group of singers, and instruments including the harmonium, tabla and dholak are used. Pakistani singing maestro Nusrat Fateh Ali Khan is credited with popularizing qawwali all over the world.
"Walī" (, plural ) is an Arabic word whose literal meanings include "custodian", "protector", "helper", and "friend." In the vernacular, it is most commonly used by Muslims to indicate an Islamic saint, otherwise referred to by the more literal "friend of God." In the traditional Islamic understanding of saints, the saint is portrayed as someone "marked by [special] divine favor ... [and] holiness", and who is specifically "chosen by God and endowed with exceptional gifts, such as the ability to work miracles." The doctrine of saints was articulated by Islamic scholars very early on in Muslim history, and particular verses of the Quran and certain "hadith" were interpreted by early Muslim thinkers as "documentary evidence" of the existence of saints.
Since the first Muslim hagiographies were written during the period when Sufism began its rapid expansion, many of the figures who later came to be regarded as the major saints in Sunni Islam were the early Sufi mystics, like Hasan of Basra (d. 728), Farqad Sabakhi (d. 729), Dawud Tai (d. 777–781), Rabi'a al-'Adawiyya (d. 801), Maruf Karkhi (d. 815), and Junayd of Baghdad (d. 910). From the twelfth to the fourteenth century, "the general veneration of saints, among both people and sovereigns, reached its definitive form with the organization of Sufism ... into orders or brotherhoods." In the common expressions of Islamic piety of this period, the saint was understood to be "a contemplative whose state of spiritual perfection ... [found] permanent expression in the teaching bequeathed to his disciples."
In popular Sufism (i.e. devotional practices that have achieved currency in world cultures through Sufi influence), one common practice is to visit or make pilgrimages to the tombs of saints, renowned scholars, and righteous people. This is a particularly common practice in South Asia, where famous tombs include such saints as Sayyid Ali Hamadani in Kulob, Tajikistan; Afāq Khoja, near Kashgar, China; Lal Shahbaz Qalandar in Sindh; Ali Hujwari in Lahore, Pakistan; Bahauddin Zakariya in Multan, Pakistan; Moinuddin Chishti in Ajmer, India; Nizamuddin Auliya in Delhi, India; and Shah Jalal in Sylhet, Bangladesh.
Likewise, in Fez, Morocco, popular destinations for such pious visitation are the Zaouia Moulay Idriss II and the yearly visit to the current Sheikh of the Qadiri Boutchichi Tariqah, Sheikh Sidi Hamza al Qadiri al Boutchichi, to celebrate the Mawlid (which is usually televised on Moroccan national television).
In Islamic mysticism, "karamat" ( "karāmāt", pl. of "karāmah", lit. generosity, high-mindedness) refers to supernatural wonders performed by Muslim saints. In the technical vocabulary of Islamic religious sciences, the singular form "karama" has a sense similar to "charism", a favor or spiritual gift freely bestowed by God. The marvels ascribed to Islamic saints have included supernatural physical actions, predictions of the future, and "interpretation of the secrets of hearts". Historically, a "belief in the miracles of saints ("karāmāt al-awliyāʾ", literally 'marvels of the friends [of God]')" has been "a requirement in Sunni Islam."
Persecution of Sufis and Sufism has included destruction of Sufi shrines and mosques, suppression of orders, and discrimination against adherents in a number of Muslim-majority countries. The Turkish Republican state banned all Sufi orders and abolished their institutions in 1925 after Sufis opposed the new secular order. The Iranian Islamic Republic has harassed Shia Sufis, reportedly for their lack of support for the government doctrine of "velayat-e faqih" (i.e., that the supreme Shiite jurist should be the nation's political leader).
In most other Muslim countries, attacks on Sufis and especially their shrines have come from Salafis who believe that practices such as celebration of the birthdays of Sufi saints, and dhikr ("remembrance" of God) ceremonies are bid‘ah or impure innovation, and polytheistic (Shirk).
At least 305 people were killed and more than 100 wounded during a November 2017 attack on a mosque in Sinai.
Abdul-Qadir Gilani (1077–1166) was a Mesopotamian-born Hanbali jurist and prominent Sufi scholar based in Baghdad, with Persian roots. Qadiriyya was his patronym. Gilani spent his early life in Na'if, a town just east of Baghdad, also the town of his birth. There, he pursued the study of Hanbali law. Abu Saeed Mubarak Makhzoomi gave Gilani lessons in fiqh. He was given lessons about "hadith" by Abu Bakr ibn Muzaffar. He was given lessons about Tafsir by Abu Muhammad Ja'far, a commentator. His Sufi spiritual instructor was Abu'l-Khair Hammad ibn Muslim al-Dabbas. After completing his education, Gilani left Baghdad. He spent twenty-five years as a reclusive wanderer in the desert regions of Iraq. In 1127, Gilani returned to Baghdad and began to preach to the public. He joined the teaching staff of the school belonging to his own teacher, Abu Saeed Mubarak Makhzoomi, and was popular with students. In the morning he taught "hadith" and "tafsir", and in the afternoon he held discourse on the science of the heart and the virtues of the Quran. He is the forefather of all Sufi orders.
Abul Hasan ash-Shadhili (died 1258), the founder of the Shadhiliyya order, introduced "dhikr jahri" (the remembrance of God out loud, as opposed to the silent "dhikr"). He taught that his followers need not abstain from what Islam has not forbidden, but should be grateful for what God has bestowed upon them, in contrast to the majority of Sufis, who preach denying oneself and destroying the ego-self ("nafs"). In contrast to the "Order of Patience" (Tariqus-Sabr), the Shadhiliyya is formulated to be an "Order of Gratitude" (Tariqush-Shukr). Imam Shadhili also gave eighteen valuable "hizbs" (litanies) to his followers, out of which the notable "Hizb al-Bahr" is recited worldwide even today.
Ahmad al-Tijani
Abu al-ʿAbbâs Ahmad ibn Muhammad at-Tijânî or Ahmed Tijani (1735–1815), in Arabic سيدي أحمد التجاني ("Sidi Ahmed Tijani"), is the founder of the Tijaniyya Sufi order. He was born into a Berber family in Aïn Madhi, present-day Algeria, and died in Fez, Morocco, at the age of 80.
Bayazid Bastami is a very well recognized and influential Sufi personality. Bastami was born in 804 in Bastam. Bayazid is renowned for his devout commitment to the Sunnah and his dedication to fundamental Islamic principles and practices.
Bawa Muhaiyaddeen (died 1986) is a Sufi Sheikh from Sri Lanka. He was first found by a group of religious pilgrims in the early 1900s meditating in the jungles of Kataragama in Sri Lanka (Ceylon). Awed and inspired by his personality and the depth of his wisdom, they invited him to a nearby village. Since that time, people of all walks of life, from paupers to prime ministers, belonging to all religious and ethnic backgrounds, have flocked to see Sheikh Bawa Muhaiyaddeen to seek comfort, guidance and help. Sheikh Bawa Muhaiyaddeen tirelessly spent the rest of his life preaching, healing and comforting the many souls that came to see him.
Muhyiddin Muhammad b. 'Ali Ibn 'Arabi (or Ibn al-'Arabi) (AH 561 – AH 638; July 28, 1165 – November 10, 1240) is considered to be one of the most important Sufi masters, although he never founded any order ("tariqa"). His writings, especially al-Futuhat al-Makkiyya and Fusus al-hikam, have been studied within all the Sufi orders as the clearest expression of "tawhid" (Divine Unity), though because of their recondite nature they were often only given to initiates. Later those who followed his teaching became known as the school of "wahdat al-wujud" (the Oneness of Being). He himself considered his writings to have been divinely inspired. As he expressed the Way to one of his close disciples, his legacy is that 'you should never ever abandon your servant-hood ("ʿubudiyya"), and that there may never be in your soul a longing for any existing thing'.
Junayd al-Baghdadi (830–910) was one of the great early Sufis. His order was Junaidia, which links to the golden chain of many Sufi orders. He laid the groundwork for sober mysticism in contrast to that of God-intoxicated Sufis like al-Hallaj, Bayazid Bastami and Abusaeid Abolkheir. During the trial of al-Hallaj, his former disciple, the Caliph of the time demanded his fatwa. In response, he issued this fatwa: "From the outward appearance he is to die and we judge according to the outward appearance and God knows better". He is referred to by Sufis as Sayyid-ut Taifa—i.e., the leader of the group. He lived and died in the city of Baghdad.
Mansur Al-Hallaj (died 922) is renowned for his claim, "Ana-l-Haqq" ("I am The Truth"). His refusal to recant this utterance, which was regarded as apostasy, led to a long trial. He was imprisoned for 11 years in a Baghdad prison, before being tortured and publicly dismembered on March 26, 922. He is still revered by Sufis for his willingness to embrace torture and death rather than recant. It is said that during his prayers, he would say "O Lord! You are the guide of those who are passing through the Valley of Bewilderment. If I am a heretic, enlarge my heresy".
Khwaja Moinuddin Chishti was born in 1141 and died in 1236. Also known as "Gharīb Nawāz" ("Benefactor of the Poor"), he is the most famous Sufi saint of the Chishti Order. Moinuddin Chishti introduced and established the order in the Indian subcontinent. The initial spiritual chain or silsila of the Chishti order in India, comprising Moinuddin Chishti, Bakhtiyar Kaki, Baba Farid, Nizamuddin Auliya (each successive person being the disciple of the previous one), constitutes the great Sufi saints of Indian history. Moinuddin Chishtī turned towards India, reputedly after a dream in which Muhammad blessed him to do so. After a brief stay at Lahore, he reached Ajmer along with Sultan Shahāb-ud-Din Muhammad Ghori, and settled down there. In Ajmer, he attracted a substantial following, acquiring a great deal of respect amongst the residents of the city. Moinuddin Chishtī practiced the Sufi "Sulh-e-Kul" (peace to all) concept to promote understanding between Muslims and non-Muslims.
Rabi'a al-'Adawiyya or Rabia of Basra (died 801) was a mystic who represents countercultural elements of Sufism, especially with regards to the status and power of women. Prominent Sufi leader Hasan of Basra is said to have castigated himself before her superior merits and sincere virtues. Rabi'a was born of very poor origin, but was captured by bandits at a later age and sold into slavery. She was however released by her master when he awoke one night to see the light of sanctity shining above her head. Rabi'a al-Adawiyya is known for her teachings and emphasis on the centrality of the love of God to a holy life. She is said to have proclaimed, running down the streets of Basra, Iraq:
She died in Jerusalem and is thought to have been buried in the Chapel of the Ascension.
A "Dargah" (Persian: درگاه "dargâh" or درگه "dargah", also in Punjabi and Urdu) is a shrine built over the grave of a revered religious figure, often a Sufi saint or dervish. Sufis often visit the shrine for ziyarat, a term associated with religious visits and pilgrimages. "Dargah"s are often associated with Sufi eating and meeting rooms and hostels, called "khanqah" or hospices. They usually include a mosque, meeting rooms, Islamic religious schools (madrassas), residences for a teacher or caretaker, hospitals, and other buildings for community purposes.
The term "Tariqa" is used for a school or order of Sufism, or especially for the mystical teaching and spiritual practices of such an order with the aim of seeking ḥaqīqah (ultimate truth). A tariqa has a murshid (guide) who plays the role of leader or spiritual director. The members or followers of a tariqa are known as "murīdīn" (singular "murīd"), meaning "desirous", viz. "desiring the knowledge of knowing God and loving God".
The Bektashi Order was founded in the 13th century by the Islamic saint Haji Bektash Veli, was greatly influenced during its formative period by the Hurufi Ali al-'Ala in the 15th century, and was reorganized by Balım Sultan in the 16th century.
The Chishti Order () was founded by (Khawaja) Abu Ishaq Shami ("the Syrian"; died 941) who brought Sufism to the town of Chisht, some 95 miles east of Herat in present-day Afghanistan. Before returning to the Levant, Shami initiated, trained and deputized the son of the local Emir (Khwaja) Abu Ahmad Abdal (died 966). Under the leadership of Abu Ahmad's descendants, the "Chishtiyya" as they are also known, flourished as a regional mystical order.
The Kubrawiya order is a Sufi order ("tariqa") named after its founder, Najmuddin Kubra, who established it in the 13th century in Bukhara in modern Uzbekistan. The Mongols captured Bukhara in 1221 and massacred almost the city's entire population; Sheikh Najmuddin Kubra was among those killed.
The Mevlevi Order is better known in the West as the "whirling dervishes".
Mouride is a large Islamic Sufi order most prominent in Senegal and The Gambia, with headquarters in the holy city of Touba, Senegal.
The Naqshbandi order is one of the major Sufi orders of Islam, previously known as Siddiqiyya as the order stems from Mohammad through Abū Bakr as-Șiddīq. It is considered by some to be a "sober" order known for its silent "dhikr" (remembrance of God) rather than the vocalized forms of "dhikr" common in other orders. The word ""Naqshbandi"" () is Persian, taken from the name of the founder of the order, Baha-ud-Din Naqshband Bukhari. Some have said that the translation means "related to the image-maker", some also consider it to mean "Pattern Maker" rather than "image maker", and interpret "Naqshbandi" to mean "Reformer of Patterns", and others consider it to mean "Way of the Chain" or "Silsilat al-dhahab".
The Ni'matullāhī order is the most widespread Sufi order of Persia today. It was founded by Shah Ni'matullah Wali (died 1367), established and transformed from his inheritance of the Ma'rufiyyah circle. There are several suborders in existence today, the most known and influential in the West following the lineage of Dr. Javad Nurbakhsh who brought the order to the West following the 1979 Revolution in Iran.
The Qadiri Order is one of the oldest Sufi Orders. It derives its name from Abdul-Qadir Gilani (1077–1166), a native of the Iranian province of Gīlān. The order is one of the most widespread of the Sufi orders in the Islamic world, and has a huge presence in Central Asia, Pakistan, Turkey, Balkans and much of East and West Africa. The Qadiriyyah have not developed any distinctive doctrines or teachings outside of mainstream Islam. They believe in the fundamental principles of Islam, but interpreted through mystical experience.
Senussi is a religious-political Sufi order established by Muhammad ibn Ali as-Senussi, who founded the movement because of his criticism of the Egyptian ulema. Originally from Mecca, as-Senussi left under pressure from Wahhabis and settled in Cyrenaica, where he was well received. Idris bin Muhammad al-Mahdi as-Senussi was later recognized as Emir of Cyrenaica and eventually became King of Libya. The monarchy was abolished by Muammar Gaddafi, but a third of Libyans still claim to be Senussi.
The Shadhili is a Sufi order founded by Abu-l-Hassan ash-Shadhili. Ikhwans (murids, followers) of the Shadhiliyya are often known as Shadhilis. Fassiya, a branch of the Shadhiliyya founded by Imam al Fassi of Makkah, is a widely practised Sufi order in Saudi Arabia, Egypt, India, Sri Lanka, Bangladesh, Pakistan, Malaysia, Singapore, Mauritius, Indonesia and other Middle Eastern countries.
The Suhrawardiyya order () is a Sufi order founded by Abu al-Najib al-Suhrawardi (1097–1168). The order was formalized by his nephew, Shahab al-Din Abu Hafs Umar Suhrawardi.
The Tijaniyyah order attaches great importance to culture and education, and emphasizes the individual adhesion of the disciple (murīd).
Sufi mysticism has long exercised a fascination upon the Western world, and especially its Orientalist scholars. Figures like Rumi have become well known in the United States, where Sufism is perceived as a peaceful and apolitical form of Islam. Orientalists have proposed a variety of diverse theories pertaining to the nature of Sufism, such as it being influenced by Neoplatonism or as an Aryan historical reaction against "Semitic" cultural influence. Hossein Nasr states that the preceding theories are false according to the point of view of Sufism.
The Islamic Institute in Mannheim, Germany, which works towards the integration of Europe and Muslims, sees Sufism as particularly suited for interreligious dialogue and intercultural harmonisation in democratic and pluralist societies; it has described Sufism as a symbol of tolerance and humanism—nondogmatic, flexible and non-violent. According to Philip Jenkins, a Professor at Baylor University, "the Sufis are much more than tactical allies for the West: they are, potentially, the greatest hope for pluralism and democracy within Muslim nations." Likewise, several governments and organisations have advocated the promotion of Sufism as a means of combating intolerant and violent strains of Islam. For example, the Chinese and Russian governments openly favor Sufism as the best means of protecting against Islamist subversion. The British government, especially following the 7 July 2005 London bombings, has favoured Sufi groups in its battle against Muslim extremist currents. The influential RAND Corporation, an American think-tank, issued a major report titled "Building Moderate Muslim Networks," which urged the US government to form links with and bolster Muslim groups that opposed Islamist extremism. The report stressed the Sufi role as moderate traditionalists open to change, and thus as allies against violence. News organisations such as the BBC, Economist and Boston Globe have also seen Sufism as a means to deal with violent Muslim extremists.
Idries Shah states that Sufism is universal in nature, its roots predating the rise of Islam and Christianity. He quotes Suhrawardi as saying that "this [Sufism] was a form of wisdom known to and practiced by a succession of sages including the mysterious ancient Hermes of Egypt.", and that Ibn al-Farid "stresses that Sufism lies behind and before systematization; that 'our wine existed before what you call the grape and the vine' (the school and the system)..." Shah's views have however been rejected by modern scholars. Such modern trends of neo-Sufis in Western countries allow non-Muslims to receive "instructions on following the Sufi path", not without opposition by Muslims who consider such instruction outside the sphere of Islam.
Both Judaism and Islam are monotheistic. There is evidence that Sufism did influence the development of some schools of Jewish philosophy and ethics. In the first writing of this kind, we see "Kitab al-Hidayah ila Fara'iḍ al-Ḳulub", "Duties of the Heart", of Bahya ibn Paquda. This book was translated by Judah ibn Tibbon into Hebrew under the title "Ḥōḇōṯ Ha-lleḇāḇōṯ".
It is noteworthy that in the ethical writings of the Sufis Al-Kusajri and Al-Harawi there are sections which treat of the same subjects as those treated in the "Ḥovot ha-Lebabot" and which bear the same titles: e.g., "Bab al-Tawakkul"; "Bab al-Taubah"; "Bab al-Muḥasabah"; "Bab al-Tawaḍu'"; "Bab al-Zuhd". In the ninth gate, Baḥya directly quotes sayings of the Sufis, whom he calls "Perushim". However, the author of the "Ḥōḇōṯ Ha-lleḇāḇōṯ" did not go so far as to approve of the asceticism of the Sufis, although he showed a marked predilection for their ethical principles.
Abraham ben Moses ben Maimon, the son of the Jewish philosopher Maimonides, believed that Sufi practices and doctrines continue the tradition of the Biblical prophets. See Sefer Hammaspiq, "Happerishuth", Chapter 11 ("Ha-mmaʿaḇāq") s.v. hithbonen efo be-masoreth mufla'a zo, citing the Talmudic explanation of Jeremiah 13:27 in Chagigah 5b; in Rabbi Yaakov Wincelberg's translation, "The Way of Serving God" (Feldheim), p. 429 and above, p. 427. Also see ibid., Chapter 10 ("Iqquḇim"), s.v. wa-halo yoḏeʾaʿ atta; in "The Way of Serving God", p. 371.
Abraham Maimuni's principal work is originally composed in Judeo-Arabic and entitled "כתאב כפאיה אלעאבדין" "Kitāb Kifāyah al-'Ābidīn" ("A Comprehensive Guide for the Servants of God"). From the extant surviving portion it is conjectured that Maimuni's treatise was three times as long as his father's Guide for the Perplexed. In the book, Maimuni evidences a great appreciation for, and affinity to, Sufism. Followers of his path continued to foster a Jewish-Sufi form of pietism for at least a century, and he is rightly considered the founder of this pietistic school, which was centered in Egypt.
The followers of this path, which they called, interchangeably, Hasidism (not to be confused with the [later] Jewish Hasidic movement) or Sufism ("Tasawwuf"), practiced spiritual retreats, solitude, fasting and sleep deprivation. The Jewish Sufis maintained their own brotherhood, guided by a religious leader—like a Sufi sheikh.
The Jewish Encyclopedia in its entry on Sufism states that the revival of Jewish mysticism in Muslim countries is probably due to the spread of Sufism in the same geographical areas. The entry details many parallels to Sufic concepts found in the writings of prominent Kabbalists during the Golden age of Jewish culture in Spain.
In 2005, Indian musician Rabbi Shergill released a Sufi rock song called "Bulla Ki Jaana", which became a chart-topper in India and Pakistan.
The 13th century Persian poet Rumi, is considered one of the most influential figures of Sufism, as well as one of the greatest poets of all time. He has become one of the most widely read poets in the United States, thanks largely to the interpretative translations published by Coleman Barks. Elif Şafak's novel "The Forty Rules of Love" is a fictionalized account of Rumi's encounter with the Persian dervish Shams Tabrizi.
Allama Iqbal, one of the greatest Urdu poets has discussed Sufism, philosophy and Islam in his English work "The Reconstruction of Religious Thought in Islam."
Many painters and visual artists have explored the Sufi motif through various disciplines. One of the outstanding pieces in the Brooklyn Museum's Islamic gallery, according to the museum's associate curator of Islamic art, is a large 19th- or early-20th-century portrayal of the Battle of Karbala painted by Abbas Al-Musavi. The battle was a violent episode in the disagreement between the Sunni and Shia branches of Islam; during it, Husayn ibn Ali, a pious grandson of the Islamic prophet Muhammad, died and is considered a martyr in Islam.
In July 2016, at the International Sufi Festival held in Noida Film City, UP, India, H.E. Abdul Basit, then the High Commissioner of Pakistan to India, said while inaugurating the exhibition of Farkhananda Khan: “There is no barrier of words or explanation about the paintings or rather there is a soothing message of brotherhood, peace in Sufism”. | https://en.wikipedia.org/wiki?curid=28246 |
Search algorithm
In computer science, a search algorithm is any algorithm which solves the search problem, namely, to retrieve information stored within some data structure, or calculated in the search space of a problem domain, with either discrete or continuous values. Specific applications of search algorithms range from simple database lookups and information retrieval to constraint satisfaction, combinatorial optimization, and game playing.
The classic search problems described above and web search are both problems in information retrieval, but are generally studied as separate subfields and are solved and evaluated differently. Web search problems are generally focused on filtering and finding documents that are most relevant to human queries. Classic search algorithms are typically evaluated on how fast they can find a solution, and whether that solution is guaranteed to be optimal. Though information retrieval algorithms must be fast, the quality of ranking is more important, as is whether good results have been left out and bad results included.
The appropriate search algorithm often depends on the data structure being searched, and may also include prior knowledge about the data. Some database structures are specially constructed to make search algorithms faster or more efficient, such as a search tree, hash map, or a database index.
Search algorithms can be classified based on their mechanism of searching. Linear search algorithms check every record for the one associated with a target key in a linear fashion. Binary, or half-interval, searches repeatedly target the center of the search structure and divide the search space in half. Comparison search algorithms improve on linear searching by successively eliminating records based on comparisons of the keys until the target record is found, and can be applied on data structures with a defined order. Digital search algorithms work based on the properties of digits in data structures that use numerical keys. Finally, hashing directly maps keys to records based on a hash function. Unlike linear search and hashing, comparison-based searches such as binary search require that the data be sorted in some way.
Algorithms are often evaluated by their computational complexity, or maximum theoretical run time. Binary search functions, for example, have a maximum complexity of O(log n), or logarithmic time. This means that the maximum number of operations needed to find the search target is a logarithmic function of the size of the search space.
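To make the logarithmic behaviour concrete, here is a minimal binary search sketch in Python; the function name and the sample list are illustrative, not taken from the source.

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if absent.

    Each iteration halves the remaining interval, so at most about
    log2(len(sorted_items)) comparisons are needed.
    """
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1      # target can only be in the upper half
        else:
            hi = mid - 1      # target can only be in the lower half
    return -1

# Example: the target is found after at most ~log2(8) = 3 halvings.
print(binary_search([1, 3, 5, 7, 9, 11, 13, 15], 7))  # -> 3
```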
Algorithms for searching virtual spaces are used in the constraint satisfaction problem, where the goal is to find a set of value assignments to certain variables that will satisfy specific mathematical equations and inequations. They are also used when the goal is to find a variable assignment that will maximize or minimize a certain function of those variables. Algorithms for these problems include the basic brute-force search (also called "naïve" or "uninformed" search), and a variety of heuristics that try to exploit partial knowledge about the structure of this space, such as linear relaxation, constraint generation, and constraint propagation.
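As an illustration of brute-force ("uninformed") search, the sketch below simply enumerates every candidate assignment of a tiny, made-up constraint problem; the variables, domains and constraints are assumptions chosen only for the example.

```python
from itertools import product

# Hypothetical problem: find x, y, z in {0..9} with x + y == z and x * y == 12.
domain = range(10)
solutions = [
    (x, y, z)
    for x, y, z in product(domain, repeat=3)   # enumerate all 1000 assignments
    if x + y == z and x * y == 12              # keep those satisfying the constraints
]
print(solutions)  # -> [(2, 6, 8), (3, 4, 7), (4, 3, 7), (6, 2, 8)]
```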
An important subclass is the local search methods, which view the elements of the search space as the vertices of a graph, with edges defined by a set of heuristics applicable to the case, and which scan the space by moving from item to item along the edges, for example according to the steepest descent or best-first criterion, or in a stochastic search. This category includes a great variety of general metaheuristic methods, such as simulated annealing, tabu search, A-teams, and genetic programming, that combine arbitrary heuristics in specific ways.
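A minimal local-search sketch in this spirit is a greedy hill climb that keeps moving to the best neighbouring point until no neighbour improves the objective; the objective function and integer neighbourhood below are illustrative assumptions, not from the source.

```python
def hill_climb(objective, start, neighbours, max_steps=1000):
    """Greedy local search: repeatedly move to the best neighbour
    until none improves the objective (a local optimum)."""
    current = start
    for _ in range(max_steps):
        best = max(neighbours(current), key=objective, default=current)
        if objective(best) <= objective(current):
            return current            # no improving edge: stop at a local optimum
        current = best
    return current

# Illustrative objective with a single peak at x = 7 on the integers.
f = lambda x: -(x - 7) ** 2
step = lambda x: [x - 1, x + 1]       # neighbours on the integer line
print(hill_climb(f, start=0, neighbours=step))  # -> 7
```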
This class also includes various tree search algorithms, which view the elements as vertices of a tree and traverse that tree in some special order. Examples of the latter include exhaustive methods such as depth-first search and breadth-first search, as well as various heuristic-based search tree pruning methods such as backtracking and branch and bound. Unlike general metaheuristics, which at best work only in a probabilistic sense, many of these tree-search methods are guaranteed to find the exact or optimal solution, if given enough time. This is called "completeness".
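The following sketch illustrates depth-first tree search with backtracking on the classic N-queens puzzle, chosen here only as an example problem: partial placements that violate a constraint are pruned, and the exhaustive traversal makes the search complete.

```python
def solve_n_queens(n):
    """Depth-first backtracking: place one queen per row, pruning any
    partial placement that attacks an earlier queen."""
    def attacks(cols, col):
        row = len(cols)
        return any(c == col or abs(c - col) == row - r
                   for r, c in enumerate(cols))

    def extend(cols):
        if len(cols) == n:
            return cols                       # all rows filled: solution found
        for col in range(n):
            if not attacks(cols, col):
                result = extend(cols + [col]) # descend into the subtree
                if result:
                    return result
        return None                           # dead end: backtrack

    return extend([])

print(solve_n_queens(6))  # one valid placement, e.g. [1, 3, 5, 0, 2, 4]
```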
Another important sub-class consists of algorithms for exploring the game tree of multiple-player games, such as chess or backgammon, whose nodes consist of all possible game situations that could result from the current situation. The goal in these problems is to find the move that provides the best chance of a win, taking into account all possible moves of the opponent(s). Similar problems occur when humans or machines have to make successive decisions whose outcomes are not entirely under one's control, such as in robot guidance or in marketing, financial, or military strategy planning. This kind of problem — combinatorial search — has been extensively studied in the context of artificial intelligence. Examples of algorithms for this class are the minimax algorithm, alpha–beta pruning, and the A* algorithm.
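A compact sketch of minimax with alpha–beta pruning over a toy game tree follows; representing the tree as nested lists of leaf values is an assumption made purely for the example.

```python
def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    """Minimax with alpha-beta pruning.

    A leaf is a number (the position's value for the maximizing player);
    an internal node is a list of child nodes (the available moves).
    """
    if not isinstance(node, list):
        return node                           # leaf: static evaluation
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                         # beta cut-off: opponent avoids this branch
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:
                break                         # alpha cut-off
        return value

# Illustrative two-ply tree: the maximizer can guarantee a value of 6,
# and the last leaf of the third branch is pruned without being examined.
tree = [[3, 5], [6, 9], [1, 2]]
print(alphabeta(tree, maximizing=True))  # -> 6
```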
The name "combinatorial search" is generally used for algorithms that look for a specific sub-structure of a given discrete structure, such as a graph, a string, a finite group, and so on. The term combinatorial optimization is typically used when the goal is to find a sub-structure with a maximum (or minimum) value of some parameter. (Since the sub-structure is usually represented in the computer by a set of integer variables with constraints, these problems can be viewed as special cases of constraint satisfaction or discrete optimization; but they are usually formulated and solved in a more abstract setting where the internal representation is not explicitly mentioned.)
An important and extensively studied subclass is the graph algorithms, in particular graph traversal algorithms, for finding specific sub-structures in a given graph — such as subgraphs, paths, circuits, and so on. Examples include Dijkstra's algorithm, Kruskal's algorithm, the nearest neighbour algorithm, and Prim's algorithm.
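For instance, Dijkstra's algorithm can be sketched with a priority queue as below; the adjacency-list representation and the sample graph are illustrative assumptions.

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths for non-negative edge weights,
    with the graph given as {node: [(neighbour, weight), ...]}."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry, node already settled
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Illustrative weighted graph.
graph = {
    "A": [("B", 1), ("C", 4)],
    "B": [("C", 2), ("D", 6)],
    "C": [("D", 3)],
    "D": [],
}
print(dijkstra(graph, "A"))  # -> {'A': 0, 'B': 1, 'C': 3, 'D': 6}
```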
Another important subclass of this category is the string searching algorithms, which search for patterns within strings. Famous examples include the Boyer–Moore and Knuth–Morris–Pratt algorithms, as well as several algorithms based on the suffix tree data structure.
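A possible sketch of the Knuth–Morris–Pratt algorithm, using the standard failure-function formulation; the function name and test strings are illustrative.

```python
def kmp_search(text, pattern):
    """Knuth-Morris-Pratt: return the starting indices of pattern in text.

    The failure table records, for each prefix of the pattern, the length
    of its longest proper prefix that is also a suffix, so the text
    pointer never moves backwards.
    """
    if not pattern:
        return []
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k

    matches, k = [], 0
    for i, ch in enumerate(text):
        while k and ch != pattern[k]:
            k = fail[k - 1]
        if ch == pattern[k]:
            k += 1
        if k == len(pattern):
            matches.append(i - k + 1)   # full match ending at position i
            k = fail[k - 1]
    return matches

print(kmp_search("abababca", "abab"))  # -> [0, 2]
```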
In 1953, American statistician Jack Kiefer devised Fibonacci search which can be used to find the maximum of a unimodal function and has many other applications in computer science.
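Kiefer's Fibonacci search brackets the maximum of a unimodal function with Fibonacci-spaced probes; the simpler, closely related ternary search below (a swapped-in illustration, not Kiefer's exact method) shows the same bracketing idea, and the objective function is an assumption made for the example.

```python
def ternary_search_max(f, lo, hi, tol=1e-9):
    """Locate the maximizer of a unimodal function f on [lo, hi] by
    repeatedly discarding the third of the interval that cannot
    contain the maximum."""
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            lo = m1          # the maximum lies in [m1, hi]
        else:
            hi = m2          # the maximum lies in [lo, m2]
    return (lo + hi) / 2

# Illustrative unimodal function with its maximum at x = 2.
print(round(ternary_search_max(lambda x: -(x - 2) ** 2, 0, 10), 6))  # -> 2.0
```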
There are also search methods designed for quantum computers, like Grover's algorithm, that are theoretically faster than linear or brute-force search even without the help of data structures or heuristics.
| https://en.wikipedia.org/wiki?curid=28249 |
Software package
Software package may refer to: | https://en.wikipedia.org/wiki?curid=28251 |
Safe semantics
Safe semantics is a computer hardware consistency model. It describes one type of guarantee that a data register provides when it is shared by several processors in a parallel computer or in a network of computers working together.
Safe semantics was first defined by Leslie Lamport in 1985. It was formally defined in Lamport's "On Interprocess Communication" in 1986.
Safe register has been implemented in many distributed systems.
Safe semantics are defined for a variable with a single writer but multiple readers (SWMR). A SWMR register is safe if each read operation satisfies these properties: a read that is not concurrent with any write returns the value written by the most recent write, and a read that is concurrent with a write may return any value from the register's domain.
In particular, given concurrency of a read and a write operation, the read can return a value that has not been written by any write; the returned value need only belong to the register domain.
A binary safe register can be seen as modeling a bit flickering. Whatever the previous value of the register is, its value could flicker until the write finishes. Therefore, the read that overlaps with a write could return 0 or 1.
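A toy, non-distributed simulation of this guarantee (entirely an illustrative assumption, not an implementation from the literature): a read that overlaps a write may return any value in the domain, while a read with no concurrent write returns the last value written.

```python
import random

class BinarySafeRegisterModel:
    """Toy single-writer binary register with safe semantics.

    This only simulates the guarantee: the caller tells the read
    whether it overlaps a write, rather than detecting concurrency.
    """
    def __init__(self):
        self.value = 0

    def write(self, v):
        self.value = v

    def read(self, overlaps_write=False):
        if overlaps_write:
            return random.choice([0, 1])  # concurrent read: any domain value ("flicker")
        return self.value                 # sequential read: last value written

reg = BinarySafeRegisterModel()
reg.write(1)
print(reg.read())                     # -> 1 (no concurrent write)
print(reg.read(overlaps_write=True))  # -> 0 or 1 (may "flicker")
```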
"Churn" refers to the entry and exit of servers to/from a distributed system. Baldoni et al. show that no register can have the stronger property of regular semantics in a synchronous system under continuous churn. However, a safe register can be implemented under continuous churn in a non-synchronous system. Modeling and implementing a type of storage memory (Safe Register) under non-quiescent churn requires some system models such as client and server systems. Client systems contains a finite, arbitrary number of processes that are responsible for reading and writing the server system. However,the server system must ensure that read and write operations happen properly.
Safe register implementation involves:
Safe register is maintained by the set of active servers.
Clients maintain no register information.
Eventually synchronous system
Quorums (sets of server or client systems)
Size of the read and write operations executed on quorums = n − f − j (where n is the number of servers, j is the number of servers that enter and exit, and f is the number of Byzantine failures)
Algorithms such as join, read, and write.
A server ("si") that wants to enter a server system broadcasts an inquiry message to other servers to inform them of its entry, si requests a current value of the register. Once other server receive this inquiry they send reply messages to si. After si receives enough replies from other servers, it collects the replies and saves them into a reply set. Si waits until it gets enough replies (n-f-j) from other servers then it picks the most frequently received value. Si also:
The read algorithm is a basic version of the join algorithm. The difference is the broadcast mechanism used by the read operation. A client ("cw") broadcasts a message to the system, and once a server receives the inquiry it sends a reply message to the client. Once the client receives enough replies (n − f − j), it stops sending inquiries.
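A minimal sketch of the value-selection step described above: the reader waits for at least n − f − j replies and picks the most frequently reported value. The function name, reply format, and sample numbers are illustrative assumptions.

```python
from collections import Counter

def choose_value(replies, n, f, j):
    """Given the values reported by servers, require at least
    n - f - j replies and return the most frequent one."""
    quorum = n - f - j
    if len(replies) < quorum:
        raise ValueError("not enough replies yet: need %d, have %d"
                         % (quorum, len(replies)))
    value, _count = Counter(replies).most_common(1)[0]
    return value

# Illustrative run: n = 7 servers, f = 1 Byzantine, j = 1 churning.
# One faulty server reports a wrong value, but the majority value wins.
print(choose_value([42, 42, 42, 99, 42], n=7, f=1, j=1))  # -> 42
```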
Client ("cw") sends an inquiry into the system in different rounds and waits until it receives two acknowledgment. ("sn" =sequence number)
The reason for receiving two acknowledgments is to avoid danger in a system. When a process sends an acknowledgement ("ack"), it may die after one millisecond. Therefore, no confirmation is received by the client.
The validity of the safe register (if a read is not concurrent with any write, it returns the last value written) was proved based on the quorum system. Given two quorums (Qw, Qr), Qw denotes the servers that know about the latest value, and Qr denotes the servers whose values appear in the read responses. The size of each quorum is n − f − j. Proving the safe register's validity requires proving
|(Qw ∩ Qr) \ B| ≥ 1
where "B" is the set of servers that suffer Byzantine failures (at most f of them).
Proof: the red region indicates (Qw ∩ Qr) \ B and the blue region indicates Qr ∩ B. From the assumption, the size of each quorum is n − f − j, so the red region has n − 3f − 2j active servers. Therefore,
|Qw ∩ Qr| is strictly greater than f, so at least one correct server lies in the intersection of the two quorums. | https://en.wikipedia.org/wiki?curid=28254 |
Sarawak
Sarawak (; ) is a state of Malaysia. The largest among the 13 states, with an area almost equal to that of Peninsular Malaysia, Sarawak is located in northwest Borneo Island, and is bordered by the Malaysian state of Sabah to the northeast, Kalimantan (the Indonesian portion of Borneo) to the south, and Brunei in the north. The capital city, Kuching, is the largest city in Sarawak, the economic centre of the state, and the seat of the Sarawak state government. Other cities and towns in Sarawak include Miri, Sibu, and Bintulu. As of the 2015 census, the population of Sarawak was 2,636,000. Sarawak has an equatorial climate with tropical rainforests and abundant animal and plant species. It has several prominent cave systems at Gunung Mulu National Park. Rajang River is the longest river in Malaysia; Bakun Dam, one of the largest dams in Southeast Asia, is located on one of its tributaries, the Balui River. Mount Murud is the highest point in Sarawak.
The earliest known human settlement in Sarawak at the Niah Caves dates back 40,000 years. A series of Chinese ceramics dated from the 8th to 13th century AD was uncovered at the archaeological site of Santubong. The coastal regions of Sarawak came under the influence of the Bruneian Empire in the 16th century. In 1839, James Brooke, a British explorer, arrived in Sarawak. He, and his descendants, governed the state from 1841 to 1946. During World War II, it was occupied by the Japanese for three years. After the war, the last White Rajah, Charles Vyner Brooke, ceded Sarawak to Britain, and in 1946 it became a British Crown Colony. On 22 July 1963, Sarawak was granted self-government by the British and subsequently became one of the founding members of Malaysia, established on 16 September 1963. However, the federation was opposed by Indonesia leading to a three-year confrontation. The creation of Malaysia also resulted in a communist insurgency that lasted until 1990.
The head of state is the Governor, also known as the Yang di-Pertua Negeri, while the head of government is the Chief Minister. Sarawak is divided into administrative divisions and districts, governed by a system that is closely modelled on the Westminster parliamentary system and was the earliest state legislature system in Malaysia.
Because of its natural resources, Sarawak specialises in the export of oil and gas, timber and oil palms, but also possesses strong manufacturing, energy and tourism sectors. It is ethnically, culturally, and linguistically diverse; major ethnic groups including Iban, Malay, Chinese, Melanau, Bidayuh and Orang Ulu. English and Malay are the two official languages of the state; there is no official religion.
The generally-accepted explanation of the state's name is that it is derived from the Sarawak Malay word "serawak", which means antimony. A popular alternative explanation is that it is a contraction of the four Malay words purportedly uttered by Pangeran Muda Hashim (uncle to the Sultan of Brunei), ""Saya serah pada awak"" (I surrender it to you), when he gave Sarawak to James Brooke, an English explorer in 1841. However, the latter explanation is incorrect: the territory had been named Sarawak before the arrival of James Brooke, and the word "awak" was not in the vocabulary of Sarawak Malay before the formation of Malaysia.
Sarawak is nicknamed "Land of the Hornbills" ("Bumi Kenyalang"). These birds are important cultural symbols for the Dayak people, representing the spirit of God. It is also believed that if a hornbill is seen flying over residences, it will bring good luck to the local community. Sarawak has eight of the world's fifty-four species of hornbills, and the Rhinoceros hornbill is the state bird of Sarawak.
Foragers are known to have lived around the west mouth of the Niah Caves (located southwest of Miri) 40,000 years ago. A modern human skull found near the Niah Caves is the oldest human remain found in Malaysia and the oldest modern human skull from Southeast Asia. Chinese ceramics dating to the Tang and Song dynasties (8th to 13th century AD, respectively) found at Santubong (near Kuching) hint at its significance as a seaport.
The Bruneian Empire was established in the coastal regions of Sarawak by the mid-15th century, and the Kuching area was known to Portuguese cartographers during the 16th century as "Cerava", one of the five great seaports of Borneo. This period also witnessed the birth of the Sultanate of Sarawak, a local kingdom that lasted for almost half a century before being reunited with Brunei in 1641. By the early 19th century, the Bruneian Empire was in decline, retaining only a tenuous hold along the coastal regions of Sarawak, which were otherwise controlled by semi-independent Malay leaders. Away from the coast, territorial wars were fought between the Iban and a Kenyah-Kayan alliance.
The discovery of antimony ore in the Kuching region led Pangeran Indera Mahkota, a representative of the Sultan of Brunei, to increase development in the territory between 1824 and 1830. Increasing antimony production in the region led the Brunei Sultanate to demand higher taxes, which ultimately led to civil unrest. In 1839, Sultan Omar Ali Saifuddin II (1827–1852) assigned his uncle Pangeran Muda Hashim the task of restoring order but his inability to do so caused him to request the aid of British sailor James Brooke. Brooke's success in quelling the revolt was rewarded with antimony, property and the governorship of Sarawak, which at that time consisted only of a small area centred on Kuching.
The Brooke family, later called the White Rajahs, set about expanding the territory they had been ceded.
With expansion came the need for efficient governance; thus, beginning in 1841, Sarawak was divided into the first of its administrative divisions, and its currency, the Sarawak dollar, began circulating in 1858. By 1912, a total of five divisions had been established in Sarawak, each headed by a Resident. The Brooke family generally practised a paternalistic form of government with minimal bureaucracy, but were pressured to establish some form of legal framework. Since they were unfamiliar with local customs, the Brooke government created an advisory Supreme Council, mostly consisting of Malay chiefs, to provide guidance. This council is the oldest state legislative assembly in Malaysia, with the first General Council meeting taking place at Bintulu in 1867. In 1928, a Judicial Commissioner, Thomas Stirling Boyd, was appointed as the first legally trained judge. A similar system relating to matters concerning various Chinese communities was also formed. Members of the local community were encouraged by the Brooke regime to focus on particular functions within the territory: the Ibans and other Dayak people were hired as militia while Malays were primarily administrators. Chinese, both local and immigrant, were mostly employed in plantations, mines and as bureaucrats. Expanding trade led to the formation of the Borneo Company Limited in 1856. The company was involved in a wide range of businesses in Sarawak including trade, banking, agriculture, mineral exploration, and development.
Between 1853 and 1862, there were a number of uprisings against the Brooke government but all were successfully contained with the aid of local tribes. To guard against future uprisings, a series of forts were constructed to protect Kuching, including Fort Margherita, completed in 1871. By that time Brooke's control of Sarawak was such that defences were largely unnecessary.
Charles Anthoni Brooke succeeded his uncle in 1868 as the next White Rajah. Under his rule, Sarawak gained Limbang and the Baram and Trusan valleys from the Sultan of Brunei, later becoming a protectorate in 1888 with Britain handling foreign affairs but the Brooke government retaining administrative powers. Domestically, Brooke established the Sarawak Museum – the oldest museum in Borneo – in 1891, and brokered a peace in Marudi by ending intertribal wars there. Economic development continued, with oil wells being drilled from 1910 and the Brooke Dockyard opening two years later. Anthony Brooke, who would become Rajah Muda (heir apparent) in 1939, was born in 1912.
A centenary celebration of Brooke rule in Sarawak was held in 1941. During the celebration, a new constitution was introduced that would limit the power of the Rajah and grant the Sarawak people a greater role in the functioning of government. However, this constitution was never fully implemented due to the Japanese occupation. That same year saw the British withdrawing its air and marine forces defending Sarawak to Singapore. With Sarawak now unguarded, the Brooke regime adopted a scorched earth policy where oil installations in Miri were to be destroyed and the Kuching airfield held as long as possible before being destroyed. Nevertheless, a Japanese invasion force led by Kiyotake Kawaguchi landed in Miri on 16 December 1941 and conquered Kuching on 24 December 1941, with British ground forces retreating to Singkawang in neighbouring Dutch Borneo. After ten weeks of fighting there, the Allied forces surrendered on 1 April 1942. Charles Vyner Brooke, the last Rajah of Sarawak, had already left for Sydney, Australia; his officers were captured by the Japanese and interned at the Batu Lintang camp.
Sarawak remained part of the Empire of Japan for three years and eight months. During this time it was divided into three provinces – Kuching-shu, Sibu-shu, and Miri-shu – each under their respective Provincial Governor. The Japanese otherwise preserved the Brooke administrative structure and appointed the Japanese to important government positions. Allied forces later carried out Operation Semut to sabotage Japanese operations in Sarawak. During the battle of North Borneo, the Australian forces landed at Lutong-Miri area on 20 June 1945 and had penetrated as far as Marudi and Limbang before halting their operations in Sarawak. After the surrender of Japan, the Japanese surrendered to the Australian forces at Labuan on 10 September 1945. The following day, the Japanese forces at Kuching surrendered, and the Batu Lintang camp was liberated. Sarawak was immediately placed under British Military Administration and managed by Australian Imperial Forces (AIF) until April 1946.
Lacking the resources to rebuild Sarawak after the war, Charles Vyner Brooke decided to cede Sarawak to Britain as a Crown Colony, and a Cession Bill was put forth in the Council Negri (now Sarawak State Legislative Assembly), which was debated for three days. The bill was passed on 17 May 1946 with a narrow majority (19 versus 16 votes). This caused hundreds of Malay civil servants to resign in protest, sparking an anti-cession movement and the assassination of the second colonial governor of Sarawak Sir Duncan Stewart. Despite the resistance, Sarawak became a British Crown colony on 1 July 1946. Anthony Brooke opposed the cession of Sarawak to the British Crown, for which he was banished from Sarawak by the colonial government. He was only allowed to return 17 years later after Sarawak had become part of Malaysia. In 1950 all anti-cession movements in Sarawak ceased after a clamp-down by the colonial government.
On 27 May 1961, Tunku Abdul Rahman, the prime minister of the Federation of Malaya, announced a plan to form a greater federation together with Singapore, Sarawak, Sabah and Brunei, to be called Malaysia. On 17 January 1962, the Cobbold Commission was formed to gauge the support of Sarawak and Sabah for the plan; the Commission reported 80 percent support for federation. On 23 October 1962, five political parties in Sarawak formed a united front that supported the formation of Malaysia. Sarawak was officially granted self-government on 22 July 1963, and became federated with Malaya, North Borneo (now Sabah), and Singapore to form a federation named Malaysia on 16 September 1963. The governments of the Philippines and Indonesia opposed the new federation, as did the Brunei People's Party and Sarawak-based communist groups, and in 1962, the Brunei Revolt broke out. Indonesian President Sukarno responded by deploying armed volunteers and, later, military forces into Sarawak. Thousands of Sarawak communist members went into Kalimantan, Indonesian Borneo, and underwent training with the Communist Party of Indonesia. The most significant engagement of the confrontation was fought at Plaman Mapu in April 1965. The defeat at Plaman Mapu ultimately resulted in the fall of Sukarno and he was replaced by Suharto as president of Indonesia. Negotiations were restarted between Malaysia and Indonesia and led to the end of the confrontation on 11 August 1966.
A number of communist groups existed in Sarawak, the first of which, the Sarawak Overseas Chinese Democratic Youth League, formed in 1951. Another group, the North Kalimantan Communist Party (NKCP) (also known as Clandestine Communist Organisation (CCO) by government sources) was formally set up in 1970. Weng Min Chyuan and Bong Kee Chok were two of the more notable communist leaders involved in the insurgency. As the political scene changed, it grew progressively more difficult for the communists to operate. This led to Bong opening talks with chief minister Abdul Rahman Ya'kub in 1973 and eventually signing an agreement with the government. Weng, who had moved to China in the mid-1960s but nonetheless retained control of the CCO, pushed for a continued armed insurrection against the government in spite of this agreement. The conflict continued mostly in the Rajang Delta region but eventually ended when, on 17 October 1990, the NKCP signed a peace agreement with the Sarawak government.
The head of the Sarawak state is the Yang di-Pertua Negeri (also known as TYT or Governor), a largely symbolic position appointed by the Yang di-Pertuan Agong (King of Malaysia) on the advice of the Malaysian federal government. Since 2014 this position has been held by Abdul Taib Mahmud. The TYT appoints the chief minister, currently held by Abang Johari Openg (BN), as the head of government. Generally, the leader of the party that commands the majority of the state Legislative Assembly is appointed as the chief minister; democratically elected representatives are known as state assemblymen. The state assembly passes laws on subjects that are not under the jurisdiction of the Parliament of Malaysia such as land administration, employment, forests, immigration, merchant shipping and fisheries. The state government is constituted by the chief minister, the cabinet ministers and their assistant ministers.
To protect the interests of the Sarawakians in the Malaysian federation, special safeguards have been included in the Constitution of Malaysia. These include: control over immigration in and out of the state as well as the residence status of non-Sarawakians and non-Sabahans, limitations on the practice of law to resident lawyers, independence of the Sarawak High Court from the High Court Peninsular Malaysia, a requirement that the Sarawak Chief Minister be consulted prior to the appointment of the chief judge of the Sarawak High Court, the existence of Native Courts in Sarawak and the power to levy sales tax. Natives in Sarawak enjoy special privileges such as quotas and employment in public service, scholarships, university placements, and business permits. Local governments in Sarawak are exempt from local council laws enacted by the Malaysian parliament.
Major political parties in Sarawak can be divided into three categories: native non-Muslim, native Muslim, and non-native; parties, however, may also include members from more than one group. The first political party, the Sarawak United Peoples' Party (SUPP), was established in 1959, followed by the Parti Negara Sarawak (PANAS) in 1960 and the Sarawak National Party (SNAP) in 1961. Other major political parties such as Parti Pesaka Sarawak (PESAKA) appeared by 1962. These parties later joined the national coalition of the Alliance Party. The Alliance Party (later regrouped into Barisan Nasional) has ruled Sarawak since the formation of Malaysia. The opposition in Sarawak has consistently alleged that the ruling coalition uses various types of vote-buying tactics in order to win elections. Stephen Kalong Ningkan was the first Chief Minister of Sarawak from 1963 to 1966 following his landslide victory in local council elections. However, he was ousted in 1966 by Tawi Sli with the help of the Malaysian federal government, causing the 1966 Sarawak constitutional crisis.
In 1969, the first Sarawak state election was held, with members of the Council Negri being directly elected by the voters. This election marked the beginning of ethnic Melanau domination in Sarawak politics by Abdul Rahman Ya'kub and Abdul Taib Mahmud. In the same year, the North Kalimantan Communist Party (NKCP) which subsequently waged a guerrilla war against the newly elected Sarawak state government, was formed. The party was dissolved after the signing of a peace agreement in 1990. 1973 saw the birth of Parti Pesaka Bumiputera Bersatu (PBB) following a merger of several parties. This party would later become the backbone of the Sarawak BN coalition. In 1978, the Democratic Action Party (DAP) was the first West Malaysia-based party to open its branches in Sarawak. Sarawak originally held state elections together with national parliamentary elections. However, the then chief minister Abdul Rahman Ya'kub delayed the dissolution of the state assembly by a year to prepare for the challenges posed by opposition parties. This made Sarawak the only state in Malaysia to hold state elections separate from the national parliamentary elections since 1979. In 1983, SNAP started to fragment into several splinter parties due to recurrent leadership crises. The political climate in the state was stable until the 1987 Ming Court Affair, a political coup initiated by Abdul Taib Mahmud's uncle to topple the Taib-led BN coalition. However, the coup was unsuccessful and Taib retained his position as chief minister.
Since the 2006 state election, the Democractic Action Party (DAP) has derived the majority of its support from urban centres and became the largest opposition party in Sarawak. In 2010, it formed the Pakatan Rakyat coalition with Parti Keadilan Rakyat (PKR) and Parti Islam Se-Malaysia (PAS); the latter two parties had become active in Sarawak between 1996 and 2001. Sarawak is the only state in Malaysia where West Malaysia-based component parties in the BN coalition, especially the United Malays National Organisation (UMNO), have not been active in state politics.
On 12 June 2018, the Sarawak Parties Alliance was formed by the BN parties in the state in the aftermath of an historic meeting of party leaders in Kuching, where they decided that, in light of the BN defeat in the 2018 Malaysian general election and the changed national political situation under a new government, the parties would leave the BN altogether. In conjunction with the celebration of Malaysia Day in 2018 under the new government, Prime Minister Mahathir Mohamad promised to restore the status of Sarawak (together with Sabah) as an equal partner to Malaya, where all three parties (and then, Singapore) formed Malaysia in accordance with the Malaysia Agreement. However, the bill to amend the Constitution of Malaysia for this purpose failed to pass in 2019, falling short of the required two-thirds majority (148 votes) in Parliament: only 138 members supported the move, while 59 abstained.
Unlike states in Peninsular Malaysia, Sarawak is divided into divisions, 12 in all, each headed by an appointed resident.
On 26 November 2015, it was announced that the Kuching Division district of Serian would become Sarawak's 12th division; it was officiated by Adenan Satem at its formal creation on 11 April 2015.
A division is divided into districts, each headed by a district officer, which are in turn divided into sub-districts, each headed by a Sarawak Administrative Officer (SAO). There is also one development officer for each division and district to implement development projects. The state government appoints a headman (known as "ketua kampung" or "penghulu") for each village. There are a total of 26 sub-districts in Sarawak all under the jurisdiction of the Sarawak Ministry of Local Government and Community Development. The list of divisions, districts, and subdistricts is shown in the table below:
The first paramilitary armed forces in Sarawak, a regiment formed by the Brooke regime in 1862, were known as the Sarawak Rangers. The regiment, renowned for its jungle tracking skills, served in the campaign to end the intertribal wars in Sarawak. It also engaged in guerrilla warfare against the Japanese, in the Malayan Emergency (in West Malaysia) and the Sarawak Communist Insurgency against the communists. Following the formation of Malaysia, the regiment was absorbed into the Malaysian military forces and is now known as the Royal Ranger Regiment.
In 1888, Sarawak, together with neighbouring North Borneo, and Brunei, became British protectorates, and the responsibility for foreign policy was handed over to the British in exchange for military protection. Since the formation of Malaysia, the Malaysian federal government has been solely responsible for foreign policy and military forces in the country.
The Malaysian government has a number of border disputes with neighbouring countries, of which several concern Sarawak. This includes land and maritime disputes with neighbouring Brunei. In 2009, Malaysian prime minister Abdullah Ahmad Badawi claimed that in a meeting with Sultan of Brunei, Brunei agreed to drop its claim over Limbang. This was however denied by the second Foreign Minister of Brunei Lim Jock Seng, stating the issue was never discussed during the meeting. James Shoal (Betting Serupai) and the Luconia Shoals (Betting Raja Jarum/Patinggi Ali), islands in the South China Sea, fall within Sarawak's exclusive economic zone, but concerns have been raised about Chinese incursions. There are also several Sarawak–Kalimantan border issues with Indonesia.
The total land area of Sarawak is nearly , making up 37.5 percent of the total area of Malaysia, and lies between the northern latitudes 0° 50′ and 5° and eastern longitudes 109° 36′ and 115° 40′ E. Its of coastline is interrupted in the north by about of Bruneian coast. A total of its coastline have been eroding. In 1961, Sarawak including neighbouring Sabah, which had been included in the International Maritime Organization (IMO) through the participation of the United Kingdom, became joint associate members of the IMO. Sarawak is separated from Kalimantan Borneo by ranges of high hills and mountains that are part of the central mountain range of Borneo. These become loftier to the north, and are highest near the source of the Baram River at the steep Mount Batu Lawi and Mount Mulu. Mount Murud is the highest point in Sarawak.
Sarawak has a tropical geography with an equatorial climate and experiences two monsoon seasons: a northeast monsoon and a southwest monsoon. The northeast monsoon occurs between November and February, bringing heavy rainfall while the southwest monsoon, which occurs between March and October, brings somewhat less rainfall. The climate is stable throughout the year except for the two monsoons, with average daily temperature varying between in the morning to in the afternoon at coastal areas. Miri has the lowest average temperatures in comparison to other major towns in Sarawak and has the longest daylight hours (more than six hours a day), while other areas receive sunshine for five to six hours a day. Humidity is usually high, exceeding 68 percent, with annual rainfall varying between and for up to 220 days a year. At highland areas, the temperature can vary from to during the day and as low as during the night.
Sarawak is divided into three ecoregions. The coastal region is rather low-lying and flat with large areas of swamp and other wet environments. Beaches in Sarawak include Pasir Panjang and Damai beaches in Kuching, Tanjung Batu beach in Bintulu, and Tanjung Lobang and Hawaii beaches in Miri. Hilly terrain accounts for much of the inhabited land and is where most of the cities and towns are found. The ports of Kuching and Sibu are built some distance from the coast on rivers while Bintulu and Miri are close to the coastline where the hills stretch right to the South China Sea. The third region is the mountainous region along the Sarawak–Kalimantan border, where a number of villages such as Bario, Ba'kelalan, and Usun Apau Plieran are located. A number of rivers flow through Sarawak, with the Sarawak River being the main river flowing through Kuching. The Rajang River is the longest river in Malaysia, measuring including its tributary, Balleh River. To the north, the Baram, Limbang and Trusan Rivers drain into the Brunei Bay.
Sarawak can be divided into two geological zones: the Sunda Shield, which extends southwest from the Batang Lupar River (near Sri Aman) and forms the southern tip of Sarawak, and the geosyncline region, which extends northeast to the Batang Lupar River, forming the central and northern regions of Sarawak. The oldest rock type in southern Sarawak is schist formed during the Carboniferous and Lower Permian times, while the youngest igneous rock in this region, andesite, can be found at Sematan. Geological formation of the central and northern regions started during the late Cretaceous period. Other types of stone that can be found in central and northern Sarawak are shale, sandstone, and chert. The Miri Division in eastern Sarawak is a region of Neogene strata containing organic-rich rock formations which host prolific oil and gas reserves. The rocks enriched in organic components are mudstones in the Lambir, Miri and Tukau Formations of Middle Miocene-Lower Pliocene age. Significant quantities of Sarawak soil are lithosols, up to 60 percent, and podsols, around 12 percent, while abundant alluvial soil is found in coastal and riverine regions. 12 percent of Sarawak is covered with peat swamp forest.
There are thirty national parks, among which are Niah with its eponymous caves, the highly developed ecosystem around Lambir Hills, and the World Heritage Site of Gunung Mulu. The last contains Sarawak Chamber, one of the world's largest underground chambers, Deer Cave, the largest cave passage in the world, and Clearwater Cave, the longest cave system in Southeast Asia.
Sarawak contains large tracts of tropical rainforest with diverse plant species, which has led to a number of them being studied for medicinal properties. Mangrove and nipah forests lining its estuaries comprise 2% of its forested area, peat swamp forests along other parts of its coastline cover 16%, Kerangas forest covers 5% and Dipterocarpaceae forests cover most mountainous areas. The major trees found in estuary forests include "bako" and "nibong", while those in the peat swamp forests include "ramin" ("Gonystylus bancanus"), "meranti" ("Shorea"), and "medang jongkong" ("Dactylocladus stenostachys").
Animal species are also highly varied, with 185 species of mammals, 530 species of birds, 166 species of snakes, 104 species of lizards, and 113 species of amphibians, of which 19 percent of the mammals, 6 percent of the birds, 20 percent of the snakes and 32 percent of the lizards are endemic. These species are largely found in Totally Protected Areas. There are over 2,000 tree species in Sarawak. Other plants include 1,000 species of orchids, 757 species of ferns, and 260 species of palm. The state is the habitat of endangered animals, including the Borneo pygmy elephant, proboscis monkey, orangutans and Sumatran rhinoceroses. Matang Wildlife Centre, Semenggoh Nature Reserve, and Lanjak Entimau Wildlife Sanctuary are noted for their orangutan protection programmes. Talang–Satang National Park is notable for its turtle conservation initiatives. Birdwatching is a common activity in various national parks such as Gunung Mulu National Park, Lambir Hills National Park, and Similajau National Park. Miri–Sibuti National Park is known for its coral reefs and Gunung Gading National Park for its "Rafflesia" flowers. Bako National Park, the oldest national park in Sarawak, is known for its 275 proboscis monkeys, and Padawan Pitcher Garden for its various carnivorous pitcher plants. In 1854, Alfred Russel Wallace visited Sarawak. A year later, he formulated the "Sarawak Law" which foreshadowed the formulation of his (and Darwin's) theory of evolution by natural selection three years later.
The Sarawak state government has enacted several laws to protect its forests and endangered wildlife species. Some of the protected species are the orangutan, green sea turtle, flying lemur, and piping hornbill. Under the Wild Life Protection Ordinance 1998, Sarawak natives are given permission to hunt a restricted range of wild animals in the jungles but should not possess more than of meat. The Sarawak Forest Department was established in 1919 to conserve forest resources in the state. Following international criticism of the logging industry in Sarawak, the state government decided to downsize the Sarawak Forest Department and created the Sarawak Forestry Corporation in 1995. The Sarawak Biodiversity Centre was set up in 1997 for the conservation, protection, and sustainable development of biodiversity in the state.
Sarawak's rain forests are primarily threatened by the logging industry and palm oil plantations. The issue of human rights of the Penan and deforestation in Sarawak became an international environmental issue when Swiss activist Bruno Manser visited Sarawak regularly between 1984 and 2000. Deforestation has affected the life of indigenous tribes, especially the Penan, whose livelihood is heavily dependent on forest produce. This led to several blockades by indigenous tribes during the 1980s and 1990s against logging companies encroaching on their lands. Indeed, illegal logging in particular has decimated the forest regions that indigenous populations depend on for their livelihoods, depleting not only fish and wildlife but also traditional medicinal herbs and construction staples such as palm. There have also been cases where Native Customary Rights (NCR) lands have been given to timber and plantation companies without the permission of the locals. The indigenous people have resorted to legal means to reinstate their NCR. In 2001 the High Court of Sarawak fully reinstated the NCR land claimed by the Rumah Nor people, but this was overturned partially in 2005. However, this case has served as a precedent, leading to more NCR being upheld by the high court in the following years. Sarawak's mega-dam policies, such as the Bakun Dam and Murum Dam projects, have submerged thousands of hectares of forest and displaced thousands of indigenous people. Since 2013, the proposed Baram Dam project has been delayed due to ongoing protests from local indigenous tribes. Since 2014, the Sarawak government under chief minister Adenan Satem began to take action against illegal logging and to diversify the economy of the state. Through the course of 2016 over 2 million acres of forest, much of it in orangutan habitats, were declared protected areas.
Sources vary as to Sarawak's remaining forest cover: former chief minister Abdul Taib Mahmud declared that it fell from 70% to 48% between 2011 and 2012, the Sarawak Forest Department and the Ministry of Resource Planning and Environment both held that it remained at 80% in 2012, and Wetlands International reported that it fell by 10% between 2005 and 2010, 3.5 times faster than the rest of Asia combined.
Historically, Sarawak's economy was stagnant during the rule of the three White Rajahs. After the formation of Malaysia, Sarawak's GDP growth rate rose because of increased petroleum output and rising global petroleum prices. However, the state economy is less diversified and still heavily dependent upon the export of primary commodities when compared to Malaysia overall. The per capita GDP in Sarawak was lower than the national average from 1970 to 1990. As of 2016, GDP per capita for Sarawak stands at RM 44,333 – the fifth highest in Malaysia. However, the urban–rural income gap remains a major problem in Sarawak.
Sarawak is abundant in natural resources, and primary industries such as mining, agriculture, and forestry accounted for 32.8% of its economy in 2013. It also specialises in the manufacture of food and beverages, wood-based and rattan products, basic metal products, and petrochemicals, as well as cargo and air services and tourism.
The state's gross domestic product (GDP) grew by 5.0% per year on average from 2000 to 2009, but became more volatile later on, ranging from −2.0% in 2009 to 7.0% in 2010. Sarawak contributed 10.1% of Malaysia's GDP in the nine years leading up to 2013, making it the third largest contributor after Selangor and Kuala Lumpur. From 2006 to 2013, the oil and gas industry accounted for 34.8% of the Sarawak government's revenue. It attracted RM 9.6 billion (US$2.88 billion) in foreign investments, with 90% going to the Sarawak Corridor of Renewable Energy (SCORE), the second largest economic corridor in Malaysia.
As of 2017, Sarawak is producing 850,000 barrels of oil equivalent every day in 60 oil and gas producing fields. However, the export-oriented economy is dominated by liquefied natural gas (LNG), which accounts for more than half of total exports. Crude petroleum accounts for 20.8%, while palm oil, sawlogs, and sawn timber account for 9.0% collectively. The state receives a 5% royalty from Petronas over oil explorations in its territorial waters. Most of the oil and gas deposits are located offshore next to Bintulu and Miri at Balingian basin, Baram basin, and around Luconia Shoals.
Sarawak is one of the world's largest exporters of tropical hardwood timber, constituting 65% of the total Malaysian log exports in 2000. The last United Nations statistics in 2001 estimated Sarawak's sawlog exports at an average of per year between 1996 and 2000.
In 1955, OCBC became the first foreign bank to operate in Sarawak, with other overseas banks following suit. Other notable Sarawak-based companies include Cahya Mata Sarawak Berhad, Naim Holdings, and Rimbunan Hijau.
Electricity in Sarawak, supplied by the state-owned Sarawak Energy Berhad (SEB), is primarily sourced from coal-fired power plants and thermal power stations using LNG, but diesel-based sources and hydroelectricity are also utilised. There are three hydroelectric dams, at Batang Ai, Bakun, and Murum, with several others under consideration. In early 2016, SEB signed Malaysia's first energy export deal to supply electricity to neighbouring West Kalimantan in Indonesia.
In 2008, SCORE was established as a framework to develop the energy sector in the state, specifically the Murum, Baram, and Baleh Dams as well as potential coal-based power plants, and 10 high-priority industries out to 2030. The Regional Corridor Development Authority is the government agency responsible for managing SCORE. The entire central region of Sarawak is covered under SCORE, including areas such as Samalaju (near Bintulu), Tanjung Manis, and Mukah. Samalaju will be developed as an industrial park, Tanjung Manis as a halal food hub, and Mukah as the administrative centre for SCORE with a focus on resource-based research and development.
Tourism plays a major role in the economy of the state, contributing 7.89% of the state's GDP in 2016.
Foreign visitors to Sarawak are predominantly from Brunei, Indonesia, the Philippines, Singapore, China and the United Kingdom. A number of different organisations, both state and private, are involved in the promotion of tourism in Sarawak: the Sarawak Tourism Board is the state body responsible for tourism promotion in the state, various private tourism groups are united under the Sarawak Tourism Federation, and the Sarawak Convention Bureau is responsible for attracting conventions, conferences, and corporate events which are held in the Borneo Convention Centre in Kuching. The public and private bodies in Sarawak hold a biannual event to award the Sarawak Hornbill Tourism Award, an award for achievements within various categories, to recognise businesses and individuals for their efforts in the development of tourism within the state.
The Rainforest World Music Festival is the region's primary musical event, attracting more than 20,000 people annually. Other events that are held regularly in Sarawak are the ASEAN International Film Festival, Asia Music Festival, Borneo Jazz Festival, Borneo Cultural Festival, and Borneo International Kite Festival. Major shopping complexes in Sarawak include The Spring, Boulevard, Hock Lee Centre, City One shopping malls in Kuching, and Bintang Megamall, Boulevard, Imperial Mall, and Miri Plaza shopping malls in Miri.
Infrastructure development in Sarawak is overseen by the Ministry of Infrastructure Development and Transportation, successor to the Ministry of Infrastructure Development and Communications (MIDCom) after it was renamed in 2016. Despite this ministerial oversight, infrastructure in Sarawak remains relatively underdeveloped compared to Peninsular Malaysia.
In 2009, 94% of urban Sarawak was supplied with electricity, but only 67% of rural areas had electricity; rural coverage had increased to 91% by 2014. According to a 2015 article, household internet penetration in Sarawak was lower than the Malaysian national average, 41.2% versus 58.6%, with 58.5% of internet use in urban areas and 29.9% in rural areas. In comparison, mobile telecommunication uptake in Sarawak was comparable to the national average, 93.3% against 94.2%, and on par with neighbouring Sabah. Mobile telecommunication infrastructure, specifically broadcast towers, is built and managed by Sacofa Sdn Bhd (Sacofa Private Limited), which enjoys a monopoly in Sarawak after being granted a 20-year exclusivity deal on the provision, maintenance and leasing of towers in the state.
A number of different bodies manage the supply of water depending on their region of responsibility: the Kuching Water Board (KWB) and Sibu Water Board (SWB), LAKU Management Sdn Bhd, which handles water supply in Miri, Bintulu, and Limbang, and the Rural Water Supply Department, which manages the water supply for the remaining areas. 82% of rural areas have a fresh water supply.
Much like many other former British territories, Sarawak uses dual carriageways and follows the left-hand traffic rule. As of 2013, Sarawak had a total of of connected roadways, with being paved state routes, of dirt tracks, of gravel roads, and of paved federal highway. The primary route in Sarawak is the Pan Borneo Highway, which runs from Sematan, Sarawak, through Brunei to Tawau, Sabah. Despite being a major highway, the road is in poor condition, leading to numerous accidents and fatalities. Contracts worth 16 billion ringgit were awarded to a number of local companies in December 2016 to add new vehicle and pedestrian bridges, interchanges and bus shelters to the highway as part of a multi-phase project.
A railway line existed before the war, but the last remnants of the line were dismantled in 1959. A rail project was announced in 2008 to be in line with the transport needs of SCORE, but as yet no construction work has begun despite an anticipated completion date in 2015. In 2017, the Sarawak government proposed a light rail system (Kuching Line) connecting Kuching, Samarahan and Serian divisions with anticipated completion in 2020. Currently, buses are the primary mode of public transportation in Sarawak with interstate services connecting the state to Sabah, Brunei, and Pontianak (Indonesia).
Sarawak is served by a number of airports, with Kuching International Airport, located south-west of Kuching, being the largest. Flights from Kuching are mainly to Kuala Lumpur but also to Johor Bahru, Penang, Sabah, Kelantan, Singapore and Pontianak, Indonesia. A second airport at Miri serves flights primarily to other Malaysian states as well as services to Singapore. Other smaller airports such as Sibu Airport, Bintulu Airport, Mukah Airport, Marudi Airport, Mulu Airport, and Limbang Airport provide domestic services within Malaysia. There are also a number of remote airstrips serving rural communities in the state. Three airlines serve Sarawak: Malaysia Airlines, AirAsia, and MASwings, all of which use Kuching International Airport as their main hub. The state-owned Hornbill Skyways is an aviation company that largely provides private chartered flights and flight services for public servants.
Sarawak has four primary ports, located at Kuching, Sibu, Bintulu, and Miri. The busiest seaport, at Bintulu, is under the jurisdiction of the Malaysian federal government and mainly handles LNG products and regular cargo. The remaining ports are under the respective state port authorities. The combined throughput of the four primary ports was 61.04 million freight weight tonnes (FWT) in 2013. Sarawak has 55 navigable river networks with a combined length of . For centuries, the rivers of Sarawak have been a primary means of transport as well as a route for timber and other agricultural goods moving downriver for export at the country's major ports. Sibu port, located from the river's mouth, is the main hub along the Rajang River, mainly handling timber products. However, the throughput of Sibu port has declined over the years after Tanjung Manis Industrial Port (TIMP) began operating further downriver.
Health care in Sarawak is provided by three major government hospitals, Sarawak General Hospital, Sibu Hospital, and Miri Hospital, as well as numerous district hospitals, public health clinics, 1Malaysia clinics, and rural clinics. Besides government-owned hospitals and clinics, there are several private hospitals in Sarawak such as the Normah Medical Specialists Centre, Timberland Medical Specialists Centre, and Sibu Specialist Medical Centre. Hospitals in Sarawak typically provide the full gamut of health care options, from triage to palliative care for the terminally ill. In 1994, the Sarawak General Hospital Department of Radiotherapy, Oncology & Palliative Care instituted an at-home care, or hospice care, program for cancer patients. The non-profit Sarawak Hospice Society was established in 1998 to promote this program.
In contrast to the number of other medical facilities, mental health is served by only a single facility, Hospital Sentosa. Nevertheless, the overall abundance of medical services has made Sarawak a medical tourism destination for visitors from neighbouring Brunei and Indonesia.
In comparison to the prevalence of health services in urban regions, much of rural Sarawak is only accessible by river transport, which limits access. Remote rural areas that are beyond the operating areas of health clinics, about , and inaccessible by land or river are serviced by a monthly flying doctor service, which was established in 1973.
A village health promoter program, where volunteers are provided with basic medical training, was established in 1981 but difficulty in providing medical supplies to remote villages, as well as a lack of incentive, resulted in a decline of the program. A variety of traditional medicine practices are still being used by the various communities in Sarawak to supplement modern medical practices but this practice is also declining. However, since 2004, there has been a resurgence in traditional medicine in Malaysia resulting in the establishment of a traditional medicine division within the Ministry of Health. A 2006 government program to have integrated hospitals led to numerous universities starting programs to teach traditional medicine and major hospitals, including Sarawak General Hospital, providing traditional therapies. | https://en.wikipedia.org/wiki?curid=28258 |
Sonnet
A sonnet is a poetic form which originated at the Court of the Holy Roman Emperor Frederick II in Palermo, Sicily. The 13th-century poet and notary Giacomo da Lentini is credited with the sonnet's invention, and the Sicilian School of poets who surrounded him is credited with its spread. The earliest sonnets, however, no longer survive in the original Sicilian language, but only in translations into the Tuscan dialect.
The term "sonnet" is derived from the Italian word "sonetto" (from Old Provençal "sonet" a little poem, from "son" song, from Latin "sonus" a sound). By the thirteenth century it signified a poem of fourteen lines that follows a strict rhyme scheme and specific structure. Conventions associated with the sonnet have evolved over its history. Writers of sonnets are sometimes called "sonneteers," although the term can be used derisively.
The sonnet was created by Giacomo da Lentini, head of the Sicilian School under Emperor Frederick II. Guittone d'Arezzo rediscovered it and brought it to Tuscany where he adapted it to his language when he founded the Siculo-Tuscan School, or Guittonian school of poetry (1235–1294). He wrote almost 250 sonnets. Other Italian poets of the time, including Dante Alighieri (1265–1321) and Guido Cavalcanti (c. 1250–1300), wrote sonnets, but the most famous early sonneteer was Petrarch. Other fine examples were written by Michelangelo.
The structure of a typical Italian sonnet of the time included two parts that together formed a compact form of "argument". First, the octave forms the "proposition", which describes a "problem" or "question", followed by a sestet (two tercets) which proposes a "resolution". Typically, the ninth line initiates what is called the "turn", or "volta", which signals the move from proposition to resolution. Even in sonnets that don't strictly follow the problem/resolution structure, the ninth line still often marks a "turn" by signaling a change in the tone, mood, or stance of the poem.
Later, the ABBA ABBA pattern became the standard for Italian sonnets. For the sestet there were two different possibilities: CDE CDE and CDC CDC. In time, other variants on this rhyming scheme were introduced, such as CDCDCD. Petrarch typically used an ABBA ABBA pattern for the octave, followed by either CDE CDE or CDC CDC rhymes in the sestet. The Crybin variant of the Italian sonnet has the rhyme scheme ABBA CDDC EFG EFG.
Most sonnets in Dante's "La Vita Nuova" are Petrarchan. Chapter VII gives the sonnet "O voi che per la via", with two sestets (AABAAB AABAAB) and two quatrains (CDDC CDDC), and Ch. VIII, "Morte villana", with two sestets (AABBBA AABBBA) and two quatrains (CDDC CDDC).
In American poetry, the first notable poet to use the sonnet form was Edgar Allan Poe.
Henry Wadsworth Longfellow also wrote and translated many sonnets, among others the cycle "Divina Commedia" ("Divine Comedy"). He used the Italian rhyme scheme.
Emma Lazarus, a Sephardic Jewish poet from New York City, also published many sonnets. She is the author of perhaps the best-known American sonnet, "The New Colossus," which celebrates the Statue of Liberty and her role in welcoming immigrants to the New World.
Among the major poets of the early Modernist period, Robert Frost, Edna St. Vincent Millay and E. E. Cummings all used the sonnet regularly.
In 1928, American poet and painter John Allan Wyeth published "This Man's Army: A War in Fifty-Odd Sonnets". The collection, with a rhyme scheme unique in the history of the sonnet, traces Wyeth's military service with the American Expeditionary Force in France during World War I. According to Dana Gioia, who rescued Wyeth's work from oblivion during the early 21st century, Wyeth is the only American poet of the Great War who deserves to be compared with British war poets Siegfried Sassoon, Isaac Rosenberg, and Wilfred Owen.
During the Harlem Renaissance, African American writers of sonnets included Paul Laurence Dunbar, Claude McKay, Countee Cullen, Langston Hughes, and Sterling A. Brown.
Other modern poets, including Don Paterson, Edwin Morgan, Joan Brossa, Paul Muldoon have used the form. Wendy Cope's poem "Stress" is a sonnet. Elizabeth Bishop's inverted "Sonnet" was one of her last poems. Ted Berrigan's book, "The Sonnets", "is conventional almost exclusively in [the] line count". Paul Muldoon often experiments with 14 lines and sonnet rhymes, though without regular sonnet meter.
At the height of the Vietnam War in 1967, American poet Richard Wilbur composed "A Miltonic Sonnet for Mr. Johnson on His Refusal of Peter Hurd's Official Portrait". In a clear cut case of "criticism from the Right", Wilbur compares U.S. President Lyndon Baines Johnson with Thomas Jefferson and finds the former to be greatly wanting. Commenting that Jefferson "would have wept to see small nations dread/ The imposition of our cattle brand," and that in Jefferson's term, "no army's blood was shed", Wilbur urges President Johnson to seriously consider how history will judge him and his Administration.
Beginning in the 1970s and '80s, the New Formalist movement has also fostered a revival of the sonnet form in American poetry. Between 1994 and 2017, first "The Formalist" and then "Measure" sponsored the Howard Nemerov Sonnet Award, which was offered annually for the best new sonnet.
Rhina Espaillat, a Dominican immigrant and prominent New Formalist poet, has translated many Spanish and Latin American sonnets into English, although no volume of her many translations has yet been published. Espaillat has also used the sonnet form for original poetry.
This revival includes the invention of the "word sonnet", which is a fourteen-line poem, with one word per line. Frequently allusive and imagistic, word sonnets can also be irreverent and playful.
In Canada during the last decades of the 19th century, the Confederation Poets, especially Archibald Lampman, were known for their sonnets, which were mainly on pastoral themes.
Canadian poet Seymour Mayne has published a few collections of word sonnets, and is one of the chief innovators of the form.
American-born Canadian poet Catherine Chandler, who lives in Quebec, has published many sonnets.
The sonnet was introduced into Czech literature at the beginning of the 19th century. The first great Czech sonneteer was Ján Kollár, who wrote a cycle of sonnets named "Slávy Dcera" ("The Daughter of Sláva" / "The Daughter of Fame"). Kollár was Slovak and a supporter of Pan-Slavism, but wrote in Czech, as he disagreed that Slovak should be a separate language. Kollár's magnum opus was planned as a Slavic epic poem as great as Dante's Divine Comedy. It consists of "The Prelude", written in quantitative hexameters, and the sonnets themselves. The number of poems increased in subsequent editions and eventually reached 645. The greatest Czech Romantic poet, Karel Hynek Mácha, also wrote many sonnets. In the second half of the 19th century Jaroslav Vrchlický published "Sonety samotáře" ("Sonnets of a Solitudinarian"). Another poet who wrote many sonnets was Josef Svatopluk Machar, who published "Čtyři knihy sonetů" ("The Four Books of Sonnets"). In the 20th century Vítězslav Nezval wrote the cycle "100 sonetů zachránkyni věčného studenta Roberta Davida" ("One Hundred Sonnets for the Woman who Rescued Perpetual Student Robert David"). After the Second World War the sonnet was the favourite form of Oldřich Vyhlídal. Czech poets use different metres for sonnets: Kollár and Mácha used decasyllables, Vrchlický iambic pentameter, Antonín Sova free verse, and Jiří Orten the Czech alexandrine. Ondřej Hanus wrote a monograph on Czech sonnets of the first half of the twentieth century.
In the Netherlands Pieter Corneliszoon Hooft wrote sonnets. A famous example is "Mijn lief, mijn lief, mijn lief". Some of his poems were translated by Edmund Gosse.
More recent sonneteers in Dutch are Gerrit Komrij, Martinus Nijhoff, and Jan Kal.
In English, both the English (or Shakespearean) sonnet and the Italian Petrarchan sonnet are traditionally written in iambic pentameter.
The first known sonnets in English, written by Sir Thomas Wyatt and Henry Howard, Earl of Surrey, used the Italian, Petrarchan form, as did sonnets by later English poets, including John Milton, Thomas Gray, William Wordsworth and Elizabeth Barrett Browning.
When English sonnets were introduced by Thomas Wyatt (1503–1542) in the early 16th century, his sonnets and those of his contemporary the Earl of Surrey were chiefly translations from the Italian of Petrarch and the French of Ronsard and others. While Wyatt introduced the sonnet into English, it was Surrey who developed the rhyme scheme – ABAB CDCD EFEF GG – which now characterizes the English sonnet. Having previously circulated in manuscripts only, both poets' sonnets were first published in Richard Tottel's "Songes and Sonnetts," better known as "Tottel's Miscellany" (1557).
It was, however, Sir Philip Sidney's sequence "Astrophel and Stella" (1591) that started the English vogue for sonnet sequences. The next two decades saw sonnet sequences by William Shakespeare, Edmund Spenser, Michael Drayton, Samuel Daniel, Fulke Greville, William Drummond of Hawthornden, and many others. These sonnets were all essentially inspired by the Petrarchan tradition, and generally treat of the poet's love for some woman, with the exception of Shakespeare's sequence of 154 sonnets. The form is often named after Shakespeare, not because he was the first to write in this form but because he became its most famous practitioner. The form consists of fourteen lines structured as three quatrains and a couplet. The third quatrain generally introduces an unexpected sharp thematic or imagistic "turn", the volta. In Shakespeare's sonnets, however, the volta usually comes in the couplet, and usually summarizes the theme of the poem or introduces a fresh new look at the theme. With only a rare exception (for example, Shakespeare's Sonnet 145 in iambic tetrameter), the meter is iambic pentameter.
This example, Shakespeare's "Sonnet 116", illustrates the form (with some typical variances one may expect when reading an Elizabethan-age sonnet with modern eyes):
Let me not to the marriage of true minds (A)
Admit impediments, love is not love (B)*
Which alters when it alteration finds, (A)
Or bends with the remover to remove. (B)*
O no, it is an ever fixèd mark (C)**
That looks on tempests and is never shaken; (D)***
It is the star to every wand'ring bark, (C)**
Whose worth's unknown although his height be taken. (D)***
Love's not time's fool, though rosy lips and cheeks (E)
Within his bending sickle's compass come, (F)*
Love alters not with his brief hours and weeks, (E)
But bears it out even to the edge of doom: (F)*
If this be error and upon me proved, (G)
I never writ, nor no man ever loved. (G)
"* PRONUNCIATION/RHYME: Note changes in pronunciation since composition."
"** PRONUNCIATION/METER: "Fixed" pronounced as two-syllables, "fix-ed". "
"*** RHYME/METER: Feminine-rhyme-ending, eleven-syllable alternative."
The Prologue to "Romeo and Juliet" is also a sonnet, as is Romeo and Juliet's first exchange in Act One, Scene Five, lines 104–117, beginning with "If I profane with my unworthiest hand" (104) and ending with "Then move not while my prayer's effect I take" (117). The Epilogue to "Henry V" is also in the form of a sonnet.
A variant on the English form is the Spenserian sonnet, named after Edmund Spenser (c.1552–1599), in which the rhyme scheme is ABAB BCBC CDCD EE. The linked rhymes of his quatrains suggest the linked rhymes of such Italian forms as terza rima. This example is taken from "Amoretti":
"Happy ye leaves! whenas those lily hands"
Happy ye leaves! whenas those lily hands, (A)
Which hold my life in their dead doing might, (B)
Shall handle you, and hold in love's soft bands, (A)
Like captives trembling at the victor's sight. (B)
And happy lines on which, with starry light, (B)
Those lamping eyes will deign sometimes to look, (C)
And read the sorrows of my dying sprite, (B)
Written with tears in heart's close bleeding book. (C)
And happy rhymes! bathed in the sacred brook (C)
Of Helicon, whence she derived is, (D)
When ye behold that angel's blessed look, (C)
My soul's long lacked food, my heaven's bliss. (D)
Leaves, lines, and rhymes seek her to please alone, (E)
Whom if ye please, I care for other none. (E)
In the 17th century, the sonnet was adapted to other purposes, with Metaphysical poets John Donne and George Herbert writing religious sonnets (see John Donne's "Holy Sonnets"), and John Milton using the sonnet as a general meditative poem. Probably Milton's most famous sonnet is "When I Consider How My Light is Spent", titled by a later editor "On His Blindness". Both the Shakespearean and Petrarchan rhyme schemes were popular throughout this period, as well as many variants.
"On His Blindness" by Milton, gives a sense of the Petrarchan rhyme scheme:
When I consider how my light is spent (A)
Ere half my days, in this dark world and wide, (B)
And that one talent which is death to hide (B)
Lodged with me useless, though my soul more bent (A)
To serve therewith my Maker, and present (A)
My true account, lest he returning chide; (B)
"Doth God exact day-labour, light denied?" (B)
I fondly ask. But Patience, to prevent (A)
That murmur, soon replies, "God doth not need (C)
Either man's work or his own gifts; who best (D)
Bear his mild yoke, they serve him best. His state (E)
Is Kingly. Thousands at his bidding speed (C)
And post o'er land and ocean without rest: (D)
They also serve who only stand and wait." (E)
The fashion for the sonnet went out with the Restoration, and hardly any sonnets were written between 1670 and Wordsworth's time. However, sonnets came back strongly with the French Revolution. Amongst the first to reintroduce the form was Charlotte Smith with her "Elegiac Sonnets" (1784 onwards), to whom Wordsworth acknowledged a considerable debt. Wordsworth himself wrote hundreds of sonnets, of which amongst the best-known are "Upon Westminster Bridge", "The world is too much with us" and "London, 1802", addressed to Milton; his sonnets were essentially modelled on Milton's. Keats and Shelley also wrote major sonnets; Keats's sonnets used formal and rhetorical patterns inspired partly by Shakespeare, and Shelley innovated radically, creating his own rhyme scheme for the sonnet "Ozymandias". In her later years, Felicia Hemans took up the form in her series "Sonnets Devotional and Memorial". Sonnets were written throughout the 19th century, but, apart from Elizabeth Barrett Browning's "Sonnets from the Portuguese" and the sonnets of Dante Gabriel Rossetti, there were few very successful traditional sonnets. "Modern Love" (1862) by George Meredith is a collection of fifty 16-line sonnets about the failure of his first marriage.
Gerard Manley Hopkins wrote several major sonnets, often in sprung rhythm, such as "The Windhover", and also several sonnet variants such as the 10-line curtal sonnet "Pied Beauty" and the 24-line caudate sonnet "That Nature is a Heraclitean Fire". Hopkins's poetry was, however, not published until 1918. By the end of the 19th century, the sonnet had been adapted into a general-purpose form of great flexibility.
This flexibility was extended even further in the 20th century.
Irish poet William Butler Yeats wrote the major sonnet "Leda and the Swan", which uses half rhymes. Wilfred Owen's sonnet "Anthem for Doomed Youth" is another sonnet of the early 20th century. W. H. Auden wrote two sonnet sequences and several other sonnets throughout his career, and widened the range of rhyme-schemes used considerably. Auden also wrote one of the first unrhymed sonnets in English, "The Secret Agent" (1928).
While living in Provence during the 1930s, Anglo-African poet Roy Campbell documented his conversion to Roman Catholicism in the sonnet sequence "Mithraic Emblems". Later, he wrote other sonnets after witnessing the outbreak of the Spanish Civil War with his family in Toledo. Of these, the best are "Hot Rifles", "Christ in Uniform", "The Alcazar Mined", and "Toledo 1936".
Robert Lowell wrote five books of unrhymed "American sonnets", including his Pulitzer Prize-winning volume "The Dolphin" (1973). Half-rhymed, unrhymed, and even unmetrical sonnets have been very popular since 1950; perhaps the best works in the genre are Seamus Heaney's "Glanmore Sonnets" and "Clearances," both of which use half rhymes, and Geoffrey Hill's mid-period sequence "An Apology for the Revival of Christian Architecture in England". The 1990s saw something of a formalist revival, however, and several traditional sonnets have been written in the past decade, including Don Paterson's "40 Sonnets" (2015).
Contemporary word sonnets combine a variation of styles often considered to be mutually exclusive to separate genres, as demonstrated in works such as "An Ode to Mary".
In French poetry, sonnets are traditionally composed in the French alexandrine line, which consists of twelve syllables with a caesura in the middle.
In the 16th century, around Ronsard (1524–1585), Joachim du Bellay (1522–1560) and Jean Antoine de Baïf (1532–1589), there formed a group of radical young noble poets of the court (generally known today as La Pléiade, although use of this term is debated), who began writing in, amongst other forms of poetry, the Petrarchan sonnet cycle (developed around an amorous encounter or an idealized woman). The character of "La Pléiade" literary program was given in Du Bellay's manifesto, the "Defense and Illustration of the French Language" (1549), which maintained that French (like the Tuscan of Petrarch and Dante) was a worthy language for literary expression and which promulgated a program of linguistic and literary production (including the imitation of Latin and Greek genres) and purification.
In the aftermath of the Wars of Religion, French Catholic jurist and poet Jean de La Ceppède published the "Theorems", a sequence of more than 500 Alexandrine sonnets, with non-traditional rhyme schemes, about the Passion and Resurrection of Jesus Christ. Drawing upon the Gospels, Greek and Roman Mythology, and the Fathers of the Church, La Ceppède was praised by Saint Francis de Sales for transforming "the Pagan Muses into Christian ones." La Ceppède's sonnets often attack the Calvinist doctrine of a judgmental and unforgiving God by focusing on Christ's passionate love for the human race. Long forgotten, the 20th century witnessed a revival of interest in La Ceppède and his sonnets are now regarded as classic works of French poetry.
By the late 17th century, poets increasingly relied on stanza forms incorporating rhymed couplets, and by the 18th century fixed-form poems – and, in particular, the sonnet – were largely avoided. The resulting versification – less constrained by meter and rhyme patterns than Renaissance poetry – more closely mirrored prose.
The Romantics were responsible for a return to (and sometimes a modification of) many of the fixed-form poems used during the 15th and 16th centuries, as well as for the creation of new forms. The sonnet however was little used until the Parnassians brought it back into favor, and the sonnet would subsequently find its most significant practitioner in Charles Baudelaire (1821–1867).
The traditional French sonnet form was however significantly modified by Baudelaire, who used 32 different forms of sonnet with non-traditional rhyme patterns to great effect in his "Les Fleurs du mal".
The French Symbolists, such as Paul Verlaine and Stéphane Mallarmé, also revived the sonnet form.
Paul Verlaine's Alexandrine sonnet "Langueur", in which he compares himself to "the Empire at the end of its decadence" while drinking in a low dive, was embraced as a manifesto by the Decadent poets and by literary bohemia.
The sole confirmed surviving sonnet in the Occitan language is confidently dated to 1284, and is conserved only in troubadour manuscript "P", an Italian chansonnier of 1310, now XLI.42 in the Biblioteca Laurenziana in Florence. It was written by Paolo Lanfranchi da Pistoia and is addressed to Peter III of Aragon. It employs the rhyme scheme ABAB ABAB CDCDCD. This poem is historically interesting for its information on north Italian perspectives on the War of the Sicilian Vespers, the conflict between the Angevins and Aragonese for Sicily. Peter III and the Aragonese cause were popular in northern Italy at the time, and Paolo's sonnet is a celebration of his victory over the Angevins and Capetians in the Aragonese Crusade:
An Occitan sonnet, dated to 1321 and assigned to one "William of Almarichi", is found in Jean de Nostredame and cited in Giovanni Mario Crescimbeni's, "Istoria della volgar poesia". It congratulates Robert of Naples on his recent victory. Its authenticity is dubious. There are also two poorly regarded sonnets by the Italian Dante de Maiano.
Paulus Melissus (1539–1602) was the first to introduce both the sonnet and "terza rima" into German poetry. In his lifetime he was recognized as an author fully versed in Latin love poetry.
The sonnet became especially popular in Germany through the work of Georg Rudolf Weckherlin and reached prominence through the poetry of the German Romantics.
Germany's national poet, Johann Wolfgang von Goethe, also wrote many sonnets, using a rhyme scheme derived from Italian poetry. After his death, Goethe's followers created the German sonnet, which is rhymed abba bccb cdd cdd.
Sonnets were also written by August Wilhelm von Schlegel, Paul von Heyse, and others who established a tradition that reached fruition in the "Sonnets to Orpheus", a cycle of 55 sonnets written in 1922 by the Bohemian-Austrian poet Rainer Maria Rilke (1875–1926). It was first published the following year.
Rilke, who is "widely recognized as one of the most lyrically intense German-language poets", wrote the cycle in a period of three weeks experiencing what he described as a "savage creative storm". Inspired by the news of the death of Wera Ouckama Knoop (1900–1919), a playmate of Rilke's daughter Ruth, he dedicated them as a memorial, or "" (literally "grave-marker"), to her memory.
In 1920, German war poet Anton Schnack, whom Patrick Bridgwater has dubbed "one of the two unambiguously great" German poets of World War I and "the only German language poet whose work can be compared with that of Wilfred Owen", published the sonnet sequence "Tier rang gewaltig mit Tier" ("Beast Strove Mightily with Beast").
Also according to Bridgwater, "The poems in "Tier gewaltig mit Tier", follow an apparently chronological course which suggests that Schnack served first in France and then in Italy. They trace the course of the war, as he experienced it, from departing for the front, through countless experiences to which few other German poets with the exception of Stramm have done justice in more than isolated poems, to retreat and the verge of defeat."
The 60 sonnets that comprise "Tier rang gewaltig mit Tier", "are dominated by themes of night and death." Although his ABBACDDCEFGEFG rhyme scheme is typical of the sonnet form, Schnack also, "writes in the long line in free rhythms developed in Germany by Ernst Stadler." Patrick Bridgwater, writing in 1985, called "Tier rang gewaltig mit Tier", "without question the best single collection produced by a German war poet in 1914-18." Bridgwater adds, however, that Anton Schnack, "is to this day virtually unknown even in Germany."
The German Jewish poet Herbert Eulenberg also wrote many sonnets.
In the Indian subcontinent, sonnets have been written in the Assamese, Bengali, Dogri, English, Gujarati, Hindi, Kannada, Kashmiri, Malayalam, Manipuri, Marathi, Nepali, Oriya, Sindhi and Urdu languages. Urdu poets, also influenced by English and other European poets, took to writing sonnets in the Urdu language rather late. Azmatullah Khan (1887–1923) is believed to have introduced this format to Urdu literature in the very early part of the 20th century. The other renowned Urdu poets who wrote sonnets were Akhtar Junagarhi, Akhtar Sheerani, Noon Meem Rashid, Mehr Lal Soni Zia Fatehabadi, Salaam Machhalishahari and Wazir Agha. This example, a sonnet by Zia Fatehabadi taken from his collection "Meri Tasveer", is in the usual English (Shakespearean) sonnet rhyme-scheme.
Although sonnets had long been written in English by poets such as Edmund Spenser, William Butler Yeats, Tom Kettle, and Patrick Kavanagh, the sonnet form failed to enter poetry in the Irish language. This changed, however, during the Gaelic revival.
According to poet Louis De Paor, the sonnet was first introduced into Irish language poetry by Liam Gógan, who was imprisoned in Frongoch internment camp after the Easter Rising of 1916 and who joined the civil service after the creation of the Irish Free State in 1922. According to De Paor, Gógan believed that the everyday spoken Irish of the remaining Gaeltachts was "a diminished form of the older language, attenuated by English colonisation and inadequate to express the full complexity of the modern world." In response, Gógan drew upon older writings in Irish to create a literary language, and on his own knowledge of European literature and art as a source of inspiration. Although De Paor praises Gógan's "mastery of language", he calls the latter's literary idiom "synthetic" and dubs Gógan's introduction of the sonnet form into Irish "proof of artificiality and inauthenticity." De Paor concludes, however, "He was, nonetheless, the most impressive poet in Irish from Pearse's death until the emergence of a new generation of poets in the 1940s."
In 2009, poet Muiris Sionóid published a complete translation of William Shakespeare's 154 sonnets into Irish under the title "Rotha Mór an Ghrá" ("The Great Wheel of Love").
In an article about his translations, Sionóid wrote that Irish poetic forms are completely different from those of other languages and that both the sonnet form and the iambic pentameter line had long been considered "entirely unsuitable" for composing poetry in Irish. In his translations, Sionóid chose to closely reproduce Shakespeare's rhyme scheme and rhythms while rendering the poems into Irish.
In a copy that he gifted to the Shakespeare Birthplace Trust in Stratford Upon Avon, Sionóid wrote, "From Slaneyside to Avonside, from a land of bards to the greatest Bard of all; and long life and happiness to the guardians of the world’s most precious treasure."
The sonnet was introduced into Polish literature in the 16th century by Jan Kochanowski, Mikołaj Sęp-Szarzyński and Sebastian Grabowiecki.
In 1826, Poland's national poet, Adam Mickiewicz, wrote a sonnet sequence known as the "Crimean Sonnets" after the Tsar sentenced him to internal exile in the Crimean Peninsula. Mickiewicz's sonnet sequence focuses heavily on the culture and Islamic religion of the Crimean Tatars. The sequence was translated into English by Edna Worthley Underwood.
Sonnets were also written by Adam Asnyk, Jan Kasprowicz and Leopold Staff. Polish poets usually shape their sonnets according to Italian or French practice. The Shakespearean sonnet is not commonly used. Kasprowicz used a Shelleyan rhyme scheme: ABA BCB CDC DED EE. Polish sonnets are typically written in either hendecasyllables (5+6 syllables) or Polish alexandrines (7+6 syllables).
Alexander Pushkin's novel in verse "Eugene Onegin" consists almost entirely of 389 stanzas of iambic tetrameter with the unusual rhyme scheme "AbAbCCddEffEgg", where the uppercase letters represent feminine rhymes and the lowercase letters represent masculine rhymes. This form has come to be known as the "Onegin stanza" or the "Pushkin sonnet."
Unlike other traditional forms, such as the Petrarchan sonnet or Shakespearean sonnet, the Onegin stanza does not divide into smaller stanzas of four lines or two in an obvious way. There are many different ways this sonnet can be divided.
In post-Pushkin Russian poetry, the form has been utilized by authors as diverse as Mikhail Lermontov, the Catholic convert poet Vyacheslav Ivanov, Jurgis Baltrušaitis and , in genres ranging from one-stanza lyrical piece to voluminous autobiography. Nevertheless, the Onegin stanza, being easily recognisable, is strongly identified as belonging to Pushkin.
John Fuller's 1980 "The Illusionists" and Jon Stallworthy's 1987 "The Nutcracker" used this stanza form, and Vikram Seth's 1986 novel "The Golden Gate" is written wholly in Onegin stanzas.
In Slovenia the sonnet became a national verse form. The greatest Slovenian poet, France Prešeren, wrote many sonnets. His best known work worldwide is "Sonetni venec" ("A Wreath of Sonnets"), which is an example of crown of sonnets. Another work of his is the sequence "Sonetje nesreče" ("Sonnets of Misfortune"). In writing sonnets Prešeren was followed by many later poets. After the Second World War sonnets remained very popular. Slovenian poets write both traditional rhymed sonnets and modern ones, unrhymed, in free verse. Among them are Milan Jesih and Aleš Debeljak. The metre for sonnets in Slovenian poetry is iambic pentameter with feminine rhymes, based both on the Italian endecasillabo and German iambic pentameter.
According to Willis Barnstone, the introduction of the sonnet into Spanish language poetry began with a chance meeting in 1526 between the Catalan poet Juan Boscán and Andrea Navagero, the Venetian Ambassador to the Spanish Court. While the Ambassador was accompanying King Carlos V on a state visit to the Alhambra, he encountered Boscán along the banks of the Darro River in Granada. As they talked, Navagero strongly urged Boscán to introduce the sonnet and other Italian forms into Spanish poetry. A few days later, Boscán began trying to compose sonnets as he rode home and found the form, "of a very capable disposition to receive whatever material, whether grave or subtle or difficult or easy, and in itself good for joining with any style that we find among the approved ancient authors."
Spaniard Federico García Lorca also wrote sonnets. | https://en.wikipedia.org/wiki?curid=28260 |
Samba
Samba () is a Brazilian music genre and dance style, with its roots in Africa via the West African slave trade and African religious traditions, particularly of Congo, through the samba de roda genre of the northeastern Brazilian state of Bahia, from which it derived. Various forms of samba exist across Brazil, with popular rhythms that originated from African drumming and African polyrhythmic structures (beat and off-beat, time-line patterns, and the elementary pulse) and that are performed by the different instruments of the "bateria" of the samba schools; the famous "samba-enredo", however, has its origins in Rio de Janeiro.
Samba is recognized around the world as a symbol of Brazil and the Brazilian Carnival. Considered one of the most popular Brazilian cultural expressions, the samba has become an icon of Brazilian national identity.
The Bahian Samba de Roda (dance circle), was added to the UNESCO Intangible Cultural Heritage List in 2005. It is the main root of the "samba carioca", the samba that is played and danced in Rio de Janeiro.
The modern samba that emerged at the beginning of the 20th century is predominantly in a 2/4 time signature varied with the conscious use of a sung chorus to a batucada rhythm, with various stanzas of declaratory verses. Traditionally, the samba is played by strings (cavaquinho and various types of guitar) and various percussion instruments such as tamborim. Influenced by American orchestras in vogue since the Second World War and the cultural impact of US music post-war, samba began to use trombones, trumpets, choros, flutes, and clarinets.
In addition to distinct rhythms and meters, samba brings a whole historical culture of food, varied dances (miudinho, coco, samba de roda, and pernada), parties, clothes such as linen shirts, and the Naif painting of established names such as Nelson Sargento, Guilherme de Brito, and Heitor dos Prazeres. Anonymous community artists, including painters, sculptors, designers, and stylists, make the clothes, costumes, carnival floats, and cars, opening the doors of the samba schools. There is also a great tradition of ballroom samba in Brazil, with many styles. Samba de Gafieira is the most famous style in Rio de Janeiro, where ordinary people have gone to gafieira parties since the 1930s, and where the moves and identity of this dance emerged, becoming more and more distinct from its African, European, and Cuban origins and influences.
The National Samba Day is celebrated on December 2. The date was established at the initiative of Luis Monteiro da Costa, an alderman of Salvador, in honor of Ary Barroso, who composed "Na Baixa do Sapateiro" even though he had never been to Bahia. Thus 2 December marked the first visit of Ary Barroso to Salvador. Initially, this day was celebrated only in Salvador, but it eventually turned into a national holiday.
Samba is a local style in Southeastern Brazil and Northeast Brazil, especially in Rio de Janeiro, São Paulo, Salvador and Recife. Its importance as Brazil's national music transcends region, however; samba schools, samba musicians, and carnival organizations centered on the performance of samba exist in every region of the country, even though other musical styles prevail in various regions (for instance, in Southern Brazil, Center-West Brazil, and all of the Brazilian countryside, música sertaneja, music of the "sertão", or Brazilian country music, is the most popular style).
The etymology of samba is uncertain, and several derivations have been proposed.
One of the oldest records of the word samba appeared in the Pernambuco magazine "O Carapuceiro", dated February 1838, when Father Miguel Lopes Gama of Sacramento wrote against what he called "the samba d'almocreve" – referring not to the future musical genre, but to a kind of merriment (dance drama) popular among black people of that time. According to Hiram Araújo da Costa, over the centuries, the festivals of dances of slaves in Bahia came to be called samba.
In the middle of the 19th century, the word samba denoted different types of music made by African slaves, conducted by different types of batuque, but it assumed its own characteristics in each Brazilian state, not only because of the diversity of the tribes from which the slaves came, but also because of the peculiarity of each region in which they settled. Some of these popular dances were known as Baião, Bochinche, Candombe (Candomblé), Catêrêtê, Caxambú, Choradinho, Côco-inchádo, Cocumbí, Córta-jáca, Cururú, Furrundú, Jongo, Lundú, Maracatú, Maxíxe, Quimbête, São-Gonçalo, Saramba; not to mention the many varieties of the Portuguese Fandango and the Indio dance Puracé.
In Argentina, there is a dance called "zamba", a name which seems to share etymological origins with the samba, but the dance itself is quite different.
Samba-enredo or samba de enredo is a subgenre of samba, performed by a samba school ("escola de samba") for the festivities of Brazilian Carnival. A "samba-enredo" is a samba song thematically bound to the special theme ("enredo") selected by its samba school, narrating in lyrical form the story the school tells in its parade. Each samba school performs one song in the Carnaval parade. A new song must be written each year for each school, and the songs must be on Brazilian topics. The Carnaval parade is among other things a samba competition, with judges who rank the new sambas according to their perceived merit.
Being by definition topical, "sambas-enredo" are seldom performed outside of the Carnaval environment.
For each samba school, choosing the following year's samba-enredo is a long process. Well in advance of the Carnaval parade, each samba school holds a contest for writing the song. The song is written by samba composers from within the school itself (the "Ala dos Compositores") or sometimes by outside composers, normally in "parcerias" (partnerships). Each school receives many songs, sometimes hundreds, each hoping to be the next samba-enredo for that year. The samba-enredo is written by these composers only after the Carnival Art Director, or "Carnavalesco", officially publishes the samba school's parade theme synopsis for the year. After a careful explanation of the parade theme, often given by the Carnival Art Director himself, composers may ask questions in order to clarify the synopsis, so they can start writing their samba-enredos.
The schools select their winning samba ("Samba Campeão") through a long elimination process, called the "concurso de Samba-Enredo", "Eliminatórias", or "Disputa de Samba". The competition between the composers' collectives ("parcerias") runs over several weeks or even months between August/September and October, and ends with three or four sambas competing in the great "Final do Samba"; the winner becomes the samba school's anthem for the year. During this period, the finalist sambas-enredo are played with music and are voted on by the leaders of the samba school and the "Carnavalesco", the artistic director of the school for Carnaval. The most important night in this process is the "final de samba", or samba final, when the samba school decides between the last two or three samba-enredos. At the end of this months-long process, the winning samba-enredo is selected and becomes the voice of the samba school for the next year's Carnaval; it is this song that is sung during the school's parade in the sambadrome. This process normally happens in Brazil from August until November, and today is highly professionalised, with samba composers hiring fans, producing CDs and banners, and throwing parties to promote their samba-enredo. The chosen song is also uploaded to the school's YouTube and Facebook pages to reach even more fans.
It is important to note that the samba-enredo is one of the criteria used by the judging committee to decide the winner of the Carnaval parade competition. The samba-enredo must be well sung by the samba school's "puxador" (lead singer) or the school will lose points. While the puxador sings, everyone marching in the Carnaval parade sings the samba-enredo along with him, and harmony is another judging criterion.
Although samba exists throughout Brazil – especially in the states of Bahia, Maranhão, Minas Gerais, and São Paulo – in the form of various popular rhythms and dances that originated from the regional batuque of the eastern Brazilian state of Bahia (batuque is also a music form found in Cape Verde), samba is frequently identified as a musical expression of urban Rio de Janeiro, where it developed during the first years of the 20th century. Early styles of samba – and specifically samba de roda – are traced back to the Recôncavo region of Bahia during the 17th century and the informal dancing that followed a candomblé ceremony. It was in Rio de Janeiro that the dance practised by former slaves who had migrated from Bahia came into contact with and incorporated other genres played in the city (such as the polka, the maxixe, the lundu, and the xote), acquiring a completely unique character and giving rise to the urban "samba carioca" and the samba schools. Within a samba school, the "Carnavalesco", its artistic director, plays an important part in creating the new theme and conceiving its presentation, designing all the "fantasias" (costumes) and "alegorias" (floats) and holding the "Tira-Dúvida" session with the composers of the competing sambas. Samba schools are large organizations of up to 5,000 people which compete annually in the Carnival with thematic floats, elaborate costumes, and original music.
During the first decade of the 20th century, some songs under the name of samba were recorded, but these recordings did not achieve great popularity. However, in 1917, "Pelo Telefone" ("By Telephone") was recorded, and it is considered the first true samba. The song was claimed to be authored by Ernesto dos Santos, best known as Donga, with co-composition attributed to Mauro de Almeida, a well-known Carnival columnist. In fact, "Pelo Telefone" was created by a collective of musicians who participated in celebrations at the house of Tia Ciata (Aunt Ciata); it was eventually registered by Donga at the National Library.
""Pelo Telefone"" was the first composition to achieve great success with the style of samba and to contribute to the dissemination and popularization of the genre. From that moment on, samba started to spread across the country, initially associated with Carnival and then developing its own place in the music market. There were many composers, including Heitor dos Prazeres, João da Bahiana, Pixinguinha, and Sinhô, but the sambas of these composers were "amaxixados" (a mix of maxixe), known as sambas-maxixes.
The contours of the modern samba came only at the end of the 1920s, from the innovations of a group of composers of carnival "blocks" (groups) in the neighborhoods of Estácio de Sá and Osvaldo Cruz, and the hills of Mangueira, Salgueiro, and São Carlos. Since then, there have been many great names in samba, such as Ismael Silva, Cartola, Ary Barroso, Noel Rosa, Ataulfo Alves, Wilson Batista, Geraldo Pereira, Zé Kéti, Candeia, Ciro Monteiro, Nelson Cavaquinho, Elton Medeiros, Paulinho da Viola, Martinho da Vila, and many others.
As the samba consolidated as an urban and modern expression, it began to be played on radio stations, spreading across the hills and neighborhoods to the affluent southern areas of Rio de Janeiro. Initially viewed with prejudice and discrimination because it had black roots, the samba, because of its hypnotic rhythms and melodic intonations in addition to its playful lyrics, eventually conquered the white middle class as well. Other musical genres derived from samba, such as samba-canção, partido alto, samba-enredo, samba de gafieira, samba de breque, bossa nova, samba-rock, and pagode, have all earned names for themselves.
The samba is frequently associated abroad with football and Carnaval. This history began with the international success of "Aquarela do Brasil", by Ary Barroso, followed by Carmen Miranda (supported by the Getúlio Vargas government and the US Good Neighbor policy), who brought samba to the United States. Bossa nova later carried samba-derived music to audiences around the world. Brazilian percussionist and studio musician Paulinho Da Costa, currently based in Los Angeles, has incorporated the rhythms and instrumentation of the samba into the albums of hundreds of American, European and Japanese artists, including producer Quincy Jones, jazz performer Dizzy Gillespie, pop singer Michael Jackson and vocalist Barbra Streisand.
The success of the samba in Europe and Japan only confirms its ability to win fans regardless of language. Currently, there are hundreds of samba schools on European soil, scattered among countries such as Germany, Belgium, the Netherlands, France, Sweden, Switzerland and the United Kingdom. In Japan, record labels have invested heavily in releasing discs by veteran sambistas, eventually creating a market composed solely of the catalogues of Japanese record labels.
From the second half of the 19th century onward, blacks, mestizos, and ex-soldiers of the War of Canudos came to Rio de Janeiro from various parts of Brazil (mainly Bahia) and settled in the vicinity of Morro da Conceição, Pedra do Sal, Praça Mauá, Praça Onze, Cidade Nova, Saúde, and the Zona Portuária. These settlements formed poor communities that came to be called favelas (a term that later became synonymous with the irregular dwellings of the poor).
These communities would be the scene of a significant part of Brazilian black culture, particularly with respect to Candomblé and "samba amaxixado" at that time. Among the early highlights were the musician and dancer Hilário Jovino Ferreira—responsible for the founding of several blocks of afoxé and Carnival's ranchos—and "Tias Baianas", a term given to the female descendants of Bahian slaves.
Thus, samba as a musical genre was born in the houses of the "Tias Baianas" (Bahian aunts) at the beginning of the 20th century, as a descendant of the lundu style, of the candomblé de terreiro parties, of the umbigada and of capoeira's "pernadas", marked on the pandeiro, the "prato-e-faca" (plate-and-knife) and the "palmas" (hand claps). The most famous of the aunts was Tia Ciata: her house, situated in Rio's central region of Praça Onze, was one of the most popular places to meet, sing, dance and compose together. It was also known as a place where the immigrants from Bahia practised religious rituals and ceremonies of Candomblé together in the hidden backyards, which were used as "terreiros". There is some controversy about the term "samba-raiado", one of the first names given to the samba. It is known that the "samba-raiado" was marked by the rural ("sertanejo") sound and accent brought to Rio de Janeiro by the Bahian "tias". According to João da Baiana, the "samba-raiado" was the same as the "chula raiada" or samba de partido-alto. For the sambista Caninha, this was the first name he heard at the home of Tia Dadá. At the same time, there were the "samba-corrido", a strand more closely tied to the rural Bahian accent, and the "samba-chulado", a more rhymed and melodic style that characterized the urban samba carioca.
By the 1870s, Republican propagandists were attempting to prohibit samba on the pretext that folkloric dances shamed Brazil's national image. It would take the edict of a federal administration to halt the persecution of neighborhood samba groups and to recognize their parades officially. Later, the views of anthropologist Gilberto Freyre, and of Getúlio Vargas, who became Brazil's new populist president in 1930, provided the country with fresh perspectives on racial mixing. Under Vargas, samba schools and carnaval parades were supported by the state and quickly established themselves all over Brazil. Samba significantly benefited from these political efforts to create a homogeneous national culture. While certain types of music suggested different racial or class origins, samba dissipated social antagonisms and helped unify a society that varied in its origins, appearance, and ways of living and thinking. Samba's triumph over the airwaves allowed it to penetrate all sectors of Brazilian society.
According to anthropologist Hermano Vianna, the configuration of samba as a symbol of Brazilianness was made possible by cultural exchange between the working classes and the intellectual elite. He cites a guitar gathering attended by the anthropologist Gilberto Freyre, the historian Sérgio Buarque de Holanda, the promoter and journalist Prudente de Moraes Neto, the classical composer Heitor Villa-Lobos and the pianist Lucio Gallet, all representatives of the intellectual and cultural elite of white origin, on the one hand, and by the musician Pixinguinha and the samba composers Donga and Patrício Teixeira, from the popular and mixed-race strata, on the other, noting how the occasion marked the meeting of two different, even opposing, groups of Brazilian society.
The urban carioca samba is the anchor of 20th century "Brazilian samba" par excellence. However, before this type of samba was to consolidate as the "national samba" in Brazil, there were traditional forms of sambas in Bahia and São Paulo.
The rural Bahian samba acquired additional names according to its choreographic variations – for example, the "samba-de-chave", in which the solo dancer pretended to search for a key in the middle of the "roda" and, once it was found, was replaced. The poetic structure of Bahian samba followed the call-and-response pattern—composed of a single verse sung solo, followed by another repeated by the chorus of dancers as a refrain. With no chorus, the samba is called "samba-corrido", which is an uncommon variant. The chants were led by a single singer, one of the musicians, or the solo dancer. Another peculiarity of Bahian samba was a form of competition that the dances sometimes presented: a dispute between participants to see who performed better. Besides the umbigada, common to all Bahian samba, the Bahian variety presented three basic steps: "corta-a-jaca", "separa-o-visgo", and "apanha-o-bag". There is also another choreographic element danced by women: the "miudinho" (which also appeared in São Paulo, as a solo dance in the center of the "roda"). The instruments of the Bahian samba were pandeiros, shakers, guitars, and sometimes castanets and berimbaus.
In São Paulo state, samba became the domain of blacks and caboclos. In rural areas, samba can occur without the traditional umbigada. There are also other choreographic variations—the dancers may be placed in rows with men on one side and women on the other. The instruments of the samba paulista were violas and pandeiros. It is possible that the early arrangement of the "roda", as in Goiás, was modified by the influence of the quadrilha or the cateretê. According to historian Luís da Câmara Cascudo, the influence of the city on the samba can be observed in the fact that it is also danced in pairs.
One of the most notable groups of São Paulo's samba, Demônios da Garoa (Drizzle's Demons), had a strong link with Adoniran Barbosa, who composed their songs. Songs like "Samba do Arnesto" and "Saudosa Maloca" became legendary, recognized as "the real Samba Paulistano". The group is still active, but with a different lineup. In 2000, one of their most famous songs, "Trem das Onze", was elected an official symbol of the city of São Paulo.
Tia Ciata, grandmother of the composer Bucy Moreira, was responsible for the consolidation of samba carioca. According to the folklore of that time, for a samba musician to achieve success, he would have to pass through the house of Tia Ciata and be approved in the "rodas de samba". Many compositions were created and sung in improvisation there, among them the samba "Pelo Telefone" (by Donga and Mauro de Almeida), of which many other versions also existed; it entered the history of samba as the first samba ever recorded, in 1917.
Other recordings had been labeled samba before "Pelo Telefone", but this composition, by the duo Donga and Mauro de Almeida, is regarded as the founding work of the genre. Even so, the song remains discussed, and its proximity to the maxixe led it finally to be designated a samba-maxixe. This strain, influenced by the maxixe dance and played mainly on the piano—unlike the Rio samba of the hills (morros)—had as its leading composer the musician Sinhô, self-titled "o rei do samba" ("the king of samba"), who, with other pioneers such as Heitor dos Prazeres and Caninha, laid the first foundations of the musical genre.
The growing shantytowns (favelas) in the hills of suburban Rio would become the home of new musical talents. Almost simultaneously, the "samba carioca", which was born in the city center, would climb the slopes of the hills and spread out to the periphery, to the point that, over time, it came to be identified as "samba de morro" (samba from the hills).
At the end of the 1920s, the carnival samba of the blocks of the districts of Estácio de Sá and Oswaldo Cruz was born, and in the hills of Mangueira, Salgueiro, and São Carlos there were innovations in samba rhythm that persist to the present day. This group, the "Turma do Estácio", from which would arise "Deixa Falar", the first samba school in Brazil, was formed by composers in the neighborhood of Estácio, including Alcebíades Barcellos (aka Bide), Armando Marçal, Ismael Silva, and Nilton Bastos, together with "malandros" such as Baiaco, Brancura, Mano Edgar, and Mano Rubem. The "Turma do Estácio" marked the history of Brazilian samba by injecting a faster pace into the genre as then performed, which earned the endorsement of middle-class youth such as the former law student Ary Barroso and the former medical student Noel Rosa.
Initially a "", then a Carnival block, and finally a samba school, the "Deixa Falar" was the first to Rio Carnival parade in the sound of an orchestra made up of percussion surdos, tambourines, and cuícas, who joined pandeiro and shakers. This group was instrumental and is called "bateria", and it lends itself to the monitoring of a type of samba that was quite different from those of Donga, Sinhô, and Pixinguinha. The samba of Estácio de Sá signed up quickly as the samba carioca par excellence.
The "Turma do Estácio" has made the appropriate rhythmic samba were so it could be accompanied in the carnival's parade, thus distancing the progress "samba-amaxixado" of composers such as Sinhô. Moreover, its "rodas" of samba were attended by composers from other Rio hills, as Cartola, Carlos Cachaça, and then Nelson Cavaquinho, e Geraldo Pereira, Paulo da Portela, Alcides Malandro Histórico, Manacéia, Chico Santana, and others. Accompanied by a pandeiro, a tambourine, a cuíca and a surdo, they created and spread the samba-de-morro.
After the founding of "Deixa Falar", the phenomenon of the samba schools took over the scene and helped boost Rio's samba subgenres, from the partido-alto, sung in improvised challenges in the candomblé terreiros, to the samba-enredo.
From the 1930s, the popularization of radio in Brazil helped to spread samba across the country, mainly the subgenres "samba-canção" and "samba-exaltação". The "samba-canção" was launched in 1928 with the recording "Ai, yo-yo" by Aracy Cortes. Also known as "mid-year samba", the "samba-canção" became established in the following decade. It was a slow, rhythmic samba with an emphasis on melody and, generally, easy acceptance. This strand was later influenced by foreign rhythms, first by the foxtrot in the 1940s and then by the bolero in the 1950s. The most famous composers were Noel Rosa, Ary Barroso, Lamartine Babo, Braguinha (also known as João de Barro), and Ataulfo Alves. Other practitioners of this style were Antonio Maria, Custódio Mesquita, Dolores Duran, Fernando Lobo, Ismael Neto, Lupicínio Rodrigues, Batatinha, and Adoniran Barbosa (the latter in sharply satirical doses).
The ideology of Getúlio Vargas's Estado Novo changed the samba scene. With "Aquarela do Brasil", composed by Ary Barroso and recorded by Francisco Alves in 1939, the "samba-exaltação" became the first samba success abroad. This kind of samba was characterized by extended melodies and patriotic verses. Carmen Miranda popularized samba internationally through her Hollywood films.
With the support of the Brazilian president Getúlio Vargas, the samba won status as the "official music" of Brazil. With this status of national identity came the recognition of the intellectual and classical composer Heitor Villa-Lobos, who arranged a recording with the maestro Leopold Stokowski in 1940, which involved Cartola, Donga, João da Baiana, Pixinguinha, and Zé da Zilda.
Also in the 1940s, there arose a new crop of artists: Francisco Alves, Mário Reis, Orlando Silva, Silvio Caldas, Aracy de Almeida, Dalva de Oliveira, and Elizeth Cardoso, among others. Others such as Assis Valente, Ataulfo Alves, Dorival Caymmi, Herivelto Martins, Pedro Caetano, and Synval Silva led the samba to the music industry.
A movement was born in the southern area of Rio de Janeiro, strongly influenced by jazz, marking the history of samba and Brazilian popular music in the 1950s. The bossa nova emerged at the end of that decade, with an original rhythmic accent which divided the phrasing of the samba and added influences of impressionist music and jazz and a different style of singing which was both intimate and gentle. After precursors such as Johnny Alf, João Donato, and musicians like Luis Bonfá and Garoto, this subgenre was inaugurated by João Gilberto, Tom Jobim, and Vinicius de Moraes. It then had a generation of disciples and followers including Carlos Lyra, Roberto Menescal, Durval Ferreira, and groups like Tamba Trio, Bossa 3, Zimbo Trio, and The Cariocas.
The sambalanço also began at the end of the 1950s. It was a branch of the popular bossa nova (most appreciated by the middle class) which also mingled samba rhythms and American jazz. Sambalanço was often found at suburban dances of the 1960s, 1970s, and 1980s. This style was developed by artists such as Bebeto, Bedeu, Scotland 7, Djalma Ferreira, the Daydreams, Dhema, Ed Lincoln, Elza Soares, and Miltinho, among others. In the 21st century, groups like Funk Como Le Gusta and Clube do Balanço continue to keep this subgenre alive.
In the 1960s, Brazil became politically divided with the arrival of a military dictatorship, and the leftist musicians of bossa nova started to draw attention to the music made in the favelas. Many popular artists were discovered at this time. Musicians like Cartola, Nelson Cavaquinho, Guilherme de Brito, Velha Guarda da Portela, Zé Keti, and Clementina de Jesus recorded their first albums during this time.
In the 1970s, samba returned strongly to the air waves with composers and singers like Paulinho da Viola, Martinho da Vila, Clara Nunes, and Beth Carvalho dominating the hit parade. Great samba lyricists like Paulo César Pinheiro (especially in the praised partnership with João Nogueira) and Aldir Blanc started to appear around that time.
With bossa nova, samba moved further away from its popular roots. The influence of jazz deepened, and techniques were incorporated from classical music. From a 1962 concert at Carnegie Hall in New York, bossa nova reached worldwide success. But over the 1960s and 1970s, many artists who emerged—like Chico Buarque, Billy Blanco, Martinho da Vila, and Paulinho da Viola—advocated the return of the samba beat in its traditional form. They also called for the return of veterans like Candeia, Cartola, Nelson Cavaquinho, and Zé Kéti. In the early 1960s, the "Movement for the Revitalization of Traditional Samba", promoted by the Center for Popular Culture, started in partnership with the Brazilian National Union of Students. During the 1960s, some samba groups appeared, formed out of previous experience in the world of samba and of songs recorded by great names of Brazilian music. Among them were Os Cinco Crioulos, A Voz do Morro, Mensageiros do Samba, and Os Cinco Só.
Outside the main scene of the Brazilian Popular Music festivals, the sambists founded the Bienal do Samba in the late 1960s, and it became the space for the big names of the genre and their followers. Also in that decade, the "samba-empolgação" (samba-excitement) of the carnival blocks "Bafo da Onça", "Cacique de Ramos", and "Boêmios de Irajá" came into being.
The samba-funk also appeared in this period, emerging at the end of the 1960s with pianist Dom Salvador and his group, which merged samba with American funk, then newly arrived in Brazil. With the departure of Dom Salvador to the United States the band broke up, but at the beginning of the 1970s some ex-members, including Luiz Carlos, José Carlos Barroso, and Oberdan, joined Cristóvão Bastos, Jamil Joanes, Cláudio Stevenson and Lúcio da Silva to form Banda Black Rio. The new group deepened Dom Salvador's work, blending the binary bar of Brazilian samba with the quaternary bar of American funk in a performance driven by drums and bass. Even after Banda Black Rio's heyday, British disc jockeys began to play the group's work in the 1980s, and it was rediscovered throughout Europe, mainly in the UK and Germany.
At the turn of the 1960s to the 1970s, the young Martinho da Vila would give a new face to the traditional sambas-enredo established by authors such as Silas de Oliveira and Mano Décio da Viola, compressing them and expanding their potential in the music market. Martinho popularized the style of the partido-alto with songs like "Casa de Bamba" and "Pequeno Burguês" and launched his first album in 1969.
Although the term "partido alto" originally arose at the beginning of the 1900s to describe instrumental music, the term came to be used to signify a type of samba which is characterized by a highly percussive beat of pandeiro, using the palm of the hand in the center of the instrument in place. The harmony of Partido alto is always higher in pitch, usually played by a set of percussion instruments (usually surdo, pandeiro, and tambourine) and accompanied by a cavaquinho and/or a guitar.
Also in that decade, some popular singers and composers appeared in samba, including Alcione, Beth Carvalho, and Clara Nunes. In the city of São Paulo, Geraldo Filme was one of the leading names in samba paulistano, alongside Germano Mathias, Osvaldinho da Cuíca, Tobias da Vai-Vai, Aldo Bueno, and Adoniran Barbosa.
In the early 1980s, after having been eclipsed by the popularity of disco and Brazilian rock, Samba reappeared in the media with a musical movement created in the suburbs of Rio de Janeiro. It was the "pagode", a renewed samba, with new instruments like the banjo and the tan-tan. It also had a new language that reflected the way that many people actually spoke by including heavy "gíria", or slang. The most popular artists were Zeca Pagodinho, Almir Guineto, Grupo Fundo de Quintal, Jorge Aragão, and Jovelina Pérola Negra.
In 1995, one of the most popular pagode groups, Gera Samba, later renamed "É o Tchan", came out of Salvador. This group created the most sexualized dance of the pagode of the 1990s, mixing in a lot of Axé music. Some groups, like Patrulha do Samba and Harmonia do Samba, also mixed in a bit of Axé. Samba, as a result, morphed during this period, embracing types of music that were growing popular in the Caribbean, such as rap, reggae, and rock. Examples of samba fusions with this popular music are samba-rap, samba-rock, and samba-reggae, all of which were efforts not only to entertain but also to unify Blacks throughout the Americas culturally and politically through song. In other words, samba-rap and the like often carried lyrics that encouraged Black pride and spoke out against social injustice. Samba, however, is not accepted by all as the national music of Brazil, or as a valuable art form. Light-skinned "upper-class" Brazilians often associated samba with dark-skinned blacks because of its arrival from West Africa. As a result, there are some light-skinned Brazilians who claim that samba is the music of low-class, dark-skinned Brazilians and, therefore, is a "thing of bums and bandits".
Samba continued to act as a unifying agent during the 1990s, when Rio stood as a national Brazilian symbol. Even though it was not the capital city, Rio acted as a Brazilian unifier, and the fact that samba originated in Rio helped the unification process. In 1994, the FIFA World Cup had its own samba composed for the occasion, the "Copa 94". The 1994 FIFA World Cup, in which samba played a major cultural role, holds the record for highest attendance in World Cup history. Samba is thought to be able to unify because individuals participate in it regardless of social or ethnic group. Today, samba is viewed as perhaps the only uniting factor in a country fragmented by political division.
The Afro-Brazilians played a significant role in the development of the samba over time. This change in the samba was an integral part of Brazilian nationalism, which was referred to as "Brazilianism".
"What appears to be new is the local response to that flow, in that instead of simply assimilating outside influences into a local genre or movement, the presence of foreign genres is acknowledged
as part of the local scene: samba-rock, samba-reggae, samba-rap.
But this acknowledgment does not imply mere imitation of the foreign
models or, for that matter, passive consumption by national audiences." – Gerard Béhague, "Selected Reports in Ethnomusicology." Pg. 84
From the year 2000 onwards, some artists sought to reconnect with the most popular traditions of samba. Marquinhos de Oswaldo Cruz and Teresa Cristina, among others, contributed to the revitalization of the Lapa district of Rio de Janeiro. In São Paulo, samba resumed its tradition with concerts at the Sesc Pompéia and with the work of several groups, including Quinteto em Branco e Preto, which developed the event "Pagode da Vela". These all helped to attract many artists from Rio de Janeiro, who established residence in neighborhoods of the São Paulo capital.
Samba was also mixed with drum and bass, giving rise to sambass. Despite its evolution over the decades, samba remains a traditional dance and cannot be considered a sport.
In 2004, the minister of culture Gilberto Gil, through the Institute of National Historical and Artistic Heritage, submitted to UNESCO an application to declare samba a Cultural Heritage of Humanity in the category of "Intangible Goods". In 2005, the samba de roda of the Bahian Recôncavo was proclaimed part of the Heritage of Humanity by UNESCO, in the category of "Oral and intangible expressions". Samba is often performed in dance shows such as Strictly Come Dancing.
Snowboard
Snowboards are boards with both feet secured to the same deck; they are wider than skis and are designed to glide on snow. Snowboard widths range between 6 and 12 inches (15 to 30 centimeters). Snowboards are differentiated from monoskis by the stance of the user: in monoskiing, the user stands with the feet in line with the direction of travel (facing the tip of the monoski/downhill, parallel to the board's long axis), whereas in snowboarding, the user stands with the feet transverse (more or less) to the long axis of the board. Users of such equipment are referred to as snowboarders. Commercial snowboards generally require extra equipment, such as bindings and special boots, which help secure both feet of the snowboarder, who generally rides in an upright position. These boards are commonly used by people at ski hills or resorts for leisure, entertainment, and competitive purposes in the activity called snowboarding.
In 1917, Vern Wicklund, at the age of 13, fashioned a shred deck in Cloquet, Minnesota. This modified sled was dubbed a "bunker" by Vern and his friends. He, along with relatives Harvey and Gunnar Burgeson, patented the very first snowboard twenty-two years later, in 1939.
However, Sherman Poppen of Muskegon, Michigan, came up with what most consider the first snowboard in 1965, called the Snurfer (a blend of "snow" and "surfer"). He sold his first four "snurfers" to Randall Baldwin Lee of Muskegon, who worked at the Outdoorsman Sports Center at 605 Ottawa Street in Muskegon (owned by Justin and Richard Frey of Muskegon). Randy believes that Sherman took an old water ski and made it into the Snurfer for his children, who were bored in the winter, and added bindings to keep their boots secure (Randy Lee, October 14, 2014). Commercially available Snurfers in the late 1960s and early 1970s had no bindings. The snowboarder held onto a looped nylon lanyard attached to the front of the Snurfer and stood upon several rows of square U-shaped staples that were partially driven into the board but protruded about 1 cm above the board's surface to provide traction even when packed with snow. Later Snurfer models replaced the staples with ridged rubber grips running longitudinally along the length of the board (originally) or, subsequently, with subrectangular pads upon which the snowboarder would stand. It is widely accepted that Jake Burton Carpenter (founder of Burton Snowboards) and/or Tom Sims (founder of Sims Snowboards) invented modern snowboarding by introducing bindings and steel edges to snowboards.
In 1981, a couple of Winterstick team riders went to France at the invitation of Alain Gaimard, marketing director at Les Arcs. After seeing an early film of this event, French skiers/surfers Augustin Coppey, Olivier Lehaneur, Olivier Roland and Antoine Yarmola made their first successful attempts during the winter of 1983 in France (Val Thorens), using primitive, home-made clones of the Winterstick. Starting with pure powder boards—skateboard-shaped wooden boards equipped with aluminium fins, foot-straps and leashes—their technology evolved within a few years to pressed wood/fiber composite boards fitted with polyethylene soles, steel edges and modified ski-boot shells. These were more suitable for the mixed conditions encountered while snowboarding mainly off-piste but having to get back to ski lifts on packed snow.
In 1985, James Bond popularized snowboarding in the movie "A View to a Kill". In the scene, he escapes Soviet agents who are on skis. The snowboard he used was a Sims snowboard ridden by founder Tom Sims. The makeshift snowboard was made from the debris of a snowmobile that exploded.
At the same time, the Snurfer was turning into a snowboard on the other side of the Iron Curtain.
In 1980, Aleksey Ostatnigrosh and Alexei Melnikov, two members of the only Snurfer club in the Soviet Union, started changing the Snurfer design to allow jumping and to improve control on hard-packed snow. Being completely unaware of developments in the Snurfer/snowboard world, they attached a bungee cord to the Snurfer tail, which the rider could grab before jumping. Later, in 1982, they attached a foot binding to the Snurfer. The binding was only for the back foot and had a release capability.
In 1985, after several iterations of the Snurfer binding system, Aleksey Ostatnigrosh made the first Russian snowboard. The board was cut out of a single vinyl plastic sheet and had no metal edges. The bindings were attached by a central bolt and could rotate while on the move or be fixed at any angle.
In 1988, OstatniGROsh and MELnikov started the first Russian snowboard manufacturing company, GROMEL.
The first fibreglass snowboard with bindings was made by Santa Cruz inventor Gary Tracy of GARSKI, with the assistance of Bill Bourke, in their factory in Santa Cruz in 1982. One of these original boards is still on display at Santa Cruz Skateboards in Capitola, CA. In 1983, a teenager named David Kemper began building his first snowboards in his garage in Ontario, Canada. By 1987, Kemper Snowboards had launched and was one of the top snowboard brands alongside Burton, Sims, and Barfoot.
By 1986, although still very much a minority sport, commercial snowboards started appearing in leading French ski resorts.
In 2008, selling snowboarding equipment was a $487 million industry. In 2008, average equipment ran about $540 including board, boots, and bindings.
The bottom or 'base' of the snowboard is generally made of UHMW polyethylene and is surrounded by a thin strip of steel, known as the 'edge'. Artwork was primarily printed on PBT using a sublimation process in the 1990s, but poor color retention and fading after moderate use moved high-end producers to longer-lasting materials.
Snowboards come in several different styles, depending on the type of riding intended:
Snowboards are generally constructed of a hardwood core which is sandwiched between multiple layers of fibreglass. Some snowboards incorporate more exotic materials such as carbon fiber, Kevlar, and aluminium (as a honeycomb core structure), and some have incorporated piezo dampers. The front (or "nose") of the board is upturned to help the board glide over uneven snow. The back (or "tail") of the board is also upturned to enable backwards (or "switch") riding. The base (the side of the board which contacts the ground) is made of polyethylene plastic. The two major types of base construction are extruded and sintered. An extruded base is a basic, low-maintenance design which consists of the plastic base material melted into its form. A sintered base uses the same material as an extruded base, but first grinds the material into a powder, then, using heat and pressure, molds the material into its desired form. A sintered base is generally softer than its extruded counterpart, but has a porous structure which enables it to absorb wax. This wax absorption (along with a properly done 'hot wax') greatly reduces surface friction between the base and the snow, allowing the snowboard to travel on a thin layer of water. Snowboards with sintered bases are much faster, but require semi-regular maintenance and are easier to damage.

The bottom edges of the snowboard are fitted with a thin strip of steel, just a couple of millimeters wide. This steel edge allows the board to grab or 'dig into' hard snow and ice (like the blade of an ice skate), and also protects the board's internal structure.

The top of the board is typically a layer of acrylic with some form of graphic designed to attract attention, showcase artwork, or serve a purpose similar to that of any other form of printed media. Flite Snowboards, an early designer, pressed the first closed-molded boards in a garage in Newport, Rhode Island, in the mid-1980s. Snowboard topsheet graphics can be a highly personal statement, and many riders spend many hours customizing the look of their boards. The top of some boards may even include thin inlays of other materials, and some are made entirely of epoxy-impregnated wood. The base of the board may also feature graphics, often designed in a manner that makes the board's manufacturer recognizable in photos.
Snowboard designs differ primarily in:
The various components of a snowboard are:
Amid climate change, the winter sports community is a growing environmentalist group that depends on snowy winters for the survival of its culture. This movement is, in part, being energized by a nonprofit named "Protect Our Winters" and the legendary rider Jeremy Jones. The organization provides education initiatives and support for community-based projects, and is active in climate discussions with the government. Alongside this organization, many other winter sports companies see the ensuing calamity and are striving to produce products that are less damaging to the environment. Snowboard manufacturers are adapting to decreasing supplies of petroleum and timber with ingenious designs.
When it comes down to it, "the least of our worries will be that skiers and snowboarders don't get to go play," says Jeremy Jones.
Snowboard boots are mostly considered soft boots, though alpine snowboarding uses a harder boot similar to a ski boot. A boot's primary function is to transfer the rider's energy into the board, protect the rider with support, and keep the rider's feet warm. A snowboarder shopping for boots is usually looking for a good fit, flex, and looks. Boots can have different features, such as lacing styles, heat-moldable liners, and gel padding, that the snowboarder might also be looking for. Tradeoffs include rigidity versus comfort, and built-in forward lean versus comfort.
There are three incompatible types:
There are three main lacing systems: traditional laces, the BOA system (a thin metal cord tightened with a round dial placed on the side of the boot), and the fast-lock system (a thin cord that you simply pull and slide into a lock). Boots may have a single lacing system; a single lacing system that tightens the foot and the leg separately; a single lacing system with a mechanism that pulls down the front pad in the center as the boot is tightened; two combined lacing systems where one tightens the whole boot and the other tightens just the center (similar to the previous one); or two combined lacing systems where one tightens the lower part (the foot) and the other tightens the upper part (the leg).
Bindings are separate components from the snowboard deck and are a very important part of the total snowboard interface. The bindings' main function is to hold the rider's boot in place tightly in order to transfer their energy to the board. Most bindings are attached to the board with three or four screws placed in the center of the binding, although a rather new technology from Burton, called the Infinite Channel System, uses two screws, both on the outside of the binding.
There are several types of bindings. Strap-in, step-in, and hybrid bindings are used by most recreational riders and all freestyle riders.
These are the most popular bindings in snowboarding. Before snowboard specific boots existed, snowboarders used any means necessary to attach their feet to their snowboards and gain the leverage needed for turning. Typical boots used in these early days of snowboarding were Sorels or snowmobile boots. These boots were not designed for snowboarding and did not provide the support desired for doing turns on the heel edge of a snowboard. As a result, early innovators such as Louis Fournier conceived the "high-back" binding design which was later commercialized and patented by Jeff Grell. The highback binding is the technology produced by most binding equipment manufacturers in the snowboard industry. The leverage provided by highbacks greatly improved board control. Snowboarders such as Craig Kelly adapted plastic "tongues" to their boots to provide the same support for toe-side turns that the highback provided for heel-side turns. In response, companies such as Burton and Gnu began to offer "tongues".
With modern strap bindings, the rider wears a boot which has a thick but flexible sole, and padded uppers. The foot is held onto the board with two buckle straps – one strapped across the top of the toe area, and one across the ankle area. They can be tightly ratcheted closed for a tight fit and good rider control of the board. Straps are typically padded to more evenly distribute pressure across the foot. While nowhere near as popular as two-strap bindings, some people prefer three-strap bindings for more specialized riding such as carving. The third strap tends to provide additional stiffness to the binding.
Cap-strap bindings are a recent modification that provide a very tight fit to the toe of the boot, and seats the boot more securely in the binding. Numerous companies have adopted various versions of the cap strap.
Innovators of step-in systems produced prototypes and designed proprietary step-in boot and binding systems with the goal of improving the performance of snowboard boots and bindings, and as a result, the mid-90s saw an explosion of step-in binding and boot development. New companies, Switch and Device, were built on new step-in binding technology. Existing companies Shimano, K2 and Emery were also quick to market with new step-in technology. Meanwhile, early market leaders Burton and Sims were noticeably absent from the step-in market. Sims was the first established industry leader to market with a step-in binding. Sims licensed a step-in system called DNR which was produced by the established ski-binding company Marker. Marker never improved the product which was eventually discontinued. Sims never re-entered the step-in market.
The risk of commercial failure from a poorly performing Step-in binding presented serious risk to established market leaders. This was evidenced by Airwalk who enjoyed 30% market share in snowboard boot sales when they began development of their step-in binding system. The Airwalk step-in System experienced serious product failure at the first dealer demonstrations, seriously damaging the company's credibility and heralded a decline in the company's former position as the market leader in Snowboard boots. Established snowboarding brands seeking to gain market share while reducing risk, purchased proven step-in innovators. For example, snowboard boot company Vans purchased the Switch step-in company, while Device step-in company was purchased by Ride Snowboards.
Although initially refusing to expose itself to the risk and expense associated with bringing a step-in system to market, Burton chose to focus primarily on improvements to existing strap-in technology. However, Burton eventually released two models of step-in systems, the SI and the PSI. Burton's SI system enjoyed moderate success, yet it never matched the performance of the company's strap-in products and was never improved upon. Burton never marketed any improvements to either of its step-in binding systems and eventually discontinued the products.
Most popular (and mutually incompatible) step-in systems used unique, proprietary mechanisms, such as the step-ins produced by Burton, Rossignol and Switch. Shimano and K2 used a technology similar to clipless bicycle pedals. Burton and K2 Clicker step-in binding systems are no longer in production, as both companies have opted to focus on the strap-in binding system. Rossignol remains the sole provider of step-in binding systems and offers them primarily to the rental market, as most consumers and retailers alike have been discouraged by the lack of adequate development and industry support for step-in technology.
There are also proprietary systems that seek to combine the convenience of step-in systems with the control levels attainable with strap-ins. An example is the Flow binding system, which is similar to a strap-in binding, except that the foot enters the binding through the back. The back flips down and allows the boot to slide in; it's then flipped up and locked into place with a clamp, eliminating the need to loosen and then re-tighten straps every time the rider frees and then re-secures their rear foot. The rider's boot is held down by an adjustable webbing that covers most of the foot. Newer Flow models have connected straps in place of the webbing found on older models; these straps are also micro adjustable. In 2004, K2 released the Cinch series, a similar rear-entry binding; riders slip their foot in as they would a Flow binding, however rather than webbing, the foot is held down by straps.
The highback is a stiff molded support behind the heel and up the calf area. The HyBak was originally designed by inventor Jeff Grell and built by Flite Snowboards. It allows the rider to apply pressure and effect a "heelside" turn. Some highbacks are stiff vertically but provide some flex for twisting of the rider's legs.
Plate bindings are used with hardboots on Alpine or racing snowboards. Extreme carvers and some Boarder Cross racers also use plate bindings. The stiff bindings and boots give much more control over the board and allow the board to be carved much more easily than with softer bindings. Alpine snowboards tend to be longer and thinner with a much stiffer flex for greater edge hold and better carving performance.
Snowboard bindings, unlike ski bindings, do not automatically release upon impact or after falling over. With skis, this mechanism is designed to protect from injuries (particularly to the knee) caused by skis torn in different directions. Automatic release is not required in snowboarding, as the rider's legs are fixed in a static position and twisting of the knee joint cannot occur to the same extent. Furthermore, it reduces the dangerous prospect of a board hurtling downhill riderless, and the rider slipping downhill on his back with no means to maintain grip on a steep slope. Nevertheless, some ski areas require the use of a "leash" that connects the snowboard to the rider's leg or boot, in case the snowboard manages to get away from its rider. This is most likely to happen when the rider removes the board at the top or the bottom of a run (or while on a chairlift, which could be dangerous).
A Noboard is a snowboard-binding alternative consisting only of peel-and-stick pads applied directly to any snowboard deck, with no attachment.
Stomp pads, which are placed between the bindings closer to the rear binding, allow the rider to better control the board with only one boot strapped in, such as when maneuvering onto a chair lift, riding a ski tow or performing a one footed trick. Whereas the upper surface of the board is smooth, the stomp pad has a textured pattern which provides grip to the underside of the boot. Stomp pads can be decorative and vary in their size, shape and the kind and number of small spikes or friction points they provide.
There are two types of stance direction used by snowboarders. A "regular" stance places the rider's left foot at the front of the snowboard. "Goofy", the opposite stance direction, places the rider's right foot at the front, as in skateboarding. Regular is the most common. There are different ways to determine whether a rider is "regular" or "goofy". One method used for first-time riders is to observe the first step forward when walking or climbing up stairs; the first foot forward would be the foot set up at the front of the snowboard. Another method used for first-time riders is to use the same foot that you kick a football with as your back foot (though this can be an inaccurate sign for some, as there are people who prefer goofy even though they are right-handed and therefore naturally kick a football with their right foot). This is a good method for setting up the snowboard stance for a new snowboarder. However, having a surfing or skateboarding background will also help a person determine their preferred stance, although not all riders will have the same stance skateboarding and snowboarding. Another way to determine a rider's stance is to get the rider to run and slide on a tiled or wooden floor, wearing only socks, and observe which foot the person puts forward during the slide. This simulates the motion of riding a snowboard and exposes that person's natural tendency to put a particular foot forward. Another method is to stand behind the first-timer and give them a shove, enough for them to put one foot forward to stop themselves from falling. Other good ways of determining which way you ride are rushing a door (leading shoulder equals leading foot) or going into a defensive boxing stance (see which foot goes forward).
Most experienced riders are able to ride in the opposite direction to their usual stance (i.e. a "regular" rider would lead with their right foot instead of their left foot). This is called riding "fakie" or "switch".
Stance width helps determine the rider's balance on the board. The size of the rider is an important factor, as is the style of their riding, when determining a proper stance width. A common measurement used for new riders is to position the bindings so that the feet are placed a little wider than shoulder width apart. Another, less orthodox, measurement may be taken by squatting down with the feet together and placing the hands, palms down, on the ground in a straight line with the body. This generally gives a good natural measure of how wide a base the body uses to balance itself when the knees are bent. However, personal preference and comfort are important, and most experienced riders will adjust the stance width to personal preference. Skateboarders should find that their snowboarding and skateboarding stance widths are relatively similar.
A wider stance, common for freestyle riders, gives more stability when landing a jump or jibbing a rail. Control in a wider stance is reduced when turning on the piste. Conversely a narrow stance will give the rider more control when turning on the piste but less stability when freestyling. A narrow stance is more common for riders looking for quicker turn edge-hold (i.e. small radius turns). The narrow stance will give the rider a concentrated stability between the bindings allowing the board to dig into the snow quicker than a wider stance so the rider is less prone to wash out.
Binding angle is defined as the number of degrees off perpendicular to the length of the snowboard. A binding angle of 0° is when the foot is perpendicular to the length of the snowboard. Positive angles are pointed towards the front of the board, whereas negative angles are pointed towards the back of the board. How much the bindings are angled depends on the rider's purpose and preference, and different binding angles can be used for different types of snowboarding. Someone who participates in freestyle competition would have a much different "stance" than someone who explores backcountry and powder. The recent advancement and boom of snowboard culture and technology has made binding angle adjustments relatively easy. Binding companies design their bindings with similar baseplates that can easily mount onto any type of snowboard regardless of the brand. With the exception of Burton and its newly released "channel system", adjusting bindings is something that remains constant among all snowboarders. Done with a small screwdriver or a snowboard tool, the baseplates on bindings can be easily rotated to whatever stance is preferred: one unscrews the baseplate, picks the desired angles, and then re-screws the baseplate. Bindings should also be checked regularly to ensure that the screws do not come undone from the movements of snowboarding.
Stanza
In poetry, a stanza (from Italian "stanza", "room") is a grouped set of lines within a poem, usually set off from other stanzas by a blank line or indentation. Stanzas can have regular rhyme and metrical schemes, though stanzas are not strictly required to have either. There are many unique stanzaic forms. Some stanzaic forms are simple, such as four-line quatrains. Other forms are more complex, such as the Spenserian stanza. Fixed verse poems, such as sestinas, can be defined by the number and form of their stanzas. The term "stanza" is similar to "strophe", though strophe sometimes refers to an irregular set of lines, as opposed to regular, rhymed stanzas.
The stanza in poetry is analogous with the paragraph that is seen in prose; related thoughts are grouped into units. The stanza has also been known by terms such as "batch", "fit", and "stave". Even though the term "stanza" is taken from Italian, in the Italian language the word "strofa" is more commonly used. In music, groups of lines are typically referred to as "verses".
This short poem by Emily Dickinson has two stanzas of four lines each.
I had no time to hate, because
The grave would hinder me,
And life was not so ample I
Could finish enmity.
Nor had I time to love; but since
Some industry must be,
The little toil of love, I thought,
Was large enough for me.
This poem by Andrew John Young has three stanzas of six lines each.
Frost called to the water Halt
And crusted the moist snow with sparkling salt;
Brooks, their own bridges, stop,
And icicles in long stalactites drop.
And tench in water-holes
Lurk under gluey glass like fish in bowls.
In the hard-rutted lane
At every footstep breaks a brittle pane,
And tinkling trees ice-bound,
Changed into weeping willows, sweep the ground;
Dead boughs take root in ponds
And ferns on windows shoot their ghostly fronds.
But vainly the fierce frost
Interns poor fish, ranks trees in an armed host,
Hangs daggers from house-eaves
And on the windows ferny ambush weaves;
In the long war grown warmer
The sun will strike him dead and strip his armour.
Spanish–American War
The Spanish–American War was an armed conflict between Spain and the United States in 1898. Hostilities began in the aftermath of the internal explosion of USS "Maine" in Havana Harbor in Cuba, leading to U.S. intervention in the Cuban War of Independence. The war led to the emergence of U.S. predominance in the Caribbean region, and resulted in U.S. acquisition of Spain's Pacific possessions. That led to U.S. involvement in the Philippine Revolution and ultimately in the Philippine–American War.
The main issue was Cuban independence. Revolts had been occurring for some years in Cuba against Spanish rule. The U.S. later backed these revolts upon entering the Spanish–American War. There had been war scares before, as in the "Virginius" Affair in 1873, but in the late 1890s, American public opinion was agitated by reports of gruesome Spanish atrocities. The business community had just recovered from a deep depression and feared that a war would reverse the gains. It lobbied vigorously against going to war. President William McKinley ignored the exaggerated yellow press and sought a peaceful settlement. The United States Navy armored cruiser USS "Maine" mysteriously exploded and sank in Havana Harbor; political pressures from the Democratic Party pushed McKinley into a war that he had wished to avoid.
McKinley signed a joint Congressional resolution demanding Spanish withdrawal and authorizing the President to use military force to help Cuba gain independence on April 20, 1898. In response, Spain severed diplomatic relations with the United States on April 21. On the same day, the U.S. Navy began a blockade of Cuba. Both sides declared war; neither had allies.
The ten-week war was fought in both the Caribbean and the Pacific. As U.S. agitators for war well knew, U.S. naval power would prove decisive, allowing expeditionary forces to disembark in Cuba against a Spanish garrison already facing nationwide Cuban insurgent attacks and further wasted by yellow fever. The invaders obtained the surrender of Santiago de Cuba and Manila despite the good performance of some Spanish infantry units and fierce fighting for positions such as San Juan Hill. Madrid sued for peace after two Spanish squadrons were sunk in Santiago de Cuba and Manila Bay and a third, more modern, fleet was recalled home to protect the Spanish coasts.
The result was the 1898 Treaty of Paris, negotiated on terms favorable to the U.S., which allowed it temporary control of Cuba and ceded ownership of Puerto Rico, Guam, and the Philippine islands. The cession of the Philippines involved payment of $20 million to Spain by the U.S. to cover infrastructure owned by Spain.
The defeat and loss of the last remnants of the Spanish Empire was a profound shock to Spain's national psyche and provoked a thorough philosophical and artistic reevaluation of Spanish society known as the Generation of '98. The United States gained several island possessions spanning the globe and a rancorous new debate over the wisdom of expansionism.
The combined problems arising from the Peninsular War (1807–1814), the loss of most of its colonies in the Americas in the early 19th-century Spanish American wars of independence, and three Carlist Wars (1832–1876) marked the low point of Spanish colonialism. Liberal Spanish elites like Antonio Cánovas del Castillo and Emilio Castelar offered new interpretations of the concept of "empire" to dovetail with Spain's emerging nationalism. Cánovas made clear in an address to the University of Madrid in 1882 his view of the Spanish nation as based on shared cultural and linguistic elements – on both sides of the Atlantic – that tied Spain's territories together.
Cánovas saw Spanish imperialism as markedly different in its methods and purposes of colonization from those of rival empires like the British or French. Spaniards regarded the spreading of civilization and Christianity as Spain's major objective and contribution to the New World. The concept of cultural unity bestowed special significance on Cuba, which had been Spanish for almost four hundred years, and was viewed as an integral part of the Spanish nation. The focus on preserving the empire would have negative consequences for Spain's national pride in the aftermath of the Spanish–American War.
In 1823, the fifth American President James Monroe (1758–1831, served 1817–1825) enunciated the Monroe Doctrine, which stated that the United States would not tolerate further efforts by European governments to retake or expand their colonial holdings in the Americas or to interfere with the newly independent states in the hemisphere; at the same time, the doctrine stated that the U.S. would respect the status of the existing European colonies. Before the American Civil War (1861–1865), Southern interests attempted to have the United States purchase Cuba and convert it into a new slave state. The pro-slavery element proposed the Ostend Manifesto proposal of 1854. It was rejected by anti-slavery forces.
After the American Civil War and Cuba's Ten Years' War, U.S. businessmen began monopolizing the devalued sugar markets in Cuba. In 1894, 90% of Cuba's total exports went to the United States, which also provided 40% of Cuba's imports. Cuba's total exports to the U.S. were almost twelve times larger than its exports to its mother country, Spain. U.S. business interests indicated that while Spain still held political authority over Cuba, economic authority in Cuba was, in practice, shifting to the US.
The U.S. became interested in a trans-isthmus canal either in Nicaragua, or in Panama, where the Panama Canal would later be built (1903–1914), and realized the need for naval protection. Captain Alfred Thayer Mahan was an especially influential theorist; his ideas were much admired by future 26th President Theodore Roosevelt, as the U.S. rapidly built a powerful naval fleet of steel warships in the 1880s and 1890s. Roosevelt served as Assistant Secretary of the Navy in 1897–1898 and was an aggressive supporter of an American war with Spain over Cuban interests.
Meanwhile, the "Cuba Libre" movement, led by Cuban intellectual José Martí until his death in 1895, had established offices in Florida. The face of the Cuban revolution in the U.S. was the Cuban "Junta", under the leadership of Tomás Estrada Palma, who in 1902 became Cuba's first president. The Junta dealt with leading newspapers and Washington officials and held fund-raising events across the US. It funded and smuggled weapons. It mounted a large propaganda campaign that generated enormous popular support in the U.S. in favor of the Cubans. Protestant churches and most Democrats were supportive, but business interests called on Washington to negotiate a settlement and avoid war.
Cuba attracted enormous American attention, but almost no discussion involved the other Spanish colonies of the Philippines, Guam, or Puerto Rico. Historians note that there was no popular demand in the United States for an overseas colonial empire.
The first serious bid for Cuban independence, the Ten Years' War, erupted in 1868 and was subdued by the authorities a decade later. Neither the fighting nor the reforms in the Pact of Zanjón (February 1878) quelled the desire of some revolutionaries for wider autonomy and ultimately independence. One such revolutionary, José Martí, continued to promote Cuban financial and political autonomy in exile. In early 1895, after years of organizing, Martí launched a three-pronged invasion of the island.
The plan called for one group from Santo Domingo led by Máximo Gómez, one group from Costa Rica led by Antonio Maceo Grajales, and another from the United States (preemptively thwarted by U.S. officials in Florida) to land in different places on the island and provoke an uprising. While their call for revolution, the "grito de Baíre", was successful, the result was not the grand show of force Martí had expected. With a quick victory effectively lost, the revolutionaries settled in to fight a protracted guerrilla campaign.
Antonio Cánovas del Castillo, the architect of Spain's Restoration constitution and the prime minister at the time, ordered General Arsenio Martínez-Campos, a distinguished veteran of the war against the previous uprising in Cuba, to quell the revolt. Campos's reluctance to accept his new assignment and his method of containing the revolt to the province of Oriente earned him criticism in the Spanish press.
The mounting pressure forced Cánovas to replace General Campos with General Valeriano Weyler, a soldier who had experience in quelling rebellions in overseas provinces and the Spanish metropole. Weyler deprived the insurgency of weaponry, supplies, and assistance by ordering the residents of some Cuban districts to move to reconcentration areas near the military headquarters. This strategy was effective in slowing the spread of rebellion. In the United States, it fueled the fire of anti-Spanish propaganda. In a political speech, President William McKinley used this to denounce Spanish actions against armed rebels. He even said this "was not civilized warfare" but "extermination".
The Spanish Government regarded Cuba as a province of Spain rather than a colony, depending on it for prestige and trade, and also as a training ground for the army. Spanish Prime Minister Antonio Cánovas del Castillo announced that "the Spanish nation is disposed to sacrifice to the last peseta of its treasure and to the last drop of blood of the last Spaniard before consenting that anyone snatch from it even one piece of its territory". He had long dominated and stabilized Spanish politics. He was assassinated in 1897 by Italian anarchist Michele Angiolillo, leaving a Spanish political system that was not stable and could not risk a blow to its prestige.
The eruption of the Cuban revolt, Weyler's measures, and the popular fury these events whipped up proved to be a boon to the newspaper industry in New York City, where Joseph Pulitzer of the "New York World" and William Randolph Hearst of the "New York Journal" recognized the potential for great headlines and stories that would sell copies. Both papers denounced Spain, but had little influence outside New York. American opinion generally saw Spain as a hopelessly backward power that was unable to deal fairly with Cuba. American Catholics were divided before the war began, but supported it enthusiastically once it started.
The U.S. had important economic interests that were being harmed by the prolonged conflict and deepening uncertainty about the future of Cuba. Shipping firms that had relied heavily on trade with Cuba now suffered losses as the conflict continued unresolved. These firms pressed Congress and McKinley to seek an end to the revolt. Other American business concerns, specifically those who had invested in Cuban sugar, looked to the Spanish to restore order. Stability, not war, was the goal of both interests. How stability would be achieved would depend largely on the ability of Spain and the U.S. to work out their issues diplomatically.
While tension increased among the Cubans and the Spanish Government, popular support for intervention began to spring up in the United States, due to the emergence of the "Cuba Libre" movement and the fact that many Americans had drawn parallels between the American Revolution and the Cuban revolt, seeing the Spanish Government as the tyrannical colonial oppressor. Historian Louis Pérez notes that "The proposition of war in behalf of Cuban independence took hold immediately and held on thereafter. Such was the sense of the public mood." At the time, many poems and songs were written in the United States to express support for the "Cuba Libre" movement. At the same time, many African Americans, facing growing racial discrimination and the increasing erosion of their civil rights, wanted to take part in the war because they saw it as a way to advance the cause of equality, with service to country hopefully helping to gain political and public respect among the wider population.
President McKinley, well aware of the political complexity surrounding the conflict, wanted to end the revolt peacefully. In accordance with this policy, McKinley began to negotiate with the Spanish government, hoping that the negotiations would be able to end the yellow journalism in the United States, and therefore, end the loudest calls to go to war with Spain. An attempt was made to negotiate a peace before McKinley took office. However, the Spanish refused to take part in the negotiations. In 1897 McKinley appointed Stewart L. Woodford as the new minister to Spain, who again offered to negotiate a peace. In October 1897, the Spanish government still refused the United States offer to negotiate between the Spanish and the Cubans, but promised the U.S. it would give the Cubans more autonomy. However, with the election of a more liberal Spanish government in November, Spain began to change their policies in Cuba. First, the new Spanish government told the United States that it was willing to offer a change in the Reconcentration policies (the main set of policies that was feeding yellow journalism in the United States) if the Cuban rebels agreed to a cessation of hostilities. This time the rebels refused the terms in hopes that continued conflict would lead to U.S. intervention and the creation of an independent Cuba. The liberal Spanish government also recalled the Spanish Governor General Valeriano Weyler from Cuba. This action alarmed many Cubans loyal to Spain.
The Cubans loyal to Weyler began planning large demonstrations to take place when the next Governor General, Ramon Blanco, arrived in Cuba. U.S. consul Fitzhugh Lee learned of these plans and sent a request to the U.S. State Department to send a U.S. warship to Cuba. This request led to USS "Maine" being sent to Cuba. While "Maine" was docked in Havana, an explosion sank the ship. The sinking of "Maine" was blamed on the Spanish and made the possibility of a negotiated peace very slim. Throughout the negotiation process, the major European powers, especially Britain, France, and Russia, generally supported the American position and urged Spain to give in. Spain repeatedly promised specific reforms that would pacify Cuba but failed to deliver; American patience ran out.
McKinley sent USS "Maine" to Havana to ensure the safety of American citizens and interests, and to underscore the urgent need for reform. Naval forces were moved into position to attack simultaneously on several fronts if war could not be avoided. As "Maine" left Florida, a large part of the North Atlantic Squadron was moved to Key West and the Gulf of Mexico. Other ships were stationed off the coast of Lisbon, and still others were moved to Hong Kong.
At 9:40 on the evening of February 15, 1898, "Maine" sank in Havana Harbor after suffering a massive explosion. While McKinley urged patience and did not declare that Spain had caused the explosion, the deaths of 250 out of 355 sailors on board focused American attention. McKinley asked Congress to appropriate $50 million for defense, and Congress unanimously obliged. Most American leaders took the position that the cause of the explosion was unknown, but public attention was now riveted on the situation and Spain could not find a diplomatic solution to avoid war. Spain appealed to the European powers, most of whom advised it to accept U.S. conditions for Cuba in order to avoid war. Germany urged a united European stand against the United States but took no action.
The U.S. Navy's investigation, made public on March 28, concluded that the ship's powder magazines were ignited when an external explosion was set off under the ship's hull. This report poured fuel on popular indignation in the US, making the war inevitable. Spain's investigation came to the opposite conclusion: the explosion originated within the ship. Other investigations in later years came to various contradictory conclusions, but had no bearing on the coming of the war. In 1974, Admiral Hyman George Rickover had his staff look at the documents and decided there was an internal explosion. A study commissioned by "National Geographic" magazine in 1999, using AME computer modelling, stated that the explosion could have been caused by a mine, but no definitive evidence was found.
After "Maine" was destroyed, New York City newspaper publishers Hearst and Pulitzer decided that the Spanish were to blame, and they publicized this theory as fact in their papers. They both used sensationalistic and astonishing accounts of "atrocities" committed by the Spanish in Cuba by using headlines in their newspapers, such as "Spanish Murderers" and "Remember The Maine". Their press exaggerated what was happening and how the Spanish were treating the Cuban prisoners. The stories were based on factual accounts, but most of the time, the articles that were published were embellished and written with incendiary language causing emotional and often heated responses among readers. A common myth falsely states that when illustrator Frederic Remington said there was no war brewing in Cuba, Hearst responded: "You furnish the pictures and I'll furnish the war."
This new "yellow journalism" was, however, uncommon outside New York City, and historians no longer consider it the major force shaping the national mood. Public opinion nationwide did demand immediate action, overwhelming the efforts of President McKinley, Speaker of the House Thomas Brackett Reed, and the business community to find a negotiated solution. Wall Street, big business, high finance and Main Street businesses across the country were vocally opposed to war and demanded peace. After years of severe depression, the economic outlook for the domestic economy was suddenly bright again in 1897. However, the uncertainties of warfare posed a serious threat to full economic recovery. "War would impede the march of prosperity and put the country back many years," warned the "New Jersey Trade Review." The leading railroad magazine editorialized, "From a commercial and mercenary standpoint it seems peculiarly bitter that this war should come when the country had already suffered so much and so needed rest and peace." McKinley paid close attention to the strong anti-war consensus of the business community, and strengthened his resolve to use diplomacy and negotiation rather than brute force to end the Spanish tyranny in Cuba.
A speech delivered by Republican Senator Redfield Proctor of Vermont on March 17, 1898, thoroughly analyzed the situation and greatly strengthened the pro-war cause. Proctor concluded that war was the only answer. Many in the business and religious communities, which had until then opposed war, switched sides, leaving McKinley and Speaker Reed almost alone in their resistance to a war. On April 11, McKinley ended his resistance and asked Congress for authority to send American troops to Cuba to end the civil war there, knowing that Congress would force a war.
On April 19, while Congress was considering joint resolutions supporting Cuban independence, Republican Senator Henry M. Teller of Colorado proposed the Teller Amendment to ensure that the U.S. would not establish permanent control over Cuba after the war. The amendment, disclaiming any intention to annex Cuba, passed the Senate 42 to 35; the House concurred the same day, 311 to 6. The amended resolution demanded Spanish withdrawal and authorized the President to use as much military force as he thought necessary to help Cuba gain independence from Spain. President McKinley signed the joint resolution on April 20, 1898, and the ultimatum was sent to Spain. In response, Spain severed diplomatic relations with the United States on April 21. On the same day, the U.S. Navy began a blockade of Cuba. On April 23, Spain reacted to the blockade by declaring war on the U.S.
On April 25, the U.S. Congress responded in kind, declaring that a state of war between the U.S. and Spain had de facto existed since April 21, the day the blockade of Cuba had begun.
The Navy was ready, but the Army was not well-prepared for the war and made radical changes in plans and quickly purchased supplies. In the spring of 1898, the strength of the U.S. Regular Army was just 25,000 men. The Army wanted 50,000 new men but received over 220,000 through volunteers and the mobilization of state National Guard units, even gaining nearly 100,000 men on the first night after the explosion of USS "Maine".
The overwhelming consensus of observers in the 1890s, and historians ever since, is that an upsurge of humanitarian concern with the plight of the Cubans was the main motivating force that caused the war with Spain in 1898. McKinley put it succinctly in late 1897 that if Spain failed to resolve its crisis, the United States would see "a duty imposed by our obligations to ourselves, to civilization and humanity to intervene with force." Intervention in terms of negotiating a settlement proved impossible—neither Spain nor the insurgents would agree. Louis Pérez states, "Certainly the moralistic determinants of war in 1898 has been accorded preponderant explanatory weight in the historiography." By the 1950s, however, American political scientists began attacking the war as a mistake based on idealism, arguing that a better policy would be realism. They discredited the idealism by suggesting the people were deliberately misled by propaganda and sensationalist yellow journalism. Political scientist Robert Osgood, writing in 1953, led the attack on the American decision process as a confused mix of "self-righteousness and genuine moral fervor," in the form of a "crusade" and a combination of "knight-errantry and national self-assertiveness."
In his "War and Empire", Prof. Paul Atwood of the University of Massachusetts (Boston) writes:
The Spanish–American War was fomented on outright lies and trumped up accusations against the intended enemy. ... War fever in the general population never reached a critical temperature until the accidental sinking of the "USS Maine" was deliberately, and falsely, attributed to Spanish villainy. ... In a cryptic message ... Senator Lodge wrote that 'There may be an explosion any day in Cuba which would settle a great many things. We have got a battleship in the harbor of Havana, and our fleet, which overmatches anything the Spanish have, is masked at the Dry Tortugas.'
In his autobiography, Theodore Roosevelt gave his views of the origins of the war:
Our own direct interests were great, because of the Cuban tobacco and sugar, and especially because of Cuba's relation to the projected Isthmian [Panama] Canal. But even greater were our interests from the standpoint of humanity. ... It was our duty, even more from the standpoint of National honor than from the standpoint of National interest, to stop the devastation and destruction. Because of these considerations I favored war.
In the 333 years of Spanish rule, the Philippines developed from a small overseas colony governed from the Viceroyalty of New Spain to a land with modern elements in the cities. The Spanish-speaking middle classes of the 19th century were mostly educated in the liberal ideas coming from Europe. Among these Ilustrados was the Filipino national hero José Rizal, who demanded larger reforms from the Spanish authorities. This movement eventually led to the Philippine Revolution against Spanish colonial rule. The revolution had been in a state of truce since the signing of the Pact of Biak-na-Bato in 1897, with revolutionary leaders having accepted exile outside of the country.
Lt. William Warren Kimball, Staff Intelligence Officer with the Naval War College, prepared a plan for war with Spain that included the Philippines, dated June 1, 1896, and known as "the Kimball Plan".
On April 23, 1898, a document appeared in the "Manila Gazette" newspaper warning of the impending war and calling for Filipinos to participate on the side of Spain.
The first battle between American and Spanish forces was at Manila Bay, where, on May 1, Commodore George Dewey, commanding the U.S. Navy's Asiatic Squadron from his flagship, defeated a Spanish squadron under Admiral Patricio Montojo in a matter of hours. Dewey managed this with only nine wounded. With the German seizure of Tsingtao in 1897, Dewey's squadron had become the only naval force in the Far East without a local base of its own, and was beset with coal and ammunition problems. Despite these problems, the Asiatic Squadron not only destroyed the Spanish fleet but also captured the harbor of Manila.
Following Dewey's victory, Manila Bay was filled with the warships of Britain, Germany, France, and Japan. The German fleet of eight ships, ostensibly in Philippine waters to protect German interests, acted provocatively – cutting in front of American ships, refusing to salute the United States flag (according to customs of naval courtesy), taking soundings of the harbor, and landing supplies for the besieged Spanish.
The Germans, with interests of their own, were eager to take advantage of whatever opportunities the conflict in the islands might afford. There was a fear at the time that the islands would become a German possession. The Americans called the bluff of the Germans, threatening conflict if the aggression continued, and the Germans backed down. At the time, the Germans expected the confrontation in the Philippines to end in an American defeat, with the revolutionaries capturing Manila and leaving the Philippines ripe for German picking.
Commodore Dewey transported Emilio Aguinaldo, a Filipino leader who had led rebellion against Spanish rule in the Philippines in 1896, from exile in Hong Kong to the Philippines to rally more Filipinos against the Spanish colonial government. By June 9, Aguinaldo's forces controlled the provinces of Bulacan, Cavite, Laguna, Batangas, Bataan, Zambales, Pampanga, Pangasinan, and Mindoro, and had laid siege to Manila. On June 12, Aguinaldo proclaimed the independence of the Philippines.
On August 5, upon instruction from Spain, Governor-General Basilio Augustin turned over the command of the Philippines to his deputy, Fermin Jaudenes. On August 13, with American commanders unaware that a peace protocol had been signed between Spain and the U.S. on the previous day in Washington D.C., American forces captured the city of Manila from the Spanish in the Battle of Manila. This battle marked the end of Filipino–American collaboration, as the American action of preventing Filipino forces from entering the captured city of Manila was deeply resented by the Filipinos. This later led to the Philippine–American War, which would prove to be more deadly and costly than the Spanish–American War.
The U.S. had sent a force of some 11,000 ground troops to the Philippines. On August 14, 1898, Spanish Captain-General Jaudenes formally capitulated and U.S. General Merritt formally accepted the surrender and declared the establishment of a U.S. military government in occupation. The capitulation document declared "the surrender of the Philippine Archipelago" and set forth a mechanism for its physical accomplishment. That same day, the Schurman Commission recommended that the U.S. retain control of the Philippines, possibly granting independence in the future. On December 10, 1898, the Spanish government ceded the Philippines to the United States in the Treaty of Paris. Armed conflict broke out between U.S. forces and the Filipinos when U.S. troops began to take the place of the Spanish in control of the country after the end of the war, quickly escalating into the Philippine–American War.
On June 20, 1898, a U.S. fleet commanded by Captain Henry Glass, consisting of the protected cruiser "Charleston" and three transports carrying troops to the Philippines, entered Guam's Apra Harbor, Captain Glass having opened sealed orders instructing him to proceed to Guam and capture it. "Charleston" fired a few rounds at Fort Santa Cruz without receiving return fire. Two local officials, not knowing that war had been declared and believing the firing had been a salute, came out to "Charleston" to apologize for their inability to return the salute as they were out of gunpowder. Glass informed them that the U.S. and Spain were at war.
The following day, Glass sent Lieutenant William Braunersruehter to meet the Spanish Governor to arrange the surrender of the island and the Spanish garrison there. Some 54 Spanish infantry were captured and transported to the Philippines as prisoners of war. No U.S. forces were left on Guam, but the only U.S. citizen on the island, Frank Portusach, told Captain Glass that he would look after things until U.S. forces returned.
Theodore Roosevelt advocated intervention in Cuba, both for the Cuban people and to promote the Monroe Doctrine. While Assistant Secretary of the Navy, he placed the Navy on a war-time footing and prepared Dewey's Asiatic Squadron for battle. He also worked with Leonard Wood in convincing the Army to raise an all-volunteer regiment, the 1st U.S. Volunteer Cavalry. Wood was given command of the regiment that quickly became known as the "Rough Riders".
The Americans planned to capture the city of Santiago de Cuba to destroy Linares' army and Cervera's fleet. To reach Santiago they had to pass through concentrated Spanish defenses in the San Juan Hills and the small town of El Caney. The American forces were aided in Cuba by the pro-independence rebels led by General Calixto García.
For quite some time the Cuban public believed the United States government to possibly hold the key to its independence, and even annexation was considered for a time, which historian Louis Pérez explored in his book "Cuba and the United States: Ties of Singular Intimacy". The Cubans harbored a great deal of discontent towards the Spanish government, due to years of manipulation on the part of the Spanish. The prospect of getting the United States involved in the fight was considered by many Cubans as a step in the right direction. While the Cubans were wary of the United States' intentions, the overwhelming support from the American public provided the Cubans with some peace of mind, because they believed that the United States was committed to helping them achieve their independence. However, with the imposition of the Platt Amendment of 1901 after the war, as well as economic and military manipulation on the part of the United States, Cuban sentiment towards the United States became polarized, with many Cubans disappointed with continuing American interference.
From June 22 to 24, the Fifth Army Corps under General William R. Shafter landed at Daiquirí and Siboney, east of Santiago, and established an American base of operations. A contingent of Spanish troops, having fought a skirmish with the Americans near Siboney on June 23, had retired to their lightly entrenched positions at Las Guasimas. An advance guard of U.S. forces under former Confederate General Joseph Wheeler ignored Cuban scouting parties and orders to proceed with caution. They caught up with and engaged the Spanish rearguard of about 2,000 soldiers led by General Antero Rubín, who effectively ambushed them, in the Battle of Las Guasimas on June 24. The battle ended indecisively in Spain's favor, and the Spanish left Las Guasimas on their planned retreat to Santiago.
The U.S. Army employed Civil War–era skirmishers at the head of the advancing columns. Three of the four U.S. soldiers who had volunteered to act as skirmishers walking point at the head of the American column were killed, including Hamilton Fish II (grandson of Hamilton Fish, the Secretary of State under Ulysses S. Grant), and Captain Allyn K. Capron, Jr., whom Theodore Roosevelt would describe as one of the finest natural leaders and soldiers he ever met. The only survivor was Tom Isbell, a Pawnee Indian from Oklahoma Territory, who was wounded seven times.
Regular Spanish troops were mostly armed with modern charger-loaded 7 mm 1893 Spanish Mauser rifles using smokeless powder. The high-velocity 7×57mm Mauser round was termed the "Spanish Hornet" by the Americans because of the supersonic crack as it passed overhead. Other irregular troops were armed with Remington Rolling Block rifles in .43 Spanish using smokeless powder and brass-jacketed bullets. U.S. regular infantry were armed with the .30–40 Krag–Jørgensen, a bolt-action rifle with a complex magazine. Both the U.S. regular cavalry and the volunteer cavalry used smokeless ammunition. In later battles, state volunteers used the .45–70 Springfield, a single-shot black powder rifle.
On July 1, a combined force of about 15,000 American troops in regular infantry and cavalry regiments, including all four of the army's "Colored" Buffalo soldier regiments, and volunteer regiments, among them Roosevelt and his "Rough Riders", the 71st New York, the 2nd Massachusetts Infantry, and 1st North Carolina, and rebel Cuban forces attacked 1,270 entrenched Spaniards in dangerous Civil War-style frontal assaults at the Battle of El Caney and Battle of San Juan Hill outside of Santiago. More than 200 U.S. soldiers were killed and close to 1,200 wounded in the fighting, thanks to the high rate of fire the Spanish put down range at the Americans. Supporting fire by Gatling guns was critical to the success of the assault. Cervera decided to escape Santiago two days later. First Lieutenant John J. Pershing, nicknamed "Black Jack", oversaw the 10th Cavalry Unit during the war. Pershing and his unit fought in the Battle of San Juan Hill. Pershing was cited for his gallantry during the battle.
The Spanish forces at Guantánamo were so isolated by Marines and Cuban forces that they did not know that Santiago was under siege, and their forces in the northern part of the province could not break through Cuban lines. This was not true of the Escario relief column from Manzanillo, which fought its way past determined Cuban resistance but arrived too late to participate in the siege.
After the battles of San Juan Hill and El Caney, the American advance halted. Spanish troops successfully defended Fort Canosa, allowing them to stabilize their line and bar the entry to Santiago. The Americans and Cubans forcibly began a bloody, strangling siege of the city. During the nights, Cuban troops dug successive series of "trenches" (raised parapets), toward the Spanish positions. Once completed, these parapets were occupied by U.S. soldiers and a new set of excavations went forward. American troops, while suffering daily losses from Spanish fire, suffered far more casualties from heat exhaustion and mosquito-borne disease. At the western approaches to the city, Cuban general Calixto Garcia began to encroach on the city, causing much panic and fear of reprisals among the Spanish forces.
Lieutenant Carter P. Johnson of the Buffalo Soldiers' 10th Cavalry, with experience in special operations roles as head of the 10th Cavalry's attached Apache scouts in the Apache Wars, chose 50 soldiers from the regiment to lead a mission deploying at least 375 Cuban soldiers under Cuban Brigadier General Emilio Nunez, along with supplies, to the mouth of the San Juan River east of Cienfuegos. On June 29, 1898, a reconnaissance team in landing boats from the transports "Florida" and "Fanita" attempted to land on the beach, but were repelled by Spanish fire. A second attempt was made on June 30, 1898, but a team of reconnaissance soldiers was trapped on the beach near the mouth of the Tallabacoa River. A team of four soldiers saved this group and were awarded Medals of Honor. Two U.S. warships, one of them recently arrived, then shelled the beach to distract the Spanish while the Cuban force landed forty miles east at Palo Alto, where it linked up with Cuban General Gomez.
The major port of Santiago de Cuba was the main target of naval operations during the war. The U.S. fleet attacking Santiago needed shelter from the summer hurricane season; Guantánamo Bay, with its excellent harbor, was chosen. The 1898 invasion of Guantánamo Bay happened between June 6 and 10, with the first U.S. naval attack and subsequent successful landing of U.S. Marines with naval support.
On April 23, a council of senior admirals of the Spanish Navy had decided to order Admiral Pascual Cervera y Topete's squadron of four armored cruisers and three torpedo boat destroyers to proceed from their present location in Cape Verde (having left from Cádiz, Spain) to the West Indies.
The Battle of Santiago de Cuba on July 3, was the largest naval engagement of the Spanish–American War and resulted in the destruction of the Spanish Caribbean Squadron (also known as the "Flota de Ultramar"). In May, the fleet of Spanish Admiral Pascual Cervera y Topete had been spotted by American forces in Santiago harbor, where they had taken shelter for protection from sea attack. A two-month stand-off between Spanish and American naval forces followed.
When the Spanish squadron finally attempted to leave the harbor on July 3, the American forces destroyed or grounded five of the six ships. Only one Spanish vessel, a new armored cruiser, survived, but her captain hauled down her flag and scuttled her when the Americans finally caught up with her. The 1,612 Spanish sailors who were captured, including Admiral Cervera, were sent to Seavey's Island at the Portsmouth Naval Shipyard in Kittery, Maine, where they were confined at Camp Long as prisoners of war from July 11 until mid-September.
During the stand-off, U.S. Assistant Naval Constructor, Lieutenant Richmond Pearson Hobson, had been ordered by Rear Admiral William T. Sampson to sink a collier in the harbor entrance to bottle up the Spanish fleet. The mission was a failure, and Hobson and his crew were captured. They were exchanged on July 6, and Hobson became a national hero; he received the Medal of Honor in 1933, retired as a Rear Admiral and became a Congressman.
Yellow fever had quickly spread among the American occupation force, crippling it. A group of concerned officers of the American army chose Theodore Roosevelt to draft a request to Washington that it withdraw the Army, a request that paralleled a similar one from General Shafter, who described his force as an "army of convalescents". By the time of his letter, 75% of the force in Cuba was unfit for service.
On August 7, the American invasion force started to leave Cuba. The evacuation was not total. The U.S. Army kept the black Ninth U.S. Cavalry Regiment in Cuba to support the occupation. The logic was that their race and the fact that many black volunteers came from southern states would protect them from disease; this logic led to these soldiers being nicknamed "Immunes". Still, when the Ninth left, 73 of its 984 soldiers had contracted the disease.
On May 24, 1898, in a letter to Theodore Roosevelt, Henry Cabot Lodge wrote, "Porto Rico is not forgotten and we mean to have it".
In the same month, Lt. Henry H. Whitney of the United States Fourth Artillery was sent to Puerto Rico on a reconnaissance mission, sponsored by the Army's Bureau of Military Intelligence. He provided maps and information on the Spanish military forces to the U.S. government before the invasion.
The American offensive began on May 12, 1898, when a squadron of 12 U.S. ships commanded by Rear Adm. William T. Sampson of the United States Navy attacked the archipelago's capital, San Juan. Though the damage inflicted on the city was minimal, the Americans established a blockade in the city's harbor, San Juan Bay. On June 22, a Spanish cruiser and the destroyer "Terror" mounted a counterattack, but were unable to break the blockade, and "Terror" was damaged.
The land offensive began on July 25, when 1,300 infantry soldiers led by Nelson A. Miles disembarked off the coast of Guánica. The first organized armed opposition occurred in Yauco in what became known as the Battle of Yauco.
This encounter was followed by the Battle of Fajardo. The United States seized control of Fajardo on August 1, but were forced to withdraw on August 5 after a group of 200 Puerto Rican-Spanish soldiers led by Pedro del Pino gained control of the city, while most civilian inhabitants fled to a nearby lighthouse. The Americans encountered larger opposition during the Battle of Guayama and as they advanced towards the main island's interior. They engaged in crossfire at Guamaní River Bridge, Coamo and Silva Heights and finally at the Battle of Asomante. The battles were inconclusive as the allied soldiers retreated.
A battle in San Germán concluded in a similar fashion with the Spanish retreating to Lares. On August 9, 1898, American troops that were pursuing units retreating from Coamo encountered heavy resistance in Aibonito in a mountain known as "Cerro Gervasio del Asomante" and retreated after six of their soldiers were injured. They returned three days later, reinforced with artillery units and attempted a surprise attack. In the subsequent crossfire, confused soldiers reported seeing Spanish reinforcements nearby and five American officers were gravely injured, which prompted a retreat order. All military actions in Puerto Rico were suspended on August 13, after U.S. President William McKinley and French Ambassador Jules Cambon, acting on behalf of the Spanish Government, signed an armistice whereby Spain relinquished its sovereignty over Puerto Rico.
Shortly after the war began in April, the Spanish Navy ordered major units of its fleet to concentrate at Cádiz to form the 2nd Squadron, under the command of Rear Admiral Manuel de la Cámara y Livermoore. Two of Spain's most powerful warships, the battleship "Pelayo" and the brand-new armored cruiser "Emperador Carlos V", were not available when the war began — the former undergoing reconstruction in a French shipyard and the latter not yet delivered from her builders — but both were rushed into service and assigned to Cámara's squadron. The squadron was ordered to guard the Spanish coast against raids by the U.S. Navy. No such raids materialized, and while Cámara's squadron lay idle at Cádiz, U.S. Navy forces destroyed Montojo's squadron at Manila Bay on 1 May and bottled up Cervera's squadron at Santiago de Cuba on 27 May.
During May, the Spanish Ministry of Marine considered options for employing Cámara's squadron. Spanish Minister of Marine Ramón Auñón y Villalón made plans for Cámara to take a portion of his squadron across the Atlantic Ocean and bombard a city on the United States East Coast – preferably Charleston, South Carolina – and then head for the Caribbean to make port at San Juan, Havana, or Santiago de Cuba, but in the end this idea was dropped. Meanwhile, U.S. intelligence reported rumors as early as 15 May that Spain also was considering sending Cámara's squadron to the Philippines to destroy Dewey's squadron and reinforce the Spanish forces there with fresh troops. "Pelayo" and "Emperador Carlos V" each were more powerful than any of Dewey's ships, and the possibility of their arrival in the Philippines was of great concern to the United States, which hastily arranged to dispatch 10,000 additional U.S. Army troops to the Philippines and send two U.S. Navy monitors to reinforce Dewey.
On 15 June, Cámara finally received orders to depart immediately for the Philippines. His squadron, made up of "Pelayo" (his flagship), "Emperador Carlos V", two auxiliary cruisers, three destroyers, and four colliers, was to depart Cádiz escorting four transports. After detaching two of the transports to steam independently to the Caribbean, his squadron was to proceed to the Philippines, escorting the other two transports, which carried 4,000 Spanish Army troops to reinforce Spanish forces there. He then was to destroy Dewey's squadron. Accordingly, he sortied from Cádiz on 16 June and, after detaching two of the transports for their voyages to the Caribbean, passed Gibraltar on 17 June and arrived at Port Said, at the northern end of the Suez Canal, on 26 June. There he found that U.S. operatives had purchased all the coal available at the other end of the canal in Suez to prevent his ships from coaling with it and received word on 29 June from the British government, which controlled Egypt at the time, that his squadron was not permitted to coal in Egyptian waters because to do so would violate Egyptian and British neutrality.
Ordered to continue, Cámara's squadron passed through the Suez Canal on 5–6 July. By that time, the United States Department of the Navy had announced that a U.S. Navy "armored squadron with cruisers" would assemble and "proceed at once to the Spanish coast", and word also reached Spain of the annihilation of Cervera's squadron off Santiago de Cuba on 3 July, freeing up the U.S. Navy's heavy forces from the blockade there. Fearing for the safety of the Spanish coast, the Spanish Ministry of Marine recalled Cámara's squadron, which by then had reached the Red Sea, on 7 July 1898. Cámara's squadron returned to Spain, arriving at Cartagena on 23 July. Cámara and Spain's two most powerful warships thus never saw combat during the war.
With defeats in Cuba and the Philippines, and its fleets in both places destroyed, Spain sued for peace and negotiations were opened between the two parties. After the sickness and death of British consul Edward Henry Rawson-Walker, American admiral George Dewey requested the Belgian consul to Manila, Édouard André, to take Rawson-Walker's place as intermediary with the Spanish Government.
Hostilities were halted on August 12, 1898, with the signing in Washington of a Protocol of Peace between the United States and Spain. After over two months of difficult negotiations, the formal peace treaty, the Treaty of Paris, was signed in Paris on December 10, 1898, and was ratified by the United States Senate on February 6, 1899.
The United States gained Spain's colonies of the Philippines, Guam and Puerto Rico in the treaty, and Cuba became a U.S. protectorate. The treaty came into force in Cuba on April 11, 1899, with Cubans participating only as observers. Having been occupied since July 17, 1898, and thus under the jurisdiction of the United States Military Government (USMG), Cuba formed its own civil government and gained independence on May 20, 1902, with the announced end of USMG jurisdiction over the island. However, the U.S. imposed various restrictions on the new government, including prohibiting alliances with other countries, and reserved the right to intervene. The U.S. also established a perpetual lease of Guantánamo Bay.
The war lasted ten weeks. John Hay (the United States Ambassador to the United Kingdom), writing from London to his friend Theodore Roosevelt, declared that it had been "a splendid little war". The press showed Northerners and Southerners, blacks and whites fighting against a common foe, helping to ease the scars left from the American Civil War. Exemplary of this was the fact that four former Confederate States Army generals had served in the war, now in the U.S. Army and all of them again carrying similar ranks. These officers included Matthew Butler, Fitzhugh Lee, Thomas L. Rosser and Joseph Wheeler, though only Wheeler saw action. Still, in an exciting moment during the Battle of Las Guasimas, Wheeler apparently forgot for a moment which war he was fighting, having supposedly called out "Let's go, boys! We've got the damn Yankees on the run again!"
The war marked American entry into world affairs. Since then, the U.S. has had a significant hand in various conflicts around the world, and entered many treaties and agreements. The Panic of 1893 was over by this point, and the U.S. entered a long and prosperous period of economic and population growth, and technological innovation that lasted through the 1920s.
The war redefined national identity, served as a solution of sorts to the social divisions plaguing the American mind, and provided a model for all future news reporting.
The idea of American imperialism changed in the public's mind after the short and successful Spanish–American War. Due to the United States' powerful influence diplomatically and militarily, Cuba's status after the war relied heavily upon American actions. Two major developments emerged from the Spanish–American War: one, it greatly reinforced the United States' vision of itself as a "defender of democracy" and as a major world power, and two, it had severe implications for Cuban–American relations in the future. As historian Louis Pérez argued in his book "Cuba in the American Imagination: Metaphor and the Imperial Ethos", the Spanish–American War of 1898 "fixed permanently how Americans came to think of themselves: a righteous people given to the service of righteous purpose".
The war greatly reduced the Spanish Empire. Spain had been declining as an imperial power since the early 19th century as a result of Napoleon's invasion. The loss of Cuba caused a national trauma because of the affinity of peninsular Spaniards with Cuba, which was seen as another province of Spain rather than as a colony. Spain retained only a handful of overseas holdings: Spanish West Africa (Spanish Sahara), Spanish Guinea, Spanish Morocco, and the Canary Islands.
The Spanish soldier Julio Cervera Baviera, who served in the Puerto Rican Campaign, published a pamphlet in which he blamed the natives of that colony for its occupation by the Americans, saying, "I have never seen such a servile, ungrateful country [i.e., Puerto Rico] ... In twenty-four hours, the people of Puerto Rico went from being fervently Spanish to enthusiastically American... They humiliated themselves, giving in to the invader as the slave bows to the powerful lord." He was challenged to a duel by a group of young Puerto Ricans for writing this pamphlet.
Culturally, a new wave called the Generation of '98 originated as a response to this trauma, marking a renaissance in Spanish culture. Economically, the war benefited Spain, because after the war large sums of capital held by Spaniards in Cuba and the United States were returned to the peninsula and invested in Spain. This massive flow of capital (equivalent to 25% of the gross domestic product of one year) helped to develop the large modern firms in Spain in the steel, chemical, financial, mechanical, textile, shipyard, and electrical power industries. However, the political consequences were serious. The defeat in the war began the weakening of the fragile political stability that had been established earlier by the rule of Alfonso XII.
The Teller Amendment, which was enacted on April 20, 1898, was a promise from the United States to the Cuban people that it was not declaring war to annex Cuba, but to help it gain its independence from Spain. The Platt Amendment was a move by the United States' government to shape Cuban affairs without violating the Teller Amendment.
The U.S. Congress had passed the Teller Amendment before the war, promising Cuban independence. However, the Senate passed the Platt Amendment as a rider to an Army appropriations bill, forcing a peace treaty on Cuba which prohibited it from signing treaties with other nations or contracting a public debt. The Platt Amendment was pushed by imperialists who wanted to project U.S. power abroad (in contrast to the Teller Amendment which was pushed by anti-imperialists who called for a restraint on U.S. rule). The amendment granted the United States the right to stabilize Cuba militarily as needed. In addition, the Platt Amendment permitted the United States to deploy Marines to Cuba if its freedom and independence were ever threatened or jeopardized by an external or internal force. The Platt Amendment also provided for a permanent American naval base in Cuba. Guantánamo Bay was established after the signing of the Cuban–American Treaty of Relations in 1903. Thus, although Cuba technically gained its independence after the war ended, the United States government ensured that it retained some form of power and control over Cuban affairs.
The U.S. annexed the former Spanish colonies of Puerto Rico, the Philippines and Guam. The notion of the United States as an imperial power, with colonies, was hotly debated domestically, with President McKinley and the pro-imperialists winning their way over vocal opposition led by Democrat William Jennings Bryan, who had supported the war. The American public largely supported the possession of colonies, but there were many outspoken critics such as Mark Twain, who wrote "The War Prayer" in protest. Roosevelt returned to the United States a war hero, was soon elected governor of New York, and then became vice president. At the age of 42, following the assassination of President McKinley, he became the youngest person ever to assume the presidency.
The war served to further repair relations between the American North and South. The war gave both sides a common enemy for the first time since the end of the Civil War in 1865, and many friendships were formed between soldiers of northern and southern states during their tours of duty. This was an important development, since many soldiers in this war were the children of Civil War veterans on both sides.
The African-American community strongly supported the rebels in Cuba, supported entry into the war, and gained prestige from their wartime performance in the Army. Spokesmen noted that 33 African-American seamen had died in the "Maine" explosion. The most influential Black leader, Booker T. Washington, argued that his race was ready to fight. War offered them a chance "to render service to our country that no other race can", because, unlike Whites, they were "accustomed" to the "peculiar and dangerous climate" of Cuba. One of the Black units that served in the war was the 9th Cavalry Regiment. In March 1898, Washington promised the Secretary of the Navy that war would be answered by "at least ten thousand loyal, brave, strong black men in the south who crave an opportunity to show their loyalty to our land, and would gladly take this method of showing their gratitude for the lives laid down, and the sacrifices made, that Blacks might have their freedom and rights."
In 1904, the United Spanish War Veterans was created from smaller groups of the veterans of the Spanish–American War. Today, that organization is defunct, but it left an heir in the Sons of Spanish–American War Veterans, created in 1937 at the 39th National Encampment of the United Spanish War Veterans. According to data from the United States Department of Veterans Affairs, the last surviving U.S. veteran of the conflict, Nathan E. Cook, died on September 10, 1992, at age 106. (If the data is to be believed, Cook, born October 10, 1885, would have been only 12 years old when he served in the war.)
The Veterans of Foreign Wars of the United States (VFW) was formed in 1914 from the merger of two veterans organizations which both arose in 1899: the American Veterans of Foreign Service and the National Society of the Army of the Philippines. The former was formed for veterans of the Spanish–American War, while the latter was formed for veterans of the Philippine–American War. Both organizations were formed in response to the general neglect veterans returning from the war experienced at the hands of the government.
To pay the costs of the war, Congress passed an excise tax on long-distance phone service. At the time, it affected only wealthy Americans who owned telephones. However, the Congress neglected to repeal the tax after the war ended four months later, and the tax remained in place for over 100 years until, on August 1, 2006, it was announced that the U.S. Department of the Treasury and the IRS would no longer collect the tax.
The change in sovereignty of Puerto Rico, like the occupation of Cuba, brought about major changes in both the insular and U.S. economies. Before 1898, the sugar industry in Puerto Rico had been in decline for nearly half a century. In the second half of the nineteenth century, technological advances increased the capital requirements to remain competitive in the sugar industry. Agriculture began to shift toward coffee production, which required less capital and land accumulation. However, these trends were reversed with U.S. hegemony. Early U.S. monetary and legal policies made it both harder for local farmers to continue operations and easier for American businesses to accumulate land. This, along with the large capital reserves of American businesses, led to a resurgence in the Puerto Rican sugar industry in the form of large American-owned agro-industrial complexes.
At the same time, the inclusion of Puerto Rico into the U.S. tariff system as a customs area, effectively treating Puerto Rico as a state with respect to internal or external trade, increased the codependence of the insular and mainland economies and benefitted sugar exports with tariff protection. In 1897 the United States purchased 19.6 percent of Puerto Rico's exports while supplying 18.5 percent of its imports. By 1905 these figures jumped to 84 percent and 85 percent, respectively. However, coffee was not protected, as it was not a product of the mainland. At the same time, Cuba and Spain, traditionally the largest importers of Puerto Rican coffee, now subjected Puerto Rico to previously nonexistent import tariffs. These two effects led to a decline in the coffee industry. From 1897 to 1901 coffee went from 65.8 percent of exports to 19.6 percent while sugar went from 21.6 percent to 55 percent. The tariff system also provided a protected market place for Puerto Rican tobacco exports. The tobacco industry went from nearly nonexistent in Puerto Rico to a major part of the country's agricultural sector.
The Spanish–American War was the first U.S. war in which the motion picture camera played a role. The Library of Congress archives contain many films and film clips from the war. In addition, a few feature films have been made about the war.
The United States issued a number of military awards and decorations for service in the Spanish–American War.
The governments of Spain and Cuba issued a wide variety of military awards to honor Spanish, Cuban, and Philippine soldiers who had served in the conflict. | https://en.wikipedia.org/wiki?curid=28265 |
Scurvy
Scurvy is a disease resulting from a lack of vitamin C (ascorbic acid). Early symptoms of deficiency include weakness, feeling tired and sore arms and legs. Without treatment, decreased red blood cells, gum disease, changes to hair, and bleeding from the skin may occur. As scurvy worsens there can be poor wound healing, personality changes, and finally death from infection or bleeding.
It takes at least a month of little to no vitamin C in the diet before symptoms occur. In modern times, scurvy occurs most commonly in people with mental disorders, unusual eating habits, alcoholism, and older people who live alone. Other risk factors include intestinal malabsorption and dialysis. While many animals produce their own vitamin C, humans and a few others do not. Vitamin C is required to make the building blocks for collagen. Diagnosis is typically based on physical signs, X-rays, and improvement after treatment.
Treatment is with vitamin C supplements taken by mouth. Improvement often begins in a few days with complete recovery in a few weeks. Sources of vitamin C in the diet include citrus fruit and a number of vegetables (such as red peppers, broccoli, and potatoes). Cooking often decreases vitamin C in foods.
Scurvy is rare compared to other nutritional deficiencies. It occurs more often in the developing world in association with malnutrition. Rates among refugees are reported at 5 to 45 percent. Scurvy was described as early as the time of ancient Egypt. It was a limiting factor in long-distance sea travel, often killing large numbers of people. During the Age of Sail, it was assumed that 50 percent of the sailors would die of scurvy on a given trip. A Scottish surgeon in the Royal Navy, James Lind, is generally credited with proving that scurvy can be successfully treated with citrus fruit in 1753. Nonetheless, it would be 1795 before health reformers such as Gilbert Blane persuaded the Royal Navy to routinely give lemon juice to its sailors.
Early symptoms are malaise and lethargy. After one to three months, patients develop shortness of breath and bone pain. Myalgias may occur because of reduced carnitine production. Other symptoms include skin changes with roughness, easy bruising and petechiae, gum disease, loosening of teeth, poor wound healing, and emotional changes (which may appear before any physical changes). Dry mouth and dry eyes similar to Sjögren's syndrome may occur. In the late stages, jaundice, generalised edema, oliguria, neuropathy, fever, convulsions, and eventual death are frequently seen.
Scurvy, including subclinical scurvy, is caused by a deficiency of dietary vitamin C since humans are unable to metabolically synthesize vitamin C. Provided the diet contains sufficient vitamin C, the lack of working L-gulonolactone oxidase (GULO) enzyme has no significance, and in modern Western societies, scurvy is rarely present in adults, although infants and elderly people are affected. Virtually all commercially available baby formulas contain added vitamin C, preventing infantile scurvy. Human breast milk contains sufficient vitamin C, if the mother has an adequate intake. Commercial milk is pasteurized, a heating process that destroys the natural vitamin C content of the milk.
Scurvy is one of the accompanying diseases of malnutrition (other such micronutrient deficiencies are beriberi and pellagra) and thus is still widespread in areas of the world depending on external food aid.
Although rare, there are also documented cases of scurvy due to poor dietary choices by people living in industrialized nations.
Vitamins are essential to the production and use of enzymes that are involved in ongoing processes throughout the human body. Ascorbic acid is needed for a variety of biosynthetic pathways, by accelerating hydroxylation and amidation reactions. In the synthesis of collagen, ascorbic acid is required as a cofactor for prolyl hydroxylase and lysyl hydroxylase. These two enzymes are responsible for the hydroxylation of the proline and lysine amino acids in collagen. Hydroxyproline and hydroxylysine are important for stabilizing collagen by cross-linking the propeptides in collagen.
Collagen is a primary structural protein in the human body, necessary for healthy blood vessels, muscle, skin, bone, cartilage, and other connective tissues.
Defective connective tissue leads to fragile capillaries, resulting in abnormal bleeding, bruising, and internal hemorrhaging.
Collagen is an important part of bone, so bone formation is also affected. Teeth loosen, bones break more easily, and once-healed breaks may recur.
Defective collagen fibrillogenesis impairs wound healing.
Untreated scurvy is invariably fatal.
Diagnosis is typically based on physical signs, X-rays, and improvement after treatment.
Various childhood-onset disorders can mimic the clinical and X-ray picture of scurvy.
Scurvy can be prevented by a diet that includes vitamin C-rich foods such as amla, bell peppers (sweet peppers), blackcurrants, broccoli, chili peppers, guava, kiwifruit, and parsley. Other sources rich in vitamin C are fruits such as lemons, limes, oranges, papaya, and strawberries. It is also found in vegetables, such as brussels sprouts, cabbage, potatoes, and spinach. Some fruits and vegetables not high in vitamin C may be pickled in lemon juice, which is high in vitamin C. Though redundant in the presence of a balanced diet, various nutritional supplements are available, which provide ascorbic acid well in excess of that required to prevent scurvy.
Some animal products, including liver, muktuk (whale skin), oysters, and parts of the central nervous system, including the adrenal medulla, brain, and spinal cord, contain large amounts of vitamin C, and can even be used to treat scurvy. Fresh meat from animals which make their own vitamin C (which most animals do) contains enough vitamin C to prevent scurvy, and even partly treat it. In some cases (notably French soldiers eating fresh horse meat), it was discovered that meat alone, even partly cooked meat, could alleviate scurvy. Conversely, in other cases, a meat-only diet could cause scurvy.
Scott's 1902 Antarctic expedition used lightly fried seal meat and liver, whereby complete recovery from incipient scurvy was reported to have taken less than two weeks.
Scurvy will improve with doses of vitamin C as low as 10 mg per day, though doses of around 100 mg per day are typically recommended. Most people make a full recovery within two weeks.
Hippocrates documented scurvy as a disease, and Egyptians recorded its symptoms as early as 1550 BCE. The knowledge that consuming foods containing vitamin C is a cure for scurvy was repeatedly forgotten and rediscovered into the early 20th century.
In the 13th century, the Crusaders frequently suffered from scurvy. The curative effects of citrus fruit were already known at the time of Vasco da Gama's 1497 expedition and were confirmed by Pedro Álvares Cabral and his crew in 1507.
The Portuguese planted fruit trees and vegetables in Saint Helena, a stopping point for homebound voyages from Asia, and left their sick, who had scurvy and other ailments, to be taken home by the next ship if they recovered.
In 1500, one of the pilots of Cabral's fleet bound for India noted that in Malindi, its king offered the expedition fresh supplies such as lambs, chickens, and ducks, along with lemons and oranges, due to which "some of our ill were cured of scurvy".
Unfortunately, these travel accounts did not stop further maritime tragedies caused by scurvy, partly because of the lack of communication between travelers and those responsible for their health, and partly because fruits and vegetables could not be kept for long on ships.
In 1536, the French explorer Jacques Cartier, exploring the St. Lawrence River, used the local natives' knowledge to save his men who were dying of scurvy. He boiled the needles of the arbor vitae tree (eastern white cedar) to make a tea that was later shown to contain 50 mg of vitamin C per 100 grams. Such treatments were not available aboard ship, where the disease was most common.
In February 1601, Captain James Lancaster, while sailing to Sumatra, landed on the northern coast to specifically obtain lemons and oranges for his crew to stop scurvy. Captain Lancaster conducted an experiment using four ships under his command. One ship's crew received routine doses of lemon juice while the other three ships did not receive any such treatment. As a result, members of the non-treated ships started to contract scurvy, with many dying as a result.
During the Age of Exploration (between 1500 and 1800), it has been estimated that scurvy killed at least two million sailors. Jonathan Lamb wrote: "In 1499, Vasco da Gama lost 116 of his crew of 170; In 1520, Magellan lost 208 out of 230;...all mainly to scurvy."
In 1579, the Spanish friar and physician Agustin Farfán published a book in which he recommended oranges and lemons for scurvy, a remedy that was already known in the Spanish Navy.
In 1593, Admiral Sir Richard Hawkins advocated drinking orange and lemon juice as a means of preventing scurvy.
In 1614, John Woodall, Surgeon General of the East India Company, published "The Surgion's Mate" as a handbook for apprentice surgeons aboard the company's ships. He repeated the experience of mariners that the cure for scurvy was fresh food or, if not available, oranges, lemons, limes, and tamarinds. He was, however, unable to explain the reason why, and his assertion had no impact on the influential physicians who ran the medical establishment and who held that scurvy was a digestive complaint.
Even on dry land, in Europe, until the late middle ages, scurvy was common in late winter, when few green vegetables, fruits and root vegetables were available. This gradually improved with the introduction from the Americas of potatoes; by 1800, scurvy was virtually unheard of in Scotland, where it had previously been endemic.
A 1707 handwritten book by Mrs. Ebot Mitchell, discovered in a house in Hasfield, Gloucestershire, contains a "Recp.t for the Scurvy" that consisted of extracts from various plants mixed with a plentiful supply of orange juice, white wine or beer.
In 1734, the Leiden-based physician Johann Bachstrom published a book on scurvy in which he stated, "scurvy is solely owing to a total abstinence from fresh vegetable food, and greens; which is alone the primary cause of the disease", and urged the use of fresh fruit and vegetables as a cure.
However, it was not until 1747 that James Lind formally demonstrated that scurvy could be treated by supplementing the diet with citrus fruit, in one of the first controlled clinical experiments reported in the history of medicine.
As a naval surgeon on HMS "Salisbury", Lind had compared several suggested scurvy cures: hard cider, vitriol, vinegar, seawater, oranges, lemons, and a mixture of balsam of Peru, garlic, myrrh, mustard seed and radish root. In "A Treatise on the Scurvy" (1753), Lind explained the details of his clinical trial and concluded "the results of all my experiments was, that oranges and lemons were the most effectual remedies for this distemper at sea".
Unfortunately, the experiment and its results occupied only a few paragraphs in a work that was long and complex and had little impact. Lind himself never actively promoted lemon juice as a single 'cure'. He shared medical opinion at the time that scurvy had multiple causes – notably hard work, bad water, and the consumption of salt meat in a damp atmosphere which inhibited healthful perspiration and normal excretion – and therefore required multiple solutions.
Lind was also sidetracked by the possibilities of producing a concentrated 'rob' of lemon juice by boiling it. Unfortunately this process destroyed the vitamin C and was therefore unsuccessful.
During the 18th century, disease killed more British sailors than enemy action. It was mainly by scurvy that George Anson, in his celebrated voyage of 1740–1744, lost nearly two-thirds of his crew (1,300 out of 2,000) within the first 10 months of the voyage.
The Royal Navy enlisted 184,899 sailors during the Seven Years' War; 133,708 of these were "missing" or died from disease, and scurvy was the leading cause.
Although throughout this period sailors and naval surgeons were increasingly convinced that citrus fruits could cure scurvy, the classically trained physicians who ran the medical establishment dismissed this evidence as mere anecdote which did not conform to current theories of disease. Literature championing the cause of citrus juice, therefore, had no practical impact. Medical theory was based on the assumption that scurvy was a disease of internal putrefaction brought on by faulty digestion caused by the hardships of life at sea and the naval diet. Although this basic idea was given different emphases by successive theorists, the remedies they advocated (and which the navy accepted) amounted to little more than the consumption of 'fizzy drinks' to activate the digestive system, the most extreme of which was the regular consumption of 'elixir of vitriol' – sulphuric acid taken with spirits and barley water, and laced with spices.
In 1764, a new variant appeared. Advocated by Dr David MacBride and Sir John Pringle, Surgeon General of the Army and later President of the Royal Society, this idea was that scurvy was the result of a lack of 'fixed air' in the tissues, which could be prevented by drinking infusions of malt and wort whose fermentation within the body would stimulate digestion and restore the missing gases. These ideas received wide and influential backing; when James Cook set off to circumnavigate the world (1768–1771) in "Endeavour", malt and wort were at the top of the list of remedies he was ordered to investigate. The others were beer, sauerkraut and Lind's 'rob'. The list did not include lemons.
Cook did not lose a single man to scurvy, and his report came down in favour of malt and wort, although it is now clear that the reason for the health of his crews on this and other voyages was Cook's regime of shipboard cleanliness, enforced by strict discipline, as well as frequent replenishment of fresh food and greenstuffs. Another rule implemented by Cook was his prohibition of the consumption of salt fat skimmed from the ship's copper boiling pans, then a common practice in the Navy. In contact with air, the copper formed compounds that prevented the absorption of vitamins by the intestines.
The first major long distance expedition that experienced virtually no scurvy was that of the Spanish naval officer Alessandro Malaspina, 1789–1794. Malaspina's medical officer, Pedro González, was convinced that fresh oranges and lemons were essential for preventing scurvy. Only one outbreak occurred, during a 56-day trip across the open sea. Five sailors came down with symptoms, one seriously. After three days at Guam all five were healthy again. Spain's large empire and many ports of call made it easier to acquire fresh fruit.
Although towards the end of the century MacBride's theories were being challenged, the medical establishment in Britain remained wedded to the notion that scurvy was a disease of internal 'putrefaction' and the Sick and Hurt Board, run by administrators, felt obliged to follow its advice. Within the Royal Navy, however, opinion – strengthened by first-hand experience of the use of lemon juice at the siege of Gibraltar and during Admiral Rodney's expedition to the Caribbean – had become increasingly convinced of its efficacy. This was reinforced by the writings of experts like Gilbert Blane and Thomas Trotter and by the reports of up-and-coming naval commanders.
With the coming of war in 1793, the need to eliminate scurvy acquired a new urgency. But the first initiative came not from the medical establishment but from the admirals. Ordered to lead an expedition against Mauritius, Rear Admiral Gardner was uninterested in the wort, malt and elixir of vitriol which were still being issued to ships of the Royal Navy, and demanded that he be supplied with lemons to counteract scurvy on the voyage. Members of the Sick and Hurt Board, recently augmented by two practical naval surgeons, supported the request, and the Admiralty ordered that it be done. There was, however, a last-minute change of plan. The expedition against Mauritius was cancelled. On 2 May 1794, only "Suffolk" and two sloops under Commodore Peter Rainier sailed for the east with an outward bound convoy, but the warships were fully supplied with lemon juice and the sugar with which it had to be mixed. Then, in March 1795, came astonishing news. "Suffolk" had arrived in India after a four-month voyage without a trace of scurvy and with a crew that was healthier than when it set out. The effect was immediate. Fleet commanders clamoured also to be supplied with lemon juice, and by June the Admiralty, acknowledging the groundswell of demand in the navy, had agreed to a proposal from the Sick and Hurt Board that lemon juice and sugar should in future be issued as a daily ration to the crews of all warships.
It took a few years before the method of distribution to all ships in the fleet had been perfected and the huge quantities of lemon juice required could be secured, but by 1800 the system was in place and functioning. This led to a remarkable improvement in the health of sailors and consequently played a critical role in gaining the advantage in naval battles against enemies who had yet to introduce the measures.
The surgeon-in-chief of Napoleon's army at the Siege of Alexandria (1801), Baron Dominique-Jean Larrey, wrote in his memoirs that the consumption of horse meat helped the French to curb an epidemic of scurvy. The meat was cooked but was freshly obtained from young horses bought from Arabs, and was nevertheless effective. This helped to start the 19th-century tradition of horse meat consumption in France.
Lauchlin Rose patented a method used to preserve citrus juice without alcohol in 1867, creating a concentrated drink known as Rose's lime juice. The Merchant Shipping Act of 1867 required all ships of the Royal Navy and Merchant Navy to provide a daily lime ration of one pound to sailors to prevent scurvy. The product became nearly ubiquitous, hence the term "limey", first for British sailors, then for English immigrants within the former British colonies (particularly America, New Zealand and South Africa), and finally, in old American slang, all British people.
The plant "Cochlearia officinalis", also known as "common scurvygrass", acquired its common name from the observation that it cured scurvy, and it was taken on board ships in dried bundles or distilled extracts. Its very bitter taste was usually disguised with herbs and spices; however, this did not prevent scurvygrass drinks and sandwiches from becoming a popular fad in the UK until the middle of the nineteenth century, when citrus fruits became more readily available.
West Indian limes began to supplement lemons when Spain's alliance with France against Britain in the Napoleonic Wars made the supply of Mediterranean lemons problematic, partly because they were more easily obtained from Britain's Caribbean colonies and partly because they were believed to be more effective, being more acidic. It was the acid, not the (then unknown) vitamin C, that was believed to cure scurvy. In fact, the West Indian limes were significantly lower in vitamin C than the lemons they replaced, and were furthermore not served fresh but as lime juice that had been exposed to light and air and piped through copper tubing, all of which significantly reduced the vitamin C. Indeed, a 1918 animal experiment using representative samples of the Navy and Merchant Marine's lime juice showed that it had virtually no antiscorbutic power at all.
The belief that scurvy was fundamentally a nutritional deficiency, best treated by consumption of fresh food, particularly fresh citrus or fresh meat, was not universal in the 19th and early 20th centuries, and thus sailors and explorers continued to suffer from scurvy into the 20th century. For example, the Belgian Antarctic Expedition of 1897–1899 became seriously affected by scurvy when its leader, Adrien de Gerlache, initially discouraged his men from eating penguin and seal meat.
In the Royal Navy's Arctic expeditions in the 19th century it was widely believed that scurvy was prevented by good hygiene on board ship, regular exercise, and maintaining the morale of the crew, rather than by a diet of fresh food. Navy expeditions continued to be plagued by scurvy even while fresh (not jerked or tinned) meat was well known as a practical antiscorbutic among civilian whalers and explorers in the Arctic. Even cooking fresh meat did not entirely destroy its antiscorbutic properties, especially as many cooking methods failed to bring all the meat to high temperature.
The confusion is attributed to a number of factors.
In the resulting confusion, a new hypothesis was proposed, following the new germ theory of disease – that scurvy was caused by ptomaine, a waste product of bacteria, particularly in tainted tinned meat.
Infantile scurvy emerged in the late 19th century because children were being fed pasteurized cow's milk, particularly in the urban upper class. While pasteurization killed bacteria, it also destroyed vitamin C. This was eventually resolved by supplementing with onion juice or cooked potatoes. Native Americans helped save some newcomers from scurvy by directing them to eat wild onions.
By the early 20th century, when Robert Falcon Scott made his first expedition to the Antarctic (1901–1904), the prevailing theory was that scurvy was caused by "ptomaine poisoning", particularly in tinned meat. However, Scott discovered that a diet of fresh meat from Antarctic seals cured scurvy before any fatalities occurred.
In 1907, an animal model which would eventually help to isolate and identify the "antiscorbutic factor" was discovered. Axel Holst and Theodor Frølich, two Norwegian physicians studying shipboard beriberi contracted by ship's crews in the Norwegian Fishing Fleet, wanted a small test mammal to substitute for the pigeons then used in beriberi research. They fed guinea pigs their test diet of grains and flour, which had earlier produced beriberi in their pigeons, and were surprised when classic scurvy resulted instead. This was a serendipitous choice of animal. Until that time, scurvy had not been observed in any organism apart from humans and had been considered an exclusively human disease. Certain birds, mammals, and fish are susceptible to scurvy, but pigeons are unaffected, since they can synthesize ascorbic acid internally. Holst and Frølich found they could cure scurvy in guinea pigs with the addition of various fresh foods and extracts. This discovery of an animal experimental model for scurvy, which was made even before the essential idea of "vitamins" in foods had been put forward, has been called the single most important piece of vitamin C research.
In 1915, New Zealand troops in the Gallipoli Campaign lacked vitamin C in their diet, and many of the soldiers contracted scurvy. It is thought that scurvy is one of many reasons that the Allied attack on Gallipoli failed.
Vilhjalmur Stefansson, an arctic explorer who had lived among the Inuit, proved that the all-meat diet they consumed did not lead to vitamin deficiencies. He participated in a study in New York's Bellevue Hospital in February 1928, where he and a companion ate only meat for a year while under close medical observation, yet remained in good health.
In 1927, the Hungarian biochemist Albert Szent-Györgyi isolated a compound he called "hexuronic acid". Szent-Györgyi suspected hexuronic acid, which he had isolated from adrenal glands, to be the antiscorbutic agent, but he could not prove it without an animal-deficiency model. In 1932, the connection between hexuronic acid and scurvy was finally proven by the American researcher Charles Glen King of the University of Pittsburgh. King's laboratory was given some hexuronic acid by Szent-Györgyi and soon established that it was the sought-after antiscorbutic agent. Because of this, hexuronic acid was subsequently renamed "ascorbic acid".
Rates of scurvy in most of the world are low. Those most commonly affected are malnourished people in the developing world and the homeless. There have been outbreaks of the condition in refugee camps. Case reports in the developing world of those with poorly healing wounds have occurred.
Notable human dietary studies of experimentally induced scurvy have been conducted on conscientious objectors during World War II in Britain and on Iowa state prisoner volunteers in the late 1960s. These studies both found that all obvious symptoms of scurvy previously induced by an experimental scorbutic diet with extremely low vitamin C content could be completely reversed by additional vitamin C supplementation of only 10 mg per day. In these experiments, no clinical difference was noted between men given 70 mg vitamin C per day (which produced blood levels of vitamin C of about 0.55 mg/dl, about of tissue saturation levels), and those given 10 mg per day (which produced lower blood levels). Men in the prison study developed the first signs of scurvy about 4 weeks after starting the vitamin C-free diet, whereas in the British study, six to eight months were required, possibly because the subjects were pre-loaded with a 70 mg/day supplement for six weeks before the scorbutic diet was fed.
Men in both studies, on a diet devoid or nearly devoid of vitamin C, had blood levels of vitamin C too low to be accurately measured when they developed signs of scurvy, and in the Iowa study, at this time were estimated (by labeled vitamin C dilution) to have a body pool of less than 300 mg, with daily turnover of only 2.5 mg/day.
The vast majority of animals and plants are able to synthesize vitamin C through a sequence of enzyme-driven steps, which convert monosaccharides to vitamin C. However, some mammals have lost the ability to synthesize vitamin C, notably simians and tarsiers. These make up one of the two major primate suborders, Haplorrhini, and this group includes humans. The Strepsirrhini (non-tarsier prosimians) can make their own vitamin C, and these include lemurs, lorises, pottos, and galagos. Ascorbic acid is also not synthesized by at least two species of Caviidae, the capybara and the guinea pig. There are known species of birds and fish that do not synthesize their own vitamin C. All species that do not synthesize ascorbate require it in the diet. Deficiency causes scurvy in humans, and somewhat similar symptoms in other animals.
Animals that can contract scurvy all lack the L-gulonolactone oxidase (GULO) enzyme, which is required in the last step of vitamin C synthesis. The genomes of these species contain GULO as a pseudogene, which provides insight into the evolutionary past of the species.
In babies, scurvy is sometimes referred to as "Barlow's disease", named after Thomas Barlow, a British physician who described it in 1883. However, "Barlow's disease" may also refer to mitral valve prolapse (Barlow's syndrome), first described by John Brereton Barlow in 1966. | https://en.wikipedia.org/wiki?curid=28266 |
Sydney Harbour Bridge
The Sydney Harbour Bridge is an Australian heritage-listed steel through arch bridge across Sydney Harbour that carries rail, vehicular, bicycle, and pedestrian traffic between the Sydney central business district (CBD) and the North Shore. The view of the bridge, the harbour, and the nearby Sydney Opera House is widely regarded as an iconic image of Sydney, and of Australia itself. The bridge is nicknamed "The Coathanger" because of its arch-based design.
Under the direction of John Bradfield of the New South Wales Department of Public Works, the bridge was designed and built by British firm Dorman Long of Middlesbrough (who based the design on their 1928 Tyne Bridge in Newcastle upon Tyne) and opened in 1932. The bridge's general design, which Bradfield tasked the NSW Department of Public Works with producing, was a rough copy of the Hell Gate Bridge in New York City. This general design document, however, did not form any part of the request for tender, which remained sufficiently broad as to allow cantilever (Bradfield's original preference) and even suspension bridge proposals. The design chosen from the tender responses was original work created by Dorman Long, who leveraged some of the design from their own Tyne Bridge which, though superficially similar, does not share the graceful flares at the ends of each arch which make the harbour bridge so distinctive. It is the sixth longest spanning-arch bridge in the world and the tallest steel arch bridge, measuring from top to water level. It was also the world's widest long-span bridge, at wide, until construction of the new Port Mann Bridge in Vancouver was completed in 2012.
The Sydney Harbour Bridge was added to the Australian National Heritage List on 19 March 2007 and to the New South Wales State Heritage Register on 25 June 1999.
The southern end of the bridge is located at Dawes Point in The Rocks area, and the northern end at Milsons Point in the lower North Shore area. There are six original lanes of road traffic through the main roadway, plus an additional two lanes of road traffic on its eastern side, using lanes that were formerly tram tracks. Adjacent to the road traffic, a path for pedestrian use runs along the eastern side of the bridge, whilst a dedicated path for bicycle use only runs along the western side; between the main roadway and the western bicycle path lies the North Shore railway line.
The main roadway across the bridge is known as the Bradfield Highway and is about long, making it one of the shortest highways in Australia.
The arch is composed of two 28-panel arch trusses; their heights vary from at the centre of the arch to at the ends next to the pylons.
The arch has a span of and its summit is above mean sea level; expansion of the steel structure on hot days can increase the height of the arch by .
The total weight of the steelwork of the bridge, including the arch and approach spans, is , with the arch itself weighing . About 79% of the steel, specifically those technical sections constituting the curve of the arch, was imported pre-formed from England, with the rest being sourced from . On site, the contractors (Dorman Long and Co.) set up two workshops at Milsons Point, at the site of the present day Luna Park, and fabricated the steel into the girders and other required parts.
The bridge is held together by six million Australian-made hand-driven rivets supplied by the McPherson company of Melbourne, the last being driven through the deck on 21 January 1932. The rivets were heated red-hot and inserted into the plates; the headless end was immediately rounded over with a large pneumatic rivet gun. The largest of the rivets used weighed and was long. The practice of riveting large steel structures, rather than welding, was, at the time, a proven and understood construction technique, whilst structural welding had not at that stage been adequately developed for use on the bridge.
At each end of the arch stands a pair of concrete pylons, faced with granite. The pylons were designed by the Scottish architect Thomas S. Tait, a partner in the architectural firm John Burnet & Partners.
Some 250 Australian, Scottish, and Italian stonemasons and their families relocated to a temporary settlement at Moruya, NSW, south of Sydney, where they quarried around of granite for the bridge pylons. The stonemasons cut, dressed, and numbered the blocks, which were then transported to Sydney on three ships built specifically for this purpose. The Moruya quarry was managed by John Gilmore, a Scottish stonemason who emigrated with his young family to Australia in 1924, at the request of the project managers. The concrete used was also Australian-made and supplied from Kandos, New South Wales.
Abutments at the base of the pylons are essential to support the loads from the arch and hold its span firmly in place, but the pylons themselves have no structural purpose. They were included to provide a frame for the arch panels and to give better visual balance to the bridge. The pylons were not part of the original design, and were only added to allay public concern about the structural integrity of the bridge.
Although originally added to the bridge solely for their aesthetic value, all four pylons have now been put to use. The south-eastern pylon contains a museum and tourist centre, with a 360° lookout at the top providing views across the harbour and city. The south-western pylon is used by the New South Wales Roads and Traffic Authority (RTA) to support its CCTV cameras overlooking the bridge and the roads around that area. The two pylons on the north shore include venting chimneys for fumes from the Sydney Harbour Tunnel, with the base of the southern pylon containing the RMS maintenance shed for the bridge, and the base of the northern pylon containing the traffic management shed for tow trucks and safety vehicles used on the bridge.
In 1942, the pylons were modified to include parapets and anti-aircraft guns designed to assist in both Australia's defence and general war effort. The top level of stonework was never removed.
There had been plans to build a bridge as early as 1815, when convict and noted architect Francis Greenway reputedly proposed to Governor Lachlan Macquarie that a bridge be built from the northern to the southern shore of the harbour. In 1825, Greenway wrote a letter to "The Australian" newspaper of the time stating that such a bridge would "give an idea of strength and magnificence that would reflect credit and glory on the colony and the Mother Country".
Nothing came of Greenway's suggestions, but the idea remained alive, and many further suggestions were made during the nineteenth century. In 1840, naval architect Robert Brindley proposed that a floating bridge be built. Engineer Peter Henderson produced one of the earliest known drawings of a bridge across the harbour around 1857. A suggestion for a truss bridge was made in 1879, and in 1880 a high-level bridge estimated at $850,000 was proposed.
In 1900, the Lyne government committed to building a new Central railway station and organised a worldwide competition for the design and construction of a harbour bridge. Local engineer Norman Selfe submitted a design for a suspension bridge and won the second prize of £500. In 1902, when the outcome of the first competition became mired in controversy, Selfe won a second competition outright, with a design for a steel cantilever bridge. The selection board were unanimous, commenting that, "The structural lines are correct and in true proportion, and... the outline is graceful". However, due to an economic downturn and a change of government at the 1904 NSW State election, construction never began.
A unique three-span bridge was proposed in 1922 by Ernest Stowe with connections at Balls Head, Millers Point, and Balmain with a memorial tower and hub on Goat Island.
In 1914 John Bradfield was appointed "Chief Engineer of Sydney Harbour Bridge and Metropolitan Railway Construction", and his work on the project over many years earned him a legacy as the "father" of the bridge. Bradfield's preference at the time was for a cantilever bridge without piers, and in 1916 the NSW Legislative Assembly passed a bill for such a construction; however, it did not proceed, as the Legislative Council rejected the legislation on the basis that the money would be better spent on the war effort.
Following World War I, plans to build the bridge again gained momentum. Bradfield persevered with the project, fleshing out the details of the specifications and financing for his cantilever bridge proposal, and in 1921 he travelled overseas to investigate tenders. On return from his travels, Bradfield decided that an arch design would also be suitable, and he and officers of the NSW Department of Public Works prepared a general design for a single-arch bridge based upon New York City's Hell Gate Bridge. In 1922 the government passed the Sydney Harbour Bridge Act No. 28, specifying the construction of a high-level cantilever or arch bridge across the harbour between Dawes Point and Milsons Point, along with construction of necessary approaches and electric railway lines, and worldwide tenders were invited for the project.
As a result of the tendering process, the government received twenty proposals from six companies; on 24 March 1924 the contract was awarded to the British firm Dorman Long and Co Ltd of Middlesbrough, well known as the contractors who later built the similar Tyne Bridge in Newcastle upon Tyne, for an arch bridge at a quoted price of AU£4,217,721 11s 10d. The arch design was cheaper than alternative cantilever and suspension bridge proposals, and also provided greater rigidity, making it better suited for the heavy loads expected.
Bradfield and his staff were ultimately to oversee the bridge design and building process as it was executed by Dorman Long and Co, whose Consulting Engineer, Sir Ralph Freeman of Sir Douglas Fox and Partners, and his associate Mr. G.C. Imbault, carried out the detailed design and erection process of the bridge. Architects for the contractors were from the British firm John Burnet & Partners of Glasgow, Scotland. Lawrence Ennis, of Dorman Long, served as Director of Construction and primary onsite supervisor throughout the entire build, alongside Edward Judge, Dorman Long's Chief Technical Engineer, who functioned as Consulting and Designing Engineer.
The building of the bridge coincided with the construction of a system of underground railways in Sydney's CBD, known today as the City Circle, and the bridge was designed with this in mind. The bridge was designed to carry six lanes of road traffic, flanked on each side by two railway tracks and a footpath. Both sets of rail tracks were linked into the underground Wynyard railway station on the south (city) side of the bridge by symmetrical ramps and tunnels. The eastern-side railway tracks were intended for use by a planned rail link to the Northern Beaches; in the interim they were used to carry trams from the North Shore into a terminal within Wynyard station, and when tram services were discontinued in 1958, they were converted into extra traffic lanes. The Bradfield Highway, which is the main roadway section of the bridge and its approaches, is named in honour of Bradfield's contribution to the bridge.
Bradfield visited the site sporadically throughout the eight years it took Dorman Long to complete the bridge. Despite having originally championed a cantilever construction and the fact that his own arched general design was used in neither the tender process nor as input to the detailed design specification (and was in any case a rough copy of the Hell Gate Bridge produced by the NSW Works Department), Bradfield subsequently attempted to claim personal credit for Dorman Long's design. This led to a bitter argument, with Dorman Long maintaining that instructing other people to produce a copy of an existing design in a document not subsequently used to specify the final construction did not constitute personal design input on Bradfield's part. This friction ultimately led to a large contemporary brass plaque being bolted very tightly to the side of one of the granite columns of the bridge in order to make things clear.
The official ceremony to mark the "turning of the first sod" occurred on 28 July 1923, on the spot at Milsons Point on the north shore where two workshops to assist in building the bridge were to be constructed.
An estimated 469 buildings on the north shore, both private homes and commercial operations, were demolished to allow construction to proceed, with little or no compensation being paid. Work on the bridge itself commenced with the construction of approaches and approach spans, and by September 1926 concrete piers to support the approach spans were in place on each side of the harbour.
As construction of the approaches took place, work was also started on preparing the foundations required to support the enormous weight of the arch and loadings. Concrete and granite faced abutment towers were constructed, with the angled foundations built into their sides.
Once work had progressed sufficiently on the support structures, a giant "creeper crane" was erected on each side of the harbour. These cranes were fitted with a cradle, and then used to hoist men and materials into position to allow for erection of the steelwork. To stabilise works while building the arches, tunnels were excavated on each shore with steel cables passed through them and then fixed to the upper sections of each half-arch to stop them collapsing as they extended outwards.
Arch construction itself began on 26 October 1928. The southern end of the bridge was worked on ahead of the northern end, to detect any errors and to help with alignment. The cranes would "creep" along the arches as they were constructed, eventually meeting up in the middle. In less than two years, on Tuesday, 19 August 1930, the two halves of the arch touched for the first time. Workers riveted both top and bottom sections of the arch together, and the arch became self-supporting, allowing the support cables to be removed. On 20 August 1930 the joining of the arches was celebrated by flying the flags of Australia and the United Kingdom from the jibs of the creeper cranes.
Once the arch was completed, the creeper cranes were worked back down the arches, allowing the roadway and other parts of the bridge to be constructed from the centre out. The vertical hangers were attached to the arch, and these were then joined with horizontal crossbeams. The deck for the roadway and railway was built on top of the crossbeams, with the deck itself being completed by June 1931, and the creeper cranes were dismantled. Rails for trains and trams were laid, and the road was surfaced using concrete topped with asphalt. Power and telephone lines, and water, gas, and drainage pipes, were also all added to the bridge in 1931.
The pylons were built atop the abutment towers, with construction advancing rapidly from July 1931. Carpenters built wooden scaffolding, with concreters and masons then setting the masonry and pouring the concrete behind it. Gangers built the steelwork in the towers, while day labourers manually cleaned the granite with wire brushes. The last stone of the north-west pylon was set in place on 15 January 1932, and the timber towers used to support the cranes were removed.
On 19 January 1932, the first test train, a steam locomotive, safely crossed the bridge. Load testing of the bridge took place in February 1932, with the four rail tracks being loaded with as many as 96 steam locomotives positioned end-to-end. The bridge underwent testing for three weeks, after which it was declared safe and ready to be opened. The construction worksheds were demolished after the bridge was completed, and the land that they were on is now occupied by Luna Park.
The standards of industrial safety during construction were poor by today's standards. Sixteen workers died during construction, but surprisingly only two from falling off the bridge. Several more were injured by unsafe working practices undertaken whilst heating and inserting the rivets, and the deafness experienced by many of the workers in later years was blamed on the project. Between 1930 and 1932, Henri Mallard produced hundreds of stills and film footage which reveal at close quarters the bravery of the workers in tough Depression-era conditions.
Interviews were conducted between 1982 and 1989 with a variety of tradesmen who worked on the building of the bridge. Among the tradesmen interviewed were drillers, riveters, concrete packers, boilermakers, riggers, ironworkers, plasterers, stonemasons, an official photographer, sleepcutters, engineers and draughtsmen.
The total financial cost of the bridge was AU£6.25 million, which was not paid off in full until 1988.
The bridge was formally opened on Saturday, 19 March 1932. Among those who attended and gave speeches were the Governor of New South Wales, Sir Philip Game, and the Minister for Public Works, Lawrence Ennis. The Premier of New South Wales, Jack Lang, was to open the bridge by cutting a ribbon at its southern end.
However, just as Lang was about to cut the ribbon, a man in military uniform rode up on a horse, slashing the ribbon with his sword and opening the Sydney Harbour Bridge in the name of the people of New South Wales before the official ceremony began. He was promptly arrested. The ribbon was hurriedly retied, Lang performed the official opening ceremony, and Game thereafter inaugurated the name of the bridge as 'Sydney Harbour Bridge' and the associated roadway as the 'Bradfield Highway'. After they did so, there was a 21-gun salute and an RAAF flypast. The intruder was identified as Francis de Groot. He was convicted of offensive behaviour and fined £5 after a psychiatric test proved he was sane, but this verdict was reversed on appeal. De Groot then successfully sued the Commissioner of Police for wrongful arrest, and was awarded an undisclosed out-of-court settlement. De Groot was a member of a right-wing paramilitary group called the New Guard, opposed to Lang's leftist policies and resentful of the fact that a member of the Royal Family had not been asked to open the bridge. De Groot was not a member of the regular army, but his uniform allowed him to blend in with the real cavalry. This incident was one of several involving Lang and the New Guard during that year.
A similar ribbon-cutting ceremony on the bridge's northern side by North Sydney's mayor, Alderman Primrose, was carried out without incident. It was later discovered that Primrose was also a New Guard member but his role in and knowledge of the de Groot incident, if any, are unclear. The pair of golden scissors used in the ribbon cutting ceremonies on both sides of the bridge was also used to cut the ribbon at the dedication of the Bayonne Bridge, which had opened between Bayonne, New Jersey, and New York City the year before.
Despite the bridge opening in the midst of the Great Depression, opening celebrations were organised by the Citizens of Sydney Organising Committee, an influential body of prominent men and politicians that formed in 1931 under the chairmanship of the Lord Mayor to oversee the festivities. The celebrations included an array of decorated floats, a procession of passenger ships sailing below the bridge, and a Venetian Carnival. A message from a primary school in Tottenham, away in rural New South Wales, arrived at the bridge on the day and was presented at the opening ceremony. It had been carried all the way from Tottenham to the bridge by relays of school children, with the final relay being run by two children from the nearby Fort Street Boys' and Girls' schools.
After the official ceremonies, the public was allowed to walk across the bridge on the deck, something that would not be repeated until the 50th anniversary celebrations. Estimates suggest that between 300,000 and one million people took part in the opening festivities, a phenomenal number given that the entire population of Sydney at the time was estimated to be 1,256,000.
There had also been numerous preparatory arrangements. On 14 March 1932, three postage stamps were issued to commemorate the imminent opening of the bridge. Several songs were composed for the occasion. In the year of the opening, there was a steep rise in babies being named Archie and Bridget in honour of the bridge.
The bridge itself was regarded as a triumph over Depression times, earning the nickname "the Iron Lung", as it kept many Depression-era workers employed.
In 2010, the average daily traffic included 204 trains, 160,435 vehicles and 1650 bicycles.
From the Sydney CBD side, motor vehicle access to the bridge is normally via Grosvenor Street, Clarence Street, Kent Street, the Cahill Expressway, or the Western Distributor. Upon arrival on the northern side, drivers find themselves on the Warringah Freeway, though it is easy to turn off the freeway to drive westwards into North Sydney or eastwards to Neutral Bay and beyond.
The bridge originally had only four wider traffic lanes occupying the central space that now has six, as photos taken soon after the opening clearly show. In 1958 tram services across the bridge were withdrawn and the tracks replaced by two extra road lanes; these lanes are now the leftmost southbound lanes on the bridge and are still clearly distinguishable from the other six road lanes. Lanes 7 and 8 now connect the bridge to the elevated Cahill Expressway that carries traffic to the Eastern Distributor.
In 1988, work began to build a tunnel to complement the bridge. It was determined that the bridge could no longer support the increased traffic flow of the 1980s. The Sydney Harbour Tunnel was completed in August 1992 and carries only motor vehicles.
The Bradfield Highway is designated as a Travelling Stock Route which means that it is permissible to herd livestock across the bridge, but only between midnight and dawn, and after giving notice of intention to do so. In practice, owing to the high-density urban nature of modern Sydney, and the relocation of abattoirs and markets, this has not taken place for approximately half a century.
The bridge is equipped for tidal flow operation, permitting the direction of traffic flow on the bridge to be altered to better suit the morning and evening rush hours' traffic patterns.
The bridge has eight lanes in total, numbered one to eight from west to east. Lanes three, four and five are reversible. One and two always flow north. Six, seven, and eight always flow south. The default is four each way. For the morning rush hour, the lane changes on the bridge also require changes to the Warringah Freeway, with its inner western reversible carriageway directing traffic to the bridge lane numbers three and four southbound.
The bridge has a series of overhead gantries which indicate the direction of flow for each traffic lane. A green arrow pointing down to a traffic lane means the lane is open. A flashing red "X" indicates the lane is closing, but is not yet in use for traffic travelling in the other direction. A static red "X" means the lane is in use for oncoming traffic. This arrangement was introduced in the 1990s, replacing a slow operation where lane markers were manually moved to mark the centre median.
It is possible to see odd arrangements of flow during night periods when maintenance occurs, which may involve completely closing some lanes. Normally this is done between midnight and dawn, because of the enormous traffic demands placed on the bridge outside these hours.
When the Sydney Harbour Tunnel opened in August 1992, Lane 7 became a bus lane.
The vehicular traffic lanes on the bridge are operated as a toll road. As of October 2019, there is a variable tolling system for all vehicles headed into the CBD (southbound). The toll paid depends on the time of day at which the vehicle passes through the toll plaza, varying from a minimum of $2.50 to a maximum of $4. There is no toll for northbound traffic (though taxis travelling north may charge passengers the toll in anticipation of the toll the taxi must pay on the return journey). In 2017, the Bradfield Highway northern toll plaza infrastructure was removed and replaced with new overhead gantries to service all southbound traffic. Following on from this upgrade, in 2018 all southern toll plaza infrastructure was also removed. Only the Cahill Expressway toll plaza infrastructure remains.
The toll was originally placed on travel across the bridge, in both directions, to recoup the cost of its construction. This was paid off in 1988, but the toll has been kept (indeed increased) to recoup the costs of the Sydney Harbour Tunnel.
After the decision to build the Sydney Harbour Tunnel was made in the early 1980s, the toll was increased (from 20 cents to $1, then to $1.50, and finally to $2 by the time the tunnel opened) to pay for its construction. The tunnel also had an initial toll of $2 southbound. After the increase to $1, the concrete barrier on the bridge separating the Bradfield Highway from the Cahill Expressway was increased in height, because of the large numbers of drivers crossing it illegally from lane 6 to 7, to avoid the toll. The toll for all southbound vehicles was increased to $3 in March 2004.
Originally it cost six pence for a car or motorcycle to cross, and three pence for a horse and rider. Use of the bridge by bicycle riders (provided that they use the cycleway) and by pedestrians is free. Later governments capped the fee for motorcycles at one-quarter of the passenger-vehicle cost, but it is now again the same as the cost for a passenger vehicle, although quarterly flat-fee passes are available which are much cheaper for frequent users. Originally there were six toll booths at the southern end of the bridge; these were replaced by 16 booths in 1950. The toll was charged in both directions until 4 July 1970, when it was changed to apply only to southbound traffic.
In July 2008 a new electronic tolling system called e-TAG was introduced. The Sydney Harbour Tunnel was converted to this new tolling system, while the Sydney Harbour Bridge itself retained several cash lanes. As of 12 January 2009, the electronic system has replaced all booths with e-TAG lanes. In January 2017 work commenced to remove the southern toll booths. In June 2020, work will commence to remove the remaining toll booths at Milsons Point.
The pedestrian-only footway is located on the east side of the bridge. Access from the northern side involves climbing an easily spotted flight of stairs, located on the east side of the bridge at Broughton St, Kirribilli. Pedestrian access on the southern side is more complicated, but signposts in the Rocks area now direct pedestrians to the long and sheltered flight of stairs that leads to the bridge's southern end. These stairs are located near Gloucester Street and Cumberland Street.
The bridge can also be approached from the south by accessing Cahill Walk, which runs along the Cahill Expressway. Pedestrians can access this walkway from the east end of Circular Quay by a flight of stairs or a lift. Alternatively it can be accessed from the Botanic Gardens.
The bike-only cycleway is located on the western side of the bridge. Access from the northern side involves carrying or pushing a bicycle up a staircase of 55 steps, located on the western side of the bridge at Burton St, Milsons Point. A wide, smooth concrete strip in the centre of the stairs permits cycles to be wheeled up and down from the bridge deck whilst the rider is dismounted. A campaign to eliminate the steps on this popular cycling route to the CBD has been running since at least 2008. On 7 December 2016 the NSW Roads Minister Duncan Gay confirmed that the northern stairway would be replaced with a $20 million ramp, removing the need for cyclists to dismount. At the same time the NSW Government announced plans to upgrade the southern ramp at a projected cost of $20 million. Both projects are expected to be completed by late 2020. Access to the cycleway on the southern side is via the northern end of the Kent Street cycleway and/or Upper Fort Street in The Rocks.
The bridge lies between Milsons Point and Wynyard railway stations, located on the north and south shores respectively, with two tracks running along the western side of the bridge. These tracks are part of the North Shore railway line.
In 1958, tram services across the bridge were withdrawn and the tracks they had used were removed and replaced by two extra road lanes; these lanes are now the leftmost southbound lanes on the bridge and are still clearly distinguishable from the other six road lanes. The original ramp that took the trams into their terminus at the underground Wynyard railway station is still visible at the southern end of the main walkway under lanes 7 and 8, although around 1964, the former tram tunnels and station were converted for use as a carpark for the Menzies Hotel and as public parking. One of the tunnels was converted for use as a storage facility after reportedly being used by the NSW police as a pistol firing range.
The Sydney Harbour Bridge requires constant inspections and other maintenance work to keep it safe for the public, and to protect from corrosion. Among the trades employed on the bridge are painters, ironworkers, boilermakers, fitters, electricians, plasterers, carpenters, plumbers, and riggers.
The most noticeable maintenance work on the bridge involves painting. The steelwork of the bridge that needs to be painted is a combined , the equivalent of sixty football fields. Each coat on the bridge requires some of paint. A special fast-drying paint is used, so that any paint drops have dried before reaching the vehicles or bridge surface. One notable identity from previous bridge-painting crews is Australian comedian and actor Paul Hogan, who worked as a bridge painter before rising to media fame in the 1970s.
In 2003 the Roads & Traffic Authority began completely repainting the southern approach spans of the bridge. This involved removing the old lead-based paint and repainting the of steel below the deck. Workers operated from self-contained platforms below the deck, with each platform having an air extraction system to filter airborne particles. Abrasive blasting was used, with the lead waste collected and safely removed from the site for disposal.
Between December 2006 and March 2010 the bridge was subject to works designed to ensure its longevity. The work included some strengthening.
Since 2013, two grit-blasting robots specially developed with the University of Technology, Sydney have been employed to help with the paint-stripping operation on the bridge. The robots, nicknamed Rosie and Sandy, are intended to reduce workers' exposure to dangerous lead paint and asbestos, and to the blasting equipment, which has enough force to cut through clothes and skin.
Even during its construction, the bridge was such a prominent feature of Sydney that it would attract tourist interest. One of the ongoing tourist attractions of the bridge has been the south-east pylon, which is accessed via the pedestrian walkway across the bridge, and then a climb to the top of the pylon of about 200 steps.
Not long after the bridge's opening, commencing in 1934, Archer Whitford first converted this pylon into a tourist destination. He installed a number of attractions, including a café, a camera obscura, an Aboriginal museum, a "Mother's Nook" where visitors could write letters, and a "pashometer". The main attraction was the viewing platform, where "charming attendants" assisted visitors to use the telescopes available, and a copper cladding (still present) over the granite guard rails identified the suburbs and landmarks of Sydney at the time.
The outbreak of World War II in 1939 saw tourist activities on the bridge cease, as the military took over the four pylons and modified them to include parapets and anti-aircraft guns.
In 1948, Yvonne Rentoul opened the "All Australian Exhibition" in the pylon. This contained dioramas, and displays about Australian perspectives on subjects such as farming, sport, transport, mining, and the armed forces. An orientation table was installed at the viewing platform, along with a wall guide and binoculars. The owner kept several white cats in a rooftop cattery, which also served as an attraction, and there was a souvenir shop and postal outlet. Rentoul's lease expired in 1971, and the pylon and its lookout remained closed to the public for over a decade.
The pylon was reopened in 1982, with a new exhibition celebrating the bridge's 50th anniversary. In 1987 a "Bicentennial Exhibition" was opened to mark the 200th anniversary of European settlement in Australia in 1988.
The pylon was closed from April to November 2000 for the Roads & Traffic Authority and BridgeClimb to create a new exhibition called "Proud Arch". The exhibition focussed on Bradfield, and included a glass direction finder on the observation level, and various important heritage items.
The pylon again closed for four weeks in 2003 for the installation of an exhibit called "Dangerous Works", highlighting the dangerous conditions experienced by the original construction workers on the bridge, and two stained glass feature windows in memory of the workers.
In the 1950s and 1960s, there were occasional newspaper reports of climbers who had made illegal arch traversals of the bridge by night. In 1973 Philippe Petit walked across a wire between the two pylons at the southern end of the Sydney Harbour Bridge. Since 1998, BridgeClimb has made it possible for tourists to legally climb the southern half of the bridge. Tours run throughout the day, from dawn to night, and are only cancelled for electrical storms or high wind.
Groups of climbers are provided with protective clothing appropriate to the prevailing weather conditions, and are given an orientation briefing before climbing. During the climb, attendees are secured to the bridge by a wire lifeline. Each climb begins on the eastern side of the bridge and ascends to the top. At the summit, the group crosses to the western side of the arch for the descent. Each climb takes three and a half hours, including the preparations.
In December 2006, BridgeClimb launched an alternative to climbing the upper arches of the bridge. The Discovery Climb allows climbers to ascend the lower chord of the bridge and view its internal structure. From the apex of the lower chord, climbers ascend a staircase to a platform at the summit.
Since the opening, the bridge has been the focal point of much tourism and national pride.
In 1982, the 50th anniversary of the opening of the bridge was celebrated. For the first time since its opening in 1932, the bridge was closed to most vehicles with the exception of vintage vehicles, and pedestrians were allowed full access for the day. The celebrations were attended by Edward Judge, who represented Dorman Long.
Australia's bicentennial celebrations on 26 January 1988 attracted large crowds in the bridge's vicinity as merrymakers flocked to the foreshores to view the events on the harbour. The highlight was the biggest parade of sail ever held in Sydney: square-riggers from all over the world, surrounded by hundreds of smaller craft of every description, passing majestically under the Sydney Harbour Bridge. The day's festivities culminated in a fireworks display in which the bridge was the focal point of the finale, with fireworks streaming from the arch and roadway. This was to become the pattern for later firework displays.
The Harbour Bridge has been an integral part of the Sydney New Year's Eve celebrations, generally being used in spectacular ways during the fireworks displays at 21:00 and midnight. In recent times, the bridge has included a ropelight display on a framework in the centre of the eastern arch, which is used to complement the fireworks. The scaffolding and framework were clearly visible for some weeks before the event, revealing the outline of the design.
During the millennium celebrations in 2000, the Sydney Harbour Bridge was lit up with the word "Eternity", as a tribute to the legacy of Arthur Stace, a Sydney artist who for many years inscribed that word on pavements in chalk in beautiful copperplate writing despite the fact that he was illiterate.
The numbers for the New Year's Eve countdown also appear on the eastern side of the Bridge pylons.
In May 2000, the bridge was closed to vehicular access for a day to allow a special reconciliation march—the "Walk for Reconciliation"—to take place. This was part of a response to an Aboriginal Stolen Generations inquiry, which found widespread suffering had taken place amongst Australian Aboriginal children forcibly placed into the care of white parents in a little-publicised state government scheme. Between 200,000 and 300,000 people were estimated to have walked the bridge in a symbolic gesture of crossing a divide.
During the Sydney 2000 Olympics in September and October 2000, the bridge was adorned with the Olympic Rings. It was included in the Olympic torch's route to the Olympic stadium. The men's and women's Olympic marathon events likewise included the bridge as part of their route to the Olympic stadium. A fireworks display at the end of the closing ceremony ended at the bridge. The east-facing side of the bridge has been used several times since as a framework from which to hang static fireworks, especially during the elaborate New Year's Eve displays.
In 2005 Mark Webber drove a Williams-BMW Formula One car across the bridge.
In 2007, the 75th anniversary of its opening was commemorated with an exhibition at the Museum of Sydney, called "Bridging Sydney". An initiative of the Historic Houses Trust, the exhibition featured dramatic photographs and paintings with rare and previously unseen alternative bridge and tunnel proposals, plans and sketches.
On 18 March 2007, the 75th anniversary of the Sydney Harbour Bridge was celebrated. The occasion was marked with a ribbon-cutting ceremony by the governor, Marie Bashir, and the premier of New South Wales, Morris Iemma. The bridge was subsequently opened to the public to walk southward from Milsons Point or North Sydney. Several major roads, mainly in the CBD, were closed for the day. An Aboriginal smoking ceremony was held at 19:00.
Approximately 250,000 people (50,000 more than were registered) took part in the event. Bright yellow souvenir caps were distributed to walkers. A series of speakers placed at intervals along the bridge formed a sound installation. Each group of speakers broadcast sound and music from a particular era (e.g. King Edward VIII's abdication speech; Gough Whitlam's speech at Parliament House in 1975), the overall effect being that the soundscape would "flow" through history as walkers proceeded along the bridge. A light-show began after sunset and continued late into the night, the bridge being bathed in constantly changing, multi-coloured lighting, designed to highlight structural features of the bridge. In the evening the bright yellow caps were replaced by orange caps with a small, bright LED attached. The bridge was closed to walkers at about 20:30.
On 25 October 2009, turf was laid across the eight lanes of bitumen, and 6,000 people celebrated a picnic on the bridge accompanied by live music. The event was repeated in 2010. Although originally scheduled again in 2011, this event was moved to Bondi Beach due to traffic concerns about the prolonged closing of the bridge.
On 19 March 2012, the 80th anniversary of the Sydney Harbour Bridge was celebrated with a picnic dedicated to the stories of people with personal connections to the bridge. In addition, Google dedicated its Google Doodle on the 19th to the event.
The proposal to upgrade the bridge tolling equipment was announced by the NSW Roads Minister Duncan Gay.
At the time of construction, and until recently, the bridge was the longest single span steel arch bridge in the world. The bridge, its pylons and its approaches are all important elements in the townscape of areas both near and distant from it. The curved northern approach gives a grand sweeping entrance to the bridge with continually changing views of the bridge and harbour. The bridge has been an important factor in the pattern of growth of metropolitan Sydney, particularly in residential development in the post-World War II years. In the 1960s and 1970s the Central Business District extended to the northern side of the bridge at North Sydney, due in part to the easy access provided by the bridge and also to the increasing traffic problems associated with it.
Sydney Harbour Bridge was listed on the New South Wales State Heritage Register on 25 June 1999 having satisfied the following criteria.
The place is important in demonstrating the course, or pattern, of cultural or natural history in New South Wales.
The bridge is one of the most remarkable feats of bridge construction. At the time of construction and until recently it was the longest single span steel arch bridge in the world and is still in a general sense the largest.
"The archaeological remains are demonstrative of an earlier phase of urban development within Milsons Point and the wider North Sydney precinct. The walls are physical evidence that a number of 19th century residences existed on the site which were resumed and demolished as part of the Sydney Harbour Bridge construction".
The place is important in demonstrating aesthetic characteristics and/or a high degree of creative or technical achievement in New South Wales.
The bridge, its pylons and its approaches are all important elements in the townscape of areas both near and distant from it. The curved northern approach gives a grand sweeping entrance to the bridge, with continually changing views of the bridge and harbour.
The place has a strong or special association with a particular community or cultural group in New South Wales for social, cultural or spiritual reasons.
The bridge has been an important factor in the pattern of growth of metropolitan Sydney, particularly in residential development in the post-World War II years. In the 1960s and 1970s the Central Business District extended to the northern side of the bridge at North Sydney, due in part to the easy access provided by the bridge and also to the increasing traffic problems associated with it.
The place has potential to yield information that will contribute to an understanding of the cultural or natural history of New South Wales.
"The archaeological remains have some potential to yield information about the previous residential and commercial occupation of Milsons Point prior to the construction of the Sydney Harbour Bridge transport link".
The bridge was listed as a National Engineering Landmark by Engineers Australia in 1988, as part of its Engineering Heritage Recognition Program.
Saving Private Ryan
Saving Private Ryan is a 1998 American epic war film directed by Steven Spielberg and written by Robert Rodat. Set during the Invasion of Normandy in World War II, the film is known for its graphic portrayal of war and for the intensity of its 23-minute second scene, a depiction of the Omaha Beach assault during the Normandy landings. The film follows United States Army Rangers Captain John H. Miller (Tom Hanks) and his squad (Tom Sizemore, Edward Burns, Barry Pepper, Giovanni Ribisi, Vin Diesel, Adam Goldberg, and Jeremy Davies) as they search for a paratrooper, Private First Class James Francis Ryan (Matt Damon), the last surviving brother of four, the other three having been killed in action. The film was a co-production between DreamWorks Pictures, Paramount Pictures, Amblin Entertainment, and Mutual Film Company, with DreamWorks distributing the film in North America and Paramount releasing it internationally.
In 1996, producer Mark Gordon pitched Rodat's idea, which was inspired by the Niland brothers, to Paramount, which eventually began development on the project. Spielberg, who at the time was forming DreamWorks, came on board to direct, and Hanks joined the cast. After the cast went through training supervised by Marine veteran Dale Dye, principal photography started in June 1997 and lasted two months. The film's D-Day scenes were shot at Ballinesker Beach, Curracloe Strand, just east of Curracloe, County Wexford, Ireland, and used members of the reserve forces of the Irish Army as infantry for the D-Day landing.
Released on July 24, 1998, "Saving Private Ryan" received acclaim from critics and audiences for its performances (particularly Hanks's), realism, cinematography, score, screenplay, and Spielberg's direction, and was placed on many film critics' 1998 top ten lists. It was also a box office success, becoming the highest-grossing film of 1998 in the United States with $216.5 million domestically, and the second-highest-grossing film of 1998 worldwide with $481.8 million. It additionally grossed $44 million from its release on home video in May 1999. The film won several accolades, including Best Picture and Director at the Golden Globes, Producers Guild of America, Directors Guild of America, and Critics' Choice Awards. It was nominated for eleven Academy Awards at the 71st Academy Awards, winning five, including Best Director (Spielberg's second win in the category), Best Film Editing, Best Cinematography, Best Sound, and Best Sound Effects Editing, though it lost the Academy Award for Best Picture to "Shakespeare in Love" in a controversial Oscars upset.
Since its release", Saving Private Ryan" has been considered one of the greatest films ever made and has been lauded as influential on the war film genre. It is credited for renewing interest in World War II media. In 2007, the American Film Institute ranked "Saving Private Ryan" as the 71st-greatest American movie in AFI's 100 Years...100 Movies (10th Anniversary Edition) and in 2014, the film was selected for preservation in the National Film Registry by the Library of Congress as "culturally, historically, or aesthetically significant".
An elderly man visits the Normandy American Cemetery and Memorial with his family. At a tombstone, he falls to his knees in anguish.
On the morning of June 6, 1944, American soldiers land at Omaha Beach as part of the Normandy Invasion. They suffer heavy losses in assaulting fortified German defensive positions. Captain Miller of the 2nd Ranger Battalion leads a breakout from the beach. Elsewhere on the beach, a dead soldier lies face-down in the bloody surf; his pack is stenciled "Ryan,S".
In Washington, D.C., at the U.S. War Department, General George Marshall learns that three of the four sons of the Ryan family were killed in action and that the fourth son, James Francis Ryan, is with the 101st Airborne Division somewhere in Normandy. After reading Abraham Lincoln's Bixby letter aloud, Marshall orders Ryan to be brought home.
Three days after D-Day, Miller receives orders to find Ryan and bring him back. He chooses seven men from his company—T/Sgt. Horvath, Privates First Class Reiben and Caparzo, Privates Mellish and Jackson, and T/4 medic Wade—plus T/5 Upham, an interpreter from headquarters. They move out to Neuville, where they meet a squad of the 101st engaged against the enemy. Caparzo is killed by a German sniper, who is then killed by Jackson. They locate a Private James Ryan, only to learn that he is James Frederick Ryan, not the man they are looking for. From passing soldiers, Miller learns that Ryan is defending an important bridge in Ramelle.
Near Ramelle, Miller decides to neutralize a German machine gun position at a derelict radar station, despite his men's misgivings. Wade is killed in the process. At Upham's urging, Miller declines to execute a surviving German soldier, and sets him free. Losing confidence in Miller's leadership, Reiben declares his intention to desert, prompting a confrontation with Horvath. Miller defuses the standoff by disclosing his civilian career as a high school English teacher, about which his men had set up a betting pool; Reiben decides to stay.
At Ramelle, they find Ryan among a small group of paratroopers preparing to defend the key bridge against an imminent German attack. Miller tells Ryan that his brothers are dead, and that he was ordered to bring him home. Ryan is distressed about his brothers, but is unwilling to leave his post. Miller combines his unit with the paratroopers in defense of the bridge. He devises a plan to ambush the enemy with two .30-caliber guns, Molotov cocktails, anti-tank mines and improvised satchel charges made from socks.
Elements of the 2nd SS Panzer Division arrive with two Tiger tanks and two Marder tank destroyers, all protected by German infantry. Although the defenders inflict heavy damage on the Germans, nearly all of the paratroopers, along with Jackson, Mellish and Horvath, are killed; Upham is immobilized by fear. Miller attempts to destroy the bridge but is shot by the freed German prisoner from the radar station, who had rejoined a fighting unit. Miller crawls to retrieve the bridge detonator and fires ineffectually but defiantly with his pistol at an oncoming tank. As the tank reaches the bridge, an American P-51 Mustang flies overhead and destroys it, after which American armored units arrive to rout the remaining Germans. With the Germans in full retreat, Upham emerges from hiding and shoots the German prisoner dead, having witnessed him shoot Miller, but allows the other German soldiers to flee.
Reiben and Ryan are with Miller, who dies from his injuries. As the scene transitions to the present, Ryan is revealed to be the veteran from the beginning of the film, and is standing in front of Miller's grave expressing his gratitude for the sacrifices Miller and his unit made in the past. Ryan asks his wife if he was worthy of such sacrifice, to which she replies that he is. The film ends with Ryan saluting Miller's grave.
In 1994, Robert Rodat's wife gave him the bestseller "D-Day: June 6, 1944: The Climactic Battle of World War II" by historian Stephen Ambrose. While reading the book during an early morning walk in a small New Hampshire village, Rodat was "struck by a monument dedicated to those who had died in various wars, particularly because of the repeated last names of brothers who were killed in action". He was inspired by an actual family in Ambrose's book, the Nilands, which had lost two sons in the war and was thought to have lost a third; a fourth son was "snatched" out of Normandy by the War Department.
Rodat pitched the idea to producer Mark Gordon. Gordon then pitched it to Paramount Pictures, whose executives liked the idea and commissioned Rodat to write the script. Carin Sage at Creative Artists Agency read Rodat's script and brought it to the attention of Steven Spielberg, one of the agency's clients. Spielberg, who was then establishing DreamWorks Pictures, picked up the script and became interested in the film.
Spielberg had already demonstrated his interest in World War II themes with the films "1941", "Empire of the Sun", "Schindler's List", and the "Indiana Jones" series. Spielberg later co-produced the World War II themed television miniseries "Band of Brothers" and its counterpart "The Pacific" with Tom Hanks. When asked about this by "American Cinematographer", Spielberg said, "I think that World War II is the most significant event of the last 100 years; the fate of the baby boomers and even Generation X was linked to the outcome. Beyond that, I've just always been interested in World War II. My earliest films, which I made when I was about 14 years old, were combat pictures that were set both on the ground and in the air. For years now, I've been looking for the right World War II story to shoot, and when Robert Rodat wrote "Saving Private Ryan", I found it."
After Spielberg signed on to direct, Paramount and DreamWorks, which agreed to finance and produce the film together with Amblin Entertainment and Mutual Film Company, made a distribution deal under which DreamWorks would handle the film's domestic distribution while Paramount released it internationally. In exchange for the distribution rights to "Saving Private Ryan", Paramount retained domestic distribution rights to "Deep Impact", while DreamWorks acquired that film's international distribution.
In casting the film Spielberg sought to create a cast that "looked" the part, stating in an interview, "You know, the people in World War II actually looked different than people look today", adding to this end that he cast partly based on wanting the cast "to match the faces I saw on the newsreels".
Gordon and co-producer Gary Levinsohn were interested in having Tom Hanks appear in the film as Captain Miller. Gordon recounted, "Tom was enormously excited about it and said, 'Steven and I have always wanted to work together.'" Harrison Ford and Mel Gibson were initially considered for the role of Miller.
Before filming began, several of the film's stars, including Edward Burns, Tom Sizemore, Barry Pepper, Vin Diesel, Adam Goldberg, Giovanni Ribisi, and Tom Hanks, endured ten days of "boot camp" training led by Marine veteran Dale Dye and Warriors, Inc., a California company that specializes in training actors for realistic military portrayals. Matt Damon was trained separately, so the rest of the group, whose characters are supposed to feel resentment towards Damon's character, would not bond with him. Spielberg had stated that his main intention in forcing the actors to go through the boot camp was not to learn the proper techniques but rather "because I wanted them to respect what it was like to be a soldier". During filming, Sizemore was battling drug addiction and Spielberg required him to be drug tested every day. If he failed a test, he would be dismissed and all of his scenes would be reshot with a different actor.
The film's second scene is a 20-plus-minute sequence recounting the landing on the beaches of Normandy. Spielberg chose to include this particularly violent sequence in order "to bring the audience onto the stage with me", specifically noting that he did not want the "audience to be spectators", but rather he wanted to "demand them to be participants with those kids who had never seen combat before in real life, and get to the top of Omaha Beach together".
Filming began June 27, 1997, and lasted for two months. Spielberg wanted an almost exact replica of the Omaha Beach landscape, including sand and a bluff similar to the one where German forces were stationed; a near match was found at Ballinesker Beach, Curracloe Strand, just east of Curracloe, County Wexford, Ireland. Production of the sequence depicting the Omaha Beach landings cost US$12 million and involved up to 1,500 extras, some of whom were members of the Irish Reserve Defence Forces. Members of local reenactment groups such as the Second Battle Group were cast as extras to play German soldiers. In addition, twenty to thirty actual amputees were used to portray American soldiers maimed during the landing. Spielberg did not storyboard the sequence, as he wanted spontaneous reactions and for "the action to inspire me as to where to put the camera". Hanks recalled to Roger Ebert that although he realized it was a movie, the experience still hit him hard, stating, "The first day of shooting the D-Day sequences, I was in the back of the landing craft, and that ramp went down and I saw the first 1-2-3-4 rows of guys just getting blown to bits. In my head, of course, I knew it was special effects, but I still wasn't prepared for how tactile it was."
Some shooting was done in Normandy, at the Normandy American Cemetery and Memorial in Colleville-sur-Mer, Calvados. Other scenes were filmed in England, including a former British Aerospace factory in Hatfield, Hertfordshire, as well as Thame Park, Oxfordshire, and Wiltshire. Production was also due to take place in Seaham, County Durham, but government restrictions disallowed this. According to both Gordon and Levinsohn, the producers were hardly involved in the production, as Spielberg was entrusted with full creative control of the film. Both producers were involved only in raising foreign financing and handling international distribution. Gordon, however, said that Spielberg was "inclusive and gracious and enormously solicitous in terms of the development of the screenplay".
The historical representation of Charlie Company's actions, led by its commander, Captain Ralph E. Goranson, was well maintained in the opening sequence. The sequence and details of the events are very close to the historical record, including the sea sickness experienced by many of the soldiers as the landing craft moved toward the shoreline, significant casualties among the men as they disembarked from the boats, and difficulty linking up with adjacent units on the shore.
The distinctive "ping" of the US soldiers' M1 Garand rifles ejecting their ammunition clips is heard throughout the battle sequence. Many details of the Company's actions were depicted accurately; for instance, the correct code names for the sector Charlie Company assaulted, and adjacent sectors, were used. Included in the cinematic depiction of the landing was a follow-on mission of clearing a bunker and trench system at the top of the cliffs which was not part of the original mission objectives for Charlie Company, but which was undertaken after the assault on the beach.
The landing craft used included twelve actual World War II examples, 10 LCVPs and 2 LCMs, standing in for the British LCAs that the Ranger Companies rode in to the beach during Operation Overlord. The filmmakers used underwater cameras to better depict soldiers being hit by bullets in the water. Forty barrels of fake blood were used to simulate the effect of blood in the seawater. This degree of realism was more difficult to achieve when depicting World War II German armored vehicles, as few examples survive in operating condition. The Tiger I tanks in the film were copies built on the chassis of old, but functional, Soviet T-34 tanks. The two vehicles described in the film as Panzers were meant to portray Marder III tank destroyers. One was created for the film using the chassis of a Czech-built Panzer 38(t) tank similar to the construction of the original Marder III; the other was a cosmetically modified Swedish SAV m/43 assault gun, which also used the 38(t) chassis.
There are, however, historical inaccuracies in the film's depiction of the Normandy campaign. At the time of the mission, American forces from the two American beach areas, Utah and Omaha, had not yet linked up. In reality, a Ranger team operating out of the Omaha beach area would have had to move through the heavily enemy-occupied city of Carentan, or swim or boat across the estuary linking Carentan to the channel, or transfer by boat to the Utah landing area. On the other hand, US forces moving out of Utah would have had direct and much shorter routes, relatively unencumbered by enemy positions, and were already in contact with some teams from both US airborne divisions landed in the area.
The Utah beach landings, however, were relatively uncontested, with assault units landing on largely unoccupied beaches and experiencing far less action than the landings at Omaha. The filmmakers chose to begin the narrative with the more dramatic story of Omaha, despite the strategic inaccuracy of depicting a mission that could far more easily have been mounted from the other beach area. Another operational flaw is the depiction of the 2nd SS Panzer Division Das Reich as the adversary during the fictional Battle of Ramelle. The 2nd SS was not engaged in Normandy until July, and then at Caen against the British and Canadians, 100 miles (160 km) to the east. Furthermore, the Merderet River bridges were not an objective of the 101st Airborne Division but of the 82nd Airborne Division, part of Mission Boston.
Much has also been said about various "tactical errors" made by both the German and American forces in the film's climactic battle. Spielberg responded by saying that in many scenes he opted to replace sound military tactics and strict historical accuracy for dramatic effect. Some other technical errors were also made, such as the reversed orientation of the beach barriers and the tripod obstructions with a mine at the apex.
To achieve a tone and quality true to the story and reflective of the period in which it is set, Spielberg once again collaborated with cinematographer Janusz Kamiński, saying, "Early on, we both knew that we did not want this to look like a Technicolor extravaganza about World War II, but more like color newsreel footage from the 1940s, which is very desaturated and low-tech."
Kamiński had the protective coating stripped from the camera lenses, making them closer to those used in the 1940s. He explains that "without the protective coating, the light goes in and starts bouncing around, which makes it slightly more diffused and a bit softer without being out of focus." The cinematographer completed the overall effect by putting the negative through bleach bypass, a process that reduces brightness and color saturation. The shutter timing was set to 90 or 45 degrees for many of the battle sequences, as opposed to the standard of 180-degree timing. Kamiński clarifies, "In this way, we attained a certain staccato in the actors' movements and a certain crispness in the explosions, which makes them slightly more realistic."
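As an illustrative aside, not drawn from the production notes and assuming the standard cinema frame rate of 24 frames per second, the shutter angle determines how long each frame is exposed:

$$ t_{\text{exposure}} = \frac{\theta_{\text{shutter}}}{360^{\circ}} \times \frac{1}{\text{frame rate}} $$

At 24 fps this works out to roughly 1/48 s per frame at the conventional 180 degrees, 1/96 s at 90 degrees, and 1/192 s at 45 degrees, so the narrower shutter angles sharply reduce motion blur within each frame and produce the crisp, staccato movement Kamiński describes.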
"Saving Private Ryan" was released in 2,463 theaters on July 24, 1998, and grossed $30.5 million on its opening weekend, opening to number one and remained at the top for four weeks until "Blade" topped the film in its fifth week of release. The film grossed $216.5 million in the US and Canada and $265.3 million in other territories, bringing its worldwide total to $481.8 million. It was the highest-grossing US film of 1998, and was the second-highest-grossing film of 1998 worldwide, finishing behind "Armageddon". Box Office Mojo estimates that the film sold over 45.74 million tickets in the United States and Canada.
"Saving Private Ryan" received acclaim from critics and audiences; much of the praise went to Spielberg's directing, the realistic battle scenes, the actors' performances, John Williams' score, the cinematography, editing, and screenplay. The film has a 'certified fresh' rating of 93% on Rotten Tomatoes based on 139 reviews with an average score of 8.64/10. The consensus states "Anchored by another winning performance from Tom Hanks, Steven Spielberg's unflinchingly realistic war film virtually redefines the genre." The film also has a score of 91 out of 100 on Metacritic based on 35 critic reviews indicating "universal acclaim".
Many critics' associations, such as the New York Film Critics Circle and the Los Angeles Film Critics Association, chose "Saving Private Ryan" as Film of the Year. Roger Ebert gave it four stars out of four and called it "a powerful experience". Janet Maslin of "The New York Times" called it "the finest war movie of our time". Gene Siskel, Ebert's co-host and critic for the "Chicago Tribune", said that the film "accomplishes something I had been taught was most difficult—making an action-filled anti-war film or, at least, one that doesn't in some way glorify or lie about combat". On their program "At the Movies", Siskel and Ebert named the film the fourth- and third-best film of 1998, respectively. Writing for "TIME", Richard Schickel said that it was "a war film that, entirely aware of its genre's conventions, transcends them as it transcends the simplistic moralities that inform its predecessors, to take the high, morally haunting ground". Owen Gleiberman of "Entertainment Weekly" praised the film, saying that "Spielberg has captured the hair-trigger instability of modern combat." Kenneth Turan of the "Los Angeles Times" also praised the film: "A powerful and impressive milestone in the realistic depiction of combat, "Saving Private Ryan" is as much an experience we live through as a film we watch on screen."
The film earned some negative reviews from critics. Writing for "Chicago Reader", Jonathan Rosenbaum gave the film two stars and felt that "it has a few pretty good action moments, a lot of spilled guts, a few moments of drama that don't seem phony or hollow, some fairly strained period ambience, and a bit of sentimental morphing that reminds me of "Forrest Gump"." Andrew Sarris of "Observer" wrote that the film was "tediously manipulative despite its Herculean energy".
The film also drew some criticism for ignoring the contributions of several other countries to the D-Day landings in general and at Omaha Beach specifically. The most direct example of the latter is that during the actual landing, the 2nd Rangers disembarked from British ships and were taken to Omaha Beach by Royal Navy landing craft (LCAs). The film depicts them as being in United States Coast Guard-crewed craft (LCVPs and LCMs) from an American ship. This criticism was far from universal, with other critics recognizing the director's intent to make an "American" film. The film was not released in Malaysia after Spielberg refused to cut the violent scenes; however, it was finally released there on DVD with an 18SG certificate in 2005.
Many World War II veterans stated that the film was the most realistic depiction of combat they had ever seen. The film was so realistic that some combat veterans of D-Day and Vietnam left theaters rather than finish watching the opening scene depicting the Normandy invasion. Their visits to posttraumatic stress disorder counselors rose in number after the film's release, and many counselors advised "'more psychologically vulnerable'" veterans to avoid watching it. The Department of Veterans Affairs set up a nationwide hotline for veterans who were affected by the film, and less than two weeks after the film was released it had already received over 170 calls.
The film has gained criticism from some war veterans. Film director and military veteran Oliver Stone has accused the film of promoting "the worship of World War II as the good war," and has placed it alongside films such as "Gladiator" and "Black Hawk Down" that he believes were well-made, but may have inadvertently contributed to Americans' readiness for the 2003 invasion of Iraq. In defense of the film's portrait of warfare, Brian De Palma commented, "The level of violence in something like "Saving Private Ryan" makes sense because Spielberg is trying to show something about the brutality of what happened." Actor Richard Todd, who performed in "The Longest Day" and was among the first Allied soldiers to land in Normandy (Operation Tonga), said the film was "Rubbish. Overdone." American academic Paul Fussell, who saw combat in France during World War II, objected to what he described as, "the way Spielberg's "Saving Private Ryan", after an honest, harrowing, 15-minute opening visualizing details of the unbearable bloody mess at Omaha Beach, degenerated into a harmless, uncritical patriotic performance apparently designed to thrill 12-year-old boys during the summer bad-film season. Its genre was pure cowboys and Indians, with the virtuous cowboys of course victorious." Historian Jim DiEugenio took note that the film was actually "90 percent fiction" and that Tom Hanks knew this, with his goal being to "...commemorate World War II as the Good War and to depict the American role in it as crucial".
The film was nominated for eleven Academy Awards at the 71st Academy Awards, including Best Picture, Best Actor for Tom Hanks, and Best Original Screenplay. It went on to win five: Best Cinematography, Best Sound, Best Sound Effects Editing, Best Film Editing, and Best Director for Spielberg, his second win in that category. In a controversial upset, the film lost the Best Picture award to "Shakespeare in Love", making it one of the few films to win Best Director without also winning Best Picture. The Academy's decision not to award the film the Best Picture Oscar has drawn much criticism in the years since, with many considering it one of the biggest snubs in the ceremony's history. In a 2015 poll, Academy members indicated that, given a second chance, they would award the Oscar for Best Picture to "Saving Private Ryan".
The film also won the Golden Globes for Best Motion Picture – Drama and Director, the BAFTA Award for Special Effects and Sound, the Directors Guild of America Award, a Grammy Award for Best Film Soundtrack, the Producers Guild of America Golden Laurel Award, and the Saturn Award for Best Action, Adventure, or Thriller Film.
Today, "Saving Private Ryan" is widely considered to be one of the greatest films ever made. The film has been frequently lauded as an influential work in the war film genre and is credited with contributing to a resurgence in America's interest in World War II. Old and new films, video games, and novels about the war enjoyed renewed popularity after its release. Many scenes from the film were directly translated to scenarios in Electronic Arts 2002 games "" and "". The film's use of desaturated colors, hand-held cameras, and tight angles has profoundly influenced subsequent films and video games.
The American Film Institute has included "Saving Private Ryan" in many of its lists, ranking it as the 71st-greatest American movie in AFI's 100 Years...100 Movies (10th Anniversary Edition), as well as the 45th-most thrilling film in AFI's 100 Years...100 Thrills, the 10th-most inspiring in AFI's 100 Years...100 Cheers, and the eighth-best epic film in "AFI's 10 Top 10". In 2014, the film was selected for preservation in the National Film Registry by the Library of Congress, being deemed "culturally, historically, or aesthetically significant". "Saving Private Ryan" was voted as the greatest war film in a 2008 Channel 4 poll of the 100 greatest war films. In a readers poll for "Rolling Stone", it was voted as the 18th-best film of the 1990s. "Empire" named the film as the 39th-greatest film of all time.
"Saving Private Ryan" has also received critical acclaim for its realistic portrayal of World War II combat. In particular, the sequence depicting the Omaha Beach landings was named the "best battle scene of all time" by "Empire" magazine and was ranked number one on "TV Guide's" list of the "50 Greatest Movie Moments". Filmmaker Robert Altman wrote a letter to Spielberg stating, ""Private Ryan" was awesome — best I've seen." Filmmaker Quentin Tarantino has expressed admiration for the film and has cited it as an influence on his 2009 film, "Inglourious Basterds". Prior to making "Dunkirk", filmmaker Christopher Nolan consulted with Spielberg on how to portray the war scenes.
On Veterans Day from 2001 to 2004, ABC aired the film uncut and with limited commercial interruption. The network airings were given a TV-MA rating, as the violent battle scenes and the profanity were left intact. The 2004 airing was marred by pre-emptions in many markets because of the language, in the wake of the backlash over Super Bowl XXXVIII's halftime show controversy. Critics and veterans' groups such as the American Legion and the Veterans of Foreign Wars assailed those stations and their owners, including Sinclair Broadcast Group (which owned fourteen ABC affiliates at the time), Hearst-Argyle Television (which owned twelve), Scripps Howard Broadcasting (which owned six), Belo (which owned four), and Cox Enterprises (which owned three), for allegedly putting profits ahead of programming and honoring World War II soldiers, saying the stations made more money running their own programming instead of being paid by the network to carry the film, especially during a sweeps period.
A total of 65 ABC affiliates—28% of the network—did not clear the available timeslot for the film, even though The Walt Disney Company, ABC's parent, offered to pay any fines levied by the Federal Communications Commission for broadcasting the movie's strong language. In the end, however, no complaints were lodged against ABC affiliates that showed "Ryan", perhaps because even conservative watchdogs like the Parents Television Council supported the unedited rebroadcast of the film. Additionally, some ABC affiliates near the affected markets, such as Youngstown, Ohio, affiliate WYTV (viewable in parts of the Columbus, Cleveland, and Pittsburgh markets, none of which aired the film), Gainesville, Florida, affiliate WCJB-TV (viewable in parts of the Orlando and Tampa markets), and the network's affiliates in Hartford, Connecticut, and Providence, Rhode Island (viewable in parts of the Boston and Springfield markets), still aired the film and gave those nearby markets the option of viewing it. TNT and Turner Classic Movies have also broadcast the film, and AMC has held broadcast rights to the film since December 2019.
The film was released on home video in May 1999 with a VHS release that earned over $44 million. The DVD release followed in November of the same year and was one of the best-selling titles of the year, with over 1.5 million units sold. The DVD was released in two separate versions: one with Dolby Digital and the other with DTS 5.1 surround sound. Apart from the different 5.1 tracks, the two DVDs are identical. The film was also issued as a limited two-disc LaserDisc in November 1999, making it one of the last feature films to be issued in that format, as LaserDisc manufacturing and distribution ceased by year's end.
In 2004, a "Saving Private Ryan" special-edition DVD was released to commemorate the 60th anniversary of D-Day. This two-disc edition was also included in a box set titled "World War II Collection", along with two documentaries produced by Spielberg, "Price For Peace" (about the Pacific War) and "Shooting War" (about war photographers, narrated by Tom Hanks). The film was released on Blu-ray Disc on April 26, 2010 in the UK and on May 4, 2010 in the US, as part of Paramount Home Video's premium Sapphire Series. However, only weeks after its release, Paramount issued a recall due to audio synchronization problems. The studio issued an official statement acknowledging the problem, which they attributed to an authoring error by Technicolor that escaped the quality control process, and that they had already begun the process of replacing the defective discs.
On May 8, 2018, Paramount Home Media Distribution released "Saving Private Ryan" on Ultra HD Blu-ray to celebrate the 20th anniversary of the release of the film.
Shaggy dog story
In its original sense, a shaggy dog story or yarn is an extremely long-winded anecdote characterized by extensive narration of typically irrelevant incidents and terminated by an anticlimax.
Shaggy dog stories play upon the audience's preconceptions of joke-telling. The audience listens to the story with certain expectations, which are either simply not met or met in some entirely unexpected manner. A lengthy shaggy dog story derives its humour from the fact that the joke-teller held the attention of the listeners for a long time (such jokes can take five minutes or more to tell) for no reason at all, as the end resolution is essentially meaningless. The nature of their delivery is reflected in the English idiom "spin a yarn", by way of analogy with the production of yarn.
The eponymous shaggy dog story serves as the archetype of the genre. The story builds up through repeated emphasis on the dog's exceptional shagginess. It culminates in a character reacting to the animal by stating, "That dog's not so shaggy." The expectations of the audience, built up by the presentation of the story, both in the details (that the dog is shaggy) and in the delivery of a punchline, are thus subverted. Ted Cohen gives the following example of this story:
However, authorities disagree as to whether this particular story is the archetype after which the category is named. Eric Partridge, for example, provides a very different story, as do William and Mary Morris in "The Morris Dictionary of Word and Phrase Origins".
According to Partridge and the Morrises, the archetypical shaggy dog story involves an advertisement placed in the "Times" announcing a search for a shaggy dog. In the Partridge story, an aristocratic family living in Park Lane is searching for a lost dog, and an American answers the advertisement with a shaggy dog that he has found and personally brought across the Atlantic, only to be received by the butler at the end of the story who takes one look at the dog and shuts the door in his face, saying, "But not so shaggy as "that", sir!" In the Morris story, the advertiser is organizing a competition to find the shaggiest dog in the world, and after a lengthy exposition of the search for such a dog, a winner is presented to the aristocratic instigator of the competition, who says, "I don't think he's so shaggy."
A typical shaggy dog story occurs in Mark Twain's book about his travels west, "Roughing It". Twain's friends encourage him to go find a man called Jim Blaine when he is properly drunk, and ask him to tell "the stirring story about his grandfather's old ram." Twain, encouraged by his friends who have already heard the story, finally finds Blaine, an old silver miner, who sets out to tell Twain and his friends the tale. Blaine starts out with the ram ("There never was a bullier old ram than what he was"), and goes on for four more mostly dull but occasionally hilarious unparagraphed pages.
Along the way, Blaine tells many stories, each of which connects back to the one before by some tenuous thread, and none of which has to do with the old ram. Among these stories are: a tale of boiled missionaries; of a lady who borrows a false eye, a peg leg, and the wig of a coffin-salesman's wife; and a final tale of a man who gets caught in machinery at a carpet factory and whose "widder bought the piece of carpet that had his remains wove in ..." As Blaine tells the story of the carpet man's funeral, he begins to fall asleep, and Twain, looking around, sees his friends "suffocating with suppressed laughter." They now inform him that "at a certain stage of intoxication, no human power could keep [Blaine] from setting out, with impressive unction, to tell about a wonderful adventure which he had once had with his grandfather's old ram — and the mention of the ram in the first sentence was as far as any man had heard him get, concerning it."
A lengthy shaggy dog story (roughly 2,500 words in English translation) takes place in chapter 10 of Nikolai Gogol's novel "Dead Souls", first published in 1842. The novel's central character, Chichikov, arrives in a Russian town and begins purchasing deceased serfs ("souls") from the local landowners, thus relieving the landowners of a tax burden based on an infrequent census. As confusion and suspicion about Chichikov's motives spreads, local officials meet to try to discern Chichikov's motives and background. At one point, the local postmaster interrupts: "He, gentlemen, my dear sir, is none other than Captain Kopeikin!" None of the others in the room are familiar with a Captain Kopeikin, and the postmaster begins to tell his story.
Captain Kopeikin was seriously wounded in battle abroad during military conflict with Napoleonic France in 1812. He was sent back to St. Petersburg due to the severity of his injuries, which include the loss of an arm and a leg. At the time, financial or other support was not readily provided to soldiers in such condition as a result of combat wounds, and Captain Kopeikin struggles to pay for room and board with his quickly-depleted funds. As his situation becomes more and more dire, Kopeikin takes it upon himself to confront the leader of "a kind of high commission, a board or whatever, you understand, and the head of it is general-in-chief so-and-so." It is understood that this senior military figure might have the means to assist Kopeikin or put in a word for a pension of some kind. This is followed by a lengthy summary of Kopeikin's meetings and repeated attempts to solicit help from this leader over a period of time. Eventually the postmaster states, "But forgive me, gentlemen, here begins the thread, one might say, the intrigue of the novel" and begins to introduce a band of robbers into the story.
At this point, a listener interrupts apologetically, "You yourself said that Captain Kopeikin was missing an arm and a leg, while Chichikov..." The postmaster suddenly slaps himself on the head and admits this inconsistency had not occurred to him at the start and "admitted that the saying 'Hindsight is the Russian man's forte,' was perfectly correct."
Isaac Asimov's collection of stories "Buy Jupiter and Other Stories" includes a story titled "Shah Guido G." In his background notes, Asimov identifies the tale as a shaggy dog story, and explains that the title is a play on "shaggy dog".
Sushi
Sushi is traditionally made with medium-grain white rice, though it can be prepared with brown rice or short-grain rice. It is very often prepared with seafood, such as squid, eel, yellowtail, salmon, tuna or imitation crab meat. Many types of sushi are vegetarian. It is often served with pickled ginger ("gari"), "wasabi", and soy sauce. Daikon radish or pickled daikon ("takuan") are popular garnishes for the dish.
Sushi is sometimes confused with sashimi, a related dish in Japanese cuisine that consists of thinly sliced raw fish, or occasionally meat, and an optional serving of rice.
Sushi originates in a Southeast Asian dish, known as "narezushi" ( – "salted fish"), stored in fermented rice for possibly months at a time. The lacto-fermentation of the rice prevented the fish from spoiling; the rice would be discarded before consumption of the fish. This early type of sushi became an important source of protein for its Japanese consumers. The term "sushi" literally means "sour-tasting" and comes from an antiquated し ("shi") terminal-form conjugation, 酸し "sushi", no longer used in other contexts, of the adjectival verb 酸い "sui" "to be sour"; the overall dish has a sour and umami or savoury taste. Narezushi still exists as a regional specialty, notably as "funa-zushi" from Shiga Prefecture.
Vinegar began to be added to the preparation of narezushi in the Muromachi period (1336–1573) for the sake of enhancing both taste and preservation. In addition to increasing the sourness of the rice, the vinegar significantly increased the dish's longevity, causing the fermentation process to be shortened and eventually abandoned. The primitive sushi would be further developed in Osaka, where over several centuries it became "oshi-zushi" or "hako-zushi"; in this preparation, the seafood and rice were pressed into shape with wooden (typically bamboo) molds.
It was not until the Edo period (1603–1868) that fresh fish was served over vinegared rice and nori. The particular style of today's "nigirizushi" became popular in Edo (contemporary Tokyo) in the 1820s or 1830s. One common story of "nigirizushi"'s origins is of the chef Hanaya Yohei (1799–1858), who invented or perfected the technique in 1824 at his shop in Ryōgoku. The dish was originally termed "Edomae zushi" as it used freshly caught fish from the "Edo-mae" (Edo or Tokyo Bay); the term "Edomae nigirizushi" is still used today as a by-word for quality sushi, regardless of its ingredients' origins.
The earliest written mention of "sushi" in English described in the "Oxford English Dictionary" is in an 1893 book, "A Japanese Interior", where it mentions sushi as "a roll of cold rice with fish, sea-weed, or some other flavoring". There is an earlier mention of sushi in James Hepburn's Japanese-English dictionary from 1873, and an 1879 article on Japanese cookery in the journal "Notes and Queries".
The common ingredient in all types of "sushi" is vinegared "sushi" rice. Fillings, toppings, condiments, and preparation vary widely.
Due to "rendaku" consonant mutation, "sushi" is spelled with "zu" instead of "su" when a prefix is attached, as in "nigirizushi".
"Chirashizushi" (, "scattered sushi", also referred to as "barazushi") serves the rice in a bowl and tops it with a variety of raw fish and vegetable garnishes. It is commonly eaten because it is filling, fast and easy to make. It is eaten annually on Hinamatsuri in March.
"Inarizushi" () is a pouch of fried tofu typically filled with "sushi" rice alone. Tales tell that "inarizushi" is named after the Shinto god "Inari". Foxes, messengers of Inari, are believed to have a fondness for fried tofu, and an "Inari-zushi" roll has pointed corners that resemble fox ears.
Regional variations include pouches made of a thin omelette ("fukusa-zushi" or "chakin-zushi") instead of tofu. It should not be confused with "inari maki", which is a roll filled with flavored fried tofu.
"Cone sushi" is a variant of "inarizushi" originating in Hawaii that may include green beans, carrots, or gobo along with rice, wrapped in a triangular "abura-age" piece. It is often sold in "okazu-ya" (Japanese delis) and as a component of bento boxes.
"Makizushi" (, "rolled sushi"), "norimaki" (, "Nori roll") or "makimono" (, "variety of rolls") is a cylindrical piece, formed with the help of a bamboo mat known as a "makisu" (). "Makizushi" is generally wrapped in nori (seaweed), but is occasionally wrapped in a thin omelette, soy paper, cucumber, or shiso (perilla) leaves. "Makizushi" is usually cut into six or eight pieces, which constitutes a single roll order. Below are some common types of "makizushi", but many other kinds exist.
"Futomaki" (, "thick, large or fat rolls") is a large cylindrical piece, usually with "nori" on the outside. A typical "futomaki" is in diameter. They are often made with two, three, or more fillings that are chosen for their complementary tastes and colors. During the evening of the Setsubun festival, it is traditional in the Kansai region to eat uncut futomaki in its cylindrical form, where it is called "ehō-maki" (, lit. happy direction rolls). By 2000 the custom had spread to all of Japan. Futomaki are often vegetarian, and may utilize strips of cucumber, "kampyō" gourd, "takenoko" bamboo shoots, or lotus root. Strips of "tamagoyaki" omelette, tiny fish roe, chopped tuna, and "oboro (food)" whitefish flakes are typical non-vegetarian fillings. Traditionally, the vinegared rice is lightly seasoned with salt and sugar. Popular protein ingredients are fish cakes, imitation crab meat, eggs, tunas, or shrimps. Vegetables usually include cucumbers, lettuces, and (pickled radish).
Short grain white rice is usually used, although short-grain brown rice, like olive oil on "nori", is now becoming more widespread among the health-conscious. Rarely, sweet rice is mixed in "makizushi" rice.
Nowadays, the rice in "makizushi" can take many forms, including black rice, boiled rice, and other cereals. Besides the common ingredients listed above, some varieties may include cheese, spicy cooked squid, "yakiniku", "kamaboko", lunch meat, sausage, bacon or spicy tuna. The "nori" may be brushed with sesame oil or sprinkled with sesame seeds. In a variation, sliced pieces of "makizushi" may be lightly fried with an egg coating.
"Tamago makizushi" (玉子巻き寿司) is "makizushi" is rolled out by a thin egg. "Tempura Makizushi" (天ぷら 巻き寿司) or "Agezushi" (揚げ寿司ロール) is a fried version of the dish.
"Hosomaki" (, "thin rolls") is a small cylindrical piece, with "nori" on the outside. A typical "hosomaki" has a diameter of about . They generally contain only one filling, often tuna, cucumber, "kanpyō", nattō, umeboshi paste, squid with shiso (Japanese herb). "Kappamaki", () a kind of "Hosomaki" filled with cucumber, is named after the Japanese legendary water imp fond of cucumbers called the kappa. Traditionally, "kappamaki" is consumed to clear the palate between eating raw fish and other kinds of food, so that the flavors of the fish are distinct from the tastes of other foods.
"Tekkamaki" () is a kind of "hosomaki" filled with raw tuna. Although it is believed that the word "tekka", meaning "red hot iron", alludes to the color of the tuna flesh or salmon flesh, it actually originated as a quick snack to eat in gambling dens called "tekkaba" (), much like the sandwich. "Negitoromaki" () is a kind of "hosomaki" filled with scallion ("negi") and chopped tuna ("toro"). Fatty tuna is often used in this style. "Tsunamayomaki" () is a kind of "hosomaki" filled with canned tuna tossed with mayonnaise.
"Ehōmaki" (, "lucky direction roll") is a roll composed of seven ingredients considered to be lucky. Ehōmaki are often eaten on setsubun in Japan. The typical ingredients include "kanpyō", egg, eel, and "shiitake" mushrooms. "Ehōmaki" often include other ingredients too. People usually eat the ehōmaki while facing the direction considered to be auspicious that year.
"Temaki" (, "hand roll") is a large cone-shaped piece of "nori" on the outside and the ingredients spilling out the wide end. A typical "temaki" is about long, and is eaten with fingers because it is too awkward to pick it up with chopsticks. For optimal taste and texture, "temaki" must be eaten quickly after being made because the "nori" cone soon absorbs moisture from the filling and loses its crispness, making it somewhat difficult to bite through. For this reason, the "nori" in pre-made or take-out temaki is sealed in plastic film which is removed immediately before eating.
"Narezushi" (, "matured "sushi"") is a traditional form of fermented "sushi". Skinned and gutted fish are stuffed with salt, placed in a wooden barrel, doused with salt again, then weighed down with a heavy tsukemonoishi (pickling stone). As days pass, water seeps out and is removed. After six months, this "sushi" can be eaten, remaining edible for another six months or more.
The most famous variety of "narezushi" are the ones offered as a specialty dish of Shiga Prefecture, particularly the "funa-zushi" made from fish of the crucian carp genus, the authentic version of which calls for the use "nigorobuna", a particular locally differentiated variety of wild goldfish endemic to Lake Biwa.
"Nigirizushi" (, "hand-pressed sushi") consists of an oblong mound of sushi rice that the chef presses between the palms of the hands to form an oval-shaped ball, and a topping (the "neta") draped over the ball. It is usually served with a bit of "wasabi"; "neta" are typically fish such as salmon, tuna or other seafood. Certain toppings are typically bound to the rice with a thin strip of "nori", most commonly octopus ("tako"), freshwater eel ("unagi"), sea eel ("anago"), squid ("ika"), and sweet egg ("tamago"). One order of a given type of fish typically results in two pieces, while a sushi set (sampler dish) may contain only one piece of each topping.
"Gunkanmaki" (, "warship roll") is a special type of "nigirizushi": an oval, hand-formed clump of sushi rice that has a strip of "nori" wrapped around its perimeter to form a vessel that is filled with some soft, loose or fine-chopped ingredient that requires the confinement of "nori" such as roe, "nattō", oysters, "uni" (sea urchin roe), corn with mayonnaise, scallops, and quail eggs. "Gunkan-maki" was invented at the "Ginza Kyubey" restaurant in 1941; its invention significantly expanded the repertoire of soft toppings used in sushi.
"Temarizushi" (, "ball sushi") is a sushi made by pressing rice and fish into a ball-shaped form by hand using a plastic wrap.
"Oshizushi", also known as "hako-zushi", is a pressed sushi from the Kansai region, a favorite and specialty of Osaka. A block-shaped piece is formed using a wooden mold, called an "oshibako". The chef lines the bottom of the "oshibako" with the toppings, covers them with sushi rice, and then presses the lid of the mold down to create a compact, rectilinear block. The block is removed from the mold and then cut into bite-sized pieces. Particularly famous is "battera" (pressed mackerel sushi) or "saba zushi". In "oshizushi", all the ingredients are either cooked or cured and raw fish is never used.
The increasing popularity of "sushi" around the world has resulted in variations typically found in the Western world, but rarely in Japan. A notable exception to this is the use of salmon, which was introduced by a Norwegian businessman tasked with helping the Norwegian salmon industry in the 1980s. Such creations to suit the Western palate were initially fueled by the invention of the California roll (a "norimaki" with crab (later, imitation crab), cucumber, and avocado). A wide variety of popular rolls ("norimaki" and "uramaki") has evolved since. Norway roll is another variant of "uramakizushi" filled with "tamago" (omelette), imitation crab and cucumber, rolled with "shiso" leaf and "nori", topped with slices of Norwegian salmon, garnished with lemon and mayonnaise.
Uramaki ("inside-out roll") is a medium-sized cylindrical piece with two or more fillings, and was developed as a result of the creation of the California roll, as a method originally meant to hide the nori. "Uramaki" differs from other "makimono" because the rice is on the outside and the "nori" inside. The filling is in the center surrounded by "nori", then a layer of rice, and optionally an outer coating of some other ingredients such as roe or toasted sesame seeds. It can be made with different fillings, such as tuna, crab meat, avocado, mayonnaise, cucumber or carrots.
Examples of variations include the rainbow roll (an inside-out topped with thinly sliced "maguro, hamachi, ebi, sake" and avocado) and the caterpillar roll (an inside-out topped with thinly sliced avocado). Also commonly found is the "rock and roll" (an inside-out roll with barbecued freshwater eel and avocado with toasted sesame seeds on the outside).
In Japan, "uramaki" is an uncommon type of "makimono"; because sushi is traditionally eaten by hand in Japan, the outer layer of rice can be quite difficult to handle with fingers.
"Futomaki" is a more popular variation of sushi within the United States, and comes in variations that take their names from their places of origin. Other rolls may include a variety of ingredients, including chopped scallops, spicy tuna, beef or chicken teriyaki roll, okra, and assorted vegetables such as cucumber and avocado, and the tempura roll, where shrimp tempura is inside the roll or the entire roll is battered and fried tempura-style. In the Southern United States, many sushi restaurants prepare rolls using crawfish. Sometimes, rolls are made with brown rice or black rice, which appear in Japanese cuisine as well.
Per Food and Drug Administration regulations, raw fish served in the United States must be frozen prior to serving in order to kill parasites. Because of this and the relative difficulty of acquiring fresh seafood compared to Japan, raw seafood (e.g., sashimi) is not as prevalent a component in American-style sushi.
Since rolls are often made to order, it is not unusual for the customer to specify the exact ingredients desired (e.g. salmon roll, cucumber roll, avocado roll, tuna roll, shrimp or tuna tempura roll, etc.). Though the menu names of dishes often vary by restaurant, some examples include:
All "sushi" has a base of specially prepared rice, complemented with other ingredients.
"Sushi-meshi" (also known as "Su-meshi" , "shari" , or "gohan" ) is a preparation of white, short-grained, Japanese rice mixed with a dressing consisting of rice vinegar, sugar, salt, and occasionally kombu and "sake". It has to be cooled to room temperature before being used for a filling in a "sushi" or else it will get too sticky while being seasoned. Traditionally, the mixing is done with a hangiri, which is a round, flat-bottom wooden tub or barrel, and a wooden paddle (shamoji).
Sushi rice is prepared with short-grain Japanese rice, which has a consistency that differs from long-grain strains such as those from India, Sri Lanka, Bangladesh, Thailand, and Vietnam. The essential quality is its stickiness or glutinousness, although the type of rice used for sushi is different from glutinous rice. Freshly harvested rice ("shinmai") typically contains too much water, and requires extra time to drain the rice cooker after washing. In some fusion cuisine restaurants, short-grain brown rice and wild rice are also used.
There are regional variations in sushi rice, and individual chefs have their own methods. Most of the variations are in the rice vinegar dressing: the Kantō region (or East Japan) version of the dressing commonly uses more salt; in the Kansai region (or West Japan), the dressing has more sugar.
The black seaweed wrappers used in "makimono" are called "nori" (海苔). "Nori" is a type of algae, traditionally cultivated in the harbors of Japan. Originally, algae was scraped from dock pilings, rolled out into thin, edible sheets, and dried in the sun, in a process similar to making rice paper. Today, the commercial product is farmed, processed, toasted, packaged, and sold in sheets.
The size of a "nori" sheet influences the size of "makimono". A full-size sheet produces "futomaki", and a half produces "hosomaki" and "temaki". To produce "gunkan" and some other "makimono", an appropriately-sized piece of "nori" is cut from a whole sheet.
"Nori" by itself is an edible snack and is available with salt or flavored with teriyaki sauce. The flavored variety, however, tends to be of lesser quality and is not suitable for sushi.
When making "fukusazushi", a paper-thin omelette may replace a sheet of "nori" as the wrapping. The omelette is traditionally made on a rectangular omelette pan, known as a "makiyakinabe", and used to form the pouch for the rice and fillings.
The ingredients used inside sushi are called "gu", and are, typically, varieties of fish. For culinary, sanitary, and aesthetic reasons, the minimum quality and freshness of fish to be eaten raw must be superior to that of fish which is to be cooked. Sushi chefs are trained to recognize important attributes, including smell, color, firmness, and freedom from parasites that may go undetected in commercial inspection. Commonly used fish are tuna ("maguro, shiro-maguro"), Japanese amberjack, yellowtail ("hamachi"), snapper ("kurodai"), mackerel ("saba"), and salmon ("sake"). The most valued sushi ingredient is "toro," the fatty cut of the fish. This comes in a variety of "ōtoro" (often from the bluefin species of tuna) and "chūtoro", meaning "middle toro", implying that it is halfway into the fattiness between "toro" and the regular cut. "Aburi" style refers to "nigiri" sushi where the fish is partially grilled (topside) and partially raw. Most nigiri sushi will have completely raw toppings, called "neta".
Other seafoods such as squid ("ika"), eel ("anago" and "unagi"), pike conger ("hamo"), octopus ("tako"), shrimp ("ebi" and "amaebi"), clam ("mirugai", "aoyagi" and "akagai"), fish roe ("ikura", "masago", "kazunoko" and "tobiko"), sea urchin ("uni"), crab ("kani"), and various kinds of shellfish (abalone, prawn, scallop) are the most popular seafoods in sushi. Oysters, however, are less common, as the taste is not thought to go well with the rice. "Kani kama", or imitation crab stick, is commonly substituted for real crab, most notably in California rolls.
Pickled daikon radish ("takuan") in "shinko maki", pickled vegetables ("tsukemono"), fermented soybeans ("nattō") in "nattō maki", avocado, cucumber in "kappa maki", asparagus, yam, pickled ume ("umeboshi"), gourd ("kanpyō"), burdock ("gobo"), and sweet corn (possibly mixed with mayonnaise) are also used in sushi.
Tofu and eggs are also common; eggs appear in the form of a slightly sweet, layered omelette called "tamagoyaki", and raw quail eggs are used as a "gunkan-maki" topping.
Sushi is commonly eaten with condiments. Sushi may be dipped in "shōyu" (soy sauce), and is usually flavored with "wasabi", a piquant paste made from the grated stem of the "Wasabia japonica" plant. Japanese-style mayonnaise is a common condiment in Japan on salmon, pork and other sushi cuts.
True "wasabi" has anti-microbial properties and may reduce the risk of food poisoning. The traditional grating tool for "wasabi" is a sharkskin grater or "samegawa oroshi". An imitation "wasabi" ("seiyo-wasabi"), made from horseradish, mustard powder and green dye is common. It is found at lower-end "kaiten-zushi" restaurants, in bento box sushi and at most restaurants outside Japan. If manufactured in Japan, it may be labelled "Japanese Horseradish".
Gari (sweet, pickled ginger) is eaten in between sushi courses to both cleanse the palate and aid in digestion. In Japan, green tea ("ocha") is invariably served together with sushi. Better sushi restaurants often use a distinctive premium tea known as "mecha". In sushi vocabulary, green tea is known as "agari".
Sushi may be garnished with gobo, grated daikon, thinly sliced vegetables, carrots/radishes/cucumbers that have been shaped to look like flowers, real flowers, or seaweed salad.
When closely arranged on a tray, different pieces are often separated by green strips called "baran" or "kiri-zasa" (切り笹). These dividers prevent the flavors of neighboring pieces of sushi from mixing and help to achieve an attractive presentation. Originally, these were cut leaves from the "Aspidistra elatior" (葉蘭 "haran") and "Sasa veitchii" (熊笹 "kuma-zasa") plants, respectively. Using actual leaves had the added benefit of releasing antimicrobial phytoncides when cut thereby extending the limited shelf life of the sushi. Sushi bento boxes are a staple of Japanese supermarkets and convenience stores. As these stores began rising in prominence in the 1960s, the labor-intensive cut leaves were increasingly replaced with green plastic in order to lower costs. This coincided with the increased prevalence of refrigeration which acted to extend the shelf life of sushi without the need for the cut leaves. Today the plastic strips are commonly used in sushi bento boxes and to a lesser degree in sushi presentations found in sushi bars and restaurants. In store-sold or to-go packages of sushi, the plastic leaf strips are often used to prevent the rolls from coming into early or unwanted contact with the ginger and "wasabi" included with the dish.
The main ingredients of traditional Japanese sushi, raw fish and rice, are naturally low in fat, high in protein, carbohydrates (the rice only), vitamins, and minerals, as are "gari" and "nori". Other vegetables wrapped within the sushi also offer various vitamins and minerals. Many of the seafood ingredients also contain omega-3 fatty acids, which have a variety of health benefits: they are beneficial for cardiovascular health, act as natural anti-inflammatory compounds, and play a role in brain function.
Generally, sushi is not a particularly fattening food. However, the rice in sushi contains a fair amount of carbohydrates, and other ingredients such as mayonnaise added to sushi rolls can increase the caloric content. Sushi also has a relatively high sodium content, contributed especially by the "shoyu" soy sauce seasoning.
Some of the ingredients in sushi can present health risks. Large marine apex predators such as tuna (especially bluefin) can harbor high levels of methylmercury, which can lead to mercury poisoning when consumed in large quantity or when consumed by certain higher-risk groups, including women who are pregnant or may become pregnant, nursing mothers and young children.
According to recent studies, there have been about 18 million infections worldwide from eating raw fish. These infections pose a particular risk to expectant mothers, because the medical interventions or treatments they require may themselves endanger the developing fetus. Parasitic infections can have a wide range of health impacts, including bowel obstruction, anemia, and liver disease. The illnesses themselves can endanger both the expectant mother and the baby, and the curative measures needed for recovery are a further concern.
Sashimi or other types of sushi containing raw fish present a risk of infection by three main types of parasites:
For the above reasons, EU regulations forbid the use of fresh raw fish; it must first be frozen at temperatures below −20 °C in all parts of the product for no less than 24 hours. As such, a number of fishing boats, suppliers, and end users "super-freeze" fish for sushi to temperatures as low as −60 °C. As well as destroying parasites, super-freezing prevents oxidation of the blood in tuna flesh, thus preventing the discoloration that happens at temperatures above −20 °C.
Some forms of sushi, notably those containing pufferfish fugu and some kinds of shellfish, can cause severe poisoning if not prepared properly. Particularly, fugu consumption can be fatal. Fugu fish has a lethal dose of tetrodotoxin in its internal organs and, by law in many countries, must be prepared by a licensed fugu chef who has passed the prefectural examination in Japan. The licensing examination process consists of a written test, a fish-identification test, and a practical test that involves preparing the fugu and separating out the poisonous organs. Only about 35 percent of the applicants pass.
Sustainable sushi is sushi made from fished or farmed sources that can be maintained or whose future production does not significantly jeopardize the ecosystems from which it is acquired. Concerns over the sustainability of sushi ingredients arise from greater concerns over environmental, economic and social stability and human health.
Traditionally, sushi is served on minimalist Japanese-style, geometric, mono- or duo-tone wood or lacquer plates, in keeping with the aesthetic qualities of this cuisine.
Many sushi restaurants offer fixed-price sets, selected by the chef from the catch of the day. These are often graded as "shō-chiku-bai": "shō/matsu" (pine), "chiku/take" (bamboo), and "bai/ume" (plum), with "matsu" the most expensive and "ume" the cheapest. Sushi restaurants will often have private booth dining, where guests are asked to remove their shoes, leaving them outside the room; however, most sushi bars offer diners a casual experience with an open dining room concept.
Sushi may be served "kaiten zushi" (sushi train) style. Color-coded plates of sushi are placed on a conveyor belt; as the belt passes, customers choose as they please. After finishing, the bill is tallied by counting how many plates of each color have been taken. Newer "kaiten zushi" restaurants use barcodes or RFID tags embedded in the dishes to manage elapsed time after the item was prepared.
Some specialized or slang terms are used in the sushi culture. Most of these terms are used only in sushi bars.
Unlike sashimi, which is almost always eaten with chopsticks, "nigirizushi" is traditionally eaten with the fingers, even in formal settings. Although it is commonly served on a small platter with a side dish for dipping, sushi can also be served in a "bento", a box with small compartments that hold the various dishes of the meal.
Soy sauce is the usual condiment, and sushi is normally served with a small sauce dish, or a compartment in the bento. Traditional etiquette suggests that the sushi is turned over so that only the topping is dipped; this is because the soy sauce is for flavoring the topping, not the rice, and because the rice would absorb too much soy sauce and would fall apart. If it is difficult to turn the sushi upside-down, one can baste the sushi in soy sauce using "gari" (sliced ginger) as a brush. Toppings that have their own sauce (such as eel) should not be eaten with soy sauce.
Traditionally, the sushi chef will add an appropriate amount of "wasabi" to the "sushi" while preparing it, and etiquette suggests eating the sushi as is, since the chef is supposed to know the proper amount of "wasabi" to use. However, today "wasabi" is more a matter of personal taste, and even restaurants in Japan may serve "wasabi" on the side for customers to use at their discretion, even when there is "wasabi" already in the dish. | https://en.wikipedia.org/wiki?curid=28271 |
Shinto
Shinto, also known as kami-no-michi, is a religion originating in Japan. Classified as an East Asian religion by scholars of religion, its practitioners often regard it as Japan's indigenous religion and as a nature religion. Scholars sometimes call its practitioners "Shintoists", although adherents rarely use that term themselves. There is no central authority in control of the movement and much diversity exists among practitioners.
Shinto is polytheistic and revolves around the "kami" ("gods" or "spirits"), supernatural entities believed to inhabit all things. The link between the "kami" and the natural world has led to Shinto being considered animistic and pantheistic. The kami are worshiped at "kamidana" household shrines, family shrines, and public shrines. The latter are staffed by priests who oversee offerings to the kami and the provision of religious paraphernalia such as amulets to the religion's adherents. Other common rituals include the "kagura" ritual dances, age specific celebrations, and seasonal festivals. These festivals and rituals are collectively called "matsuri". A major conceptual focus in Shinto is ensuring purity by cleansing practices of various types including ritual washing or bathing. Shinto does not emphasize specific moral codes other than ritual purity, reverence for "kami", and regular communion following seasonal practices. Shinto has no single creator or specific doctrinal text, but exists in a diverse range of localized and regionalised forms.
Belief in kami can be traced to the Yayoi period (300 BCE – 300 CE), although similar concepts existed during the late Jōmon period. At the end of the Kofun period (300 to 538 CE), Buddhism entered Japan and influenced kami veneration. Through Buddhist influence, kami came to be depicted anthropomorphically and were situated within Buddhist cosmology. Religious syncretisation made kami worship and Buddhism functionally inseparable, a process called "shinbutsu-shūgō". The earliest written tradition regarding kami worship was recorded in the eighth-century "Kojiki" and "Nihon Shoki". In ensuing centuries, "shinbutsu-shūgō" was adopted by Japan's Imperial household. During the Meiji era (1868 – 1912 CE), Japan's leadership expelled Buddhist influence from Shinto and formed State Shinto, which they utilized as a method for fomenting nationalism and imperial worship. Shrines came under growing government influence, and the Emperor of Japan was elevated to a particularly high position as a kami. With the formation of the Japanese Empire in the early 20th century, Shinto was exported to other areas of East Asia. Following Japan's defeat in World War II, Shinto was formally separated from the state.
Shinto is primarily found in Japan, where there are around 80,000 public shrines. Shinto is also practiced elsewhere, in smaller numbers. Only a minority of Japanese people identify as religious, although most of the population take part in Shinto matsuri and Buddhist activities, especially festivals, and seasonal events. This reflects a common view in Japanese culture that the beliefs and practices of different religions need not be exclusive. Aspects of Shinto have also been incorporated into various Japanese new religious movements.
There is no universally agreed definition of Shinto. However, the authors Joseph Cali and John Dougill stated that if there was "one single, broad definition of Shinto" that could be put forward, it would be that "Shinto is a belief in "kami"", the supernatural entities at the centre of the religion. The Japanologist Helen Hardacre stated that "Shinto encompasses doctrines, institutions, ritual, and communal life based on kami worship", while the scholar of religion Inoue Nobutaka observed the term was "often used" in "reference to kami worship and related theologies, rituals and practices."
Various scholars have referred to practitioners of Shinto as "Shintoists". The philosopher Stuart D. B. Picken thought this term to be "untranslatable" and "meaningless" in the Japanese language. Some people prefer to view Shinto not as a religion but as a "way", partly as a pretence for attempting to circumvent the modern Japanese separation of religion and state and restore the historical links between Shinto and the Japanese state.
Scholars have debated at what point in history it is legitimate to start talking about Shinto as a specific phenomenon. The scholar of religion Ninian Smart, for instance, suggested that one could "speak of the "kami" religion of Japan, which lived symbiotically with organized Buddhism, and only later was institutionalized as Shinto." The scholar of religion Brian Bocking stressed that the term should "be approached with caution", particularly when applied to periods before the Meiji era. Inoue Nobutaka stated that "Shinto cannot be considered as a single religious system that existed from the ancient to the modern period", while the historian Kuroda Toshio noted that "before modern times Shinto did not exist as an independent religion".
Many scholars refer to Shinto as a religion. However, religion as a concept arose in Europe and many of the connotations that the term has in Western culture "do not readily apply" to Shinto. Unlike religions familiar in Western countries, such as Christianity and Islam, Shinto has no single founder, nor any single canonical text. Western religions have tended to stress exclusivity, but in Japan, it has long been considered acceptable to practice different religious traditions simultaneously. Japanese religion is therefore highly pluralistic. Shinto is often cited alongside Buddhism as one of the two main religions of Japan, and the two often differ in focus, with Buddhism emphasising the idea of transcending the cosmos, which it regards as being replete with suffering, while Shinto focuses on adapting to the pragmatic requirements of life. Shinto incorporates elements borrowed from religious traditions imported into Japan from mainland Asia, such as Buddhism, Confucianism, Taoism, and Chinese divination practices. It bears many similarities with other East Asian religions, in particular through its belief in many different deities.
Scholars of religion have debated how best to classify Shinto. Inoue argued for categorizing Shinto "as a member of the family of East-Asian religions". Picken suggested that Shinto could be classed as a world religion, while the historian H. Byron Earhart called it a "major religion". In the early 21st century it became increasingly common for practitioners to call Shinto a nature religion.
Shinto is often referred to as an indigenous religion, although this results in debates over the various different definitions of "indigenous" in the Japanese context. The notion of Shinto as Japan's "indigenous religion" stemmed from the growth of modern nationalism in the Edo period to the Meiji era. As a result, the idea that Shinto was an ancient tradition was promoted throughout the population. Associated with this idea of Shinto as Japan's indigenous religion, many priests and practitioners regard it as a prehistoric belief system that has continued uninterrupted throughout Japanese history, regarding it as something like the "underlying will of Japanese culture". The prominent Shinto theologian Sokyo Ono for instance stated that for the Japanese, kami worship was "an expression of their native racial faith which arose in the mystic days of remote antiquity", remaining "as indigenous as the people that brought the Japanese nation into existence and ushered in its new civilization". Many scholars have argued that this classification is inaccurate. Earhart noted that Shinto's history, which involved incorporating a great deal of Buddhist and Chinese influence, was "too complex to be labelled simply" as an "indigenous religion".
Shinto is internally diverse; Nelson noted it was "not a unified, monolithic entity that has a single center and system all its own". There is substantial localised variation in how Shinto is practiced. In representing "a portmanteau term for widely varying types and aspects of religion", Bocking drew comparisons between the word "Shinto" and the term "Hinduism", which is also applied to a varied range of beliefs and practices.
Various different types of Shinto have been identified. "Shrine Shinto" refers to the practices centred around shrines. Some scholars have used the term "Folk Shinto" to designate localised Shinto practices, or the practices of individuals outside of an institutionalised setting, and "Domestic Shinto" to the ways in which "kami" are venerated in the home. In various eras of the past, there was also a "State Shinto", in which Shinto beliefs and practices were closely interwoven with the operations of the Japanese state.
The term "Shinto" is often translated into English as "the way of the kami". It derives from the combination of two Chinese characters: "shen"(神) (pronouncd "shin" in Japanese), which means "kami" or "God", and "dao" (道) (pronounced "michi" or "tō"/"dō", in Japanese), which means "way" or "road". The word "Shintō" was adopted, originally as "Jindō" or "Shindō", from the written Chinese "Shendao" (), combining two "kanji": , meaning "kami"; and , "path", meaning a philosophical path or study (from the Chinese word "dào"). The oldest recorded usage of the word "Shindo" is from the second half of the sixth century.
One of the term's earliest known appearances in Japan is in the "Nihon Shoki", an eighth-century text. Here, it may simply be used in reference to popular belief, and not merely that of Japan. Alternatively, it is possible that in this Japanese context, the early uses of "Shinto" were also a reference to Taoism, as many Taoist practices had recently been imported to Japan. It is apparent that in these early Japanese uses, the word "Shinto" did not apply to a distinct religious tradition nor to anything seen as being uniquely Japanese. In the "Konjaku monogatarishui", composed in the eleventh century, references are made to a woman in China practicing "Shinto" rather than Buddhism, indicating that at this time the term "Shinto" was not used in reference to purely Japanese traditions. The same text also referred to people in India worshipping "kami", reflecting use of that term to describe localised deities outside of Japan.
In medieval Japan, "kami"-worship was generally seen as being part of Japanese Buddhism, with the "kami" themselves often being interpreted as Buddhas. At this point, the term "Shinto" increasingly referred to "the authority, power, or activity of a "kami", being a "kami", or, in short, the state or attributes of a "kami"." It appears in this form in texts such as "Nakatomi no harai kunge" and "Shintōshū" tales. In the "Japanese Portuguese Dictionary" of 1603, "Shinto" is defined as referring to ""kami" or matters pertaining to "kami"."
In the seventeenth century, under the influence of Edo period thinkers, the practice of "kami" worship came to be seen as distinct from Taoism, Confucianism, and Buddhism. The term "Shinto" only gained common use from the early twentieth century onward, when it superseded the term "taikyō" ('great religion') as the name for the Japanese state religion. The term "Shinto" has been used in different ways throughout Japanese history.
A range of other terms have been used as synonyms for "Shinto". These include "kami no michi" ("Way of the Kami"), "kannagara no michi" ("way of the divine transmitted from time immemorial"), "Kodō" ("the ancient way"), "Daidō" ("the great way"), and "Teidō" ("the imperial way").
Shinto is a polytheistic belief system involving the veneration of many deities, known as "kami", or sometimes as "jingi". As is often the case in the Japanese language, no distinction is made here between singular and plural, and hence the term "kami" refers both to individual kami and the collective group of kami. This term has varyingly been translated into English as "god" or "spirit". However, Earhart noted that there was "no exact English equivalent" for the word "kami", and Kitagawa stated that such English translations were "quite unsatisfactory and misleading". Several scholars have argued against translating "kami" into English. According to Japanese mythology, there are eight million kami, and Shinto practitioners believe that they are present everywhere. They are not regarded as omnipotent, omniscient, or necessarily immortal. Some kami, referred to as the "magatsuhi-no-kami" or "araburu kami", are regarded as being essentially malevolent and destructive.
The term "kami" is "conceptually fluid", and "vague and imprecise". In Japanese it is often applied to the power of phenomena that inspire a sense of wonder and awe in the beholder. Kitagawa referred to this as "the "kami" nature", stating that he thought it "somewhat analogous" to the Western ideas of the numinous and the sacred. Kami are seen to inhabit both the living and the dead, organic and inorganic matter, and natural disasters like earthquakes, droughts, and plagues; their presence is seen in natural forces such as the wind, rain, fire, and sunshine. Accordingly, Nelson commented that Shinto regards "the "actual phenomena" of the world itself" as being "divine". The Shinto understanding of kami has also been characterised as being both pantheistic, and animistic.
In Japan, kami have been venerated since prehistory, and in the Yayoi period were regarded as being formless and invisible. It was only under the influence of Buddhism that they were depicted anthropomorphically.
Kami are often associated with a specific place, often one that is noted as a prominent feature in the landscape such as a waterfall, volcano, large rock, or distinctive tree. The kami is seen as being represented in the shrine by the "go-shintai"; objects commonly chosen for this purpose include mirrors, swords, stones, beads, and inscribed tablets. Many practitioners visiting the shrine never see the "go-shintai", which is concealed from their view. Kami are believed to be capable of both benevolent and destructive deeds. Offerings and prayers are given to the kami to gain their blessings and to dissuade them from engaging in destructive actions. Shinto seeks to cultivate and ensure a harmonious relationship between humans and the kami and thus with the natural world. More localised kami may be subject to feelings of intimacy and familiarity from members of the local community that are not directed towards more widespread kami like Amaterasu.
Kami are not understood as being metaphysically different from humanity, and in Shinto it is seen as possible for humans to become kami.
Dead humans are sometimes venerated as kami, being regarded as protector or ancestral figures. One of the most prominent examples is that of the Emperor Ōjin, who on his death was enshrined as the kami Hachiman, believed to be a protector of Japan and a "kami" of war. In Japanese culture, ancestors can be viewed as a form of kami. In Western Japan, the term "jigami" is used to describe the enshrined kami of a village founder. In some cases, living human beings were also viewed as kami; these were called "akitsumi kami" or "arahito-gami". In the State Shinto system of the Meiji era, the Emperor of Japan was declared to be a kami, while several Shinto sects have also viewed their leaders as living kami.
Although some kami are venerated only in a single location, others have shrines devoted to them across many areas of Japan. Hachiman for instance has around 25,000 shrines dedicated to him. The act of establishing a new shrine to a kami who already has one is called "bunrei" ("dividing the spirit"). As part of this, the kami is invited to enter a new place, where it can be venerated, with the instalment ceremony being known as a "kanjo". The new, subsidiary shrine is known as a "bunsha". Individual kami are not believed to have their power diminished by their residence in multiple locations, and there is no limit on the number of places a kami can be enshrined. In some periods, fees were charged for the right to enshrine a particular kami in a new place. Shrines are not necessarily always designed as permanent structures.
Many kami are believed to have messengers, known as "kami no tsukai" or "tsuka washime", and these are generally depicted as taking animal form. The messenger of Inari, for example, is depicted as a fox ("kitsune"), while the messenger of Hachiman is a dove.
Shinto cosmology also includes "bakemono", spirits who cause malevolent acts. "Bakemono" include "oni", "tengu", "kappa", "mononoke", and "yamanba". Japanese folklore also incorporates belief in the "goryō" or "onryō", unquiet or vengeful spirits, particularly of those who have died violently and without appropriate funerary rites. These are believed to inflict suffering on the living, meaning that they must be pacified, usually through Buddhist rites but sometimes through enshrining them as a kami.
The origins of the kami and of Japan itself are recounted in two eighth-century texts, "Kojiki" and "Nihon Shoki". These were texts commissioned by ruling elites to legitimize and consolidate their rule, and drew heavily upon Chinese influence. These texts were never of great importance to the religious life of the Japanese. Views regarding the truth of the cosmological stories recounted in these texts have varied. In the early twentieth century, for instance, the Japanese government proclaimed these accounts to be irrefutable history.
These texts recount that the universe started with "ame-tsuchi", the separation of light and pure elements ("ame", "heaven") from heavy elements ("tsuchi", "earth"). Three kami then appeared: Amenominakanushi, Takamimusuhi no Mikoto, and Kamimusuhi no Mikoto. Other kami followed, including a brother and sister, Izanagi and Izanami. The kami instructed Izanagi and Izanami to create land on earth. To this end, the siblings stirred the briny sea with a jewelled spear, from which Onogoro Island was formed. Izanagi and Izanami then descended to Earth, where Izanami gave birth to further kami. One of these was a fire kami, whose birth killed Izanami. Izanagi then descended to the netherworld ("yomi") to retrieve his sister, but there he saw her body putrefying. Embarrassed to be seen in this state, she chased him out of yomi, and he closed its entrance with a boulder.
Izanagi bathed in the sea to rid himself of the pollution brought about by witnessing Izanami's putrefaction. Through this act, further kami emerged from his body: Amaterasu (the sun kami) was born from his left eye, Tsukiyomi (the moon kami) from his right eye, and Susanoo (the storm kami) from his nose. Susanoo behaved in a destructive manner, and to escape him Amaterasu hid herself within a cave, plunging the earth into darkness. The other kami eventually succeeded in coaxing her out. Susanoo was then banished to earth, where he married and had children. With humans now living on Earth, the "age of the gods" came to an end. According to these texts, Amaterasu then sent her grandson, Ninigi, to rule Japan, giving him curved beads, a mirror, and a sword: the symbols of Japanese imperial authority.
In Shinto, the creative principle permeating all life is known as "mutsubi". Within traditional Japanese thought, there is no concept of an overarching duality between good and evil. The concept of "aki" encompasses misfortune, unhappiness, and disaster, although it does not correspond precisely with the Western concept of evil.
Texts such as the "Kojiki" and "Nihon Shoki" attest to the presence of multiple realms in Shinto cosmology. These present a universe divided into three parts: the Plain of High Heaven ("Takama-no-hara"), where the "kami" live; the Phenomenal or Manifested World ("Utsushi-yo"), where humans dwell; and the Nether World ("Yomotsu-kuni"), where unclean spirits reside. The mythological texts nevertheless do not draw firm demarcations between these realms.
Shinto places greater emphasis on this life than on any afterlife. As the historian of religion Joseph Kitagawa noted, "Japanese religion has been singularly preoccupied with "this" world, with its emphasis on finding ways to cohabit with the "kami" and with other human beings". A common view among Shinto priests is that the dead continue to inhabit our world and work towards the prosperity of their descendants and the land. One traditional belief formerly widespread in Japan was that the spirits of the dead resided in the mountains, from where they would descend to take part in agricultural events.
A key theme in Shinto thought is the importance of avoiding "kegare" ("pollution" or "impurity"), while ensuring "harae" ("purity"). In Japanese thought, humans are seen as fundamentally pure. "Kegare" is therefore seen as being a temporary condition that can be corrected through achieving "harae". Rites of purification are conducted so as to restore an individual to "spiritual" health and render them useful to society.
This notion of purity is present in many facets of Japanese culture, such as the focus it places on bathing. Purification is for instance regarded as important in preparation for the planting season, while performers of noh theatre undergo a purification rite before they carry out their performances. Among the things regarded as particular pollutants in Shinto are death, disease, witchcraft, the flaying alive of an animal, incest, bestiality, excrement, and blood associated with either menstruation or childbirth. To avoid "kegare", priests and other practitioners may engage in abstinence and avoid various activities prior to a festival or ritual.
Various words, termed "imi-kotoba", are also regarded as taboo, and people avoid speaking them when at a shrine; these include "shi" (death), "byō" (illness), and "shishi" (meat).
Full immersion in the sea is often regarded as the most ancient and efficacious form of purification. This act links with the mythological tale in which Izanagi immersed himself in the sea to purify himself after discovering his deceased wife; it was from this act that other kami sprang from his body. An alternative is immersion beneath a waterfall.
Salt is often regarded as a purifying substance; some Shinto practitioners will for instance sprinkle salt on themselves after a funeral, while those running restaurants may put a small pile of salt outside before business commences each day. Fire, also, is perceived as a source of purification.
In Shinto, "kannagara" ("way of the kami") describes the law of the natural order. Shinto incorporates morality tales and myths but no overarching, codified ethical doctrine; Offner noted that Shinto specified no "unified, systematized code of behaviour". Its views of "kannagara" influence certain ethical views, focused on sincerity ("makoto") and honesty ("tadashii"). Shintō sometimes includes reference to four virtues known as the "akaki kiyoki kokoro" or "sei-mei-shin". "Makoto" is regarded as a cardinal virtue in Japanese religion more broadly. Offner believed that in Shinto, ideas about goodness linked to "that which possesses, or relates to, beauty, brightness, excellence, good fortune, nobility, purity, suitability, harmony, conformity, [and] productivity." Shinto's flexibility regarding morality and ethics has been a source of frequent criticism, especially from those arguing that Shinto can readily become a pawn for those wishing to use it to legitimise their authority and power.
Throughout Japanese history, the notion of "saisei-itchi", or the union of religious authority and political authority, has long been prominent.
Cali and Dougill noted that Shinto had long been associated with "an insular and protective view" of Japanese society. They added that in the modern world, Shinto tends toward conservatism and nationalism. In the late 1990s, Bocking noted that "an apparently regressive nationalism still seems the natural ally of some central elements" of Shinto. As a result of these associations, Shinto is still viewed suspiciously by various civil liberties groups in Japan and by many of Japan's neighbours.
The priests of Shinto shrines may face various ethical conundrums. In the 1980s, for instance, the priests at the Suwa Shrine in Nagasaki debated whether to invite the crew of a U.S. Navy vessel docked at the port city to their festival celebrations given the sensitivities surrounding the 1945 U.S. use of the atomic bomb on the city. In other cases, priests have opposed construction projects on shrine-owned land, sometimes putting them at odds with other interest groups. At Kaminoseki in the early 2000s, a priest opposed the sale of shrine lands to build a nuclear power plant; he was eventually pressured to resign over the issue. Another issue of considerable debate has been the activities of the Yasukuni Shrine in Tokyo. The shrine is devoted to Japan's war dead, and in 1979 it enshrined 14 men, including Hideki Tojo, who were declared Class-A defendants at the Tokyo War Crimes Trials. This generated both domestic and international condemnation, particularly from China and Korea.
In the 21st century, Shinto has increasingly been portrayed as a nature-centred spirituality with environmentalist credentials. Shinto shrines have increasingly emphasised the preservation of the forests surrounding many of them, and several shrines have collaborated with local environmentalist campaigns. In 2014, an international interreligious conference on environmental sustainability was held at the Ise shrine, attended by United Nations representatives and around 700 Shinto priests. Critical commentators have characterised the presentation of Shinto as an environmentalist movement as a rhetorical ploy rather than a concerted effort by Shinto institutions to become environmentally sustainable. The scholar Aike P. Rots suggested that the repositioning of Shinto as a "nature religion" may have grown in popularity as a means of disassociating the religion from controversial issues "related to war memory and imperial patronage."
Shinto tends to focus on ritual behavior rather than doctrine. The philosophers James W. Boyd and Ron G. Williams stated that Shinto is "first and foremost a ritual tradition", while Picken observed that "Shinto is interested not in "credenda" but in "agenda", not in things that should be believed but in things that should be done." The scholar of religion Clark B. Offner stated that Shinto's focus was on "maintaining communal, ceremonial traditions for the purpose of human (communal) well-being".
It is often difficult to distinguish Shinto practices from Japanese customs more broadly, with Picken observing that the "worldview of Shinto" provided the "principal source of self-understanding within the Japanese way of life". Nelson stated that "Shinto-based orientations and values[…] lie at the core of Japanese culture, society, and character".
Public spaces in which the kami are worshipped are often known under the generic term "jinja" ("kami-place"); this term applies to the location rather than to a specific building. "Jinja" is usually translated as "shrine" in English, although in earlier literature was sometimes translated as "temple", a term now more commonly reserved for Japan's Buddhist structures. By the late twentieth century, the Association of Shinto Shrines estimated that there were approximately 80,000 shrines affiliated to it across Japan, with another 20,000 being unaffiliated. They are found all over the country, from isolated rural areas to dense metropolitan ones. Some of the grand shrines with imperial associations are termed "jingū".
The architectural styles of Shinto shrines had largely developed by the Heian period. The inner sanctuary in which the "kami" is believed to live is known as a "honden". Typically, human worshippers carry out their acts outside of the "honden". Near the honden can sometimes be found a subsidiary shrine, the "bekkū", to another kami; the kami inhabiting this shrine is not necessarily perceived as being inferior to that in the honden. At some places, halls of worship have been erected, termed "haiden". On a lower level can be found the hall of offerings, known as a "heiden". Together, the buildings housing the honden, haiden, and heiden are called a "hongū". In some shrines, there is a separate building in which to conduct additional ceremonies, such as weddings, known as a "gishikiden", or a specific building in which the "kagura" dance is performed, known as the "kagura-den". The precincts of the shrine are known as the "keidaichi".
Shrine entrances are marked by a two-post gateway with either one or two crossbeams atop it, known as "torii". The exact details of these "torii" vary, and there are at least twenty different styles. These are regarded as demarcating the area where the "kami" resides; passing under them is often viewed as a form of purification. More broadly, "torii" are internationally recognised symbols of Japan. Their architectural form is distinctly Japanese, although the decision to paint most of them in vermillion reflects a Chinese influence dating from the Nara period. Also set at the entrances to many shrines are "komainu", statues of lion- or dog-like animals perceived to scare off malevolent spirits; typically these come as a pair, one with its mouth open, the other with its mouth closed.
Shrines are often set within gardens, even in cities. Others are surrounded by wooded groves, referred to as "chinju no mori" ("forest of the tutelary "kami""). These vary in size, from just a few trees to sizeable areas of woodland stretching over mountain slopes. Shrines often have an office, known as a "shamusho", and other buildings such as a priests' quarters and a storehouse. Various kiosks often sell amulets to visitors. Since the late 1940s, shrines have had to be financially self-sufficient, relying on the donations of worshippers and visitors. These funds are used to pay the wages of the priests, to finance the upkeep of the buildings, to cover the shrine's membership fees of various regional and national Shinto groups, and to contribute to disaster relief funds.
In Shinto, it is seen as important that the places in which kami are venerated be kept clean and not neglected. Through to the Edo period, it was common for Shinto shrines to be demolished and rebuilt at a nearby location so as to remove any pollutants and ensure purity. This has continued into recent times at certain sites, such as the Ise Grand Shrine, which is moved to an adjacent site every two decades. Separate shrines can also be merged in a process known as "jinja gappei". Shrines may have legends about their foundation, which are known as "en-gi". These sometimes also record miracles associated with the shrine. From the Heian period on, the "en-gi" were often retold on picture scrolls known as "emakimono".
Shrines may be cared for by priests, by local communities, or by families on whose property the shrine is found. Shinto priests are known in Japanese as "Kannushi", meaning "proprietor of kami". Many kannushi take on the role in a line of hereditary succession traced down specific families. In contemporary Japan, there are two main training universities for those wishing to become Shinto priests, at Kokugakuin University in Tokyo and at Kogakkan University in Mie Prefecture. Priests can rise through the ranks over the course of their careers. The number of priests at a particular shrine can vary; some shrines can have over 12 priests, and others have none, instead being administered by local lay volunteers. Some priests earn a living administering to multiple small shrines, sometimes over ten or more.
Priestly dress includes a tall, rounded hat known as an "eboshi", and black lacquered wooden clogs known as "asagutsu". Also part of standard priestly attire is a "hiōgi" fan. The outer garment worn by a priest, usually colored black, red, or light blue, is the "hō", or the "ikan". A white silk version of the ikan, used for formal occasions, is known as the "saifuku". Another priestly robe is the "kariginu", which is modeled on Heian-style hunting garments.
The chief priest at a shrine is known as a "gūji". Larger shrines may also have an assistant head priest, the "gon-gūji". As with teachers, instructors, and Buddhist clergy, Shinto priests are often referred to as "sensei" by lay practitioners. Historically, there were various female priests, although they were largely pushed out of their positions in 1868. During the Second World War, women were again allowed to become priests to fill the void caused by large numbers of men being enlisted in the military. In the early twenty-first century, male priests still dominated Shinto institutions. Male priests are free to marry and have children. At smaller shrines, priests often have other full-time jobs, and serve only as priests during special occasions.
Before certain major festivals, priests may undergo a period of abstinence from sexual relations. Some of those involved in festivals also abstain from a range of other things, such as consuming tea, coffee, or alcohol, immediately prior to the events.
The priests are assisted by "jinja miko", sometimes referred to as "shrine-maidens" in English. These "miko" are typically unmarried, although not necessarily virgins. In many cases they are the daughters of a priest or a practitioner. They are subordinate to the priests in the shrine hierarchy. Their most important role is in the "kagura" dance, known as "otome-mai". "Miko" receive only a small salary but gain respect from members of the local community and learn skills such as cooking, calligraphy, painting, and etiquette which can benefit them when later searching for employment or a marriage partner. They generally do not live at the shrines. Sometimes they fill other roles, such as being secretaries in the shrine offices or clerks at the information desks, or as waitresses at the "naorai" feasts. They also assist "Kannushi" in ceremonial rites.
Individual worship conducted at a shrine is known as "hairei". A visit to a shrine, which is known as "jinja mairi" in Japanese, typically takes only a few minutes. Some individuals visit the shrines every day, often on their route to work each morning. These rituals usually take place not inside the honden itself but in an oratory in front of it. The general procedure entails an individual approaching the "honden", where the practitioner places a monetary offering in a box before ringing a bell to call the attention of the "kami". Then, they bow, clap, and stand while silently offering a prayer. The clapping is known as "kashiwade" or "hakushu"; the prayers or supplications as "kigan". When at the shrine, individuals offering prayers are not necessarily praying to a specific kami. A worshipper may not know the name of a kami residing at the shrine nor how many kami are believed to dwell there. Unlike in certain other religious traditions such as Christianity and Islam, Shinto shrines do not have weekly services that practitioners are expected to attend.
Some Shinto practitioners do not offer their prayers to the kami directly, but rather request that a priest offer them on their behalf; these prayers are known as "kitō". Many individuals approach the kami asking for pragmatic requests. Requests for rain, known as "amagoi" ('rain-soliciting') have been found across Japan, with Inari a popular choice for such requests.
Other prayers reflect more contemporary concerns. For instance, people may ask that the priest approach the kami so as to purify their car in the hope that this will prevent it from being involved in an accident. Similarly, transport companies often request purification rites for new buses or airplanes which are about to go into service.
Before a building is constructed, it is common for either private individuals or the construction company to employ a Shinto priest to come to the land being developed and perform the "jichinsai", or earth sanctification ritual. This purifies the site and asks the kami to bless it.
People often ask the kami to help offset inauspicious events that may affect them. For instance, in Japanese culture, the age 33 is seen as being unlucky for women and the age 42 for men, and thus people can ask the kami to offset any ill-fortune associated with being this age. Certain directions can also be seen as being inauspicious for certain people at certain times and thus people can approach the kami asking them to offset this problem if they have to travel in one of these unlucky directions.
Pilgrimage has long been an important facet of Japanese religion, and Shinto features pilgrimages to shrines, which are known as "junrei". A round of pilgrimages, whereby individuals visit a series of shrines and other sacred sites that are part of an established circuit, is known as a "junpai".
For many centuries, people have also visited the shrines for primarily cultural and recreational reasons, as opposed to spiritual ones. Many of the shrines are recognised as sites of historical importance and some are classified as UNESCO World Heritage Sites. Shrines such as Shimogamo Jinja and Fushimi Inari Taisha in Kyoto, Meiji Jingū in Tokyo, and Atsuta Jingū in Nagoya are among Japan's most popular tourist sites.
Shinto rituals begin with a process of purification, or "harae". This entails an individual sprinkling water on the face and hands, a procedure known as "temizu", using a font known as a "temizuya". Another form of purification at the start of a Shinto rite entails waving a white paper streamer or wand known as the "haraigushi". When not in use, the "haraigushi" is usually kept in a stand. The priest waves the "haraigushi" horizontally over a person or object being purified in a movement known as "sa-yu-sa" ("left-right-left"). Sometimes, instead of a "haraigushi", the purification is carried out with an "o-nusa", a branch of evergreen to which strips of paper have been attached.
The acts of purification accomplished, petitions known as "norito" are spoken to the kami. This is followed by an appearance by the "miko", who commence in a slow circular motion before the main altar.
Following the purification procedure, offerings are presented to the kami by being placed on a table. This act is known as "hōbei".
Historically, the offerings given to the "kami" included food, cloth, swords, and horses. In the contemporary period, lay worshippers usually give gifts of money to the kami, while priests generally offer them food, drink, and sprigs of the sacred "sakaki" tree, which remain a common offering in the present day. Animal sacrifices are not considered appropriate offerings, as the shedding of blood is seen as a vile act that necessitates purification. The offerings presented are sometimes simple and sometimes more elaborate; at the Grand Shrine of Ise, for instance, 100 styles of food are laid out as offerings.
After the offerings have been given, people often sip rice wine known as "o-miki". Drinking the "o-miki" wine is seen as a form of communion with the kami. On important occasions, a feast is then held, known as "naorai", inside a banquet hall attached to the shrine complex.
The Kami are believed to enjoy music. One style of music performed at shrines is "gagaku". Instruments used include three reeds (fue, sho, and hichiriki), the yamato-koto, and the "three drums" (taiko, kakko, and shōko). Other musical styles performed at shrines can have a more limited focus. At shrines such as Ōharano Shrine in Kyoto, "azuma-asobi" ('eastern entertainment') music is performed on April 8th. Also in Kyoto, various festivals make use of the "dengaku" style of music and dance, which originated from rice-planting songs. During rituals, people visiting the shrine are expected to sit in the "seiza" style, with their legs tucked beneath their bottom. To avoid cramps, individuals who hold this position for a lengthy period of time may periodically move their legs and flex their heels.
Many Shinto practitioners also have a "kamidana" or family shrine in their home. These usually consist of shelves placed at an elevated position in the living room. The popularity of "kamidana" increased greatly during the Meiji era. "Kamidana" can also be found in workplaces, restaurants, shops, and ocean-going ships. Some public shrines sell entire kamidana. Along with the "kamidana", many Japanese households also have "butsudan", Buddhist altars enshrining the ancestors of the family; ancestral reverence remains an important aspect of Japanese religious tradition.
Kamidana often enshrine the kami of a nearby public shrine as well as a tutelary kami associated with the house's occupants or their profession. They can be decorated with miniature "torii" and "shimenawa" and include amulets obtained from public shrines. They often contain a stand on which to place offerings; daily offerings of rice, salt, and water are placed there, with sake and other items also offered on special days. Prior to giving these offerings, practitioners often bathe, rinse their mouth, or wash their hands as a form of purification.
Household Shinto can focus attention on the "dōzoku-shin", "kami" who are perceived to be ancestral to the "dōzoku" or extended kinship group. Small village shrines containing the tutelary kami of an extended family are known as "iwai-den".
In addition to the temple shrines and the household shrines, Shinto also features small wayside shrines known as "hokora". Other open spaces used for the worship of kami are "iwasaka", an area surrounded by sacred rocks.
A common feature of Shinto shrines is the provision of "ema", small wooden plaques onto which practitioners will write a wish or desire that they would like to see fulfilled. The practitioner's message is written on one side of the plaque, while on the other is usually a printed picture or pattern related to the shrine itself. Ema are provided both at Shinto shrines and Buddhist temples in Japan; unlike most amulets, which are taken away from the shrine, the ema are typically left there as a message for the resident kami. Those administering the shrine will then often burn all of the collected ema at new year.
A form of divination that is popular at Shinto shrines is the "omikuji". These are small slips of paper which are obtained from the shrine (for a donation) and which are then read to reveal a prediction for the future. Those who receive a bad prediction often tie the "omikuji" to a nearby tree or frame set up for the purpose. This act is seen as rejecting the prediction, a process called "sute-mikuji", and thus avoiding the misfortune it predicted.
The use of amulets is widely sanctioned and popular in Japan. They may be made of paper, wood, cloth, metal, or plastic.
"Ofuda" act as amulets to keep off misfortune and also serve as talismans to bring benefits and good luck. They typically comprise a tapering piece of wood onto which the name of the shrine and its enshrined kami are written or printed. The ofuda is then wrapped inside white paper and tied up with a colored thread. "Ofuda" are provided both at Shinto shrines and Buddhist temples. Another type of amulet provided at shrines and temples are the "omamori", which are traditionally small, brightly colored drawstring bags with the name of the shrine written on it. Omamori and ofuda are sometimes placed within a charm bag known as a "kinchaku", typically worn by small children.
At new year, many shrines sell "hamaya" ("evil-destroying arrows") which people can purchase and keep in their home over the coming year to bring good luck.
A "daruma" is a round, paper doll of the Indian monk, Bodhidharma. The recipient makes a wish and paints one eye; when the goal is accomplished, the recipient paints the other eye. While this is a Buddhist practice, darumas can be found at shrines, as well. These dolls are very common.
Other protective items include "dorei", which are earthenware bells that are used to pray for good fortune. These bells are usually in the shapes of the zodiacal animals. "Inuhariko" are paper dogs that are used to induce and to bless good births. Collectively, these talismans through which home to manipulate events and influence spirits, as well as related mantras and rites for the same purpose, are known as "majinai".
"Kagura" describes the music and dance performed for the kami. Throughout Japanese history, dance has played an important culture role and in Shinto it is regarded as having the capacity to pacify kami. There is a mythological tale of how "kagura" dance came into existence. According to the "Kojiki" and the "Nihon Shoki", Ame-no-Uzume performed a dance to entice Amaterasu out of the cave in which she had hidden herself. The word "kagura" is thought to be a contracted form of "kami no kura" or "seat of the kami" or the "site where the kami is received."
There are two broad types of kagura. One is Imperial kagura, also known as "mikagura". This style was developed in the imperial court and is still performed on imperial grounds every December. It is also performed at the Imperial harvest festival and at major shrines such as Ise, Kamo, and Iwashimizu Hachiman-gū. It is performed by singers and musicians using "shakubyoshi" wooden clappers, a "hichiriki", a "kagura-bue" flute, and a six-stringed zither. The other main type is "sato-kagura", descended from "mikagura" and performed at shrines across Japan. Depending on the style, it is performed by "miko" or by actors wearing masks to portray various mythological figures. These actors are accompanied by a "hayashi" band using flutes and drums. There are also other, regional types of kagura.
Music plays a very important role in the "kagura" performance. Everything from the setup of the instruments to the most subtle sounds and the arrangement of the music is crucial to encouraging the kami to come down and dance. The songs are used as magical devices to summon the "kami" and as prayers for blessings. Rhythm patterns of five and seven are common, possibly relating to the Shinto belief of the twelve generations of heavenly and earthly deities. There is also vocal accompaniment called "kami uta" in which the drummer sings sacred songs to the "kami". Often the vocal accompaniment is overshadowed by the drumming and instruments, reinforcing that the vocal aspect of the music is more for incantation than for aesthetics.
In both ancient Japanese collections, the "Kojiki" and the "Nihon Shoki", Ame-no-uzeme's dance is described as "asobi", which in the old Japanese language means a ceremony designed to appease the spirits of the departed, and which was conducted at funeral ceremonies. Therefore, "kagura" is a rite of "tama shizume", of pacifying the spirits of the departed. In the Heian period, this was one of the important rites at the Imperial Court and had found its fixed place in the "tama shizume" festival in the eleventh month. At this festival people sing as accompaniment to the dance: "Depart! Depart! Be cleansed and go! Be purified and leave!"
This rite of purification is also known as "chinkon". It was used for securing and strengthening the soul of a dying person. It was closely related to the ritual of "tama furi" (shaking the spirit), to call back the departed soul of the dead or to energize a weakened spirit. Spirit pacification and rejuvenation were usually achieved by songs and dances, also called "asobi". The ritual of "chinkon" continued to be performed on the emperors of Japan, thought to be descendants of Amaterasu. It is possible that this ritual is connected with the ritual to revive the sun "kami" during the low point of the winter solstice.
Public festivals are known as "matsuri".
Picken suggested that the festival was "the central act of Shinto worship" because Shinto was a "community- and family-based" religion. According to a traditional view of the lunar calendar, Shinto shrines should hold their festival celebrations on "hare-no-hi" or "clear" days, the days of the new, full, and half moons. Other days, known as "ke-no-hi", were generally avoided for festivities. However, since the late 20th century, many shrines have held their festival celebrations on the Saturday or Sunday closest to the date so that fewer individuals will be working and more will be able to attend the festivities.
Spring festivals are called "haru-matsuri" and often incorporate prayers for a good harvest. They sometimes incorporate "ta-asobi" ceremonies, in which rice is ritually planted.
Autumn festivals are known as "aki-matsuri" and primarily focus on thanking the kami for the rice or other harvest. The "Niiname-sai", or festival of new rice, is held across many Shinto shrines on 23 November. The Emperor also conducts a ceremony to mark this festival, at which he presents the first fruits of the harvest to the kami at midnight. Winter festivals, called "fuyu no matsuri", often focus on welcoming in the spring, expelling evil, and calling in good influences for the future. There is little difference between winter festivals and specific new year festivals.
Many people visit shrines to celebrate new year; this "first visit" of the year is known as "hatsumōde" or "hatsumairi". There, they buy amulets and talismans to bring them good fortune over the coming year. To celebrate this festival, many Japanese put up rope known as "shimenawa" on their homes and places of business. Some also put up "kadomatsu" ("gateway pine"), an arrangement of pine branches, plum tree, and bamboo sticks. Also displayed are "kazari", which are smaller and more colourful; their purpose is to keep away misfortune and attract good fortune. In many places, new year celebrations incorporate "hadaka matsuri" ("naked festivals") in which men dressed only in a "fundoshi" loincloth engage in a particular activity, such as fighting over a specific object or immersing themselves in a river.
Many festivals are specific to particular shrines or regions. The Aoi Matsuri festival, held on May 15th to pray for an abundant grain harvest, takes place at shrines in Kyoto.
Processions or parades during Shinto festivals are known as "gyōretsu". During public processions, the kami travel in portable shrines known as "mikoshi". The processions for "matsuri" can be raucous, with many of the participants being drunk. They are often understood as having a regenerative effect on both the participants and the community. In various cases the mikoshi undergo "hamaori" ("going down to the beach"), a process by which they are carried to the sea shore and sometimes into the sea, either by bearers or a boat. In the Okunchi festival held in the southwestern city of Nagasaki, the kami of the Suwa Shrine are paraded down to Ohato, where they are placed in a shrine there for several days before being paraded back to Suwa.
The formal recognition of events is given great importance in Japanese culture. A common ritual, the "hatsumiyamairi", entails a child's first visit to a Shinto shrine. A tradition holds that a boy should be brought to the shrine on the thirty-second day after birth, and a girl on the thirty-third day. Historically, the child was commonly brought to the shrine not by the mother, who was considered impure after birth, but by another female relative; since the late 20th century it has been more common for the mother to do so.
Another, the "saiten-sai", is a coming of age ritual marking the transition to adulthood and occurs when an individual is around twenty.
Wedding ceremonies are often carried out at Shinto shrines. In Japan, funerals tend to take place at Buddhist temples, with Shinto funerals being rare. Bocking noted that most Japanese people are "still 'born Shinto' yet 'die Buddhist'." In Shinto thought, contact with death is seen as imparting impurity ("kegare"); the period following this contact is known as "kibuku" and is associated with various taboos. In cases when dead humans are enshrined as kami, the physical remains of the dead are not stored at the shrine. Although not common, there have been examples of funerals conducted through Shinto rites. The earliest examples are known from the mid-seventeenth century; these occurred in certain areas of Japan and had the support of the local authorities.
Following the Meiji Restoration, in 1868 the government recognised specifically Shinto funerals for Shinto priests. Five years later, this was extended to cover the entire Japanese population. Despite this Meiji promotion of Shinto funerals, the majority of the population continued to have Buddhist funeral rites.
Ancestral reverence remains an important part of Japanese religious custom.
Divination is the focus of many Shinto rituals. Among the ancient forms of divination found in Japan are "rokuboku" and "kiboku". Several forms of divination entailing archery are also practiced in Shintō, known as "yabusame" and "omato-shinji".
Kitagawa stated that there could be "no doubt" that various types of "shamanic diviners" played a role in early Japanese religion.
Shinto practitioners believe that the "kami" can possess a human being and then speak through them, a process known as "kami-gakari". Several new religious movements drawing upon Shinto, such as Tenrikyo and Oomoto, were founded by individuals claiming to be guided by a possessing kami. The "itako" and "ichiko" are blind women who train to become spiritual mediums in the northern Tohoku region of Japan. In the late twentieth century, they were also present in Japanese urban centers. "Itako" train in the role under other itako from childhood, memorising sacred texts and prayers, fasting, and undertaking acts of severe asceticism, through which they are believed to cultivate supernatural powers. In an initiation ceremony, a kami is believed to possess the young woman, and the two are then ritually "married". After this, the kami becomes her tutelary spirit and she will henceforth be able to call upon it, and a range of other spirits, in the future. Through contacting these spirits, she is able to convey their messages to the living. "Itako" usually carry out their rituals independently of the shrine system.
Today, "itako" are most commonly associated with Mount Osore in Aomori Prefecture. There, an annual festival is held beside the Entsuji Buddhist temple, which hangs signs disavowing any connection to the "itako". "Itako" gather there to channel the dead for thousands of tourists. In contemporary Japan, "itako" are on the decline. In 2009, less than 20 remained, all over the age of 40. Contemporary education standards have all but eradicated the need for specialized training for the blind.
Earhart commented that Shinto ultimately "emerged from the beliefs and practices of prehistoric Japan", although Kitagawa noted that it was questionable whether prehistoric Japanese religions could be accurately termed "early Shinto". The historian Helen Hardacre noted that it was the Yayoi period of Japanese prehistory which was the "first to leave artifacts that can reasonably be linked to the later development of Shinto". Kami were worshipped at various landscape features during this period; at this point, their worship consisted largely of beseeching and placating them, with little evidence that they were viewed as compassionate entities. In the subsequent Kofun period, Korean migration to Japan brought with it both Confucianism and Buddhism. Buddhism had a particular impact on the kami cults. Migrant groups and Japanese who increasingly aligned with these foreign influences built Buddhist temples in various parts of the Japanese islands. Several rival clans who were more hostile to these foreign influences began adapting the shrines of their kami to more closely resemble the new Buddhist structures.
From the early sixth century CE, the style of ritual favored by the Yamato clan began spreading to other kami shrines around Japan as the Yamato extended their territorial influence. Buddhism was also growing. According to the "Nihon Shoki", in 587 Emperor Yōmei converted to Buddhism and under his sponsorship Buddhism spread.
From the eighth century, Shinto and Buddhism were thoroughly intertwined in Japanese society.
The great bells and drums, Kofun burial mounds, and the founding of the imperial family are important to this period. This is the period of the development of the feudal state, and of the Yamato and Izumo cultures. Both of these dominant cultures have a large and central shrine which still exists today: Ise Shrine in the north-east and Izumo Taisha in the south-west. This period is defined by the increase of central power at Naniwa, now Osaka, under the feudal lord system. There was also an increasing influence of Chinese culture, which profoundly changed the practices of government structure, social structure, burial practices, and warfare. The Japanese also held close alliance and trade ties with the Gaya confederacy in the south of the Korean peninsula. Paekche, one of the Three Kingdoms of Korea, had political alliances with Yamato, and in the 5th century imported the Chinese writing system to record Japanese names and events for trade and political records. In 513 it sent a Confucian scholar to the court to assist in the teaching of Confucian thought. In 552 or 538 a Buddha image was given to the Yamato leader, which profoundly changed the course of Japanese religious history, especially in relation to the undeveloped native religious conglomeration that was Shinto. In the latter 6th century, the alliance between Japan and Paekche broke down, but the influence led to the codification of Shinto as the native religion in opposition to the extreme outside influences of the mainland. Up to this time Shinto had been largely a clan ('uji') based religious practice, exclusive to each clan.
The Theory of Five Elements in the Yin and Yang philosophy of Taoism and esoteric Buddhism had a profound impact on the development of a unified system of Shinto beliefs. In the early Nara period, the "Kojiki" and the "Nihon Shoki" were written by compiling existing myths and legends into a unified account of Japanese mythology. These accounts were written with two purposes in mind: the introduction of Taoist, Confucian, and Buddhist themes into Japanese religion; and garnering support for the legitimacy of the Imperial house, based on its lineage from the sun "kami", Amaterasu. Much of modern Japan was under only fragmentary control by the Imperial family and rival ethnic groups. The mythological anthologies, along with other poetry anthologies like the "Collection of Ten Thousand Leaves" ("Man'yōshū"), were intended to impress others with the worthiness of the Imperial family and their divine mandate to rule.
In particular, the Asuka rulers of 552–645 saw disputes between the major clan Shinto families. There were disputes between the Soga and the Mononobe/Nakatomi Shinto families about who would ascend to power and support the imperial family. The Soga family eventually prevailed and supported Empress Suiko and Prince Shōtoku, who helped establish the Buddhist faith in Japan. However, it was not until the Hakuhō period of 645–710 that Shinto was installed as the imperial faith, along with the Fujiwara clan and the reforms that followed.
Beginning with Emperor Tenmu (672–686), and continuing through Empress Jitō (686–697) and Emperor Monmu (697–707), Court Shinto rites were strengthened and made parallel to Buddhist beliefs in court life. Prior to this time clan Shinto had dominated, and a codification of "Imperial Shinto" did not exist as such. The Nakatomi family were made the chief court Shinto chaplains and chief priests at Ise Daijingū, positions they held until 1892. The practice of sending imperial princesses to the Ise shrine also began, marking the rise of Ise Daijingū as the main imperial shrine historically. Due to increasing influence from Buddhism and mainland Asian thought, codification of the "Japanese" way of religion and laws began in earnest. This culminated in three major outcomes: the Taihō Code (701, though begun earlier), the "Kojiki" (712), and the "Nihon Shoki" (720).
The Taihō Code was an attempt to create a bulwark against dynamic external influences and to stabilize the society through imperial power. It was a compilation of rules and codifications, primarily focused on the regulation of religion, government structure, land codes, and criminal and civil law. All priests, monks, and nuns were required to be registered, as were temples. The Shinto rites of the imperial line were codified, especially seasonal cycles, lunar calendar rituals, harvest festivals, and purification rites. The creation of the imperial Jingi-kan, or Shinto Shrine office, was completed.
This period hosted many changes to the country, government, and religion. The capital was moved again, to Heijō-kyō (modern-day Nara), in AD 710 by Empress Genmei due to the death of the Emperor. This practice was necessary due to the Shinto belief in the impurity of death and the need to avoid this pollution. However, the practice of moving the capital due to "death impurity" was later abolished under the Taihō Code and the rise of Buddhist influence. The establishment of the imperial city in partnership with the Taihō Code was important to Shinto, as the office of the Shinto rites became more powerful in assimilating local clan shrines into the imperial fold. New shrines were built and assimilated each time the city was moved. All of the grand shrines were regulated under Taihō and were required to account for incomes, priests, and practices due to their national contributions.
During this time, Buddhism became structurally established within Japan by Emperor Shōmu (r. 724–749), and several large building projects were undertaken. The Emperor laid out plans for the Buddha Dainichi (Great Sun Buddha) at Tōdai-ji, assisted by the priest Gyogi (or Gyoki) Bosatsu. The priest Gyogi went to Ise Daijingū Shrine for blessings to build the Buddha Dainichi. They identified the statue of Vairocana with Amaterasu (the sun "kami") as the manifestation of the supreme expression of universality.
The priest Gyogi is known for his belief in the assimilation of Shinto kami and Buddhas. Shinto kami were commonly seen by Buddhist clergy as guardians, manifestations, or pupils of Buddhas and bodhisattvas. The priest Gyogi conferred bodhisattva precepts on the Emperor in 749, effectively making the Imperial line the head of state and divine within Shinto while beholden to Buddhism.
With the introduction of Buddhism and its rapid adoption by the court in the 6th century, it was necessary to explain the apparent differences between native Japanese beliefs and Buddhist teachings. One Buddhist explanation saw the "kami" as supernatural beings still caught in the cycle of birth and rebirth (reincarnation). The "kami" are born, live, die, and are reborn like all other beings in the karmic cycle. However, the "kami" played a special role in protecting Buddhism and allowing its teachings of compassion to flourish.
This explanation was later challenged by Kūkai, who saw the "kami" as different embodiments of the Buddhas themselves (the "honji suijaku" theory). For example, he linked Amaterasu (the sun "kami" and ancestor of the Imperial family) with Dainichi Nyorai, a central manifestation of the Buddhists, whose name means literally "Great Sun Buddha". In his view, the "kami" were just Buddhas by another name.
From the eighth century onward up until the Meiji era, the "kami" were incorporated into a Buddhist cosmology in various ways. One view is that the "kami" realised that like all other life-forms, they too were trapped in the cycle of samsara (rebirth) and that to escape this they had to follow Buddhist teachings. Alternative approaches viewed the "kami" as benevolent entities who protected Buddhism, or that the "kami" were themselves Buddhas, or beings who had achieved enlightenment. In this, they could be either "hongaku", the pure spirits of the Buddhas, or "honji suijaku", transformations of the Buddhas in their attempt to help all sentient beings.
Buddhism and Shinto coexisted and were amalgamated in the "shinbutsu-shūgō" and Kūkai's syncretic view held wide sway up until the end of the Edo period. There was no theological study that could be called "Shinto" during medieval and early modern Japanese history, and a mixture of Buddhist and popular beliefs proliferated. At that time, there was a renewed interest in "Japanese studies" ("kokugaku"), perhaps as a result of the closed country policy.
In the 18th century, various Japanese scholars, in particular Motoori Norinaga, tried to isolate ideas and beliefs that were uniquely Japanese, which included separating the "real" Shinto from various foreign influences, especially Buddhism. The attempt was largely unsuccessful; however, it did set the stage for the arrival of State Shinto, following the Meiji Restoration (c. 1868), when Shinto and Buddhism were separated ("shinbutsu bunri").
Fridell argues that scholars call the period 1868–1945 the "State Shinto period" because, "during these decades, Shinto elements came under a great deal of overt state influence and control as the Japanese government systematically utilized shrine worship as a major force for mobilizing imperial loyalties on behalf of modern nation-building." However, the government had already been treating shrines as an extension of government before Meiji; see for example the Tenpō Reforms. Moreover, according to the scholar Jason Ānanda Josephson, it is inaccurate to describe shrines as constituting a "state religion" or a "theocracy" during this period, since they had neither organization nor doctrine and were uninterested in conversion.
The Meiji Restoration reasserted the importance of the Emperor and the ancient chronicles to establish the Empire of Japan, and in 1868 the government attempted to recreate the ancient imperial Shinto by separating shrines from the temples that housed them. During this period, numerous scholars of "kokugaku" believed that this national Shinto could be the unifying agent of the country around the Emperor while the process of modernization was undertaken with all possible speed. The psychological shock of the Western "Black Ships" and the subsequent collapse of the shogunate convinced many that the nation needed to unify in order to resist being colonized by outside forces.
In 1871, a Ministry of Rites ("jingi-kan") was formed and Shinto shrines were divided into twelve levels, with the Ise Shrine (dedicated to Amaterasu, and thus symbolic of the legitimacy of the Imperial family) at the peak and small sanctuaries of humble towns at the base. The following year, the ministry was replaced with a new Ministry of Religion, charged with leading instruction in "shushin" (moral courses). As part of the Great Promulgation Campaign, priests were officially nominated and organized by the state, and they instructed the youth in a form of Shinto theology based on the official dogma of the divinity of Japan's national origins and its Emperor. However, this propaganda did not succeed, and the unpopular Ministry of Rites was dissolved in the mid-1870s.
In 1882, the Meiji government designated 13 religious movements that were neither Buddhist nor Christian to be forms of "Sect Shinto". The number and name of the sects given this formal designation varied.
Although the government sponsorship of shrines declined, Japanese nationalism remained closely linked to the legends of foundation and emperors, as developed by the "kokugaku" scholars. In 1890, the Imperial Rescript on Education was issued, and students were required to ritually recite its oath to "offer yourselves courageously to the State" as well as to protect the Imperial family. Such processes continued to deepen throughout the early Shōwa era, coming to an abrupt end in August 1945 when Japan lost the war in the Pacific. On 1 January 1946, Emperor Shōwa issued the Ningen-sengen, in which he quoted the Five Charter Oath of Emperor Meiji and declared that he was not an "akitsumikami" (a deity in human form).
During the U.S. occupation, a new constitution was drawn up. This both enshrined freedom of religion in Japan and initiated the separation of church and state, a measure designed to eradicate "state Shinto" ("kokka shinto"). As part of this, the Emperor formally declared that he was not a kami; any Shinto rituals performed by the imperial family became their own private affair. This disestablishment meant that the government subsidies to shrines ceased, although it also provided shrines with renewed freedom to organise their own affairs. In 1946 many shrines then formed a voluntary organisation, the Association of Shinto Shrines (Jinja Honchō), through which they could coordinate their efforts. In 1956 the association issued a creedal statement, the "keishin seikatsu no kōryō" ("general characteristics of a life lived in reverence of the kami"), to summarise what they regarded as the principles of Shinto practice. By the late 1990s around 80% of Japan's Shinto shrines were part of this association.
In the post-war decades, many Japanese blamed Shinto for encouraging the militaristic policy which had resulted in defeat and occupation. Conversely, many Shinto practitioners remained nostalgic for the State Shinto system, and concerns were repeatedly expressed that sectors of Japanese society were conspiring to restore it. Post-war, various legal debates have occurred over the involvement of public officials in Shinto. In 1965, for instance, the city of Tsu, Mie Prefecture paid four Shinto priests to purify the site where the municipal athletic hall was to be built. Critics brought the case to court, claiming it contravened the constitutional separation of church and state; in 1971 the high court ruled that the city administration's act had been unconstitutional. In the post-war period, Shinto themes were often blended into Japanese new religious movements; of the Sect Shinto groups, Tenrikyo was probably the most successful in the post-war decades, although in 1970 it repudiated its Shinto identity.
Shinto has also spread abroad to a limited extent, and a few non-Japanese Shinto priests have been ordained. A relatively small number of people practice Shinto in America, where there are several Shinto shrines. Shrines were also established in Taiwan and Korea during the period of Japanese imperial rule, but following the war, they were either destroyed or converted to some other use.
The Tsubaki Grand Shrine in Suzuka, Mie Prefecture, was the first to establish a branch abroad: the Tsubaki Grand Shrine of America, initially located in California and then moved to Granite Falls, Washington.
Shinto perspectives have also exerted an influence on popular culture. The film director Hayao Miyazaki of Studio Ghibli, for instance, acknowledged Shinto influences on his creation of films such as "Spirited Away".
Shinto is primarily found in Japan, although during the period of the empire it was introduced to various Japanese colonies, and in the present it is also practiced by members of the Japanese diaspora.
Most Japanese people participate in several religious traditions. The main exceptions to this are members of smaller, minority religious groups, including Christianity and several new religions, which promote exclusivist worldviews.
Determining the proportions of the country's population who engage in Shinto activity is hindered by the fact that, if asked, Japanese people will often say "I have no religion". Many Japanese people avoid the term "religion", in part because they dislike the connotations of the word which most closely matches it in the Japanese language, "shūkyō". The latter term derives from "shū" ('sect') and "kyō" ('doctrine').
Nearly 80% of the population in Japan participates in Shinto practices or rituals, but only a small percentage of these identify themselves as "Shintoists" in surveys. This is because "Shinto" has different meanings in Japan. Most of the Japanese attend Shinto shrines and beseech "kami" without belonging to an institutional Shinto religion. There are no formal rituals to become a practitioner of "folk Shinto". Thus, "Shinto membership" is often estimated by counting only those who do join organised Shinto sects. Shinto has about 81,000 shrines and about 85,000 priests in the country. According to surveys carried out in 2006 and 2008, less than 40% of the population of Japan identifies with an organised religion: around 35% are Buddhists, and 3% to 4% are members of Shinto sects and derived religions. In 2008, 26% of the participants reported often visiting Shinto shrines, while only 16.2% expressed belief in the existence of "kami" in general.
"Jinja" established outside of Japan itself are known as "kaigai jinja" ("overseas shrines"), a term coined by Ogasawara Shōzō. These were established both in territories throughout Asia conquered by the Japanese and in areas across the world where Japanese migrants settled. At the time that the Japanese Empire collapsed in the 1940s, there were over 600 public shrines, and over 1,000 smaller shrines, within Japan's conquered territories. Following the collapse of the empire, many of these shrines were disbanded.
Japanese migrants established several shrines in Brazil.
Shinto has attracted interest outside of Japan, in part because it lacks the doctrinal focus of major religions found in other parts of the world.
Shinto was introduced to the United States largely by interested European Americans rather than by Japanese migrants.
In the early twentieth century, and to a lesser extent in the second half of the century, Shinto was depicted as monolithic and intensely indigenous by the Japanese state institution, and there were various state-induced taboos influencing academic research into Shinto in Japan. Japanese secular academics who questioned the historical claims made by the Imperial institution for various Shinto historical facts and ceremonies, or who personally refused to take part in certain Shinto rituals, could lose their jobs and livelihood. Following the Second World War, many scholars writing on Shinto were also priests; they wrote from the perspective of active proponents. The result of this practice was to depict what was actually a dynamic and diverse set of beliefs, interacting with knowledge and religion from mainland China, as static and unchanging, formed by the imperial family centuries ago. Some secular scholars accused these individuals of blurring theology with historical analysis. In the late 1970s and 1980s the work of the secular historian Kuroda Toshio attempted to reframe Shinto not as a timeless "indigenous" entity, but rather as an amalgam of various local beliefs infused over time with outside influences through waves of Buddhism, Taoism, and Confucianism. Part of his analysis is that this obfuscation was a cloak for Japanese ethnic nationalism, used by state institutions, especially in the Meiji and post-war eras, to underpin the Japanese national identity. | https://en.wikipedia.org/wiki?curid=28272 |
Scottish Rite
The Ancient and Accepted Scottish Rite of Freemasonry (the Northern Masonic Jurisdiction in the United States often omits the "and", while the English Constitution in the United Kingdom omits the "Scottish"), commonly known as simply the Scottish Rite (or, in England and Australia, as the Rose Croix although this is only one of its degrees), is one of several Rites of Freemasonry. A Rite is a progressive series of degrees conferred by various Masonic organizations or bodies, each of which operates under the control of its own central authority. In the Scottish Rite the central authority is called a Supreme Council.
The Scottish Rite is one of the appendant bodies of Freemasonry that a Master Mason may join for further exposure to the principles of Freemasonry. It is also concordant, in that some of its degrees relate to the degrees of Symbolic (Craft) Freemasonry. In England and some other countries, while the Scottish Rite is not accorded official recognition by the Grand Lodge, only a recognised Freemason may join and there is no prohibition against his doing so. In the United States, however, the Scottish Rite is officially recognized by Grand Lodges as an extension of the degrees of Freemasonry. The Scottish Rite builds upon the ethical teachings and philosophy offered in the Craft (or Blue) Lodge, through dramatic presentation of the individual degrees.
The seed of the myth of Stuart Jacobite influence on the higher degrees may have been a careless and unsubstantiated remark made by John Noorthouk in the 1784 Book of Constitutions of the Premier Grand Lodge of London. It was stated, without support, that King Charles II (older brother and predecessor to James II) was made a Freemason in the Netherlands during the years of his exile (1649–60). However, there were no documented lodges of Freemasons on the continent during those years. The statement may have been made to flatter the fraternity by claiming membership for a previous monarch. This folly was then embellished by John Robison (1739–1805), a professor of Natural Philosophy at the University of Edinburgh, in an anti-Masonic work published in 1797. The lack of scholarship exhibited by Robison in that work caused the "Encyclopædia Britannica" to denounce it.
A German bookseller and Freemason, living in Paris, working under the assumed name of C. Lenning, embellished the story further in a manuscript titled "Encyclopedia of Freemasonry" probably written between 1822 and 1828 at Leipzig. This manuscript was later revised and published by another German Freemason named Friedrich Mossdorf (1757–1830). Lenning stated that King James II of England, after his flight to France in 1688, resided at the Jesuit College of Clermont, where his followers fabricated certain degrees for the purpose of carrying out their political ends.
By the mid-19th century, the story had gained currency. The well-known English Masonic writer, Dr. George Oliver (1782–1867), in his "Historical Landmarks", 1846, carried the story forward and even claimed that King Charles II was active in his attendance at meetings—an obvious invention, for if it had been true, it would not have escaped the notice of the historians of the time. The story was then repeated by the French writers Jean-Baptiste Ragon (1771–1862) and Emmanuel Rebold, in their Masonic histories. Rebold's claim that the high degrees were created and practiced in Lodge Canongate Kilwinning at Edinburgh is entirely false.
James II died in 1701 at the Palace of St. Germain en Laye, and was succeeded in his claims to the English, Irish and Scottish thrones by his son, James Francis Edward Stuart (1699–1766), the Chevalier St. George, better known as "the Old Pretender", but recognized as James III & VIII by the French King Louis XIV. He was succeeded in his claim by Charles Edward Stuart ("Bonnie Prince Charles"), also known as "the Young Pretender", whose ultimate defeat at the Battle of Culloden in 1746 effectively put an end to any serious hopes of the Stuarts regaining the British crowns.
The natural confusion between the names of the Jesuit College of Clermont, and the short-lived Masonic Chapter of Clermont, a Masonic body that controlled a few high degrees during its brief existence, only served to add fuel to the myth of Stuart Jacobite influence in Freemasonry's high degrees. However, the College and the Chapter had nothing to do with each other. The Jesuit College was located at Clermont, whereas the Masonic Chapter was not. Rather, it was named "Clermont" in honor of the French Grand Master, the Comte de Clermont (Louis de Bourbon, Comte de Clermont) (1709–1771), and not because of any connection with the Jesuit College of Clermont.
A French trader, by the name of Estienne Morin, had been involved in high-degree Masonry in Bordeaux since 1744 and, in 1747, founded an "Écossais" lodge (Scottish Lodge) in the city of Le Cap Français, on the north coast of the French colony of Saint-Domingue (now Haiti). Over the next decade, high-degree Freemasonry was carried by French men to other cities in the Western hemisphere. The high-degree lodge at Bordeaux warranted or recognized seven Écossais lodges there.
In Paris in the year 1761, a patent was issued to Estienne Morin, dated 27 August, creating him "Grand Inspector for all parts of the New World". This Patent was signed by officials of the Grand Lodge at Paris and appears to have originally granted him power over the craft lodges only, and not over the high, or "Écossais", degree lodges. Later copies of this Patent appear to have been embellished, probably by Morin, to improve his position over the high-degree lodges in the West Indies.
Morin returned to the West Indies in 1762 or 1763, to Saint-Domingue. Based on his new Patent, he assumed powers to constitute lodges of all degrees, spreading the high degrees throughout the West Indies and North America. Morin stayed in Saint-Domingue until 1766, when he moved to Jamaica. At Kingston, Jamaica, in 1770, Morin created a "Grand Chapter" of his new Rite (the Grand Council of Jamaica). Morin died in 1771 and was buried in Kingston.
Early writers long believed that a "Rite of Perfection" consisting of 25 degrees, (the highest being the "Sublime Prince of the Royal Secret", and being the predecessor of the Scottish Rite), had been formed in Paris by a high-degree council calling itself "The Council of Emperors of the East and West". The title "Rite of Perfection" first appeared in the Preface to the "Grand Constitutions of 1786", the authority for which is now known to be faulty.
It is now generally accepted that this Rite of twenty-five degrees was compiled by Estienne Morin and is more properly called "The Rite of the Royal Secret", or "Morin's Rite".
However, it was known as "The Order of Prince of the Royal Secret" by the founders of the Scottish Rite, who mentioned it in their "Circular throughout the two Hemispheres" or "Manifesto", issued on December 4, 1802.
Henry Andrew Francken, a naturalized French subject born as "Hendrick Andriese Franken" and of Dutch origin, was most important in assisting Morin in spreading the degrees in the New World. Morin appointed him Deputy Grand Inspector General (DGIG) as one of his first acts after returning to the West Indies. Francken worked closely with Morin and, in 1771, produced a manuscript book giving the rituals for the 15th through the 25th degrees. Francken produced at least four such manuscripts. In addition to the 1771 manuscript, there is a second which can be dated to 1783; a third manuscript, of uncertain date, written in Francken's handwriting, with the rituals 4–25°, which was found in the archives of the Provincial Grand Lodge of Lancashire in Liverpool in approximately 1984; and a fourth, again of uncertain date, with rituals 4–24°, which was known to have been given by H. J. Whymper to the District Grand Lodge of the Punjab and rediscovered about 2010. Additionally, there is a French manuscript dating from 1790–1800 which contains the 25 degrees of the Order of the Royal Secret with additional detail, as well as three other "Hauts Grades" rituals; its literary structure suggests it derives from the same source as the Francken Manuscripts.
A Loge de Parfaits d' Écosse was formed on 12 April 1764 at New Orleans, becoming the first high-degree lodge on the North American continent. Its life, however, was short, as the Treaty of Paris (1763) ceded New Orleans to Spain, and the Catholic Spanish crown had been historically hostile to Freemasonry. Documented Masonic activity ceased for a time. It did not return to New Orleans until the late 1790s, when French refugees from the revolution in Saint-Domingue settled in the city.
Francken traveled to New York in 1767 where he granted a Patent, dated 26 December 1767, for the formation of a Lodge of Perfection at Albany, which was called "Ineffable Lodge of Perfection". This marked the first time the Degrees of Perfection (the 4th through the 14th) were conferred in one of the Thirteen British colonies in North America. This Patent, and the early minutes of the Lodge, are still extant and are in the archives of Supreme Council, Northern Jurisdiction. (The minutes of Ineffable Lodge of Perfection reveal that it ceased activity on December 5, 1774. It was revived by Giles Fonda Yates about 1820 or 1821, and came under authority of the Supreme Council, Southern Jurisdiction until 1827. That year it was transferred to the Supreme Council, Northern Jurisdiction.)
While in New York City, Francken also communicated the degrees to Moses Michael Hays, a Jewish businessman, and appointed him as a Deputy Inspector General. In 1781, Hays made eight Deputy Inspectors General, four of whom were later important in the establishment of Scottish Rite Freemasonry in South Carolina:
Da Costa returned to Charleston, South Carolina, where he established the "Sublime Grand Lodge of Perfection" in February 1783. After Da Costa's death in November 1783, Hays appointed Myers as Da Costa's successor. Joined by Forst and Spitzer, Myers created additional high-degree bodies in Charleston.
Physician Hyman Isaac Long from the island of Jamaica, who settled in New York City, went to Charleston in 1796 to appoint eight French men; he had received his authority through Spitzer. These men had arrived as refugees from Saint-Domingue, where the slave revolution was underway that would establish Haiti as an independent republic in 1804. They organized a Consistory of the 25th Degree, or "Princes of the Royal Secret," which Masonic historian Brigadier ACF Jackson says became the first Supreme Council of the Scottish Rite. According to Fox, by 1801, the Charleston bodies were the only extant bodies of the Rite in North America.
Although most of the thirty-three degrees of the Scottish Rite existed in parts of previous degree systems, the Scottish Rite did not come into being until the formation of the Mother Supreme Council at Charleston, South Carolina, in May 1801 at Shepheard's Tavern at the corner of Broad and Church Streets (the tavern had been the location of the founding of Freemasonry in South Carolina in 1754). The Founding Fathers of the Scottish Rite who attended became known as "The Eleven Gentlemen of Charleston".
Subsequently, other Supreme Councils were formed in Saint-Domingue (now Haiti) in 1802, in France in 1804, in Italy in 1805, and in Spain in 1811.
On May 1, 1813, an officer from the Supreme Council at Charleston initiated several New York Masons into the Thirty-third Degree and organized a Supreme Council for the "Northern Masonic District and Jurisdiction". On May 21, 1814 this Supreme Council reopened and proceeded to "nominate, elect, appoint, install and proclaim in due, legal and ample form" the elected officers "as forming the "second" Grand and Supreme Council...". Finally, the charter of this organization (written January 7, 1815) added, "We think the 'Ratification' ought to be dated 21st day May 5815."
Officially, the Supreme Council, 33°, N.M.J. dates itself from May 15, 1867. This was the date of the "Union of 1867", when it merged with the competing Cerneau "Supreme Council" in New York. The current Ancient and Accepted Scottish Rite, Northern Masonic Jurisdiction of the United States, was thus formed.
Born in Boston, Massachusetts on December 29, 1809, Albert Pike is asserted within the Southern Jurisdiction to be the man most responsible for the growth and success of the Scottish Rite, from an obscure Masonic Rite in the mid-19th century to the international fraternity that it became. Pike received the 4th through the 32nd Degrees in March 1853 from Albert Mackey, in Charleston, South Carolina, and was appointed Deputy Inspector for Arkansas that same year.
At this point, the degrees were in a rudimentary form, and often included only a brief history and legend of each degree, as well as other brief details which usually lacked a workable ritual for their conferral. In 1855, the Supreme Council appointed a committee to prepare and compile rituals for the 4th through the 32nd Degrees. That committee was composed of Albert G. Mackey, John H. Honour, William S. Rockwell, Claude P. Samory, and Albert Pike. Of these five committee members, Pike did all the work of the committee.
In 1857 Pike completed his first revision of the 4°-32° ritual, and printed 100 copies. This revision, which Mackey dubbed the "Magnum Opus", was never adopted by the Supreme Council. According to Arturo de Hoyos, 33°, the Scottish Rite's Grand Historian, the Magnum Opus became the basis for future ritual revisions.
In March 1858, Pike was elected a member of the Supreme Council for the Southern Jurisdiction of the United States, and in January 1859 he became its Grand Commander. The American Civil War interrupted his work on the Scottish Rite rituals. About 1870 he, and the Supreme Council, moved to Washington, DC. In 1884 his revision of the rituals was complete.
Scottish Rite Grand Archivist and Grand Historian de Hoyos created the following chart of Pike's ritual revisions:
Pike also wrote lectures about all the degrees, which were published in 1871 under the title "Morals and Dogma of the Ancient and Accepted Scottish Rite of Freemasonry".
In 2000 the Southern Jurisdiction revised its ritual. The current ritual is based upon Pike's, but with some significant differences.
The thirty-three degrees of the Scottish Rite are conferred by several controlling bodies. The first of these is the Craft Lodge, which confers the Entered Apprentice, Fellowcraft, and Master Mason degrees. Craft lodges operate under the authority of national (or in the US, state) Grand Lodges, not the Scottish Rite. Attainment of the third Masonic degree, that of a Master Mason, represents the highest rank in all of Masonry. Additional degrees such as those of the AASR are sometimes referred to as "appendant degrees", even where the degree numbering might imply a hierarchy. They represent a lateral movement in Masonic education rather than an upward movement, and are degrees of instruction rather than rank.
In 2000, the Southern Jurisdiction in the United States completed a revision of its ritual scripts. In 2004, the Northern Jurisdiction in the United States rewrote and reorganized its degrees, and further changes occurred in 2006. The current titles of the degrees and their arrangement in the Southern Jurisdiction remain substantially unchanged from the beginning.
The list of degrees for the Supreme Councils of Australia, England and Wales, and most other jurisdictions largely agrees with that of the Southern Jurisdiction of the U.S. However, the list of degrees for the Northern Jurisdiction of the United States is now somewhat different and is given in the table below. The list of degrees of the Supreme Council of Canada reflects a mixture of the two, with some unique titles as well:
The AASR does have its own distinctive versions of the Craft rituals (Entered Apprentice, Fellow Craft, and Master Mason), but most lodges throughout the English-speaking world do not confer them. However, there are a handful of lodges in New Orleans and New York City that confer the Scottish Rite version of these degrees.
The AASR craft degrees are more common in Europe and Latin-American jurisdictions. Most lodges under the jurisdiction of the Grande Loge de France use these degrees, as do a few of the lodges under the jurisdiction of the Grande Loge Nationale Française. It is the dominant ritual among those in use in the Grand Lodge of Spain. There are two Lodges in Australia that practise the AASR Craft degrees, The Zetland Lodge of Australia No. 9 and Lodge France 1021, both of which are under the United Grand Lodge of New South Wales and the Australian Capital Territory.
According to Masonic historian Alain Bernheim, Belgian Masonic scholar Pierre Noël demonstrated in a 2002 paper that the AASR Craft degrees derived from the French translation of the Masonic exposé "Three Distinct Knocks", issued in London in 1760.
There are records of lodges conferring the degree of "Scots Master" or "Scotch Master" as early as 1733. A lodge at Temple Bar in London is the earliest such lodge on record. Other lodges include a lodge at Bath in 1735, and the French lodge, St. George de l'Observance No. 49 at Covent Garden in 1736. The references to these few occasions indicate that these were special meetings held for the purpose of performing unusual ceremonies, probably by visiting Freemasons. The Copiale cipher, dating from the 1740s says, "The rank of a Scottish master is an entirely new invention..."
The Ancient and Accepted Scottish Rite in each country is governed by a Supreme Council. There is no international governing body; each Supreme Council in each country is sovereign unto itself in its own jurisdiction.
In Canada, whose Supreme Council was warranted in 1874 by that of England and Wales, the Rite is known as Ancient and Accepted Scottish Rite. The council is called "Supreme Council 33° Ancient and Accepted Scottish Rite of Freemasonry of Canada". Canada's Supreme Council office is located at 4 Queen Street South in Hamilton, Ontario. There are 45 local units or "Valleys" across Canada.
When Comte de Grasse-Tilly returned to France in 1804, he worked to establish the Ancient and Accepted Scottish Rite there. He founded the first Supreme Council in France that same year.
The Grand Orient of France signed a treaty of union in December 1804 with the Supreme Council of the 33rd Degree in France; the treaty declared that "the Grand Orient united to itself" the Supreme Council in France. This accord was applied until 1814. Thanks to this treaty, the Grand Orient of France took ownership, as it were, of the Scottish Rite.
From 1805 to 1814, the Grand Orient of France administered the first 18 degrees of the Rite, leaving the Supreme Council of France to administer the last 15. In 1815, five of the leaders of the Supreme Council founded the "Suprême Conseil des Rites" within the Grand Orient of France. The original Supreme Council of France fell dormant from 1815 to 1821.
The "Suprême Conseil des Isles d'Amérique" (founded in 1802 by Grasse-Tilly and revived around 1810 by his father-in-law Delahogue, who had also returned from the United States) breathed new life into the Supreme Council for the 33rd Degree in France. They merged into a single organization: the Supreme Council of France. This developed as an independent and sovereign Masonic power. It created symbolic lodges (those composed of the first three degrees, which otherwise would be federated around a Grand Lodge or a Grand Orient).
In 1894, the Supreme Council of France created the Grand Lodge of France. It became fully independent in 1904, when the Supreme Council of France ceased chartering new lodges. The Supreme Council of France still considers itself the overseer of all 33 degrees of the Rite. Relations between the two structures remain close, as shown by their organizing two joint meetings a year.
In 1964, the Sovereign Grand Commander Charles Riandey, along with 400 to 500 members, left the jurisdiction of the Supreme Council of France and joined the Grande Loge Nationale Française. Because of his resignation and withdrawal of hundreds of members, there was no longer a Supreme Council of France. Riandey then reinitiated the 33 degrees of the rite in Amsterdam. With the support of the Supreme Council of the Southern Jurisdiction of the United States, he founded a new Supreme Council in France, called the "Suprême Conseil pour la France". This was the only one to be recognized by the Supreme Councils of the United States after it was designated in 1970 as the sole authority of the Scottish Rite for France by the Supreme Council of the Southern Jurisdiction (the oldest Supreme Council in the world) at the Barranquilla conference.
France has three different and arguably legitimate Supreme Councils:
The Ancient and Accepted Scottish Rite was established in Romania in 1881, a year after the National Grand Lodge of Romania was founded. On 27 December 1922, the Supreme Council of the Scottish Rite of Romania received the recognition of the Supreme Council of France, and in 1925 it received recognition from the Supreme Council, Southern Jurisdiction of the United States.
Between 1948 and 1989, all of Romanian Freemasonry, including the Ancient and Accepted Scottish Rite of Romania, was banned by the Communist regime.
The Supreme Council of the Ancient and Accepted Scottish Rite of Romania was reconsecrated in 1993.
In England and Wales, whose Supreme Council was warranted by that of the Northern Jurisdiction of the USA (in 1845), the Rite is known colloquially as the "Rose Croix" or more formally as "The Ancient and Accepted Rite for England and Wales and its Districts and Chapters Overseas" (continental European jurisdictions retain the "Écossais"). England and Wales are divided into Districts, which administer the Rose Croix Chapters within their District; many degrees are conferred in name only, and degrees beyond the 18° are conferred only by the Supreme Council itself.
All candidates for membership must profess the Trinitarian Christian faith and have been Master masons for at least one year.
In England and Wales, the candidate is perfected in the 18th degree with the preceding degrees awarded in name only. Continuing to the 30th degree is restricted to those who have served in the chair of the Chapter. Elevation beyond the 30th degree is as in Scotland.
In Scotland, candidates are perfected in the 18th degree, with the preceding degrees awarded in name only. A minimum of a two-year interval is required before continuing to the 30th degree, again with the intervening degrees awarded by name only. Elevation beyond that is by invitation only, and numbers are severely restricted.
In the United States of America there are two Supreme Councils: one in Washington, D.C. (which controls the Southern Jurisdiction), and one in Lexington, Massachusetts (which controls the Northern Masonic Jurisdiction). They each have particular characteristics that make them different.
In the United States, members of the Scottish Rite can be elected to receive the 33° by the Supreme Council. It is conferred on members who have made major contributions to society or to Masonry in general.
Based in Washington, D.C., the Southern Jurisdiction (often referred to as the "Mother Supreme Council of the World") was founded in Charleston, South Carolina, in 1801. It oversees the Ancient and Accepted Scottish Rite in 35 states, which are referred to as "Orients", which are divided into regions called "Valleys", each containing individual bodies.
In the Southern Jurisdiction of the United States, the Supreme Council consists of no more than 33 members and is presided over by a Sovereign Grand Commander. The current Sovereign Grand Commander is Illustrious Brother James D. Cole, 33°. Other members of the Supreme Council are called "Sovereign Grand Inspectors General" (S.G.I.G.), and each is the head of the AASR bodies in his respective Orient (or state). Other heads of the various Orients who are not members of the Supreme Council are called "Deputies of the Supreme Council". The Supreme Council of the Southern Jurisdiction meets every odd year during the month of August at the House of the Temple, Ancient and Accepted Scottish Rite of Freemasonry Southern Jurisdiction Headquarters, in Washington, D.C. During this conference, closed meetings between the Grand Commander and the S.G.I.G.'s are held, and many members of the fraternity from all over the world attend the open ceremony on the fifth of the six council meeting days.
In the Southern Jurisdiction, a member who has been a 32° Scottish Rite Mason for 46 months or more is eligible to be elected to receive the "rank and decoration" of Knight Commander of the Court of Honour (K.C.C.H.) in recognition of outstanding service. After 46 months as a K.C.C.H. he is then eligible to be elected to the 33rd degree, upon approval of the Supreme Council and Sovereign Grand Commander.
The Lexington, Massachusetts-based Northern Masonic Jurisdiction, formed in 1813, oversees the bodies in fifteen states: Connecticut, Delaware, Illinois, Indiana, Maine, Massachusetts, Michigan, New Jersey, New Hampshire, New York, Ohio, Pennsylvania, Rhode Island, Wisconsin and Vermont. The Northern Jurisdiction is only divided into "Valleys", not Orients. Each Valley has up to four Scottish Rite bodies, and each body confers a set of degrees.
In the Northern Jurisdiction, the Supreme Council consists of no more than 66 members. Those who are elected to membership on the Supreme Council are then designated "Active." In the Northern Jurisdiction, all recipients of the 33rd Degree are honorary members of the Supreme Council, and all members are referred to as "Sovereign Grand Inspectors General." The head of the Rite in each state of the Northern Jurisdiction is called a "Deputy of the Supreme Council." Thus the highest-ranking Scottish Rite officer in Ohio is titled "Deputy for Ohio", and so forth for each state. Additionally, each Deputy has one or more "Actives" to assist him in the administration of the state. Active members of the Supreme Council who have served faithfully for ten years, or who reach the age of 75, may be designated "Active, Emeritus". The Northern Jurisdiction Supreme Council meets yearly: in even years in executive session, and in odd years with the full membership invited. The 33rd Degree is conferred in the odd years at the Annual Meeting.
In the Northern Jurisdiction, there is a 46-month requirement for eligibility to receive the 33rd degree, and while there is a Meritorious Service Award (as well as a Distinguished Service Award), they are not required intermediate steps towards the 33°. | https://en.wikipedia.org/wiki?curid=28278 |
Switch
In electrical engineering, a switch is an electrical component that can disconnect or connect the conducting path in an electrical circuit, interrupting the electric current or diverting it from one conductor to another. The most common type of switch is an electromechanical device consisting of one or more sets of movable electrical contacts connected to external circuits. When a pair of contacts is touching, current can pass between them; when the contacts are separated, no current can flow.
Switches are made in many different configurations; they may have multiple sets of contacts controlled by the same knob or actuator, and the contacts may operate simultaneously, sequentially, or alternately. A switch may be operated manually, for example, a light switch or a keyboard button, or may function as a sensing element to sense the position of a machine part, liquid level, pressure, or temperature, such as a thermostat. Many specialized forms exist, such as the toggle switch, rotary switch, mercury switch, pushbutton switch, reversing switch, relay, and circuit breaker. A common use is control of lighting, where multiple switches may be wired into one circuit to allow convenient control of light fixtures. Switches in high-powered circuits must have special construction to prevent destructive arcing when they are opened.
The most familiar form of switch is a manually operated electromechanical device with one or more sets of electrical contacts, which are connected to external circuits. Each set of contacts can be in one of two states: either "closed", meaning the contacts are touching and electricity can flow between them, or "open", meaning the contacts are separated and the switch is nonconducting. The mechanism actuating the transition between these two states (open or closed) is usually either an "alternate action" type (flip the switch for continuous "on" or "off") or a "momentary" type (push for "on" and release for "off"), though other types of action exist.
A switch may be directly manipulated by a human as a control signal to a system, such as a computer keyboard button, or to control power flow in a circuit, such as a light switch. Automatically operated switches can be used to control the motions of machines, for example, to indicate that a garage door has reached its full open position or that a machine tool is in a position to accept another workpiece. Switches may be operated by process variables such as pressure, temperature, flow, current, voltage, and force, acting as sensors in a process and used to automatically control a system. For example, a thermostat is a temperature-operated switch used to control a heating process. A switch that is operated by another electrical circuit is called a relay. Large switches may be remotely operated by a motor drive mechanism. Some switches are used to isolate electric power from a system, providing a visible point of isolation that can be padlocked if necessary to prevent accidental operation of a machine during maintenance, or to prevent electric shock.
An ideal switch would have no voltage drop when closed, and would have no limits on voltage or current rating. It would have zero rise time and fall time during state changes, and would change state without "bouncing" between on and off positions.
Practical switches fall short of this ideal; as the result of roughness and oxide films, they exhibit contact resistance, limits on the current and voltage they can handle, finite switching time, etc. The ideal switch is often used in circuit analysis as it greatly simplifies the system of equations to be solved, but this can lead to a less accurate solution. Theoretical treatment of the effects of non-ideal properties is required in the design of large networks of switches, as for example used in telephone exchanges.
In the simplest case, a switch has two conductive pieces, often metal, called "contacts", connected to an external circuit, that touch to complete (make) the circuit, and separate to open (break) the circuit. The contact material is chosen for its resistance to corrosion, because most metals form insulating oxides that would prevent the switch from working. Contact materials are also chosen on the basis of electrical conductivity, hardness (resistance to abrasive wear), mechanical strength, low cost and low toxicity. The formation of oxide layers at the contact surface, as well as surface roughness and contact pressure, determine the contact resistance and wetting current of a mechanical switch. Sometimes the contacts are plated with noble metals for their excellent conductivity and resistance to corrosion. They may be designed to wipe against each other to clean off any contamination. Nonmetallic conductors, such as conductive plastic, are sometimes used. To prevent the formation of insulating oxides, a minimum wetting current may be specified for a given switch design.
In electronics, switches are classified according to the arrangement of their contacts. A pair of contacts is said to be "closed" when current can flow from one to the other. When the contacts are separated by an insulating air gap, they are said to be "open", and no current can flow between them at normal voltages. The terms "make" for closure of contacts and "break" for opening of contacts are also widely used.
The terms pole and throw are also used to describe switch contact variations. The number of "poles" is the number of electrically separate switches which are controlled by a single physical actuator. For example, a "2-pole" switch has two separate, parallel sets of contacts that open and close in unison via the same mechanism. The number of "throws" is the number of separate wiring path choices other than "open" that the switch can adopt for each pole. A single-throw switch has one pair of contacts that can either be closed or open. A double-throw switch has a contact that can be connected to either of two other contacts; a triple-throw has a contact which can be connected to one of three other contacts, etc.
In a switch where the contacts remain in one state unless actuated, such as a push-button switch, the contacts can either be normally open (abbreviated "n.o." or "no") until closed by operation of the switch, or normally closed ("n.c." or "nc") and opened by the switch action. A switch with both types of contact is called a "changeover switch" or "double-throw switch". These may be "make-before-break" ("MBB" or shorting) which momentarily connects both circuits, or may be "break-before-make" ("BBM" or non-shorting) which interrupts one circuit before closing the other.
These terms have given rise to abbreviations for the types of switch which are used in the electronics industry, such as "single-pole, single-throw" (SPST) (the simplest type, "on or off") or "single-pole, double-throw" (SPDT), connecting either of two terminals to the common terminal. In electrical power wiring (i.e., house and building wiring by electricians), names generally involve the suffix "-way"; however, these terms differ between British English and American English (i.e., the terms "two way" and "three way" are used with different meanings).
Switches with larger numbers of poles or throws can be described by replacing the "S" or "D" with a number (e.g. 3PST, SP4T, etc.) or in some cases the letter "T" (for "triple") or "Q" (for "quadruple"). In the rest of this article the terms "SPST", "SPDT" and "intermediate" will be used to avoid the ambiguity.
Contact bounce (also called "chatter") is a common problem with mechanical switches and relays, and arises as a result of electrical contact resistance (ECR) phenomena at interfaces. Switch and relay contacts are usually made of springy metals. When the contacts strike together, their momentum and elasticity act together to cause them to bounce apart one or more times before making steady contact. The result is a rapidly pulsed electric current instead of a clean transition from zero to full current. The effect is usually unimportant in power circuits, but causes problems in some analogue and logic circuits that respond fast enough to misinterpret the on-off pulses as a data stream. In the design of micro-contacts, controlling surface structure (surface roughness) and minimizing the formation of passivated layers on metallic surfaces are instrumental in inhibiting chatter.
The effects of contact bounce can be eliminated by use of mercury-wetted contacts, but these are now infrequently used because of the hazards of mercury. Alternatively, contact circuit voltages can be low-pass filtered to reduce or eliminate multiple pulses. In digital systems, multiple samples of the contact state can be taken at a low rate and examined for a steady sequence, so that contacts can settle before the contact level is considered reliable and acted upon. Bounce in SPDT switch contact signals can be filtered out using an SR flip-flop (latch) or a Schmitt trigger. All of these methods are referred to as 'debouncing'.
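The sampling approach just described can be sketched in a few lines of code. This is an illustrative sketch only; the class name, the choice of five consecutive samples, and the example reading sequence are assumptions, not part of any particular standard.

```python
# Illustrative debouncer: a raw contact reading is accepted as the new
# stable state only after it has been identical for N consecutive samples.
N = 5  # consecutive identical samples required (assumed value)

class Debouncer:
    def __init__(self):
        self.stable_state = False  # last accepted (debounced) state
        self.candidate = False     # most recent raw reading
        self.count = 0             # how many times the candidate has repeated

    def sample(self, raw: bool) -> bool:
        """Feed one raw contact reading; return the debounced state."""
        if raw == self.candidate:
            self.count += 1
        else:
            self.candidate = raw
            self.count = 1
        if self.count >= N:
            self.stable_state = self.candidate
        return self.stable_state

# Example: a bouncing closure is reported as closed only after five steady samples.
readings = [True, False, True, True, True, True, True]
d = Debouncer()
print([d.sample(r) for r in readings])
```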
By analogy, the term "debounce" has arisen in the software development industry to describe rate-limiting or throttling the frequency of a method's execution.
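As an illustration of this software sense of the word, here is a minimal sketch of a throttling-style "debounce" decorator in Python; the 0.25-second interval and the function names are arbitrary assumptions for the example.

```python
import time
from functools import wraps

def debounce(interval: float):
    """Ignore repeated calls arriving within `interval` seconds of the
    last accepted call (a simple throttling-style debounce)."""
    def decorator(func):
        last_called = 0.0
        @wraps(func)
        def wrapper(*args, **kwargs):
            nonlocal last_called
            now = time.monotonic()
            if now - last_called >= interval:
                last_called = now
                return func(*args, **kwargs)
            return None  # call suppressed as a "bounce"
        return wrapper
    return decorator

@debounce(0.25)
def on_button_click():
    print("click handled")

on_button_click()  # handled
on_button_click()  # suppressed if it arrives within 0.25 s of the first call
```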
In the Hammond organ, multiple wires are pressed together under the piano keys of the manuals. Their bouncing and non-synchronous closing of the switches is known as "Hammond Click" and compositions exist that use and emphasize this feature. Some electronic organs have a switchable replica of this sound effect.
When the power being switched is sufficiently large, the electron flow across opening switch contacts is sufficient to ionize the air molecules across the tiny gap between the contacts as the switch is opened, forming a gas plasma, also known as an electric arc. The plasma is of low resistance and is able to sustain power flow, even with the separation distance between the switch contacts steadily increasing. The plasma is also very hot and is capable of eroding the metal surfaces of the switch contacts. Electric current arcing causes significant degradation of the contacts and also significant electromagnetic interference (EMI), requiring the use of arc suppression methods.
Where the voltage is sufficiently high, an arc can also form as the switch is closed and the contacts approach. If the voltage potential is sufficient to exceed the breakdown voltage of the air separating the contacts, an arc forms which is sustained until the switch closes completely and the switch surfaces make contact.
In either case, the standard method for minimizing arc formation and preventing contact damage is to use a fast-moving switch mechanism, typically using a spring-operated tipping-point mechanism to assure quick motion of switch contacts, regardless of the speed at which the switch control is operated by the user. Movement of the switch control lever applies tension to a spring until a tipping point is reached, and the contacts suddenly snap open or closed as the spring tension is released.
As the power being switched increases, other methods are used to minimize or prevent arc formation. A plasma is hot and will rise due to convection air currents. The arc can be quenched with a series of non-conductive blades spanning the distance between switch contacts; as the arc rises, its length increases as it forms ridges rising into the spaces between the blades, until the arc is too long to be sustained and is extinguished. A "puffer" may be used to blow a sudden high-velocity burst of gas across the switch contacts, which rapidly extends the length of the arc to extinguish it quickly.
Extremely large switches often have switch contacts surrounded by something other than air to more rapidly extinguish the arc. For example, the switch contacts may operate in a vacuum, immersed in mineral oil, or in sulfur hexafluoride.
In AC power service, the current periodically passes through zero; this effect makes it harder to sustain an arc on opening. Manufacturers may rate switches with lower voltage or current rating when used in DC circuits.
When a switch is designed to switch significant power, the transitional state of the switch, as well as its ability to withstand continuous operating currents, must be considered. When a switch is in the on state, its resistance is near zero and very little power is dropped in the contacts; when a switch is in the off state, its resistance is extremely high, almost no current flows, and again almost no power is dissipated. However, while the switch is being flicked, its resistance must pass through a state in which up to a quarter of the load's rated power (or worse, if the load is not purely resistive) is briefly dropped in the switch.
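The quarter-power figure follows from a short calculation, assuming an idealized purely resistive load of resistance R_L fed from a fixed supply voltage V (so the load's rated power is P_L = V^2/R_L) and a switch of momentary contact resistance R_s in series:

```latex
P_s = I^2 R_s = \frac{V^2 R_s}{(R_s + R_L)^2},
\qquad
\frac{dP_s}{dR_s} = 0 \;\Longrightarrow\; R_s = R_L,
\qquad
P_{s,\max} = \frac{V^2}{4 R_L} = \frac{P_L}{4}.
```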
For this reason, power switches intended to interrupt a load current have spring mechanisms to make sure the transition between on and off is as short as possible regardless of the speed at which the user moves the rocker.
Power switches usually come in two types. A momentary on-off switch (such as on a laser pointer) usually takes the form of a button and only closes the circuit while the button is depressed. A regular on-off switch (such as on a flashlight) stays in whichever position it was last set to. Dual-action switches incorporate both of these features.
When a strongly inductive load such as an electric motor is switched off, the current cannot drop instantaneously to zero; a spark will jump across the opening contacts. Switches for inductive loads must be rated to handle these cases. The spark will cause electromagnetic interference if not suppressed; a snubber network of a resistor and capacitor in series will quell the spark.
When turned on, an incandescent lamp draws a large inrush current of about ten times the steady-state current; as the filament heats up, its resistance rises and the current decreases to a steady-state value. A switch designed for an incandescent lamp load can withstand this inrush current.
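As a worked example (the lamp rating is hypothetical and the tenfold factor is the rough figure quoted above), a 60 W lamp on a 120 V supply draws:

```latex
I_{\text{steady}} = \frac{P}{V} = \frac{60\ \mathrm{W}}{120\ \mathrm{V}} = 0.5\ \mathrm{A},
\qquad
I_{\text{inrush}} \approx 10 \times 0.5\ \mathrm{A} = 5\ \mathrm{A}.
```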
"Wetting current" is the minimum current needing to flow through a mechanical switch while it is operated to break through any film of oxidation that may have been deposited on the switch contacts. The film of oxidation occurs often in areas with high humidity. Providing a sufficient amount of wetting current is a crucial step in designing systems that use delicate switches with small contact pressure as sensor inputs. Failing to do this might result in switches remaining electrically "open" due to contact oxidation.
The moving part that applies the operating force to the contacts is called the "actuator", and may be a toggle or "dolly", a rocker, a push-button or any type of mechanical linkage "(see photo)."
A switch normally maintains its set position once operated. A biased switch contains a mechanism that springs it into another position when released by an operator. The momentary push-button switch is a type of biased switch. The most common type is a "push-to-make" (or normally-open or NO) switch, which makes contact when the button is pressed and breaks when the button is released. Each key of a computer keyboard, for example, is a normally-open "push-to-make" switch. A "push-to-break" (or normally-closed or NC) switch, on the other hand, breaks contact when the button is pressed and makes contact when it is released. An example of a push-to-break switch is a button used to release a door held closed by an electromagnet. The interior lamp of a household refrigerator is controlled by a switch that is held open when the door is closed.
A rotary switch operates with a twisting motion of the operating handle with at least two positions. One or more positions of the switch may be momentary (biased with a spring), requiring the operator to hold the switch in the position. Other positions may have a detent to hold the position when released. A rotary switch may have multiple levels or "decks" in order to allow it to control multiple circuits.
One form of rotary switch consists of a spindle or "rotor" that has a contact arm or "spoke" which projects from its surface like a cam. It has an array of terminals, arranged in a circle around the rotor, each of which serves as a contact for the "spoke", through which any one of a number of different electrical circuits can be connected to the rotor. The switch is layered to allow the use of multiple poles; each layer is equivalent to one pole. Usually such a switch has a detent mechanism so it "clicks" from one active position to another rather than stalling in an intermediate position. Thus a rotary switch provides greater pole and throw capabilities than simpler switches do.
Other types use a cam mechanism to operate multiple independent sets of contacts.
Rotary switches were used as channel selectors on television receivers until the early 1970s, as range selectors on electrical metering equipment, as band selectors on multi-band radios and other similar purposes. In industry, rotary switches are used for control of measuring instruments, switchgear, or in control circuits. For example, a radio controlled overhead crane may have a large multi-circuit rotary switch to transfer hard-wired control signals from the local manual controls in the cab to the outputs of the remote control receiver.
A toggle switch or tumbler switch is a class of electrical switches that are manually actuated by a mechanical lever, handle, or rocking mechanism.
Toggle switches are available in many different styles and sizes, and are used in numerous applications. Many are designed to provide the simultaneous actuation of multiple sets of electrical contacts, or the control of large amounts of electric current or mains voltages.
The word "toggle" is a reference to a kind of mechanism or joint consisting of two arms, which are almost in line with each other, connected with an elbow-like pivot. However, the phrase "toggle switch" is applied to a switch with a short handle and a positive snap-action, whether it actually contains a toggle mechanism or not. Similarly, a switch where a definitive click is heard, is called a "positive on-off switch". A very common use of this type of switch is to switch lights or other electrical equipment on or off. Multiple toggle switches may be mechanically interlocked to prevent forbidden combinations.
In some contexts, particularly computing, a toggle switch, or the action of toggling, is understood in the different sense of a mechanical or software switch that alternates between two states each time it is activated, regardless of mechanical construction. For example, the caps lock key on a computer causes all letters to be generated in capitals after it is pressed once; pressing it again reverts to lower-case letters.
Switches can be designed to respond to any type of mechanical stimulus: for example, vibration (the trembler switch), tilt, air pressure, fluid level (a float switch), the turning of a key (key switch), linear or rotary movement (a limit switch or microswitch), or presence of a magnetic field (the reed switch). Many switches are operated automatically by changes in some environmental condition or by motion of machinery. A limit switch is used, for example, in machine tools to interlock operation with the proper position of tools. In heating or cooling systems a sail switch ensures that air flow is adequate in a duct. Pressure switches respond to fluid pressure.
The mercury switch consists of a drop of mercury inside a glass bulb with two or more contacts. The two contacts pass through the glass, and are connected by the mercury when the bulb is tilted to make the mercury roll on to them.
This type of switch performs much better than the ball tilt switch: the liquid metal connection is unaffected by dirt, debris and oxidation; it wets the contacts, ensuring a very low-resistance, bounce-free connection; and movement and vibration do not produce a poor contact. These types can be used for precision work.
It can also be used where arcing is dangerous (such as in the presence of explosive vapour) as the entire unit is sealed.
Knife switches consist of a flat metal blade, hinged at one end, with an insulating handle for operation, and a fixed contact. When the switch is closed, current flows through the hinged pivot and blade and through the fixed contact. Such switches are usually not enclosed. The knife and contacts are typically formed of copper, steel, or brass, depending on the application. Fixed contacts may be backed up with a spring. Several parallel blades can be operated at the same time by one handle. The parts may be mounted on an insulating base with terminals for wiring, or may be directly bolted to an insulated switch board in a large assembly. Since the electrical contacts are exposed, the switch is used only where people cannot accidentally come in contact with the switch or where the voltage is so low as to not present a hazard.
Knife switches are made in many sizes from miniature switches to large devices used to carry thousands of amperes. In electrical transmission and distribution, gang-operated switches are used in circuits up to the highest voltages.
The disadvantages of the knife switch are the slow opening speed and the proximity of the operator to exposed live parts. Metal-enclosed safety disconnect switches are used for isolation of circuits in industrial power distribution. Sometimes spring-loaded auxiliary blades are fitted which momentarily carry the full current during opening, then quickly part to rapidly extinguish the arc.
A footswitch is a rugged switch which is operated by foot pressure. An example of use is in the control of a machine tool, allowing the operator to have both hands free to manipulate the workpiece. The foot controls of an electric guitarist's effects pedals and amp are also footswitches.
A DPDT switch has six connections, but since polarity reversal is a very common usage of DPDT switches, some variations of the DPDT switch are internally wired specifically for polarity reversal. These crossover switches only have four terminals rather than six. Two of the terminals are inputs and two are outputs. When connected to a battery or other DC source, the 4-way switch selects from either normal or reversed polarity. Such switches can also be used as intermediate switches in a multiway switching system for control of lamps by more than two switches.
In building wiring, light switches are installed at convenient locations to control lighting and occasionally other circuits. By use of multiple-pole switches, multiway switching control of a lamp can be obtained from two or more places, such as the ends of a corridor or stairwell. A wireless light switch allows remote control of lamps for convenience; some lamps include a touch switch which electronically controls the lamp if touched anywhere. In public buildings several types of vandal resistant switches are used to prevent unauthorized use.
Slide switches are mechanical switches using a slider that moves (slides) from the open (off) position to the closed (on) position.
A relay is an electrically operated switch. Many relays use an electromagnet to operate a switching mechanism mechanically, but other operating principles are also used. Solid-state relays control power circuits with no moving parts, instead using a semiconductor device to perform switching—often a silicon-controlled rectifier or triac.
The analogue switch uses two MOSFET transistors in a transmission gate arrangement as a switch that works much like a relay, with some advantages and several limitations compared to an electromechanical relay.
The power transistor(s) in a switching voltage regulator, such as a power supply unit, are used like a switch to alternately let power flow and block power from flowing.
By metonymy, the term "switch" is applied to a variety of devices that conceptually connect or disconnect signals and communication paths between electrical devices, analogous to the way mechanical switches connect and disconnect paths for electrons to flow between two conductors. Early telephone systems used an automatically operated Strowger switch to connect telephone callers; today, telephone exchanges contain one or more crossbar switches.
Since the advent of digital logic in the 1950s, the term "switch" has spread to a variety of digital active devices, such as transistors and logic gates, whose function is to change their output state between two logic levels or to connect different signal lines, and even to devices such as network switches, whose function is to provide connections between different ports in a computer network. The most widely used electronic switch in digital circuits is the metal–oxide–semiconductor field-effect transistor (MOSFET).
The term 'switched' is also applied to telecommunications networks, and signifies a network that is circuit switched, providing dedicated circuits for communication between end nodes, such as the public switched telephone network. The common feature of all these usages is they refer to devices that control a binary state: they are either "on" or "off", "closed" or "open", "connected" or "not connected". | https://en.wikipedia.org/wiki?curid=28284 |
Sutra
Sutra () in Indian literary traditions refers to an aphorism or a collection of aphorisms in the form of a manual or, more broadly, a condensed manual or text. Sutras are a genre of ancient and medieval Indian texts found in Hinduism, Buddhism and Jainism.
In Hinduism, sutras are a distinct type of literary composition, a compilation of short aphoristic statements. Each sutra is any short rule, like a theorem distilled into a few words or syllables, around which teachings of ritual, philosophy, grammar, or any field of knowledge can be woven. The oldest sutras of Hinduism are found in the Brahmana and Aranyaka layers of the Vedas. Every school of Hindu philosophy, Vedic guides for rites of passage, various fields of arts, law, and social ethics developed respective sutras, which help teach and transmit ideas from one generation to the next.
In Buddhism, sutras, also known as "suttas", are canonical scriptures, many of which are regarded as records of the oral teachings of Gautama Buddha. They are not aphoristic, but are quite detailed, sometimes with repetition. This may reflect a philological root of "sukta" (well spoken), rather than "sutra" (thread).
In Jainism, sutras also known as "suyas" are canonical sermons of Mahavira contained in the Jain Agamas as well as some later (post-canonical) normative texts.
The Sanskrit word "Sūtra" (Sanskrit: सूत्र, Pali: "sūtta", Ardha Magadhi: "sūya") means "string, thread". The root of the word is "siv", that which sews and holds things together. The word is related to "sūci" (Sanskrit: सूचि) meaning "needle, list", and "sūnā" (Sanskrit: सूना) meaning "woven".
In the context of literature, "sūtra" means a distilled collection of syllables and words, any form or manual of "aphorism, rule, direction" hanging together like threads with which the teachings of ritual, philosophy, grammar, or any field of knowledge can be woven.
A "sūtra" is any short rule, states Moriz Winternitz, in Indian literature; it is "a theorem condensed in few words". A collection of "sūtras" becomes a text, and this is also called "sūtra" (often capitalized in Western literature).
A "sūtra" is different from other components such as "Shlokas", "Anuvyakhayas" and "Vyakhyas" found in ancient Indian literature. A "sūtra" is a condensed rule which succinctly states the message, while a "Shloka" is a verse that conveys the complete message and is structured to certain rules of musical meter, a "Anuvyakhaya" is an explanation of the reviewed text, while a "Vyakhya" is a comment by the reviewer.
Sutras first appear in the Brahmana and Aranyaka layers of Vedic literature. They grow in the Vedangas, such as the Shrauta Sutras and Kalpa Sutras. These were designed so that they could be easily communicated from teacher to student, and memorized by the recipient for discussion, self-study, or reference.
A sutra by itself is condensed shorthand, and its threads of syllables are difficult to decipher or understand without an associated scholarly Bhasya, or deciphering commentary, that fills in the "woof".
The oldest manuscripts that have survived into the modern era and that contain extensive sutras are part of the Vedas, dated from the late 2nd millennium BCE through the mid 1st millennium BCE. The Aitareya Aranyaka, for example, states Winternitz, is primarily a collection of "sutras". Their use and ancient roots are attested by sutras being mentioned in the larger genre of ancient non-Vedic Hindu literature called "Gatha", "Narashansi", "Itihasa", and "Akhyana" (songs, legends, epics, and stories).
In the history of Indian literature, large compilations of sutras, in diverse fields of knowledge, have been traced to the period from 600 BCE to 200 BCE (mostly after Buddha and Mahavira), and this has been called the "sutras period". This period followed the more ancient "Chhandas period", "Mantra period" and "Brahmana period".
Some of the earliest surviving specimens of "sutras" in Hinduism are found in the "Anupada Sutras" and "Nidana Sutras". The former distills the epistemic debate over whether Sruti or Smriti or neither must be considered the more reliable source of knowledge, while the latter distills the rules of musical meters for Samaveda chants and songs.
A larger collection of ancient sutra literature in Hinduism corresponds to the six Vedangas, or six limbs of the Vedas. These are six subjects described in the Vedas as necessary for complete mastery of the Vedas. The six subjects with their own "sutras" were pronunciation (Shiksha), meter (Chandas), grammar (Vyakarana), explanation of words (Nirukta), time keeping through astronomy (Jyotisha), and ceremonial rituals (Kalpa). The first two, states Max Muller, were considered in the Vedic era to be necessary for reading the Veda, the second two for understanding it, and the last two for deploying the Vedic knowledge at yajnas (fire rituals). The "sutras" corresponding to these are embedded inside the Brahmana and Aranyaka layers of the Vedas. The Taittiriya Aranyaka, for example in Book 7, embeds sutras for accurate pronunciation after the terse phrases "On Letters", "On Accents", "On Quantity", "On Delivery", and "On Euphonic Laws".
The Upanishads, the fourth and often last layer of philosophical, speculative text in the Vedas, also embed sutras, such as those found in the Taittiriya Upanishad.
The compendium of ancient Vedic sutra literature that has survived, in full or fragments, includes the Kalpa Sutras, Smarta Sutras, Srauta Sutras, Dharma Sutras, Grhya Sutras, and Sulba Sutras. Other fields for which ancient sutras are known include etymology, phonetics, and grammar.
Each major school of Hindu philosophy composed its own sutra text; well-known examples include the Brahma Sutras of the Vedanta school, the Yoga Sutras of Patanjali, and the Nyaya Sutras.
In Buddhism, a "sutta" or "sutra" is a part of the canonical literature. These early Buddhist sutras, unlike Hindu texts, are not aphoristic. On the contrary, they are most often quite lengthy. The Buddhist term "sutta" or "sutra" probably has roots in Sanskrit "sūkta" ("su" + "ukta"), "well spoken" from the belief that "all that was spoken by the Lord Buddha was well-spoken". They share the character of sermons of "well spoken" wisdom with the Jaina sutras.
In Chinese, these are known as 經 (pinyin: "jīng"). These teachings are assembled in part of the Tripiṭaka which is called the "Sutta Pitaka". There are many important or influential Mahayana texts, such as the "Platform Sutra" and the "Lotus Sutra", that are called sutras despite being attributed to much later authors.
In Theravada Buddhism, suttas comprise the second "basket" (pitaka) of the Pāli Canon; Rewata Dhamma and Bhikkhu Bodhi describe the Sutta Pitaka as the collection of the Buddha's discourses.
In the Jain tradition, sutras are an important genre of "fixed text", which used to be memorized.
The Kalpa Sūtra is, for example, a Jain text that includes monastic rules, as well as biographies of the Jain Tirthankaras. Many sutras discuss all aspects of ascetic and lay life in Jainism. Various ancient sutras particularly from the early 1st millennium CE, for example, recommend devotional bhakti as an essential Jain practice.
The surviving scriptures of the Jain tradition, such as the Acaranga Sutra (Agamas), exist in sutra format, as does the Tattvartha Sutra, a Sanskrit text accepted by all four Jain sects as the most authoritative philosophical text that completely summarizes the foundations of Jainism. | https://en.wikipedia.org/wiki?curid=28287 |
RNA
Ribonucleic acid (RNA) is a polymeric molecule essential in various biological roles in coding, decoding, regulation and expression of genes. RNA and DNA are nucleic acids. Along with lipids, proteins, and carbohydrates, nucleic acids constitute one of the four major macromolecules essential for all known forms of life. Like DNA, RNA is assembled as a chain of nucleotides, but unlike DNA, RNA is found in nature as a single strand folded onto itself, rather than a paired double strand. Cellular organisms use messenger RNA (mRNA) to convey genetic information (using the nitrogenous bases of guanine, uracil, adenine, and cytosine, denoted by the letters G, U, A, and C) that directs synthesis of specific proteins. Many viruses encode their genetic information using an RNA genome.
Some RNA molecules play an active role within cells by catalyzing biological reactions, controlling gene expression, or sensing and communicating responses to cellular signals. One of these active processes is protein synthesis, a universal function in which RNA molecules direct the synthesis of proteins on ribosomes. This process uses transfer RNA (tRNA) molecules to deliver amino acids to the ribosome, where ribosomal RNA (rRNA) then links amino acids together to form coded proteins.
The chemical structure of RNA is very similar to that of DNA, but differs in three primary ways: unlike double-stranded DNA, RNA is usually single-stranded in its biological roles; RNA nucleotides contain the sugar ribose, whereas DNA contains deoxyribose, which lacks the hydroxyl group at the 2' position; and RNA uses the base uracil where DNA uses the base thymine.
Like DNA, most biologically active RNAs, including mRNA, tRNA, rRNA, snRNAs, and other non-coding RNAs, contain self-complementary sequences that allow parts of the RNA to fold and pair with itself to form double helices. Analysis of these RNAs has revealed that they are highly structured. Unlike DNA, their structures do not consist of long double helices, but rather collections of short helices packed together into structures akin to proteins.
In this fashion, RNAs can achieve chemical catalysis (like enzymes). For instance, determination of the structure of the ribosome—an RNA-protein complex that catalyzes peptide bond formation—revealed that its active site is composed entirely of RNA.
Each nucleotide in RNA contains a ribose sugar, with carbons numbered 1' through 5'. A base is attached to the 1' position, in general adenine (A), cytosine (C), guanine (G), or uracil (U). Adenine and guanine are purines; cytosine and uracil are pyrimidines. A phosphate group is attached to the 3' position of one ribose and the 5' position of the next. The phosphate groups each have a negative charge, making RNA a charged molecule (polyanion). The bases form hydrogen bonds between cytosine and guanine, between adenine and uracil, and between guanine and uracil. However, other interactions are possible, such as a group of adenine bases binding to each other in a bulge, or the GNRA tetraloop that has a guanine–adenine base-pair.
An important structural component of RNA that distinguishes it from DNA is the presence of a hydroxyl group at the 2' position of the ribose sugar. The presence of this functional group causes the helix to mostly take the A-form geometry, although in single strand dinucleotide contexts, RNA can rarely also adopt the B-form most commonly observed in DNA. The A-form geometry results in a very deep and narrow major groove and a shallow and wide minor groove. A second consequence of the presence of the 2'-hydroxyl group is that in conformationally flexible regions of an RNA molecule (that is, not involved in formation of a double helix), it can chemically attack the adjacent phosphodiester bond to cleave the backbone.
RNA is transcribed with only four bases (adenine, cytosine, guanine and uracil), but these bases and attached sugars can be modified in numerous ways as the RNAs mature. Pseudouridine (Ψ), in which the linkage between uracil and ribose is changed from a C–N bond to a C–C bond, and ribothymidine (T) are found in various places (the most notable ones being in the TΨC loop of tRNA). Another notable modified base is hypoxanthine, a deaminated adenine base whose nucleoside is called inosine (I). Inosine plays a key role in the wobble hypothesis of the genetic code.
There are more than 100 other naturally occurring modified nucleosides. The greatest structural diversity of modifications is found in tRNA, while pseudouridine and nucleosides with 2'-O-methylribose, often present in rRNA, are the most common. The specific roles of many of these modifications in RNA are not fully understood. However, it is notable that, in ribosomal RNA, many of the post-transcriptional modifications occur in highly functional regions, such as the peptidyl transferase center and the subunit interface, implying that they are important for normal function.
The functional form of single-stranded RNA molecules, just like that of proteins, frequently requires a specific tertiary structure. The scaffold for this structure is provided by secondary structural elements formed by hydrogen bonds within the molecule. This leads to several recognizable "domains" of secondary structure, like hairpin loops, bulges, and internal loops. Since RNA is charged, metal ions such as Mg2+ are needed to stabilise many of its secondary and tertiary structures.
The naturally occurring enantiomer of RNA is D-RNA, composed of D-ribonucleotides; all of its chirality centers are located in the D-ribose. By using L-ribose, or rather L-ribonucleotides, L-RNA can be synthesized. L-RNA is much more stable against degradation by RNase.
Like other structured biopolymers such as proteins, one can define the topology of a folded RNA molecule. This is often done based on the arrangement of intra-chain contacts within the folded RNA, termed circuit topology.
Synthesis of RNA is usually catalyzed by an enzyme, RNA polymerase, using DNA as a template, a process known as transcription. Initiation of transcription begins with the binding of the enzyme to a promoter sequence in the DNA (usually found "upstream" of a gene). The DNA double helix is unwound by the helicase activity of the enzyme. The enzyme then progresses along the template strand in the 3' to 5' direction, synthesizing a complementary RNA molecule with elongation occurring in the 5' to 3' direction. The DNA sequence also dictates where termination of RNA synthesis will occur.
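The complementary, antiparallel copying described above can be illustrated with a short sketch. This is a toy model only: the six-base template and the function name are assumptions, and real transcription additionally involves promoters, termination, and processing.

```python
# Toy model of transcription: each base of the DNA template strand
# (read 3'->5') pairs with its complementary RNA base, so the mRNA
# grows in the 5'->3' direction.
DNA_TO_RNA = {"A": "U", "T": "A", "G": "C", "C": "G"}

def transcribe(template_3_to_5: str) -> str:
    """Return the mRNA (5'->3') complementary to a DNA template strand."""
    return "".join(DNA_TO_RNA[base] for base in template_3_to_5.upper())

# Example: template 3'-TACGGT-5' yields mRNA 5'-AUGCCA-3'.
print(transcribe("TACGGT"))  # AUGCCA
```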
Primary transcript RNAs are often modified by enzymes after transcription. For example, a poly(A) tail and a 5' cap are added to eukaryotic pre-mRNA and introns are removed by the spliceosome.
There are also a number of RNA-dependent RNA polymerases that use RNA as their template for synthesis of a new strand of RNA. For instance, a number of RNA viruses (such as poliovirus) use this type of enzyme to replicate their genetic material. Also, RNA-dependent RNA polymerase is part of the RNA interference pathway in many organisms.
Messenger RNA (mRNA) is the RNA that carries information from DNA to the ribosomes, the sites of protein synthesis (translation) in the cell. The coding sequence of the mRNA determines the amino acid sequence in the protein that is produced. However, many RNAs do not code for protein (about 97% of the transcriptional output is non-protein-coding in eukaryotes).
These so-called non-coding RNAs ("ncRNA") can be encoded by their own genes (RNA genes), but can also derive from mRNA introns. The most prominent examples of non-coding RNAs are transfer RNA (tRNA) and ribosomal RNA (rRNA), both of which are involved in the process of translation. There are also non-coding RNAs involved in gene regulation, RNA processing and other roles. Certain RNAs are able to catalyse chemical reactions such as cutting and ligating other RNA molecules, and the catalysis of peptide bond formation in the ribosome; these are known as ribozymes.
RNA molecules are commonly divided by chain length into small RNAs and long RNAs. Usually, small RNAs are shorter than 200 nt in length, and long RNAs are greater than 200 nt long. Long RNAs, also called large RNAs, mainly include long non-coding RNA (lncRNA) and mRNA. Small RNAs mainly include 5.8S ribosomal RNA (rRNA), 5S rRNA, transfer RNA (tRNA), microRNA (miRNA), small interfering RNA (siRNA), small nucleolar RNA (snoRNA), Piwi-interacting RNA (piRNA), tRNA-derived small RNA (tsRNA) and small rDNA-derived RNA (srRNA).
Messenger RNA (mRNA) carries information about a protein sequence to the ribosomes, the protein synthesis factories in the cell. It is coded so that every three nucleotides (a codon) correspond to one amino acid. In eukaryotic cells, once precursor mRNA (pre-mRNA) has been transcribed from DNA, it is processed to mature mRNA. This removes its introns, the non-coding sections of the pre-mRNA. The mRNA is then exported from the nucleus to the cytoplasm, where it is bound to ribosomes and translated into its corresponding protein form with the help of tRNA. In prokaryotic cells, which do not have separate nucleus and cytoplasm compartments, mRNA can bind to ribosomes while it is still being transcribed from DNA. After a certain amount of time, the message degrades into its component nucleotides with the assistance of ribonucleases.
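The codon reading described above can likewise be sketched as a toy example; the four-entry codon table below is only a fragment of the real 64-codon genetic code, chosen to cover the sample sequence.

```python
# Toy model of translation: read the mRNA 5'->3' three bases at a time,
# mapping each codon to an amino acid until a stop codon is reached.
CODON_TABLE = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP"}

def translate(mrna: str) -> list:
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE[mrna[i:i + 3]]
        if amino_acid == "STOP":
            break
        peptide.append(amino_acid)
    return peptide

print(translate("AUGUUUGGCUAA"))  # ['Met', 'Phe', 'Gly']
```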
Transfer RNA (tRNA) is a small RNA chain of about 80 nucleotides that transfers a specific amino acid to a growing polypeptide chain at the ribosomal site of protein synthesis during translation. It has sites for amino acid attachment and an anticodon region for codon recognition that binds to a specific sequence on the messenger RNA chain through hydrogen bonding.
Ribosomal RNA (rRNA) is the catalytic component of the ribosomes. Eukaryotic ribosomes contain four different rRNA molecules: 18S, 5.8S, 28S and 5S rRNA. Three of the rRNA molecules are synthesized in the nucleolus, and one is synthesized elsewhere. In the cytoplasm, ribosomal RNA and protein combine to form a nucleoprotein called a ribosome. The ribosome binds mRNA and carries out protein synthesis. Several ribosomes may be attached to a single mRNA at any time. Nearly all the RNA found in a typical eukaryotic cell is rRNA.
Transfer-messenger RNA (tmRNA) is found in many bacteria and plastids. It tags proteins encoded by mRNAs that lack stop codons for degradation and prevents the ribosome from stalling.
The earliest known regulators of gene expression were proteins known as repressors and activators, regulators with specific short binding sites within enhancer regions near the genes to be regulated. More recently, RNAs have been found to regulate genes as well. There are several kinds of RNA-dependent processes in eukaryotes regulating the expression of genes at various points, such as RNAi repressing genes post-transcriptionally, long non-coding RNAs shutting down blocks of chromatin epigenetically, and enhancer RNAs inducing increased gene expression. In addition to these mechanisms in eukaryotes, both bacteria and archaea have been found to use regulatory RNAs extensively. Bacterial small RNA and the CRISPR system are examples of such prokaryotic regulatory RNA systems. Fire and Mello were awarded the 2006 Nobel Prize in Physiology or Medicine for discovering RNA interference, in which specific short RNA molecules base-pair with mRNAs and silence them.
Post-transcriptional expression levels of many genes can be controlled by RNA interference, in which miRNAs, specific short RNA molecules, pair with mRNA regions and target them for degradation. This antisense-based process involves steps that first process the RNA so that it can base-pair with a region of its target mRNAs. Once the base pairing occurs, other proteins direct the mRNA to be destroyed by nucleases. Fire and Mello were awarded the 2006 Nobel Prize in Physiology or Medicine for this discovery.
Next to be linked to regulation were Xist and other long noncoding RNAs associated with X chromosome inactivation. Their roles, at first mysterious, were shown by Jeannie T. Lee and others to be the silencing of blocks of chromatin via recruitment of the Polycomb complex so that messenger RNA could not be transcribed from them. Additional lncRNAs, currently defined as RNAs of more than 200 base pairs that do not appear to have coding potential, have been found associated with regulation of stem cell pluripotency and cell division.
The third major group of regulatory RNAs is called enhancer RNAs. It is not clear at present whether they are a unique category of RNAs of various lengths or constitute a distinct subset of lncRNAs. In any case, they are transcribed from enhancers, which are known regulatory sites in the DNA near genes they regulate. They up-regulate the transcription of the gene(s) under control of the enhancer from which they are transcribed.
At first, regulatory RNA was thought to be a eukaryotic phenomenon, a part of the explanation for why so much more transcription in higher organisms was seen than had been predicted. But as soon as researchers began to look for possible RNA regulators in bacteria, they turned up there as well, termed small RNAs (sRNA). Currently, the ubiquitous nature of systems of RNA regulation of genes has been discussed as support for the RNA World theory. Bacterial small RNAs generally act via antisense pairing with mRNA to down-regulate its translation, either by affecting stability or by affecting cis-binding ability. Riboswitches have also been discovered. They are cis-acting regulatory RNA sequences acting allosterically: they change shape when they bind metabolites, gaining or losing the ability to form structures that regulate expression of the genes in which they reside, typically by terminating transcription or blocking translation.
Archaea also have systems of regulatory RNA. The CRISPR system, which has recently been used to edit DNA "in situ", acts via regulatory RNAs in archaea and bacteria to provide protection against virus invaders.
Many RNAs are involved in modifying other RNAs.
Introns are spliced out of pre-mRNA by spliceosomes, which contain several small nuclear RNAs (snRNA), or the introns can be ribozymes that are spliced by themselves.
RNA can also be altered by having its nucleotides modified to nucleotides other than A, C, G and U.
In eukaryotes, modifications of RNA nucleotides are in general directed by small nucleolar RNAs (snoRNA; 60–300 nt), found in the nucleolus and Cajal bodies. snoRNAs associate with enzymes and guide them to a spot on an RNA by base-pairing to that RNA. These enzymes then perform the nucleotide modification. rRNAs and tRNAs are extensively modified, but snRNAs and mRNAs can also be the target of base modification. RNA can also be methylated.
Like DNA, RNA can carry genetic information. RNA viruses have genomes composed of RNA that encodes a number of proteins. The viral genome is replicated by some of those proteins, while other proteins protect the genome as the virus particle moves to a new host cell. Viroids are another group of pathogens, but they consist only of RNA, do not encode any protein and are replicated by a host plant cell's polymerase.
Reverse transcribing viruses replicate their genomes by reverse transcribing DNA copies from their RNA; these DNA copies are then transcribed to new RNA. Retrotransposons also spread by being copied from DNA to RNA and back into DNA, and telomerase contains an RNA that is used as a template for building the ends of eukaryotic chromosomes.
Double-stranded RNA (dsRNA) is RNA with two complementary strands, similar to the DNA found in all cells, but with the replacement of thymine by uracil. dsRNA forms the genetic material of some viruses (double-stranded RNA viruses). Double-stranded RNA, such as viral RNA or siRNA, can trigger RNA interference in eukaryotes, as well as interferon response in vertebrates.
In the late 1970s, it was shown that there is a single-stranded, covalently closed (i.e., circular) form of RNA expressed throughout the animal and plant kingdoms (see circRNA). circRNAs are thought to arise via a "back-splice" reaction in which the spliceosome joins a downstream donor to an upstream acceptor splice site. So far the function of circRNAs is largely unknown, although for a few examples a microRNA sponging activity has been demonstrated.
Research on RNA has led to many important biological discoveries and numerous Nobel Prizes. Nucleic acids were discovered in 1868 by Friedrich Miescher, who called the material 'nuclein' since it was found in the nucleus. It was later discovered that prokaryotic cells, which do not have a nucleus, also contain nucleic acids. The role of RNA in protein synthesis was suspected as early as 1939. Severo Ochoa won the 1959 Nobel Prize in Medicine (shared with Arthur Kornberg) after he discovered an enzyme that can synthesize RNA in the laboratory. However, the enzyme discovered by Ochoa (polynucleotide phosphorylase) was later shown to be responsible for RNA degradation, not RNA synthesis. In 1956 Alex Rich and David Davies hybridized two separate strands of RNA to form the first crystal of RNA whose structure could be determined by X-ray crystallography.
The sequence of the 77 nucleotides of a yeast tRNA was found by Robert W. Holley in 1965, winning Holley the 1968 Nobel Prize in Medicine (shared with Har Gobind Khorana and Marshall Nirenberg).
In the early 1970s, retroviruses and reverse transcriptase were discovered, showing for the first time that enzymes could copy RNA into DNA (the opposite of the usual route for transmission of genetic information). For this work, David Baltimore, Renato Dulbecco and Howard Temin were awarded a Nobel Prize in 1975.
In 1976, Walter Fiers and his team determined the first complete nucleotide sequence of an RNA virus genome, that of bacteriophage MS2.
In 1977, introns and RNA splicing were discovered in both mammalian viruses and in cellular genes, resulting in the 1993 Nobel Prize being awarded to Philip Sharp and Richard Roberts.
Catalytic RNA molecules (ribozymes) were discovered in the early 1980s, leading to a 1989 Nobel award to Thomas Cech and Sidney Altman. In 1990, it was found in "Petunia" that introduced genes can silence the plant's own similar genes, now known to be a result of RNA interference.
At about the same time, 22 nt long RNAs, now called microRNAs, were found to have a role in the development of "C. elegans".
Studies on RNA interference earned a Nobel Prize for Andrew Fire and Craig Mello in 2006, and another Nobel Prize was awarded the same year to Roger Kornberg for studies of the transcription of RNA. The discovery of gene regulatory RNAs has led to attempts to develop drugs made of RNA, such as siRNA, to silence genes. Adding to the Nobel Prizes awarded for research on RNA, the 2009 Nobel Prize in Chemistry was awarded to Venki Ramakrishnan, Tom Steitz, and Ada Yonath for elucidating the atomic structure of the ribosome.
In 1967, Carl Woese hypothesized that RNA might be catalytic and suggested that the earliest forms of life (self-replicating molecules) could have relied on RNA both to carry genetic information and to catalyze biochemical reactions—an RNA world.
In March 2015, complex DNA and RNA nucleotides, including uracil, cytosine and thymine, were reportedly formed in the laboratory under outer space conditions, using starter chemicals, such as pyrimidine, an organic compound commonly found in meteorites. Pyrimidine, like polycyclic aromatic hydrocarbons (PAHs), is one of the most carbon-rich compounds found in the Universe and may have been formed in red giants or in interstellar dust and gas clouds. | https://en.wikipedia.org/wiki?curid=25758 |
Russian Revolution
The Russian Revolution was a period of political and social revolution across the territory of the Russian Empire, commencing with the abolition of the monarchy in 1917, and concluding in 1923 after the Bolshevik establishment of the Soviet Union at the end of the Civil War.
It began during the First World War, with the February Revolution that was focused in and around Petrograd (now Saint Petersburg), the capital of Russia at that time. The revolution erupted in the context of Russia's major military losses during the War, which resulted in much of the Russian Army being ready to mutiny. In the chaos, members of the Duma, Russia's parliament, assumed control of the country, forming the Russian Provisional Government. This was dominated by the interests of large capitalists and the noble aristocracy. The army leadership felt they did not have the means to suppress the revolution, and Emperor Nicholas II abdicated his throne. Grassroots community assemblies called 'Soviets', which were dominated by soldiers and the urban industrial working class, initially permitted the Provisional Government to rule, but insisted on a prerogative to influence the government and control various militias.
A period of dual power ensued, during which the Provisional Government held state power while the national network of Soviets, led by socialists, had the allegiance of the lower classes and, increasingly, the left-leaning urban middle class. During this chaotic period, there were frequent mutinies, protests and strikes. Many socialist political organizations were engaged in daily struggle and vied for influence within the Duma and the Soviets, central among which were the Bolsheviks ("Ones of the Majority") led by Vladimir Lenin. He campaigned for an immediate end of Russia's participation in the War, granting land to the peasants, and providing bread to the urban workers. When the Provisional Government chose to continue fighting the war with Germany, the Bolsheviks and other socialist factions exploited the virtually universal disdain towards the war effort as justification to advance the revolution further. The Bolsheviks turned workers' militias under their control into the Red Guards (later the Red Army), over which they exerted substantial control.
The situation climaxed with the October Revolution in 1917, a Bolshevik-led armed insurrection by workers and soldiers in Petrograd that successfully overthrew the Provisional Government, transferring all its authority to the Soviets. They soon relocated the national capital to Moscow. The Bolsheviks had secured a strong base of support within the Soviets and, as the supreme governing party, established a federal government dedicated to reorganizing the former empire into the world's first socialist state, to practice Soviet democracy on a national and international scale. Their promise to end Russia's participation in the First World War was fulfilled when the Bolshevik leaders signed the Treaty of Brest-Litovsk with Germany in March 1918. To further secure the new state, the Bolsheviks established the Cheka, a secret police that functioned as a revolutionary security service to weed out, execute, or punish those considered to be "enemies of the people" in campaigns consciously modeled on those of the French Revolution.
Soon after, civil war erupted among the "Reds" (Bolsheviks), the "Whites" (counter-revolutionaries), the independence movements, and other socialist factions opposed to the Bolsheviks. It continued for several years, during which the Bolsheviks defeated both the Whites and all rival socialists. Victorious, they reconstituted themselves as the Communist Party. They also established Soviet power in the newly independent republics of Armenia, Azerbaijan, Belarus, Georgia and Ukraine. They brought these jurisdictions into unification under the Union of Soviet Socialist Republics (USSR) in 1922. While many notable historical events occurred in Moscow and Petrograd, there were also major changes in cities throughout the state, and among national minorities throughout the empire and in the rural areas, where peasants took over and redistributed land.
The Russian Revolution of 1905 was said to be a major factor contributing to the Revolutions of 1917. The events of Bloody Sunday triggered nationwide protests and soldier mutinies. A council of workers called the St. Petersburg Soviet was created in this chaos. While the 1905 Revolution was ultimately crushed, and the leaders of the St. Petersburg Soviet were arrested, this laid the groundwork for the later Petrograd Soviet and other revolutionary movements during the lead-up to 1917. The 1905 Revolution also led to the creation of a Duma (parliament), which would later form the Provisional Government following February 1917.
The outbreak of World War I prompted general outcry directed at Tsar Nicholas II and the Romanov family. While the nation was initially engaged in a wave of nationalism, increasing numbers of defeats and poor conditions soon flipped the nation's opinion. The Tsar attempted to remedy the situation by taking personal control of the army in 1915. This proved to be extremely disadvantageous for the Tsar, as he was now held personally responsible for Russia's continuing defeats and losses. In addition, Tsarina Alexandra, left to rule in his place while the Tsar commanded at the front, was German-born, leading to suspicion of collusion that was only exacerbated by rumors relating to her relationship with the controversial mystic Grigori Rasputin. Rasputin's influence led to disastrous ministerial appointments and corruption, resulting in a worsening of conditions within Russia. This led to general dissatisfaction with the Romanov family, and was a major factor contributing to the retaliation of the Russian Communists against the royal family.
After the entry of the Ottoman Empire on the side of the Central Powers in October 1914, Russia was deprived of a major trade route through the Dardanelles, which further contributed to the economic crisis in which Russia became incapable of providing munitions to its army in the years leading up to 1917. However, the problems were primarily administrative, not industrial, as Germany was able to produce great amounts of munitions whilst constantly fighting on two major battlefronts.
The conditions during the war resulted in a devastating loss of morale within the Russian army and the general population of Russia itself. This was particularly apparent in the cities, owing to a lack of food caused by the disruption of agriculture. Food scarcity had become a considerable problem in Russia, but the cause of this did not lie in any failure of the harvests, which had not been significantly altered during wartime. The indirect reason was that the government, in order to finance the war, printed millions of ruble notes, and by 1917, inflation had made prices increase up to four times what they had been in 1914. Farmers were consequently faced with a higher cost of living, but with little increase in income. As a result, they tended to hoard their grain and to revert to subsistence farming. Thus the cities were constantly short of food. At the same time, rising prices led to demands for higher wages in the factories, and in January and February 1916, revolutionary propaganda, in part aided by German funds, led to widespread strikes. This resulted in growing criticism of the government, including increased participation of workers in revolutionary parties.
Liberal parties too had an increased platform to voice their complaints, as the initial fervor of the war resulted in the Tsarist government creating a variety of political organizations. In July 1915, a Central War Industries Committee was established under the chairmanship of a prominent Octobrist, Alexander Guchkov (1862–1936), including ten workers' representatives. The Petrograd Mensheviks agreed to join despite the objections of their leaders abroad. All this activity gave renewed encouragement to political ambitions, and in September 1915, a combination of Octobrists and Kadets in the Duma demanded the forming of a responsible government, which the Tsar rejected.
All these factors gave rise to a sharp loss of confidence in the regime, even within the ruling class, which grew throughout the war. Early in 1916, Guchkov discussed with senior army officers and members of the Central War Industries Committee a possible coup to force the abdication of the Tsar. In December, a small group of nobles assassinated Rasputin, and in January 1917 the Tsar's cousin, Grand Duke Nicholas, was asked indirectly by Prince Lvov whether he would be prepared to take over the throne from his nephew, Tsar Nicholas II. None of these incidents was in itself the immediate cause of the February Revolution, but they do help to explain why the monarchy survived only a few days after it had broken out.
Meanwhile, Socialist Revolutionary leaders in exile, many of them living in Switzerland, had been the glum spectators of the collapse of international socialist solidarity. French and German Social Democrats had voted in favour of their respective governments' war efforts. Georgi Plekhanov in Paris had adopted a violently anti-German stand, while Alexander Parvus supported the German war effort as the best means of ensuring a revolution in Russia. The Mensheviks largely maintained that Russia had the right to defend herself against Germany, although Julius Martov (a prominent Menshevik), now on the left of his group, demanded an end to the war and a settlement on the basis of national self-determination, with no annexations or indemnities.
It was these views of Martov that predominated in a manifesto drawn up by Leon Trotsky (at the time a Menshevik) at a conference in Zimmerwald, attended by 35 Socialist leaders in September 1915. Inevitably Vladimir Lenin, supported by Zinoviev and Radek, strongly contested them. Their attitudes became known as the Zimmerwald Left. Lenin rejected both the defence of Russia and the cry for peace. Since the autumn of 1914, he had insisted that "from the standpoint of the working class and of the labouring masses the lesser evil would be the defeat of the Tsarist Monarchy"; the war must be turned into a civil war of the proletarian soldiers against their own governments, and if a proletarian victory should emerge from this in Russia, then their duty would be to wage a revolutionary war for the liberation of the masses throughout Europe.
An elementary theory of property, believed by many peasants, was that land should belong to those who work on it. At the same time, peasant life and culture was changing constantly. Change was facilitated by the physical movement of growing numbers of peasant villagers who migrated to and from industrial and urban environments, but also by the introduction of city culture into the village through material goods, the press, and word of mouth.
Workers also had good reasons for discontent: overcrowded housing with often deplorable sanitary conditions, long hours at work (on the eve of the war, a 10-hour workday six days a week was the average and many were working 11–12 hours a day by 1916), constant risk of injury and death from poor safety and sanitary conditions, harsh discipline (not only rules and fines, but foremen's fists), and inadequate wages (made worse after 1914 by steep wartime increases in the cost of living). At the same time, urban industrial life had its benefits, though these could be just as dangerous (in terms of social and political stability) as the hardships. There were many encouragements to expect more from life. Acquiring new skills gave many workers a sense of self-respect and confidence, heightening expectations and desires. Living in cities, workers encountered material goods they had never seen in villages. Most importantly, workers living in cities were exposed to new ideas about the social and political order.
The social causes of the Russian Revolution can be derived from centuries of oppression of the lower classes by the Tsarist regime and Nicholas's failures in World War I. While rural agrarian peasants had been emancipated from serfdom in 1861, they still resented paying redemption payments to the state and demanded communal tenure of the land they worked. The problem was further compounded by the failure of Sergei Witte's land reforms of the early 20th century. Increasing peasant disturbances, and sometimes actual revolts, occurred with the goal of securing ownership of the land they worked. Russia consisted mainly of poor farming peasants, with substantial inequality of land ownership: 1.5% of the population owned 25% of the land.
The rapid industrialization of Russia also resulted in urban overcrowding and poor conditions for urban industrial workers (as mentioned above). Between 1890 and 1910, the population of the capital, Saint Petersburg, swelled from 1,033,600 to 1,905,600, with Moscow experiencing similar growth. This created a new 'proletariat' which, due to being crowded together in the cities, was much more likely to protest and go on strike than the peasantry had been in previous times. In one 1904 survey, it was found that an average of 16 people shared each apartment in Saint Petersburg, with six people per room. There was also no running water, and piles of human waste were a threat to the health of the workers. The poor conditions only aggravated the situation, with the number of strikes and incidents of public disorder rapidly increasing in the years shortly before World War I. Because of late industrialization, Russia's workers were highly concentrated. By 1914, 40% of Russian workers were employed in factories of 1,000+ workers (32% in 1901). 42% worked in 100–1,000 worker enterprises, 18% in 1–100 worker businesses (in the US, 1914, the figures were 18, 47 and 35 respectively).
World War I added to the chaos. Conscription across Russia resulted in unwilling citizens being sent off to war. The vast demand for factory production of war supplies and workers resulted in many more labor riots and strikes. Conscription stripped skilled workers from the cities, who had to be replaced with unskilled peasants. When famine began to hit due to the poor railway system, workers abandoned the cities in droves seeking food. Finally, the soldiers themselves, who suffered from a lack of equipment and protection from the elements, began to turn against the Tsar. This was mainly because, as the war progressed, many of the officers who were loyal to the Tsar were killed, being replaced by discontented conscripts from the major cities who had little loyalty to the Tsar.
Many sections of the country had reason to be dissatisfied with the existing autocracy. Nicholas II was a deeply conservative ruler and maintained a strict authoritarian system. Individuals and society in general were expected to show self-restraint, devotion to community, deference to the social hierarchy and a sense of duty to the country. Religious faith helped bind all of these tenets together as a source of comfort and reassurance in the face of difficult conditions and as a means of political authority exercised through the clergy. Perhaps more than any other modern monarch, Nicholas II attached his fate and the future of his dynasty to the notion of the ruler as a saintly and infallible father to his people.
This vision of the Romanov monarchy left him unaware of the state of his country. With a firm belief that his power to rule was granted by Divine Right, Nicholas assumed that the Russian people were devoted to him with unquestioning loyalty. This ironclad belief rendered Nicholas unwilling to allow the progressive reforms that might have alleviated the suffering of the Russian people. Even after the 1905 Revolution spurred the Tsar to decree limited civil rights and democratic representation, he worked to limit even these liberties in order to preserve the ultimate authority of the crown.
Despite constant oppression, the desire of the people for democratic participation in government decisions was strong. Since the Age of Enlightenment, Russian intellectuals had promoted Enlightenment ideals such as the dignity of the individual and the rectitude of democratic representation. These ideals were championed most vociferously by Russia's liberals, although populists, Marxists, and anarchists also claimed to support democratic reforms. A growing opposition movement had begun to challenge the Romanov monarchy openly well before the turmoil of World War I.
Dissatisfaction with Russian autocracy culminated in the huge national upheaval that followed the Bloody Sunday massacre of January 1905, in which hundreds of unarmed protesters were shot by the Tsar's troops. Workers responded to the massacre with a crippling general strike, forcing Nicholas to put forth the October Manifesto, which established a democratically elected parliament (the State Duma). Although the Tsar accepted the 1906 Fundamental State Laws one year later, he subsequently dismissed the first two Dumas when they proved uncooperative. Unfulfilled hopes of democracy fueled revolutionary ideas and violent outbursts targeted at the monarchy.
One of the Tsar's principal rationales for risking war in 1914 was his desire to restore the prestige that Russia had lost amid the debacles of the Russo-Japanese War (1904-1905). Nicholas also sought to foster a greater sense of national unity with a war against a common and old enemy. The Russian Empire was an agglomeration of diverse ethnicities that had demonstrated significant signs of disunity in the years before the First World War. Nicholas believed in part that the shared peril and tribulation of a foreign war would mitigate the social unrest over the persistent issues of poverty, inequality, and inhumane working conditions. Instead of restoring Russia's political and military standing, World War I led to the slaughter of Russian troops and military defeats that undermined both the monarchy and Russian society to the point of collapse.
The outbreak of war in August 1914 initially served to quiet the prevalent social and political protests, focusing hostilities against a common external enemy, but this patriotic unity did not last long. As the war dragged on inconclusively, war-weariness gradually took its toll. Although many ordinary Russians joined anti-German demonstrations in the first few weeks of the war, hostility toward the Kaiser and the desire to defend their land and their lives did not necessarily translate into enthusiasm for the Tsar or the government.
Russia's first major battle of the war was a disaster; in the 1914 Battle of Tannenberg, over 30,000 Russian troops were killed or wounded and 90,000 captured, while Germany suffered just 12,000 casualties. However, Austro-Hungarian forces allied to Germany were driven back deep into the Galicia region by the end of the year. In the autumn of 1915, Nicholas had taken direct command of the army, personally overseeing Russia's main theatre of war and leaving his ambitious but incapable wife Alexandra in charge of the government. Reports of corruption and incompetence in the Imperial government began to emerge, and the growing influence of Grigori Rasputin in the Imperial family was widely resented.
In 1915, things took a critical turn for the worse when Germany shifted its focus of attack to the Eastern front. The superior German army – better led, better trained, and better supplied – was quite effective against the ill-equipped Russian forces, driving the Russians out of Galicia, as well as Russian Poland during the Gorlice–Tarnów Offensive campaign. By the end of October 1916, Russia had lost between 1,600,000 and 1,800,000 soldiers, with an additional 2,000,000 prisoners of war and 1,000,000 missing, all making up a total of nearly 5,000,000 men.
These staggering losses played a definite role in the mutinies and revolts that began to occur. In 1916, reports of fraternizing with the enemy began to circulate. Soldiers went hungry, lacked shoes, munitions, and even weapons. Rampant discontent lowered morale, which was further undermined by a series of military defeats.
Casualty rates were the most vivid sign of this disaster. By the end of 1914, only five months into the war, around 390,000 Russian men had lost their lives and nearly 1,000,000 were injured. Far sooner than expected, inadequately trained recruits were called for active duty, a process repeated throughout the war as staggering losses continued to mount. The officer class also saw remarkable changes, especially within the lower echelons, which were quickly filled with soldiers rising up through the ranks. These men, usually of peasant or working-class backgrounds, were to play a large role in the politicization of the troops in 1917.
The army quickly ran short of rifles and ammunition (as well as uniforms and food), and by mid-1915, men were being sent to the front bearing no arms. It was hoped that they could equip themselves with arms recovered from fallen soldiers, of both sides, on the battlefields. The soldiers did not feel valued; rather, they felt expendable.
By the spring of 1915, the army was in steady retreat, which was not always orderly; desertion, plundering, and chaotic flight were not uncommon. By 1916, however, the situation had improved in many respects. Russian troops stopped retreating, and there were even some modest successes in the offensives that were staged that year, albeit at great loss of life. Also, the problem of shortages was largely solved by a major effort to increase domestic production. Nevertheless, by the end of 1916, morale among soldiers was even worse than it had been during the great retreat of 1915. The fortunes of war may have improved, but the fact of the war, which continually took Russian lives, remained. The crisis in morale (as was argued by Allan Wildman, a leading historian of the Russian army in war and revolution) "was rooted fundamentally in the feeling of utter despair that the slaughter would ever end and that anything resembling victory could be achieved."
The war did not devastate only soldiers. By the end of 1915, there were manifold signs that the economy was breaking down under the heightened strain of wartime demand. The main problems were food shortages and rising prices. Inflation dragged incomes down at an alarmingly rapid rate, and shortages made it difficult for an individual to sustain oneself. These shortages were a problem especially in the capital, St. Petersburg, where distance from supplies and poor transportation networks made matters particularly bad. Shops closed early or entirely for lack of bread, sugar, meat, and other provisions, and lines lengthened massively for what remained. It became increasingly difficult both to afford food and to physically obtain it.
Strikes increased steadily from the middle of 1915, and so did crime, but, for the most part, people suffered and endured, scouring the city for food. Working-class women in St. Petersburg reportedly spent about forty hours a week in food lines, begging, turning to prostitution or crime, and tearing down wooden fences to keep stoves heated for warmth, all while continuing to resent the rich.
Government officials responsible for public order worried about how long people's patience would last. A report by the St. Petersburg branch of the security police, the Okhrana, in October 1916, warned bluntly of "the possibility in the near future of riots by the lower classes of the empire enraged by the burdens of daily existence."
Tsar Nicholas was blamed for all of these crises, and what little support he had left began to crumble. As discontent grew, the State Duma issued a warning to Nicholas in November 1916, stating that, inevitably, a terrible disaster would grip the country unless a constitutional form of government was put in place. Nicholas ignored these warnings and Russia's Tsarist regime collapsed a few months later during the February Revolution of 1917. One year later, the Tsar and his entire family were executed.
At the beginning of February, Petrograd workers began several strikes and demonstrations. On , workers at Putilov, Petrograd's largest industrial plant, announced a strike.
The next day, a series of meetings and rallies were held for International Women's Day, which gradually turned into economic and political gatherings. Demonstrations were organised to demand bread, and these were supported by the industrial working force who considered them a reason for continuing the strikes. The women workers marched to nearby factories bringing out over 50,000 workers on strike. By , virtually every industrial enterprise in Petrograd had been shut down, together with many commercial and service enterprises. Students, white-collar workers, and teachers joined the workers in the streets and at public meetings.
To quell the riots, the Tsar looked to the army. At least 180,000 troops were available in the capital, but most were either untrained or injured. Historian Ian Beckett suggests around 12,000 could be regarded as reliable, but even these proved reluctant to move in on the crowd, since it included so many women. It was for this reason that on , when the Tsar ordered the army to suppress the rioting by force, troops began to revolt. Although few actively joined the rioting, many officers were either shot or went into hiding; the ability of the garrison to hold back the protests was all but nullified, symbols of the Tsarist regime were rapidly torn down around the city, and governmental authority in the capital collapsed – not helped by the fact that Nicholas had prorogued the Duma that morning, leaving it with no legal authority to act. The response of the Duma, urged on by the liberal bloc, was to establish a Temporary Committee to restore law and order; meanwhile, the socialist parties established the Petrograd Soviet to represent workers and soldiers. The remaining loyal units switched allegiance the next day.
The Tsar directed the royal train back towards Petrograd, but it was stopped on , by a group of revolutionaries at Malaya Vishera. When the Tsar finally arrived in Pskov, the Army Chief Nikolai Ruzsky and the Duma deputies Alexander Guchkov and Vasily Shulgin suggested in unison that he abdicate the throne. He did so on , on behalf of himself, and then, having taken advice, on behalf of his son, the Tsarevich. Nicholas nominated his brother, the Grand Duke Michael Alexandrovich, to succeed him. But the Grand Duke realised that he would have little support as ruler, so he declined the crown on , stating that he would take it only if that was the consensus of democratic action. Six days later, Nicholas, no longer Tsar and addressed with contempt by the sentries as "Nicholas Romanov", was reunited with his family at the Alexander Palace at Tsarskoye Selo. He was placed under house arrest with his family by the Provisional Government.
The immediate effect of the February Revolution was a widespread atmosphere of elation and excitement in Petrograd. On , a provisional government was announced. The center-left was well represented, and the government was initially chaired by a liberal aristocrat, Prince Georgy Yevgenievich Lvov, a member of the Constitutional Democratic Party (KD). The socialists had formed their rival body, the Petrograd Soviet (or workers' council) four days earlier. The Petrograd Soviet and the Provisional Government competed for power over Russia.
The effective power of the Provisional Government was challenged by the authority of an institution that claimed to represent the will of workers and soldiers and could, in fact, mobilize and control these groups during the early months of the revolution – the Petrograd Soviet Council of Workers' Deputies. The model for the Soviets was the workers' councils that had been established in scores of Russian cities during the 1905 Revolution. In February 1917, striking workers elected deputies to represent them, and socialist activists began organizing a citywide council to unite these deputies with representatives of the socialist parties. On 27 February, socialist Duma deputies, mainly Mensheviks and Socialist Revolutionaries, took the lead in organizing a citywide council. The Petrograd Soviet met in the Tauride Palace, the same building where the new government was taking shape.
The leaders of the Petrograd Soviet believed that they represented particular classes of the population, not the whole nation. They also believed Russia was not ready for socialism. They viewed their role as limited to pressuring hesitant "bourgeoisie" to rule and to introduce extensive democratic reforms in Russia (the replacement of the monarchy by a republic, guaranteed civil rights, a democratic police and army, abolition of religious and ethnic discrimination, preparation of elections to a constituent assembly, and so on). They met in the same building as the emerging Provisional Government not to compete with the Duma Committee for state power, but to best exert pressure on the new government, to act, in other words, as a popular democratic lobby.
The relationship between these two major powers was complex from the beginning and would shape the politics of 1917. The representatives of the Provisional Government agreed to "take into account the opinions of the Soviet of Workers' Deputies", though they were also determined to prevent "interference in the actions of the government", which would create "an unacceptable situation of dual power". In fact, this was precisely what was being created, though this "dual power" (dvoevlastie) was the result less of the actions or attitudes of the leaders of these two institutions than of actions outside their control, especially the ongoing social movement taking place on the streets of Russia's cities, factories, shops, barracks, villages, and in the trenches.
A series of political crises – see the chronology below – in the relationship between population and government, and between the Provisional Government and the Soviets (which developed into a nationwide movement with a national leadership, the All-Russian Central Executive Committee of Soviets (VTsIK)), undermined the authority of the Provisional Government but also that of the moderate socialist leaders of the Soviets. Although the Soviet leadership initially refused to participate in the "bourgeois" Provisional Government, Alexander Kerensky, a young, popular lawyer and a member of the Socialist Revolutionary Party (SRP), agreed to join the new cabinet and became an increasingly central figure in the government, eventually taking leadership of the Provisional Government. As minister of war and later Prime Minister, Kerensky promoted freedom of speech, released thousands of political prisoners, and continued the war effort, even organizing another offensive (which, however, was no more successful than its predecessors). Nevertheless, Kerensky still faced several great challenges, highlighted by the soldiers, urban workers, and peasants, who claimed that they had gained nothing by the revolution.
The political group that proved most troublesome for Kerensky, and would eventually overthrow him, was the Bolshevik Party, led by Vladimir Lenin. Lenin had been living in exile in neutral Switzerland and, due to democratization of politics after the February Revolution, which legalized formerly banned political parties, he perceived the opportunity for his Marxist revolution. Although return to Russia had become a possibility, the war made it logistically difficult. Eventually, German officials arranged for Lenin to pass through their territory, hoping that his activities would weaken Russia or even – if the Bolsheviks came to power – lead to Russia's withdrawal from the war. Lenin and his associates, however, had to agree to travel to Russia in a sealed train: Germany would not take the chance that he would foment revolution in Germany. After passing through the front, he arrived in Petrograd in April 1917.
On the way to Russia, Lenin prepared the April Theses, which outlined central Bolshevik policies. These included calls for the Soviets to take power (as seen in the slogan "all power to the Soviets") and denunciations of the liberals and Socialist Revolutionaries in the Provisional Government, with whom co-operation was forbidden. Many Bolsheviks, however, had supported the Provisional Government, including Lev Kamenev.
With Lenin's arrival, the popularity of the Bolsheviks increased steadily. Over the course of the spring, public dissatisfaction with the Provisional Government and the war, in particular among workers, soldiers and peasants, pushed these groups to radical parties. Despite growing support for the Bolsheviks, buoyed by maxims that called most famously for "all power to the Soviets", the party held very little real power in the moderate-dominated Petrograd Soviet. In fact, historians such as Sheila Fitzpatrick have asserted that Lenin's exhortations for the Soviet Council to take power were intended to arouse indignation both with the Provisional Government, whose policies were viewed as conservative, and with the Soviets themselves, which were viewed as subservient to the conservative government. By some other historians' accounts, Lenin and his followers were unprepared for how their groundswell of support, especially among influential worker and soldier groups, would translate into real power in the summer of 1917.
On 18 June, the Provisional Government launched an attack against Germany that failed miserably. Soon after, the government ordered soldiers to go to the front, reneging on a promise. The soldiers refused to follow the new orders. The arrival of radical Kronstadt sailors – who had tried and executed many officers, including one admiral – further fueled the growing revolutionary atmosphere. Sailors and soldiers, along with Petrograd workers, took to the streets in violent protest, calling for "all power to the Soviets". The revolt, however, was disowned by Lenin and the Bolshevik leaders and dissipated within a few days. In the aftermath, Lenin fled to Finland under threat of arrest while Trotsky, among other prominent Bolsheviks, was arrested. The July Days confirmed the popularity of the anti-war, radical Bolsheviks, but their unpreparedness at the moment of revolt was an embarrassing gaffe that lost them support among their main constituent groups: soldiers and workers.
The Bolshevik failure in the July Days proved temporary. The Bolsheviks had undergone a spectacular growth in membership. Whereas in February 1917 the Bolsheviks numbered only 24,000 members, by September 1917 there were 200,000 members of the Bolshevik faction. Previously in the minority in the two leading cities of Russia, St. Petersburg and Moscow, behind the Mensheviks and the Socialist Revolutionaries, by September the Bolsheviks were in the majority in both cities. Furthermore, the Bolshevik-controlled Moscow Regional Bureau of the Party also controlled the Party organizations of the 13 provinces around Moscow. These 13 provinces held 37% of Russia's population and 20% of the membership of the Bolshevik faction.
In August, poor and misleading communication led General Lavr Kornilov, the recently appointed Supreme Commander of Russian military forces, to believe that the Petrograd government had already been captured by radicals, or was in serious danger thereof. In response, he ordered troops to Petrograd to pacify the city. To secure his position, Kerensky had to ask for Bolshevik assistance. He also sought help from the Petrograd Soviet, which called upon armed Red Guards to "defend the revolution". The Kornilov Affair failed largely due to the efforts of the Bolsheviks, whose influence over railroad and telegraph workers proved vital in stopping the movement of troops. With his coup failing, Kornilov surrendered and was relieved of his position. The Bolsheviks' role in stopping the attempted coup further strengthened their position.
In early September, the Petrograd Soviet freed all jailed Bolsheviks and Trotsky became chairman of the Petrograd Soviet. Growing numbers of socialists and lower-class Russians viewed the government less and less as a force in support of their needs and interests. The Bolsheviks benefited as the only major organized opposition party that had refused to compromise with the Provisional Government, and they benefited from growing frustration and even disgust with other parties, such as the Mensheviks and Socialist Revolutionaries, who stubbornly refused to break with the idea of national unity across all classes.
In Finland, Lenin had worked on his book "State and Revolution" and continued to lead his party, writing newspaper articles and policy decrees. By October, he returned to Petrograd (present-day St. Petersburg), aware that the increasingly radical city presented him no legal danger and a second opportunity for revolution. Recognising the strength of the Bolsheviks, Lenin began pressing for the immediate overthrow of the Kerensky government by the Bolsheviks. Lenin was of the opinion that taking power should occur in both St. Petersburg and Moscow simultaneously, parenthetically stating that it made no difference which city rose up first, but expressing his opinion that Moscow may well rise up first. The Bolshevik Central Committee drafted a resolution, calling for the dissolution of the Provisional Government in favor of the Petrograd Soviet. The resolution was passed 10–2 (Lev Kamenev and Grigory Zinoviev prominently dissenting) promoting the October Revolution.
The October Revolution took place on the night leading into Wednesday 7 November 1917 according to the modern Gregorian calendar (the night into Wednesday 25 October according to the Julian calendar then in use in tsarist Russia) and was organized by the Bolshevik party. Lenin had no direct role in the uprising itself; for his personal security he remained in hiding. The insurrection was organized by the Revolutionary Military Committee established by the Bolshevik party, with Leon Trotsky as its chairman. Lenin nevertheless played a crucial role in the debate within the Bolshevik leadership over staging a revolutionary insurrection, as the party had won a majority in the soviets in the autumn of 1917. An ally, the left faction of the Socialist-Revolutionary Party, which had huge support among the peasants who opposed Russia's participation in the war, supported the slogan 'All power to the Soviets'.
Liberal and monarchist forces, loosely organized into the White Army, went to war against the Bolsheviks' Red Army in a series of battles that would become known as the Russian Civil War. The fighting did not begin in 1917; the Civil War started in early 1918, with domestic anti-Bolshevik forces confronting the nascent Red Army. In the autumn of 1918, the Allied countries chose to send troops to support the "Whites", and supplies of weapons, ammunition and logistical equipment were sent from the main Western countries, but the intervention was not at all coordinated. Germany did not participate in the civil war, as it had surrendered to the Allies.
Of more interest is the anarchist movement of Nestor Makhno in Ukraine, which fought against the White generals, helped save Moscow in 1919 from an attack by General Denikin, and in November 1920 helped the Bolsheviks defeat General Wrangel. On 26 November 1920, however, the Bolshevik government invited Makhno's headquarters staff and many of his subordinate commanders to a Red Army planning conference in Moscow, only to have them imprisoned and executed. By that time, a decision to eliminate the Makhno movement had already been made. Makhno himself escaped the Red Army's pursuit, and in August 1921 he and 77 of his followers crossed into Romania and travelled on through Poland and Germany to France, where Makhno died on 25 July 1934.
The provisional government, in its second and third coalitions, was led by the right-wing faction of the Socialist-Revolutionary Party (SR). This non-elected provisional government responded to the revolutionary situation and the growing anti-war mood by postponing elections to a constituent assembly. The October Revolution, however, forced the political parties behind the newly dissolved provisional government to move, and move fast, towards immediate elections. Everything happened so quickly that the left SR faction did not have time to organize and be represented on the ballots of the SR party, which had been part of the coalition in the provisional government. This non-elected government had supported continuation of the war on the side of the Allied forces. The elections to the Constituent Assembly on 25 November 1917 therefore did not mirror the true political situation among the peasants, even if we do not know what the outcome would have been had the anti-war left SR faction had a fair chance to challenge the party leadership. In the elections the Bolshevik party received 25% of the votes and the Socialist-Revolutionaries as much as 58%. It is possible that the left SRs could have won more than 25% of the votes and thereby legitimized the October Revolution, but we can only guess.
Lenin did not believe, as Karl Marx had, that a socialist revolution presupposed a developed capitalist economy and therefore could not occur in a semi-capitalist country such as Russia. In his view, Russia was backward, but not that backward, with a working class of some 4–5% of the population.
Though Lenin was the leader of the Bolshevik Party, it has been argued that since Lenin was not present during the actual takeover of the Winter Palace, it was really Trotsky's organization and direction that led the revolution, merely spurred by the motivation Lenin instigated within his party. Critics on the Right have long argued that the financial and logistical assistance of German intelligence via its key agent, Alexander Parvus, was a key component as well, though historians are divided, since there is little evidence supporting that claim.
Soviet membership was initially freely elected, but many members of the Socialist Revolutionary Party, anarchists, and other leftists created opposition to the Bolsheviks through the Soviets themselves. The elections to the Russian Constituent Assembly took place 25 November 1917. The Bolsheviks gained 25% of the vote. When it became clear that the Bolsheviks had little support outside of the industrialized areas of Saint Petersburg and Moscow, they simply barred non-Bolsheviks from membership in the Soviets. The Bolsheviks dissolved the Constituent Assembly in January 1918.
The Russian Civil War, which broke out in 1918 shortly after the October Revolution, resulted in the deaths and suffering of millions of people regardless of their political orientation. The war was fought mainly between the Red Army ("Reds"), consisting of the uprising majority led by the Bolshevik minority, and the "Whites" – army officers and Cossacks, the "bourgeoisie", and political groups ranging from the far Right, to the Socialist Revolutionaries who opposed the drastic restructuring championed by the Bolsheviks following the collapse of the Provisional Government, to the Soviets (under clear Bolshevik dominance). The Whites had backing from other countries such as Great Britain, France, the United States, and Japan, while the Reds possessed internal support, which proved much more effective. Though the Allied nations, intervening from outside, provided substantial military aid to the loosely knit anti-Bolshevik forces, these forces were ultimately defeated.
The Bolsheviks first assumed power in Petrograd, expanding their rule outwards. They eventually reached the far-eastern Siberian coast of Russia at Vladivostok four years after the war began, an occupation that is believed to have ended all significant military campaigns in the nation. Less than one year later, the last area controlled by the White Army, the Ayano-Maysky District, directly to the north of the Krai containing Vladivostok, was given up when General Anatoly Pepelyayev capitulated in 1923.
Several revolts were initiated against the Bolsheviks and their army near the end of the war, notably the Kronstadt Rebellion. This was a naval mutiny engineered by Soviet Baltic sailors, former Red Army soldiers, and the people of Kronstadt. The armed uprising was directed against the Bolshevik economic policies to which farmers were subjected, including the seizure of grain crops by the Communists, all of which had produced large-scale discontent. When delegates representing the Kronstadt sailors arrived at Petrograd for negotiations, they raised 15 demands primarily pertaining to the Russian right to freedom. The Government firmly denounced the rebellion and labelled the demands as reminiscent of the Socialist Revolutionaries, a political party that had been popular among Soviets before Lenin but refused to cooperate with the Bolshevik Army. The Government then responded with an armed suppression of the revolt, suffering ten thousand casualties before entering the city of Kronstadt. This ended the rebellion fairly quickly, causing many of the rebels to flee and seek political exile.
During the Civil War, Nestor Makhno led a Ukrainian anarchist movement, the Black Army, which allied with the Bolsheviks three times, with one of the two powers ending the alliance each time. However, a Bolshevik force under Mikhail Frunze destroyed the Makhnovist movement when the Makhnovists refused to merge into the Red Army. In addition, the so-called "Green Army" (peasants defending their property against the opposing forces) played a secondary role in the war, mainly in Ukraine.
Revolutionary tribunals were present during both the Revolution and the Civil War, intended to combat the forces of counter-revolution. At the Civil War's zenith, upwards of 200,000 cases are reported to have been investigated by approximately 200 tribunals. The tribunals distinguished themselves from the Cheka as a more moderate force that acted under the banner of revolutionary justice rather than wielding outright brute force as the Cheka did. However, these tribunals came with their own set of inefficiencies, such as taking months to respond to cases and lacking a concrete definition of "counter-revolution", which was instead determined on a case-by-case basis. The "Decree on Revolutionary Tribunals" used by the People's Commissar of Justice states in article 2 that "In fixing the penalty, the Revolutionary Tribunal shall be guided by the circumstances of the case and the dictates of the revolutionary conscience." Revolutionary tribunals ultimately demonstrated that a form of justice was still prevalent in Russian society where the Russian Provisional Government had failed. This, in part, triggered the political transition of the October Revolution and the Civil War that followed in its aftermath.
The Bolsheviks executed the Tsar and his family on the night of 16–17 July 1918. In early March, the Provisional Government had placed Nicholas and his family under house arrest in the Alexander Palace at Tsarskoye Selo, south of Petrograd. In August 1917 the Kerensky government evacuated the Romanovs to Tobolsk in the Urals, to protect them from the rising tide of revolution. However, after the Bolsheviks came to power in October 1917 and Kerensky lost control, the conditions of the family's imprisonment grew stricter and talk of putting Nicholas on trial increased. As the counter-revolutionary White movement gathered force, leading to full-scale civil war by the summer, the Romanovs were moved during April and May 1918 to Yekaterinburg, a militant Bolshevik stronghold.
During the early morning of 17 July, Nicholas, Alexandra, their children, their physician, and several servants were taken into the basement and shot. According to Edvard Radzinsky and Dmitrii Volkogonov, the order came directly from Lenin and Sverdlov in Moscow. That the order came from the top has long been believed, although hard evidence is lacking. The execution may have been carried out on the initiative of local Bolshevik officials, or it may have been an option pre-approved in Moscow should White troops approach Yekaterinburg. Radzinsky noted that Lenin's bodyguard personally delivered the telegram ordering the execution and that he was ordered to destroy the evidence.
The Russian Revolution became the site for many instances of symbolism, both physical and non-physical. Communist symbolism is perhaps the most notable of this time period, such as the debut of the iconic hammer and sickle as a representation of the October Revolution in 1917, eventually becoming the official symbol of the USSR in 1924. Although the Bolsheviks did not have extensive political experience, their framing of the revolution as both a political and a symbolic order resulted in Communism being portrayed as a messianic faith, formally known as communist messianism. Notable revolutionary figures such as Lenin were depicted in iconographic fashion, likening them to religious figures, even though religion itself was banned in the USSR and groups such as the Russian Orthodox Church were persecuted.
The revolution ultimately led to the establishment of the future Soviet Union as an ideocracy; however, the establishment of such a state came as an ideological paradox, as Marx's ideals of how a socialist state ought to be created were based on the formation being natural and not artificially incited (i.e. by means of revolution). Leon Trotsky said that the goal of socialism in Russia would not be realized without the success of the world revolution. A revolutionary wave caused by the Russian Revolution lasted until 1923, but despite initial hopes for success in the German Revolution of 1918–19, the short-lived Hungarian Soviet Republic, and others like it, no other Marxist movement at the time succeeded in keeping power in its hands.
This issue is subject to conflicting views on communist history by various Marxist groups and parties. Joseph Stalin later rejected this idea, stating that socialism was possible in one country.
The confusion regarding Stalin's position on the issue stems from the fact that, after Lenin's death in 1924, he successfully used Lenin's argument – the argument that socialism's success needs the support of workers of other countries in order to happen – to defeat his competitors within the party by accusing them of betraying Lenin and, therefore, the ideals of the October Revolution.
The Russian Revolution inspired other communist movements around the world in regions such as South Asia, Southeast Asia, and Latin America.
The Chinese Communist Revolution began in 1946 and was part of the ongoing Chinese Civil War. Marx had envisioned European revolutions to be intertwined with Asian revolutions in the mid-19th century with his 1853 "New York Tribune" article, "Revolution in China and Europe," in which he references the Chinese as a people in "revolutionary convulsion" brought about by British economic control. The May Fourth Movement is considered a turning point where Communism took root in Chinese society, especially among intellectuals. China officially became a communist country on 1 October 1949 with the establishment of the People's Republic of China (which remains to this day), with Chairman Mao Zedong at its head. China's current leaders maintain that Mao "developed the theory of revolutionary socialism" whilst reformer Deng Xiaoping "developed the theory of building socialism with Chinese characteristics."
Cuba experienced its own communist revolution as well, known as the Cuban Revolution, which began in July 1953 under the leadership of revolutionary Fidel Castro. Castro's 26th of July Movement and Cuban Revolution followed in the footsteps of the Sergeants' Revolt in Cuba in 1933, much as the 1905 Revolution in Russia preceded the October Revolution. Castro's movement sought "political democracy, political and economic nationalism, agrarian reform, industrialization, social security, and education." Like the October Revolution, the Cuban Revolution removed a more traditional, hierarchical regime with the aim of establishing greater overall equality, specifically through the removal of the former authoritarian president Fulgencio Batista. Cuba's revolution contributed to escalating tensions between the United States and the USSR during the Cold War, such as the CIA's failed Bay of Pigs Invasion by Cuban exiles in April 1961 and the Cuban Missile Crisis in October 1962. Today, Cuba is moving more towards capitalism and a free-market economy, as the Center for Democracy in the Americas (CDA) believes Castro's policies during his rule fostered "an acceptance that market forces can play a role in economic policy and that economic growth must be the central criterion to judge economic success."
The August Revolution took place on 14 August 1945, led by revolutionary leader Ho Chi Minh with the aid of his Viet Minh. During the Second World War, the French and Japanese fascists in Indochina (in present-day Southeast Asia) began to experience significant resistance to their colonial rule. Because both France and Japan were engaged in World War II, the Vietnamese people saw an opportunity to stage an uprising, resulting in the bloody August Insurrection, which ended colonial rule in Vietnam. Marxism was manifested in Vietnam as early as the spring of 1925, when the Vietnamese Revolutionary Youth League was established, the league being described as the "first truly Marxist organization in Indochina". The domino effect caused more concern among Western countries regarding Communism in Southeast Asia. One interpretation of the United States' involvement in the Vietnam War is that "America had lost a guerrilla war in Asia, a loss caused by failure to appreciate the nuances of counterinsurgency war." Since the Fall of Saigon on 30 April 1975, Vietnam has remained a communist country.
Few events in historical research have been as conditioned by political influences as the October Revolution. The historiography of the Revolution generally divides into three camps: the Soviet-Marxist view, the Western-Totalitarian view, and the Revisionist view. Since the fall of Communism (and the USSR) in Russia in 1991, the Western-Totalitarian view has again become dominant and the Soviet-Marxist view has practically vanished. While the Soviet-Marxist view has been largely discredited, an "anti-Stalinist" version of it attempts to draw a distinction between the "Lenin period" (1917–23) and the "Stalin period" (1923–53).
A Lenin biographer, Robert Service, states he "laid the foundations of dictatorship and lawlessness. Lenin had consolidated the principle of state penetration of the whole society, its economy and its culture. Lenin had practised terror and advocated revolutionary amoralism."
"Dates are correct for the Julian calendar, which was used in Russia until 1918. It was 12 days behind the Gregorian calendar during the 19th century and thirteen days behind it during the 20th century."
George Orwell's classic novella "Animal Farm" is an allegory of the Russian Revolution and its aftermath. It depicts the dictator Stalin as a big Berkshire boar named "Napoleon". Trotsky is represented by a pig called Snowball, a brilliant talker who makes magnificent speeches. However, Napoleon overthrows Snowball as Stalin overthrew Trotsky, and Napoleon takes over the farm the animals live on. Napoleon becomes a tyrant who uses force and propaganda to oppress the animals, while teaching them that they are free.
The Russian Revolution has been portrayed in or served as backdrop for many films. Among them, in order of release date:
The Russian Revolution has been used as a direct backdrop for select video games. Among them, in order of release date: | https://en.wikipedia.org/wiki?curid=25762 |
Raven Software
Raven Software is an American video game developer based in Wisconsin and founded in 1990. In 1997, Raven made an exclusive publishing deal with Activision and was subsequently acquired by them. After the acquisition, many of the studio's original developers, largely responsible for creating the "Heretic" and "" games, left to form Human Head Studios.
Raven Software was founded in 1990 by brothers Brian and Steve Raffel. The company was independent until 1997 when it was acquired by Activision.
Raven has a history of working with id Software, who were briefly located on the same street. They used id's engines for many of their games, such as "Heretic" in 1994. They took over development of id's "Quake" franchise for "Quake 4" and the 2009 iteration of id's "Wolfenstein" series.
The company started with three development teams. In August 2009, following the poor performance and possible budget overrun of "Wolfenstein", the company made a major layoff of 30–35 staff, leaving two development teams. This was reduced to one team after further layoffs in October 2010, following delays with "Singularity"; as many as 40 staff were released. Since those layoffs, Raven has focused on assisting with the "Call of Duty" series.
In 2012, Raven began hiring employees for a new game, and in May 2013 the studio was announced as collaborating with Infinity Ward on "".
On April 3, 2013, following the closure of LucasArts, Raven Software released the source code for "" and "" on SourceForge.
As of April 2014, the company is the lead developer of the free-to-play Chinese "Call of Duty" title, "". The company also remade "," titled "".
Currently, Raven Software is collaborating with Infinity Ward on the game "Call of Duty: Warzone". | https://en.wikipedia.org/wiki?curid=25764 |
RNA world
The RNA world is a hypothetical stage in the evolutionary history of life on Earth, in which self-replicating RNA molecules proliferated before the evolution of DNA and proteins. The term also refers to the hypothesis that posits the existence of this stage.
Alexander Rich first proposed the concept of the RNA world in 1962, and Walter Gilbert coined the term in 1986. Alternative chemical paths to life have been proposed, and RNA-based life may not have been the first life to exist. Even so, the evidence for an RNA world is strong enough that the hypothesis has gained wide acceptance. The concurrent formation of all four RNA building blocks further strengthened the hypothesis.
Like DNA, RNA can store and replicate genetic information; like protein enzymes, RNA enzymes (ribozymes) can catalyze (start or accelerate) chemical reactions that are critical for life. One of the most critical components of cells, the ribosome, is composed primarily of RNA. Ribonucleotide moieties in many coenzymes, such as Acetyl-CoA, NADH, FADH and F420, may be surviving remnants of covalently bound coenzymes in an RNA world.
Although RNA is fragile, some ancient RNAs may have evolved the ability to methylate other RNAs to protect them.
If the RNA world existed, it was probably followed by an age characterized by the evolution of ribonucleoproteins (RNP world), which in turn ushered in the era of DNA and longer proteins. DNA has better stability and durability than RNA; this may explain why it became the predominant storage molecule. Protein enzymes may have come to replace RNA-based ribozymes as biocatalysts because their greater abundance and diversity of monomers makes them more versatile. As some co-factors contain both nucleotide and amino-acid characteristics, it may be that amino acids, peptides and finally proteins initially were co-factors for ribozymes.
One of the challenges in studying abiogenesis is that the system of reproduction and metabolism utilized by all extant life involves three distinct types of interdependent macromolecules (DNA, RNA, and protein). This suggests that life could not have arisen in its current form, which has led researchers to hypothesize mechanisms whereby the current system might have arisen from a simpler precursor system. The concept of RNA as a primordial molecule can be found in papers by Francis Crick and Leslie Orgel, as well as in Carl Woese's 1967 book "The Genetic Code". In 1962, the molecular biologist Alexander Rich posited much the same idea in an article he contributed to a volume issued in honor of Nobel-laureate physiologist Albert Szent-Györgyi. Hans Kuhn in 1972 laid out a possible process by which the modern genetic system might have arisen from a nucleotide-based precursor, and this led Harold White in 1976 to observe that many of the cofactors essential for enzymatic function are either nucleotides or could have been derived from nucleotides. He proposed that these nucleotide cofactors represent "fossils of nucleic acid enzymes". The phrase "RNA World" was first used by Nobel laureate Walter Gilbert in 1986, in a commentary on how recent observations of the catalytic properties of various forms of RNA fit with this hypothesis.
In November 2019, scientists reported detecting, for the first time, sugar molecules, including ribose, in meteorites, suggesting that chemical processes on asteroids can produce some fundamentally essential bio-ingredients important to life, and supporting the notion of an RNA world prior to a DNA-based origin of life on Earth, and possibly, as well, the notion of panspermia. In March 2020, astronomer Tomonori Totani presented a statistical approach for explaining how an initial active RNA molecule might have been produced randomly in the universe sometime since the Big Bang.
The properties of RNA make the idea of the RNA world hypothesis conceptually plausible, though its general acceptance as an explanation for the origin of life requires further evidence. RNA is known to form efficient catalysts and its similarity to DNA makes clear its ability to store information. Opinions differ, however, as to whether RNA constituted the first autonomous self-replicating system or was a derivative of a still-earlier system. One version of the hypothesis is that a different type of nucleic acid, termed "pre-RNA", was the first one to emerge as a self-reproducing molecule, to be replaced by RNA only later. On the other hand, the discovery in 2009 that activated pyrimidine ribonucleotides can be synthesized under plausible prebiotic conditions suggests that it is premature to dismiss the RNA-first scenarios. Suggestions for 'simple' "pre-RNA" nucleic acids have included peptide nucleic acid (PNA), threose nucleic acid (TNA) or glycol nucleic acid (GNA). Despite their structural simplicity and possession of properties comparable with RNA, the chemically plausible generation of "simpler" nucleic acids under prebiotic conditions has yet to be demonstrated.
RNA enzymes, or ribozymes, are found in today's DNA-based life and could be examples of living fossils. Ribozymes play vital roles, such as that of the ribosome. The large subunit (50S) of the 70S ribosome contains 23S rRNA, which acts as a peptide-bond-forming enzyme called peptidyl transferase and helps in protein synthesis. Many other ribozyme functions exist; for example, the hammerhead ribozyme performs self-cleavage and an RNA polymerase ribozyme can synthesize a short RNA strand from a primed RNA template.
Among the enzymatic properties important for the beginning of life are:
RNA is a very similar molecule to DNA, with only two major chemical differences (the backbone of RNA uses ribose instead of deoxyribose, and its nucleobases include uracil instead of thymine). The overall structures of RNA and DNA are immensely similar—one strand of DNA and one of RNA can bind to form a double helical structure. This makes the storage of information in RNA possible in a very similar way to the storage of information in DNA. However, RNA is less stable, being more prone to hydrolysis due to the presence of a hydroxyl group at the ribose 2' position.
The major difference between RNA and DNA is the presence of a hydroxyl group at the 2'-position of the ribose sugar in RNA (illustration, right). This group makes the molecule less stable because, when not constrained in a double helix, the 2' hydroxyl can chemically attack the adjacent phosphodiester bond to cleave the phosphodiester backbone. The hydroxyl group also forces the ribose into the C3'-"endo" sugar conformation unlike the C2'-"endo" conformation of the deoxyribose sugar in DNA. This forces an RNA double helix to change from a B-DNA structure to one more closely resembling A-DNA.
RNA also uses a different set of bases than DNA—adenine, guanine, cytosine and uracil, instead of adenine, guanine, cytosine and thymine. Chemically, uracil is similar to thymine, differing only by a methyl group, and its production requires less energy. In terms of base pairing, this has no effect. Adenine readily binds uracil or thymine. Uracil is, however, one product of damage to cytosine that makes RNA particularly susceptible to mutations that can replace a GC base pair with a GU (wobble) or AU base pair.
RNA is thought to have preceded DNA, because of their ordering in the biosynthetic pathways. The deoxyribonucleotides used to make DNA are made from ribonucleotides, the building blocks of RNA, by removing the 2'-hydroxyl group. As a consequence a cell must have the ability to make RNA before it can make DNA.
The chemical properties of RNA make large RNA molecules inherently fragile, and they can easily be broken down into their constituent nucleotides through hydrolysis. These limitations do not make the use of RNA as an information storage system impossible, merely energy intensive (to repair or replace damaged RNA molecules) and prone to mutation. While this makes it unsuitable for current 'DNA optimised' life, it may have been acceptable for more primitive life.
Riboswitches have been found to act as regulators of gene expression, particularly in bacteria, but also in plants and archaea. Riboswitches alter their secondary structure in response to the binding of a metabolite. This change in structure can result in the formation or disruption of a terminator, truncating or permitting transcription respectively. Alternatively, riboswitches may bind or occlude the Shine-Dalgarno sequence, affecting translation. It has been suggested that these originated in an RNA-based world. In addition, RNA thermometers regulate gene expression in response to temperature changes.
The RNA world hypothesis is supported by RNA's ability to store, transmit, and duplicate genetic information, as DNA does. RNA can act as a ribozyme, a special type of enzyme. Because it can perform the tasks of both DNA and enzymes, RNA is believed to have once been capable of supporting independent life forms. Some viruses use RNA as their genetic material, rather than DNA. Further, while nucleotides were not found in experiments based on the Miller-Urey experiment, their formation in prebiotically plausible conditions was reported in 2009; the purine base known as adenine is merely a pentamer of hydrogen cyanide. Experiments with basic ribozymes, like Bacteriophage Qβ RNA, have shown that simple self-replicating RNA structures can withstand even strong selective pressures (e.g., opposite-chirality chain terminators).
Since there were no known chemical pathways for the abiogenic synthesis of nucleotides from the pyrimidine nucleobases cytosine and uracil under prebiotic conditions, it is thought by some that nucleic acids did not originally contain these nucleobases seen in life's nucleic acids. The nucleoside cytosine has a half-life in isolation of 19 days at elevated temperature and 17,000 years in freezing water, which some argue is too short on the geologic time scale for accumulation. Others have questioned whether ribose and other backbone sugars could be stable enough to find in the original genetic material, and have raised the issue that all ribose molecules would have had to be the same enantiomer, as any nucleotide of the wrong chirality acts as a chain terminator.
Pyrimidine ribonucleosides and their respective nucleotides have been prebiotically synthesised by a sequence of reactions that by-pass free sugars and assemble in a stepwise fashion by including nitrogenous and oxygenous chemistries. In a series of publications, John Sutherland and his team at the School of Chemistry, University of Manchester, have demonstrated high yielding routes to cytidine and uridine ribonucleotides built from small 2 and 3 carbon fragments such as glycolaldehyde, glyceraldehyde or glyceraldehyde-3-phosphate, cyanamide and cyanoacetylene. One of the steps in this sequence allows the isolation of enantiopure ribose aminooxazoline if the enantiomeric excess of glyceraldehyde is 60% or greater, of possible interest towards biological homochirality. This can be viewed as a prebiotic purification step, where the said compound spontaneously crystallised out from a mixture of the other pentose aminooxazolines. Aminooxazolines can react with cyanoacetylene in a mild and highly efficient manner, controlled by inorganic phosphate, to give the cytidine ribonucleotides. Photoanomerization with UV light allows for inversion about the 1' anomeric centre to give the correct beta stereochemistry; one problem with this chemistry is the selective phosphorylation of alpha-cytidine at the 2' position. However, in 2009, they showed that the same simple building blocks allow access, via phosphate controlled nucleobase elaboration, to 2',3'-cyclic pyrimidine nucleotides directly, which are known to be able to polymerise into RNA. Organic chemist Donna Blackmond described this finding as "strong evidence" in favour of the RNA world. However, John Sutherland said that while his team's work suggests that nucleic acids played an early and central role in the origin of life, it did not necessarily support the RNA world hypothesis in the strict sense, which he described as a "restrictive, hypothetical arrangement".
The Sutherland group's 2009 paper also highlighted the possibility for the photo-sanitization of the pyrimidine-2',3'-cyclic phosphates. A potential weakness of these routes is the generation of enantioenriched glyceraldehyde, or its 3-phosphate derivative (glyceraldehyde prefers to exist as its keto tautomer dihydroxyacetone).
On August 8, 2011, a report, based on NASA studies with meteorites found on Earth, was published suggesting building blocks of RNA (adenine, guanine and related organic molecules) may have been formed extraterrestrially in outer space. In 2017, a numerical model suggests that the RNA world may have emerged in warm ponds on the early Earth, and that meteorites were a plausible and probable source of the RNA building blocks (ribose and nucleic acids) to these environments. On August 29, 2012, astronomers at Copenhagen University reported the detection of a specific sugar molecule, glycolaldehyde, in a distant star system. The molecule was found around the protostellar binary "IRAS 16293-2422", which is located 400 light years from Earth. Because glycolaldehyde is needed to form RNA, this finding suggests that complex organic molecules may form in stellar systems prior to the formation of planets, eventually arriving on young planets early in their formation.
Nucleotides are the fundamental molecules that combine in series to form RNA. They consist of a nitrogenous base attached to a sugar-phosphate backbone. RNA is made of long stretches of specific nucleotides arranged so that their sequence of bases carries information. The RNA world hypothesis holds that in the primordial soup (or sandwich), there existed free-floating nucleotides. These nucleotides regularly formed bonds with one another, which often broke because the change in energy was so low. However, certain sequences of base pairs have catalytic properties that lower the energy of their chain being created, enabling them to stay together for longer periods of time. As each chain grew longer, it attracted more matching nucleotides faster, causing chains to now form faster than they were breaking down.
These chains have been proposed by some as the first, primitive forms of life. In an RNA world, different sets of RNA strands would have had different replication outputs, which would have increased or decreased their frequency in the population, i.e. natural selection. As the fittest sets of RNA molecules expanded their numbers, novel catalytic properties added by mutation, which benefitted their persistence and expansion, could accumulate in the population. Such an autocatalytic set of ribozymes, capable of self replication in about an hour, has been identified. It was produced by molecular competition ("in vitro" evolution) of candidate enzyme mixtures.
Competition between RNA may have favored the emergence of cooperation between different RNA chains, opening the way for the formation of the first protocell. Eventually, RNA chains developed with catalytic properties that helped amino acids bind together (a process called peptide bonding). These amino acids could then assist with RNA synthesis, giving those RNA chains that could serve as ribozymes a selective advantage. The ability to catalyze one step in protein synthesis, aminoacylation of RNA, has been demonstrated in a short (five-nucleotide) segment of RNA.
In March 2015, NASA scientists reported that, for the first time, complex DNA and RNA organic compounds of life, including uracil, cytosine and thymine, have been formed in the laboratory under conditions found only in outer space, using starting chemicals, like pyrimidine, found in meteorites. Pyrimidine, like polycyclic aromatic hydrocarbons (PAHs), may have been formed in giant red stars or in interstellar dust and gas clouds, according to the scientists.
In 2018, researchers at Georgia Institute of Technology identified three molecular candidates for the bases that might have formed an earliest version of proto-RNA: barbituric acid, melamine, and 2,4,6-triaminopyrimidine (TAP). These three molecules are simpler versions of the four bases in current RNA, which could have been present in larger amounts and could still be forward compatible with them, but may have been discarded by evolution in exchange for more optimal base pairs. Specifically, TAP can form nucleotides with a large range of sugars. Both TAP and melamine base pair with barbituric acid. All three spontaneously form nucleotides with ribose.
One of the challenges posed by the RNA world hypothesis is to discover the pathway by which an RNA-based system transitioned to one based on DNA. Geoffrey Diemer and Ken Stedman, at Portland State University in Oregon, may have found a solution. While conducting a survey of viruses in a hot acidic lake in Lassen Volcanic National Park, California, they uncovered evidence that a simple DNA virus had acquired a gene from a completely unrelated RNA-based virus. Virologist Luis Villarreal of the University of California, Irvine, also suggests that viruses capable of converting an RNA-based gene into DNA and then incorporating it into a more complex DNA-based genome might have been common in the Virus world during the RNA to DNA transition some 4 billion years ago. This finding bolsters the argument for the transfer of information from the RNA world to the emerging DNA world before the emergence of the last universal common ancestor. The research also suggests that the diversity of this virus world is still with us.
Additional evidence supporting the concept of an RNA world has resulted from research on viroids, the first representatives of a novel domain of "subviral pathogens".
Viroids are mostly plant pathogens, which consist of short stretches (a few hundred nucleobases) of highly complementary, circular, single-stranded, and non-coding RNA without a protein coat. Compared with other infectious plant pathogens, viroids are extremely small, ranging from 246 to 467 nucleobases. In comparison, the genome of the smallest known viruses capable of causing an infection are about 2,000 nucleobases long.
In 1989, Diener proposed that, based on their characteristic properties, viroids are more plausible "living relics" of the RNA world than are introns or other RNAs then so considered. If so, viroids have attained potential significance beyond plant pathology to evolutionary biology, by representing the most plausible macromolecules known capable of explaining crucial intermediate steps in the evolution of life from inanimate matter (see: abiogenesis).
Apparently, Diener's hypothesis lay dormant until 2014, when Flores et al. published a review paper, in which Diener's evidence supporting his hypothesis was summarized. In the same year, a New York Times science writer published a popularized version of Diener's proposal, in which, however, he mistakenly credited Flores et al. with the hypothesis' original conception.
Pertinent viroid properties listed in 1989 are:
The existence, in extant cells, of RNAs with molecular properties predicted for RNAs of the RNA World constitutes an additional argument supporting the RNA World hypothesis.
Eigen "et al". and Woese proposed that the genomes of early protocells were composed of single-stranded RNA, and that individual genes corresponded to separate RNA segments, rather than being linked end-to-end as in present-day DNA genomes. A protocell that was haploid (one copy of each RNA gene) would be vulnerable to damage, since a single lesion in any RNA segment would be potentially lethal to the protocell (e.g. by blocking replication or inhibiting the function of an essential gene).
Vulnerability to damage could be reduced by maintaining two or more copies of each RNA segment in each protocell, i.e. by maintaining diploidy or polyploidy. Genome redundancy would allow a damaged RNA segment to be replaced by an additional replication of its homolog. However, for such a simple organism, the proportion of available resources tied up in the genetic material would be a large fraction of the total resource budget. Under limited resource conditions, the protocell reproductive rate would likely be inversely related to ploidy number. The protocell's fitness would be reduced by the costs of redundancy. Consequently, coping with damaged RNA genes while minimizing the costs of redundancy would likely have been a fundamental problem for early protocells.
A cost-benefit analysis was carried out in which the costs of maintaining redundancy were balanced against the costs of genome damage. This analysis led to the conclusion that, under a wide range of circumstances, the selected strategy would be for each protocell to be haploid, but to periodically fuse with another haploid protocell to form a transient diploid. The retention of the haploid state maximizes the growth rate. The periodic fusions permit mutual reactivation of otherwise lethally damaged protocells. If at least one damage-free copy of each RNA gene is present in the transient diploid, viable progeny can be formed. For two, rather than one, viable daughter cells to be produced would require an extra replication of the intact RNA gene homologous to any RNA gene that had been damaged prior to the division of the fused protocell. The cycle of haploid reproduction, with occasional fusion to a transient diploid state, followed by splitting to the haploid state, can be considered to be the sexual cycle in its most primitive form. In the absence of this sexual cycle, haploid protocells with damage in an essential RNA gene would simply die.
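To make the logic of this trade-off concrete, the following toy simulation (not the published analysis; the gene count, damage rate and fusion rate are invented parameters) compares survival of strictly haploid protocells with haploid protocells that can occasionally fuse and complement each other's damaged genes.

```python
import random

# Illustrative toy model only: parameters are invented, not taken from the
# cost-benefit analysis described in the text.
GENES = 5          # RNA segments per protocell genome
P_DAMAGE = 0.05    # chance that a given gene suffers lethal damage per generation
P_FUSION = 0.2     # chance a damaged protocell finds a fusion partner

def survives_haploid():
    """A strictly haploid protocell dies if any one of its genes is damaged."""
    return all(random.random() > P_DAMAGE for _ in range(GENES))

def survives_with_fusion():
    """Occasional fusion rescues a damaged cell if the transient diploid
    retains at least one undamaged copy of every gene."""
    damaged = [random.random() <= P_DAMAGE for _ in range(GENES)]
    if not any(damaged):
        return True
    if random.random() > P_FUSION:
        return False                      # no partner; the damage is lethal
    partner = [random.random() <= P_DAMAGE for _ in range(GENES)]
    return all(not (a and b) for a, b in zip(damaged, partner))

trials = 100_000
print(sum(survives_haploid() for _ in range(trials)) / trials)      # ~0.77
print(sum(survives_with_fusion() for _ in range(trials)) / trials)  # slightly higher
```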
This model for the early sexual cycle is hypothetical, but it is very similar to the known sexual behavior of the segmented RNA viruses, which are among the simplest organisms known. Influenza virus, whose genome consists of 8 physically separated single-stranded RNA segments, is an example of this type of virus. In segmented RNA viruses, "mating" can occur when a host cell is infected by at least two virus particles. If these viruses each contain an RNA segment with a lethal damage, multiple infection can lead to reactivation providing that at least one undamaged copy of each virus gene is present in the infected cell. This phenomenon is known as "multiplicity reactivation". Multiplicity reactivation has been reported to occur in influenza virus infections after induction of RNA damage by UV-irradiation, and ionizing radiation.
Patrick Forterre has been working on a novel hypothesis, called "three viruses, three domains": that viruses were instrumental in the transition from RNA to DNA and the evolution of Bacteria, Archaea, and Eukaryota. He believes the last universal common ancestor was RNA-based and evolved RNA viruses. Some of the viruses evolved into DNA viruses to protect their genes from attack. Through the process of viral infection into hosts the three domains of life evolved. Another interesting proposal is the idea that RNA synthesis might have been driven by temperature gradients, in the process of thermosynthesis.
Single nucleotides have been shown to catalyze organic reactions.
Steven Benner has argued that chemical conditions on the planet Mars, such as the presence of boron, molybdenum and oxygen, may have been better for initially producing RNA molecules than those on Earth. If so, life-suitable molecules, originating on Mars, may have later migrated to Earth via panspermia or a similar process.
The hypothesized existence of an RNA world does not exclude a "Pre-RNA world", where a metabolic system based on a different nucleic acid is proposed to pre-date RNA. A candidate nucleic acid is peptide nucleic acid (PNA), which uses simple peptide bonds to link nucleobases. PNA is more stable than RNA, but its ability to be generated under prebiological conditions has yet to be demonstrated experimentally.
Threose nucleic acid (TNA) has also been proposed as a starting point, as has glycol nucleic acid (GNA); like PNA, these also lack experimental evidence for their respective abiogenesis.
An alternative — or complementary — theory of RNA origin is proposed in the PAH world hypothesis, whereby polycyclic aromatic hydrocarbons (PAHs) mediate the synthesis of RNA molecules. PAHs are the most common and abundant of the known polyatomic molecules in the visible Universe, and are a likely constituent of the primordial sea. PAHs and fullerenes (also implicated in the origin of life) have been detected in nebulae.
The iron-sulfur world theory proposes that simple metabolic processes developed before genetic materials did, and these energy-producing cycles catalyzed the production of genes.
Some of the difficulties of producing the precursors on earth are bypassed by another alternative or complementary theory for their origin, panspermia. It discusses the possibility that the earliest life on this planet was carried here from somewhere else in the galaxy, possibly on meteorites similar to the Murchison meteorite. This does not invalidate the concept of an RNA world, but posits that this world or its precursors originated not on Earth but rather another, probably older, planet.
There are hypotheses that are in direct conflict to the RNA world hypothesis. The relative chemical complexity of the nucleotide and the unlikelihood of it spontaneously arising, along with the limited number of combinations possible among four base forms, as well as the need for RNA polymers of some length before seeing enzymatic activity, have led some to reject the RNA world hypothesis in favor of a metabolism-first hypothesis, where the chemistry underlying cellular function arose first, along with the ability to replicate and facilitate this metabolism.
Another proposal is that the dual-molecule system we see today, where a nucleotide-based molecule is needed to synthesize protein, and a peptide-based (protein) molecule is needed to make nucleic acid polymers, represents the original form of life. This theory is called RNA-peptide coevolution, or the Peptide-RNA world, and offers a possible explanation for the rapid evolution of high-quality replication in RNA (since proteins are catalysts), with the disadvantage of having to postulate the coincident formation of two complex molecules, an enzyme (from peptides) and an RNA (from nucleotides). In this Peptide-RNA World scenario, RNA would have contained the instructions for life, while peptides (simple protein enzymes) would have accelerated key chemical reactions to carry out those instructions. The study leaves open the question of exactly how those primitive systems managed to replicate themselves — something neither the RNA World hypothesis nor the Peptide-RNA World theory can yet explain, unless polymerases (enzymes that rapidly assemble the RNA molecule) played a role.
A research project completed in March 2015 by the Sutherland group found that a network of reactions beginning with hydrogen cyanide and hydrogen sulfide, in streams of water irradiated by UV light, could produce the chemical components of proteins and lipids, alongside those of RNA. The researchers used the term "cyanosulfidic" to describe this network of reactions. In November 2017, a team at the Scripps Research Institute identified reactions involving the compound diamidophosphate which could have linked the chemical components into short peptide and lipid chains as well as short RNA-like chains of nucleotides.
The RNA world hypothesis, if true, has important implications for the definition of life. For most of the time that followed Watson and Crick's elucidation of DNA structure in 1953, life was largely defined in terms of DNA and proteins: DNA and proteins seemed the dominant macromolecules in the living cell, with RNA only aiding in creating proteins from the DNA blueprint.
The RNA world hypothesis places RNA at center-stage when life originated. The RNA world hypothesis is supported by the observations that ribosomes are ribozymes: the catalytic site is composed of RNA, and proteins hold no major structural role and are of peripheral functional importance. This was confirmed with the deciphering of the 3-dimensional structure of the ribosome in 2001. Specifically, peptide bond formation, the reaction that binds amino acids together into proteins, is now known to be catalyzed by an adenine residue in the rRNA.
RNAs are known to play roles in other cellular catalytic processes, specifically in the targeting of enzymes to specific RNA sequences. In eukaryotes, the processing of pre-mRNA and RNA editing take place at sites determined by the base pairing between the target RNA and RNA constituents of small nuclear ribonucleoproteins (snRNPs). Such enzyme targeting is also responsible for gene downregulation through RNA interference (RNAi), where an enzyme-associated guide RNA targets specific mRNA for selective destruction. Likewise, in eukaryotes the maintenance of telomeres involves copying of an RNA template that is a constituent part of the telomerase ribonucleoprotein enzyme. Another cellular organelle, the vault, includes a ribonucleoprotein component, although the function of this organelle remains to be elucidated.
Interestingly, the "Alanine World" hypothesis places the canonical amino acid Alanine in the centre of the so-called Protein-World. Dominant secondary structures in modern proteins are α-helices and β-sheets. The most commonly selected monomers (i.e. amino acids) for ribosomal protein synthesis are chemical derivatives of the α-amino acid Alanine as they are best suited for the construction of α-helices or β-sheets in modern proteins. | https://en.wikipedia.org/wiki?curid=25765 |
Ribosome
Ribosomes are macromolecular machines, found within all living cells, that perform biological protein synthesis (mRNA translation). Ribosomes link amino acids together in the order specified by the codons of messenger RNA (mRNA) molecules to form polypeptide chains. Ribosomes consist of two major components: the small and large ribosomal subunits. Each subunit consists of one or more ribosomal RNA (rRNA) molecules and many ribosomal proteins (RPs or r-proteins). The ribosomes and associated molecules are also known as the "translational apparatus".
The sequence of DNA that encodes the sequence of the amino acids in a protein is transcribed into a messenger RNA chain. Ribosomes bind to messenger RNAs and use their sequences to determine the correct sequence of amino acids to generate a given protein. Amino acids are selected and carried to the ribosome by transfer RNA (tRNA) molecules, which enter the ribosome and bind to the messenger RNA chain via an anti-codon stem loop. For each coding triplet (codon) in the messenger RNA, there is a transfer RNA that matches and carries the correct amino acid for incorporation into a growing polypeptide chain. Once the protein is produced, it can then fold to produce a functional three-dimensional structure.
A ribosome is made from complexes of RNAs and proteins and is therefore a ribonucleoprotein complex. Each ribosome is divided into two subunits:
When a ribosome finishes reading an mRNA molecule, these two subunits split apart. Ribosomes are ribozymes, because the catalytic peptidyl transferase activity that links amino acids together is performed by the ribosomal RNA. Ribosomes are often associated with the intracellular membranes that make up the rough endoplasmic reticulum.
Ribosomes from bacteria, archaea and eukaryotes in the three-domain system resemble each other to a remarkable degree, evidence of a common origin. They differ in their size, sequence, structure, and the ratio of protein to RNA. The differences in structure allow some antibiotics to kill bacteria by inhibiting their ribosomes, while leaving human ribosomes unaffected. In all species, more than one ribosome may move along a single mRNA chain at one time (as a polysome), each "reading" its sequence and producing a corresponding protein molecule.
The mitochondrial ribosomes of eukaryotic cells functionally resemble those of bacteria in many respects, reflecting the likely evolutionary origin of mitochondria.
Ribosomes were first observed in the mid-1950s by Romanian-American cell biologist George Emil Palade, using an electron microscope, as dense particles or granules. The term "ribosome" was proposed by the scientist Richard B. Roberts at the end of the 1950s:
Albert Claude, Christian de Duve, and George Emil Palade were jointly awarded the Nobel Prize in Physiology or Medicine, in 1974, for the discovery of the ribosome. The Nobel Prize in Chemistry 2009 was awarded to Venkatraman Ramakrishnan, Thomas A. Steitz and Ada E. Yonath for determining the detailed structure and mechanism of the ribosome.
The ribosome is a highly complex cellular machine. It is largely made up of specialized RNA known as ribosomal RNA (rRNA) as well as dozens of distinct proteins (the exact number varies slightly between species). The ribosomal proteins and rRNAs are arranged into two distinct ribosomal pieces of different size, known generally as the large and small subunit of the ribosome. Ribosomes consist of two subunits that fit together (Figure 2) and work as one to translate the mRNA into a polypeptide chain during protein synthesis (Figure 1). Because they are formed from two subunits of non-equal size, they are slightly longer in the axis than in diameter.
Prokaryotic ribosomes are around 20 nm (200 Å) in diameter and are composed of 65% rRNA and 35% ribosomal proteins. Eukaryotic ribosomes are between 25 and 30 nm (250–300 Å) in diameter with an rRNA-to-protein ratio that is close to 1. Crystallographic work has shown that there are no ribosomal proteins close to the reaction site for polypeptide synthesis. This suggests that the protein components of ribosomes do not directly participate in peptide bond formation catalysis, but rather that these proteins act as a scaffold that may enhance the ability of rRNA to synthesize protein (See: Ribozyme).
The ribosomal subunits of bacteria and eukaryotes are quite similar.
The unit of measurement used to describe the ribosomal subunits and the rRNA fragments is the Svedberg unit, a measure of the rate of sedimentation in centrifugation rather than size. This accounts for why fragment names do not add up: for example, bacterial 70S ribosomes are made of 50S and 30S subunits.
Bacteria have 70S ribosomes, each consisting of a small (30S) and a large (50S) subunit. "E. coli", for example, has a 16S RNA subunit (consisting of 1540 nucleotides) that is bound to 21 proteins. The large subunit is composed of a 5S RNA subunit (120 nucleotides), a 23S RNA subunit (2900 nucleotides) and 31 proteins.
Affinity labeling of the tRNA binding sites on the "E. coli" ribosome allowed the identification of A and P site proteins most likely associated with the peptidyl transferase activity; the labelled proteins are L27, L14, L15, L16 and L2; at least L27 is located at the donor site, as shown by E. Collatz and A.P. Czernilofsky. Additional research has demonstrated that the S1 and S21 proteins, in association with the 3′-end of 16S ribosomal RNA, are involved in the initiation of translation.
Eukaryotes have 80S ribosomes located in their cytosol, each consisting of a small (40S) and large (60S) subunit. Their 40S subunit has an 18S RNA (1900 nucleotides) and 33 proteins. The large subunit is composed of a 5S RNA (120 nucleotides), 28S RNA (4700 nucleotides), a 5.8S RNA (160 nucleotides) subunits and 46 proteins.
During 1977, Czernilofsky published research that used affinity labeling to identify tRNA-binding sites on rat liver ribosomes. Several proteins, including L32/33, L36, L21, L23, L28/29 and L13 were implicated as being at or near the peptidyl transferase center.
In eukaryotes, ribosomes are present in mitochondria (sometimes called mitoribosomes) and in plastids such as chloroplasts (also called plastoribosomes). They also consist of large and small subunits bound together with proteins into one 70S particle. These ribosomes are similar to those of bacteria, and these organelles are thought to have originated as symbiotic bacteria. Of the two, chloroplastic ribosomes are closer to bacterial ones than mitochondrial ones are. Many pieces of ribosomal RNA in the mitochondria are shortened, and in the case of 5S rRNA, replaced by other structures in animals and fungi. In particular, "Leishmania tarentolae" has a minimalized set of mitochondrial rRNA.
The cryptomonad and chlorarachniophyte algae may contain a nucleomorph that resembles a vestigial eukaryotic nucleus. Eukaryotic 80S ribosomes may be present in the compartment containing the nucleomorph.
The differences between the bacterial and eukaryotic ribosomes are exploited by pharmaceutical chemists to create antibiotics that can destroy a bacterial infection without harming the cells of the infected person. Due to the differences in their structures, the bacterial 70S ribosomes are vulnerable to these antibiotics while the eukaryotic 80S ribosomes are not. Even though mitochondria possess ribosomes similar to the bacterial ones, mitochondria are not affected by these antibiotics because they are surrounded by a double membrane that does not easily admit these antibiotics into the organelle. A noteworthy counterexample, however, includes the antineoplastic antibiotic chloramphenicol, which successfully inhibits bacterial 50S and mitochondrial 50S ribosomes. The same cannot be said of chloroplasts, where antibiotic resistance in ribosomal proteins is a trait to be introduced as a marker in genetic engineering.
The various ribosomes share a core structure, which is quite similar despite the large differences in size. Much of the RNA is highly organized into various tertiary structural motifs, for example pseudoknots that exhibit coaxial stacking. The extra RNA in the larger ribosomes is in several long continuous insertions, such that they form loops out of the core structure without disrupting or changing it. All of the catalytic activity of the ribosome is carried out by the RNA; the proteins reside on the surface and seem to stabilize the structure.
The general molecular structure of the ribosome has been known since the early 1970s. In the early 2000s, the structure was determined at high resolution, of the order of a few ångströms.
The first papers giving the structure of the ribosome at atomic resolution were published almost simultaneously in late 2000. The 50S (large prokaryotic) subunit was determined from the archaeon "Haloarcula marismortui" and the bacterium "Deinococcus radiodurans", and the structure of the 30S subunit was determined from "Thermus thermophilus". These structural studies were awarded the Nobel Prize in Chemistry in 2009. In May 2001 these coordinates were used to reconstruct the entire "T. thermophilus" 70S particle at 5.5 Å resolution.
Two papers were published in November 2005 with structures of the "Escherichia coli" 70S ribosome. The structures of a vacant ribosome were determined at 3.5 Å resolution using X-ray crystallography. Then, two weeks later, a structure based on cryo-electron microscopy was published, which depicts the ribosome at 11–15 Å resolution in the act of passing a newly synthesized protein strand into the protein-conducting channel.
The first atomic structures of the ribosome complexed with tRNA and mRNA molecules were solved by using X-ray crystallography by two groups independently, at 2.8 Å and at 3.7 Å. These structures allow one to see the details of interactions of the "Thermus thermophilus" ribosome with mRNA and with tRNAs bound at classical ribosomal sites. Interactions of the ribosome with long mRNAs containing Shine-Dalgarno sequences were visualized soon after that at 4.5–5.5 Å resolution.
In 2011, the first complete atomic structure of the eukaryotic 80S ribosome from the yeast "Saccharomyces cerevisiae" was obtained by crystallography. The model reveals the architecture of eukaryote-specific elements and their interaction with the universally conserved core. At the same time, the complete model of a eukaryotic 40S ribosomal structure in "Tetrahymena thermophila" was published and described the structure of the 40S subunit, as well as much about the 40S subunit's interaction with eIF1 during translation initiation. Similarly, the eukaryotic 60S subunit structure was also determined from "Tetrahymena thermophila" in complex with eIF6.
Ribosomes are minute particles consisting of RNA and associated proteins that function to synthesize proteins. Proteins are needed for many cellular functions such as repairing damage or directing chemical processes. Ribosomes can be found floating within the cytoplasm or attached to the endoplasmic reticulum. Their main function is to convert genetic code into an amino acid sequence and to build protein polymers from amino acid monomers.
Ribosomes act as catalysts in two extremely important biological processes called peptidyl transfer and peptidyl hydrolysis. The PT center "is responsible for producing protein bonds during protein elongation".
Ribosomes are the workplaces of protein biosynthesis, the process of translating mRNA into protein. The mRNA comprises a series of codons which are decoded by the ribosome so as to make the protein. Using the mRNA as a template, the ribosome traverses each codon (3 nucleotides) of the mRNA, pairing it with the appropriate amino acid provided by an aminoacyl-tRNA. Aminoacyl-tRNA contains a complementary anticodon on one end and the appropriate amino acid on the other. For fast and accurate recognition of the appropriate tRNA, the ribosome utilizes large conformational changes (conformational proofreading).
The small ribosomal subunit, typically bound to an aminoacyl-tRNA containing the first amino acid methionine, binds to an AUG codon on the mRNA and recruits the large ribosomal subunit. The ribosome contains three RNA binding sites, designated A, P and E. The A-site binds an aminoacyl-tRNA or termination release factors; the P-site binds a peptidyl-tRNA (a tRNA bound to the poly-peptide chain); and the E-site (exit) binds a free tRNA. Protein synthesis begins at a start codon AUG near the 5' end of the mRNA. mRNA binds to the P site of the ribosome first. The ribosome recognizes the start codon by using the Shine-Dalgarno sequence of the mRNA in prokaryotes and Kozak box in eukaryotes.
Although catalysis of the peptide bond involves the C2 hydroxyl of RNA's P-site adenosine in a proton shuttle mechanism, other steps in protein synthesis (such as translocation) are caused by changes in protein conformations. Since their catalytic core is made of RNA, ribosomes are classified as "ribozymes," and it is thought that they might be remnants of the RNA world.
Both ribosomal subunits (small and large) assemble at the start codon (towards the 5' end of the mRNA). The ribosome uses tRNA that matches the current codon (triplet) on the mRNA to append an amino acid to the polypeptide chain. This is done for each triplet on the mRNA, while the ribosome moves towards the 3' end of the mRNA. Usually in bacterial cells, several ribosomes work in parallel on a single mRNA, forming what is called a "polyribosome" or "polysome".
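The decoding loop just described can be sketched schematically in code. In the sketch below, the codon table is a small illustrative subset of the standard genetic code, and the function only mimics initiation at AUG, codon-by-codon elongation and release at a stop codon; it is not a model of the underlying molecular machinery.

```python
# Schematic of ribosomal decoding: start at the first AUG, read the mRNA in
# steps of three nucleotides, append one amino acid per codon, and stop at a
# stop codon. Only a handful of codons are included for illustration.
CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "GGC": "Gly", "GCU": "Ala",
    "UGG": "Trp", "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna):
    start = mrna.find("AUG")                 # initiation at the first AUG
    if start == -1:
        return []
    peptide = []
    for i in range(start, len(mrna) - 2, 3):
        residue = CODON_TABLE.get(mrna[i:i + 3], "Xaa")  # unknown codons -> Xaa
        if residue == "STOP":                # termination: release the chain
            break
        peptide.append(residue)
    return peptide

print(translate("GGAUGUUUGGCGCUUGGUAACC"))   # ['Met', 'Phe', 'Gly', 'Ala', 'Trp']
```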
The ribosome is known to participate actively in protein folding. The structures obtained in this way are usually identical to those obtained during chemical refolding of the protein; however, the pathways leading to the final product may be different. In some cases, the ribosome is crucial in obtaining the functional protein form. For example, one of the possible mechanisms of folding of deeply knotted proteins relies on the ribosome pushing the chain through the attached loop.
Presence of a ribosome quality control protein Rqc2 is associated with mRNA-independent protein elongation. This elongation is a result of ribosomal addition (via tRNAs brought by Rqc2) of "CAT tails": ribosomes extend the C-terminus of a stalled protein with random, translation-independent sequences of alanines and threonines.
Ribosomes are classified as being either "free" or "membrane-bound".
Free and membrane-bound ribosomes differ only in their spatial distribution; they are identical in structure. Whether the ribosome exists in a free or membrane-bound state depends on the presence of an ER-targeting signal sequence on the protein being synthesized, so an individual ribosome might be membrane-bound when it is making one protein, but free in the cytosol when it makes another protein.
Ribosomes are sometimes referred to as organelles, but the use of the term "organelle" is often restricted to describing sub-cellular components that include a phospholipid membrane, which ribosomes, being entirely particulate, do not. For this reason, ribosomes may sometimes be described as "non-membranous organelles".
Free ribosomes can move about anywhere in the cytosol, but are excluded from the cell nucleus and other organelles. Proteins that are formed from free ribosomes are released into the cytosol and used within the cell. Since the cytosol contains high concentrations of glutathione and is, therefore, a reducing environment, proteins containing disulfide bonds, which are formed from oxidized cysteine residues, cannot be produced within it.
When a ribosome begins to synthesize proteins that are needed in some organelles, the ribosome making this protein can become "membrane-bound". In eukaryotic cells this happens in a region of the endoplasmic reticulum (ER) called the "rough ER". The newly produced polypeptide chains are inserted directly into the ER by the ribosome undertaking vectorial synthesis and are then transported to their destinations, through the secretory pathway. Bound ribosomes usually produce proteins that are used within the plasma membrane or are expelled from the cell via "exocytosis".
In bacterial cells, ribosomes are synthesized in the cytoplasm through the transcription of multiple ribosome gene operons. In eukaryotes, the process takes place both in the cell cytoplasm and in the nucleolus, which is a region within the cell nucleus. The assembly process involves the coordinated function of over 200 proteins in the synthesis and processing of the four rRNAs, as well as assembly of those rRNAs with the ribosomal proteins.
The ribosome may have first originated in an RNA world, appearing as a self-replicating complex that only later evolved the ability to synthesize proteins when amino acids began to appear. Studies suggest that ancient ribosomes constructed solely of rRNA could have developed the ability to synthesize peptide bonds. In addition, evidence strongly points to ancient ribosomes as self-replicating complexes, where the rRNA in the ribosomes had informational, structural, and catalytic purposes because it could have coded for tRNAs and proteins needed for ribosomal self-replication. Hypothetical cellular organisms with self-replicating RNA but without DNA are called ribocytes (or ribocells).
As amino acids gradually appeared in the RNA world under prebiotic conditions, their interactions with catalytic RNA would increase both the range and efficiency of function of catalytic RNA molecules. Thus, the driving force for the evolution of the ribosome from an ancient self-replicating machine into its current form as a translational machine may have been the selective pressure to incorporate proteins into the ribosome's self-replicating mechanisms, so as to increase its capacity for self-replication.
Ribosomes are compositionally heterogeneous between species and even within the same cell, as evidenced by the existence of cytoplasmic and mitochondrial ribosomes within the same eukaryotic cells. Certain researchers have suggested that heterogeneity in the composition of ribosomal proteins in mammals is important for gene regulation, "i.e.", the specialized ribosome hypothesis. However, this hypothesis is controversial and the topic of ongoing research.
Heterogeneity in ribosome composition was first proposed to be involved in translational control of protein synthesis by Vince Mauro and Gerald Edelman. They proposed the ribosome filter hypothesis to explain the regulatory functions of ribosomes. Evidence has suggested that specialized ribosomes specific to different cell populations may affect how genes are translated. Some ribosomal proteins exchange from the assembled complex with cytosolic copies suggesting that the structure of the "in vivo" ribosome can be modified without synthesizing an entire new ribosome.
Certain ribosomal proteins are absolutely critical for cellular life while others are not. In budding yeast, 14/78 ribosomal proteins are non-essential for growth, while in humans this depends on the cell of study. Other forms of heterogeneity include post-translational modifications to ribosomal proteins such as acetylation, methylation, and phosphorylation (reported, for example, in "Arabidopsis"). Viral internal ribosome entry sites (IRESs) may mediate translation by compositionally distinct ribosomes. For example, 40S ribosomal units without eS25 in yeast and mammalian cells are unable to recruit the CrPV IGR IRES.
Heterogeneity of ribosomal RNA modifications plays an important role in structural maintenance and/or function, and most rRNA modifications are found in highly conserved regions. The most common rRNA modifications are pseudouridylation and 2'-O methylation of ribose. | https://en.wikipedia.org/wiki?curid=25766
Real-time computing
Real-time computing (RTC), or reactive computing, is the computer science term for hardware and software systems subject to a "real-time constraint", for example from event to system response. Real-time programs must guarantee response within specified time constraints, often referred to as "deadlines".
Real-time responses are often understood to be in the order of milliseconds, and sometimes microseconds. A system not specified as operating in real time cannot usually "guarantee" a response within any timeframe, although "typical" or "expected" response times may be given. Real-time processing "fails" if not completed within a specified deadline relative to an event; deadlines must always be met, regardless of system load.
A real-time system has been described as one which "controls an environment by receiving data, processing them, and returning the results sufficiently quickly to affect the environment at that time". The term "real-time" is also used in simulation to mean that the simulation's clock runs at the same speed as a real clock, and in process control and enterprise systems to mean "without significant delay".
Real-time software may use one or more of the following: synchronous programming languages, real-time operating systems, and real-time networks, each of which provides an essential framework on which to build a real-time software application.
Systems used for many mission critical applications must be real-time, such as for control of fly-by-wire aircraft, or anti-lock brakes, both of which demand immediate and accurate mechanical response.
The term "real-time" derives from its use in early simulation, in which a real-world process is simulated at a rate that matched that of the real process (now called real-time simulation to avoid ambiguity). Analog computers, most often, were capable of simulating at a much faster pace than real-time, a situation that could be just as dangerous as a slow simulation if it were not also recognized and accounted for.
Minicomputers, particularly in the 1970s onwards, when built into dedicated embedded systems such as DOG scanners, increased the need for low-latency priority-driven responses to important interactions with incoming data, and so operating systems such as Data General's RDOS (Real-Time Disk Operating System) and RTOS with background and foreground scheduling, as well as Digital Equipment Corporation's RT-11, date from this era. Background-foreground scheduling allowed low-priority tasks CPU time when no foreground task needed to execute, and gave absolute priority within the foreground to threads/tasks with the highest priority. Real-time operating systems would also be used for time-sharing multiuser duties. For example, Data General Business Basic could run in the foreground or background of RDOS and would introduce additional elements to the scheduling algorithm to make it more appropriate for people interacting via dumb terminals.
Once when the MOS Technology 6502 (used in the Commodore 64 and Apple II), and later when the Motorola 68000 (used in the Macintosh, Atari ST, and Commodore Amiga), were popular, anybody could use their home computer as a real-time system. The ability to deactivate other interrupts allowed for hard-coded loops with defined timing, and the low interrupt latency allowed the implementation of a real-time operating system, giving the user interface and the disk drives lower priority than the real-time thread. Compared to these, the programmable interrupt controller of the Intel CPUs (8086..80586) generates a very large latency, and the Windows operating system is neither a real-time operating system nor does it allow a program to take over the CPU completely and use its own scheduler, without using native machine language and thus bypassing all interrupting Windows code. However, several coding libraries exist which offer real-time capabilities in a high-level language on a variety of operating systems, for example Java Real Time. The Motorola 68000 and subsequent family members (68010, 68020 etc.) also became popular with manufacturers of industrial control systems. This application area is one in which real-time control offers genuine advantages in terms of process performance and safety.
A system is said to be "real-time" if the total correctness of an operation depends not only upon its logical correctness, but also upon the time in which it is performed. Real-time systems, as well as their deadlines, are classified by the consequence of missing a deadline:
Thus, the goal of a "hard real-time system" is to ensure that all deadlines are met, but for "soft real-time systems" the goal becomes meeting a certain subset of deadlines in order to optimize some application-specific criteria. The particular criteria optimized depend on the application, but some typical examples include maximizing the number of deadlines met, minimizing the lateness of tasks and maximizing the number of high priority tasks meeting their deadlines.
Hard real-time systems are used when it is imperative that an event be reacted to within a strict deadline. Such strong guarantees are required of systems for which not reacting in a certain interval of time would cause great loss in some manner, especially damaging the surroundings physically or threatening human lives (although the strict definition is simply that missing the deadline constitutes failure of the system). Some examples of hard real-time systems:
In the context of multitasking systems, the scheduling policy is normally priority driven (pre-emptive schedulers). In some situations, these can guarantee hard real-time performance (for instance, if the set of tasks and their priorities is known in advance). There are other hard real-time schedulers, such as rate-monotonic, which are not common in general-purpose systems, as they require additional information in order to schedule a task: namely a bound or worst-case estimate for how long the task must execute. Specific algorithms for scheduling such hard real-time tasks exist, such as earliest deadline first, which, ignoring the overhead of context switching, is sufficient for system loads of less than 100%. New overlay scheduling systems, such as an adaptive partition scheduler, assist in managing large systems with a mixture of hard real-time and non-real-time applications.
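For the periodic-task case, the guarantee mentioned above reduces to a simple utilization test. The sketch below assumes independent periodic tasks whose deadlines equal their periods and ignores context-switch overhead; the task names and timing figures are invented for illustration.

```python
# Earliest-deadline-first admission test for independent periodic tasks:
# all deadlines are met as long as total utilization sum(C/T) <= 100%,
# ignoring context-switch overhead. The task set below is illustrative only.
tasks = [
    {"name": "sensor_read",  "C": 2, "T": 10},   # 2 ms of work every 10 ms
    {"name": "control_loop", "C": 5, "T": 20},   # 5 ms of work every 20 ms
    {"name": "logging",      "C": 4, "T": 40},   # 4 ms of work every 40 ms
]

utilization = sum(t["C"] / t["T"] for t in tasks)
print(f"total utilization: {utilization:.0%}")   # 55%
print("schedulable under EDF" if utilization <= 1.0 else "overloaded")
```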
Firm real-time systems are more nebulously defined, and some classifications do not include them, distinguishing only hard and soft real-time systems. Some examples of firm real-time systems:
Soft real-time systems are typically used to solve issues of concurrent access and the need to keep a number of connected systems up-to-date through changing situations. Some examples of soft real-time systems:
In a real-time digital signal processing (DSP) process, the analyzed (input) and generated (output) samples can be processed (or generated) continuously in the time it takes to input and output the same set of samples, "independent" of the processing delay. It means that the processing delay must be bounded even if the processing continues for an unlimited time. That means that the mean processing time per sample, including overhead, is no greater than the sampling period, which is the reciprocal of the sampling rate. This criterion holds whether the samples are grouped together in large segments and processed as blocks, or are processed individually, and whether there are long, short, or non-existent input and output buffers.
Consider an audio DSP example; if a process requires 2.01 seconds to analyze, synthesize, or process 2.00 seconds of sound, it is not real-time. However, if it takes 1.99 seconds, it is or can be made into a real-time DSP process.
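The criterion can also be written out explicitly. In the sketch below, the 48 kHz sampling rate is an assumed figure chosen for illustration, and the 2.01 s and 1.99 s values restate the audio example above.

```python
# Real-time criterion for a streaming DSP process: the mean processing time
# per sample must not exceed the sampling period (the reciprocal of the
# sampling rate). The 48 kHz rate is an illustrative assumption.
sampling_rate_hz = 48_000
sampling_period_s = 1.0 / sampling_rate_hz        # about 20.8 microseconds

def is_real_time(mean_time_per_sample_s):
    return mean_time_per_sample_s <= sampling_period_s

# 2.00 s of sound at 48 kHz is 96,000 samples; processing it in 2.01 s fails
# the criterion, while 1.99 s satisfies it.
for total_processing_s in (2.01, 1.99):
    per_sample_s = total_processing_s / (2.00 * sampling_rate_hz)
    print(total_processing_s, "->", is_real_time(per_sample_s))
```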
A common real-life analogy is standing in a line or queue waiting for the checkout in a grocery store. If the line asymptotically grows longer and longer without bound, the checkout process is not real-time. If the length of the line is bounded, and customers are being "processed" and output as rapidly, on average, as they are being input, then that process "is" real-time. The grocer might go out of business, or must at least lose business, if they cannot make their checkout process real-time; thus, it is fundamentally important that this process is real-time.
A signal processing algorithm that cannot keep up with the flow of input data, with output falling farther and farther behind the input, is not real-time. But if the delay of the output (relative to the input) is bounded regarding a process that operates over an unlimited time, then that signal processing algorithm is real-time, even if the throughput delay may be very long.
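A toy simulation makes the bounded-backlog idea concrete. The arrival and service rates below are invented; the model simply tracks how much unprocessed input accumulates when the processor can, or cannot, keep up on average.

```python
# Toy backlog model: samples arrive at a fixed rate and are drained at a fixed
# service rate. If service >= arrival, the backlog (and hence the output delay)
# stays bounded; otherwise it grows without limit. Rates are illustrative.
def max_backlog(arrival_per_tick, service_per_tick, ticks=10_000):
    backlog, worst = 0.0, 0.0
    for _ in range(ticks):
        backlog += arrival_per_tick                      # new input arrives
        worst = max(worst, backlog)                      # record peak queue
        backlog = max(0.0, backlog - service_per_tick)   # processor drains
    return worst

print(max_backlog(1.0, 1.05))   # keeps up: peak backlog stays at ~1 sample
print(max_backlog(1.0, 0.95))   # falls behind: peak backlog grows to ~500
```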
Real-time signal processing is necessary, but not sufficient in and of itself, for live signal processing such as what is required in live event support. Live audio digital signal processing requires both real-time operation and a sufficient limit to throughput delay so as to be tolerable to performers using stage monitors or in-ear monitors and not noticeable as lip sync error by the audience also directly watching the performers. Tolerable limits to latency for live, real-time processing are a subject of investigation and debate, but are estimated to be between 6 and 20 milliseconds.
Real-time bidirectional telecommunications delays of less than 300 ms ("round trip" or twice the unidirectional delay) are considered "acceptable" to avoid undesired "talk-over" in conversation.
Real-time computing is sometimes misunderstood to be high-performance computing, but this is not an accurate classification. For example, a massive supercomputer executing a scientific simulation may offer impressive performance, yet it is not executing a real-time computation. Conversely, once the hardware and software for an anti-lock braking system have been designed to meet its required deadlines, no further performance gains are obligatory or even useful. Furthermore, if a network server is highly loaded with network traffic, its response time may be slower but will (in most cases) still succeed before it times out (hits its deadline). Hence, such a network server would not be considered a real-time system: temporal failures (delays, time-outs, etc.) are typically small and compartmentalized (limited in effect) but are not catastrophic failures. In a real-time system, such as the FTSE 100 Index, a slow-down beyond limits would often be considered catastrophic in its application context. The most important requirement of a real-time system is consistent output, not high throughput.
Some kinds of software, such as many chess-playing programs, can fall into either category. For instance, a chess program designed to play in a tournament with a clock will need to decide on a move before a certain deadline or lose the game, and is therefore a real-time computation, but a chess program that is allowed to run indefinitely before moving is not. In both of these cases, however, high performance is desirable: the more work a tournament chess program can do in the allotted time, the better its moves will be, and the faster an unconstrained chess program runs, the sooner it will be able to move. This example also illustrates the essential difference between real-time computations and other computations: if the tournament chess program does not make a decision about its next move in its allotted time it loses the game—i.e., it fails as a real-time computation—while in the other scenario, meeting the deadline is assumed not to be necessary. High-performance is indicative of the amount of processing that is performed in a given amount of time, whereas real-time is the ability to get done with the processing to yield a useful output in the available time.
The term "near real-time" or "nearly real-time" (NRT), in telecommunications and computing, refers to the time delay introduced, by automated data processing or network transmission, between the occurrence of an event and the use of the processed data, such as for display or feedback and control purposes. For example, a near-real-time display depicts an event or situation as it existed at the current time minus the processing time, as nearly the time of the live event.
The distinction between the terms "near real time" and "real time" is somewhat nebulous and must be defined for the situation at hand. The term implies that there are no significant delays. In many cases, processing described as "real-time" would be more accurately described as "near real-time".
Near real-time also refers to delayed real-time transmission of voice and video. It allows playing video images, in approximately real-time, without having to wait for an entire large video file to download. Incompatible databases can export/import to common flat files that the other database can import/export on a scheduled basis so that they can sync/share common data in "near real-time" with each other.
The distinction between "near real-time" and "real-time" varies, and the delay is dependent on the type and speed of the transmission. The delay in near real-time is typically of the order of several seconds to several minutes.
Several methods exist to aid the design of real-time systems, an example of which is MASCOT, an old but very successful method which represents the concurrent structure of the system. Other examples are HOOD, Real-Time UML, AADL, the Ravenscar profile, and Real-Time Java. | https://en.wikipedia.org/wiki?curid=25767 |
Ruby (programming language)
Ruby is an interpreted, high-level, general-purpose programming language. It was designed and developed in the mid-1990s by Yukihiro "Matz" Matsumoto in Japan.
Ruby is dynamically typed and uses garbage collection. It supports multiple programming paradigms, including procedural, object-oriented, and functional programming. According to the creator, Ruby was influenced by Perl, Smalltalk, Eiffel, Ada, BASIC, and Lisp.
Matsumoto has said that Ruby was conceived in 1993. In a 1999 post to the "ruby-talk" mailing list, he describes some of his early ideas about the language:
Matsumoto describes the design of Ruby as being like a simple Lisp language at its core, with an object system like that of Smalltalk, blocks inspired by higher-order functions, and practical utility like that of Perl.
The name "Ruby" originated during an online chat session between Matsumoto and Keiju Ishitsuka on February 24, 1993, before any code had been written for the language. Initially two names were proposed: "Coral" and "Ruby". Matsumoto chose the latter in a later e-mail to Ishitsuka. Matsumoto later noted a factor in choosing the name "Ruby" – it was the birthstone of one of his colleagues.
The first public release of Ruby 0.95 was announced on Japanese domestic newsgroups on December 21, 1995. Subsequently, three more versions of Ruby were released in two days. The release coincided with the launch of the Japanese-language "ruby-list" mailing list, which was the first mailing list for the new language.
Already present at this stage of development were many of the features familiar in later releases of Ruby, including object-oriented design, classes with inheritance, mixins, iterators, closures, exception handling and garbage collection.
Following the release of Ruby 0.95 in 1995, several stable versions of Ruby were released in the following years:
In 1997, the first article about Ruby was published on the Web. In the same year, Matsumoto was hired by netlab.jp to work on Ruby as a full-time developer.
In 1998, the Ruby Application Archive was launched by Matsumoto, along with a simple English-language homepage for Ruby.
In 1999, the first English language mailing list "ruby-talk" began, which signaled a growing interest in the language outside Japan. In this same year, Matsumoto and Keiju Ishitsuka wrote the first book on Ruby, "The Object-oriented Scripting Language Ruby" (オブジェクト指向スクリプト言語 Ruby), which was published in Japan in October 1999. It would be followed in the early 2000s by around 20 books on Ruby published in Japanese.
By 2000, Ruby was more popular than Python in Japan. In September 2000, the first English language book "Programming Ruby" was printed, which was later freely released to the public, further widening the adoption of Ruby amongst English speakers. In early 2002, the English-language "ruby-talk" mailing list was receiving more messages than the Japanese-language "ruby-list", demonstrating Ruby's increasing popularity in the non-Japanese speaking world.
Ruby 1.8 was initially released in August 2003, was stable for a long time, and was retired in June 2013. Although deprecated, there is still code based on it. Ruby 1.8 is only partially compatible with Ruby 1.9.
Ruby 1.8 has been the subject of several industry standards. The language specifications for Ruby were developed by the Open Standards Promotion Center of the Information-Technology Promotion Agency (a Japanese government agency) for submission to the Japanese Industrial Standards Committee (JISC) and then to the International Organization for Standardization (ISO). It was accepted as a Japanese Industrial Standard (JIS X 3017) in 2011 and an international standard (ISO/IEC 30170) in 2012.
Around 2005, interest in the Ruby language surged in tandem with Ruby on Rails, a web framework written in Ruby. Rails is frequently credited with increasing awareness of Ruby.
Ruby 1.9 was released on Christmas Day in 2007. Effective with Ruby 1.9.3, released October 31, 2011, Ruby switched from being dual-licensed under the Ruby License and the GPL to being dual-licensed under the Ruby License and the two-clause BSD license. Adoption of 1.9 was slowed by changes from 1.8 that required many popular third party gems to be rewritten.
Ruby 1.9 introduces many significant changes over the 1.8 series. Examples:
Ruby 1.9 has been obsolete since February 23, 2015, and it will no longer receive bug and security fixes. Users are advised to upgrade to a more recent version.
Ruby 2.0 added several new features, including:
Ruby 2.0 is intended to be fully backward compatible with Ruby 1.9.3. As of the official 2.0.0 release on February 24, 2013, there were only five known (minor) incompatibilities.
It has been obsolete since February 22, 2016, and it will no longer receive bug and security fixes. Users are advised to upgrade to a more recent version.
Ruby 2.1.0 was released on Christmas Day in 2013. The release includes speed-ups, bugfixes, and library updates.
Starting with 2.1.0, Ruby's versioning policy is more like semantic versioning. Although similar, Ruby's versioning policy is not compatible with semantic versioning:
Semantic versioning also provides additional labels for pre-release and build metadata as extensions to the MAJOR.MINOR.PATCH format; these are not available in Ruby's versioning scheme.
Ruby 2.1 has been obsolete since April 1, 2017, and it will no longer receive bug and security fixes. Users are advised to upgrade to a more recent version.
Ruby 2.2.0 was released on Christmas Day in 2014. The release includes speed-ups, bugfixes, and library updates and removes some deprecated APIs. Most notably, Ruby 2.2.0 introduces changes to memory handling: an incremental garbage collector, support for garbage collection of symbols, and the option to compile directly against jemalloc. It also contains experimental support for using vfork(2) with system() and spawn(), and added support for the Unicode 7.0 specification.
Features that were made obsolete or removed include callcc, the DL library, Digest::HMAC, lib/rational.rb, lib/complex.rb, GServer, Logger::Application as well as various C API functions.
Ruby 2.2 has been obsolete since April 1, 2018, and it will no longer receive bug and security fixes. Users are advised to upgrade to a more recent version.
Ruby 2.3.0 was released on Christmas Day in 2015. A few notable changes include:
The 2.3 branch also includes many performance improvements, updates, and bugfixes including changes to Proc#call, Socket and IO use of exception keywords, Thread#name handling, default passive Net::FTP connections, and Rake being removed from stdlib.
Ruby 2.4.0 was released on Christmas Day in 2016. A few notable changes include:
The 2.4 branch also includes performance improvements to the hash table implementation, Array#max, Array#min, and instance variable access.
Ruby 2.5.0 was released on Christmas Day in 2017. A few notable changes include:
The release also brings many performance improvements, such as faster block passing (about three times faster), faster mutexes, faster ERB templates, and improvements to some concatenation methods.
Ruby 2.6.0 was released on Christmas Day in 2018. A few notable changes include:
Ruby 2.7.0 was released on Christmas Day in 2019. A few notable changes include:
Matsumoto has said that Ruby is designed for programmer productivity and fun, following the principles of good user interface design. At a Google Tech Talk in 2008 Matsumoto further stated, "I hope to see Ruby help every programmer in the world to be productive, and to enjoy programming, and to be happy. That is the primary purpose of Ruby language." He stresses that systems design needs to emphasize human, rather than computer, needs:
Ruby is said to follow the principle of least astonishment (POLA), meaning that the language should behave in such a way as to minimize confusion for experienced users. Matsumoto has said his primary design goal was to make a language that he himself enjoyed using, by minimizing programmer work and possible confusion. He has said that he had not applied the principle of least astonishment to the design of Ruby, but nevertheless the phrase has come to be closely associated with the Ruby programming language. The phrase has itself been a source of surprise, as novice users may take it to mean that Ruby's behaviors try to closely match behaviors familiar from other languages. In a May 2005 discussion on the newsgroup comp.lang.ruby, Matsumoto attempted to distance Ruby from POLA, explaining that because any design choice will be surprising to someone, he uses a personal standard in evaluating surprise. If that personal standard remains consistent, there would be few surprises for those familiar with the standard.
Matsumoto defined it this way in an interview:
Ruby is object-oriented: every value is an object, including classes and instances of types that many other languages designate as primitives (such as integers, booleans, and "null"). Variables always hold references to objects. Every function is a method and methods are always called on an object. Methods defined at the top level scope become methods of the Object class. Since this class is an ancestor of every other class, such methods can be called on any object. They are also visible in all scopes, effectively serving as "global" procedures. Ruby supports inheritance with dynamic dispatch, mixins and singleton methods (belonging to, and defined for, a single instance rather than being defined on the class). Though Ruby does not support multiple inheritance, classes can import modules as mixins.
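A brief illustrative sketch of this uniform object model (example code added here for illustration, not from the original article; results shown as comments assume a recent Ruby such as 2.4 or later):
42.class             # => Integer   (even literals are objects)
nil.class            # => NilClass  ("null" is an object too)
'abc'.length         # => 3         (methods are always called on an object)
def greet; 'hi'; end # defined at the top level, so it becomes a private method of Object
greet                # => "hi", callable from any scope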
Ruby has been described as a multi-paradigm programming language: it allows procedural programming (defining functions/variables outside classes makes them part of the root, 'self' Object), with object orientation (everything is an object) or functional programming (it has anonymous functions, closures, and continuations; statements all have values, and functions return the last evaluation). It has support for introspection, reflection and metaprogramming, as well as support for interpreter-based threads. Ruby features dynamic typing, and supports parametric polymorphism.
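As a rough sketch of this multi-paradigm flexibility (illustrative code, not from the original article), the same doubling operation can be written in a procedural, object-oriented, or functional style:
# procedural style: a bare method at the top level
def double(x); x * 2; end
double(21)                 # => 42
# object-oriented style: reopen Integer and add an instance method
class Integer
  def double; self * 2; end
end
21.double                  # => 42
# functional style: an anonymous function (lambda) stored in a variable
dbl = ->(x) { x * 2 }
dbl.call(21)               # => 42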
According to the Ruby FAQ, the syntax is similar to Perl and the semantics are similar to Smalltalk, but it differs greatly from Python.
The syntax of Ruby is broadly similar to that of Perl and Python. Class and method definitions are signaled by keywords, whereas code blocks can be defined by either keywords or braces. In contrast to Perl, variables are not obligatorily prefixed with a sigil. When used, the sigil changes the semantics of scope of the variable. For practical purposes there is no distinction between expressions and statements. Line breaks are significant and taken as the end of a statement; a semicolon may be equivalently used. Unlike Python, indentation is not significant.
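The scoping effect of the sigil can be sketched as follows (identifier names are illustrative, not from the article):
$count = 0            # $ sigil: global variable
COUNT  = 10           # capitalized name: constant
class Counter
  @@instances = 0     # @@ sigil: class variable shared across the class
  def initialize
    @value = 0        # @ sigil: instance variable of this object
    @@instances += 1
  end
end
count = 5             # no sigil: plain local variable
puts count; puts $count  # a line break ends a statement; a semicolon works equally well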
One of the differences from Python and Perl is that Ruby keeps all of its instance variables completely private to the class and only exposes them through accessor methods (attr_writer, attr_reader, etc.). Unlike the "getter" and "setter" methods of other languages like C++ or Java, accessor methods in Ruby can be created with a single line of code via metaprogramming; however, accessor methods can also be created in the traditional fashion of C++ and Java. As invocation of these methods does not require the use of parentheses, it is trivial to change an instance variable into a full function, without modifying a single line of calling code or having to do any refactoring, achieving functionality similar to C# and VB.NET property members.
Python's property descriptors are similar, but come with a trade-off in the development process. If one begins in Python by using a publicly exposed instance variable, and later changes the implementation to use a private instance variable exposed through a property descriptor, code internal to the class may need to be adjusted to use the private variable rather than the public property. Ruby's design forces all instance variables to be private, but also provides a simple way to declare setter and getter methods. This is in keeping with the idea that in Ruby, one never directly accesses the internal members of a class from outside the class; rather, one passes a message to the class and receives a response.
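A minimal sketch of this accessor style (the Temperature class and its attribute are hypothetical, added here for illustration): attr_accessor generates the reader and writer in one line, and the generated reader can later be replaced by a hand-written method without touching any calling code.
class Temperature
  attr_accessor :celsius       # generates both celsius and celsius= methods
end
t = Temperature.new
t.celsius = 25
t.celsius                      # => 25
class Temperature              # later, reopen the class and replace the generated reader
  def celsius
    (@celsius || 0).round(1)   # callers still write t.celsius, unchanged
  end
end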
See the Examples section below for samples of code demonstrating Ruby syntax.
The Ruby official distribution also includes irb, an interactive command-line interpreter that can be used to test code quickly. The following code fragment represents a sample session using irb:
$ irb
irb(main):001:0> puts 'Hello, World'
Hello, World
=> nil
irb(main):002:0> 1+2
=> 3
The following examples can be run in a Ruby shell such as Interactive Ruby Shell, or saved in a file and run from the command line by typing ruby followed by the name of the file.
Classic Hello world example:
puts 'Hello World!'
Some basic Ruby code:
-199.abs # => 199
'ice is nice'.length # => 11
'ruby is cool.'.index('u') # => 1
"Nice Day Isn't It?".downcase.split(").uniq.sort.join
Input:
print 'Please type name >'
name = gets.chomp
puts "Hello #{name}."
Conversions:
puts 'Give me a number'
number = gets.chomp
puts number.to_i
output_number = number.to_i + 1
puts output_number.to_s + ' is a bigger number.'
There are a variety of ways to define strings in Ruby.
The following assignments are equivalent:
a = "\nThis is a double-quoted string\n"
a = %/\nThis is a double-quoted string\n/
a = <<-BLOCK

This is a double-quoted string
BLOCK
Strings support variable interpolation:
var = 3.14159
"pi is #{var}"
=> "pi is 3.14159"
The following assignments are equivalent and produce raw strings:
a = 'This is a single-quoted string'
a = %q{This is a single-quoted string}
Constructing and using an array:
a = [1, 'hi', 3.14, 1, 2, [4, 5]]
a[2] # => 3.14
a.[](2) # => 3.14
a.reverse # => [[4, 5], 2, 1, 3.14, 'hi', 1]
a.flatten.uniq # => [1, 'hi', 3.14, 2, 4, 5]
Constructing and using an associative array (in Ruby, called a "hash"):
hash = { :water => 'wet', :fire => 'hot' } # a hash literal; symbol keys map to string values
puts hash[:fire] # prints "hot"
hash.each_pair do |key, value| # or: hash.each do |key, value|
  puts "#{key} is #{value}"
end # prints "water is wet" and "fire is hot"
hash.delete :water # deletes the pair :water => 'wet' and returns "wet"
If statement:
if rand(100).even?
  puts "It's even"
else
  puts "It's odd"
end
The two syntaxes for creating a code block:
1.times { puts 'Hello, World!' } # the brace form
1.times do                       # the do ... end form
  puts 'Hello, World!'
end
A code block can be passed to a method as an optional block argument. Many built-in methods have such arguments:
File.open('file.txt', 'w') do |file| # 'w' denotes "write mode"
  file.puts 'Wrote some text.'
end # file is automatically closed here

File.readlines('file.txt').each do |line|
  puts line
end
Parameter-passing a block to be a closure:
def remember(&a_block)
  @block = a_block
end
remember {|name| puts "Hello, #{name}!"}
@block.call('Jon') # => "Hello, Jon!"
Creating an anonymous function:
->(arg) {puts arg} # introduced in Ruby 1.9
Returning closures from a method:
def create_set_and_get(initial_value=0) # note the default value of 0
  closure_value = initial_value
  [ Proc.new {|x| closure_value = x }, Proc.new { closure_value } ]
end
setter, getter = create_set_and_get # returns two values
setter.call(21)
getter.call # => 21
def create_set_and_get(closure_value=0) # the parameter itself can serve as the closed-over variable
  [ Proc.new {|x| closure_value = x }, Proc.new { closure_value } ]
end
Yielding the flow of program control to a block that was provided at calling time:
def use_hello
  yield 'hello'
end
use_hello {|string| puts string} # => 'hello'
Iterating over enumerations and arrays using blocks:
array = [1, 'hi', 3.14]
array.each {|item| puts item } # prints each element on its own line
(3..6).each {|num| puts num }  # prints 3, 4, 5 and 6 (a two-dot range includes the end value)
(3...6).each {|num| puts num } # prints 3, 4 and 5 (a three-dot range excludes the end value)
A method such as inject can accept both a parameter and a block. The inject method iterates over each member of a list, performing some function on it while retaining an aggregate. This is analogous to the fold function in functional programming languages. For example:
[1,3,5].inject(10) {|sum, element| sum + element} # => 19
On the first pass, the block receives 10 (the argument to inject) as sum, and 1 (the first element of the array) as element. This returns 11, which then becomes sum on the next pass. It is added to 3 to get 14, which is then added to 5 on the third pass, to finally return 19.
Using an enumeration and a block to square the numbers 1 to 10 (using a "range"):
(1..10).collect {|x| x*x} # => [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]
Or invoke a method on each item (map is a synonym for collect):
(1..5).map(&:to_f) # => [1.0, 2.0, 3.0, 4.0, 5.0]
The following code defines a class named Person. In addition to initialize, the usual constructor to create new objects, it has two methods: one to override the <=> comparison operator (so sort can sort by age) and the other to override the to_s method (so puts can format its output). Here, attr_reader is an example of metaprogramming in Ruby: attr_accessor defines getter and setter methods of instance variables, but attr_reader only getter methods. The last evaluated statement in a method is its return value, allowing the omission of an explicit return statement.
class Person
  attr_reader :name, :age
  def initialize(name, age); @name, @age = name, age; end
  def <=>(person); age <=> person.age; end # the comparison operator used for sorting
  def to_s; "#{name} (#{age})"; end
end
group = [Person.new('Bob', 33), Person.new('Chris', 16), Person.new('Ash', 23)]
puts group.sort.reverse
The preceding code prints three names in reverse age order:
Bob (33)
Ash (23)
Chris (16)
Person is a constant and is a reference to a Class object.
In Ruby, classes are never closed: methods can always be added to an existing class. This applies to "all" classes, including the standard, built-in classes. All that is needed is to open up a class definition for an existing class, and the new contents specified will be added to the existing contents. A simple example of adding a new method to the standard library's Time class:
class Time
  def yesterday
    self - 86400 # seconds in a day
  end
end
today = Time.now # => 2013-09-03 16:09:37 +0300
yesterday = today.yesterday # => 2013-09-02 16:09:37 +0300
Adding methods to previously defined classes is often called monkey-patching. If performed recklessly, the practice can lead to behavior collisions with unexpected results, as well as to code scalability problems.
Since Ruby 2.0 it has been possible to use refinements to reduce the potentially negative consequences of monkey-patching, by limiting the scope of the patch to particular areas of the code base.
module RelativeTimeExtensions
  refine(Time) { def half_a_day_ago; self - 43200; end }
end
module MyModule
  using RelativeTimeExtensions # the refinement is active only within this lexical scope
  Time.now.half_a_day_ago
end
An exception is raised with a raise call:
raise
An optional message can be added to the exception:
raise "This is a message"
Exceptions can also be specified by the programmer:
raise ArgumentError, "Illegal arguments!"
Alternatively, an exception instance can be passed to the raise method:
raise ArgumentError.new("Illegal arguments!")
This last construct is useful when raising an instance of a custom exception class featuring a constructor that takes more than one argument:
class ParseError < Exception
  def initialize(input, line, pos)
    super "Could not parse '#{input}' at line #{line}, position #{pos}"
  end
end
raise ParseError.new("Foo", 3, 9)
Exceptions are handled by the rescue clause. Such a clause can catch exceptions that inherit from StandardError. Other flow control keywords that can be used when handling exceptions are else and ensure:
begin
  # do something
rescue
  # handle exception
else
  # do this if no exception was raised
ensure
  # do this whether or not an exception was raised
end
It is a common mistake to attempt to catch all exceptions with a simple rescue clause. To catch all exceptions one must write:
begin
  # ...
rescue Exception
  # handling code; catches everything, including SystemExit and Interrupt
end
Or catch particular exceptions:
begin
  # ...
rescue RuntimeError
  # handle only RuntimeError and its subclasses
end
It is also possible to specify that the exception object be made available to the handler clause:
begin
  # ...
rescue RuntimeError => e
  # the exception object is available in e
  puts e.message
end
Alternatively, the most recent exception is stored in the magic global $!.
Several exceptions can also be caught:
begin
  # ...
rescue RuntimeError, Timeout::Error => e
  # handle both RuntimeError and Timeout::Error
end
Ruby code can programmatically modify, at runtime, aspects of its own structure that would be fixed in more rigid languages, such as class and method definitions. This sort of metaprogramming can be used to write more concise code and effectively extend the language.
For example, the following Ruby code generates new methods for the built-in String class, based on a list of colors. The methods wrap the contents of the string with an HTML tag styled with the respective color.
COLORS = { black: "000", red: "f00", green: "0f0", blue: "00f" } # abbreviated color list
class String
  COLORS.each do |color, code|
    define_method("in_#{color}") { "<span style=\"color: ##{code}\">#{self}</span>" }
  end
end
The generated methods could then be used like this:
"Hello, World!".in_blue
To implement the equivalent in many other languages, the programmer would have to write each method (in_black, in_red, in_green, etc.) separately.
Ruby metaprogramming has a number of other possible uses beyond generating methods, such as intercepting method calls or defining classes at runtime.
The original Ruby interpreter is often referred to as Matz's Ruby Interpreter or MRI. This implementation is written in C and uses its own Ruby-specific virtual machine.
The standardized and retired Ruby 1.8 implementation was written in C, as a single-pass interpreted language.
Starting with Ruby 1.9, and continuing with Ruby 2.x and above, the official Ruby interpreter has been YARV ("Yet Another Ruby VM"), and this implementation has superseded the slower virtual machine used in previous releases of MRI.
There are a number of alternative implementations of Ruby, including JRuby, Rubinius, and mruby. Each takes a different approach, with JRuby and Rubinius providing just-in-time compilation and mruby also providing ahead-of-time compilation.
Ruby has three major alternate implementations:
Other Ruby implementations include:
Other now defunct Ruby implementations were:
The maturity of Ruby implementations tends to be measured by their ability to run the Ruby on Rails (Rails) framework, because it is complex to implement and uses many Ruby-specific features. The point when a particular implementation achieves this goal is called "the Rails singularity". The reference implementation, JRuby, and Rubinius are all able to run Rails unmodified in a production environment.
Matsumoto originally did Ruby development on the 4.3BSD-based Sony NEWS-OS 3.x, but later migrated his work to SunOS 4.x, and finally to Linux.
By 1999, Ruby was known to work across many different operating systems, including NEWS-OS, SunOS, AIX, SVR4, Solaris, NEC UP-UX, NeXTSTEP, BSD, Linux, Mac OS, DOS, Windows, and BeOS.
Modern Ruby versions and implementations are available on many operating systems, such as Linux, BSD, Solaris, AIX, macOS, Windows, Windows Phone, Windows CE, Symbian OS, BeOS, and IBM i.
The Ruby programming language is supported across a number of cloud hosting platforms like Jelastic, Heroku, Google Cloud Platform and others.
RubyGems is Ruby's package manager. A Ruby package is called a "gem" and can easily be installed via the command line. Most gems are libraries, though a few exist that are applications, such as IDEs. There are over 10,000 Ruby gems hosted on RubyGems.org.
Many new and existing Ruby libraries are hosted on GitHub, a service that offers version control repository hosting for Git.
The Ruby Application Archive, which hosted applications, documentation, and libraries for Ruby programming, was maintained until 2013, when its function was transferred to RubyGems.
| https://en.wikipedia.org/wiki?curid=25768 |
Render farm
A render farm is a high-performance computer system, e.g. a computer cluster, built to render computer-generated imagery (CGI), typically for film and television visual effects.
The term "render farm" was born during the production of the Autodesk 3D Studio animated short "The Bored Room" in July 1990 when, to meet an unrealistic deadline, a room filled with Compaq 386 computers was configured to do the rendering. At the time the system wasn't networked so each computer had to be set up by hand to render a specific animation sequence. The rendered images would then be 'harvested' via a rolling platform to a large-format optical storage drive, then loaded frame by frame to a Sony CRV disc.
The Autodesk technician assigned to manage this early render farm (Jamie Clay) had a regular habit of wearing farmer's overalls, and the product manager for the software (Bob Bennett) joked that what Clay was doing was farming the frames; at that moment he named the collection of computers a "render farm". In the second release of the software, Autodesk introduced network rendering, making the task of running a render farm significantly easier. A behind-the-scenes video of "The Bored Room" does not show Clay in the overalls but does give a glimpse of the production environment.
A render farm is different from a render wall, which is a networked, tiled display used for real-time rendering. The rendering of images is a highly parallelizable activity, as frames and sometimes tiles can be calculated independently of the others, with the main communication between processors being the upload of the initial source material, such as models and textures, and the download of the finished images.
Over the decades, advances in computer capability have allowed an image to take less time to render. However, the increased computing power tends to be absorbed by the demand for state-of-the-art image quality. While simple images can be produced rapidly, more realistic and complicated higher-resolution images can now be produced in more reasonable amounts of time. The time spent producing images can be limited by production time-lines and deadlines, and the desire to create high-quality work drives the need for increased computing power, rather than simply wanting the same images created faster. Projects such as the Big and Ugly Rendering Project have been available for rendering images using Blender across both widely distributed networks and local networks.
To manage large farms, one must introduce a "queue manager" that automatically distributes processes to the many processors. Each "process" could be the rendering of one full image, a few images, or even a sub-section (or "tile") of an image. The software is typically a client–server package that facilitates communication between the processors and the queue manager, although some queues have no central manager. Some common features of queue managers are: re-prioritization of the queue, management of software licenses, and algorithms to best optimize throughput based on various types of hardware in the farm. Software licensing handled by a queue manager might involve dynamic allocation of licenses to available CPUs or even cores within CPUs.
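As a very rough sketch of the idea (added here for illustration and far simpler than the production queue managers described above; the frame count and number of worker threads are arbitrary), a queue manager can be modeled as a shared queue from which idle workers pull frames to render:
frames = Queue.new                          # Thread::Queue from core Ruby
(1..240).each { |f| frames << f }           # one job per frame in this toy model
workers = 4.times.map do |id|
  Thread.new do
    loop do
      frame = frames.pop(true) rescue break # non-blocking pop; stop when the queue is drained
      sleep 0.01                            # stand-in for the actual rendering work
      puts "worker #{id} rendered frame #{frame}"
    end
  end
end
workers.each(&:join)                        # wait until every frame has been handled
A real queue manager would additionally handle prioritization, retries, heterogeneous hardware, and license allocation, as noted above.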
A tongue-in-cheek job title for systems engineers who work primarily in the maintenance and monitoring of a render farm is a "render wrangler" to further the "farm" theme. This job title can be seen in film credits.
Beyond on-site render farms, cloud-based render farm options have been facilitated by the rise of high-speed Internet access. Many cloud computing services, including some dedicated to rendering, offer render farm services which bill only for processor time used. Because the cost and processing time required to complete a render are hard to predict, render farms often bill by GHz-hours of processor time used. Those considering outsourcing their renders to a farm or to the cloud can do a number of things to improve their predictions and reduce their costs. These services eliminate the need for a customer to build and maintain their own rendering solution. Another phenomenon is collaborative rendering, in which users join a network of animators who contribute their processing power to the group. However, this has technological and security limitations. | https://en.wikipedia.org/wiki?curid=25774 |
Robert Borden
Sir Robert Laird Borden (June 26, 1854 – June 10, 1937) was a Canadian lawyer and politician who served as the eighth prime minister of Canada, in office from 1911 to 1920. He is best known for his leadership of Canada during World War I.
Borden was born in Grand-Pré, Nova Scotia. He worked as a schoolteacher for a period and then served his articles of clerkship at a Halifax law firm. He was called to the bar in 1878, and soon became one of Nova Scotia's most prominent barristers. Borden was elected to the House of Commons of Canada in 1896, representing the Conservative Party. He replaced Charles Tupper as party leader in 1901, and became prime minister after the party's victory at the 1911 federal election.
As prime minister, Borden led Canada through World War I and its immediate aftermath. His government passed the "War Measures Act", created the Canadian Expeditionary Force, and eventually introduced compulsory military service, which sparked the 1917 conscription crisis. On the home front, it dealt with the consequences of the Halifax Explosion, introduced women's suffrage for federal elections, and used the North-West Mounted Police to break up the 1919 Winnipeg general strike. For the 1917 federal election (the first in six years), Borden created the Unionist Party, an amalgam of Conservatives and pro-conscription Liberals; his government was re-elected with an overwhelming majority.
Borden retired from politics in 1920, having accepted a knighthood in 1915 – the last Canadian prime minister to be knighted. He was also the last prime minister born before Confederation, and is the most recent Nova Scotian to hold the office. His portrait has appeared on Canadian one hundred-dollar notes produced since 1976, but in late 2016 the government announced Borden's image would be removed during the next redesign.
Robert Laird Borden was born and educated in Grand-Pré, Nova Scotia, a farming community at the eastern end of the Annapolis Valley, where his great-grandfather Perry Borden, Sr. of Tiverton, Rhode Island, had taken up Acadian land in 1760 as one of the New England Planters. The Borden family had immigrated from Headcorn, Kent, England, to New England in the 1600s. Also arriving in this group was a great-great-grandfather, Robert Denison, who had come from Connecticut at about the same time. Perry had accompanied his father, Samuel Borden, the chief surveyor chosen by the government of Massachusetts to survey the former Acadian land and draw up new lots for the Planters in Nova Scotia. Through the marriage of his patrilineal ancestor Richard Borden to Innocent Cornell, Borden is descendant from Thomas Cornell of Portsmouth, Rhode Island.
Borden's father Andrew Borden was judged by his son to be "a man of good ability and excellent judgement", of a "calm, contemplative and philosophical" turn of mind, but "he lacked energy and had no great aptitude for affairs". His mother Eunice Jane Laird was more driven, possessing "very strong character, remarkable energy, high ambition and unusual ability". Her ambition was transmitted to her first-born child, who applied himself to his studies while assisting his parents with the farm work he found so disagreeable. His cousin Sir Frederick Borden was a prominent Liberal politician.
Robert Borden was the last Canadian Prime Minister born before Confederation.
From 1868 to 1874, he worked as a teacher in Grand-Pré and Matawan, New Jersey. Seeing no future in teaching, he returned to Nova Scotia in 1874. Despite having no formal university education, he went to article for four years at a Halifax law firm. In August 1878, he was called to the Nova Scotia Bar, placing first in the bar examinations. Borden went to Kentville, Nova Scotia, as the junior partner of the Conservative lawyer John P. Chipman. In 1880, he was inducted into the Freemasons – St Andrew's lodge #1.
In 1882, he was asked by Wallace Graham to move to Halifax and join the Conservative law firm headed by Graham and Charles Hibbert Tupper. In the Autumn of 1889, when he was only 35, Borden became the senior partner following the departure of Graham and Tupper for the bench and politics, respectively. His financial future guaranteed, on September 25, 1889, he married Laura Bond (1863–1940), the daughter of a Halifax hardware merchant. They would have no children. In 1894, he bought a large property and home on the south side of Quinpool Road, which the couple called "Pinehurst". In 1893, Borden successfully argued the first of two cases which he took to the Judicial Committee of the Privy Council. He represented many of the important Halifax businesses, and sat on the boards of Nova Scotian companies including the Bank of Nova Scotia and the Crown Life Insurance Company. In 1896, he became President of the Nova Scotia Barristers' Society, and took the initiative in organizing the founding meetings of the Canadian Bar Association in Montreal within the same year. By the time he was prevailed upon to enter politics, Borden had what some judged to be the largest legal practice in the Maritime Provinces, and had become a wealthy man.
Borden was a Liberal until he broke with the party in 1891 over the issue of Reciprocity.
He was elected to Parliament in the 1896 federal election as a Conservative and in 1901 was selected by the Conservative caucus to succeed Sir Charles Tupper as leader of the Conservative Party. He was defeated in his Halifax seat in the 1904 federal election and re-entered the House of Commons the next year via a by-election in Carleton. Over the next decade he worked to rebuild the party and establish a reform policy, the Halifax Platform of 1907 which he described as "the most advanced and progressive policy ever put forward in Federal affairs". It called for reform of the Senate and the civil service, a more selective immigration policy, free rural mail delivery, and government regulation of telegraphs, telephones, and railways and eventually national ownership of telegraphs and telephones. Despite his efforts, his party lost the 1908 federal election to Wilfrid Laurier's Liberals. Borden was however elected again for Halifax. His party's fortunes turned around in the 1911 federal election, however, when the Conservatives successfully campaigned against Laurier's proposals for a Reciprocity (free trade) agreement with the United States. Borden countered with a revised version of John A. Macdonald's National Policy and appeals of loyalty to the British Empire and ran on the slogan "Canadianism or Continentalism". In British Columbia, the party ran on the slogan "A White Canada", playing to the fears of British Columbians that resented the increasing presence of cheap Asian labour and the resulting depression in wages. In Quebec, concurrently, Henri Bourassa led a campaign against what he saw as Laurier's capitulation to British imperialism, playing a part in the defeat of Laurier's government and the election of Borden's Tories.
Borden served as Prime Minister for the duration of the 12th Parliament of Canada, and for most of the 13th Parliament of Canada, before his retirement from active political life in July 1920.
As Prime Minister of Canada during the First World War, he transformed his government to a wartime administration, passing the "War Measures Act" in 1914. Borden committed Canada to provide half a million soldiers for the war effort. However, the flow of volunteers quickly dried up once Canadians realized there would be no quick end to the war. Borden's determination to meet that huge commitment led to the "Military Service Act" and the Conscription Crisis of 1917, which split the country on linguistic lines. In 1917 Borden recruited members of the Liberals (with the notable exception of leader Wilfrid Laurier) to create a Unionist government. The 1917 election saw the "Government" candidates (including a number of Liberal-Unionists) crush the Opposition "Laurier Liberals" in English Canada, resulting in a large parliamentary majority for Borden.
Sir Robert Borden pledged himself during the campaign to equal suffrage for women. With his return to power, he introduced a bill in 1918 for extending the franchise to women. This passed without division.
The war effort also enabled Canada to assert itself as an independent power. Borden wanted to create a single Canadian army, rather than have Canadian soldiers split up and assigned to British divisions as had happened during the Boer War. Sam Hughes, the Minister of Militia, generally ensured that Canadians were well-trained and prepared to fight in their own divisions, although with mixed results such as the Ross Rifle. Arthur Currie provided sensible leadership for the Canadian divisions in Europe, although they were still under overall British command. Nevertheless, Canadian troops proved themselves to be among the best in the world, fighting at the Somme, Ypres, Passchendaele, and especially at the Battle of Vimy Ridge.
During Borden's first term as Prime Minister, the National Research Council of Canada was established in 1916.
In world affairs, Borden played a crucial role (according to McMillan) in transforming the British Empire into a partnership of equal states, the Commonwealth of Nations, a term that was first discussed at an Imperial Conference in London during the war. Borden also introduced the first Canadian income tax under Income War Tax Act of 1917, which was then meant to be temporary but later became permanent.
Convinced that Canada had become a nation on the battlefields of Europe, Borden demanded that it have a separate seat at the Paris Peace Conference. This was initially opposed not only by Britain but also by the United States, which perceived such a delegation as an extra British vote. Borden responded by pointing out that since Canada had lost a far larger proportion of its men compared to the US in the war (although not more in absolute numbers), Canada at least had the right to the representation of a "minor" power. British Prime Minister David Lloyd George eventually relented, and convinced the reluctant Americans to accept the presence of separate Canadian, Indian, Australian, Newfoundland, New Zealand and South African delegations. Despite this, Borden boycotted the opening ceremony, protesting at the precedence given to the prime minister of the much smaller Newfoundland over him.
Not only did Borden's persistence allow him to represent Canada in Paris as a nation, it also ensured that each of the dominions could sign the Treaty of Versailles in its own right and receive a separate membership in the League of Nations. During the conference, Borden tried to act as an intermediary between the United States and other members of the British Empire delegation, particularly Australia and New Zealand over the issue of the League of Nations Mandate. Borden also discussed with Lloyd George the possibility of Canada taking over the administration of Belize and the West Indies, but no agreement was reached.
At Borden's insistence, the treaty was ratified by the Canadian Parliament. Borden was the last Prime Minister to be knighted after the House of Commons indicated its desire for the discontinuation of the granting of any future titles to Canadians in 1919 with the adoption of the Nickle Resolution.
In 1919, Borden approved the use of troops to put down the Winnipeg general strike, which was feared to be the result of Bolshevik agitation from the Soviet Union.
Sir Robert Borden retired from office in 1920. He was the Chancellor of Queen's University from 1924 to 1930 and also was Chancellor of McGill University from 1918 to 1920 while still Prime Minister. Borden also served as Vice-President of The Champlain Society between 1923 and 1925. He was the Society's first Honorary President between 1925 and 1938. Borden's successor Arthur Meighen was defeated by the new Liberal leader William Lyon Mackenzie King in the 1921 election. Nevertheless, Borden would go on to represent Canada once more on the international stage when he attended the Washington Naval Conference in 1922 and signed the resulting arms reduction treaty on Canada's behalf.
At the time of his death, Borden stood as president of two financial institutions: Barclays Bank of Canada and the Crown Life Insurance Company. Borden died on June 10, 1937, in Ottawa and is buried in the Beechwood Cemetery marked by a simple stone cross.
Robert Laird Borden married Laura Bond, youngest daughter of the late T. H. Bond, in September 1889. She served as president of the Local Council of Women of Halifax until her resignation in 1901. She served as President of the Aberdeen Association, Vice-President of the Women's Work Exchange in Halifax, and Corresponding Secretary of the Associated Charities of the United States.
Borden chose several jurists to sit as justices of the Supreme Court of Canada.
| https://en.wikipedia.org/wiki?curid=25776 |
Robot
A robot is a machine—especially one programmable by a computer—capable of carrying out a complex series of actions automatically. Robots can be guided by an external control device or the control may be embedded within. Robots may be constructed along the lines of the human form, but most robots are machines designed to perform a task with no regard to their aesthetics.
Robots can be autonomous or semi-autonomous and range from humanoids such as Honda's "Advanced Step in Innovative Mobility" (ASIMO) and TOSY's "TOSY Ping Pong Playing Robot" (TOPIO) to industrial robots, medical operating robots, patient assist robots, dog therapy robots, collectively programmed "swarm" robots, UAV drones such as General Atomics MQ-1 Predator, and even microscopic nano robots. By mimicking a lifelike appearance or automating movements, a robot may convey a sense of intelligence or thought of its own. Autonomous things are expected to proliferate in the coming decade, with home robotics and the autonomous car as some of the main drivers.
The branch of technology that deals with the design, construction, operation, and application of robots, as well as computer systems for their control, sensory feedback, and information processing is robotics. These technologies deal with automated machines that can take the place of humans in dangerous environments or manufacturing processes, or resemble humans in appearance, behavior, or cognition. Many of today's robots are inspired by nature contributing to the field of bio-inspired robotics. These robots have also created a newer branch of robotics: soft robotics.
From the time of ancient civilization there have been many accounts of user-configurable automated devices and even automata resembling animals and humans, designed primarily as entertainment. As mechanical techniques developed through the Industrial age, there appeared more practical applications such as automated machines, remote-control and wireless remote-control.
The term comes from a Slavic root, "robot-", with meanings associated with labor. The word 'robot' was first used to denote a fictional humanoid in a 1920 Czech-language play "R.U.R." "(Rossumovi Univerzální Roboti - Rossum's Universal Robots)" by Karel Čapek, though it was Karel's brother Josef Čapek who was the word's true inventor.
Electronics evolved into the driving force of development with the advent of the first electronic autonomous robots created by William Grey Walter in Bristol, England in 1948, as well as Computer Numerical Control (CNC) machine tools in the late 1940s by John T. Parsons and Frank L. Stulen. The first commercial, digital and programmable robot was built by George Devol in 1954 and was named the Unimate. It was sold to General Motors in 1961 where it was used to lift pieces of hot metal from die casting machines at the Inland Fisher Guide Plant in the West Trenton section of Ewing Township, New Jersey.
Robots have replaced humans in performing repetitive and dangerous tasks which humans prefer not to do, or are unable to do because of size limitations, or which take place in extreme environments such as outer space or the bottom of the sea. There are concerns about the increasing use of robots and their role in society. Robots are blamed for rising technological unemployment as they replace workers in increasing numbers of functions. The use of robots in military combat raises ethical concerns. The possibilities of robot autonomy and potential repercussions have been addressed in fiction and may be a realistic concern in the future.
The word "robot" can refer to both physical robots and virtual software agents, but the latter are usually referred to as bots. There is no consensus on which machines qualify as robots but there is general agreement among experts, and the public, that robots tend to possess some or all of the following abilities and functions: accept electronic programming, process data or physical perceptions electronically, operate autonomously to some degree, move around, operate physical parts of itself or physical processes, sense and manipulate their environment, and exhibit intelligent behavior, especially behavior which mimics humans or other animals. Closely related to the concept of a "robot" is the field of Synthetic Biology, which studies entities whose nature is more comparable to beings than to machines.
The idea of automata originates in the mythologies of many cultures around the world. Engineers and inventors from ancient civilizations, including Ancient China, Ancient Greece, and Ptolemaic Egypt, attempted to build self-operating machines, some resembling animals and humans. Early descriptions of automata include the artificial doves of Archytas, the artificial birds of Mozi and Lu Ban, a "speaking" automaton by Hero of Alexandria, a washstand automaton by Philo of Byzantium, and a human automaton described in the "Lie Zi".
Many ancient mythologies, and most modern religions include artificial people, such as the mechanical servants built by the Greek god Hephaestus (Vulcan to the Romans), the clay golems of Jewish legend and clay giants of Norse legend, and Galatea, the mythical statue of Pygmalion that came to life. Since circa 400 BC, myths of Crete include Talos, a man of bronze who guarded the island from pirates.
In ancient Greece, the Greek engineer Ctesibius (c. 270 BC) "applied a knowledge of pneumatics and hydraulics to produce the first organ and water clocks with moving figures." In the 4th century BC, the Greek mathematician Archytas of Tarentum postulated a mechanical steam-operated bird he called "The Pigeon". Hero of Alexandria, a Greek mathematician and inventor, created numerous user-configurable automated devices, and described machines powered by air pressure, steam and water.
The 11th century Lokapannatti tells of how the Buddha's relics were protected by mechanical robots (bhuta vahana yanta), from the kingdom of Roma visaya (Rome); until they were disarmed by King Ashoka.
In ancient China, the 3rd-century text of the "Lie Zi" describes an account of humanoid automata, involving a much earlier encounter between Chinese emperor King Mu of Zhou and a mechanical engineer known as Yan Shi, an 'artificer'. Yan Shi proudly presented the king with a life-size, human-shaped figure of his mechanical 'handiwork' made of leather, wood, and artificial organs. There are also accounts of flying automata in the "Han Fei Zi" and other texts, which attributes the 5th century BC Mohist philosopher Mozi and his contemporary Lu Ban with the invention of artificial wooden birds ("ma yuan") that could successfully fly.
"Samarangana Sutradhara", a Sanskrit treatise by Bhoja (11th century), includes a chapter about the construction of mechanical contrivances (automata), including mechanical bees and birds, fountains shaped like humans and animals, and male and female dolls that refilled oil lamps, danced, played instruments, and re-enacted scenes from Hindu mythology.
The 13th-century Muslim scientist Ismail al-Jazari created several automated devices. He built automated moving peacocks driven by hydropower. He also invented the earliest known automatic gates, which were driven by hydropower, and created automatic doors as part of one of his elaborate water clocks. One of al-Jazari's humanoid automata was a waitress that could serve water, tea or drinks. The drink was stored in a tank with a reservoir from where the drink drips into a bucket and, after seven minutes, into a cup, after which the waitress appears out of an automatic door serving the drink. Al-Jazari invented a hand washing automaton incorporating a flush mechanism now used in modern flush toilets. It features a female humanoid automaton standing by a basin filled with water. When the user pulls the lever, the water drains and the female automaton refills the basin.
Mark E. Rosheim summarizes the advances in robotics made by Muslim engineers, especially al-Jazari, as follows: Unlike the Greek designs, these Arab examples reveal an interest, not only in dramatic illusion, but in manipulating the environment for human comfort. Thus, the greatest contribution the Arabs made, besides preserving, disseminating and building on the work of the Greeks, was the concept of practical application. This was the key element that was missing in Greek robotic science.
In Renaissance Italy, Leonardo da Vinci (1452–1519) sketched plans for a humanoid robot around 1495. Da Vinci's notebooks, rediscovered in the 1950s, contained detailed drawings of a mechanical knight now known as Leonardo's robot, able to sit up, wave its arms and move its head and jaw. The design was probably based on anatomical research recorded in his "Vitruvian Man". It is not known whether he attempted to build it. According to "Encyclopædia Britannica", Leonardo da Vinci may have been influenced by the classic automata of al-Jazari.
In Japan, complex animal and human automata were built between the 17th to 19th centuries, with many described in the 18th century "Karakuri zui" ("Illustrated Machinery", 1796). One such automaton was the karakuri ningyō, a mechanized puppet. Different variations of the karakuri existed: the "Butai karakuri", which were used in theatre, the "Zashiki karakuri", which were small and used in homes, and the "Dashi karakuri" which were used in religious festivals, where the puppets were used to perform reenactments of traditional myths and legends.
In France, between 1738 and 1739, Jacques de Vaucanson exhibited several life-sized automatons: a flute player, a pipe player and a duck. The mechanical duck could flap its wings, crane its neck, and swallow food from the exhibitor's hand, and it gave the illusion of digesting its food by excreting matter stored in a hidden compartment.
Remotely operated vehicles were demonstrated in the late 19th century in the form of several types of remotely controlled torpedoes. The early 1870s saw remotely controlled torpedoes by John Ericsson (pneumatic), John Louis Lay (electric wire guided), and Victor von Scheliha (electric wire guided).
The Brennan torpedo, invented by Louis Brennan in 1877, was powered by two contra-rotating propellors that were spun by rapidly pulling out wires from drums wound inside the torpedo. Differential speed on the wires connected to the shore station allowed the torpedo to be guided to its target, making it "the world's first "practical" guided missile". In 1897 the British inventor Ernest Wilson was granted a patent for a torpedo remotely controlled by "Hertzian" (radio) waves and in 1898 Nikola Tesla publicly demonstrated a wireless-controlled torpedo that he hoped to sell to the US Navy.
Archibald Low was known as the "father of radio guidance systems" for his pioneering work on guided rockets and planes during the First World War. In 1917, he demonstrated a remote controlled aircraft to the Royal Flying Corps and in the same year built the first wire-guided rocket.
'Robot' was first applied as a term for artificial automata in the 1920 play "R.U.R." by the Czech writer, Karel Čapek. However, Josef Čapek was named by his brother Karel as the true inventor of the term robot. The word 'robot' itself was not new, having existed in Slavic languages as "robota" (forced laborer), a term which classified those peasants obligated to compulsory service under the feudal system (see: Robot Patent).
Čapek's fictional story postulated the technological creation of artificial human bodies without souls, and the old theme of the feudal "robota" class eloquently fit the imagination of a new class of manufactured, artificial workers.
English pronunciation of the word has evolved relatively quickly since its introduction. In the U.S. during the late '30s to early '40s the second syllable was pronounced with a long "O" like "row-boat." By the late '50s to early '60s, some were pronouncing it with a short "U" like "row-but" while others used a softer "O" like "row-bought." By the '70s, its current pronunciation "row-bot" had become predominant.
In 1928, one of the first humanoid robots, Eric, was exhibited at the annual exhibition of the Model Engineers Society in London, where it delivered a speech. Invented by W. H. Richards, the robot's frame consisted of an aluminium body of armour with eleven electromagnets and one motor powered by a twelve-volt power source. The robot could move its hands and head and could be controlled through remote control or voice control. Both Eric and his "brother" George toured the world.
Westinghouse Electric Corporation built Televox in 1926; it was a cardboard cutout connected to various devices which users could turn on and off. In 1939, the humanoid robot known as Elektro was debuted at the 1939 New York World's Fair. Seven feet tall (2.1 m) and weighing 265 pounds (120.2 kg), it could walk by voice command, speak about 700 words (using a 78-rpm record player), smoke cigarettes, blow up balloons, and move its head and arms. The body consisted of a steel gear, cam and motor skeleton covered by an aluminum skin. In 1928, Japan's first robot, Gakutensoku, was designed and constructed by biologist Makoto Nishimura.
The first electronic autonomous robots with complex behaviour were created by William Grey Walter of the Burden Neurological Institute at Bristol, England in 1948 and 1949. He wanted to prove that rich connections between a small number of brain cells could give rise to very complex behaviors – essentially that the secret of how the brain worked lay in how it was wired up. His first robots, named "Elmer" and "Elsie", were constructed between 1948 and 1949 and were often described as "tortoises" due to their shape and slow rate of movement. The three-wheeled tortoise robots were capable of phototaxis, by which they could find their way to a recharging station when they ran low on battery power.
Walter stressed the importance of using purely analogue electronics to simulate brain processes at a time when his contemporaries such as Alan Turing and John von Neumann were all turning towards a view of mental processes in terms of digital computation. His work inspired subsequent generations of robotics researchers such as Rodney Brooks, Hans Moravec and Mark Tilden. Modern incarnations of Walter's "turtles" may be found in the form of BEAM robotics.
The first digitally operated and programmable robot was invented by George Devol in 1954 and was ultimately called the Unimate. This ultimately laid the foundations of the modern robotics industry. Devol sold the first Unimate to General Motors in 1960, and it was installed in 1961 in a plant in Trenton, New Jersey to lift hot pieces of metal from a die casting machine and stack them. Devol's patent for the first digitally operated programmable robotic arm represents the foundation of the modern robotics industry.
The first palletizing robot was introduced in 1963 by the Fuji Yusoki Kogyo Company. In 1973, a robot with six electromechanically driven axes was patented by KUKA robotics in Germany, and the programmable universal manipulation arm was invented by Victor Scheinman in 1976, and the design was sold to Unimation.
Commercial and industrial robots are now in widespread use performing jobs more cheaply or with greater accuracy and reliability than humans. They are also employed for jobs which are too dirty, dangerous or dull to be suitable for humans. Robots are widely used in manufacturing, assembly and packing, transport, earth and space exploration, surgery, weaponry, laboratory research, and mass production of consumer and industrial goods.
Various techniques have emerged to develop the science of robotics and robots. One method is evolutionary robotics, in which a number of differing robots are submitted to tests. Those which perform best are used as a model to create a subsequent "generation" of robots. Another method is developmental robotics, which tracks changes and development within a single robot in the areas of problem-solving and other functions. Another new type of robot is just recently introduced which acts both as a smartphone and robot and is named RoboHon.
As robots become more advanced, eventually there may be a standard computer operating system designed mainly for robots. Robot Operating System is an open-source set of programs being developed at Stanford University, the Massachusetts Institute of Technology and the Technical University of Munich, Germany, among others. ROS provides ways to program a robot's navigation and limbs regardless of the specific hardware involved. It also provides high-level commands for items like image recognition and even opening doors. When ROS boots up on a robot's computer, it would obtain data on attributes such as the length and movement of robots' limbs. It would relay this data to higher-level algorithms. Microsoft is also developing a "Windows for robots" system with its Robotics Developer Studio, which has been available since 2007.
Japan hopes to have full-scale commercialization of service robots by 2025. Much technological research in Japan is led by Japanese government agencies, particularly the Trade Ministry.
Many future applications of robotics seem obvious to people, even though they are well beyond the capabilities of robots available at the time of the prediction.
As early as 1982 people were confident that someday robots would:
1. Clean parts by removing molding flash
2. Spray paint automobiles with absolutely no human presence
3. Pack things in boxes—for example, orient and nest chocolate candies in candy boxes
4. Make electrical cable harness
5. Load trucks with boxes—a packing problem
6. Handle soft goods, such as garments and shoes
7. Shear sheep
8. Make prostheses
9. Cook fast food and work in other service industries
10. Work as household robots.
Generally such predictions are overly optimistic in timescale.
In 2008, Caterpillar Inc. developed a dump truck which can drive itself without any human operator. Many analysts believe that self-driving trucks may eventually revolutionize logistics. By 2014, Caterpillar had a self-driving dump truck which was expected to greatly change the process of mining. In 2015, these Caterpillar trucks were actively used in mining operations in Australia by the mining company Rio Tinto Coal Australia. Some analysts believe that within the next few decades, most trucks will be self-driving.
A literate or 'reading robot' named Marge has intelligence that comes from software. She can read newspapers, find and correct misspelled words, learn about banks like Barclays, and understand that some restaurants are better places to eat than others.
Baxter is an industrial robot introduced in 2012 which learns by guidance. A worker can teach Baxter how to perform a task by moving its hands through the desired motion and having Baxter memorize it. Extra dials, buttons, and controls on Baxter's arm allow for more precision and additional features. Any regular worker can program Baxter in a matter of minutes, unlike conventional industrial robots that require extensive programming and coding before they can be used; no software engineers are needed. This also means Baxter can be taught to perform multiple, more complicated tasks. Sawyer, a smaller robot for more precise tasks, was added in 2015.
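The idea behind this kind of teaching by guidance can be summarised as record-and-replay of joint positions. The following Python sketch is purely illustrative: the `arm` object and its `read_joint_angles` and `command_joint_angles` methods are hypothetical stand-ins for a robot's real programming interface, not Baxter's actual SDK.

```python
# Hypothetical record-and-replay sketch of kinesthetic teaching:
# sample the arm's joint angles while a person physically guides it,
# then drive the arm back through the recorded trajectory.
import time

def record_demonstration(arm, duration_s=10.0, rate_hz=20):
    """Sample joint angles while a human guides the arm by hand."""
    waypoints = []
    t_end = time.time() + duration_s
    while time.time() < t_end:
        waypoints.append(arm.read_joint_angles())    # hypothetical sensor read
        time.sleep(1.0 / rate_hz)
    return waypoints

def replay_demonstration(arm, waypoints, rate_hz=20):
    """Move the arm back through the recorded joint-angle waypoints."""
    for q in waypoints:
        arm.command_joint_angles(q)                  # hypothetical motion command
        time.sleep(1.0 / rate_hz)
```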
The word "robot" was introduced to the public by the Czech interwar writer Karel Čapek in his play "R.U.R. (Rossum's Universal Robots)", published in 1920. The play begins in a factory that uses a chemical substitute for protoplasm to manufacture living, simplified people called "robots." The play does not focus in detail on the technology behind the creation of these living creatures, but in their appearance they prefigure modern ideas of androids, creatures who can be mistaken for humans. These mass-produced workers are depicted as efficient but emotionless, incapable of original thinking and indifferent to self-preservation. At issue is whether the robots are being exploited and the consequences of human dependence upon commodified labor (especially after a number of specially-formulated robots achieve self-awareness and incite robots all around the world to rise up against the humans).
Karel Čapek himself did not coin the word. He wrote a short letter in reference to an etymology in the "Oxford English Dictionary" in which he named his brother, the painter and writer Josef Čapek, as its actual originator.
In an article in the Czech journal "Lidové noviny" in 1933, he explained that he had originally wanted to call the creatures "laboři" ("workers", from Latin "labor"). However, he did not like the word, and sought advice from his brother Josef, who suggested "roboti". The word "robota" means literally "corvée", "serf labor", and figuratively "drudgery" or "hard work" in Czech, and also (more generally) "work" or "labor" in many Slavic languages (e.g. Bulgarian, Russian, Serbian, Slovak, Polish, Macedonian, Ukrainian, archaic Czech, as well as "robot" in Hungarian). Traditionally the "robota" (Hungarian "robot") was the work period a serf (corvée) had to give for his lord, typically 6 months of the year. The origin of the word is the Old Church Slavonic (Old Bulgarian) "rabota", meaning "servitude" ("work" in contemporary Bulgarian and Russian), which in turn comes from the Proto-Indo-European root "*orbh-". "Robot" is cognate with the German root "Arbeit" (work).
The word robotics, used to describe this field of study, was coined by the science fiction writer Isaac Asimov. Asimov created the ""Three Laws of Robotics"" which are a recurring theme in his books. These have since been used by many others to define laws used in fiction. (The three laws are pure fiction, and no technology yet created has the ability to understand or follow them, and in fact most robots serve military purposes, which run quite contrary to the first law and often the third law. "People think about Asimov's laws, but they were set up to point out how a simple ethical system doesn't work. If you read the short stories, every single one is about a failure, and they are totally impractical," said Dr. Joanna Bryson of the University of Bath.)
Mobile robots have the capability to move around in their environment and are not fixed to one physical location. An example of a mobile robot that is in common use today is the "automated guided vehicle" or "automatic guided vehicle" (AGV). An AGV is a mobile robot that follows markers or wires in the floor, or uses vision or lasers. AGVs are discussed later in this article.
Mobile robots are also found in industry, military and security environments. They also appear as consumer products, for entertainment or to perform certain tasks like vacuum cleaning. Mobile robots are the focus of a great deal of current research and almost every major university has one or more labs that focus on mobile robot research.
Mobile robots are usually used in tightly controlled environments such as on assembly lines because they have difficulty responding to unexpected interference. Because of this, most humans rarely encounter robots. However, domestic robots for cleaning and maintenance are increasingly common in and around homes in developed countries. Robots can also be found in military applications.
Industrial robots usually consist of a jointed arm (multi-linked manipulator) and an end effector that is attached to a fixed surface. One of the most common types of end effector is a gripper assembly.
The International Organization for Standardization gives a definition of a manipulating industrial robot in ISO 8373:
"an automatically controlled, reprogrammable, multipurpose, manipulator programmable in three or more axes, which may be either fixed in place or mobile for use in industrial automation applications."
This definition is used by the International Federation of Robotics, the European Robotics Research Network (EURON) and many national standards committees.
Most commonly industrial robots are fixed robotic arms and manipulators used primarily for production and distribution of goods. The term "service robot" is less well-defined. The International Federation of Robotics has proposed a tentative definition, "A service robot is a robot which operates semi- or fully autonomously to perform services useful to the well-being of humans and equipment, excluding manufacturing operations."
Robots are used as educational assistants to teachers. From the 1980s, robots such as turtles were used in schools and programmed using the Logo language.
Robot kits like Lego Mindstorms, BIOLOID, OLLO from ROBOTIS, or BotBrain Educational Robots can help children learn about mathematics, physics, programming, and electronics. Robotics has also been introduced into the lives of elementary and high school students in the form of robot competitions with the company FIRST (For Inspiration and Recognition of Science and Technology). The organization is the foundation for the FIRST Robotics Competition, FIRST LEGO League, Junior FIRST LEGO League, and FIRST Tech Challenge competitions.
There have also been robots such as the teaching computer Leachim (1974). Leachim was an early example of speech synthesis using the diphone synthesis method. 2-XL (1976) was a robot-shaped game and teaching toy based on branching between audible tracks on an 8-track tape player; both were invented by Michael J. Freeman. Later, the 8-track was upgraded to tape cassettes and then to digital.
Modular robots are a new breed of robot designed to increase utilization by modularizing their architecture. The functionality and effectiveness of a modular robot are easier to increase than those of a conventional robot. These robots are composed of a single type of identical module, of several different module types, or of similarly shaped modules that vary in size. Their architecture allows hyper-redundancy, as modular robots can be designed with more than eight degrees of freedom (DOF). Creating the programming, inverse kinematics and dynamics for modular robots is more complex than for traditional robots.

Modular robots may be composed of L-shaped modules, cubic modules, and U- and H-shaped modules. ANAT technology, an early modular robotic technology patented by Robotics Design Inc., allows the creation of modular robots from U- and H-shaped modules that connect in a chain and are used to form heterogeneous and homogeneous modular robot systems. These "ANAT robots" can be designed with "n" DOF: each module is a complete motorized robotic system that folds relative to the modules connected before and after it in the chain, so a single module provides one degree of freedom, and the more modules that are connected to one another, the more degrees of freedom the robot has (see the sketch below). L-shaped modules can also be arranged in a chain, but they must become increasingly smaller as the chain grows, because payloads attached to the end of the chain place greater strain on modules further from the base. ANAT H-shaped modules do not suffer from this problem, as their design distributes pressure and impacts evenly among the attached modules, so payload-carrying capacity does not decrease as the length of the arm increases.

Modular robots can be manually or self-reconfigured to form a different robot that may perform different applications. Because modular robots of the same architecture type are built from interchangeable modules, a snake-arm robot can combine with another to form a dual- or quad-arm robot, or split into several mobile robots, and mobile robots can split into multiple smaller ones or combine with others into a larger or different one. This allows a single modular robot to be fully specialized for a single task, as well as to be reconfigured to perform multiple different tasks.
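As an illustration of the claim that each chained module contributes one degree of freedom, the following Python sketch computes the tip position of a planar chain of identical single-joint modules; the module length and the example joint angles are illustrative assumptions, not ANAT specifications.

```python
# Planar forward kinematics for a chain of identical 1-DOF revolute modules.
# Each module adds one joint angle, so an n-module chain has n degrees of freedom.
import math

def chain_endpoint(joint_angles_rad, module_length=0.1):
    """Return the (x, y) position of the chain tip for the given joint angles."""
    x = y = 0.0
    heading = 0.0
    for theta in joint_angles_rad:          # one revolute joint per module
        heading += theta                    # each module folds relative to the previous one
        x += module_length * math.cos(heading)
        y += module_length * math.sin(heading)
    return x, y

# Three modules form a 3-DOF planar arm.
print(chain_endpoint([math.radians(30), math.radians(-15), math.radians(45)]))
```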
Modular robotic technology is currently being applied in hybrid transportation, industrial automation, duct cleaning and handling. Many research centres and universities have also studied this technology, and have developed prototypes.
A "collaborative robot" or "cobot" is a robot that can safely and effectively interact with human workers while performing simple industrial tasks. However, end-effectors and other environmental conditions may create hazards, and as such risk assessments should be done before using any industrial motion-control application.
The collaborative robots most widely used in industries today are manufactured by Universal Robots in Denmark.
Rethink Robotics—founded by Rodney Brooks, previously with iRobot—introduced Baxter in September 2012 as an industrial robot designed to safely interact with neighboring human workers and be programmable for performing simple tasks. Baxters stop if they detect a human in the way of their robotic arms and have prominent off switches. Intended for sale to small businesses, they are promoted as the robotic analogue of the personal computer. Some 190 companies in the US have bought Baxters, and they are being used commercially in the UK.
Roughly half of all the robots in the world are in Asia, 32% in Europe, 16% in North America, 1% in Australasia, and 1% in Africa. 40% of all the robots in the world are in Japan, making Japan the country with the highest number of robots.
As robots have become more advanced and sophisticated, experts and academics have increasingly explored the questions of what ethics might govern robots' behavior, and whether robots might be able to claim any kind of social, cultural, ethical or legal rights. One scientific team has said that it is possible that a robot brain will exist by 2019. Others predict robot intelligence breakthroughs by 2050. Recent advances have made robotic behavior more sophisticated. The social impact of intelligent robots is subject of a 2010 documentary film called "Plug & Pray".
Vernor Vinge has suggested that a moment may come when computers and robots are smarter than humans. He calls this "the Singularity". He suggests that it may be somewhat or possibly very dangerous for humans. This is discussed by a philosophy called Singularitarianism.
In 2009, experts attended a conference hosted by the Association for the Advancement of Artificial Intelligence (AAAI) to discuss whether computers and robots might be able to acquire any autonomy, and how much these abilities might pose a threat or hazard. They noted that some robots have acquired various forms of semi-autonomy, including being able to find power sources on their own and being able to independently choose targets to attack with weapons. They also noted that some computer viruses can evade elimination and have achieved "cockroach intelligence." They noted that self-awareness as depicted in science-fiction is probably unlikely, but that there were other potential hazards and pitfalls. Various media sources and scientific groups have noted separate trends in differing areas which might together result in greater robotic functionalities and autonomy, and which pose some inherent concerns. In 2015, Nao robots were shown to have a capability for a degree of self-awareness. Researchers at the Rensselaer Polytechnic Institute AI and Reasoning Lab in New York conducted an experiment where a robot became aware of itself, and corrected its answer to a question once it had realised this.
Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions. There are also concerns about technology which might allow some armed robots to be controlled mainly by other robots. The US Navy has funded a report which indicates that, as military robots become more complex, there should be greater attention to implications of their ability to make autonomous decisions. One researcher states that autonomous robots might be more humane, as they could make decisions more effectively. However, other experts question this.
One robot in particular, the EATR, has generated public concerns over its fuel source, as it can continually refuel itself using organic substances. Although the engine for the EATR is designed to run on biomass and vegetation specifically selected by its sensors, which it can find on battlefields or other local environments, the project has stated that chicken fat can also be used.
Manuel De Landa has noted that "smart missiles" and autonomous bombs equipped with artificial perception can be considered robots, as they make some of their decisions autonomously. He believes this represents an important and dangerous trend in which humans are handing over important decisions to machines.
For centuries, people have predicted that machines would make workers obsolete and increase unemployment, although the causes of unemployment are usually thought to be due to social policy.
A recent example of human replacement involves the Taiwanese technology company Foxconn, which in July 2011 announced a three-year plan to replace workers with more robots. At the time the company used ten thousand robots but planned to increase that number to a million robots over the three-year period.
Lawyers have speculated that an increased prevalence of robots in the workplace could lead to the need to improve redundancy laws.
Kevin J. Delaney said "Robots are taking human jobs. But Bill Gates believes that governments should tax companies’ use of them, as a way to at least temporarily slow the spread of automation and to fund other types of employment." The robot tax would also help pay a guaranteed living wage to the displaced workers.
The World Bank's World Development Report 2019 puts forth evidence showing that while automation displaces workers, technological innovation creates more new industries and jobs on balance.
At present, there are two main types of robots, based on their use: general-purpose autonomous robots and dedicated robots.
Robots can be classified by their specificity of purpose. A robot might be designed to perform one particular task extremely well, or a range of tasks less well. All robots by their nature can be re-programmed to behave differently, but some are limited by their physical form. For example, a factory robot arm can perform jobs such as cutting, welding, gluing, or acting as a fairground ride, while a pick-and-place robot can only populate printed circuit boards.
General-purpose autonomous robots can perform a variety of functions independently. General-purpose autonomous robots typically can navigate independently in known spaces, handle their own re-charging needs, interface with electronic doors and elevators and perform other basic tasks. Like computers, general-purpose robots can link with networks, software and accessories that increase their usefulness. They may recognize people or objects, talk, provide companionship, monitor environmental quality, respond to alarms, pick up supplies and perform other useful tasks. General-purpose robots may perform a variety of functions simultaneously or they may take on different roles at different times of day. Some such robots try to mimic human beings and may even resemble people in appearance; this type of robot is called a humanoid robot. Humanoid robots are still in a very limited stage, as no humanoid robot can, as of yet, actually navigate around a room that it has never been in. Thus, humanoid robots are really quite limited, despite their intelligent behaviors in their well-known environments.
Over the last three decades, automobile factories have become dominated by robots. A typical factory contains hundreds of industrial robots working on fully automated production lines, with one robot for every ten human workers. On an automated production line, a vehicle chassis on a conveyor is welded, glued, painted and finally assembled at a sequence of robot stations.
Industrial robots are also used extensively for palletizing and packaging of manufactured goods, for example for rapidly taking drink cartons from the end of a conveyor belt and placing them into boxes, or for loading and unloading machining centers.
Mass-produced printed circuit boards (PCBs) are almost exclusively manufactured by pick-and-place robots, typically with SCARA manipulators, which remove tiny electronic components from strips or trays, and place them on to PCBs with great accuracy. Such robots can place hundreds of thousands of components per hour, far out-performing a human in speed, accuracy, and reliability.
Mobile robots, following markers or wires in the floor, or using vision or lasers, are used to transport goods around large facilities, such as warehouses, container ports, or hospitals.
Early AGV-style robots were limited to tasks that could be accurately defined and had to be performed the same way every time. Very little feedback or intelligence was required, and the robots needed only the most basic exteroceptors (sensors). The limitations of these AGVs are that their paths are not easily altered and they cannot alter their paths if obstacles block them. If one AGV breaks down, it may stop the entire operation.
Interim AGVs were developed to deploy triangulation from beacons or bar code grids for scanning on the floor or ceiling. In most factories, triangulation systems tend to require moderate to high maintenance, such as daily cleaning of all beacons or bar codes. Also, if a tall pallet or large vehicle blocks beacons or a bar code is marred, AGVs may become lost. Often such AGVs are designed to be used in human-free environments.
Intelligent AGVs such as SmartLoader, SpeciMinder, ADAM, Tug Eskorta, and MT 400 with Motivity are designed for people-friendly workspaces. They navigate by recognizing natural features. 3D scanners or other means of sensing the environment in two or three dimensions help to eliminate cumulative errors in dead-reckoning calculations of the AGV's current position. Some AGVs can create maps of their environment using scanning lasers with simultaneous localization and mapping (SLAM) and use those maps to navigate in real time with other path planning and obstacle avoidance algorithms. They are able to operate in complex environments and perform non-repetitive and non-sequential tasks such as transporting photomasks in a semiconductor lab, specimens in hospitals and goods in warehouses. For dynamic areas, such as warehouses full of pallets, AGVs require additional strategies using three-dimensional sensors such as time-of-flight or stereovision cameras.
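To illustrate why dead-reckoned position estimates need this kind of external correction, the following Python sketch integrates differential-drive wheel odometry: a small, constant bias between the two wheels makes the estimated heading drift further at every step. The wheel base and travel increments are illustrative assumptions, not taken from any particular AGV.

```python
# Dead reckoning for a differential-drive vehicle: integrate incremental
# wheel travel into a pose estimate. Without beacons, bar codes or SLAM,
# small per-step errors (here a slight right-wheel bias) accumulate.
import math

def integrate_odometry(pose, d_left, d_right, wheel_base=0.5):
    """Update (x, y, heading) from left/right wheel travel increments in metres."""
    x, y, theta = pose
    d_center = (d_left + d_right) / 2.0           # distance travelled by the vehicle centre
    d_theta = (d_right - d_left) / wheel_base     # change in heading
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    return (x, y, theta + d_theta)

pose = (0.0, 0.0, 0.0)
for _ in range(1000):                             # 1000 nominally straight 1 cm steps
    pose = integrate_odometry(pose, 0.0100, 0.0101)
print(pose)                                       # the heading error has grown steadily
```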
There are many jobs which humans would rather leave to robots. The job may be boring, such as domestic cleaning or sports field line marking, or dangerous, such as exploring inside a volcano. Other jobs are physically inaccessible, such as exploring another planet, cleaning the inside of a long pipe, or performing laparoscopic surgery.
Almost every unmanned space probe ever launched was a robot. Some were launched in the 1960s with very limited abilities, but their ability to fly and land (in the case of Luna 9) is an indication of their status as a robot. This includes the Voyager probes and the Galileo probes, among others.
Teleoperated robots, or telerobots, are devices remotely operated by a human operator rather than following a predetermined sequence of movements, but which have semi-autonomous behaviour. They are used when a human cannot be present on site to perform a job because it is dangerous, far away, or inaccessible. The robot may be in another room or another country, or may be on a very different scale to the operator. For instance, a laparoscopic surgery robot allows the surgeon to work inside a human patient on a relatively small scale compared to open surgery, significantly shortening recovery time. They can also be used to avoid exposing workers to hazardous and tight spaces such as those found in duct cleaning. When disabling a bomb, the operator sends a small robot to disable it. Several authors have been using a device called the Longpen to sign books remotely. Teleoperated robot aircraft, like the Predator Unmanned Aerial Vehicle, are increasingly being used by the military. These pilotless drones can search terrain and fire on targets. Hundreds of robots such as iRobot's Packbot and the Foster-Miller TALON are being used in Iraq and Afghanistan by the U.S. military to defuse roadside bombs or improvised explosive devices (IEDs) in an activity known as explosive ordnance disposal (EOD).
Robots are used to automate the picking of fruit in orchards at a cost lower than that of human pickers.
Domestic robots are simple robots dedicated to a single task for use in the home. They are used for simple but often disliked jobs, such as vacuum cleaning, floor washing, and lawn mowing. An example of a domestic robot is the Roomba.
Military robots include the SWORDS robot which is currently used in ground-based combat. It can use a variety of weapons and there is some discussion of giving it some degree of autonomy in battleground situations.
Unmanned combat air vehicles (UCAVs), which are an upgraded form of UAVs, can perform a wide variety of missions, including combat. UCAVs such as the BAE Systems Mantis are being designed with the ability to fly themselves, to pick their own course and target, and to make most decisions on their own. The BAE Taranis is a UCAV built by Great Britain which can fly across continents without a pilot and has new means to avoid detection. Flight trials were expected to begin in 2011.
The AAAI has studied this topic in depth and its president has commissioned a study to look at this issue.
Some have suggested a need to build "Friendly AI", meaning that the advances which are already occurring with AI should also include an effort to make AI intrinsically friendly and humane. Several such measures reportedly already exist, with robot-heavy countries such as Japan and South Korea having begun to pass regulations requiring robots to be equipped with safety systems, and possibly sets of 'laws' akin to Asimov's Three Laws of Robotics. An official report was issued in 2009 by the Japanese government's Robot Industry Policy Committee. Chinese officials and researchers have issued a report suggesting a set of ethical rules, and a set of new legal guidelines referred to as "Robot Legal Studies." Some concern has been expressed over a possible occurrence of robots telling apparent falsehoods.
Mining robots are designed to solve a number of problems currently facing the mining industry, including skills shortages, improving productivity from declining ore grades, and achieving environmental targets. Due to the hazardous nature of mining, in particular underground mining, the prevalence of autonomous, semi-autonomous, and tele-operated robots has greatly increased in recent times. A number of vehicle manufacturers provide autonomous trains, trucks and loaders that will load material, transport it on the mine site to its destination, and unload without requiring human intervention. One of the world's largest mining corporations, Rio Tinto, has recently expanded its autonomous truck fleet to the world's largest, consisting of 150 autonomous Komatsu trucks, operating in Western Australia. Similarly, BHP has announced the expansion of its autonomous drill fleet to the world's largest, 21 autonomous Atlas Copco drills.
Drilling, longwall and rockbreaking machines are now also available as autonomous robots. The Atlas Copco Rig Control System can autonomously execute a drilling plan on a drilling rig, moving the rig into position using GPS, set up the drill rig and drill down to specified depths. Similarly, the Transmin Rocklogic system can automatically plan a path to position a rockbreaker at a selected destination. These systems greatly enhance the safety and efficiency of mining operations.
Robots in healthcare have two main functions: those which assist an individual, such as a sufferer of a disease like multiple sclerosis, and those which aid in overall systems such as pharmacies and hospitals.
Robots used in home automation have developed over time from simple basic robotic assistants, such as the Handy 1, through to semi-autonomous robots, such as FRIEND which can assist the elderly and disabled with common tasks.
The population is aging in many countries, especially Japan, meaning that there are increasing numbers of elderly people to care for, but relatively fewer young people to care for them. Humans make the best carers, but where they are unavailable, robots are gradually being introduced.
FRIEND is a semi-autonomous robot designed to support disabled and elderly people in their daily life activities, like preparing and serving a meal. FRIEND makes it possible for patients who are paraplegic, have muscle diseases or serious paralysis (due to strokes, etc.) to perform tasks without help from other people like therapists or nursing staff.
Script Pro manufactures a robot designed to help pharmacies fill prescriptions that consist of oral solids or medications in pill form. The pharmacist or pharmacy technician enters the prescription information into its information system. The system, upon determining whether or not the drug is in the robot, will send the information to the robot for filling. The robot has three different vial sizes to fill, determined by the size of the pill; the robot technician, user, or pharmacist determines the needed vial size based on the tablet when the robot is stocked. Once the vial is filled, it is brought up to a conveyor belt that delivers it to a holder that spins the vial and attaches the patient label. Afterwards it is set on another conveyor that delivers the patient's medication vial to a slot labeled with the patient's name on an LED readout. The pharmacist or technician then checks the contents of the vial to ensure it is the correct drug for the correct patient, and then seals the vial and sends it out front to be picked up.
McKesson's Robot RX is another healthcare robotics product that helps pharmacies dispense thousands of medications daily with few or no errors. The robot can be ten feet wide and thirty feet long and can hold hundreds of different kinds of medications and thousands of doses. The pharmacy saves many resources, such as staff members, that are otherwise unavailable in a resource-scarce industry. It uses an electromechanical head coupled with a pneumatic system to capture each dose and deliver it to either its stocked or dispensed location. The head moves along a single axis while it rotates 180 degrees to pull the medications. During this process it uses barcode technology to verify it is pulling the correct drug. It then delivers the drug to a patient-specific bin on a conveyor belt. Once the bin is filled with all of the drugs that a particular patient needs and that the robot stocks, the bin is released and returned on the conveyor belt to a technician waiting to load it into a cart for delivery to the floor.
While most robots today are installed in factories or homes, performing labour or life saving jobs, many new types of robot are being developed in laboratories around the world. Much of the research in robotics focuses not on specific industrial tasks, but on investigations into new types of robot, alternative ways to think about or design robots, and new ways to manufacture them. It is expected that these new types of robot will be able to solve real world problems when they are finally realized.
One approach to designing robots is to base them on animals. BionicKangaroo was designed and engineered by studying and applying the physiology and methods of locomotion of a kangaroo.
Nanorobotics is the emerging technology field of creating machines or robots whose components are at or close to the microscopic scale of a nanometer (10⁻⁹ meters). Also known as "nanobots" or "nanites", they would be constructed from molecular machines. So far, researchers have mostly produced only parts of these complex systems, such as bearings, sensors, and synthetic molecular motors, but functioning robots have also been made, such as the entrants to the Nanobot Robocup contest. Researchers also hope to be able to create entire robots as small as viruses or bacteria, which could perform tasks on a tiny scale. Possible applications include micro surgery (on the level of individual cells), utility fog, manufacturing, weaponry and cleaning. Some people have suggested that if there were nanobots which could reproduce, the earth would turn into "grey goo", while others argue that this hypothetical outcome is nonsense.
A few researchers have investigated the possibility of creating robots which can alter their physical form to suit a particular task, like the fictional T-1000. Real robots are nowhere near that sophisticated however, and mostly consist of a small number of cube shaped units, which can move relative to their neighbours. Algorithms have been designed in case any such robots become a reality.
Robots with silicone bodies and flexible actuators (air muscles, electroactive polymers, and ferrofluids) look and feel different from robots with rigid skeletons, and can have different behaviors. Soft, flexible (and sometimes even squishy) robots are often designed to mimic the biomechanics of animals and other things found in nature, which is leading to new applications in medicine, care giving, search and rescue, food handling and manufacturing, and scientific exploration.
Inspired by colonies of insects such as ants and bees, researchers are modeling the behavior of swarms of thousands of tiny robots which together perform a useful task, such as finding something hidden, cleaning, or spying. Each robot is quite simple, but the emergent behavior of the swarm is more complex. The whole set of robots can be considered as one single distributed system, in the same way an ant colony can be considered a superorganism, exhibiting swarm intelligence. The largest swarms so far created include the iRobot swarm, the SRI/MobileRobots CentiBots project and the Open-source Micro-robotic Project swarm, which are being used to research collective behaviors. Swarms are also more resistant to failure. Whereas one large robot may fail and ruin a mission, a swarm can continue even if several robots fail. This could make them attractive for space exploration missions, where failure is normally extremely costly.
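A toy simulation, not tied to any particular swarm platform, can illustrate how a simple local rule produces group-level behaviour. In the following Python sketch, each simulated robot repeatedly moves a small step toward the centroid of its nearby neighbours, and the population gradually clusters; all parameters are illustrative assumptions.

```python
# Emergent aggregation from a purely local rule: each robot nudges itself
# toward the average position of neighbours within a fixed radius.
import random

def step(positions, neighbour_radius=2.0, gain=0.05):
    new_positions = []
    for (x, y) in positions:
        near = [(px, py) for (px, py) in positions
                if (px - x) ** 2 + (py - y) ** 2 <= neighbour_radius ** 2]
        cx = sum(p[0] for p in near) / len(near)   # neighbourhood centroid (always includes self)
        cy = sum(p[1] for p in near) / len(near)
        new_positions.append((x + gain * (cx - x), y + gain * (cy - y)))
    return new_positions

robots = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(50)]
for _ in range(200):
    robots = step(robots)            # no robot knows the global plan, yet the group clusters
```

Because no individual carries the whole plan, losing a few robots degrades the behaviour only gradually, which is the robustness property described above.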
Robotics also has application in the design of virtual reality interfaces. Specialized robots are in widespread use in the haptic research community. These robots, called "haptic interfaces", allow touch-enabled user interaction with real and virtual environments. Robotic forces allow simulating the mechanical properties of "virtual" objects, which users can experience through their sense of touch.
Robots are used by contemporary artists to create works that include mechanical automation. There are many branches of robotic art, one of which is robotic installation art, a type of installation art that is programmed to respond to viewer interactions, by means of computers, sensors and actuators. The future behavior of such installations can therefore be altered by input from either the artist or the participant, which differentiates these artworks from other types of kinetic art.
Le Grand Palais in Paris organized an exhibition "Artists & Robots", featuring artworks created by more than forty artists with the help of robots in 2018.
Robotic characters, androids (artificial men/women) or gynoids (artificial women), and cyborgs (also "bionic men/women", or humans with significant mechanical enhancements) have become a staple of science fiction.
The first reference in Western literature to mechanical servants appears in Homer's "Iliad". In Book XVIII, Hephaestus, god of fire, creates new armor for the hero Achilles, assisted by robots. According to the Rieu translation, "Golden maidservants hastened to help their master. They looked like real women and could not only speak and use their limbs but were endowed with intelligence and trained in handwork by the immortal gods." The words "robot" or "android" are not used to describe them, but they are nevertheless mechanical devices human in appearance. "The first use of the word Robot was in Karel Čapek's play R.U.R. (Rossum's Universal Robots) (written in 1920)". Writer Karel Čapek was born in what is now the Czech Republic.
Possibly the most prolific author of the twentieth century was Isaac Asimov (1920–1992) who published over five-hundred books. Asimov is probably best remembered for his science-fiction stories and especially those about robots, where he placed robots and their interaction with society at the center of many of his works. Asimov carefully considered the problem of the ideal set of instructions robots might be given in order to lower the risk to humans, and arrived at his Three Laws of Robotics: a robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey orders given it by human beings, except where such orders would conflict with the First Law; and a robot must protect its own existence as long as such protection does not conflict with the First or Second Law. These were introduced in his 1942 short story "Runaround", although foreshadowed in a few earlier stories. Later, Asimov added the Zeroth Law: "A robot may not harm humanity, or, by inaction, allow humanity to come to harm"; the rest of the laws are modified sequentially to acknowledge this.
According to the "Oxford English Dictionary," the first passage in Asimov's short story "Liar!" (1941) that mentions the First Law is the earliest recorded use of the word "robotics". Asimov was not initially aware of this; he assumed the word already existed by analogy with "mechanics," "hydraulics," and other similar terms denoting branches of applied knowledge.
Robots appear in many films. Most of the robots in cinema are fictional. Two of the most famous are R2-D2 and C-3PO from the "Star Wars" franchise.
The concept of humanoid sex robots has elicited both public attention and concern. Opponents of the concept have stated that the development of sex robots would be morally wrong. They argue that the introduction of such devices would be socially harmful, and demeaning to women and children.
Fears and concerns about robots have been repeatedly expressed in a wide range of books and films. A common theme is the development of a master race of conscious and highly intelligent robots, motivated to take over or destroy the human race.
"Frankenstein" (1818), often called the first science fiction novel, has become synonymous with the theme of a robot or android advancing beyond its creator.
Other works with similar themes include "The Mechanical Man", "The Terminator, Runaway, RoboCop", the Replicators in "Stargate", the Cylons in "Battlestar Galactica", the Cybermen and Daleks in "Doctor Who", "The Matrix", "Enthiran" and "I, Robot". Some fictional robots are programmed to kill and destroy; others gain superhuman intelligence and abilities by upgrading their own software and hardware. Examples of popular media where the robot becomes evil are "", "Red Planet" and "Enthiran".
The 2017 game Horizon Zero Dawn explores themes of robotics in warfare, robot ethics, and the AI control problem, as well as the positive or negative impact such technologies could have on the environment.
Another common theme is the reaction, sometimes called the "uncanny valley", of unease and even revulsion at the sight of robots that mimic humans too closely.
More recently, fictional representations of artificially intelligent robots in films such as "A.I. Artificial Intelligence" and "Ex Machina" and the 2016 TV adaptation of "Westworld" have engaged audience sympathy for the robots themselves. | https://en.wikipedia.org/wiki?curid=25781 |
R. B. Bennett
Richard Bedford Bennett, 1st Viscount Bennett (July 3, 1870 – June 26, 1947), was a Canadian lawyer, businessman and politician. He served as the 11th prime minister of Canada, in office from 1930 to 1935. He led the Conservative Party from 1927 to 1938.
Bennett's premiership was marked primarily by the Great Depression that it overlapped and by an unsuccessful initiative to establish an imperial preference free trade agreement. Still, he left lasting legacies in the form of the Canadian Broadcasting Corporation (established 1932) and the Bank of Canada (established 1934), and was regarded even by his political opponents as instrumental in mitigating the worst potential effects of the economic depression in Canada.
Bennett was born in Hopewell Hill, New Brunswick, and grew up in nearby Hopewell Cape. He studied law at Dalhousie University, graduating in 1893, and in 1897 moved to Calgary to establish a law firm in partnership with James Lougheed.
Bennett served in the Legislative Assembly of the Northwest Territories from 1898 to 1905, and later in the Alberta Legislature from 1909 to 1911. He was the inaugural leader of the Alberta Conservative Party from 1905, resigning upon his election to the House of Commons in 1911. From 1920 to 1921, Bennett was Minister of Justice under Arthur Meighen. He also served briefly as Minister of Finance in Meighen's second government in 1926, which lasted just a month. Meighen resigned the Conservative Party's leadership after its defeat at the 1926 election, with Bennett elected as his replacement (and thus Leader of the Opposition).
Bennett became prime minister after the 1930 election, where the Conservatives won a landslide victory over Mackenzie King's Liberal Party. He was the first prime minister to represent a constituency in Alberta. The main difficulty during Bennett's prime ministership was the Great Depression. He and his party initially tried to combat the crisis with "laissez-faire" policies, but these were largely ineffective. However, over time Bennett's government became increasingly interventionist, attempting to replicate the popular "New Deal" enacted by Franklin Roosevelt to the south. This about-face prompted a split within Conservative ranks, and was regarded by the general public as evidence of incompetence.
Bennett suffered a landslide defeat at the 1935 election, with Mackenzie King returning for a third term. Bennett remained leader of the Conservative Party until 1938, when he retired to England.
He was created Viscount Bennett, the only Canadian prime minister to be honoured with elevation to the peerage.
Bennett was born on 3 July 1870, when his mother, Henrietta Stiles, was visiting at her parents' home in Hopewell Hill, New Brunswick, Canada. He was the eldest of six children, and grew up nearby at the Bay of Fundy home of his father, Henry John Bennett, in Hopewell Cape, the shire town of Albert County, then a town of 1,800 people.
His father descended from English ancestors who had emigrated to Connecticut in the 17th century. His great-great-grandfather, Zadock Bennett, migrated from New London, Connecticut, to Nova Scotia c. 1760, before the American Revolution, as one of the New England Planters who took the lands forcibly removed from the deported Acadians during the Great Upheaval.
R. B. Bennett's family was poor, subsisting mainly on the produce of a small farm. His early days inculcated a lifelong habit of thrift. The driving force in his family was his mother. She was a Wesleyan Methodist and passed this faith and the Protestant ethic on to her son. Bennett's father does not appear to have been a good provider for his family, though the reason is unclear. He operated a general store for a while and tried to develop some gypsum deposits.
The Bennetts had previously been a relatively prosperous family, operating a shipyard in Hopewell Cape, but the change to steam-powered vessels in the mid-19th century meant the gradual winding down of their business. However, the household was a literate one, subscribing to three newspapers. They were strong Conservatives; indeed one of the largest and last ships launched by the Bennett shipyard (in 1869) was the "Sir John A. Macdonald".
Educated in the local school, Bennett was a very good student, but something of a loner. In addition to his Protestant faith, Bennett grew up with an abiding love of the British Empire, then at its apogee. A small legacy his mother received opened the doors for him to attend the Normal school in Fredericton, where he trained to be a teacher; he then taught for several years at Irishtown, north of Moncton, saving his money for law school.
One day, while Bennett was crossing the Miramichi River on the ferry boat, a well-dressed lad about nine years younger came over to him and struck up a conversation. This was the beginning of an improbable but important friendship with Max Aitken, later the industrialist and British press baron, Lord Beaverbrook. The agnostic Aitken liked to tease the Methodist Bennett, whose fiery temper contrasted with Aitken's ability to turn away wrath with a joke. This friendship would become important to his success later in life, as would his friendship with the Chatham lawyer, Lemuel J. Tweedie, a prominent Conservative politician. He began to study law with Tweedie on weekends and during summer holidays. Another important friendship was with the prominent Shirreff family of Chatham, the father being High Sheriff of Northumberland County for 25 years. The son, Harry, joined the E. B. Eddy Company, a large pulp and paper industrial concern, and was transferred to Halifax. His sister moved there to study nursing, and soon Bennett joined them to study law at Dalhousie University. Their friendship was renewed there, and became crucial to his later life when Jennie Shirreff married the head of the Eddy Company. She later made Bennett the lawyer for her extensive interests.
Bennett started at Dalhousie University in 1890, graduating in 1893 with a law degree and very high standing. He worked his way through with a job as assistant in the library, being recommended by the Dean, Dr. Richard Chapman Weldon, MP, and participated in debating and moot court activities.
He was then a partner in the Chatham law firm of Tweedie and Bennett. Max Aitken (later to become Lord Beaverbrook) was his office boy, while articling as a lawyer, acting as a stringer for the Montreal Gazette, and selling life insurance. Aitken persuaded him to run for alderman in the first Town Council of Chatham, and managed his campaign. Bennett was elected by one vote, and was later furious with Aitken when he heard all the promises he had made on Bennett's behalf.
Despite his election to the Chatham town council, Bennett's days in the town were numbered. He was ambitious and saw that the small community was too narrow a field for him. He was already negotiating with Sir James Lougheed to move to the North-West Territories and become his law partner in Calgary, on Weldon's recommendation. Lougheed was Calgary's richest man and most successful lawyer.
Bennett moved to Calgary in 1897. A lifelong bachelor and teetotaler (although Bennett was known by select associates to occasionally drink alcohol when the press was not around to observe this), he led a rather lonely life in a hotel and later, in a boarding house. For a while a younger brother roomed with him. He ate his noon meal on workdays at the Alberta Hotel. Social life, such as it was, centred on church. There was, however, no scandal attached to his personal life. Bennett worked hard and gradually built up his legal practice. In 1908 he was one of five people appointed to the first Library Board for the city of Calgary and was instrumental in establishing the Calgary Public Library.
In 1910, Bennett became a director of Calgary Power Ltd. (now formally TransAlta Corporation) and just a year later he became President. During his leadership projects completed included the first storage reservoir at Lake Minnewanka, a second transmission line to Calgary and the construction of the Kananaskis Falls hydro station. At that time, he was also director of Rocky Mountains Cement Company and Security Trust.
Bennett developed an extensive legal practice in Calgary. In 1922, he started the partnership Bennett, Hannah & Sanford, which would eventually become Bennett Jones LLP. In 1929-30, he served as national President of the Canadian Bar Association. His successor in that office was Louis St. Laurent, another future Prime Minister.
He was elected to the Legislative Assembly of the North-West Territories in the 1898 general election, representing the riding of West Calgary. He was re-elected to a second term in office in 1902 as an Independent in the North-West Territories legislature.
In 1905, when Alberta was carved out of the Territories and made a province, Bennett became the first leader of the Alberta Conservative Party. In 1909, he won a seat in the provincial legislature, before resigning and switching to federal politics. He was elected to the House of Commons of Canada in 1911.
At age 44, he tried to enlist in the Canadian military once World War I broke out, but was turned down as being medically unfit. In 1916, Bennett was appointed director general of the National Service Board, which was in charge of identifying the number of potential recruits in the country.
While Bennett supported the Conservatives, he opposed Prime Minister Robert Borden's proposal for a Union Government that would include both Conservatives and Liberals, fearing that this would ultimately hurt the Conservative Party; he was proven to be correct in this analysis. While he campaigned for Conservative candidates in the 1917 federal election he did not stand for re-election himself.
Nevertheless, Borden's successor, Arthur Meighen appointed Bennett Minister of Justice in his government, as it headed into the 1921 federal election in which both the government and Bennett were defeated. Bennett won the seat of Calgary West in the 1925 federal election and was returned to government as Minister of Finance in Meighen's short-lived government in 1926. The government was defeated in the 1926 federal election. Meighen stepped down as Tory leader, and Bennett became the party's leader in 1927 at the first Conservative leadership convention.
As Opposition leader, Bennett faced off against the more experienced Liberal Prime Minister William Lyon Mackenzie King in Commons debates, and took some time to acquire enough experience to hold his own with King. In 1930, King blundered badly when he made overly partisan statements in response to criticism over his handling of the economic downturn, which was hitting Canada very hard. King's worst error was in stating that he "would not give Tory provincial governments a five-cent piece!" This serious mistake, which drew wide press coverage, gave Bennett his needed opening to attack King, which he did successfully in the election campaign which followed.
As the leader of the Conservative party, Bennett adapted its program, organization, and image to promote more rapid modernization of Canada. The "New Deal" was largely a mirror of the American program. The party was torn between reaction and reform, with deep internal factionalism that led to its defeat in 1935. Bennett's critics on the left had the last word, and textbooks typically portray him as a hard-driving capitalist, pushing for American-style high tariffs and British-style imperialism, while ignoring his reform efforts.
By defeating William Lyon Mackenzie King in the 1930 federal election, Bennett had the misfortune of taking office during the Great Depression. He tried to combat the depression by increasing trade within the British Empire and imposing tariffs on imports from outside the Empire, promising that his measures would "blast" Canadian exports into world markets. His success was limited, however, and his own wealth (often openly displayed) and impersonal style alienated many struggling Canadians.
While he was the first Prime Minister representing a constituency in Alberta, his party won only four of the province's 16 seats. His speeches to the Empire Clubs in Toronto and Montreal, delivered while he was chairman of the House of Commons Committee on Representation under Borden, caused controversy: he argued that while settlers from the United States were suitable to be included among those entitled to vote, they lacked the 'noble element' normally found in the British. At the time, the federal government was required, under a statute of the British Parliament, to re-adjust representation for Alberta and Saskatchewan based on the 1911 census. The re-adjustment made to the four western provinces at the time can only be reconciled if only those of British and French origin are counted.
When his "Imperial Preference" policy failed to generate the desired result, Bennett's government had no real contingency plan. The party's pro-business and pro-banking inclinations provided little relief to the millions of increasingly desperate and agitated unemployed. Despite the economic crisis, "laissez-faire" persisted as the guiding economic principle of Conservative Party ideology; similar attitudes dominated worldwide as well during this era. Government relief to the unemployed was considered a disincentive to individual initiative, and was therefore only granted in the most minimal amounts and attached to work programs. An additional concern of the federal government was that large numbers of disaffected unemployed men concentrating in urban centres created a volatile situation. As an "alternative to bloodshed on the streets", the stop-gap solution for unemployment chosen by the Bennett government was to establish military-run and -styled relief camps in remote areas throughout the country, where single unemployed men toiled for twenty cents a day. Any relief beyond this was left to provincial and municipal governments, many of which were either insolvent or on the brink of bankruptcy, and which railed against the inaction of other levels of government. Partisan differences began to sharpen on the question of government intervention in the economy, since lower levels of government were largely in Liberal hands, and protest movements were beginning to send their own parties into the political mainstream, notably the Cooperative Commonwealth Federation and William Aberhart's Social Credit Party in Alberta.
In July 1931, Bennett's government passed the Unemployment and Farm Relief Act in an effort to stanch the depression, but events were rapidly falling out of their control.
Bennett hosted the 1932 Imperial Economic Conference in Ottawa; this was the first time Canada had hosted the meetings. It was attended by the leaders of the independent dominions of the British Empire (which later became the Commonwealth of Nations). Bennett dominated the meetings, which were ultimately unproductive, due to the inability of leaders to agree on policies, mainly to combat the economic woes dominating the world at the time.
A nickname that would stick with Bennett for the remainder of his political career, "Iron Heel Bennett", came from a 1932 speech he gave in Toronto that ironically, if unintentionally, alluded to Jack London's socialist novel:
What do they offer you in exchange for the present order? Socialism, Communism, dictatorship. They are sowing the seeds of unrest everywhere. Right in this city such propaganda is being carried on and in the little out of the way places as well. And we know that throughout Canada this propaganda is being put forward by organizations from foreign lands that seek to destroy our institutions. And we ask that every man and woman put the iron heel of ruthlessness against a thing of that kind.
Reacting to fears of communist subversion, Bennett invoked the controversial Section 98 of the Criminal Code. Enacted in the aftermath of the Winnipeg general strike, section 98 dispensed with the presumption of innocence in outlawing potential threats to the state: specifically, anyone belonging to an organization that officially advocated the violent overthrow of the government. Even if the accused had never committed an act of violence or personally supported such an action, they could be incarcerated merely for attending meetings of such an organization, publicly speaking in its defense, or distributing its literature. Despite the broad power authorized under section 98, it targeted specifically the Communist Party of Canada. Eight of the top party leaders, including Tim Buck, were arrested on 11 August 1931 and convicted under section 98. This plan to stamp out communism backfired, however, and proved to be a damaging embarrassment for the government, especially after Buck was the target of an apparent assassination attempt. While Buck was confined to his cell during a prison riot in which he did not participate, shots were fired into his cell. When an agit-prop play depicting these events, "Eight Men Speak", was suppressed on 4 December 1933 by the Toronto police, a protest meeting was held where Communist politician A. E. Smith repeated the play's allegations, and he was consequently arrested for sedition. This created a storm of public protest, compounded when Buck was called as a witness to the trial and repeated the allegations in open court. Although the remarks were stricken from the record, they still discredited the prosecution's case and Smith was acquitted. As a result, the government's case against Buck lost any credibility, and Buck and his comrades were released early and fêted as heroic champions of civil liberties.
Having survived section 98, and benefiting from the public sympathy wrought by persecution, Communist Party members set out to organize workers in the relief camps set up by the Unemployment and Farm Relief Act. Camp workers laboured on a variety of infrastructure projects, including such things as municipal airports, roads, and park facilities, along with a number of other make-work schemes. Conditions in the camps were poor, not only because of the low pay, but also the lack of recreational facilities, isolation from family and friends, poor quality food, and the use of military discipline. Communists thus had ample grounds on which to organize camp workers, although the workers were there of their own volition. The Relief Camp Workers' Union was formed and affiliated with the Workers' Unity League, the trade union umbrella of the Communist Party. Camp workers in BC struck on 4 April 1935, and, after two months of protesting in Vancouver, began the On-to-Ottawa Trek to bring their grievances to Bennett's doorstep. The Prime Minister and his Minister of Justice, Hugh Guthrie, treated the trek as an attempted insurrection, and ordered it to be stopped. The Royal Canadian Mounted Police (RCMP) read the Riot Act to a crowd of 3,000 strikers and their supporters in Regina on 1 July 1935, resulting in two deaths and dozens of injured. All told, Bennett's anti-Communist policy would not bode well for his political career.
In January 1934, Bennett told the provinces that they were "wasteful and extravagant", and even told Quebec and Ontario that they were wealthy enough to manage their own problems. One year later, he had changed his tune. On the advice of William Duncan Herridge, Canada's Envoy to the United States, Bennett's government eventually began to follow the lead of President Roosevelt's New Deal in the United States. In a series of five radio speeches to the nation in January 1935, Bennett introduced a Canadian version of the "New Deal", involving unprecedented public spending and federal intervention in the economy. Progressive income taxation, a minimum wage, a maximum number of working hours per week, unemployment insurance, health insurance, an expanded pension program, and grants to farmers were all included in the plan.
Bennett's conversion, however, was seen as too little, too late, and he faced criticism that his reforms either went too far or did not go far enough, including from one of his own cabinet ministers, H. H. Stevens, who bolted the government to form the Reconstruction Party of Canada. Some of the measures were alleged to have encroached on provincial jurisdictions laid out in section 92 of the British North America Act, 1867. The courts, including the Judicial Committee of the Privy Council, agreed and eventually struck down virtually all of Bennett's reforms. However, some of Bennett's initiatives, such as the Bank of Canada, which he founded in 1934, remain in place to this day, and the Canadian Wheat Board remained in place until 2011, when the government of Stephen Harper abolished it.
Although there was no unity among the motley political groups that constituted Bennett's opposition, a consensus emerged, even in Conservative quarters, that his handling of the economic crisis was insufficient and inappropriate. Bennett personally became a symbol of the political failings underlying the Depression. Car owners who could no longer afford gasoline, for example, had horses pull their vehicles, which they named "Bennett buggies". Unity in his own administration suffered, notably through the defection of his Minister of Trade, Henry Herbert Stevens. Stevens left the Conservatives and formed the Reconstruction Party of Canada after Bennett refused to implement Stevens' plan for drastic economic reform to deal with the economic crisis.
The beneficiary of the overwhelming opposition during Bennett's tenure was the Liberal Party. The Tories were decimated in the October 1935 general election, winning only 40 seats to 173 for Mackenzie King's Liberals. The Tories would not form a majority government again in Canada until 1958. King's government soon implemented its own moderate reforms, including the replacement of relief camps with a scaled down provincial relief project scheme, and the repeal of section 98. Ultimately, Canada pulled out of the depression as a result of government-funded jobs associated with the preparation for and onset of the Second World War.
Bennett was something of a far-sighted man.
Bennett retired to Britain in 1938, and, on 12 June 1941, became the first and only former Canadian Prime Minister to be elevated to the peerage as Viscount Bennett, of Mickleham in the County of Surrey and of Calgary and Hopewell in the Dominion of Canada. The honour, conferred by British PM Winston Churchill, was in recognition for Bennett's valuable unsalaried work in the Ministry of Aircraft Production, managed by his lifelong friend Lord Beaverbrook. Bennett took an active role in the House of Lords, and attended frequently until his death.
Bennett's interest in increasing public awareness of, and accessibility to, Canada's historical records led him to serve as Vice-President of The Champlain Society from 1933 until his death.
He died of a heart attack while taking a bath at Mickleham on 26 June 1947, exactly one week shy of his 77th birthday. He is buried in St. Michael's Churchyard, Mickleham; the tomb, with a Government of Canada marker outside, is steps from the front doors of the church. He is the only deceased former Canadian Prime Minister not buried in Canada. Unmarried, Bennett was survived by his nephews William Herridge, Jr., and Robert Coats, and by his brother Ronald V. Bennett. The viscountcy became extinct on his death.
While Bennett was, and is still, often criticized for lack of compassion for the impoverished masses, he stayed up through many nights reading and responding to personal letters from ordinary citizens asking for his help, and often dipped into his personal fortune to send a five-dollar bill to a starving family. The total amount he gave personally is uncertain, although he personally estimated that in 1927–37 he spent well over 2.3 million dollars. Bennett was a controlling owner of the E. B. Eddy match company, which was the largest safety match manufacturer in Canada, and he was one of the richest Canadians at that time. Bennett helped put many poor, struggling young men through university. Relative to the times he lived in, he was likely the wealthiest Canadian to become prime minister.
Bennett worked an exhausting schedule throughout his years as prime minister, often more than 14 hours per day, and dominated his government, usually holding several cabinet posts. He lived in a suite in the Château Laurier hotel, a short walk from Parliament Hill. The respected author Bruce Hutchison wrote that had the economic times been more normal, Bennett would likely have been regarded as a good, perhaps great, Canadian prime minister.
Bennett was also a noted talent spotter. He took note of and encouraged the young Lester Pearson in the early 1930s, and appointed Pearson to significant roles on two major government inquiries: the 1931 Royal Commission on Grain Futures and the 1934 Royal Commission on Price Spreads. Bennett saw that Pearson was recognized with an OBE after he shone in that work, arranged a bonus of $1,800, and invited him to a London conference. Former Prime Minister John Turner, who, as a child, knew Bennett while he was prime minister, praised Bennett's promotion of Turner's economist mother to the highest civil service post held by a Canadian woman to that time.
Most historians consider his premiership to have been a failure at a time of severe economic crisis. H. Blair Neatby says categorically that "as a politician he was a failure". Jack Granatstein and Norman Hillmer, comparing him to all other Canadian prime ministers concluded, "Bennett utterly failed as a leader. Everyone was alienated by the end—Cabinet, caucus, party, voter and foreigner."
In a survey of Canadian historians, Bennett was ranked #12 out of the then 20 Prime Ministers of Canada, up to and including Jean Chrétien. The results of the survey were included in the book "Prime Ministers: Ranking Canada's Leaders" by J. L. Granatstein and Norman Hillmer.
A 2001 book by Quebec nationalist writer Normand Lester, "Le Livre noir du Canada anglais" (later translated as "The Black Book of English Canada"), accused Bennett of having a political affiliation with, and of having provided financial support to, fascist Quebec writer Adrien Arcand. This claim is based on a series of letters that Arcand, his colleague Ménard, and two Conservative caucus members sent to Bennett following his election as Prime Minister, asking for financial support for Arcand's antisemitic newspaper "Le Goglu". The book also claims that in a 1936 letter to Bennett, A. W. Reid, a Conservative organizer, estimated that Conservative Party members gave Arcand a total of $27,000 (the modern equivalent of roughly $359,284).
Bennett chose a number of jurists to be appointed as justices of the Supreme Court of Canada by the Governor General.
Bennett was the Honorary Colonel of The Calgary Highlanders from the year of their designation as such in 1921 to his death in 1947. He visited the Regiment in England during the Second World War, and always ensured the 1st Battalion had a turkey dinner at Christmas every year they were overseas, including the Christmas of 1944 when the battalion was holding front line positions in the Nijmegen Salient.
Bennett served as the Rector of Queen's University in Kingston, Ontario, from 1935 to 1937, even while he was still prime minister. At the time, this role covered mediation for significant disputes between Queen's students and the university administration.
Bennett's Coat of Arms was designed by Alan Beddoe: "Argent within two bendlets Gules three maple leaves proper all between two demi-lions rampant couped gules. Crest, a demi-lion Gules grasping in the dexter paw a battle axe in bend sinister Or and resting the sinister paw on an escallop also Gules. Supporters, Dexter a buffalo, sinister a moose, both proper. Motto, To be Pressed not Oppressed."
The by-election was caused by the resignation of Richard Bennett, who gave up his seat to run for the House of Commons of Canada in the 1900 Canadian federal election.
Empire Relations (1942) – The Peter le Neve Foster Lecture, delivered on 3 June 1942 at the Royal Society of Arts by The Right Hon. The Viscount Bennett, P.C., K.C. (R. B. Bennett). Published by Dorothy Crisp & Co Ltd, Holborn, London, 1945, 43 pp. | https://en.wikipedia.org/wiki?curid=25783 |
Renewable energy
Renewable energy is energy that is collected from renewable resources, which are naturally replenished on a human timescale, such as sunlight, wind, rain, tides, waves, and geothermal heat. Renewable energy often provides energy in four important areas: electricity generation, air and water heating/cooling, transportation, and rural (off-grid) energy services.
Based on REN21's 2017 report, renewables contributed 19.3% to humans' global energy consumption and 24.5% to their generation of electricity in 2015 and 2016, respectively. This consumption breaks down as 8.9% from traditional biomass, 4.2% as heat energy (modern biomass, geothermal and solar heat), 3.9% from hydroelectricity, and the remaining 2.2% as electricity from wind, solar, geothermal, and other forms of biomass. Worldwide investments in renewable technologies amounted to more than US$286 billion in 2015. In 2017, worldwide investments in renewable energy amounted to US$279.8 billion, with China accounting for US$126.6 billion or 45% of the global investments, the United States for US$40.5 billion and Europe for US$40.9 billion. Globally, there are an estimated 7.7 million jobs associated with the renewable energy industries, with solar photovoltaics being the largest renewable employer. Renewable energy systems are rapidly becoming more efficient and cheaper, and their share of total energy consumption is increasing. As of 2019, more than two-thirds of worldwide newly installed electricity capacity was renewable. Growth in consumption of coal and oil could end by 2020 due to increased uptake of renewables and natural gas.
At the national level, at least 30 nations around the world already have renewable energy contributing more than 20 percent of energy supply. National renewable energy markets are projected to continue to grow strongly in the coming decade and beyond.
Some places, and at least two countries, Iceland and Norway, already generate all their electricity using renewable energy, and many other countries have set a goal to reach 100% renewable energy in the future.
At least 47 nations around the world already have over 50 percent of electricity from renewable resources. Renewable energy resources exist over wide geographical areas, in contrast to fossil fuels, which are concentrated in a limited number of countries. Rapid deployment of renewable energy and energy efficiency technologies is resulting in significant energy security, climate change mitigation, and economic benefits. In international public opinion surveys there is strong support for promoting renewable sources such as solar power and wind power.
While many renewable energy projects are large-scale, renewable technologies are also suited to rural and remote areas and developing countries, where energy is often crucial in human development. As most renewable energy technologies provide electricity, renewable energy deployment is often applied in conjunction with further electrification, which has several benefits: electricity can be converted to heat (where necessary generating higher temperatures than fossil fuels), can be converted into mechanical energy with high efficiency, and is clean at the point of consumption. In addition, electrification with renewable energy is more efficient and therefore leads to significant reductions in primary energy requirements.
Renewable energy flows involve natural phenomena such as sunlight, wind, tides, plant growth, and geothermal heat; as the International Energy Agency explains, such energy is derived from natural processes that are replenished constantly.
Renewable energy resources and significant opportunities for energy efficiency exist over wide geographical areas, in contrast to other energy sources, which are concentrated in a limited number of countries. Rapid deployment of renewable energy and energy efficiency, and technological diversification of energy sources, would result in significant energy security and economic benefits. It would also reduce environmental pollution such as air pollution caused by the burning of fossil fuels, improve public health, reduce premature mortalities due to pollution, and save associated health costs that amount to several hundred billion dollars annually in the United States alone. Renewable energy sources that derive their energy from the sun, either directly or indirectly, such as hydro and wind, are expected to be capable of supplying humanity with energy for almost another 1 billion years, at which point the predicted increase in heat from the Sun is expected to make the surface of the Earth too hot for liquid water to exist.
Climate change and global warming concerns, coupled with the continuing fall in the costs of some renewable energy equipment, such as wind turbines and solar panels, are driving increased use of renewables. New government spending, regulation and policies helped the industry weather the global financial crisis better than many other sectors. However, according to the International Renewable Energy Agency, renewables' overall share in the energy mix (including power, heat and transport) needs to grow six times faster in order to keep the rise in average global temperatures "well below" 2 °C during the present century, compared to pre-industrial levels.
As of 2011, small solar PV systems provide electricity to a few million households, and micro-hydro configured into mini-grids serves many more. Over 44 million households use biogas made in household-scale digesters for lighting and/or cooking, and more than 166 million households rely on a new generation of more-efficient biomass cookstoves. United Nations' eighth Secretary-General Ban Ki-moon has said that renewable energy has the ability to lift the poorest nations to new levels of prosperity. At the national level, at least 30 nations around the world already have renewable energy contributing more than 20% of energy supply. National renewable energy markets are projected to continue to grow strongly in the coming decade and beyond, and some 120 countries have various policy targets for longer-term shares of renewable energy, including a 20% target of all electricity generated for the European Union by 2020. Some countries have much higher long-term policy targets of up to 100% renewables. Outside Europe, a diverse group of 20 or more other countries target renewable energy shares in the 2020–2030 time frame that range from 10% to 50%.
Renewable energy often displaces conventional fuels in four areas: electricity generation, hot water/space heating, transportation, and rural (off-grid) energy services.
Prior to the development of coal in the mid-19th century, nearly all energy used was renewable. Almost without a doubt the oldest known use of renewable energy, in the form of traditional biomass to fuel fires, dates from more than a million years ago. Use of biomass for fire did not become commonplace until many hundreds of thousands of years later. Probably the second-oldest use of renewable energy is harnessing the wind in order to drive ships over water. This practice can be traced back some 7,000 years, to ships in the Persian Gulf and on the Nile. From hot springs, geothermal energy has been used for bathing since Paleolithic times and for space heating since ancient Roman times. Moving into the time of recorded history, the primary sources of traditional renewable energy were human labor, animal power, water power, wind (in grain-crushing windmills), and firewood, a traditional biomass.
In the 1860s and 1870s there were already fears that civilization would run out of fossil fuels, and the need was felt for a better source. In 1873, Professor Augustin Mouchot wrote of the eventual exhaustion of coal and argued for the development of solar energy.
In 1885, Werner von Siemens, commenting on the discovery of the photovoltaic effect in the solid state, wrote that it was of far-reaching scientific and practical importance.
Max Weber mentioned the end of fossil fuel in the concluding paragraphs of his Die protestantische Ethik und der Geist des Kapitalismus (The Protestant Ethic and the Spirit of Capitalism), published in 1905. Development of solar engines continued until the outbreak of World War I. The importance of solar energy was recognized in a 1911 "Scientific American" article: "in the far distant future, natural fuels having been exhausted [solar power] will remain as the only means of existence of the human race".
The theory of peak oil was published in 1956. In the 1970s environmentalists promoted the development of renewable energy both as a replacement for the eventual depletion of oil, as well as for an escape from dependence on oil, and the first electricity-generating wind turbines appeared. Solar had long been used for heating and cooling, but solar panels were too costly to build solar farms until 1980.
In 2018, worldwide installed capacity of wind power was 564 GW.
Air flow can be used to run wind turbines. Modern utility-scale wind turbines range from around 600 kW to 9 MW of rated power. The power available from the wind is a function of the cube of the wind speed, so as wind speed increases, power output increases up to the maximum output for the particular turbine. Areas where winds are stronger and more constant, such as offshore and high-altitude sites, are preferred locations for wind farms. Typically, the full-load hours of wind turbines correspond to a capacity factor of between 16 and 57 percent annually, which might be higher at particularly favorable offshore sites.
Wind-generated electricity met nearly 4% of global electricity demand in 2015, with nearly 63 GW of new wind power capacity installed. Wind energy was the leading source of new capacity in Europe, the US and Canada, and the second largest in China. In Denmark, wind energy met more than 40% of its electricity demand while Ireland, Portugal and Spain each met nearly 20%.
Globally, the long-term technical potential of wind energy is believed to be five times total current global energy production, or 40 times current electricity demand, assuming all practical barriers were overcome. This would require wind turbines to be installed over large areas, particularly in areas of higher wind resources, such as offshore. Because offshore wind speeds average around 90% greater than those on land, offshore resources can contribute substantially more energy than land-based turbines.
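To make the cube-law relationship and the offshore advantage concrete, the short sketch below computes turbine output from the standard relation P = ½ρAv³Cp. The rotor diameter, air density, power coefficient and wind speeds are assumed example values for illustration only, and the rated-power cap mentioned above is ignored; only the roughly 90% speed difference comes from the text.

```python
import math

def wind_power_w(wind_speed_m_s, rotor_diameter_m=100.0,
                 air_density_kg_m3=1.225, power_coefficient=0.40):
    """Turbine output in watts: P = 0.5 * rho * A * v**3 * Cp.

    Rotor diameter, air density and power coefficient are assumed
    illustrative values, not figures from the article.
    """
    swept_area_m2 = math.pi * (rotor_diameter_m / 2) ** 2
    return 0.5 * air_density_kg_m3 * swept_area_m2 * wind_speed_m_s ** 3 * power_coefficient

onshore = wind_power_w(7.0)            # assumed onshore mean wind speed (m/s)
offshore = wind_power_w(7.0 * 1.9)     # ~90% higher speed, as the text notes
print(f"Onshore:  {onshore / 1e6:.2f} MW")
print(f"Offshore: {offshore / 1e6:.2f} MW ({offshore / onshore:.1f}x the power)")
```

Because power scales with the cube of wind speed, a 1.9-fold speed increase yields roughly 1.9³ ≈ 6.9 times the power in this idealised calculation.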
In 2017, worldwide renewable hydropower capacity was 1,154 GW.
Since water is about 800 times denser than air, even a slow-flowing stream of water, or a moderate sea swell, can yield considerable amounts of energy. Water energy takes many forms, including conventional hydroelectric dams, run-of-the-river and tidal schemes, and wave power.
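A rough sense of the energy in moving water comes from the standard hydropower relation P = ηρgQh. The flow rate, head and efficiency in the sketch below are assumed example values, not figures from this article.

```python
def hydro_power_w(flow_m3_s, head_m, efficiency=0.90,
                  water_density_kg_m3=1000.0, g_m_s2=9.81):
    """Hydropower output in watts: P = eta * rho * g * Q * h."""
    return efficiency * water_density_kg_m3 * g_m_s2 * flow_m3_s * head_m

# Assumed example: a small run-of-the-river site with 2 m3/s of flow and 10 m of head.
print(f"{hydro_power_w(2.0, 10.0) / 1e3:.0f} kW")   # roughly 177 kW
```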
Hydropower is produced in 150 countries, with the Asia-Pacific region generating 32 percent of global hydropower in 2010. Of the countries generating the largest percentage of their electricity from renewables, the top 50 are primarily hydroelectric. China is the largest hydroelectricity producer, with 721 terawatt-hours of production in 2010, representing around 17 percent of domestic electricity use. There are now three hydroelectricity stations larger than 10 GW: the Three Gorges Dam in China, the Itaipu Dam on the Brazil–Paraguay border, and the Guri Dam in Venezuela.
Wave power, which captures the energy of ocean surface waves, and tidal power, which converts the energy of tides, are two forms of hydropower with future potential; however, they are not yet widely employed commercially. A demonstration project operated by the Ocean Renewable Power Company on the coast of Maine, and connected to the grid, harnesses tidal power from the Bay of Fundy, the location of the world's highest tidal flow. Ocean thermal energy conversion, which uses the temperature difference between cooler deep and warmer surface waters, is not currently economically feasible.
In 2017, global installed solar capacity was 390 GW.
Solar energy, radiant light and heat from the sun, is harnessed using a range of ever-evolving technologies such as solar heating, photovoltaics, concentrated solar power (CSP), concentrator photovoltaics (CPV), solar architecture and artificial photosynthesis. Solar technologies are broadly characterized as either passive solar or active solar depending on the way they capture, convert, and distribute solar energy. Passive solar techniques include orienting a building to the Sun, selecting materials with favorable thermal mass or light dispersing properties, and designing spaces that naturally circulate air. Active solar technologies encompass solar thermal energy, using solar collectors for heating, and solar power, converting sunlight into electricity either directly using photovoltaics (PV), or indirectly using concentrated solar power (CSP).
A photovoltaic system converts light into electrical direct current (DC) by taking advantage of the photovoltaic effect. Solar PV has turned into a multi-billion-dollar, fast-growing industry, continues to improve its cost-effectiveness, and, together with CSP, has the most potential of any renewable technology. Concentrated solar power (CSP) systems use lenses or mirrors and tracking systems to focus a large area of sunlight into a small beam. Commercial concentrated solar power plants were first developed in the 1980s. CSP-Stirling has by far the highest efficiency among all solar energy technologies.
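Returning to PV, annual output can be estimated back-of-the-envelope as array area times module efficiency times annual insolation times a performance ratio. All figures in the sketch below are assumed example values, not data from this article.

```python
def pv_annual_kwh(area_m2, module_efficiency, annual_insolation_kwh_m2,
                  performance_ratio=0.80):
    """Rough annual PV yield: E = A * eta * H * PR (kWh per year)."""
    return area_m2 * module_efficiency * annual_insolation_kwh_m2 * performance_ratio

# Assumed example: a 30 m2 rooftop array of 18%-efficient modules at a site
# receiving 1,500 kWh/m2 of annual insolation, with a 0.80 performance ratio.
print(f"{pv_annual_kwh(30, 0.18, 1500):.0f} kWh per year")   # about 6,500 kWh
```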
In 2011, the International Energy Agency said that "the development of affordable, inexhaustible and clean solar energy technologies will have huge longer-term benefits. It will increase countries' energy security through reliance on an indigenous, inexhaustible and mostly import-independent resource, enhance sustainability, reduce pollution, lower the costs of mitigating climate change, and keep fossil fuel prices lower than otherwise. These advantages are global. Hence the additional costs of the incentives for early deployment should be considered learning investments; they must be wisely spent and need to be widely shared". Italy has the largest proportion of solar electricity in the world; in 2015, solar supplied 7.7% of electricity demand in Italy. In 2017, after another year of rapid growth, solar generated approximately 2% of global power, or 460 TWh.
Global geothermal capacity in 2017 was 12.9 GW.
High temperature geothermal energy is from thermal energy generated and stored in the Earth. Thermal energy is the energy that determines the temperature of matter. Earth's geothermal energy originates from the original formation of the planet and from radioactive decay of minerals (in currently uncertain but possibly roughly equal proportions). The geothermal gradient, which is the difference in temperature between the core of the planet and its surface, drives a continuous conduction of thermal energy in the form of heat from the core to the surface. The adjective "geothermal" originates from the Greek roots "geo", meaning earth, and "thermos", meaning heat.
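The geothermal gradient described above can be turned into a rough temperature estimate for a borehole. The surface temperature and the gradient of roughly 25–30 °C per kilometre used below are typical crustal values assumed for illustration, not figures from this article.

```python
def temperature_at_depth_c(depth_km, surface_temp_c=15.0, gradient_c_per_km=27.5):
    """Rough crustal temperature: T(z) = T_surface + gradient * depth."""
    return surface_temp_c + gradient_c_per_km * depth_km

# Assumed example: a 3 km deep geothermal well.
print(f"{temperature_at_depth_c(3.0):.0f} degrees C")   # roughly 100 degrees C
```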
The heat that is used for geothermal energy can come from deep within the Earth, all the way down to the Earth's core. At the core, temperatures may reach over 9,000 °F (5,000 °C). Heat conducts from the core to surrounding rock. Extremely high temperature and pressure cause some rock to melt, forming what is commonly known as magma. Magma convects upward since it is lighter than the solid rock. This magma then heats rock and water in the crust, sometimes to very high temperatures.
Low temperature geothermal refers to the use of the outer crust of the Earth as a thermal battery to facilitate renewable thermal energy for heating and cooling buildings, and other refrigeration and industrial uses. In this form of geothermal, a geothermal heat pump and ground-coupled heat exchanger are used together to move heat energy into the Earth (for cooling) and out of the Earth (for heating) on a varying seasonal basis. Low temperature geothermal (generally referred to as "GHP") is an increasingly important renewable technology because it both reduces the total annual energy loads associated with heating and cooling and flattens the electric demand curve, eliminating the extreme summer and winter peak electric supply requirements. Thus, low temperature geothermal/GHP is becoming an increasing national priority, supported by multiple tax credits, as part of the ongoing movement toward net zero energy.
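One way to see why heat pumps cut heating loads is through the coefficient of performance (COP): each unit of electricity moves several units of heat from the ground into a building. The household demand and COP figures below are assumed example values, not data from this article.

```python
def electricity_for_heat_kwh(heat_demand_kwh, cop):
    """Electricity needed to deliver a given amount of heat with a heat pump."""
    return heat_demand_kwh / cop

annual_heat_demand_kwh = 12_000   # assumed example household heating demand
resistive = electricity_for_heat_kwh(annual_heat_demand_kwh, cop=1.0)   # electric-resistance baseline
ghp = electricity_for_heat_kwh(annual_heat_demand_kwh, cop=4.0)         # assumed ground-source COP
print(f"Resistance heating: {resistive:.0f} kWh of electricity")
print(f"Ground-source heat pump: {ghp:.0f} kWh of electricity")
```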
Bioenergy global capacity in 2017 was 109 GW.
Biomass is biological material derived from living or recently living organisms. It most often refers to plants or plant-derived materials, which are specifically called lignocellulosic biomass. As an energy source, biomass can either be used directly via combustion to produce heat, or indirectly after converting it to various forms of biofuel. Conversion of biomass to biofuel can be achieved by different methods, broadly classified into "thermal", "chemical", and "biochemical" methods. Wood remains the largest biomass energy source today; examples include forest residues – such as dead trees, branches and tree stumps – yard clippings, wood chips and even municipal solid waste. As a raw material, biomass also includes plant or animal matter that can be converted into fibers or other industrial chemicals, including biofuels. Industrial biomass can be grown from numerous types of plants, including miscanthus, switchgrass, hemp, corn, poplar, willow, sorghum, sugarcane, bamboo, and a variety of tree species, ranging from eucalyptus to oil palm (palm oil).
Plant energy is produced by crops specifically grown for use as fuel that offer high biomass output per hectare with low input energy. The grain can be used for liquid transportation fuels while the straw can be burned to produce heat or electricity. Plant biomass can also be degraded from cellulose to glucose through a series of chemical treatments, and the resulting sugar can then be used as a first generation biofuel.
Biomass can be converted to other usable forms of energy such as methane gas or transportation fuels such as ethanol and biodiesel. Rotting garbage, and agricultural and human waste, all release methane gas, also called landfill gas or biogas. Crops such as corn and sugarcane can be fermented to produce the transportation fuel ethanol. Biodiesel, another transportation fuel, can be produced from left-over food products such as vegetable oils and animal fats. Also, biomass to liquids (BTLs) and cellulosic ethanol are still under research. There is a great deal of research involving algal fuel or algae-derived biomass because it is a non-food resource and can be produced at rates 5 to 10 times those of other types of land-based agriculture, such as corn and soy. Once harvested, it can be fermented to produce biofuels such as ethanol, butanol, and methane, as well as biodiesel and hydrogen. The biomass used for electricity generation varies by region. Forest by-products, such as wood residues, are common in the United States. Agricultural waste is common in Mauritius (sugar cane residue) and Southeast Asia (rice husks). Animal husbandry residues, such as poultry litter, are common in the United Kingdom.
Biofuels include a wide range of fuels which are derived from biomass. The term covers solid, liquid, and gaseous fuels. Liquid biofuels include bioalcohols, such as bioethanol, and oils, such as biodiesel. Gaseous biofuels include biogas, landfill gas and synthetic gas. Bioethanol is an alcohol made by fermenting the sugar components of plant materials and it is made mostly from sugar and starch crops. These include maize, sugarcane and, more recently, sweet sorghum. The latter crop is particularly suitable for growing in dryland conditions, and is being investigated by International Crops Research Institute for the Semi-Arid Tropics for its potential to provide fuel, along with food and animal feed, in arid parts of Asia and Africa.
With advanced technology being developed, cellulosic biomass, such as trees and grasses, is also used as a feedstock for ethanol production. Ethanol can be used as a fuel for vehicles in its pure form, but it is usually used as a gasoline additive to increase octane and improve vehicle emissions. Bioethanol is widely used in the United States and in Brazil. The energy costs for producing bio-ethanol are almost equal to the energy yields from bio-ethanol. However, according to the European Environment Agency, biofuels do not address global warming concerns. Biodiesel is made from vegetable oils, animal fats or recycled greases. It can be used as a fuel for vehicles in its pure form, or more commonly as a diesel additive to reduce levels of particulates, carbon monoxide, and hydrocarbons from diesel-powered vehicles. Biodiesel is produced from oils or fats using transesterification and is the most common biofuel in Europe. Biofuels provided 2.7% of the world's transport fuel in 2010.
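The near-parity between energy input and output for bio-ethanol can be expressed as an energy return on energy invested (EROEI) close to 1. The per-litre figures below are assumed, purely illustrative numbers, not values from this article.

```python
def eroei(energy_out_mj, energy_in_mj):
    """Energy return on energy invested: energy delivered / energy invested."""
    return energy_out_mj / energy_in_mj

# Assumed, purely illustrative per-litre figures: ~21 MJ of fuel energy delivered
# versus ~18 MJ invested in farming, fertiliser and distillation.
print(f"EROEI = {eroei(21, 18):.2f}")   # close to 1, i.e. little net energy gain
```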
Biomass, biogas and biofuels are burned to produce heat and power and, in doing so, harm the environment. Pollutants such as sulphur oxides (SOx), nitrogen oxides (NOx), and particulate matter (PM) are produced from the combustion of biomass; the World Health Organisation estimates that 7 million premature deaths are caused each year by air pollution, and biomass combustion is a major contributor.
Renewable energy production from some sources, such as wind and solar, is more variable and more geographically spread than technology based on fossil fuels and nuclear power. While integrating it into the wider energy system is feasible, it does lead to some additional challenges. In order for the energy system to remain stable, a number of measures can be taken. Implementing energy storage, using a wide variety of renewable energy technologies, and deploying a smart grid in which energy is automatically used at the moment it is produced can reduce the risks and costs of renewable energy implementation. In some locations, individual households can opt to purchase renewable energy through a consumer green energy program.
Electrical energy storage is a collection of methods used to store electrical energy. Electrical energy is stored during times when production (especially from intermittent sources such as wind power, tidal power and solar power) exceeds consumption, and returned to the grid when production falls below consumption. Pumped-storage hydroelectricity accounts for more than 90% of all grid power storage. Costs of lithium-ion batteries are dropping rapidly, and they are increasingly being deployed for grid ancillary services and for domestic storage.
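The store-when-surplus, discharge-when-deficit behaviour described above can be sketched in a few lines. The hourly profiles, storage capacity and round-trip efficiency below are invented example values used only to illustrate the dispatch logic.

```python
# Assumed example hourly profiles (MW) for one day of a small solar-plus-storage system.
solar_mw  = [0, 0, 0, 0, 0, 1, 3, 5, 7, 8, 9, 9, 9, 8, 7, 5, 3, 1, 0, 0, 0, 0, 0, 0]
demand_mw = [4, 4, 4, 4, 4, 5, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 7, 8, 8, 7, 6, 5, 4, 4]

capacity_mwh = 20.0        # assumed battery size
efficiency = 0.90          # assumed round-trip efficiency, applied on charging
stored_mwh = 0.0

for hour, (gen, load) in enumerate(zip(solar_mw, demand_mw)):
    surplus = gen - load
    if surplus > 0:                              # charge when production exceeds consumption
        stored_mwh = min(capacity_mwh, stored_mwh + surplus * efficiency)
    else:                                        # discharge when production falls short
        stored_mwh -= min(stored_mwh, -surplus)
    print(f"hour {hour:2d}: state of charge {stored_mwh:5.1f} MWh")
```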
Renewable power has been more effective in creating jobs than coal or oil in the United States. In 2016, employment in the sector increased 6 percent in the United States, while employment in the non-renewable energy sector decreased 18 percent. Worldwide, renewables employed about 8.1 million people as of 2016.
From the end of 2004, worldwide renewable energy capacity grew at rates of 10–60% annually for many technologies. In 2015 global investment in renewables rose 5% to $285.9 billion, breaking the previous record of $278.5 billion in 2011. 2015 was also the first year that saw renewables, excluding large hydro, account for the majority of all new power capacity (134 GW, making up 53.6% of the total). Of the renewables total, wind accounted for 72 GW and solar photovoltaics 56 GW; both record-breaking numbers and sharply up from 2014 figures (49 GW and 45 GW respectively). In financial terms, solar made up 56% of total new investment and wind accounted for 38%.
In 2014 global wind power capacity expanded 16% to 369,553 MW. Yearly wind energy production is also growing rapidly and has reached around 4% of worldwide electricity usage, 11.4% in the EU, and it is widely used in Asia, and the United States. In 2015, worldwide installed photovoltaics capacity increased to 227 gigawatts (GW), sufficient to supply 1 percent of global electricity demands. Solar thermal energy stations operate in the United States and Spain, and as of 2016, the largest of these is the 392 MW Ivanpah Solar Electric Generating System in California. The world's largest geothermal power installation is The Geysers in California, with a rated capacity of 750 MW. Brazil has one of the largest renewable energy programs in the world, involving production of ethanol fuel from sugar cane, and ethanol now provides 18% of the country's automotive fuel. Ethanol fuel is also widely available in the United States.
In 2017, investments in renewable energy amounted to US$279.8 billion worldwide, with China accounting for US$126.6 billion or 45% of the global investments, the US for US$40.5 billion, and Europe for US$40.9 billion. A recent review of the literature concluded that as greenhouse gas (GHG) emitters begin to be held liable for damages resulting from GHG emissions resulting in climate change, a high value for liability mitigation would provide powerful incentives for deployment of renewable energy technologies.
Renewable energy technologies are getting cheaper, through technological change and through the benefits of mass production and market competition. A 2018 report from the International Renewable Energy Agency (IRENA) found that the cost of renewable energy is quickly falling, and will likely be equal to or less than the cost of non-renewables such as fossil fuels by 2020. The report found that solar power costs have dropped 73% since 2010 and onshore wind costs have dropped by 23% in that same timeframe.
Current projections concerning the future cost of renewables vary, however. The EIA has predicted that almost two-thirds of net additions to power capacity will come from renewables by 2020 due to the combined policy benefits of reduced local pollution, decarbonisation and energy diversification.
According to a 2018 report by Bloomberg New Energy Finance, wind and solar power are expected to generate roughly 50% of the world's energy needs by 2050, while coal powered electricity plants are expected to drop to just 11%.
Hydro-electricity and geothermal electricity produced at favourable sites are now the cheapest way to generate electricity. Renewable energy costs continue to drop, and the levelised cost of electricity (LCOE) is declining for wind power, solar photovoltaic (PV), concentrated solar power (CSP) and some biomass technologies. Renewable energy is also the most economic solution for new grid-connected capacity in areas with good resources. As the cost of renewable power falls, the scope of economically viable applications increases. Renewable technologies are now often the most economic solution for new generating capacity. Where "oil-fired generation is the predominant power generation source (e.g. on islands, off-grid and in some countries) a lower-cost renewable solution almost always exists today". A series of studies by the US National Renewable Energy Laboratory modeled the "grid in the Western US under a number of different scenarios where intermittent renewables accounted for 33 percent of the total power." In the models, inefficiencies in cycling the fossil fuel plants to compensate for the variation in solar and wind energy resulted in an additional cost of "between $0.47 and $1.28 to each MegaWatt hour generated"; however, the savings in the cost of the fuels saved "adds up to $7 billion, meaning the added costs are, at most, two percent of the savings."
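The levelised cost of electricity (LCOE) mentioned above is the discounted lifetime cost of a plant divided by its discounted lifetime generation. The capital cost, operating cost, capacity factor, lifetime and discount rate in this sketch are assumed example values, not figures from this article.

```python
def lcoe_per_mwh(capital_cost, annual_om_cost, annual_energy_mwh,
                 lifetime_years, discount_rate):
    """LCOE = discounted lifetime costs / discounted lifetime generation ($/MWh)."""
    costs = capital_cost
    energy = 0.0
    for year in range(1, lifetime_years + 1):
        factor = (1 + discount_rate) ** -year
        costs += annual_om_cost * factor
        energy += annual_energy_mwh * factor
    return costs / energy

# Assumed example: a 100 MW wind farm at a 35% capacity factor, $140 million capital
# cost, $3 million/year operating cost, 25-year life and a 6% discount rate.
annual_mwh = 100 * 8760 * 0.35
print(f"LCOE: {lcoe_per_mwh(140e6, 3e6, annual_mwh, 25, 0.06):.0f} $/MWh")
```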
In 2017 the world renewable hydropower capacity was 1,154 GW. Only a quarter of the world's estimated hydroelectric potential of 14,000 TWh/year has been developed; the remaining regional potential for the growth of hydropower is 71% in Europe, 75% in North America, 79% in South America, 95% in Africa, 95% in the Middle East, and 82% in Asia Pacific. However, the political realities of new reservoirs in western countries, economic limitations in the third world and the lack of a transmission system in undeveloped areas mean that perhaps 25% of the remaining potential can be developed before 2050, with the bulk of that in the Asia-Pacific area. Slow growth is taking place in Western countries, but not in the conventional dam-and-reservoir style of the past. New projects take the form of run-of-the-river and small hydro, neither using large reservoirs. It is popular to repower old dams, increasing their efficiency and capacity as well as their responsiveness on the grid. Where circumstances permit, existing dams such as the Russell Dam, built in 1985, may be updated with "pump back" facilities for pumped storage, which is useful for peak loads or to support intermittent wind and solar power. Countries with large hydroelectric developments, such as Canada and Norway, are spending billions to expand their grids to trade with neighboring countries having limited hydro.
Wind power is widely used in Europe, China, and the United States. From 2004 to 2017, worldwide installed capacity of wind power grew from 47 GW to 514 GW—a more than tenfold increase within 13 years. As of the end of 2014, China, the United States and Germany combined accounted for half of total global capacity. Several other countries have achieved relatively high levels of wind power penetration, such as 21% of stationary electricity production in Denmark, 18% in Portugal, 16% in Spain, and 14% in Ireland in 2010, and have since continued to expand their installed capacity. More than 80 countries around the world are using wind power on a commercial basis.
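The "more than tenfold increase within 13 years" corresponds to a compound annual growth rate of roughly 20%; a one-line check using the capacity figures quoted above:

```python
# Capacity figures quoted in the text above.
start_gw, end_gw, years = 47, 514, 13
cagr = (end_gw / start_gw) ** (1 / years) - 1
print(f"Compound annual growth rate: {cagr:.1%}")   # roughly 20% per year
```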
Wind turbines are increasing in power, with some commercially deployed models generating over 8 MW per turbine. More powerful models are in development; see the list of most powerful wind turbines.
Solar thermal energy capacity has increased from 1.3 GW in 2012 to 5.0 GW in 2017.
Spain is the world leader in solar thermal power deployment, with 2.3 GW deployed. The United States has 1.8 GW, most of it in California, where 1.4 GW of solar thermal power projects are operational. Several power plants have been constructed in the Mojave Desert, in the Southwestern United States. As of 2017, only four other countries have deployments above 100 MW: South Africa (300 MW), India (229 MW), Morocco (180 MW) and the United Arab Emirates (100 MW).
The United States conducted much early research in photovoltaics and concentrated solar power. The U.S. is among the top countries in the world in electricity generated by the Sun and several of the world's largest utility-scale installations are located in the desert Southwest.
The oldest solar thermal power plant in the world is the 354 megawatt (MW) SEGS thermal power plant in California. The Ivanpah Solar Electric Generating System is a solar thermal power project in the California Mojave Desert, 40 miles (64 km) southwest of Las Vegas, with a gross capacity of 377 MW. The 280 MW Solana Generating Station is a solar power plant near Gila Bend, Arizona, southwest of Phoenix, completed in 2013. When commissioned it was the largest parabolic trough plant in the world and the first U.S. solar plant with molten salt thermal energy storage.
In developing countries, three World Bank projects for integrated solar thermal/combined-cycle gas-turbine power plants in Egypt, Mexico, and Morocco have been approved.
Photovoltaics (PV) is a rapidly growing technology, with global capacity increasing from 177 GW at the end of 2014 to 385 GW in 2017.
PV uses solar cells assembled into solar panels to convert sunlight into electricity. PV systems range from small residential and commercial rooftop or building-integrated installations to large utility-scale photovoltaic power stations. The predominant PV technology is crystalline silicon, while thin-film solar cell technology accounts for about 10 percent of global photovoltaic deployment. In recent years, PV technology has improved its electricity generating efficiency, reduced the installation cost per watt as well as its energy payback time, and reached grid parity in at least 30 different markets by 2014.
Building-integrated photovoltaics or "onsite" PV systems use existing land and structures and generate power close to where it is consumed.
Photovoltaics grew fastest in China, followed by Japan and the United States. Italy meets 7.9 percent of its electricity demands with photovoltaic power—the highest share worldwide. Solar power is forecasted to become the world's largest source of electricity by 2050, with solar photovoltaics and concentrated solar power contributing 16% and 11%, respectively. This requires an increase of installed PV capacity to 4,600 GW, of which more than half is expected to be deployed in China and India.
Commercial concentrated solar power plants were first developed in the 1980s. As the cost of solar electricity has fallen, the number of grid-connected solar PV systems has grown into the millions and utility-scale solar power stations with hundreds of megawatts are being built. Many solar photovoltaic power stations have been built, mainly in Europe, China and the United States. The 1.5 GW Tengger Desert Solar Park, in China is the world's largest PV power station. Many of these plants are integrated with agriculture and some use tracking systems that follow the sun's daily path across the sky to generate more electricity than fixed-mounted systems.
Bioenergy global capacity in 2017 was 109 GW.
Biofuels provided 3% of the world's transport fuel in 2017.
Mandates for blending biofuels exist in 31 countries at the national level and in 29 states/provinces. According to the International Energy Agency, biofuels have the potential to meet more than a quarter of world demand for transportation fuels by 2050.
Since the 1970s, Brazil has had an ethanol fuel program which has allowed the country to become the world's second largest producer of ethanol (after the United States) and the world's largest exporter. Brazil's ethanol fuel program uses modern equipment and cheap sugarcane as feedstock, and the residual cane-waste (bagasse) is used to produce heat and power. There are no longer light vehicles in Brazil running on pure gasoline. By the end of 2008 there were 35,000 filling stations throughout Brazil with at least one ethanol pump. However, Operation Car Wash has seriously eroded public trust in oil companies and has implicated several high-ranking Brazilian officials.
Nearly all the gasoline sold in the United States today is mixed with 10% ethanol, and motor vehicle manufacturers already produce vehicles designed to run on much higher ethanol blends. Ford, Daimler AG, and GM are among the automobile companies that sell "flexible-fuel" cars, trucks, and minivans that can use gasoline and ethanol blends ranging from pure gasoline up to 85% ethanol. By mid-2006, there were approximately 6 million ethanol compatible vehicles on U.S. roads.
Global geothermal capacity in 2017 was 12.9 GW.
Geothermal power is cost effective, reliable, sustainable, and environmentally friendly, but has historically been limited to areas near tectonic plate boundaries. Recent technological advances have expanded the range and size of viable resources, especially for applications such as home heating, opening a potential for widespread exploitation. Geothermal wells release greenhouse gases trapped deep within the earth, but these emissions are usually much lower per energy unit than those of fossil fuels. As a result, geothermal power has the potential to help mitigate global warming if widely deployed in place of fossil fuels.
In 2017, the United States led the world in geothermal electricity production, with about 3.7 GW of installed capacity. The largest group of geothermal power plants in the world is located at The Geysers, a geothermal field in California. The Philippines follows the US as the second-highest producer of geothermal power in the world, with 1.9 GW of capacity online.
Renewable energy technology has sometimes been seen by critics as a costly luxury item, affordable only in the affluent developed world. This erroneous view persisted for many years; however, between 2016 and 2017, investments in renewable energy were higher in developing countries than in developed countries, with China leading global investment with a record 126.6 billion dollars. Many Latin American and African countries increased their investments significantly as well.
Renewable energy can be particularly suitable for developing countries. In rural and remote areas, transmission and distribution of energy generated from fossil fuels can be difficult and expensive. Producing renewable energy locally can offer a viable alternative.
Technology advances are opening up a huge new market for solar power: the approximately 1.3 billion people around the world who do not have access to grid electricity. Even though they are typically very poor, these people have to pay far more for lighting than people in rich countries because they use inefficient kerosene lamps. Solar power costs half as much as lighting with kerosene. As of 2010, an estimated 3 million households get power from small solar PV systems. Kenya is the world leader in the number of solar power systems installed per capita. More than 30,000 very small solar panels, each producing 12 to 30 watts, are sold in Kenya annually. Some Small Island Developing States (SIDS) are also turning to solar power to reduce their costs and increase their sustainability.
Micro-hydro configured into mini-grids also provides power. Over 44 million households use biogas made in household-scale digesters for lighting and/or cooking, and more than 166 million households rely on a new generation of more-efficient biomass cookstoves. Clean liquid fuels sourced from renewable feedstocks are used for cooking and lighting in energy-poor areas of the developing world. Alcohol fuels (ethanol and methanol) can be produced sustainably from non-food sugary, starchy, and cellulosic feedstocks. Project Gaia, Inc. and CleanStar Mozambique are implementing clean cooking programs with liquid ethanol stoves in Ethiopia, Kenya, Nigeria and Mozambique.
Renewable energy projects in many developing countries have demonstrated that renewable energy can directly contribute to poverty reduction by providing the energy needed for creating businesses and employment. Renewable energy technologies can also make indirect contributions to alleviating poverty by providing energy for cooking, space heating, and lighting. Renewable energy can also contribute to education, by providing electricity to schools.
Policies to support renewable energy have been vital to its expansion. Where Europe dominated in establishing energy policy in the early 2000s, most countries around the world now have some form of energy policy.
The International Renewable Energy Agency (IRENA) is an intergovernmental organization for promoting the adoption of renewable energy worldwide. It aims to provide concrete policy advice and facilitate capacity building and technology transfer. IRENA was formed in 2009 by 75 countries signing its charter. As of April 2019, IRENA has 160 member states. The then United Nations Secretary-General Ban Ki-moon said that renewable energy has the ability to lift the poorest nations to new levels of prosperity, and in September 2011 he launched the UN Sustainable Energy for All initiative to improve energy access, efficiency and the deployment of renewable energy.
The 2015 Paris Agreement on climate change motivated many countries to develop or improve renewable energy policies. In 2017, a total of 121 countries had adopted some form of renewable energy policy. National targets that year existed in 176 countries. In addition, there is also a wide range of policies at the state/provincial and local levels. Some public utilities help plan or install residential energy upgrades. Under President Barack Obama, United States policy encouraged the uptake of renewable energy in line with commitments to the Paris Agreement. Even though President Donald Trump later abandoned these goals, renewable investment is still on the rise.
Many national, state, and local governments have created green banks. A green bank is a quasi-public financial institution that uses public capital to leverage private investment in clean energy technologies. Green banks use a variety of financial tools to bridge market gaps that hinder the deployment of clean energy. The US military has also focused on the use of renewable fuels for military vehicles. Unlike fossil fuels, renewable fuels can be produced in any country, creating a strategic advantage. The US military has already committed itself to sourcing 50% of its energy consumption from alternative sources.
The drive to use 100% renewable energy, for electricity, transport, or even total primary energy supply globally, has been motivated by global warming and other ecological as well as economic concerns. The Intergovernmental Panel on Climate Change has said that there are few fundamental technological limits to integrating a portfolio of renewable energy technologies to meet most of total global energy demand. Renewable energy use has grown much faster than even advocates anticipated. At the national level, at least 30 nations around the world already have renewable energy contributing more than 20% of energy supply. Also, Professors S. Pacala and Robert H. Socolow have developed a series of "stabilization wedges" that can allow us to maintain our quality of life while avoiding catastrophic climate change, and "renewable energy sources," in aggregate, constitute the largest number of their "wedges".
Using 100% renewable energy was first suggested in a Science paper published in 1975 by Danish physicist Bent Sørensen. It was followed by several other proposals, until in 1998 the first detailed analysis of scenarios with very high shares of renewables was published. These were followed by the first detailed 100% scenarios. In 2006, Czisch published a PhD thesis showing that in a 100% renewable scenario energy supply could match demand in every hour of the year in Europe and North Africa. In the same year, Danish energy professor Henrik Lund published a first paper addressing the optimal combination of renewables, which was followed by several other papers on the transition to 100% renewable energy in Denmark. Since then, Lund has published several further papers on 100% renewable energy. After 2009, publications began to rise steeply, covering 100% scenarios for countries in Europe, the Americas, Australia and other parts of the world.
In 2011 Mark Z. Jacobson, professor of civil and environmental engineering at Stanford University, and Mark Delucchi published a study on 100% renewable global energy supply in the journal Energy Policy. They found that producing all new energy with wind power, solar power, and hydropower by 2030 is feasible and that existing energy supply arrangements could be replaced by 2050. Barriers to implementing the renewable energy plan are seen to be "primarily social and political, not technological or economic". They also found that energy costs with a wind, solar, water system should be similar to today's energy costs.
Similarly, in the United States, the independent National Research Council has noted that "sufficient domestic renewable resources exist to allow renewable electricity to play a significant role in future electricity generation and thus help confront issues related to climate change, energy security, and the escalation of energy costs … Renewable energy is an attractive option because renewable resources available in the United States, taken collectively, can supply significantly greater amounts of electricity than the total current or projected domestic demand."
The most significant barriers to the widespread implementation of large-scale renewable energy and low carbon energy strategies are primarily political and not technological. According to the 2013 "Post Carbon Pathways" report, which reviewed many international studies, the key roadblocks are: climate change denial, the fossil fuels lobby, political inaction, unsustainable energy consumption, outdated energy infrastructure, and financial constraints.
According to the World Bank, the "below 2 °C" climate scenario requires 3 billion tonnes of metals and minerals by 2050. The supply of mined resources such as zinc, molybdenum, silver, nickel and copper must increase by up to 500%. A 2018 analysis estimated the required increases in metal stocks for various sectors at between 1,000% (wind power) and 87,000% (personal vehicle batteries).
Other renewable energy technologies are still under development, and include cellulosic ethanol, hot-dry-rock geothermal power, and marine energy. These technologies are not yet widely demonstrated or have limited commercialization. Many are on the horizon and may have potential comparable to other renewable energy technologies, but still depend on attracting sufficient attention and research, development and demonstration (RD&D) funding.
There are numerous organizations within the academic, federal, and commercial sectors conducting large scale advanced research in the field of renewable energy. This research spans several areas of focus across the renewable energy spectrum. Most of the research is targeted at improving efficiency and increasing overall energy yields.
Multiple federally supported research organizations have focused on renewable energy in recent years. Two of the most prominent of these labs are Sandia National Laboratories and the National Renewable Energy Laboratory (NREL), both of which are funded by the United States Department of Energy and supported by various corporate partners. Sandia has a total budget of $2.4 billion while NREL has a budget of $375 million.
Collection of static electricity charges from water droplets on metal surfaces is an experimental technology that would be especially useful in low-income countries with relative air humidity over 60%.
Renewable electricity production from sources such as wind power and solar power is intermittent, which results in reduced capacity factors and requires either energy storage of capacity equal to its total output, or base-load power sources based on fossil fuels or nuclear power.
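Capacity factor, referred to above, is the actual energy produced over a period divided by what would have been produced running at rated power the whole time. The plant size and annual output below are assumed example values for illustration.

```python
def capacity_factor(annual_energy_mwh, rated_power_mw, hours_per_year=8760):
    """Capacity factor = actual output / output if run at rated power all year."""
    return annual_energy_mwh / (rated_power_mw * hours_per_year)

# Assumed example: a 50 MW solar farm producing 96,000 MWh in a year.
print(f"{capacity_factor(96_000, 50):.1%}")   # about 22%
```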
Since the power density per land area of renewable energy sources is at best three orders of magnitude smaller than that of fossil or nuclear power, renewable power plants tend to occupy thousands of hectares, causing environmental concerns and opposition from local residents, especially in densely populated countries. Solar power plants compete with arable land and nature reserves, while on-shore wind farms face opposition due to aesthetic concerns and noise, which affect both humans and wildlife. In the United States, the Massachusetts Cape Wind project was delayed for years partly because of aesthetic concerns. However, residents in other areas have been more positive. According to a town councilor, the overwhelming majority of locals believe that the Ardrossan Wind Farm in Scotland has enhanced the area. These concerns, when directed against renewable energy, are sometimes described as a "not in my back yard" (NIMBY) attitude.
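The land-use comparison can be made concrete with a rough power-density calculation. The power densities assumed below are order-of-magnitude, illustrative values of the kind found in the literature, not figures from this article.

```python
def land_area_hectares(average_output_mw, power_density_w_m2):
    """Land needed to supply a given average output at a given power density."""
    area_m2 = average_output_mw * 1e6 / power_density_w_m2
    return area_m2 / 10_000          # 1 hectare = 10,000 m2

# Assumed, order-of-magnitude power densities (W per m2 of land), for illustration only.
for name, density in [("solar farm", 7), ("wind farm", 2), ("thermal plant", 2000)]:
    print(f"1,000 MW average from a {name}: {land_area_hectares(1000, density):,.0f} ha")
```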
A recent UK Government document states that "projects are generally more likely to succeed if they have broad public support and the consent of local communities. This means giving communities both a say and a stake". In countries such as Germany and Denmark many renewable projects are owned by communities, particularly through cooperative structures, and contribute significantly to overall levels of renewable energy deployment.
The market for renewable energy technologies has continued to grow. Climate change concerns and growth in green jobs, coupled with high oil prices, peak oil, oil wars, oil spills, the promotion of electric vehicles and renewable electricity, nuclear disasters and increasing government support, are driving increasing renewable energy legislation, incentives and commercialization. New government spending, regulation and policies helped the industry weather the 2009 economic crisis better than many other sectors.
While renewables have been very successful in their ever-growing contribution to electrical power, no country dominated by fossil fuels has a plan to stop using them and obtain that power from renewables. Only Scotland and Ontario have stopped burning coal, largely due to good natural gas supplies. In the area of transportation, fossil fuels are even more entrenched and solutions harder to find. It is unclear whether the failures lie with policy or with renewable energy itself, but twenty years after the Kyoto Protocol fossil fuels are still our primary energy source and consumption continues to grow.
The International Energy Agency has stated that deployment of renewable technologies usually increases the diversity of electricity sources and, through local generation, contributes to the flexibility of the system and its resistance to central shocks.
From around 2010 onwards, there was increasing discussion about the geopolitical impact of the growing use of renewable energy. It was argued that former fossil fuel exporters would experience a weakening of their position in international affairs, while countries with abundant sunshine, wind, hydropower, or geothermal resources would be strengthened. Countries rich in critical materials for renewable energy technologies were also expected to rise in importance in international affairs.
The GeGaLo index of geopolitical gains and losses assesses how the geopolitical position of 156 countries may change if the world fully transitions to renewable energy resources. Former fossil fuel exporters are expected to lose power, while the positions of former fossil fuel importers and countries rich in renewable energy resources are expected to strengthen.
The ability of biomass and biofuels to contribute to a reduction in emissions is limited because both biomass and biofuels emit large amounts of air pollution when burned and in some cases compete with food supply. Furthermore, biomass and biofuels consume large amounts of water. Other renewable sources such as wind power, photovoltaics, and hydroelectricity have the advantage of being able to conserve water, lower pollution and reduce emissions.
The installations used to produce wind, solar and hydro power are an increasing threat to key conservation areas, with facilities built in areas set aside for nature conservation and other environmentally sensitive areas. They are often much larger than fossil fuel power plants, needing areas of land up to 10 times greater than coal or gas to produce equivalent energy amounts. More than 2,000 renewable energy facilities have been built, and more are under construction, in areas of environmental importance, threatening the habitats of plant and animal species across the globe. The authors' team emphasized that their work should not be interpreted as anti-renewables, because renewable energy is crucial for reducing carbon emissions. The key is ensuring that renewable energy facilities are built in places where they do not damage biodiversity.
Renewable energy devices depend on non-renewable resources such as mined metals and use vast amounts of land due to their small surface power density. Manufacturing of photovoltaic panels, wind turbines and batteries requires significant amounts of rare-earth elements and increases mining operations, which have social and environmental impacts. Due to the co-occurrence of rare-earth and radioactive elements (thorium, uranium and radium), rare-earth mining results in the production of low-level radioactive waste. | https://en.wikipedia.org/wiki?curid=25784 |
List of Roman emperors
The Roman emperors were the rulers of the Roman Empire dating from the granting of the title of Augustus to Gaius Julius Caesar Octavianus by the Roman Senate in 27 BC, after major roles played by the populist dictator and military leader Julius Caesar. Augustus maintained a facade of Republican rule, rejecting monarchical titles but calling himself (first man of the council) and (first citizen of the state). The title of Augustus was conferred on his successors to the imperial position. The style of government instituted by Augustus is called the Principate and continued until reforms by Diocletian. The modern word 'emperor' derives from the title imperator, which was granted by an army to a successful general; during the initial phase of the empire, the title was generally used only by the . For example, Augustus' official name was "Imperator Caesar Divi Filius Augustus".
The territory under command of the emperor had developed under the period of the Roman Republic as it invaded and occupied most of Europe and portions of northern Africa and western Asia. Under the republic, regions of the empire were ruled by provincial governors answerable to and authorised by the Senate and People of Rome. During the republic, the chief magistrates of Rome were two consuls elected each year; consuls continued to be elected in the imperial period, but their authority was subservient to that of the emperor, and the election was controlled by the emperor.
In the late 3rd century, after the Crisis of the Third Century, Diocletian formalised and embellished the recent manner of imperial rule, establishing the so-called Dominate period of the Roman Empire. This was characterised by the explicit increase of authority in the person of the Emperor, and the use of the style ("Our Lord"). The rise of powerful Barbarian tribes along the borders of the empire, the challenge they posed to the defense of far-flung borders, and unstable imperial succession led Diocletian to divide the administration of the Empire geographically in 286 with a co-Augustus.
In 313, Constantine the Great, the first Christian emperor, issued the Edict of Milan along with Licinius that granted freedom in the worship of Christianity. In 330, he established a second capital in Byzantium, which he renamed Constantinople. For most of the period from 286 to 480, there was more than one recognised senior emperor, with the division usually based in geographic terms. This division was consistently in place after the death of Theodosius I in 395, which historians have dated as the division between the Western Roman Empire and the Eastern Roman Empire. However, formally the Empire remained a single polity, with separate co-emperors in the separate courts. The fall of the Western Roman Empire, and so the end of a separate list of emperors below, is dated either from the date of 476 when Romulus Augustulus was deposed by the Germanic Herulians led by Odoacer or the date of 480, on the death of Julius Nepos, when Eastern Emperor Zeno ended recognition of a separate Western court. In the period that followed, the Empire is usually treated by historians as the Byzantine Empire governed by the Byzantine Emperors, although this designation is not used universally, and continues to be a subject of specialist debate today.
In the 7th century reign of Heraclius, the Byzantine–Sasanian War of 602–628 saw much of Rome's eastern territory lost to the Sasanian Empire, recovered by Heraclius, and then lost permanently to Arab Muslim conquests after the death of Muhammad and establishment of Islam. The Sasanian Empire was conquered by the Rashidun Caliphate, ending the Byzantine-Sasanian Wars.
The line of emperors continued until the death of Constantine XI Palaiologos during the Fall of Constantinople in 1453, when the remaining territories were captured by the Ottoman Empire under Mehmed II. The Ottoman dynasty carried on using the title of Caesar of Rome.
Counting all individuals to have possessed the full imperial title, including those who did not technically rule in their own right (e.g. co-emperors or minors during regencies), this list contains 196 emperors and 3 ruling empresses, for a total of 199 monarchs.
The emperors listed in this article are those generally agreed to have been 'legitimate' emperors, and who appear in published regnal lists. The word 'legitimate' is used by most authors, but usually without clear definition, perhaps not surprisingly, since the emperorship was itself rather vaguely defined legally. In Augustus' original formulation, the "princeps" was selected by either the Senate or "the people" of Rome, but quite quickly the legions became an acknowledged stand-in for "the people." A person could be proclaimed as emperor by their troops or by "the mob" in the street, but in theory needed to be confirmed by the Senate. The coercion that frequently resulted was implied in this formulation. Furthermore, a sitting emperor was empowered to name a successor and take him on as apprentice in government and in that case the Senate had no role to play, although it sometimes did when a successor lacked the power to inhibit bids by rival claimants. By the medieval (or Byzantine) period, the very definition of the Senate became vague as well, adding to the complication.
Lists of legitimate emperors are therefore partly influenced by the subjective views of those compiling them, and also partly by historical convention. Many of the 'legitimate' emperors listed here acceded to the position by usurpation, and many 'illegitimate' claimants had a legitimate claim to the position. Historically, the following criteria have been used to derive emperor lists:
So for instance, Aurelian, though acceding to the throne by usurpation, was the sole and undisputed monarch between 270 and 275, and thus was a legitimate emperor. Gallienus, though not in control of the whole Empire, and plagued by other claimants, was the legitimate heir of (the legitimate emperor) Valerian. Claudius Gothicus, though acceding illegally, and not in control of the whole Empire, was the only claimant accepted by the Senate, and thus, for his reign, was the legitimate emperor. Equally, during the Year of the Four Emperors, all claimants, though not undisputed, were at some point accepted by the Senate and are thus included; conversely, during the Year of the Five Emperors neither Pescennius Niger nor Clodius Albinus were accepted by the Senate, and are thus not included. There are a few examples where individuals were made co-emperor, but never wielded power in their own right (typically the child of an emperor); these emperors are legitimate, but are not included in regnal lists, and in this article are listed together with the senior emperor.
After 395, the list of emperors in the East is based on the same general criteria, with the exception that the emperor only had to be in undisputed control of the Eastern part of the empire, or be the legitimate heir of the Eastern emperor.
The situation in the West is more complex. Throughout the final years of the Western Empire (395–480) the Eastern emperor was considered the senior emperor, and a Western emperor was only legitimate if recognized as such by the Eastern emperor. Furthermore, after 455 the Western emperor ceased to be a relevant figure and there was sometimes no claimant at all. For the sake of historical completeness, all Western Emperors after 455 are included in this list, even if they were not recognized by the Eastern Empire; some of these technically illegitimate emperors are included in regnal lists, while others are not. For instance, Romulus Augustulus was technically a usurper who ruled only the Italian peninsula and was never legally recognized. However, he was traditionally considered the "last Roman Emperor" by 18th and 19th century western scholars and his overthrow by Odoacer used as the marking point between historical epochs, and as such he is usually included in regnal lists. However, modern scholarship has confirmed that Romulus Augustulus' predecessor, Julius Nepos continued to rule as emperor in the other Western holdings and as a figurehead for Odoacer's rule in Italy until Nepos' death in 480. Since the question of what constitutes an emperor can be ambiguous, and dating the "fall of the Western Empire" arbitrary, this list includes details of both figures.
"Note: all dates AD hereafter."
Note: To maintain control and improve administration, various schemes to divide the work of the Roman Emperor by sharing it between individuals were tried after 285. The "Tetrarchy" proclaimed by Diocletian in 293 split the empire into two halves, each to be ruled separately by two emperors: a senior "Augustus" and a junior "Caesar".
Note: Theodosius I was the last person to rule both halves of the Roman Empire, dividing the administration between his sons Arcadius and Honorius on his death.
Note: The classical Roman Empire is usually said to have ended with the deposition of Romulus Augustulus, with its continuation in the East referred to by modern scholars as the Byzantine Empire.
Note: Between 1204 and 1261 there was an interregnum when Constantinople was occupied by the crusaders of the Fourth Crusade and the Empire was divided into the Empire of Nicaea, the Empire of Trebizond and the Despotate of Epirus, which were all contenders for rule of the Empire. The Laskarid dynasty of the Empire of Nicaea is considered the legitimate continuation of the Roman Empire because they had the support of the (Orthodox) Patriarch of Constantinople and managed to re-take Constantinople. | https://en.wikipedia.org/wiki?curid=25791 |
Roman calendar
The Roman calendar was the calendar used by the Roman kingdom and republic. The term often includes the Julian calendar established by the reforms of the dictator Julius Caesar and emperor Augustus in the late 1st century BC and sometimes includes any system dated by inclusive counting towards months' kalends, nones, and ides in the Roman manner. The term usually excludes the Alexandrian calendar of Roman Egypt, which continued the unique months of that land's former calendar; the Byzantine calendar of the later Roman Empire, which usually dated the Roman months in the simple count of the ancient Greek calendars; and the Gregorian calendar, which refined the Julian system to bring it into still closer alignment with the tropical year.
Roman dates were counted inclusively forward to the next of three principal days: the first of the month (the kalends), a day shortly before the middle of the month (the ides), and eight days—nine, counting inclusively—before this (the nones). The original calendar consisted of ten months beginning in spring with March; winter was left as an unassigned span of days. These months ran for 38 nundinal cycles, each forming an eight-day week (nine days counted inclusively, hence the name) ended by religious rituals and a public market. The winter period was later divided into two months, January and February. The legendary early kings Romulus and Numa Pompilius were traditionally credited with establishing this early fixed calendar, which bears traces of its origin as an observational lunar one. In particular, the kalends, nones, and ides seem to have derived from the first sighting of the crescent moon, the first-quarter moon, and the full moon respectively. The system ran well short of the solar year, and it needed constant intercalation to keep religious festivals and other activities in their proper seasons. This is a typical element of lunisolar calendars. For superstitious reasons, such intercalation occurred within the month of February even after it was no longer considered the last month.
After the establishment of the Roman Republic, years began to be dated by consulships and control over intercalation was granted to the pontifices, who eventually abused their power by lengthening years controlled by their political allies and shortening the years in their rivals' terms of office. Having won his war with Pompey, Caesar used his position as Rome's chief pontiff to enact a calendar reform in 46, coincidentally making the year of his third consulship last for 446 days. In order to avoid interfering with Rome's religious ceremonies, the reform added all its days towards the ends of months and did not adjust any nones or ides, even in months which came to have 31 days. The Julian calendar was supposed to have a single leap day on 24 February (a doubled day) every fourth year, but following Caesar's assassination the priests figured this using inclusive counting and mistakenly added the bissextile day every three years. In order to bring the calendar back to its proper place, Augustus was obliged to suspend intercalation for one or two decades. The revised calendar remained slightly longer than the solar year; by the 16th century the date of Easter had shifted so far away from the vernal equinox that Pope Gregory XIII ordered the calendar's adjustment, resulting in the Gregorian calendar.
The original Roman calendar is believed to have been an observational lunar calendar whose months began from the first signs of a new crescent moon. Because a lunar cycle is about 29½ days long, such months would have varied between 29 and 30 days. Twelve such months would have fallen short of the solar year; without adjustment, such a year would have quickly rotated out of alignment with the seasons in the manner of the Islamic calendar. Given the seasonal aspects of the later calendar and its associated religious festivals, this was presumably avoided through some form of intercalation or the suspension of the calendar during winter.
Rome's 8-day week, the nundinal cycle, was shared with the Etruscans, who used it as the schedule of royal audiences. It was presumably a part of the early calendar and was credited in Roman legend variously to Romulus and Servius Tullius.
The Romans themselves described their first organized year as one with ten fixed months, each of 30 or 31 days. Such a decimal division fitted general Roman practice. The four 31-day months were called "full" (') and the others "hollow" ('). Its 304 days made up exactly 38 nundinal cycles. The system is usually said to have left the remaining 50-odd days of the year as an unorganized "winter", although Licinius Macer's lost history apparently stated the earliest Roman calendar employed intercalation instead, and Macrobius claims the 10-month calendar was allowed to shift until the summer and winter months were completely misplaced, at which time additional days belonging to no month were simply inserted into the calendar until it seemed things were restored to their proper place.
Later Roman writers credited this calendar to Romulus, their legendary first king and culture hero, although this was common with other practices and traditions whose origin had been lost to them. Some scholars doubt the existence of this calendar at all, as it is only attested in late Republican and Imperial sources and supported only by the misplaced names of the months from September to December. Rüpke also finds the coincidence of the length of the supposed "Romulan" year with the length of the first ten months of the Julian calendar to be suspicious.
Other traditions existed alongside this one, however. Plutarch's "Parallel Lives" recounts that Romulus's calendar had been solar but adhered to the general principle that the year should last for 360 days. Months were employed secondarily and haphazardly, with some counted as 20 days and others as 35 or more.
The attested calendar of the Roman Republic was quite different. It followed Greek calendars in assuming a lunar cycle of days and a solar year of synodic months ( days), which align every fourth year after the addition of two intercalary months. Two months were added at the end of the year to complete the cycle during winter, January and February, before the intercalary month inserted every two years; the intercalary month was sometimes known as Mercedonius.
The Romans did not follow the usual Greek practice in alternating 29- and 30-day months and a every other year. Instead, their 1st, 3rd, 5th, and 8th months had 31 days each; all the other months had 29 days except February, which had 28 days for three years and then 29 days in the fourth. The total of these months over 4 years differed from the Greeks by 5 days, meaning the Roman intercalary month always had 27 days. Similarly, within each month, the weeks did not vary in the Greek fashion between ; instead, the full months had two additional days in their first week and the other three weeks of every month ran for 8 days ("nine" by Roman reckoning). Still more unusually, the intercalary month was not placed at the end of the year but "within" the month of February after the Terminalia on the 23rd (""); the remaining days of February followed its completion (these last five days of February were actually named and counted inclusively as days before the calends of March and were traditionally part of the celebration for the new year). This seems to have arisen from Roman superstitions concerning the numbering and order of the months. The arrangement of the Roman calendar similarly seems to have arisen from Pythagorean superstitions concerning the luckiness of odd numbers.
These Pythagorean-based changes to the Roman calendar were generally credited by the Romans to Numa Pompilius, Romulus's successor and the second of Rome's seven kings, as were the two new months of the calendar. Most sources thought he had established intercalation with the rest of his calendar. Although Livy's Numa instituted a "lunar" calendar, the author claimed the king had instituted a 19-year system of intercalation equivalent to the Metonic cycle centuries before its development by Babylonian and Greek astronomers. Plutarch's account claims he ended the former chaos of the calendar by employing 12 months totaling 354 days—the length of the lunar and Greek years—and biennial intercalary months of 22 days.
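Plutarch's figures can be checked with a line of arithmetic: two 354-day years plus one 22-day intercalary month average out to 365 days per year, close to the solar year. A minimal sketch of that check:

```python
# Worked check of the biennial scheme attributed to Numa by Plutarch above.
ordinary_year = 354      # days in each of the two ordinary years
intercalary_month = 22   # days added once every two years
average = (2 * ordinary_year + intercalary_month) / 2
print(average)  # -> 365.0 days per year on average
```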
According to Livy's Periochae, the beginning of the consular year changed from March to January 1 in 154 BC to respond to a rebellion in Hispania. Plutarch believed Numa was responsible for placing January and February first in the calendar; Ovid states January began as the first month and February the last, with its present order owing to the Decemvirs. W. Warde Fowler believed the Roman priests continued to treat January and February as the last months of the calendar throughout the Republican period.
The consuls' terms of office were not always a modern calendar year, but ordinary consuls were elected or appointed annually. The traditional list of Roman consuls used by the Romans to date their years began in 509.
Gnaeus Flavius, a secretary ("scriba") to the censor App. Claudius Caecus, introduced a series of reforms in 304. Their exact nature is uncertain, although he is thought to have begun the custom of publishing the calendar in advance of the month, depriving the priests of some of their power but allowing for a more consistent calendar for official business.
Julius Caesar, following his victory in his civil war and in his role as "pontifex maximus", ordered a reformation of the calendar in 46. This was undertaken by a group of scholars apparently including the Alexandrian Sosigenes and the Roman M. Flavius. Its main lines involved the insertion of ten additional days throughout the calendar and regular intercalation of a single leap day every fourth year to bring the Roman calendar into close agreement with the solar year. The year 46 was the last of the old system and included 3 intercalary months, the first inserted in February and two more—' and '—before the kalends of December.
After Caesar's assassination, Mark Antony had Caesar's birth month Quintilis renamed July (') in his honor. After Antony's defeat at Actium, Augustus assumed control of Rome and, finding the priests had (owing to their inclusive counting) been intercalating every third year instead of every fourth, suspended the addition of leap days to the calendar for one or two decades until its proper position had been restored. See Julian calendar: Leap year error. In 8, the plebiscite "Lex Pacuvia de Mense Augusto" renamed Sextilis August (') in his honor.
In large part, this calendar continued unchanged under the Roman Empire. (Egyptians used the related Alexandrian calendar, which Augustus had adapted from their wandering ancient calendar to maintain its alignment with Rome's.) A few emperors altered the names of the months after themselves or their family, but such changes were abandoned by their successors. Diocletian began the 15-year indiction cycles beginning from the 297 census; these became the required format for official dating under Justinian. Constantine formally established the 7-day week by making Sunday an official holiday in 321. Consular dating became obsolete following the abandonment of appointing nonimperial consuls in 541. The Roman method of numbering the days of the month never became widespread in the Hellenized eastern provinces and was eventually abandoned by the Byzantine Empire in its calendar.
Roman dates were counted inclusively forward to the next one of three principal days within each month:
These are thought to reflect a prehistoric lunar calendar, with the kalends proclaimed after the sighting of the first sliver of the new crescent moon a day or two after the new moon, the nones occurring on the day of the first-quarter moon, and the ides on the day of the full moon. The kalends of each month were sacred to Juno and the ides to Jupiter. The day before each was known as its eve ('); the day after each (') was considered particularly unlucky.
The days of the month were expressed in early Latin using the ablative of time, denoting points in time, in the contracted form "the 6th December Kalends" ('). In classical Latin, this use continued for the three principal days of the month but other days were idiomatically expressed in the accusative case, which usually expressed a duration of time, and took the form "6th day before the December Kalends" ('). This anomaly may have followed the treatment of days in Greek, reflecting the increasing use of such date phrases as an absolute phrase able to function as the object of another preposition, or simply originated in a mistaken agreement of ' with the preposition ' once it moved to the beginning of the expression. In late Latin, this idiom was sometimes abandoned in favor of again using the ablative of time.
The kalends were the day for payment of debts and the account books (") kept for them gave English its word "calendar". The public Roman calendars were the "fasti", which designated the religious and legal character of each month's days. The Romans marked each day of such calendars with the letters:
Each day was also marked by a letter from A to H to indicate its place within the nundinal cycle of market days.
The nundinae were the market days which formed a kind of weekend in Rome, Italy, and some other parts of Roman territory. By Roman inclusive counting, they were reckoned as "ninth days" although they actually occurred every eighth day. Because the republican and Julian years were not evenly divisible into eight-day periods, Roman calendars included a column giving every day of the year a nundinal letter from A to H marking its place in the cycle of market days. Each year, the letter used for the markets would shift along the cycle. As a day when the city swelled with rural plebeians, they were overseen by the aediles and took on an important role in Roman legislation, which was supposed to be announced for three nundinal weeks (between ) in advance of its coming to a vote. The patricians and their clients sometimes exploited this fact as a kind of filibuster, since the tribunes of the plebs were required to wait another three-week period if their proposals could not receive a vote before dusk on the day they were introduced. Superstitions arose concerning the bad luck that followed a nundinae on the nones of a month or, later, on the first day of January. Intercalation was supposedly used to avoid such coincidences, even after the Julian reform of the calendar.
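A minimal sketch of the nundinal lettering described above, assuming for illustration that day 1 of the year is lettered A and that the year has the pre-Julian length of 355 days; because 355 is not a multiple of 8, the letter on which the market days fall shifts from one year to the next:

```python
# Sketch of nundinal letters: every day of the year carries a letter A-H.
LETTERS = "ABCDEFGH"

def nundinal_letter(day_of_year: int) -> str:
    """Letter assigned to a given day of the year (day 1 = 'A')."""
    return LETTERS[(day_of_year - 1) % 8]

# In a 355-day year the cycle does not come out even, so next year's market
# letter is displaced by 355 mod 8 = 3 places in the cycle.
print(nundinal_letter(1), nundinal_letter(9), 355 % 8)  # -> A A 3
```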
The 7-day week began to be observed in Italy in the early imperial period, as practitioners and converts to eastern religions introduced Hellenistic and Babylonian astrology, the Jewish Saturday sabbath, and the Christian Lord's Day. The system was originally used for private worship and astrology but had replaced the nundinal week by the time Constantine made Sunday ("") an official day of rest in 321. The hebdomadal week was also reckoned as a cycle of letters from A to G; these were adapted for Christian use as the dominical letters.
The names of Roman months originally functioned as adjectives (e.g., the January kalends occur in the January month) before being treated as substantive nouns in their own right (e.g., the kalends of January occur in January). Some of their etymologies are well-established: January and March honor the gods Janus and Mars; July and August honor the dictator Julius Caesar and his successor, the emperor Augustus; and the months Quintilis, Sextilis, September, October, November, and December are archaic adjectives formed from the ordinal numbers from five to ten, their position in the calendar when it began around the spring equinox in March. Others are uncertain. February may derive from the Februa festival or its eponymous ' ("purifications, expiatory offerings"), whose name may be either Sabine or preserve an archaic word for sulphuric. April may relate to the Etruscan goddess Apru or the verb ' ("to open"). May and June may honor Maia and Juno or derive from archaic terms for "senior" and "junior". A few emperors attempted to add themselves to the calendar after Augustus, but without enduring success.
In classical Latin, the days of each month were usually reckoned as:
Dates after the ides count forward to the kalends of the next month and are expressed as such. For example, March 19 was expressed as "the 14th day before the April Kalends" ('), without a mention of March itself. The day after a kalends, nones, or ides was also often expressed as the "day after" (') owing to their special status as particularly unlucky "black days".
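The countdown arithmetic in the example above can be made explicit. A minimal sketch, using Julian month lengths purely for illustration and relying on the inclusive counting described earlier (both the named day and the kalends themselves are counted):

```python
# Sketch of the inclusive countdown from a date after the ides to the next kalends.
MONTH_LENGTHS = {"March": 31, "April": 30}  # only what the example needs

def days_before_next_kalends(month: str, day: int) -> int:
    """Inclusive count from the given date to the kalends of the following month."""
    return MONTH_LENGTHS[month] - day + 2  # +2 because both endpoints are counted

# March 19 -> "the 14th day before the April Kalends"
print(days_before_next_kalends("March", 19))  # -> 14
```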
The anomalous status of the new 31-day months under the Julian calendar was an effect of Caesar's desire to avoid affecting the festivals tied to the nones and ides of various months. However, because the dates at the ends of the month all counted forward to the next kalends, they were all shifted by one or two days by the change. This created confusion with regard to certain anniversaries. For instance, Augustus's birthday on the 23rd day of September was ' in the old calendar but ' under the new system. The ambiguity caused honorary festivals to be held on either or both dates.
The Republican calendar only had 355 days, which meant that it would quickly unsynchronize from the solar year, causing, for example, agricultural festivals to occur out of season. The Roman solution to this problem was to periodically lengthen the calendar by adding extra days "within" February. February was broken into two parts, each with an odd number of days. The first part ended with the Terminalia on the 23rd ('), which was considered the end of the religious year; the five remaining days beginning with the Regifugium on the 24th (') formed the second part; and the intercalary month Mercedonius was inserted between them. In such years, the days between the ides and the Regifugium were counted down to either the Intercalary Kalends or to the Terminalia. The intercalary month counted down to nones and ides on its 5th and 13th day in the manner of the other short months. The remaining days of the month counted down towards the March Kalends, so that the end of Mercedonius and the second part of February were indistinguishable to the Romans, one ending on ' and the other picking up at ' and bearing the normal festivals of such dates.
Apparently because of the confusion of these changes or uncertainty as to whether an intercalary month would be ordered, dates after the February ides are attested as sometimes counting down towards the Quirinalia (Feb. 17), the Feralia (Feb. 21), or Terminalia (Feb. 23) rather than the intercalary or March kalends.
The third-century writer Censorinus says:
When it was thought necessary to add (every two years) an intercalary month of , so that the civil year should correspond to the natural (solar) year, this intercalation was in preference made in February, between Terminalia [23rd] and Regifugium [24th].
The fifth-century writer Macrobius says that the Romans intercalated in alternate years ("Saturnalia", 1.13.12); the intercalation was placed after 23 February and the remaining five days of February followed ("Saturnalia", 1.13.15). To avoid the nones falling on a nundine, where necessary an intercalary day was inserted "in the middle of the Terminalia, where they placed the intercalary month".
This is historically correct. In 167 BC Intercalaris began on the day after 23 February and in 170 BC it began on the second day after 23 February. Varro, writing in the first century BC, says "the twelfth month was February, and when intercalations take place the five last days of this month are removed." Since all the days after the Ides of Intercalaris were counted down to the beginning of March, Intercalaris had either 27 days (making 377 for the year) or 28 (making 378 for the year).
There is another theory which says that in intercalary years February had 23 or 24 days and Intercalaris had 27. No date is offered for the Regifugium in 378-day years. Macrobius describes a further refinement whereby, in one 8-year period within a 24-year cycle, there were only three intercalary years, each of 377 days. This refinement brings the calendar back in line with the seasons, and averages the length of the year to 365.25 days over 24 years.
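The 365.25-day average attributed to Macrobius can be verified with a short calculation, under the assumption (not stated explicitly above) that ordinary years have 355 days and that the intercalary years in the two normal 8-year periods alternate between 377 and 378 days:

```python
# Worked check of the 24-year cycle described by Macrobius, as read above.
normal_8_years = 4 * 355 + 2 * 377 + 2 * 378   # 4 ordinary + 4 intercalary years
special_8_years = 5 * 355 + 3 * 377            # only three 377-day intercalary years

total_days = 2 * normal_8_years + special_8_years
print(total_days, total_days / 24)  # -> 8766 365.25
```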
The Pontifex Maximus determined when an intercalary month was to be inserted. On average, this happened in alternate years. The system of aligning the year through intercalary months broke down at least twice: the first time was during and after the Second Punic War. It led to the reform of the 191 Acilian Law on Intercalation, the details of which are unclear, but it appears to have successfully regulated intercalation for over a century. The second breakdown was in the middle of the first century and may have been related to the increasingly chaotic and adversarial nature of Roman politics at the time. The position of Pontifex Maximus was not a full-time job; it was held by a member of the Roman elite, who would almost invariably be involved in the machinations of Roman politics. Because the term of office of elected Roman magistrates was defined in terms of a Roman calendar year, a Pontifex Maximus would have reason to lengthen a year in which he or his allies were in power or shorten a year in which his political opponents held office.
Although there are many stories to interpret the intercalation, a period of is always synodic month short. Obviously, the month beginning shifts forward (from the new moon, to the third quarter, to the full moon, to the first quarter, back to the new moon) after intercalation.
As mentioned above, Rome's legendary 10-month calendar notionally lasted for 304 days but was usually thought to make up the rest of the solar year during an unorganized winter period. The unattested but almost certain lunar year and the pre-Julian civil year were 354 or 355 days long, with the difference from the solar year more or less corrected by an irregular intercalary month. The Julian year was 365 days long, with a leap day doubled in length every fourth year, almost equivalent to the present Gregorian system.
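To make the comparison concrete, the annual drift of each scheme against a 365.25-day year (the Julian approximation of the solar year) can be tabulated in a few lines; the figures come straight from the paragraph above:

```python
# Rough annual drift of each calendar scheme against a 365.25-day year.
SOLAR = 365.25
for name, length in [("10-month year", 304),
                     ("pre-Julian civil year", 355),
                     ("Julian year", 365.25)]:
    print(f"{name}: about {SOLAR - length:.2f} days short per year")
```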
The calendar era before and under the Roman kings is uncertain but dating by regnal years was common in antiquity. Under the Roman Republic, from 509, years were most commonly described in terms of their reigning ordinary consuls. (Temporary and honorary consuls were sometimes elected or appointed but were not used in dating.) Consular lists were displayed on the public calendars. After the institution of the Roman Empire, regnal dates based on the emperors' terms in office became more common. Some historians of the later republic and early imperial eras dated from the legendary founding of the city of Rome (" or ). Varro's date for this was 753 but other writers used different dates, varying by several decades. Such dating was, however, never widespread. After the consuls waned in importance, most Roman dating was regnal or followed Diocletian's 15-year Indiction tax cycle. These cycles were not distinguished, however, so that "year 2 of the indiction" may refer to any of 298, 313, 328, &c. The Orthodox subjects of the Byzantine Empire used various Christian eras, including those based on Diocletian's persecutions, Christ's incarnation, and the supposed age of the world.
The Romans did not have records of their early calendars but, like modern historians, assumed the year originally began in March on the basis of the names of the months following June. The consul M. Fulvius Nobilior (r. 189) wrote a commentary on the calendar at the Temple of Hercules Musarum that claimed January had been named for Janus because the god faced both ways, suggesting it had been instituted as a first month. It was, however, usually said to have been instituted along with February, whose nature and festivals suggest it had originally been considered the last month of the year. The consuls' term of office—and thus the order of the years under the republic—seems to have changed several times. Their inaugurations were finally moved to 1 January (') in 153 to allow Q. Fulvius Nobilior to attack Segeda in Spain during the Celtiberian Wars, before which they had occurred on 15 March ('). There is reason to believe the inauguration date had been 1 May during the until 222 and Livy mentions earlier inaugurations on 15 May ('), 1 July ('), 1 August ('), 1 October ('), and 15 December ("). Under the Julian calendar, the year began on 1 January but years of the Indiction cycle began on 1 September.
In addition to Egypt's separate calendar, some provinces maintained their records using a local era. Africa dated its records sequentially from 39; Spain from 38. This dating system continued as the Spanish era used in medieval Spain.
The continuity of names from the Roman to the Gregorian calendar can lead to the mistaken belief that Roman dates correspond to Julian or Gregorian ones. In fact, the essentially complete list of Roman consuls allows general certainty of years back to the establishment of the republic, but the uncertainty as to the end of lunar dating and the irregularity of Roman intercalation means that dates which can be independently verified are invariably weeks to months outside of their "proper" place. Two astronomical events dated by Livy show the calendar 4 months out of alignment with the Julian date in 190 and 2 months out of alignment in 168. Thus, "the year of the consulship of Publius Cornelius Scipio Africanus and Publius Licinius Crassus" (usually given as "205") actually began on 15 March 205 and ended on 14 March 204 according to the Roman calendar, but may have begun as early as November or December 206 owing to its misalignment. Even following the establishment of the Julian calendar, the leap years were not applied correctly by the Roman priests, meaning dates are a few days out of their "proper" place until a few decades into Augustus's reign.
Given the paucity of records regarding the state of the calendar and its intercalation, historians have reconstructed the correspondence of Roman dates to their Julian and Gregorian equivalents from disparate sources. There are detailed accounts of the decades leading up to the Julian reform, particularly the speeches and letters of Cicero, which permit an established chronology back to about 58. The nundinal cycle and a few known synchronisms—e.g., a Roman date in terms of the Attic calendar and Olympiad—are used to generate contested chronologies back to the start of the First Punic War in 264. Beyond that, dates are roughly known based on clues such as the dates of harvests and seasonal religious festivals. | https://en.wikipedia.org/wiki?curid=25792 |
Revolver
A revolver (also called a wheel gun) is a repeating handgun that has a revolving cylinder containing multiple chambers (each holding a single cartridge) and at least one barrel for firing. Before firing a round, the hammer is cocked and the cylinder rotates partially, indexing one of the cylinder chambers into alignment with the barrel, which allows the bullet to be fired through the bore. The hammer cocking can be achieved by either the user manually pulling the hammer back (as in single-action), via internal linkage relaying a rearward movement of the trigger (as in double-action), or both (as in double/single-action). By sequentially rotating through each chamber, the revolver allows the user to fire multiple times until having to reload the gun, unlike older single-shot firearms that had to be reloaded after each shot.
Although largely surpassed in convenience and ammunition capacity by semi-automatic pistols, revolvers still remain popular as back-up and off-duty handguns among American law enforcement officers and security guards and are still common in the American private sector as defensive and sporting/hunting firearms. Famous revolver models include the Colt 1851 Navy Revolver, the Webley, the Colt Single Action Army, the Colt Official Police, the Smith & Wesson Model 10, the Smith & Wesson Model 29 of "Dirty Harry" fame, the Nagant M1895, and the Colt Python.
Though the majority of weapons using a revolver mechanism are handguns, other firearms may also have a revolver action. These include some models of grenade launchers, shotguns, rifles and cannons. Revolver weapons differ from Gatling-style rotary weapons in that in a revolver only the chambers rotate, while in a rotary weapon there are multiple full firearm actions with their own barrels which rotate around a common ammunition feed.
In the development of firearms, an important limiting factor was the time required to reload the weapon after it was fired. While the user was reloading, the weapon was useless, effectively providing an adversary the opportunity to attack the user. Several approaches to the problem of increasing the rate of fire were developed, the earliest involving multi-barrelled weapons which allowed two or more shots without reloading.
Later weapons featured multiple barrels revolving along a single axis.
During the late 16th century in China, Zhao Shi-zhen invented the Xun Lei Chong, a five-barreled musket revolver spear. Around the same time, the earliest examples of what today is called a revolver were made in Germany. These weapons featured a single barrel with a revolving cylinder holding the powder and ball. They would soon be made by many European gun-makers, in numerous designs and configurations. However, these weapons were difficult to use, complicated and prohibitively expensive to make, and as such they were not widely distributed.
In the early 19th century multiple-barrel handguns called "pepper-boxes" were popular. Originally they were muzzle loaders, but in 1837 the Belgian gunsmith Mariette invented a hammerless pepperbox with a ring trigger and turn-off barrels that could be unscrewed.
In 1836, an American, Samuel Colt, patented a popular revolver which led to the widespread use of the revolver. According to Colt, he came up with the idea for the revolver while at sea, inspired by the capstan, which had a ratchet and pawl mechanism on it, a version of which was used in his guns to rotate the cylinder by cocking the hammer. This provided a reliable and repeatable way to index each round and did away with the need to manually rotate the cylinder. Revolvers proliferated largely due to Colt's ability as a salesman. But his influence spread in other ways as well; the build quality of his company's guns became famous, and its armories in America and England trained several seminal generations of toolmakers and other machinists, who had great influence in other manufacturing efforts of the next half century.
Early revolvers were caplocks and loaded as a muzzle-loader: the user poured black powder into each chamber, rammed down a bullet on top of it, then placed percussion caps on the nipple at the rear of each chamber, where the hammer would fall on it. This was similar to loading a traditional single-shot muzzle-loading pistol, except that the powder and shot could be loaded directly into the front of the cylinder rather than having to be loaded down the whole length of the barrel. Importantly, this allowed the barrel itself to be rifled, since the user wasn't required to force the tight fitting bullet down the barrel in order to load it (a traditional muzzle-loading pistol had a smoothbore and relatively loose fitting shot, which allowed easy loading, but gave much less accuracy). When firing the next shot, the user would raise his pistol vertically as he cocked the hammer back so as to let the fragments of the burst percussion cap fall out so as to not jam the mechanism. Some of the most popular cap-and-ball revolvers were the Colt Model 1851 "Navy" Model, 1860 "Army" Model, and Colt Pocket Percussion revolvers, all of which saw extensive use in the American Civil War. Although American revolvers were the most common, European arms makers were making numerous revolvers by that time as well, many of which found their way into the hands of the American forces, including the single action Lefaucheux and LeMat revolver and the Beaumont–Adams and Tranter revolvers, which were early double-action weapons, in spite of being muzzle-loaders.
In 1854, Eugene Lefaucheux introduced the Lefaucheux Model 1854, the first revolver to use self-contained metallic cartridges rather than loose powder, pistol ball, and percussion caps. It is a single-action, pinfire revolver holding six rounds.
On November 17, 1856, Daniel B. Wesson and Horace Smith signed an agreement for the exclusive use of the Rollin White Patent at a rate of 25 cents for every revolver. Smith & Wesson began production late in 1857 and enjoyed years of exclusive production of rear-loading cartridge revolvers in America, due to their association with Rollin White, who held the patent and vigorously defended it against any perceived infringement by other manufacturers (much as Colt had done with his original patent on the revolver). Although White held the patent, other manufacturers were able to sell firearms using the design, provided they were willing to pay royalties.
White's patent expired in April 1869 after a third extension was refused. Other gun-makers were then allowed to produce their own weapons using the rear-loading method, without having to pay a royalty on each gun sold. Early guns were often conversions of earlier cap-and-ball revolvers, modified to accept metallic cartridges loaded from the rear, but later models, such as the Colt Model 1872 "Open Top" and the Smith & Wesson Model 3, were designed from the start as cartridge revolvers.
In 1873, Colt introduced the famous Model 1873, also known as the Single Action Army, the "Colt .45" (not to be confused with Colt-made models of the M1911 semi-automatic) or simply, "the Peacemaker", one of the most famous handguns ever made. This popular design, which was a culmination of many of the advances introduced in earlier weapons, fired 6 metallic cartridges and was offered in over 30 different calibers and various barrel lengths. It is still in production, along with numerous clones and lookalikes, and its overall appearance has remained the same since 1873. Although originally made for the United States Army, the Model 1873 was widely distributed and popular with civilians, ranchers, lawmen, and outlaws alike. Its design has influenced countless other revolvers. Colt has discontinued its production twice, but brought it back due to popular demand and continues to make it to this day.
In the U.S. the traditional single-action revolver still reigned supreme until the late 19th century. In Europe, however, arms makers were quick to adopt the double-action trigger. While the US was producing weapons like the Model 1873, the Europeans were building double-action models like the French MAS Modèle 1873 and the somewhat later British Enfield Mk I and II revolvers (Britain relied on cartridge conversions of the earlier Beaumont–Adams double-action prior to this). Colt's first attempt at a double action revolver to compete with the European manufacturers was the Colt Model 1877, which earned lasting notoriety for its overly complex, expensive and fragile trigger mechanism, which in addition to failing frequently, also had a terrible trigger pull unless given the attentions of a competent gunsmith.
In 1889, Colt introduced the Model 1889, the first truly modern double action revolver, which differed from earlier double action revolvers by having a "swing-out" cylinder, as opposed to a "top-break" or "side-loading" cylinder. Swing out cylinders quickly caught on, because they combined the best features of earlier designs. Top-break actions gave the ability to eject all empty shells simultaneously, and exposed all chambers for easy reloading, but having the frame hinged into two halves weakened the gun and negatively affected accuracy, due to lack of rigidity. "Side-loaders", like the earlier Colt Model 1871 and 1873, gave a rigid frame, but required the user to eject and load one chamber at a time, as they rotated the cylinder to line each chamber up with the side-mounted loading gate. Smith & Wesson followed 7 years later with the "Hand Ejector, Model 1896" in .32 S&W Long caliber, followed by the very similar, yet improved, Model 1899 (later known as the Model 10), which introduced the new .38 Special cartridge. The Model 10 went on to become the best selling handgun of the 20th century, at 6,000,000 units, and the .38 Special is still the most popular chambering for revolvers in the world. These new guns were an improvement over the Colt 1889 design since they incorporated a combined center-pin and ejector rod to lock the cylinder in position. The 1889 did not use a center pin and the cylinder was prone to move out of alignment.
Revolvers have remained popular to the present day in many areas, although in the military and law enforcement, they have largely been supplanted by magazine-fed semi-automatic pistols such as the Beretta M9, especially in circumstances where reload time and higher cartridge capacity are deemed important.
Elisha Collier of Boston, Massachusetts patented a flintlock revolver in Britain in 1818, and significant numbers were being produced in London by 1822. The origination of this invention is in doubt, as similar designs were patented in the same year by Artemus Wheeler in the United States and by Cornelius Coolidge in France. Samuel Colt submitted a British patent for his revolver in 1835 and an American patent (number 138) on February 25, 1836 for a "Revolving gun", and made the first production model on March 5 of that year.
Another revolver patent was issued to Samuel Colt on August 29, 1839. The February 25, 1836 patent was then reissued, entitled "Revolving gun", on October 24, 1848. This was followed by a patent on September 3, 1850 for a "Revolver", and by another on September 10, 1850 for a "Revolver". A patent was also issued to Roger C. Field for an economical device for minimizing the flash gap of a revolver between the barrel and the cylinder. In 1855, Rollin White patented the bored-through cylinder entitled "Improvement in revolving fire-arms". In 1856 Horace Smith & Daniel Wesson formed a partnership (S&W) and developed and manufactured a revolver chambered for a self-contained metallic cartridge.
A revolver works by having several firing chambers arranged in a circle in a cylindrical block that are brought into alignment with the firing mechanism and barrel one at a time. In contrast, other repeating firearms, such as bolt-action, lever-action, pump-action, and semi-automatic, have a single firing chamber and a mechanism to load and extract cartridges into it.
A single-action revolver requires the hammer to be pulled back by hand before each shot, which also revolves the cylinder. This leaves the trigger with just one "single action" left to perform - releasing the hammer to fire the shot - so the force and distance required to pull the trigger can be minimal. In contrast, with a self-cocking revolver, one long squeeze of the trigger pulls back the hammer and revolves the cylinder, then finally fires the shot. They can generally be fired faster than a single-action, but with reduced accuracy in the hands of most shooters.
Most modern revolvers are "traditional double-action", which means they may operate either in single-action or self-cocking mode. The accepted meaning of "double-action" has, confusingly, come to be the same as "self-cocking", so modern revolvers that cannot be pre-cocked are called "double-action-only". These are intended for concealed carry, because the hammer of a traditional design is prone to snagging on clothes when drawn. Most revolvers do not come with accessory rails, which are used for mounting lights and lasers; exceptions include the Smith & Wesson M&P R8 (.357 Magnum), the Smith & Wesson Model 325 Thunder Ranch (.45 ACP), and all versions of the Chiappa Rhino (.357 Magnum, 9×19mm, .40 S&W, or 9×21mm) except for the 2" model. However, certain revolvers, such as the Taurus Judge and Charter Arms revolvers, can be fitted with accessory rails.
Most commonly, such revolvers have 5 or 6 chambers, hence the common names of "six-gun" or "six-shooter". However, some revolvers have 7, 8, 9, or 10 chambers, often depending on the caliber, and at least one revolver has 12 chambers (the US Fire Arms Model 12/22). Each chamber has to be reloaded manually, which makes reloading a revolver a much slower procedure than reloading a semi-automatic pistol.
Compared to autoloading handguns, a revolver is often much simpler to operate and may have greater reliability. For example, should a semiautomatic pistol fail to fire, clearing the chamber requires manually cycling the action to remove the errant round, as cycling the action normally depends on the energy of a cartridge firing. With a revolver, this is not necessary as none of the energy for cycling the revolver comes from the firing of the cartridge, but is supplied by the user either through cocking the hammer or, in a double-action design, by just squeezing the trigger. Another significant advantage of revolvers is superior ergonomics, particularly for users with small hands. A revolver's grip does not hold a magazine, and it can be designed or customized much more than the grip of a typical semi-automatic. Partially because of these reasons, revolvers still hold significant market share as concealed carry and home-defense weapons.
A revolver can be kept loaded and ready to fire without fatiguing any springs and is not very dependent on lubrication for proper firing. Additionally, in the case of double-action-only revolvers there is no risk of accidental discharge from dropping alone, as the hammer is cocked by the trigger pull. However, the revolver's clockwork-like internal parts are relatively delicate and can become misaligned after a severe impact, and its revolving cylinder can become jammed by excessive dirt or debris.
Over the long period of development of the revolver, many calibers have been used. Some of these have proved more durable during periods of standardization and some have entered general public awareness. Among these are the .22 rimfire, a caliber popular for target shooting and teaching novice shooters; .38 Special and .357 Magnum, known for police use; the .44 Magnum, famous from Clint Eastwood's "Dirty Harry" films; and the .45 Colt, used in the Colt revolver of the Wild West. Introduced in 2003, the Smith & Wesson Model 500 is one of the most powerful revolvers, utilizing the .500 S&W Magnum cartridge.
Because the rounds in a revolver are headspaced on the rim, some revolvers are capable of chambering more than one type of ammunition. The .44 Magnum round will chamber the shorter .44 Special and shorter .44 Colt, likewise the .357 Magnum will safely chamber .38 Special and .38 Short Colt. In 1996 a revolver known as the Medusa M47 was made that could chamber 25 different cartridges with bullet diameters between .355" and .357".
Revolver technology lives on in other weapons used by the military. Some autocannons and grenade launchers use mechanisms similar to revolvers, and some riot shotguns use spring-loaded cylinders holding up to 12 rounds. In addition to serving as backup guns, revolvers still fill the specialized niche role as a shield gun; law enforcement personnel using a "bulletproof" gun shield sometimes opt for a revolver instead of a self-loading pistol, because the slide of a pistol may strike the front of the shield when fired. Revolvers do not suffer from this disadvantage. A second revolver may be secured behind the shield to provide a quick means of continuity of fire. Many police also still use revolvers as their duty weapon due to their relative mechanical simplicity and user friendliness.
With advances in technology and design, major revolver manufacturers began introducing polymer-framed revolvers around 2010, such as the Ruger LCR, Smith & Wesson Bodyguard 38, and Taurus Protector Polymer. The design incorporates polymer technology that lowers weight significantly, helps absorb recoil, and is strong enough to handle .38 Special +P and .357 Magnum loads. The polymer is used only on the lower frame and is joined to a metal alloy upper frame, barrel, and cylinder. Polymer construction is considered one of the major advancements in revolver history, because revolver frames had previously been metal alloy, mostly one-piece designs.
Another recent development in revolver technology is the Rhino, a revolver introduced by Italian manufacturer Chiappa in 2009 and first sold in the U.S. in 2010. The Rhino, built with the U.S. concealed carry market in mind, is designed so that the bullet fires from the bottom chamber of the cylinder instead of the top chamber as in standard revolvers. This is intended to reduce muzzle flip, allowing for faster and more accurate repeat shots. In addition, the cylinder cross-section is hexagonal instead of circular, further reducing the weapon's profile.
The first revolvers were "front loading" (also referred to as muzzleloading), and were a bit like muskets in that the powder and bullet were loaded separately. These were caplocks or "cap and ball" revolvers, because the caplock method of priming was the first to be compact enough to make a practical revolver feasible. When loading, each chamber in the cylinder was rotated out of line with the barrel, and charged from the front with loose powder and an oversized bullet. Next, the chamber was aligned with the ramming lever underneath the barrel. Pulling the lever would drive a rammer into the chamber, pushing the ball securely in place. Finally, the user would place percussion caps on the nipples on the rear face of the cylinder.
After each shot, a user was advised to raise his revolver vertically while cocking back the hammer so as to allow the fragments of the spent percussion cap to fall out safely. Otherwise, the fragments could fall into the revolver's mechanism and jam it. Caplock revolvers were vulnerable to "chain fires", wherein hot gas from a shot ignited the powder in the other chambers. This could be prevented by sealing the chambers with cotton, wax, or grease.
Loading a cylinder in this manner was a slow and awkward process and generally could not be done in the midst of battle. Some soldiers solved this by carrying multiple revolvers in the field. Another solution was to use a revolver with a detachable cylinder design. These revolvers allowed the shooter to quickly remove a cylinder and replace it with a full one.
In many of the first generation of cartridge revolvers (especially those that were converted after manufacture), the base pin on which the cylinder revolved was removed, and the cylinder taken from the revolver for loading. Most revolvers using this method of loading are single-action revolvers, although Iver Johnson produced double-action models with removable cylinders. The removable-cylinder design is employed in some modern "micro-revolvers" (usually in .22 caliber), in order to simplify their design. These weapons are small enough to fit in the palm of the hand.
Later single-action revolver models with a fixed cylinder used a loading gate at the rear of the cylinder that allowed insertion of one cartridge at a time for loading, while a rod under the barrel could be pressed rearward to eject the fired case.
The loading gate on the original Colt designs (and on nearly all single-action revolvers since, such as the famous Colt Single Action Army) is on the right side, which was done to facilitate loading while on horseback; with the revolver held in the left hand with the reins of the horse, the cartridges can be ejected and loaded with the right hand.
Because the cylinders in these types of revolvers are firmly attached at the front and rear of the frame, and the frame is typically full thickness all the way around, fixed cylinder revolvers are inherently strong designs. Accordingly, many modern large-caliber hunting revolvers tend to be based on the fixed cylinder design. Fixed cylinder revolvers can fire the strongest and most powerful cartridges, but at the price of being the slowest to load and reload; they cannot use speedloaders or moon clips, as only one chamber at a time is exposed to the loading gate.
In a top-break revolver, the frame is hinged at the bottom front of the cylinder. Releasing the lock and pushing the barrel down exposes the rear face of the cylinder. In most top-break revolvers, this act also operates an extractor that pushes the cartridges in the chambers back far enough that they will fall free, or can be removed easily. Fresh rounds are then inserted into the cylinder. The barrel and cylinder are then rotated back and locked in place, and the revolver is ready to fire.
Top break revolvers can be loaded more rapidly than fixed-frame revolvers, especially with the aid of a speedloader or moon clip. However, this design is much weaker and cannot handle high pressure rounds. While this design is mostly obsolete today, supplanted by the stronger yet equally convenient swing-out design, manufacturers have begun making reproductions of late 19th century designs for use in cowboy action shooting.
The most commonly found top-break revolvers were manufactured by Smith & Wesson, Webley & Scott, Iver Johnson, Harrington & Richardson, Manhattan Fire Arms, Meriden Arms and Forehand & Wadsworth.
The tip-up was the first revolver design used with metallic cartridges, in the Smith & Wesson Model 1; the barrel pivoted upwards, hinged on the forward end of the topstrap. On S&W tip-up revolvers, the barrel release catch is located on both sides of the frame in front of the trigger. Smith & Wesson discontinued the design with the third series of the Model 1 1/2, but it was fairly widely used in Europe in the 19th century, after an 1870 patent by Spirlet, which also included an ejector.
The most modern method of loading and unloading a revolver is by means of the "swing out cylinder". The cylinder is mounted on a pivot that is parallel to the chambers, and the cylinder swings out and down (to the left in most cases). An extractor is fitted, operated by a rod projecting from the front of the cylinder assembly. When pressed, it will push all fired rounds free simultaneously (as in top break models, the travel is designed to not completely extract longer, unfired rounds). The cylinder may then be loaded, singly or again with a speedloader, closed, and latched in place.
The pivoting part that supports the cylinder is called the crane; it is the weak point of swing-out cylinder designs. Using the method often portrayed in movies and television of flipping the cylinder open and closed with a flick of the wrist can in fact cause the crane to bend over time, throwing the cylinder out of alignment with the barrel. Lack of alignment between chamber and barrel is a dangerous condition, as it can impede the bullet's transition from chamber to barrel. This gives rise to higher pressures in the chamber, bullet damage, and the potential for an explosion if the bullet becomes stuck.
The shock of firing can exert a great deal of stress on the crane, as in most designs the cylinder is only held closed at one point, the rear of the cylinder. Stronger designs, such as the Ruger Super Redhawk, use a lock in the crane as well as the lock at the rear of the cylinder. This latch provides a more secure bond between cylinder and frame, and allows the use of larger, more powerful cartridges. Swing out cylinders are rather strong, but not as strong as fixed cylinders, and great care must be taken with the cylinder when loading, so as not to damage the crane.
One unusual design, by Merwin Hulbert, had the barrel and cylinder assembly rotate 90° and pull forward to eject shells from the cylinder.
In a single-action revolver, the hammer is manually cocked, usually with the thumb of the firing or supporting hand. This action advances the cylinder to the next round and locks the cylinder in place with the chamber aligned with the barrel. The trigger, when pulled, releases the hammer, which fires the round in the chamber. To fire again, the hammer must be manually cocked again. This is called "single-action" because the trigger only performs a single action, of releasing the hammer. Because only a single action is performed and trigger pull is lightened, firing a revolver in this way allows most shooters to achieve greater accuracy. Additionally, the need to cock the hammer manually acts as a safety. Unfortunately with some revolvers, since the hammer rests on the primer or nipple, accidental discharge from impact is more likely if all 6 chambers are loaded. The Colt Paterson Revolver, the Walker Colt, the Colt's Dragoon and the Colt Single Action Army pistol of the American Frontier era are all good examples of this system.
In double-action (DA), the stroke of the trigger pull generates two actions: the hammer is first cocked while the cylinder is rotated to align the next chamber with the barrel, and the hammer is then released to fire the round.
Thus, DA means that a cocking action separate from the trigger pull is unnecessary; every trigger pull will result in a complete cycle. This allows uncocked carry, while also allowing draw-and-fire using only the trigger. A longer and harder trigger stroke is the trade-off. However, this drawback can also be viewed as a safety feature, as the gun is safer against accidental discharges from being dropped.
Most double-action revolvers may be fired in two ways: in single-action mode, by manually cocking the hammer before pulling the trigger, or in double-action mode, by pulling the trigger through its full stroke to both cock and release the hammer.
Certain revolvers, called "double-action-only" (DAO) or, more correctly but less commonly, "self-cocking", lack the latch that enables the hammer to be locked to the rear, and thus can only be fired in the double-action mode. With no way to lock the hammer back, DAO designs tend to have "bobbed" or "spurless" hammers, and may even have the hammer completely covered by the revolver's frame (i.e., shrouded or hooded). These are generally intended for concealed carrying, where a hammer spur could snag when the revolver is drawn. The potential reduction in accuracy in aimed fire is offset by the increased capability for concealment.
DA and DAO revolvers were the standard-issue sidearm of countless police departments for many decades. Only in the 1980s and 1990s did the semiautomatic pistol begin to make serious inroads after the advent of safe actions. The reasons for these choices are the modes of carry and use. Double action is good for high-stress situations because it allows a mode of carry in which "draw and pull the trigger" is the only requirement—no safety catch release nor separate cocking stroke is required.
In the cap-and-ball days of the mid 19th century, two revolver models, the English Tranter and the American Savage "Figure Eight", used a method whereby the hammer was cocked by the shooter’s middle finger pulling on a second trigger below the main trigger.
Iver Johnson made an unusual model from 1940 to 1947 called the "Trigger Cocking Double Action". If the hammer was down, pulling the trigger would cock the hammer. If the trigger was pulled with the hammer cocked, it would then fire. This meant that to fire the revolver from a hammer down state, the trigger must be pulled twice.
The Zig Zag revolver is a 3D-printed .38-caliber revolver made public in May 2014 by Yoshitomo Imura, a Japanese citizen from Kawasaki. It was produced on a $500 3D printer using plastic filament, though Imura did not reveal the name of the printer. He was arrested in May 2014 after he posted a video online of himself firing the Zig Zag revolver. It is the first 3D-printed Japanese gun capable of discharging live cartridges.
As a general rule, revolvers cannot be effective with a sound suppressor ("silencer"), as there is usually a small gap between the revolving cylinder and the barrel which a bullet must traverse or jump when fired. From this opening, a rather loud report is produced. A suppressor can only suppress noise coming from the muzzle.
A suppressible revolver design does exist in the Nagant M1895, a Belgian designed revolver used by Imperial Russia and later the Soviet Union from 1895 through World War II. This revolver uses a unique cartridge whose case extends beyond the tip of the bullet, and a cylinder that moves forward to place the end of the cartridge inside the barrel when ready to fire. This bridges the gap between the cylinder and the barrel, and expands to seal the gap when fired. While the tiny gap between cylinder and barrel on most revolvers is insignificant to the internal ballistics, the seal is especially effective when used with a suppressor, and a number of suppressed Nagant revolvers have been used since its invention.
There is a modern revolver of Russian design, the OTs-38, which uses ammunition that incorporates the silencing mechanism into the cartridge case, making the gap between cylinder and barrel irrelevant as far as the suppression issue is concerned. The OTs-38 does need an unusually close and precise fit between the cylinder and barrel due to the shape of bullet in the special ammunition (Soviet SP-4), which was originally designed for use in a semi-automatic.
Additionally, the US Military experimented with designing a special version of the Smith & Wesson Model 29 for Tunnel Rats, called the Quiet Special Purpose Revolver or QSPR. Using special .40 caliber ammunition, it never entered official service.
The term "automatic revolver" has two different meanings, the first being used in the late nineteenth and early twentieth centuries when "automatic" referred not to the operational mechanism of firing, but of extraction and ejection of spent casings. An "automatic revolver" in this context is one which extracts empty fired cases "automatically," i.e., upon breaking open the action, rather than requiring manual extraction of each case individually with a sliding rod or pin (as in the Colt Single Action Army design). This term was widely used in the advertising of the period as a way to distinguish such revolvers from the far more common rod-extraction types.
In the second sense, "automatic revolver" refers to the mechanism of firing rather than extraction. Double-action revolvers use a long trigger pull to cock the hammer, thus negating the need to manually cock the hammer between shots. The disadvantage of this is that the long, heavy pull cocking the hammer makes the double-action revolver much harder to shoot accurately than a single-action revolver (although cocking the hammer of a double-action reduces the length and weight of the trigger pull). A rare class of revolvers, called automatic for its firing design, attempts to overcome this restriction, giving the high speed of a double-action with the trigger effort of a single-action. The Webley-Fosbery Automatic Revolver is the most famous commercial example. It was recoil-operated, and the cylinder and barrel recoiled backwards to cock the hammer and revolve the cylinder. Cam grooves were milled on the outside of the cylinder to provide a means of advancing to the next chamber—half a turn as the cylinder moved back, and half a turn as it moved forward. .38 caliber versions held eight shots, .455 caliber versions six. At the time, the few available automatic pistols were larger, less reliable, and more expensive. The automatic revolver was popular when it first came out, but was quickly superseded by the creation of reliable, inexpensive semi-automatic pistols.
In 1997, the Mateba company developed a type of recoil-operated automatic revolver, commercially named the Mateba Autorevolver, which uses the recoil energy to auto-rotate a normal revolver cylinder holding six or seven cartridges, depending on the model. The company has made several versions of its Autorevolver, including longer-barrelled and carbine variations, chambered for .357 Magnum, .44 Magnum and .454 Casull.
The Pancor Jackhammer is a combat shotgun based on a similar mechanism to an automatic revolver. It uses a blow-forward action to move the barrel forward (which unlocks it from the cylinder) and then rotate the cylinder and cock the hammer.
Revolvers were not limited to handguns. Because a longer-barrelled arm is more useful in military applications than a sidearm, the revolving mechanism was applied to both rifles and shotguns throughout its history, with mixed success.
Revolving rifles were an attempt to increase the rate of fire of rifles by combining them with the revolving firing mechanism that had been developed earlier for revolving pistols. Colt began experimenting with revolving rifles in the early 19th century, making them in a variety of calibers and barrel lengths. Colt revolving rifles were the first repeating rifles adopted by the U.S. Government, but they had their problems: they were issued because of their rate of fire, but after firing six shots the shooter had to take an excessive amount of time to reload, and on occasion Colt rifles discharged all their rounds at once, endangering the shooter. Even so, an early model was used in the Seminole Wars in 1838. During the Civil War, a LeMat carbine was made based on the LeMat revolver.
Colt briefly manufactured several revolving shotguns that were met with mixed success. The Colt Model 1839 Shotgun was manufactured between 1839 and 1841. Later, the Colt Model 1855 Shotgun, based on the Model 1855 revolving rifle, was manufactured between 1860 and 1863. Because of their low production numbers and age they are among the rarest of all Colt firearms.
The Armsel Striker was a modern take on the revolving shotgun that held 10 rounds of 12 Gauge ammunition in its cylinder. It was copied by Cobray as the Streetsweeper.
Taurus, along with its Brazilian partner company Rossi, manufactures a carbine variant of the Taurus Judge revolver known as the "Taurus/Rossi Circuit Judge". It comes in the original combination chambering of .410 bore and .45 Long Colt, as well as a .44 Remington Magnum chambering. The rifle has small blast shields attached to the cylinder to protect the shooter from hot gases escaping between the cylinder and barrel.
The MTs255 is a shotgun fed by a 5-round internal revolving cylinder. It is produced by the TsKIB SOO, Central Design and Research Bureau of Sporting and Hunting Arms. It is available in 12, 20, 28 and 32 gauges, and .410 bore.
The Hawk MM-1, Milkor MGL, RG-6, and RGP-40 are grenade launchers that use a revolver action. Because the cylinders are much more massive, they use a spring-wound mechanism to index the cylinder.
Revolver cannons use a motor-driven revolver-like mechanism to fire.
A six gun is a revolver that holds six cartridges. The cylinder in a six gun is often called a "wheel", and the six gun is itself often called a "wheel gun". Although a "six gun" can refer to any six-chambered revolver, it is typically a reference to the Colt Single Action Army, or its modern look-alikes such as the Ruger Vaquero and Beretta Stampede.
Until the 1970s, when older-design revolvers such as the Colt Single Action Army and Ruger Blackhawk were re-engineered with drop safeties (such as firing pin blocks, hammer blocks, or transfer bars) that prevent the firing pin from contacting the cartridge's primer unless the trigger is pulled, safe carry required positioning the hammer either over an empty chamber, reducing the available cartridges from six to five, or, on some models, between chambers on a pin or in a groove provided for that purpose, which kept the full six rounds available. This kept the uncocked hammer from resting directly on the primer of a cartridge. If not carried in this manner, the hammer rests directly on a primer and unintentional firing may occur if the gun is dropped or the hammer is struck. Some holster makers provided a thick leather thong to place underneath the hammer, which both allowed the gun to be carried fully loaded with all six rounds and secured it in the holster, helping to prevent its accidental loss.
Six guns are commonly used by single-action shooting enthusiasts in competitions designed to mimic the gunfights of the Old West, and for general target shooting, hunting, and personal defense. | https://en.wikipedia.org/wiki?curid=25794 |
Robert Freitas
Robert A. Freitas Jr. (born 1952) is a nanotechnology scientist.
Freitas holds a 1974 Bachelor's degree majoring in both physics and psychology from Harvey Mudd College, and a 1978 Juris Doctor (J.D.) degree from Santa Clara University School of Law. He has written more than 150 technical papers, book chapters, or popular articles on a diverse set of scientific, engineering, and legal topics.
Freitas began writing his "Nanomedicine" book series in 1994. Volume I was published in October 1999 by Landes Bioscience while Freitas was a Research Fellow at the Institute for Molecular Manufacturing. Volume IIA was published in October 2003 by Landes Bioscience.
In 2004, Freitas and Ralph Merkle coauthored and published "Kinematic Self-Replicating Machines", a comprehensive survey of the field of physical and hypothetical self-replicating machines.
In 2009, Freitas was awarded the Feynman Prize in Nanotechnology. | https://en.wikipedia.org/wiki?curid=25795 |
Reykjavík
Reykjavík is the capital and largest city of Iceland. It is located in southwestern Iceland, on the southern shore of Faxaflói bay. Its latitude is 64°08' N, making it the world's northernmost capital of a sovereign state. With a population of around 131,136 (and 233,034 in the Capital Region), it is the center of Iceland's cultural, economic, and governmental activity, and is a popular tourist destination.
Reykjavík is believed to be the location of the first permanent settlement in Iceland, which, according to Landnámabók, was established by Ingólfr Arnarson in AD 874. Until the 19th century, there was no urban development in the city location. The city was founded in 1785 as an official trading town and grew steadily over the following decades, as it transformed into a regional and later national centre of commerce, population, and governmental activities. It is among the cleanest, greenest, and safest cities in the world.
The first permanent settlement in Iceland by Norsemen is believed to have been established at Reykjavík by Ingólfr Arnarson around AD 870; this is described in "Landnámabók", or the Book of Settlement. Ingólfur is said to have decided the location of his settlement using a traditional Norse method: he cast his high seat pillars (Öndvegissúlur) into the ocean when he saw the coastline, then settled where the pillars came to shore. This story is widely regarded as a legend; it appears likely that he settled near the hot springs to keep warm in the winter and would not have decided the location by happenstance. Furthermore, it seems unlikely that the pillars drifted to that location from where they were said to have been thrown from the boat. Nevertheless, that is what the "Landnámabók" says, and it says furthermore that Ingólfur's pillars are still to be found in a house in the town.
Steam from hot springs in the region is said to have inspired Reykjavík's name, which loosely translates to Smoke Cove (the city is sometimes referred to as "Bay of Smoke" or "Smoky Bay" in English language travel guides). In the modern language, as in English, the word for 'smoke' and the word for fog or steamy vapour are not commonly confused, but this is believed to have been the case in the old language.
The original name was Reykjarvík (with an additional "r" representing the usual genitive ending of strong nouns), but this form had vanished from use by around 1800.
The Reykjavík area was farmland until the 18th century. In 1752, King Frederik V of Denmark donated the estate of Reykjavík to the Innréttingar Corporation; the name comes from the Danish-language word "indretninger", meaning institution. The leader of this movement was . In the 1750s, several houses were built to house the wool industry, which was Reykjavík's most important employer for a few decades and the original reason for its existence. Other industries were undertaken by the Innréttingar, such as fisheries, sulphur mining, agriculture, and shipbuilding.
The Danish Crown abolished monopoly trading in 1786 and granted six communities around the country an exclusive trading charter. Reykjavík was one of them and the only one to hold on to the charter permanently. 1786 is thus regarded as the date of the city's founding. Trading rights were limited to subjects of the Danish Crown, and Danish traders continued to dominate trade in Iceland. Over the following decades, their business in Iceland expanded. After 1880, free trade was expanded to all nationalities, and the influence of Icelandic merchants started to grow.
Icelandic nationalist sentiment gained influence in the 19th century, and the idea of Icelandic independence became widespread. Reykjavík, as Iceland's only city, was central to such ideas. Advocates of an independent Iceland realized that a strong Reykjavík was fundamental to that objective. All the important events in the history of the independence struggle were important to Reykjavík as well. In 1845 Alþingi, the general assembly formed in 930 AD, was re-established in Reykjavík; it had been suspended a few decades earlier when it was located at Þingvellir. At the time it functioned only as an advisory assembly, advising the king about Icelandic affairs. The location of Alþingi in Reykjavík effectively established the city as the capital of Iceland.
In 1874, Iceland was given a constitution; with it, Alþingi gained some limited legislative powers and in essence became the institution that it is today. The next step was to move most of the executive power to Iceland: Home Rule was granted in 1904 when the office of Minister For Iceland was established in Reykjavík. The biggest step towards an independent Iceland was taken on 1 December 1918 when Iceland became a sovereign country under the Crown of Denmark, the Kingdom of Iceland.
By the 1920s and 1930s most of the growing Icelandic fishing trawler fleet sailed from Reykjavík; cod production was its main industry, but the Great Depression hit Reykjavík hard with unemployment, and labour union struggles sometimes became violent.
On the morning of 10 May 1940, following the German occupation of Denmark and Norway on 9 April 1940, four British warships approached Reykjavík and anchored in the harbour. In a few hours, the allied occupation of Reykjavík was complete. There was no armed resistance, and taxi and truck drivers even assisted the invasion force, which initially had no motor vehicles. The Icelandic government had received many requests from the British government to consent to the occupation, but it always declined on the basis of the Neutrality Policy. For the remaining years of World War II, British and later American soldiers occupied camps in Reykjavík, and the number of foreign soldiers in Reykjavík became about the same as the local population of the city. The Royal Regiment of Canada formed part of the garrison in Iceland during the early part of the war.
The economic effects of the occupation were positive for Reykjavík: the unemployment of the Depression years vanished, and construction work began. The British built Reykjavík Airport, which is still in service today, mostly for short haul flights (to domestic destinations and Greenland). The Americans, meanwhile, built Keflavík Airport, situated west of Reykjavík, which became Iceland's primary international airport. In 1944, the Republic of Iceland was founded and a president, elected by the people, replaced the king; the office of the president was placed in Reykjavík.
In the post-war years, the growth of Reykjavík accelerated. An exodus from the rural countryside began, largely because improved technology in agriculture reduced the need for manpower, and because of a population boom resulting from better living conditions in the country. A once-primitive village was rapidly transformed into a modern city. Private cars became common, and modern apartment complexes rose in the expanding suburbs.
In 1972, Reykjavík hosted the world chess championship between Bobby Fischer and Boris Spassky. The 1986 Reykjavík Summit between Ronald Reagan and Mikhail Gorbachev underlined Reykjavík's international status. Deregulation in the financial sector and the computer revolution of the 1990s again transformed Reykjavík. The financial and IT sectors are now significant employers in the city. The city has fostered some world-famous talents in recent decades, such as Björk, Ólafur Arnalds and bands Múm, Sigur Rós and Of Monsters and Men, poet Sjón and visual artist Ragnar Kjartansson.
Reykjavík is located in the southwest of Iceland. The Reykjavík area coastline is characterized by peninsulas, coves, straits, and islands.
During the Ice Age (up to 10,000 years ago) a large glacier covered parts of the city area, reaching as far out as Álftanes. Other parts of the city area were covered by sea water. In the warm periods and at the end of the Ice Age, some hills like Öskjuhlíð were islands. The former sea level is indicated by sediments (with clams) reaching (at Öskjuhlíð, for example) as far as above the current sea level. The hills of Öskjuhlíð and Skólavörðuholt appear to be the remains of former shield volcanoes which were active during the warm periods of the Ice Age. After the Ice Age, the land rose as the heavy load of the glaciers fell away, and began to look as it does today.
The capital city area continued to be shaped by earthquakes and volcanic eruptions, like the one 4,500 years ago in the mountain range Bláfjöll, when the lava coming down the Elliðaá valley reached the sea at the bay of Elliðavogur.
The largest river to run through Reykjavík is the Elliðaá River, which is non-navigable. It is one of the best salmon fishing rivers in the country. Mount Esja, at , is the highest mountain in the vicinity of Reykjavík.
The city of Reykjavík is mostly located on the Seltjarnarnes peninsula, but the suburbs reach far out to the south and east. Reykjavík is a spread-out city: most of its urban area consists of low-density suburbs, and houses are usually widely spaced. The outer residential neighbourhoods are also widely spaced from each other; in between them are the main traffic arteries and a lot of empty space. The city's latitude is 64°08' N, making it the world's northernmost capital of a sovereign state (Nuuk, the capital of Greenland, is slightly further north at 64°10', but Greenland is a constituent country, not an independent state).
Reykjavík has a subpolar oceanic climate (Köppen: "Cfc") closely bordering on a continental subarctic climate (Köppen: "Dfc") under the 0 °C isotherm. While not much different from a tundra climate (Köppen: "ET"), the city has had its present climate classification since the beginning of the twentieth century.
At 64° north, Reykjavik is characterized by extremes of day and night length over the course of the year. From 20 May to 24 July, daylight is essentially permanent as the sun never gets more than 5° below the horizon. Day length drops to less than five hours between 2 December and 10 January. The sun climbs just 3° above the horizon during this time. However, day length begins increasing rapidly during January and by month's end there are seven hours of daylight.
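The day-length figures above follow directly from solar geometry at this latitude. As an illustrative sketch (not part of the original article), the short Python program below estimates hours of direct daylight at Reykjavík's approximate latitude using the standard sunrise equation with a simple approximation for solar declination; the latitude constant and the declination formula are assumptions for illustration, and the calculation ignores atmospheric refraction and twilight, so it slightly understates the daylight actually observed.

```python
# Minimal sketch: day length at Reykjavík's approximate latitude (assumed 64.13° N).
# Uses the standard sunrise equation; refraction and twilight are ignored.
import math

LATITUDE = 64.13  # degrees north (assumed value for Reykjavík)

def day_length_hours(day_of_year: int, latitude_deg: float = LATITUDE) -> float:
    # Approximate solar declination (degrees) for the given day of the year.
    declination = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    # Sunrise equation: cos(hour angle at sunset) = -tan(latitude) * tan(declination).
    x = -math.tan(math.radians(latitude_deg)) * math.tan(math.radians(declination))
    if x <= -1.0:
        return 24.0  # midnight sun: the sun never sets
    if x >= 1.0:
        return 0.0   # polar night: the sun never rises
    return 2.0 * math.degrees(math.acos(x)) / 15.0  # 15 degrees of hour angle per hour

# Near the winter solstice (~day 355) this gives roughly 3.5 hours of direct sun,
# and near the summer solstice (~day 172) roughly 20.5 hours, with bright twilight
# filling most of the remaining summer night.
print(round(day_length_hours(355), 1), round(day_length_hours(172), 1))
```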
Despite its northern latitude, temperatures very rarely drop below in the winter. The proximity to the Arctic Circle and the strong moderating influence of the Atlantic Ocean on the Icelandic coast (the North Atlantic Current, an extension of the Gulf Stream) shape a relatively mild winter and cool summer. The city's coastal location does make it prone to wind, however, and gales are common in winter. Summers are cool, with temperatures fluctuating between , rarely exceeding . Reykjavík averages 147 days per year with measurable rain (at least 1 mm). Droughts are uncommon, although they occur in some summers. July and August are the warmest months of the year on average and January and February the coldest.
In the summer of 2007, no rain was measured for one month. Summer tends to be the sunniest season, although May receives the most sunshine of any individual month. Overall, the city receives around 1,300 annual hours of sunshine, which is comparable with other places in northern and north-western Europe such as Ireland and Scotland, but substantially less than equally northern regions with a more continental climate, including Finland. Nonetheless, Reykjavík is one of the cloudiest and coolest capitals of any nation in the world. The highest temperature recorded in Reykjavík was , reported on 30 July 2008, while the lowest-ever recorded temperature was , recorded on 30 January 1971. The coldest month on record is January 1918, with a mean temperature of . The warmest is July 1917, with a mean temperature of .
The Reykjavík City Council governs the city of Reykjavík and is directly elected by those aged over 18 domiciled in the city. The council has 23 members who are elected using the open list method for four-year terms.
The council selects members of boards, and each board controls a different field under the city council's authority. The most important board is the City Board, which wields executive power along with the City Mayor. The City Mayor is the senior public official and also the director of city operations. Other public officials control city institutions under the mayor's authority. Thus, the administration consists of two different parts: the political arm, made up of the city council and its boards, and the professional arm of public officials who run the city's institutions under the mayor's authority.
The Independence Party was historically the city's ruling party; it had an overall majority from its establishment in 1929 until 1978, when it narrowly lost. From 1978 until 1982, there was a three-party coalition composed of the People's Alliance, the Social Democratic Party, and the Progressive Party. In 1982, the Independence Party regained an overall majority, which it held for three consecutive terms. The 1994 election was won by Reykjavíkurlistinn (the R-list), an alliance of Icelandic socialist parties, led by Ingibjörg Sólrún Gísladóttir. This alliance won a majority in three consecutive elections, but was dissolved for the 2006 election when five different parties were on the ballot. The Independence Party won seven seats, and together with the one Progressive Party member it was able to form a new majority in the council, which took over in June 2006.
In October 2007 a new majority was formed on the council, consisting of members of the Progressive Party, the Social Democratic Alliance, the Left-Greens and the F-list (liberals and independents), after controversy regarding REI, a subsidiary of OR, the city's energy company. However, three months later the F-list formed a new majority together with the Independence Party. Ólafur F. Magnússon, the leader of the F-list, was elected mayor on 24 January 2008, and in March 2009 the Independence Party was due to appoint a new mayor. This changed once again on 14 August 2008 when the fourth coalition of the term was formed, by the Independence Party and the Social Democratic Alliance, with Hanna Birna Kristjánsdóttir becoming mayor.
The City Council election in May 2010 saw a new political party, The Best Party, win six of 15 seats, and they formed a coalition with the Social Democratic Alliance; comedian Jón Gnarr became mayor. At the 2014 election, the Social Democratic Alliance had its best showing yet, gaining five seats in the council, while Bright Future (successor to the Best Party) received two seats and the two parties formed a coalition with the Left-Green movement and the Pirate Party, which won one seat each. The Independence Party had its worst election ever, with only four seats.
The mayor is appointed by the city council; usually one of the council members is chosen, but they may also appoint a mayor who is not a member of the council.
The post was created in 1907 and advertised in 1908. Two applications were received, from Páll Einarsson, sheriff and town mayor of Hafnarfjörður and from Knud Zimsen, town councillor in Reykjavík. Páll was appointed on 7 May and was mayor for six years. At that time the city mayor received a salary of 4,500 ISK per year and 1,500 ISK for office expenses. The current mayor is Dagur B. Eggertsson.
Reykjavík is by far the largest and most populous settlement in Iceland. The municipality of Reykjavík had a population of 131,136 on 1 January 2020; that is 36% of the country's population. The Capital Region, which includes the capital and six municipalities around it, was home to 233,034 people; that is about 64% of the country's population.
On 1 January 2019, of the city's population of 128,793, immigrants of the first and second generation numbered 23,995 (18.6%), increasing from 12,352 (10.4%) in 2008 and 3,106 (2.9%) in 1998.
The most common foreign citizens are Poles, Lithuanians, and Latvians. About 80% of the city's foreign residents originate in European Union and EFTA member states, and over 58% are from the new member states of the EU, mainly former Eastern Bloc countries, which joined in 2004, 2007 and 2013.
Children of foreign origin form a more considerable minority in the city's schools: as many as a third in places. The city is also visited by thousands of tourists, students, and other temporary residents, at times outnumbering natives in the city centre.
Reykjavík is divided into 10 districts.
In addition there are hinterland areas which are not assigned to any district.
Borgartún is the financial centre of Reykjavík, hosting a large number of companies and three investment banks.
Reykjavík has been at the centre of Iceland's economic growth and subsequent economic contraction over the 2000s, a period referred to in foreign media as the "Nordic Tiger" years, or "Iceland's Boom Years". The economic boom led to a sharp increase in construction, with large redevelopment projects such as the Harpa concert hall and conference centre, among others. Many of these projects came to an abrupt halt in the economic crash of 2008 that followed.
Per capita car ownership in Iceland is among the highest in the world at roughly 522 vehicles per 1,000 residents, though Reykjavík is not severely affected by congestion. Several multi-lane highways (mainly dual carriageways) run between the most heavily populated areas and most frequently driven routes. Parking spaces are also plentiful in most areas. Public transportation consists of a bus system called Strætó bs. Route 1 (the Ring Road) runs through the city outskirts and connects the city to the rest of Iceland.
Reykjavík Airport, the second largest airport in the country (after Keflavík International Airport), is positioned inside the city, just south of the city centre. It is mainly used for domestic flights, as well as flights to Greenland and the Faroe Islands. Since 1962, there has been some controversy regarding the location of the airport, since it takes up a lot of valuable space in central Reykjavík.
Reykjavík has two seaports: the old harbour near the city centre, which is mainly used by fishermen and cruise ships, and "Sundahöfn" in the east of the city, which is the largest cargo port in the country.
There are no public railways in Iceland, because of its sparse population, but the locomotives used to build the docks are on display. Proposals have been made for a high-speed rail link between the city and Keflavík.
Volcanic activity provides Reykjavík with geothermal heating systems for both residential and industrial districts. In 2008, natural hot water was used to heat roughly 90% of all buildings in Iceland. Of the total annual geothermal energy use of 39 PJ, space heating accounted for 48%.
Most of the district heating in Iceland comes from three main geothermal power plants:
Safnahúsið (the Culture House) was opened in 1909 and has a number of important exhibits. Originally built to house the National Library and National Archives and also previously the location of the National Museum and Natural History Museum, in 2000 it was re-modeled to promote the Icelandic national heritage. Many of Iceland's national treasures are on display, such as the Poetic Edda, and the Sagas in their original manuscripts. There are also changing exhibitions of various topics.
Alcohol is expensive at bars. People tend to drink at home before going out. Beer was banned in Iceland until 1 March 1989 but has since become popular among many Icelanders as their alcoholic drink of choice.
The Iceland Airwaves music festival is staged annually in November. This festival takes place all over the city, and the concert venue Harpa is one of the main locations. Other venues that frequently organise live music events are Kex, Húrra, Gaukurinn (grunge, metal, punk), Mengi (centre for contemporary music, avant-garde music and experimental music), the Icelandic Opera and the National Theatre of Iceland for classical music.
The arrival of the new year is a particular cause for celebration to the people of Reykjavík. Icelandic law states that anyone may purchase and use fireworks during a certain period around New Year's Eve. As a result, every New Year's Eve the city is lit up with fireworks displays.
Reykjavik Golf Club was established in 1934. It is the oldest and largest golf club in Iceland. It consists of two 18-hole courses—one at Grafarholt and the other at Korpa. The Grafarholt golf course opened in 1963, which makes it the oldest 18-hole golf course in Iceland. The Korpa golf course opened in 1997.
Reykjavík is twinned with:
In July 2013, mayor Jón Gnarr filed a motion before the city council to terminate the city's relationship with Moscow, in response to a trend of anti-gay legislation in Russia. | https://en.wikipedia.org/wiki?curid=25798 |
Retrovirus
A retrovirus is a type of RNA virus that inserts a copy of its genome into the DNA of a host cell that it invades, thus changing the genome of that cell. Once inside the host cell's cytoplasm, the virus uses its own reverse transcriptase enzyme to produce DNA from its RNA genome, the reverse of the usual pattern, thus "retro" (backwards). The new DNA is then incorporated into the host cell genome by an integrase enzyme, at which point the retroviral DNA is referred to as a provirus. The host cell then treats the viral DNA as part of its own genome, transcribing and translating the viral genes along with the cell's own genes, producing the proteins required to assemble new copies of the virus.
Although retroviruses comprise several subfamilies, they fall into three basic groups: the oncoretroviruses (oncogenic retroviruses), the lentiviruses (slow retroviruses) and the spumaviruses (foamy viruses). Oncoretroviruses are able to cause cancer in some species, lentiviruses are able to cause severe immunodeficiency and death in humans and other animals, and spumaviruses are benign and not linked to any disease in humans or animals.
Many retroviruses cause serious diseases in humans, other mammals, and birds. Human retroviruses include HIV-1 and HIV-2, the cause of the disease AIDS. The human T-lymphotropic virus (HTLV) also causes disease in humans. The murine leukemia viruses (MLVs) cause cancer in mouse hosts. Retroviruses are valuable research tools in molecular biology, and they have been used successfully in gene delivery systems.
Virions of retroviruses consist of enveloped particles about 100 nm in diameter. The outer lipid envelope consists of glycoprotein. The virions also contain two identical single-stranded RNA molecules 7–10 kilobases in length. The two molecules are present as a dimer, formed by base pairing between complementary sequences. Interaction sites between the two RNA molecules have been identified as a "kissing-loop complex". Although virions of different retroviruses do not have the same morphology or biology, all the virion components are very similar.
The main virion components are:
The retroviral genome is packaged as viral particles. These viral particles are dimers of single-stranded, positive-sense, linear RNA molecules.
Retroviruses (and orterviruses in general) follow a layout of 5'–"gag"–"pro"–"pol"–"env"–3' in the RNA genome. "gag" and "pol" encode polyproteins that give rise to, respectively, the capsid proteins and the replication machinery. The "pol" region encodes enzymes necessary for viral replication, such as reverse transcriptase, protease and integrase. Depending on the virus, the genes may overlap or fuse into larger polyprotein chains. Some viruses contain additional genes. The lentivirus genus, the spumavirus genus, the HTLV / bovine leukemia virus (BLV) genus, and a newly introduced fish virus genus are retroviruses classified as complex. These viruses have genes called accessory genes, in addition to the gag, pro, pol and env genes. Accessory genes are located between pol and env, downstream from env (including the U3 region of the LTR), or in portions overlapping env. While accessory genes have auxiliary roles, they also coordinate and regulate viral gene expression.
In addition, some retroviruses may carry genes called oncogenes or onc genes from another class. Retroviruses with these genes (also called transforming viruses) are known for their ability to quickly cause tumors in animals and transform cells in culture into an oncogenic state.
The polyproteins are cleaved into smaller proteins each with their own function. The nucleotides encoding them are known as "subgenes".
When retroviruses have integrated their own genome into the germ line, their genome is passed on to a following generation. These endogenous retroviruses (ERVs), contrasted with exogenous ones, now make up 5–8% of the human genome. Most insertions have no known function and are often referred to as "junk DNA". However, many endogenous retroviruses play important roles in host biology, such as control of gene transcription, cell fusion during placental development in the course of the germination of an embryo, and resistance to exogenous retroviral infection. Endogenous retroviruses have also received special attention in the research of immunology-related pathologies, such as autoimmune diseases like multiple sclerosis, although endogenous retroviruses have not yet been proven to play any causal role in this class of disease.
While transcription was classically thought to occur only from DNA to RNA, reverse transcriptase transcribes RNA into DNA. The term "retro" in retrovirus refers to this reversal (making DNA from RNA) of the usual direction of transcription. It still obeys the central dogma of molecular biology, which states that information can be transferred from nucleic acid to nucleic acid but cannot be transferred back from protein to either protein or nucleic acid. Reverse transcriptase activity outside of retroviruses has been found in almost all eukaryotes, enabling the generation and insertion of new copies of retrotransposons into the host genome. These inserts are transcribed by enzymes of the host into new RNA molecules that enter the cytosol. Next, some of these RNA molecules are translated into viral proteins. The proteins encoded by the gag and pol genes are translated from genome-length mRNAs into Gag and Gag–Pol polyproteins. For example, the "gag" gene is translated into molecules of the capsid protein, and the "pol" gene into molecules of reverse transcriptase. Retroviruses need far more Gag protein than Pol protein and have developed advanced systems to synthesize the required amount of each: after Gag synthesis, nearly 95 percent of the ribosomes terminate translation, while the remainder continue translation to synthesize Gag–Pol. Glycosylation begins in the rough endoplasmic reticulum, where the "env" gene is translated from spliced mRNAs into molecules of the envelope protein. When the envelope protein molecules are carried to the Golgi complex, they are divided into surface glycoprotein and transmembrane glycoprotein by a host protease. These two glycoprotein products stay in close association, and they are transported to the plasma membrane after further glycosylation.
It is important to note that a retrovirus must "bring" its own reverse transcriptase in its capsid; because producing DNA from RNA is not a normal cellular process, the virus cannot rely on the enzymes of the infected cell to carry out the task.
Industrial drugs that are designed as protease and reverse-transcriptase inhibitors are made to target specific sites and sequences within their respective enzymes. However, these drugs can quickly become ineffective because the gene sequences that code for the protease and the reverse transcriptase mutate rapidly. These base changes alter specific codons and sites within the enzymes, which thereby escape drug targeting by losing the very sites the drug targets.
Because reverse transcription lacks the usual proofreading of DNA replication, a retrovirus mutates very often. This enables the virus to grow resistant to antiviral pharmaceuticals quickly, and impedes the development of effective vaccines and inhibitors for the retrovirus.
One difficulty faced with some retroviruses, such as the Moloney retrovirus, involves the requirement for cells to be actively dividing for transduction. As a result, cells such as neurons are very resistant to infection and transduction by retroviruses. This gives rise to a concern that insertional mutagenesis due to integration into the host genome might lead to cancer or leukemia. This is unlike "Lentivirus", a genus of "Retroviridae", which are able to integrate their RNA into the genome of non-dividing host cells.
Two RNA genomes are packaged into each retrovirus particle, but, after an infection, each virus generates only one provirus. After infection, reverse transcription occurs and this process is accompanied by recombination. Recombination involves template strand switching between the two genome copies (copy choice recombination) during reverse transcription. From 5 to 14 recombination events per genome occur at each replication cycle. Genetic recombination appears to be necessary for maintaining genome integrity and as a repair mechanism for salvaging damaged genomes.
The DNA formed after reverse transcription (the provirus) is longer than the RNA genome because each of its terminals carries the U3 - R - U5 sequence called the long terminal repeat (LTR). Thus, the 5' terminal gains an extra U3 sequence, while the other terminal gains an extra U5 sequence. LTRs are able to send signals for vital tasks to be carried out, such as initiation of RNA production or management of the rate of transcription. In this way, LTRs can control replication and hence the entire progress of the viral cycle. Although located in the nucleus, non-integrated retroviral cDNA is a very weak substrate for transcription. For this reason, an integrated provirus is necessary for permanent and effective expression of retroviral genes.
This DNA can be incorporated into the host genome as a provirus that can be passed on to progeny cells. The retroviral DNA is inserted at random into the host genome. Because of this, it can be inserted into oncogenes. In this way some retroviruses can convert normal cells into cancer cells. A provirus may remain latent in the cell for a long period of time before it is activated by a change in the cell's environment.
Studies of retroviruses led to the first demonstrated synthesis of DNA from RNA templates, a fundamental mode for transferring genetic material that occurs in both eukaryotes and prokaryotes. It has been speculated that the RNA to DNA transcription processes used by retroviruses may have first caused DNA to be used as genetic material. In this model, the RNA world hypothesis, cellular organisms adopted the more chemically stable DNA when retroviruses evolved to create DNA from the RNA templates.
An estimate of the date of evolution of the foamy-like endogenous retroviruses placed the time of the most recent common ancestor at > .
Gammaretroviral and lentiviral vectors for gene therapy have been developed that mediate stable genetic modification of treated cells by chromosomal integration of the transferred vector genomes. This technology is of use, not only for research purposes, but also for clinical gene therapy aiming at the long-term correction of genetic defects, e.g., in stem and progenitor cells. Retroviral vector particles with tropism for various target cells have been designed. Gammaretroviral and lentiviral vectors have so far been used in more than 300 clinical trials, addressing treatment options for various diseases. Retroviral mutations can be developed to make transgenic mouse models to study various cancers and their metastatic models.
Retroviruses that cause tumor growth include "Rous sarcoma virus" and "Mouse mammary tumor virus". Cancer can be triggered by proto-oncogenes that were mistakenly incorporated into proviral DNA or by the disruption of cellular proto-oncogenes. Rous sarcoma virus contains the src gene that triggers tumor formation. Later it was found that a similar gene in cells is involved in cell signaling, which was most likely excised with the proviral DNA. Nontransforming viruses can randomly insert their DNA into proto-oncogenes, disrupting the expression of proteins that regulate the cell cycle. The promoter of the provirus DNA can also cause overexpression of regulatory genes.
Retroviruses can cause diseases such as cancer and immunodeficiency. If viral DNA is integrated into host chromosomes, it can lead to permanent infections. It is therefore important to discover the body's response to retroviruses. Exogenous retroviruses in particular are associated with pathogenic disease. For example, mice carry the mouse mammary tumor virus (MMTV), a retrovirus that passes to newborn mice through mammary milk; mice carrying the virus develop mammary cancer at about 6 months of age. In addition, human T-lymphotropic virus 1 (HTLV-1), which infects human T cells, has been known in humans for many years, and it is estimated that this retrovirus causes leukemia in people between the ages of 40 and 50. It has a replication mechanism that can induce cancer. In addition to the usual gene sequence of retroviruses, HTLV-1 contains a fourth region, pX. This region encodes the Tax, Rex, p12, p13 and p30 regulatory proteins. The Tax protein initiates the leukemic process and organizes the transcription of all viral genes in the integrated HTLV proviral DNA.
Exogenous retroviruses are infectious RNA- or DNA-containing viruses which are transmitted from person to person.
Reverse-transcribing viruses fall into two groups of the Baltimore classification: Group VI and Group VII.
All members of Group VI use virally encoded reverse transcriptase, an RNA-dependent DNA polymerase, to produce DNA from the initial virion RNA genome. This DNA is often integrated into the host genome, as in the case of retroviruses and pseudoviruses, where it is replicated and transcribed by the host.
Group VI includes:
The family "Retroviridae" was previously divided into three subfamilies ("Oncovirinae", "Lentivirinae", and "Spumavirinae"), but are now divided into two: "Orthoretrovirinae" and "Spumaretrovirinae". The term oncovirus is now commonly used to describe a cancer-causing virus. This family now includes the following genera:
Note that according to ICTV 2017, genus "Spumavirus" has been divided into five genera, and its former type species "Simian foamy virus" is now upgraded to genus "Simiispumavirus" with not less than 14 species, including new type species "Eastern chimpanzee simian foamy virus".
Both families in Group VII have DNA genomes contained within the invading virus particles. The DNA genome is transcribed into both mRNA, for use as a transcript in protein synthesis, and pre-genomic RNA, for use as the template during genome replication. Virally encoded reverse transcriptase uses the pre-genomic RNA as a template for the creation of genomic DNA.
Group VII includes:
The latter family is closely related to the newly proposed
whilst families "Belpaoviridae", "Metaviridae", "Pseudoviridae", "Retroviridae", and "Caulimoviridae" constitute the order "Ortervirales".
Endogenous retroviruses are not formally included in this classification system, and are broadly classified into three classes, on the basis of relatedness to exogenous genera:
Antiretroviral drugs are medications for the treatment of infection by retroviruses, primarily HIV. Different classes of antiretroviral drugs act on different stages of the HIV life cycle. Combination of several (typically three or four) antiretroviral drugs is known as highly active anti-retroviral therapy (HAART).
"Feline leukemia virus" and "Feline immunodeficiency virus" infections are treated with biologics, including the only immunomodulator currently licensed for sale in the United States, Lymphocyte T-Cell Immune Modulator (LTCI). | https://en.wikipedia.org/wiki?curid=25799 |
Reincarnation
Reincarnation is the philosophical or religious belief that the non-physical essence of a living being starts a new life in a different physical form or body after biological death. It is also called rebirth or transmigration.
Reincarnation is a central tenet of Indian religions, namely Jainism, Buddhism, Sikhism and Hinduism, although there are Hindu groups that do not believe in reincarnation but believe in an afterlife. It is an esoteric belief in many streams of Orthodox Judaism and is found (in different forms) in some beliefs of North American Natives and some Indigenous Australians (while most believe in an afterlife or spirit world). A belief in rebirth/metempsychosis was held by Greek historic figures, such as Pythagoras, Socrates, and Plato. It is also a belief in various modern religions. Although the majority of denominations within Christianity and Islam do not believe that individuals reincarnate, particular groups within these religions do refer to reincarnation; these groups include the mainstream historical and contemporary followers of Cathars, Alawites, the Druze, and the Rosicrucians. The historical relations between these sects and the beliefs about reincarnation that were characteristic of Neoplatonism, Orphism, Hermeticism, Manicheanism, and Gnosticism of the Roman era as well as the Indian religions have been the subject of recent scholarly research. In recent decades, many Europeans and North Americans have developed an interest in reincarnation, and many contemporary works mention it.
The word "reincarnation" derives from Latin, literally meaning, "entering the flesh again". The Greek equivalent "metempsychosis" (μετεμψύχωσις) derives from "meta" (change) and "empsykhoun" (to put a soul into), a term attributed to Pythagoras. An alternate term is transmigration implying migration from one life (body) to another. Reincarnation refers to the belief that an aspect of every human being (or all living beings in some cultures) continues to exist after death, this aspect may be the soul or mind or consciousness or something transcendent which is reborn in an interconnected cycle of existence; the transmigration belief varies by culture, and is envisioned to be in the form of a newly born human being, or animal, or plant, or spirit, or as a being in some other non-human realm of existence. The term has been used by modern philosophers such as Kurt Gödel and has entered the English language. Another Greek term sometimes used synonymously is "palingenesis", "being born again".
Rebirth is a key concept found in major Indian religions, and discussed with various terms. "Punarjanman" (Sanskrit: पुनर्जन्मन्) means "rebirth, transmigration". Reincarnation is discussed in the ancient Sanskrit texts of Hinduism, Buddhism, and Jainism, with many alternate terms such as "punarāvṛtti" (पुनरावृत्ति), "punarājāti" (पुनराजाति), "punarjīvātu" (पुनर्जीवातु), "punarbhava" (पुनर्भव), "āgati-gati" (आगति-गति, common in Buddhist Pali text), "nibbattin" (निब्बत्तिन्), "upapatti" (उपपत्ति), and "uppajjana" (उप्पज्जन). These religions believe that this reincarnation is cyclic and an endless Saṃsāra, unless one gains spiritual insights that end this cycle, leading to liberation. The reincarnation concept is considered in Indian religions as a step that starts each "cycle of aimless drifting, wandering or mundane existence", but one that is an opportunity to seek spiritual liberation through ethical living and a variety of meditative, yogic ("marga"), or other spiritual practices. They consider the release from the cycle of reincarnations as the ultimate spiritual goal, and call the liberation by terms such as moksha, nirvana, "mukti" and "kaivalya". However, the Buddhist, Hindu and Jain traditions have differed, since ancient times, in their assumptions and in their details on what reincarnates, how reincarnation occurs and what leads to liberation.
"Gilgul", "Gilgul neshamot" or "Gilgulei Ha Neshamot" (Heb. גלגול הנשמות) is the concept of reincarnation in Kabbalistic Judaism, found in much Yiddish literature among Ashkenazi Jews. "Gilgul" means "cycle" and "neshamot" is "souls". Kabbalistic reincarnation says that humans reincarnate only to humans unless YHWH/Ein Sof/God chooses.
The origins of the notion of reincarnation are obscure. Discussion of the subject appears in the philosophical traditions of India. The Greek Pre-Socratics discussed reincarnation, and the Celtic Druids are also reported to have taught a doctrine of reincarnation.
The idea of reincarnation, saṃsāra, did not exist in the early Vedic religions. The idea of reincarnation has roots in the Upanishads of the late Vedic period (c. 1100 – c. 500 BCE), predating the Buddha and the Mahavira. The concepts of the cycle of birth and death, samsara, and liberation partly derive from ascetic traditions that arose in India around the middle of the first millennium BCE. Though no direct evidence of this has been found, the tribes of the Ganges valley or the Dravidian traditions of South India have been proposed as another early source of reincarnation beliefs.
The early Vedas do not mention the doctrine of Karma and rebirth but mention the belief in an afterlife. It is in the early Upanishads, which are pre-Buddha and pre-Mahavira, where these ideas are developed and described in a general way. Detailed descriptions first appear around the mid 1st millennium BCE in diverse traditions, including Buddhism, Jainism and various schools of Hindu philosophy, each of which gave unique expression to the general principle.
The texts of ancient Jainism that have survived into the modern era are post-Mahavira, likely from the last centuries of the 1st millennium BCE, and extensively mention rebirth and karma doctrines. The Jaina philosophy assumes that the soul ("Jiva" in Jainism, "Atman" in Hinduism) exists and is eternal, passing through cycles of transmigration and rebirth. After death, reincarnation into a new body is asserted to be instantaneous in early Jaina texts. Depending upon the accumulated karma, rebirth occurs into a higher or lower bodily form, either in heaven or hell or earthly realm. No bodily form is permanent: everyone dies and reincarnates further. Liberation ("kevalya") from reincarnation is possible, however, through removing and ending karmic accumulations to one's soul. From the early stages of Jainism on, a human being was considered the highest mortal being, with the potential to achieve liberation, particularly through asceticism.
The early Buddhist texts discuss rebirth as part of the doctrine of "Saṃsāra". This asserts that the nature of existence is a "suffering-laden cycle of life, death, and rebirth, without beginning or end". Also referred to as the wheel of existence ("Bhavacakra"), it is often mentioned in Buddhist texts with the term "punarbhava" (rebirth, re-becoming). Liberation from this cycle of existence, "Nirvana", is the foundation and the most important purpose of Buddhism. Buddhist texts also assert that an enlightened person knows his previous births, a knowledge achieved through high levels of meditative concentration. Tibetan Buddhism discusses death, bardo (an intermediate state), and rebirth in texts such as the "Tibetan Book of the Dead". While Nirvana is taught as the ultimate goal in Theravada Buddhism, and is essential to Mahayana Buddhism, the vast majority of contemporary lay Buddhists focus on accumulating good karma and acquiring merit to achieve a better reincarnation in the next life.
In early Buddhist traditions, "Saṃsāra" cosmology consisted of five realms through which the wheel of existence cycled. These included hells ("niraya"), hungry ghosts ("pretas"), animals ("tiryak"), humans ("manushya"), and gods ("devas", heavenly). In later Buddhist traditions, this list grew to six realms of rebirth with the addition of demi-gods ("asuras").
The earliest layers of Vedic text incorporate the concept of life, followed by an afterlife in heaven and hell based on cumulative virtues (merit) or vices (demerit). However, the ancient Vedic Rishis challenged this idea of the afterlife as simplistic, because people do not live equally moral or immoral lives. Among generally virtuous lives, some are more virtuous than others, and evil too has degrees; the texts assert that it would be unfair for people with varying degrees of virtue or vice to end up in heaven or hell in an "either-or" and disproportionate manner irrespective of how virtuous or vicious their lives were. They introduced the idea of an afterlife in heaven or hell in proportion to one's merit.
Early texts of Hinduism, Buddhism and Jainism share the concepts and terminology related to reincarnation. They also emphasize similar virtuous practices and karma as necessary for liberation and what influences future rebirths. For example, all three discuss various virtues – sometimes grouped as Yamas and Niyamas – such as non-violence, truthfulness, non-stealing, non-possessiveness, compassion for all living beings, charity and many others.
Hinduism, Buddhism and Jainism disagree in their assumptions and theories about rebirth. Hinduism relies on its foundational assumption that "soul, Self exists" (Atman, attā), in contrast to the Buddhist assumption that there is "no soul, no Self" (Anatta, anatman). Hindu traditions consider the soul to be the unchanging eternal essence of a living being, which journeys across reincarnations until it attains self-knowledge. Buddhism, in contrast, asserts a rebirth theory without a Self, and considers realization of non-Self or Emptiness as Nirvana (nibbana). Thus Buddhism and Hinduism have very different views on whether a self or soul exists, which impacts the details of their respective rebirth theories.
The reincarnation doctrine in Jainism differs from those in Buddhism, even though both are non-theistic Sramana traditions. Jainism, in contrast to Buddhism, accepts the foundational assumption that soul exists ("Jiva") and asserts this soul is involved in the rebirth mechanism. Further, Jainism considers asceticism as an important means to spiritual liberation that ends all reincarnation, while Buddhism does not.
Early Greek discussion of the concept dates to the 6th century BCE. An early Greek thinker known to have considered rebirth is Pherecydes of Syros (fl. 540 BCE). His younger contemporary Pythagoras (c. 570–c. 495 BCE), its first famous exponent, instituted societies for its diffusion. Some authorities believe that Pythagoras was Pherecydes' pupil, others that Pythagoras took up the idea of reincarnation from the doctrine of Orphism, a Thracian religion, or brought the teaching from India.
Plato (428/427–348/347 BCE) presented accounts of reincarnation in his works, particularly the "Myth of Er". In "Phaedo", Plato has his teacher Socrates, prior to his death, state: "I am confident that there truly is such a thing as living again, and that the living spring from the dead." However Xenophon does not mention Socrates as believing in reincarnation and Plato may have systematised Socrates' thought with concepts he took directly from Pythagoreanism or Orphism.
The Orphic religion, which taught reincarnation, organized itself into mystery schools at Eleusis and elsewhere around the 6th century BC, and produced a copious literature. Orpheus, its legendary founder, is said to have taught that the immortal soul aspires to freedom while the body holds it prisoner. The wheel of birth revolves, and the soul alternates between freedom and captivity round the wide circle of necessity. Orpheus proclaimed the need of the grace of the gods, Dionysus in particular, and of self-purification until the soul has completed the spiral ascent of destiny to live forever.
An association between Pythagorean philosophy and reincarnation was routinely accepted throughout antiquity. In the "Republic" Plato makes Socrates tell how Er, the son of Armenius, miraculously returned to life on the twelfth day after death and recounted the secrets of the other world. There are myths and theories to the same effect in other dialogues, in the Chariot allegory of the Phaedrus, in the Meno, Timaeus and Laws. The soul, once separated from the body, spends an indeterminate amount of time in "formland" (see The Allegory of the Cave in "The Republic") and then assumes another body.
In later Greek literature the doctrine is mentioned in a fragment of Menander and satirized by Lucian. In Roman literature it is found as early as Ennius, who, in a lost passage of his "Annals", told how he had seen Homer in a dream, who had assured him that the same soul which had animated both poets had once belonged to a peacock. Persius, in his satires (vi. 9), laughs at this; it is also referred to by Lucretius and Horace.
Virgil works the idea into his account of the Underworld in the sixth book of the Aeneid. It persists down to the late classic thinkers, Plotinus and the other Neoplatonists. In the Hermetica, a Graeco-Egyptian series of writings on cosmology and spirituality attributed to Hermes Trismegistus/Thoth, the doctrine of reincarnation is central.
In Greco-Roman thought, the concept of metempsychosis disappeared with the rise of Early Christianity, reincarnation being incompatible with the Christian core doctrine of salvation of the faithful after death. It has been suggested that some of the early Church Fathers, especially Origen, still entertained a belief in the possibility of reincarnation, but evidence is tenuous, and the writings of Origen as they have come down to us speak explicitly against it.
Some early Christian Gnostic sects professed reincarnation. The Sethians and followers of Valentinus believed in it. The followers of Bardaisan of Mesopotamia, a sect of the 2nd century deemed heretical by the Catholic Church, drew upon Chaldean astrology, to which Bardaisan's son Harmonius, educated in Athens, added Greek ideas including a sort of metempsychosis. Another such teacher was Basilides (132–? CE/AD), known to us through the criticisms of Irenaeus and the work of Clement of Alexandria (see also Neoplatonism and Gnosticism and Buddhism and Gnosticism).
In the third Christian century Manichaeism spread both east and west from Babylonia, then within the Sassanid Empire, where its founder Mani lived about 216–276. Manichaean monasteries existed in Rome in 312 AD. Noting Mani's early travels to the Kushan Empire and other Buddhist influences in Manichaeism, Richard Foltz attributes Mani's teaching of reincarnation to Buddhist influence. However the inter-relation of Manicheanism, Orphism, Gnosticism and neo-Platonism is far from clear.
In the 1st century BCE Alexander Cornelius Polyhistor wrote:
Julius Caesar recorded that the druids of Gaul, Britain and Ireland had metempsychosis as one of their core doctrines:
Hippolytus of Rome believed the Gauls had been taught the doctrine of reincarnation by a slave of Pythagoras named Zalmoxis. Conversely, Clement of Alexandria believed Pythagoras himself had learned it from the Celts and not the opposite, claiming he had been taught by Galatian Gauls, Hindu priests and Zoroastrians.
Surviving texts indicate that there was a belief in rebirth in Germanic paganism. Examples include figures from eddic poetry and sagas, potentially by way of a process of naming and/or through the family line. Scholars have discussed the implications of these attestations and proposed theories regarding belief in reincarnation among the Germanic peoples prior to Christianization and potentially to some extent in folk belief thereafter.
The belief in reincarnation had first existed among Jewish mystics in the Ancient World, among whom differing explanations were given of the afterlife, although with a universal belief in an immortal soul. Today, reincarnation is an esoteric belief within many streams of modern Judaism. Kabbalah teaches a belief in "gilgul", transmigration of souls, and hence the belief in reincarnation is universal in Hasidic Judaism, which regards the Kabbalah as sacred and authoritative, and is also held as an esoteric belief within Modern Orthodox Judaism. In Judaism, the Zohar, first published in the 13th century, discusses reincarnation at length, especially in the Torah portion "Balak." The most comprehensive kabbalistic work on reincarnation, "Shaar HaGilgulim", was written by Chaim Vital, based on the teachings of his mentor, the 16th century kabbalist Isaac Luria, who was said to know the past lives of each person through his semi-prophetic abilities. The 18th century Lithuanian master scholar and kabbalist, Elijah of Vilna, known as the Vilna Gaon, authored a commentary on the biblical Book of Jonah as an allegory of reincarnation.
The practice of conversion to Judaism is sometimes understood within Orthodox Judaism in terms of reincarnation. According to this school of thought in Judaism, when non-Jews are drawn to Judaism, it is because they had been Jews in a former life. Such souls may "wander among nations" through multiple lives, until they find their way back to Judaism, including through finding themselves born in a gentile family with a "lost" Jewish ancestor.
There is an extensive literature of Jewish folk and traditional stories that refer to reincarnation.
Taoist documents from as early as the Han Dynasty claimed that Lao Tzu appeared on earth as different persons in different times beginning in the legendary era of Three Sovereigns and Five Emperors. The (ca. 3rd century BC) "Chuang Tzu" states: "Birth is not a beginning; death is not an end. There is existence without limitation; there is continuity without a starting-point. Existence without limitation is Space. Continuity without a starting point is Time. There is birth, there is death, there is issuing forth, there is entering in."
Around the 11–12th century in Europe, several reincarnationist movements were persecuted as heresies, through the establishment of the Inquisition in the Latin west. These included the Cathar, Paterene or Albigensian church of western Europe, the Paulician movement, which arose in Armenia, and the Bogomils in Bulgaria.
Christian sects such as the Bogomils and the Cathars, who professed reincarnation and other gnostic beliefs, were referred to as "Manichean", and are today sometimes described by scholars as "Neo-Manichean". As there is no known Manichaean mythology or terminology in the writings of these groups there has been some dispute among historians as to whether these groups truly were descendants of Manichaeism.
While reincarnation has been a matter of faith in some communities from an early date, it has also frequently been argued for on principle, as Plato does when he argues that the number of souls must be finite because souls are indestructible; Benjamin Franklin held a similar view. Sometimes such convictions, as in Socrates' case, arise from a more general personal faith, at other times from anecdotal evidence such as Plato makes Socrates offer in the "Myth of Er".
During the Renaissance, translations of Plato, the Hermetica and other works fostered new European interest in reincarnation. Marsilio Ficino argued that Plato's references to reincarnation were intended allegorically; Shakespeare alluded to the doctrine of reincarnation; but Giordano Bruno was burned at the stake by authorities after being found guilty of heresy by the Roman Inquisition for his teachings. The Greek philosophical works nonetheless remained available and, particularly in northern Europe, were discussed by groups such as the Cambridge Platonists.
By the 19th century the philosophers Schopenhauer and Nietzsche could access the Indian scriptures for discussion of the doctrine of reincarnation, which recommended itself to the American Transcendentalists Henry David Thoreau, Walt Whitman and Ralph Waldo Emerson and was adapted by Francis Bowen into "Christian Metempsychosis".
By the early 20th century, interest in reincarnation had been introduced into the nascent discipline of psychology, largely due to the influence of William James, who raised aspects of the philosophy of mind, comparative religion, the psychology of religious experience and the nature of empiricism. James was influential in the founding of the American Society for Psychical Research (ASPR) in New York City in 1885, three years after the British Society for Psychical Research (SPR) was inaugurated in London, leading to systematic, critical investigation of paranormal phenomena. Famous World War II American General George Patton was a strong believer in reincarnation, believing, among other things, he was a reincarnation of the Carthaginian General Hannibal.
At this time popular awareness of the idea of reincarnation was boosted by the Theosophical Society's dissemination of systematised and universalised Indian concepts and also by the influence of magical societies like The Golden Dawn. Notable personalities like Annie Besant, W. B. Yeats and Dion Fortune made the subject almost as familiar an element of the popular culture of the west as of the east. By 1924 the subject could be satirised in popular children's books. Humorist Don Marquis created a fictional cat named Mehitabel who claimed to be a reincarnation of Queen Cleopatra.
Théodore Flournoy was among the first to study a claim of past-life recall in the course of his investigation of the medium Hélène Smith, published in 1900, in which he noted the possibility of cryptomnesia in such accounts.
Carl Gustav Jung, who like Flournoy was based in Switzerland, emulated him in his thesis, which was based on a study of cryptomnesia in psychism. Later Jung would emphasise the importance of the persistence of memory and ego in the psychological study of reincarnation: "This concept of rebirth necessarily implies the continuity of personality... (that) one is able, at least potentially, to remember that one has lived through previous existences, and that these existences were one's own..." Hypnosis, used in psychoanalysis for retrieving forgotten memories, was eventually tried as a means of studying the phenomenon of past life recall.
According to various Buddhist scriptures, Gautama Buddha believed in the existence of an afterlife in another world and in reincarnation.
The Buddha also asserted that karma influences rebirth, and that the cycles of repeated births and deaths are endless. Before the birth of the Buddha, ancient Indian scholars had developed competing theories of the afterlife, including the materialistic school of Charvaka, which posited that death is the end, that there is no afterlife, no soul, no rebirth and no karma, and which described death as a state where a living being is completely annihilated and dissolved. The Buddha rejected this theory and adopted the existing alternate theories on rebirth, criticizing the materialistic schools that denied rebirth and karma, states Damien Keown. Such beliefs are inappropriate and dangerous, stated the Buddha, because such annihilationist views encourage moral irresponsibility and material hedonism; he tied moral responsibility to rebirth.
The Buddha introduced the concept that there is no permanent self (soul), and this central concept in Buddhism is called "anattā". Major contemporary Buddhist traditions such as the Theravada, Mahayana and Vajrayana traditions accept the teachings of the Buddha. These teachings assert that there is rebirth, that there is no permanent self and no irreducible ātman (soul) moving from one life to another and tying these lives together, that everything is impermanent, and that all compounded things such as living beings are aggregates that dissolve at death, yet every being reincarnates. The rebirth cycles continue endlessly, states Buddhism, and they are a source of "Dukkha" (suffering, pain), but this cycle of reincarnation and "Dukkha" can be stopped through nirvana. The "anattā" doctrine of Buddhism is a contrast to Hinduism, the latter asserting that "soul exists, it is involved in rebirth, and it is through this soul that everything is connected".
Different traditions within Buddhism have offered different theories on what reincarnates and how reincarnation happens. One theory suggests that it occurs through consciousness (Pali: "samvattanika-viññana") or stream of consciousness (Pali: "viññana-sotam", Sanskrit: "vijñāna-srotām, vijñāna-santāna", or "citta-santāna") upon death, which reincarnates into a new aggregation. This process, states this theory, is similar to the flame of a dying candle lighting up another. The consciousness in the newly born being is neither identical to nor entirely different from that in the deceased but the two form a causal continuum or stream in this Buddhist theory. Transmigration is influenced by a being's past "karma" ("kamma"). The root cause of rebirth, states Buddhism, is the abiding of consciousness in ignorance (Pali: "avijja", Sanskrit: "avidya") about the nature of reality, and when this ignorance is uprooted, rebirth ceases.
Buddhist traditions also vary in their mechanistic details on rebirth. Theravada Buddhists assert that rebirth is immediate while the Tibetan schools hold to the notion of a "bardo" (intermediate state) that can last up to 49 days. The "bardo" rebirth concept of Tibetan Buddhism, along with "yidam", developed independently in Tibet without Indian influence, and involves 42 peaceful deities, and 58 wrathful deities. These ideas led to mechanistic maps on karma and what form of rebirth one takes after death, discussed in texts such as "The Tibetan Book of the Dead". The major Buddhist traditions accept that the reincarnation of a being depends on the past karma and merit (demerit) accumulated, and that there are six realms of existence in which the rebirth may occur after each death.
Within Japanese Zen, reincarnation is accepted by some, but rejected by others. A distinction can be drawn between "folk Zen", as in the Zen practiced by devotional lay people, and "philosophical Zen". Folk Zen generally accepts the various supernatural elements of Buddhism such as rebirth. Philosophical Zen, however, places more emphasis on the present moment.
Some schools conclude that karma continues to exist and adhere to the person until it works out its consequences. For the Sautrantika school, each act "perfumes" the individual or "plants a seed" that later germinates. Tibetan Buddhism stresses the state of mind at the time of death. To die with a peaceful mind will stimulate a virtuous seed and a fortunate rebirth; a disturbed mind will stimulate a non-virtuous seed and an unfortunate rebirth.
In the major Christian denominations, the concept of reincarnation is absent and it is nowhere explicitly referred to in the Bible. However, the impossibility of a second earthly death is stated by , where it affirms that Jesus Christ died once and for all (in Latin: "semel", a single time) for the sins of all humankind. In , King Herod Antipas identified Jesus with a risen John the Baptist, after having ordered John's beheading.
In a survey by the Pew Forum in 2009, 24% of American Christians expressed a belief in reincarnation and in a 1981 survey 31% of regular churchgoing European Catholics expressed a belief in reincarnation.
Some Christian theologians interpret certain Biblical passages as referring to reincarnation. These passages include the questioning of Jesus as to whether he is Elijah, John the Baptist, Jeremiah, or another prophet (Matthew 16:13–15 and John 1:21–22) and, less clearly (while Elijah was said not to have died, but to have been taken up to heaven), John the Baptist being asked if he is not Elijah (John 1:25). Geddes MacGregor, an Episcopalian priest and professor of philosophy, has made a case for the compatibility of Christian doctrine and reincarnation.
There is evidence that Origen, a Church father in early Christian times, taught reincarnation in his lifetime but that when his works were translated into Latin these references were concealed. One of the epistles written by St. Jerome, "To Avitus" (Letter 124; Ad Avitum. Epistula CXXIV), asserts that Origen's "On First Principles" (Latin: "De Principiis"; Greek: Περὶ Ἀρχῶν) was mistranscribed:
Under the impression that Origen was a heretic like Arius, St. Jerome criticizes ideas described in "On First Principles". Further in "To Avitus" (Letter 124), St. Jerome writes about "convincing proof" that Origen teaches reincarnation in the original version of the book:
The original text of "On First Principles" has almost completely disappeared. It remains extant as "De Principiis" in fragments faithfully translated into Latin by St. Jerome and in "the not very reliable Latin translation of Rufinus."
Belief in reincarnation was rejected by Augustine of Hippo in The City of God.
Reincarnation is a paramount tenet in the Druze faith. There is an eternal duality of the body and the soul and it is impossible for the soul to exist without the body. Therefore, reincarnations occur instantly at one's death. While in the Hindu and Buddhist belief system a soul can be transmitted to any living creature, in the Druze belief system this is not possible and a human soul will only transfer to a human body. Furthermore, souls cannot be divided into different or separate parts and the number of souls existing is finite.
Few Druzes are able to recall their past lives, but those who can are called a "Nateq". Typically souls who have died violent deaths in their previous incarnation will be able to recall memories. Since death is seen as a quick transient state, mourning is discouraged. Unlike in other Abrahamic faiths, heaven and hell are spiritual. Heaven is the ultimate happiness received when the soul escapes the cycle of rebirths and reunites with the Creator, while hell is conceptualized as the bitterness of being unable to reunite with the Creator and escape from the cycle of rebirth.
The body dies, assert the Hindu traditions, but not the soul, which they assume to be the eternal reality, indestructible and blissful. Everything and all existence is believed to be connected and cyclical in many Hindu sects; all living beings are composed of two things, the soul and the body or matter. Atman does not change and cannot change by its innate nature in the Hindu belief. Current karma impacts the future circumstances in this life, as well as the future forms and realms of lives. Good intent and actions lead to a good future, bad intent and actions lead to a bad future, impacting how one reincarnates, in the Hindu view of existence.
There is no permanent heaven or hell in most Hindu sects. In the afterlife, based on one's karma, the soul is reborn as another being in heaven, hell, or as a living being on earth (human, animal). Gods too die once their past karmic merit runs out, as do those in hell, and they return, getting another chance on earth. This reincarnation continues endlessly in cycles, until one embarks on a spiritual pursuit, realizes self-knowledge, and thereby gains "mokṣa", the final release from the reincarnation cycles. This release is believed to be a state of utter bliss, which Hindu traditions believe is either related or identical to Brahman, the unchanging reality that existed before the creation of the universe, continues to exist, and shall exist after the universe ends.
The Upanishads, part of the scriptures of the Hindu traditions, primarily focus on liberation from reincarnation. The Bhagavad Gita discusses various paths to liberation. The Upanishads, states Harold Coward, offer a "very optimistic view regarding the perfectibility of human nature", and the goal of human effort in these texts is a continuous journey to self-perfection and self-knowledge so as to end "Saṃsāra" – the endless cycle of rebirth and redeath. The aim of the spiritual quest in the Upanishadic traditions is to find the true self within and to know one's soul, a state that they assert leads to a blissful state of freedom, moksha.
The Bhagavad Gita states:
There are internal differences within Hindu traditions on reincarnation and the state of moksha. For example, dualistic devotional traditions such as Madhvacharya's Dvaita Vedanta tradition of Hinduism champion a theistic premise, asserting that the human soul and Brahman are different, that loving devotion to Brahman (the god Vishnu in Madhvacharya's theology) is the means to release from Samsara, that it is the grace of God which leads to moksha, and that spiritual liberation is achievable only in the after-life ("videhamukti"). Nondualistic traditions such as Adi Shankara's Advaita Vedanta tradition of Hinduism champion a monistic premise, asserting that the individual human soul and Brahman are identical, that only ignorance, impulsiveness and inertia lead to suffering through Saṃsāra, that in reality there are no dualities, that meditation and self-knowledge are the path to liberation, that the realization that one's soul is identical to Brahman is moksha, and that spiritual liberation is achievable in this life ("jivanmukti").
Most Islamic schools of thought reject any idea of reincarnation of human beings or God. Islam teaches a linear concept of life, wherein a human being has only one life and upon death he or she is judged by God, then rewarded in heaven or punished in hell. Islam teaches a final resurrection and Judgement Day, but there is no prospect for the reincarnation of a human being into a different body or being. During the early history of Islam, some of the Caliphs persecuted reincarnation-believing groups, such as the Manichaeans, to the point of extinction in Mesopotamia and Persia (modern-day Iraq and Iran). However, some Muslim minority sects such as those found among Sufis, and some Muslims in South Asia and Indonesia, have retained their pre-Islamic Hindu and Buddhist beliefs in reincarnation. For instance, historically, South Asian Isma'ilis performed chantas yearly, one of which is for seeking forgiveness of sins committed in past lives. However, Inayat Khan has criticized the idea as unhelpful to the spiritual seeker.
According to the teachings of the modern Sufi Shaikh M. R. Bawa Muhaiyadeen (Guru Bawa), a person's state continuously changes during a single lifetime (angry and violent at one moment, gentle and kind at another). When a person's state changes, the previous state dies; yet even though it dies, the earlier state (of anger) may be reborn a minute later. According to Guru Bawa, this changing of a person's state is what is described as "rebirth" or reincarnation, and it should not be confused with physical death and rebirth, although some scholars misquote Guru Bawa as accepting the common belief in reincarnation.
The idea of reincarnation is accepted by a few Shia Muslim sects, particularly of the Ghulat. Alawis, belonging to the Shia denomination of Islam, hold that they were originally stars or divine lights that were cast out of heaven through disobedience and must undergo repeated reincarnation (or metempsychosis) before returning to heaven. They can be reincarnated as Christians or others through sin and as animals if they become infidels.
In Jainism, the reincarnation doctrine, along with its theories of "Saṃsāra" and Karma, are central to its theological foundations, as evidenced by the extensive literature on it in the major sects of Jainism, and their pioneering ideas on these topics from the earliest times of the Jaina tradition. Reincarnation in contemporary Jainism traditions is the belief that the worldly life is characterized by continuous rebirths and suffering in various realms of existence.
Karma forms a central and fundamental part of Jain faith, being intricately connected to other of its philosophical concepts like transmigration, reincarnation, liberation, non-violence ("ahiṃsā") and non-attachment, among others. Actions are seen to have consequences: some immediate, some delayed, even into future incarnations. So the doctrine of karma is not considered simply in relation to one life-time, but also in relation to both future incarnations and past lives. "Uttarādhyayana-sūtra" 3.3–4 states: "The "jīva" or the soul is sometimes born in the world of gods, sometimes in hell. Sometimes it acquires the body of a demon; all this happens on account of its karma. This "jīva" sometimes takes birth as a worm, as an insect or as an ant." The text further states (32.7): "Karma is the root of birth and death. The souls bound by karma go round and round in the cycle of existence."
Actions and emotions in the current lifetime affect future incarnations depending on the nature of the particular karma. For example, a good and virtuous life indicates a latent desire to experience good and virtuous themes of life. Therefore, such a person attracts karma that ensures that their future births will allow them to experience and manifest their virtues and good feelings unhindered. In this case, they may take birth in heaven or in a prosperous and virtuous human family. On the other hand, a person who has indulged in immoral deeds, or with a cruel disposition, indicates a latent desire to experience cruel themes of life. As a natural consequence, they will attract karma which will ensure that they are reincarnated in hell, or in lower life forms, to enable their soul to experience the cruel themes of life.
There is no retribution, judgment or reward involved, but rather natural consequences of the choices in life made either knowingly or unknowingly. Hence, whatever suffering or pleasure a soul may be experiencing in its present life is on account of choices that it has made in the past. As a result of this doctrine, Jainism attributes supreme importance to pure thinking and moral behavior.
The Jain texts postulate four "gatis", that is states-of-existence or birth-categories, within which the soul transmigrates. The four "gatis" are: "deva" (demi-gods), "manuṣya" (humans), "nāraki" (hell beings) and "tiryañca" (animals, plants and micro-organisms). The four "gatis" have four corresponding realms or habitation levels in the vertically tiered Jain universe: demi-gods occupy the higher levels where the heavens are situated; humans, plants and animals occupy the middle levels; and hellish beings occupy the lower levels where seven hells are situated.
Single-sensed souls, however, called "nigoda", and element-bodied souls pervade all tiers of this universe. "Nigodas" are souls at the bottom end of the existential hierarchy. They are so tiny and undifferentiated, that they lack even individual bodies, living in colonies. According to Jain texts, this infinity of "nigodas" can also be found in plant tissues, root vegetables and animal bodies. Depending on its karma, a soul transmigrates and reincarnates within the scope of this cosmology of destinies. The four main destinies are further divided into sub-categories and still smaller sub-sub-categories. In all, Jain texts speak of a cycle of 8.4 million birth destinies in which souls find themselves again and again as they cycle within "samsara".
In Jainism, God has no role to play in an individual's destiny; one's personal destiny is not seen as a consequence of any system of reward or punishment, but rather as a result of its own personal karma. A text from a volume of the ancient Jain canon, "Bhagvati sūtra" 8.9.9, links specific states of existence to specific karmas. Violent deeds, killing of creatures having five sense organs, eating fish, and so on, lead to rebirth in hell. Deception, fraud and falsehood lead to rebirth in the animal and vegetable world. Kindness, compassion and humble character result in human birth; while austerities and the making and keeping of vows lead to rebirth in heaven.
Each soul is thus responsible for its own predicament, as well as its own salvation. Accumulated karma represents the sum total of all unfulfilled desires, attachments and aspirations of a soul. It enables the soul to experience the various themes of the lives that it desires to experience. Hence a soul may transmigrate from one life form to another for countless years, taking with it the karma that it has earned, until it finds conditions that bring about the required fruits. In certain philosophies, heavens and hells are often viewed as places of eternal salvation or eternal damnation for good and bad deeds. But according to Jainism, such places, including the earth, are simply places which allow the soul to experience its unfulfilled karma.
Jewish mystical texts (the Kabbalah), from their classic medieval canon onward, teach a belief in "Gilgul Neshamot" (Hebrew for metempsychosis of souls: literally "soul cycle", plural "gilgulim"). The Zohar and the Sefer HaBahir specifically discuss reincarnation. It is a common belief in contemporary Hasidic Judaism, which regards the Kabbalah as sacred and authoritative, though understood in light of a more innate psychological mysticism. Kabbalah also teaches that "The soul of Moses is reincarnated in every generation." Other, non-Hasidic, Orthodox Jewish groups, while not placing a heavy emphasis on reincarnation, do acknowledge it as a valid teaching. Its popularization entered modern secular Yiddish literature and folk motif.
The 16th century mystical renaissance in communal Safed replaced scholastic Rationalism as mainstream traditional Jewish theology, both in scholarly circles and in the popular imagination. References to "gilgul" in former Kabbalah became systematized as part of the metaphysical purpose of creation. Isaac Luria (the Ari) brought the issue to the centre of his new mystical articulation, for the first time, and advocated identification of the reincarnations of historic Jewish figures that were compiled by Haim Vital in his Shaar HaGilgulim. "Gilgul" is contrasted with the other processes in Kabbalah of Ibbur ("pregnancy"), the attachment of a second soul to an individual for (or by) good means, and Dybuk ("possession"), the attachment of a spirit, demon, etc. to an individual for (or by) "bad" means.
In Lurianic Kabbalah, reincarnation is not retributive or fatalistic, but an expression of Divine compassion, the microcosm of the doctrine of cosmic rectification of creation. "Gilgul" is a heavenly agreement with the individual soul, conditional upon circumstances. Luria's radical system focused on rectification of the Divine soul, played out through Creation. The true essence of anything is the divine spark within that gives it existence. Even a stone or leaf possesses such a soul that "came into this world to receive a rectification". A human soul may occasionally be exiled into lower inanimate, vegetative or animal creations. The most basic component of the soul, the nefesh, must leave at the cessation of blood production. There are four other soul components and different nations of the world possess different forms of souls with different purposes. Each Jewish soul is reincarnated in order to fulfill each of the 613 Mosaic commandments that elevate a particular spark of holiness associated with each commandment. Once all the Sparks are redeemed to their spiritual source, the Messianic Era begins. Non-Jewish observance of the 7 Laws of Noah assists the Jewish people, though Biblical adversaries of Israel reincarnate to oppose.
Among the many rabbis who accepted reincarnation are Nahmanides (the Ramban) and Rabbenu Bahya ben Asher, Levi ibn Habib (the Ralbah), Shelomoh Alkabez, Moses Cordovero, Moses Chaim Luzzatto; early Hasidic masters such as the Baal Shem Tov, Schneur Zalman of Liadi and Nachman of Breslov, as well as virtually all later Hasidic masters; contemporary Hasidic teachers such as DovBer Pinson, Moshe Weinberger and Joel Landau; and key Mitnagdic leaders, such as the Vilna Gaon and Chaim Volozhin and their school, as well as Rabbi Shalom Sharabi (known as the RaShaSH), the Ben Ish Chai of Baghdad, and the Baba Sali. Rabbis who have rejected the idea include Saadia Gaon, David Kimhi, Hasdai Crescas, Joseph Albo, Abraham ibn Daud, Leon de Modena, Solomon ben Aderet, Maimonides and Asher ben Jehiel. Among the Geonim, Hai Gaon argued in favour of "gilgulim".
Reincarnation is an intrinsic part of some northern Native American and Inuit traditions. In the now heavily Christian Polar North (mainly parts of Greenland and Nunavut), the concept of reincarnation is enshrined in the Inuit language.
The following is a story of human-to-human reincarnation as told by Thunder Cloud, a Winnebago (Ho-Chunk tribe) shaman referred to as T. C. in the narrative. Here T. C. talks about his two previous lives and how he died and came back again to this his third lifetime. He describes his time between lives, when he was “blessed” by Earth Maker and all the abiding spirits and given special powers, including the ability to heal the sick.
T. C.'s Account of His Two Reincarnations:
Sikhism was founded in the 15th century; its founder, Guru Nanak, had a choice between the cyclical reincarnation concept of ancient Indian religions and the linear concept of Islam, and he chose the cyclical concept of time. Sikhism teaches a reincarnation theory similar to those in Hinduism, but with some differences from their traditional doctrines. Sikh rebirth theories about the nature of existence are similar to ideas that developed during the devotional Bhakti movement, particularly within some Vaishnava traditions, which define liberation as a state of union with God attained through the grace of God.
The doctrines of Sikhism teach that the soul exists, and is passed from one body to another in endless cycles of Saṃsāra, until liberation from the death and rebirth cycle. Each birth begins with karma ("karam"), and these actions leave a "karni" (karmic signature) on one's soul which influences future rebirths, but it is God's grace that liberates from the death and rebirth cycle. The way out of the reincarnation cycle, asserts Sikhism, is to live an ethical life, devote oneself to God and constantly remember God's name. The precepts of Sikhism encourage the bhakti of One Lord for "mukti" (liberation from the death and rebirth cycle).
Spiritism, a Christian philosophy codified in the 19th century by the French educator Allan Kardec, teaches reincarnation or rebirth into human life after death. According to this doctrine, free will and cause and effect are the corollaries of reincarnation, and reincarnation provides a mechanism for man's spiritual evolution in successive lives.
The Theosophical Society draws much of its inspiration from India. In the Theosophical world-view reincarnation is the vast rhythmic process by which the soul, the part of a person which belongs to the formless non-material and timeless worlds, unfolds its spiritual powers in the world and comes to know itself. It descends from sublime, free, spiritual realms and gathers experience through its effort to express itself in the world. Afterwards there is a withdrawal from the physical plane to successively higher levels of reality, in death, a purification and assimilation of the past life. Having cast off all instruments of personal experience it stands again in its spiritual and formless nature, ready to begin its next rhythmic manifestation, every lifetime bringing it closer to complete self-knowledge and self-expression. However it may attract old mental, emotional, and energetic "karma" patterns to form the new personality.
Anthroposophy describes reincarnation from the point of view of Western philosophy and culture. The ego is believed to transmute transient soul experiences into universals that form the basis for an individuality that can endure after death. These universals include ideas, which are intersubjective and thus transcend the purely personal (spiritual consciousness), intentionally formed human character (spiritual life), and becoming a fully conscious human being (spiritual humanity). Rudolf Steiner described both the general principles he believed to be operative in reincarnation, such as that one's will activity in one life forms the basis for the thinking of the next, and a number of successive lives of various individualities.
Inspired by Helena Blavatsky's major works, including "Isis Unveiled" and "The Secret Doctrine", astrologers in the early twentieth-century integrated the concepts of karma and reincarnation into the practice of Western astrology. Notable astrologers who advanced this development included Alan Leo, Charles E. O. Carter, Marc Edmund Jones, and Dane Rudhyar. A new synthesis of East and West resulted as Hindu and Buddhist concepts of reincarnation were fused with Western astrology's deep roots in Hermeticism and Neoplatonism. In the case of Rudhyar, this synthesis was enhanced with the addition of Jungian depth psychology. This dynamic integration of astrology, reincarnation and depth psychology has continued into the modern era with the work of astrologers Steven Forrest and Jeffrey Wolf Green. Their respective schools of Evolutionary Astrology are based on "an acceptance of the fact that human beings incarnate in a succession of lifetimes."
Past reincarnation, usually termed "past lives", is a key part of the principles and practices of the Church of Scientology. Scientologists believe that the human individual is actually a "thetan", an immortal spiritual entity, that has fallen into a degraded state as a result of past-life experiences. Scientology auditing is intended to free the person of these past-life traumas and recover past-life memory, leading to a higher state of spiritual awareness. This idea is echoed in their highest fraternal religious order, the Sea Organization, whose motto is "Revenimus" or "We Come Back", and whose members sign a "billion-year contract" as a sign of commitment to that ideal. L. Ron Hubbard, the founder of Scientology, does not use the word "reincarnation" to describe its beliefs, noting that: "The common definition of reincarnation has been altered from its original meaning. The word has come to mean 'to be born again in different life forms' whereas its actual definition is 'to be born again into the flesh of another body.' Scientology ascribes to this latter, original definition of reincarnation."
The first writings in Scientology regarding past lives date from around 1951 and slightly earlier. In 1960, Hubbard published a book on past lives entitled "Have You Lived Before This Life". In 1968 he wrote "Mission into Time", a report on a five-week sailing expedition to Sardinia, Sicily and Carthage to see if specific evidence could be found to substantiate L. Ron Hubbard's recall of incidents in his own past, centuries ago.
The Indian spiritual teacher Meher Baba stated that reincarnation occurs due to desires and once those desires are extinguished the ego-mind ceases to reincarnate.
Wicca is a neo-pagan religion focused on nature, guided by the philosophy of Wiccan Rede that advocates the tenets "Harm None, Do As Ye Will". Wiccans believe in a form of karmic return where one's deeds are returned, either in the current life or in another life, threefold or multiple times in order to teach one lessons (The Threefold Law). Reincarnation is therefore an accepted part of the Wiccan faith. Wiccans also believe that death and afterlife are important experiences for the soul to transform and prepare for future lifetimes.
Before the late nineteenth century, reincarnation was a relatively rare theme in the West. In ancient Greece, the Orphic Mysteries and Pythagoreans believed in various forms of reincarnation. Emanuel Swedenborg believed that we leave the physical world once, but then go through several lives in the spiritual world — a kind of hybrid of Christian tradition and the popular view of reincarnation.
More recently, many people in the West have developed an interest in and acceptance of reincarnation. Many new religious movements include reincarnation among their beliefs, e.g. modern Neopagans, Spiritism, Astara, Dianetics, and Scientology. Many esoteric philosophies also include reincarnation, e.g. Theosophy, Anthroposophy, Kabbalah, and Gnostic and Esoteric Christianity such as the works of Martinus Thomsen.
Demographic survey data from 1999–2002 shows a significant minority of people from Europe (22%) and America (20%) believe in the existence of life before birth and after death, leading to a physical rebirth. The belief in reincarnation is particularly high in the Baltic countries, with Lithuania having the highest figure for the whole of Europe, 44%, while the lowest figure is in East Germany, 12%. A quarter of U.S. Christians, including 10% of all born again Christians, embrace the idea.
Skeptic Carl Sagan asked the Dalai Lama what he would do if a fundamental tenet of his religion (reincarnation) were definitively disproved by science. The Dalai Lama answered, "If science can disprove reincarnation, Tibetan Buddhism would abandon reincarnation… but it's going to be mighty hard to disprove reincarnation." Sagan considered claims of memories of past lives to be worthy of research, although he considered reincarnation to be an unlikely explanation for them.
Ian Stevenson reported that belief in reincarnation is held (with variations in details) by adherents of almost all major religions except Christianity and Islam. In addition, between 20 and 30 percent of persons in western countries who may be nominal Christians also believe in reincarnation.
Edgar Cayce, an American self-professed clairvoyant, answered questions on subjects as varied as healing, reincarnation, wars, Atlantis, and future events while allegedly asleep.
According to Dr. Brian Weiss, in 1980 one of his patients, "Catherine", began discussing past-life experiences under hypnosis. Weiss did not believe in reincarnation at the time but, after confirming elements of Catherine's stories through public records, came to be convinced of the survival of an element of the human personality after death. Weiss claims he has regressed more than 4,000 patients since 1980.
Neale Donald Walsch, an American author of the series "Conversations with God", who says his books are not channelled but rather inspired by God and that they can help a person relate to God from a modern perspective, claims that he has reincarnated more than 600 times.
Other influential contemporary figures that have written on reincarnation include Alice Ann Bailey, one of the first writers to use the terms New Age and age of Aquarius, Torkom Saraydarian, an Armenian-American musician and religious author, Dolores Cannon, Atul Gawande, Michael Newton, Bruce Greyson, Raymond Moody and Unity Church founder Charles Fillmore.
One 1999 study by Walter and Waterhouse reviewed the previous data on the level of reincarnation belief and performed a set of thirty in-depth interviews in Britain among people who did not belong to a religion advocating reincarnation. The authors reported that surveys have found about one fifth to one quarter of Europeans have some level of belief in reincarnation, with similar results found in the USA. In the interviewed group, the belief in the existence of this phenomenon appeared independent of their age, or the type of religion that these people belonged to, with most being Christians. The beliefs of this group also did not appear to contain any more than usual of "new age" ideas (broadly defined) and the authors interpreted their ideas on reincarnation as "one way of tackling issues of suffering", but noted that this seemed to have little effect on their private lives.
Waterhouse also published a detailed discussion of beliefs expressed in the interviews. She noted that although most people "hold their belief in reincarnation quite lightly" and were unclear on the details of their ideas, personal experiences such as past-life memories and near-death experiences had influenced most believers, although only a few had direct experience of these phenomena. Waterhouse analyzed the influences of second-hand accounts of reincarnation, writing that most of the people in the survey had heard other people's accounts of past-lives from regression hypnosis and dreams and found these fascinating, feeling that there "must be something in it" if other people were having such experiences.
Experts generally regard claims of recovered memories of past lives as fantasies, delusions, or a type of confabulation. Attempts to retrieve such memories using past life regression are widely considered discredited and unscientific by medical practitioners. The use of hypnosis and suggestive questions tends to leave the subject particularly likely to hold distorted or false memories. The source of the memories is more likely cryptomnesia and confabulation that combine experiences, knowledge, imagination and suggestion or guidance from the hypnotist than recall of a previous existence. Once created, those memories are indistinguishable from memories based on events that occurred during the subject's life. Past life regression has been critiqued as unethical on the grounds that it lacks any evidence to support its claims and that it increases one's susceptibility to false memories. Luis Cordón states that this can be problematic as it creates delusions under the guise of therapy. The memories are experienced as being as vivid as those based on events experienced in one's life and are impossible to differentiate from true memories of actual events, and accordingly any damage can be difficult to undo.
Investigations of memories reported during past-life regression have revealed that they contain historical inaccuracies which originate from common beliefs about history, modern popular culture, or books that discuss historical events. Experiments with subjects undergoing past-life regression indicate that a belief in reincarnation and suggestions by the hypnotist are the two most important factors regarding the contents of memories reported. As past life regression is rooted in the premise of reincarnation, many APA-accredited organizations have rejected it as a therapeutic method on the grounds that it is unethical. Additionally, the hypnotic methodology that underpins past life regression places the participant in a vulnerable position, susceptible to implantation of false memories. Because the implantation of false memories may be harmful, Gabriel Andrade points out that past life regression violates the principle of "first, do no harm" (non-maleficence), part of the Hippocratic Oath.
Psychiatrist Ian Stevenson, from the University of Virginia, having grown up with a mother who was a theosophist, dedicated the latter part of his career to investigating claims of reincarnation in hopes of providing evidence that reincarnation happens. Other people who have undertaken similar pursuits include Jim B. Tucker, Antonia Mills, Satwant Pasricha, Godwin Samararatne, and Erlendur Haraldsson, but Stevenson's publications remain the most well-known. Stevenson conducted more than 2,500 case studies of young children who claimed to remember past lives over a period of 40 years and published twelve books, including "Twenty Cases Suggestive of Reincarnation", "Reincarnation and Biology: A Contribution to the Etiology of Birthmarks and Birth Defects" (a two-part monograph) and "Where Reincarnation and Biology Intersect". He documented the family's and child's statements along with correlates to a deceased person he believed matched the child's memory. Stevenson also claimed that some birthmarks and birth defects matched wounds and scars on the deceased, sometimes providing medical records like autopsy photographs to make his case. Expecting controversy and skepticism, Stevenson also searched for disconfirming evidence and alternative explanations for the reports, but he argued (not without criticism) that his methods ruled out all possible "normal" explanations for the child's memories. Stevenson's work in this regard was impressive enough to Carl Sagan that he referred to what was apparently Stevenson's investigations in his book "The Demon-Haunted World" as an example of carefully collected empirical data, though he rejected reincarnation as a parsimonious explanation for the stories while acknowledging that the phenomenon of alleged past-life memories should be researched. Sam Harris cited Stevenson's works in his book "The End of Faith" as part of a body of data that seems to attest to the reality of psychic phenomena, but that relies only on subjective personal experience.
Critical reviews of these claims include work by Paul Edwards who criticized the accounts of reincarnation as being purely anecdotal and cherry-picked. Instead, Edwards says such stories are attributable to selective thinking, suggestion, and false memories that can result from the family's or researcher's belief systems, and thus cannot be counted as empirical evidence. The philosopher Keith Augustine wrote in critique that the fact that "the vast majority of Stevenson's cases come from countries where a religious belief in reincarnation is strong, and rarely elsewhere, seems to indicate that cultural conditioning (rather than reincarnation) generates claims of spontaneous past-life memories." Further, Ian Wilson pointed out that a large number of Stevenson's cases consisted of poor children remembering wealthy lives or belonging to a higher caste. In these societies, claims of reincarnation are sometimes used as schemes to obtain money from the richer families of alleged former incarnations. Following these types of criticism, Stevenson published a book on "European Cases of the Reincarnation Type" in order to show the reports were cross-cultural. Even still, Robert Baker asserted that all the past-life experiences investigated by Stevenson and other parapsychologists are understandable in terms of known psychological factors including a mixture of cryptomnesia and confabulation. Edwards also objected that reincarnation invokes assumptions that are inconsistent with modern science. | https://en.wikipedia.org/wiki?curid=25806 |
Robert Noyce
Robert Norton Noyce (December 12, 1927 – June 3, 1990), nicknamed "the Mayor of Silicon Valley," was an American physicist who co-founded Fairchild Semiconductor in 1957 and Intel Corporation in 1968. He is also credited with the realization of the first monolithic integrated circuit or microchip, which fueled the personal computer revolution and gave Silicon Valley its name.
Noyce was born on December 12, 1927, in Burlington, Iowa, the third of four sons of the Rev. Ralph Brewster Noyce. His father graduated from Doane College, Oberlin College, and the Chicago Theological Seminary and was also nominated for a Rhodes Scholarship. In the 1930s and 1940s, the Reverend Noyce worked as a Congregational clergyman and as the associate superintendent of the Iowa Conference of Congregational Churches.
His mother, Harriet May Norton, was the daughter of the Rev. Milton J. Norton, a Congregational clergyman, and Louise Hill. She was a graduate of Oberlin College and prior to her marriage, she had dreams of becoming a missionary. She has been described as an intelligent woman with a commanding will.
Noyce had three siblings: Donald Sterling Noyce, Gaylord Brewster Noyce and Ralph Harold Noyce. His earliest childhood memory involved beating his father at ping pong and feeling absolutely shocked when his mother reacted to the thrilling news of his victory with a distracted "Wasn't that nice of Daddy to let you win?" Even at the age of five, Noyce felt offended by the notion of intentionally losing at anything. "That's not the game", he sulked to his mother. "If you're going to play, play to win!"
When Noyce was 12 years old in the summer of 1940, he and his brother built a boy-sized aircraft, which they used to fly from the roof of the Grinnell College stables. Later he built a radio from scratch and motorized his sled by welding a propeller and an engine from an old washing machine to the back of it. His parents were both religious but Noyce became an agnostic and irreligious in later life.
Noyce grew up in Grinnell, Iowa. While in high school, he exhibited a talent for mathematics and science and took the Grinnell College freshman physics course in his senior year. He graduated from Grinnell High School in 1945 and entered Grinnell College in the fall of that year. He was the star diver on the 1947 Midwest Conference Championship swim team. While at Grinnell College, Noyce sang, played the oboe and acted. In Noyce's junior year, he got in trouble for stealing a 25-pound pig from the Grinnell mayor's farm and roasting it at a school luau. The mayor sent a letter home to Noyce's parents stating that "In the agricultural state of Iowa, stealing a domestic animal is a felony which carries a minimum penalty of a year in prison and a fine of one dollar." This meant that Noyce faced expulsion from the college. Grant Gale, Noyce's physics professor, and the president of the college did not want to lose a student with Robert's potential. They were able to compromise with the mayor so that Grinnell would compensate him for the pig, Noyce would only be suspended for one semester, and no further charges would be pressed. He returned in February 1949. He graduated Phi Beta Kappa with a BA in physics and mathematics in 1949. He also received a signal honor from his classmates: the Brown Derby Prize, which recognized "the senior man who earned the best grades with the least amount of work".
While Noyce was an undergraduate, he was fascinated by the field of physics and took a course in the subject that was taught by professor Grant Gale. Gale obtained two of the very first transistors ever to come out of Bell Labs and showed them off to his class. Noyce was hooked. Gale suggested that he apply to the doctoral program in physics at MIT, which he did.
Noyce had a mind so quick that his graduate school friends called him "Rapid Robert." He received his doctorate in physics from MIT in 1953.
After graduating from MIT in 1953, Noyce took a job as a research engineer at the Philco Corporation in Philadelphia. He left in 1956 to join William Shockley, a co-inventor of the transistor and eventual Nobel Prize winner, at the Shockley Semiconductor Laboratory in Mountain View, California.
Noyce left a year later with the "traitorous eight" upon having issues with Shockley's management style, and co-founded the influential Fairchild Semiconductor corporation. According to Sherman Fairchild, Noyce's impassioned presentation of his vision was the reason Fairchild had agreed to create the semiconductor division for the traitorous eight.
After Jack Kilby invented the first hybrid integrated circuit (hybrid IC) in 1958, Noyce in 1959 independently invented a new type of integrated circuit, the monolithic integrated circuit (monolithic IC). It was more practical than Kilby's implementation. Noyce's design was made of silicon, whereas Kilby's chip was made of germanium. Noyce's invention was the first monolithic integrated circuit chip. Unlike Kilby's IC, which had external wire connections and could not be mass-produced, Noyce's monolithic IC chip put all components on a chip of silicon and connected them with aluminum lines. The basis for Noyce's monolithic IC was the planar process, developed in early 1959 by Jean Hoerni. In turn, the basis for Hoerni's planar process was the silicon surface passivation and thermal oxidation methods developed by Mohamed Atalla in 1957.
Noyce and Gordon Moore founded Intel in 1968 when they left Fairchild Semiconductor. Arthur Rock, the chairman of Intel's board and a major investor in the company, said that for Intel to succeed, the company needed Noyce, Moore and Andrew Grove. And it needed them in that order. Noyce: the visionary, born to inspire; Moore: the virtuoso of technology; and Grove: the technologist turned management scientist. The relaxed culture that Noyce brought to Intel was a carry-over from his style at Fairchild Semiconductor. He treated employees as family, rewarding and encouraging teamwork. Noyce's management style could be called "roll up your sleeves". He shunned fancy corporate cars, reserved parking spaces, private jets, offices, and furnishings in favor of a less-structured, relaxed working environment in which everyone contributed and no one received lavish benefits. By declining the usual executive perks he stood as a model for future generations of Intel CEOs.
At Intel, he oversaw Ted Hoff's invention of the microprocessor, which was his second revolution.
In 1953, Noyce married Elizabeth Bottomley, a 1951 graduate of Tufts University. While living in Los Altos, California, they had four children: William B., Pendred, Priscilla, and Margaret. Elizabeth loved New England, so the family acquired a 50-acre coastal summer home in Bremen, Maine, where she and the children would spend the summers. Robert would visit, but he continued working at Intel through the summer. They divorced in 1974.
On November 27, 1974, Noyce married Ann Schmeltz Bowers. Bowers, a graduate of Cornell University, also received an honorary Ph.D. from Santa Clara University, where she was a trustee for nearly 20 years. She was the first Director of Personnel for Intel Corporation and the first Vice President of Human Resources for Apple Inc. She currently serves as Chair of the Board and the founding trustee of the Noyce Foundation.
Noyce kept active his entire life. He enjoyed reading Hemingway, and he flew his own airplane and also participated in hang-gliding and scuba diving. Noyce believed that microelectronics would continue to advance in complexity and sophistication well beyond its current state; this led to the question of what use society would make of the technology. In his last interview, Noyce was asked what he would do if he were "emperor" of the United States. He said that he would, among other things, "…make sure we are preparing our next generation to flourish in a high-tech age. And that means education of the lowest and the poorest, as well as at the graduate school level."
Noyce suffered a heart attack at age 62 at home on June 3, 1990, and later died at the Seton Medical Center in Austin, Texas.
In July 1959, he filed a patent application for "Semiconductor Device and Lead Structure", a type of integrated circuit. This independent effort was recorded only a few months after the key findings of inventor Jack Kilby. For his co-invention of the integrated circuit and its world-transforming impact, three presidents of the United States honored him.
Noyce was a holder of many honors and awards. President Ronald Reagan awarded him the National Medal of Technology in 1987. Two years later, he was inducted into the U.S. Business Hall of Fame sponsored by Junior Achievement, during a black tie ceremony keynoted by President George H. W. Bush. In 1990 Noyce, along with, among others, Jack Kilby and transistor inventor John Bardeen, received a "Lifetime Achievement Medal" during the bicentennial celebration of the Patent Act.
Noyce received the Franklin Institute's Stuart Ballantine Medal in 1966. He was awarded the IEEE Medal of Honor in 1978 "for his contributions to the silicon integrated circuit, a cornerstone of modern electronics." In 1979, he was awarded the National Medal of Science. Noyce was elected a Fellow of the American Academy of Arts and Sciences in 1980. The National Academy of Engineering awarded him its 1989 Charles Stark Draper Prize.
The science building at his alma mater, Grinnell College, is named after him.
On December 12, 2011, Noyce was honored with a Google Doodle celebrating the 84th anniversary of his birth.
According to the book "The Innovators", Noyce was credited as an honorary co-recipient in the Nobel Prize acceptance lecture given by Kilby on December 8, 2000 (http://www.nobelprize.org/nobel_prizes/physics/laureates/2000/kilby-lecture.html).
The Noyce Foundation was founded in 1990 by his family. The foundation was dedicated to improving public education in mathematics and science in grades K-12. The foundation announced that it would end operations in 2015.
Noyce was granted 15 patents. Patents are listed in order issued, not filed.
Note: In 1960 Clevite Corporation acquired Shockley Semiconductor Laboratory, a subsidiary of Beckman Instruments, where Noyce had worked. | https://en.wikipedia.org/wiki?curid=25808 |
Riemann zeta function
The Riemann zeta function or Euler–Riemann zeta function, ζ(s), is a function of a complex variable "s" that analytically continues the sum of the Dirichlet series
ζ(s) = Σ_{n=1}^∞ 1/n^s,
which converges when the real part of s is greater than 1. More general representations of ζ(s) for all s are given below. The Riemann zeta function plays a pivotal role in analytic number theory and has applications in physics, probability theory, and applied statistics.
As a function of a real variable, Leonhard Euler first introduced and studied it in the first half of the eighteenth century without using complex analysis, which was not available at the time. Bernhard Riemann's 1859 article "On the Number of Primes Less Than a Given Magnitude" extended the Euler definition to a complex variable, proved its meromorphic continuation and functional equation, and established a relation between its zeros and the distribution of prime numbers.
The values of the Riemann zeta function at even positive integers were computed by Euler. The first of them, ζ(2), provides a solution to the Basel problem. In 1979 Roger Apéry proved the irrationality of ζ(3). The values at negative integer points, also found by Euler, are rational numbers and play an important role in the theory of modular forms. Many generalizations of the Riemann zeta function, such as Dirichlet series, Dirichlet L-functions and L-functions, are known.
The Riemann zeta function ζ(s) is a function of a complex variable s = σ + it. (The notation s, σ, and t is used traditionally in the study of the zeta function, following Riemann.)
For the special case where formula_2, the zeta function can be expressed by the following integral:
where Γ(s) is the gamma function.
In the case Re(s) > 1, the integral for ζ(s) always converges, and can be simplified to the following infinite series:
The Riemann zeta function is defined as the analytic continuation of the function defined for Re(s) > 1 by the sum of the preceding series.
Leonhard Euler considered the above series in 1740 for positive integer values of s, and later Chebyshev extended the definition to formula_6.
The above series is a prototypical Dirichlet series that converges absolutely to an analytic function for s such that Re(s) > 1 and diverges for all other values of s. Riemann showed that the function defined by the series on the half-plane of convergence can be continued analytically to all complex values s ≠ 1. For s = 1 the series is the harmonic series, which diverges to +∞, and
lim_{s→1} (s − 1) ζ(s) = 1.
Thus the Riemann zeta function is a meromorphic function on the whole complex s-plane, which is holomorphic everywhere except for a simple pole at s = 1 with residue 1.
For any positive even integer 2n:
ζ(2n) = (−1)^(n+1) B_{2n} (2π)^(2n) / (2 (2n)!),
where B_{2n} is the 2n-th Bernoulli number.
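For instance, with the standard Bernoulli values B_2 = 1/6 and B_4 = −1/30, the first two cases of this formula recover the classical evaluations (a worked check, shown here in LaTeX):

```latex
\zeta(2) = \frac{(-1)^{2} B_{2} (2\pi)^{2}}{2\,(2)!}
         = \frac{\tfrac{1}{6}\cdot 4\pi^{2}}{4}
         = \frac{\pi^{2}}{6},
\qquad
\zeta(4) = \frac{(-1)^{3} B_{4} (2\pi)^{4}}{2\,(4)!}
         = \frac{-\left(-\tfrac{1}{30}\right)\cdot 16\pi^{4}}{48}
         = \frac{\pi^{4}}{90}.
```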
For odd positive integers, no such simple expression is known, although these values are thought to be related to the algebraic K-theory of the integers; see Special values of L-functions.
For nonpositive integers, one has
ζ(−n) = (−1)^n B_{n+1} / (n + 1)
for n ≥ 0 (using the convention that B_1 = −1/2).
In particular, ζ vanishes at the negative even integers because B_m = 0 for all odd m other than 1. These are the so-called "trivial zeros" of the zeta function.
Via analytic continuation, one can show that:
Taking the limit formula_19, one obtains formula_20.
The connection between the zeta function and prime numbers was discovered by Euler, who proved the identity
Σ_{n=1}^∞ 1/n^s = Π_p 1/(1 − p^(−s)),
where, by definition, the left hand side is ζ(s) and the infinite product on the right hand side extends over all prime numbers p (such expressions are called Euler products).
Both sides of the Euler product formula converge for Re(s) > 1. The proof of Euler's identity uses only the formula for the geometric series and the fundamental theorem of arithmetic. Since the harmonic series, obtained when s = 1, diverges, Euler's formula (which becomes Π_p p/(p − 1)) implies that there are infinitely many primes.
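A small numerical illustration, assuming Python with the SymPy library is available for enumerating primes: truncations of the Dirichlet series and of the Euler product at s = 2 both approach ζ(2) = π^2/6 ≈ 1.6449.

```python
# Compare a truncated Dirichlet series with a truncated Euler product at s = 2.
import math
from sympy import primerange  # assumed dependency, used only to list primes

s = 2
dirichlet_sum = sum(1 / n**s for n in range(1, 100_000))
euler_product = 1.0
for p in primerange(2, 1_000):
    euler_product *= 1 / (1 - int(p)**-s)

print(dirichlet_sum, euler_product, math.pi**2 / 6)  # all close to 1.6449
```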
The Euler product formula can be used to calculate the asymptotic probability that k randomly selected integers are set-wise coprime. Intuitively, the probability that any single number is divisible by a prime (or any integer) p is 1/p. Hence the probability that k numbers are all divisible by this prime is 1/p^k, and the probability that at least one of them is "not" is 1 − 1/p^k. Now, for distinct primes, these divisibility events are mutually independent because the candidate divisors are coprime (a number is divisible by coprime divisors n and m if and only if it is divisible by nm, an event which occurs with probability 1/(nm)). Thus the asymptotic probability that k numbers are coprime is given by a product over all primes,
Π_p (1 − 1/p^k) = 1/ζ(k).
(More work is required to derive this result formally.)
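The k = 2 case can also be checked empirically; the following Monte Carlo sketch (an illustration, not a derivation) counts coprime pairs of random integers and compares the observed fraction with 1/ζ(2) = 6/π^2 ≈ 0.6079.

```python
# Estimate the probability that two random integers are coprime.
import math
import random

random.seed(0)
trials = 200_000
hits = sum(
    1
    for _ in range(trials)
    if math.gcd(random.randint(1, 10**6), random.randint(1, 10**6)) == 1
)
print(hits / trials, 6 / math.pi**2)  # observed fraction vs. 1/zeta(2)
```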
The zeta function satisfies the functional equation
ζ(s) = 2^s π^(s−1) sin(πs/2) Γ(1 − s) ζ(1 − s),
where Γ(s) is the gamma function. This is an equality of meromorphic functions valid on the whole complex plane. The equation relates values of the Riemann zeta function at the points s and 1 − s, in particular relating even positive integers with odd negative integers. Owing to the zeros of the sine function, the functional equation implies that ζ(s) has a simple zero at each even negative integer s = −2n, known as the trivial zeros of ζ. When s is an even positive integer, the product on the right is non-zero because Γ(1 − s) has a simple pole, which cancels the simple zero of the sine factor.
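The functional equation lends itself to a direct numerical check; the sketch below assumes the mpmath library and evaluates both sides at an arbitrary point, which should agree to working precision.

```python
# Verify zeta(s) = 2**s * pi**(s-1) * sin(pi*s/2) * gamma(1-s) * zeta(1-s) numerically.
from mpmath import mp, mpc, zeta, gamma, sin, pi

mp.dps = 30                     # 30 decimal digits of working precision
s = mpc('0.3', '14.0')          # an arbitrary test point off the critical line
lhs = zeta(s)
rhs = 2**s * pi**(s - 1) * sin(pi * s / 2) * gamma(1 - s) * zeta(1 - s)
print(lhs)
print(rhs)                      # the two printed values should coincide
```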
A proof of the functional equation proceeds as follows:
We observe that if formula_25, then
As a result, if formula_27 then
where the "η"-series is convergent (albeit non-absolutely) in the larger half-plane (for a more detailed survey on the history of the functional equation, see e.g. Blagouchine).
Riemann also found a symmetric version of the functional equation applying to the xi-function:
ξ(s) = (1/2) π^(−s/2) s (s − 1) Γ(s/2) ζ(s),
which satisfies:
ξ(s) = ξ(1 − s).
The functional equation shows that the Riemann zeta function has zeros at −2, −4, −6, .... These are called the trivial zeros. They are trivial in the sense that their existence is relatively easy to prove, for example, from sin(πs/2) being 0 in the functional equation. The non-trivial zeros have captured far more attention because their distribution not only is far less understood but, more importantly, their study yields impressive results concerning prime numbers and related objects in number theory. It is known that any non-trivial zero lies in the open strip 0 < Re(s) < 1, which is called the critical strip. The Riemann hypothesis, considered one of the greatest unsolved problems in mathematics, asserts that any non-trivial zero has Re(s) = 1/2. In the theory of the Riemann zeta function, the set of points with Re(s) = 1/2 is called the critical line. For the Riemann zeta function on the critical line, see Z-function.
In 1914, Godfrey Harold Hardy proved that ζ(1/2 + it) has infinitely many real zeros.
Hardy and John Edensor Littlewood formulated two conjectures on the density and distance between the zeros of ζ(1/2 + it) on intervals of large positive real numbers. In the following, N(T) is the total number of real zeros and N_0(T) the total number of zeros of odd order of the function ζ(1/2 + it) lying in the interval (0, T].
These two conjectures opened up new directions in the investigation of the Riemann zeta function.
The location of the Riemann zeta function's zeros is of great importance in the theory of numbers. The prime number theorem is equivalent to the fact that there are no zeros of the zeta function on the Re(s) = 1 line. A better result that follows from an effective form of Vinogradov's mean-value theorem is that ζ(σ + it) ≠ 0 whenever |t| ≥ 3 and
The strongest result of this kind one can hope for is the truth of the Riemann hypothesis, which would have many profound consequences in the theory of numbers.
It is known that there are infinitely many zeros on the critical line. Littlewood showed that if the sequence (γ_n) contains the imaginary parts of all zeros in the upper half-plane in ascending order, then
The critical line theorem asserts that a positive proportion of the nontrivial zeros lies on the critical line. (The Riemann hypothesis would imply that this proportion is 1.)
In the critical strip, the zero with smallest non-negative imaginary part is approximately 1/2 + 14.134725i. The fact that ζ(s̄) is the complex conjugate of ζ(s)
for all complex s ≠ 1 implies that the zeros of the Riemann zeta function are symmetric about the real axis. Combining this symmetry with the functional equation, furthermore, one sees that the non-trivial zeros are symmetric about the critical line Re(s) = 1/2.
For sums involving the zeta-function at integer and half-integer values, see rational zeta series.
The reciprocal of the zeta function may be expressed as a Dirichlet series over the Möbius function μ(n):
1/ζ(s) = Σ_{n=1}^∞ μ(n)/n^s
for every complex number s with real part greater than 1. There are a number of similar relations involving various well-known multiplicative functions; these are given in the article on the Dirichlet series.
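As an illustration of this relation, partial sums of μ(n)/n^s at s = 2 settle near 1/ζ(2) = 6/π^2 ≈ 0.6079; the Möbius sieve below is an ad hoc helper written for the example rather than anything prescribed by the theory.

```python
# Partial sums of mu(n)/n**2 versus 1/zeta(2).
import math

def mobius_sieve(limit):
    """Return a list mu with mu[n] equal to the Möbius function of n."""
    mu = [1] * (limit + 1)
    is_prime = [True] * (limit + 1)
    for p in range(2, limit + 1):
        if is_prime[p]:
            for m in range(p, limit + 1, p):
                if m > p:
                    is_prime[m] = False
                mu[m] *= -1          # one more distinct prime factor
            for m in range(p * p, limit + 1, p * p):
                mu[m] = 0            # numbers with a square factor have mu(n) = 0
    return mu

N = 20_000
mu = mobius_sieve(N)
print(sum(mu[n] / n**2 for n in range(1, N + 1)), 6 / math.pi**2)
```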
The Riemann hypothesis is equivalent to the claim that this expression is valid when the real part of s is greater than 1/2.
The critical strip of the Riemann zeta function has the remarkable property of universality. This zeta-function universality states that there exists some location on the critical strip that approximates any non-vanishing holomorphic function arbitrarily well. Since holomorphic functions are very general, this property is quite remarkable. The first proof of universality was provided by Sergei Mikhailovitch Voronin in 1975. More recent work has included effective versions of Voronin's theorem and extending it to Dirichlet L-functions.
Let the functions and be defined by the equalities
Here is a sufficiently large positive number, , , , . Estimating the values and from below shows, how large (in modulus) values can take on short intervals of the critical line or in small neighborhoods of points lying in the critical strip .
The case was studied by Kanakanahalli Ramachandra; the case , where is a sufficiently large constant, is trivial.
Anatolii Karatsuba proved, in particular, that if the values and exceed certain sufficiently small constants, then the estimates
hold, where and are certain absolute constants.
The function
is called the argument of the Riemann zeta function. Here is the increment of an arbitrary continuous branch of along the broken line joining the points , and .
There are some theorems on properties of the function . Among those results are the mean value theorems for and its first integral
on intervals of the real line, and also the theorem claiming that every interval for
contains at least
points where the function changes sign. Earlier similar results were obtained by Atle Selberg for the case
An extension of the area of convergence can be obtained by rearranging the original series. The series
converges for , while
converges even for . In this way, the area of convergence can be extended to for any negative integer .
The Mellin transform of a function f(x) is defined as
M f(s) = ∫_0^∞ f(x) x^(s−1) dx
in the region where the integral is defined. There are various expressions for the zeta-function as Mellin transform-like integrals. If the real part of s is greater than one, we have
Γ(s) ζ(s) = ∫_0^∞ x^(s−1)/(e^x − 1) dx,
where Γ denotes the gamma function. By modifying the contour, Riemann showed that
ζ(s) = (Γ(1 − s)/(2πi)) ∮_H (−x)^(s−1)/(e^x − 1) dx
for all s (where H denotes the Hankel contour).
Starting with the integral formula formula_47 one can show by substitution and iterated differentation for natural formula_48
using the notation of umbral calculus where each power formula_50 is to be replaced by formula_51, so e.g. for formula_52 we have formula_53 while for formula_54 this becomes
We can also find expressions which relate to prime numbers and the prime number theorem. If π(x) is the prime-counting function, then
ln ζ(s) = s ∫_2^∞ π(x) / (x (x^s − 1)) dx
for values with Re(s) > 1.
A similar Mellin transform involves the Riemann prime-counting function J(x), which counts prime powers p^n with a weight of 1/n, so that
J(x) = Σ_{n=1}^∞ (1/n) π(x^(1/n)).
Now we have
These expressions can be used to prove the prime number theorem by means of the inverse Mellin transform. Riemann's prime-counting function is easier to work with, and π(x) can be recovered from it by Möbius inversion.
The Riemann zeta function can be given by a Mellin transform
in terms of Jacobi's theta function
However, this integral only converges if the real part of s is greater than 1, but it can be regularized. This gives the following expression for the zeta function, which is well defined for all s except 0 and 1:
The Riemann zeta function is meromorphic with a single pole of order one at s = 1. It can therefore be expanded as a Laurent series about s = 1; the series development is then
ζ(s) = 1/(s − 1) + Σ_{n=0}^∞ ((−1)^n / n!) γ_n (s − 1)^n.
The constants γ_n here are called the Stieltjes constants and can be defined by the limit
γ_n = lim_{m→∞} [ Σ_{k=1}^m (ln k)^n / k − (ln m)^(n+1) / (n + 1) ].
The constant term γ_0 is the Euler–Mascheroni constant.
For all s ∈ ℂ with s ≠ 1, the integral relation (cf. Abel–Plana formula)
holds true, which may be used for a numerical evaluation of the zeta-function.
Another series development using the rising factorial valid for the entire complex plane is
This can be used recursively to extend the Dirichlet series definition to all complex numbers.
The Riemann zeta function also appears in a form similar to the Mellin transform in an integral over the Gauss–Kuzmin–Wirsing operator acting on ; that context gives rise to a series expansion in terms of the falling factorial.
On the basis of Weierstrass's factorization theorem, Hadamard gave the infinite product expansion
where the product is over the non-trivial zeros ρ of ζ and the letter γ again denotes the Euler–Mascheroni constant. A simpler infinite product expansion is
This form clearly displays the simple pole at s = 1, the trivial zeros at −2, −4, ... due to the gamma function term in the denominator, and the non-trivial zeros at s = ρ. (To ensure convergence in the latter formula, the product should be taken over "matching pairs" of zeros, i.e. the factors for a pair of zeros of the form ρ and 1 − ρ should be combined.)
A globally convergent series for the zeta function, valid for all complex numbers s except s = 1 + 2πin/ln 2 for some integer n, was conjectured by Konrad Knopp and proven by Helmut Hasse in 1930 (cf. Euler summation):
The series only appeared in an appendix to Hasse's paper, and did not become generally known until it was discussed by Jonathan Sondow in 1994.
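A sketch of how such a globally convergent series behaves in practice, assuming the commonly cited Hasse–Knopp form ζ(s) = 1/(1 − 2^(1−s)) Σ_{n≥0} 2^(−(n+1)) Σ_{k=0}^{n} (−1)^k C(n, k) (k + 1)^(−s) (a reconstruction of the standard statement, since the displayed formula is not reproduced above): unlike the Dirichlet series, it returns sensible values at negative arguments such as ζ(−1) = −1/12.

```python
# Evaluate zeta(s) through the (assumed) Hasse/Knopp globally convergent series.
from math import comb

def zeta_hasse(s, terms=60):
    total = 0.0
    for n in range(terms):
        inner = sum((-1)**k * comb(n, k) * (k + 1)**(-s) for k in range(n + 1))
        total += inner / 2**(n + 1)
    return total / (1 - 2**(1 - s))

print(zeta_hasse(2))    # ≈ 1.644934...  (pi**2 / 6)
print(zeta_hasse(-1))   # ≈ -0.083333... (-1/12, beyond the Dirichlet series' range)
```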
Hasse also proved the globally converging series
in the same publication, but research by Iaroslav Blagouchine
has found that this latter series was actually first published by Joseph Ser in 1926. Other similar globally convergent series include
where are the harmonic numbers, formula_71 are the Stirling numbers of the first kind, formula_72 is the Pochhammer symbol, are the Gregory coefficients, are the Gregory coefficients of higher order, are the Cauchy numbers of the second kind (, , ...), and
are the Bernoulli polynomials of the second kind, see Blagouchine's paper.
Peter Borwein has developed an algorithm that applies Chebyshev polynomials to the Dirichlet eta function to produce a very rapidly convergent series suitable for high precision numerical calculations.
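A compact sketch of the idea, using the coefficients of Borwein's algorithm as it is usually stated (a reconstruction of that standard form, not a formula quoted from this article); even a modest n yields many correct digits for arguments away from the negative real axis.

```python
# Borwein-style acceleration of the Dirichlet eta series, with zeta = eta / (1 - 2**(1-s)).
from math import factorial

def zeta_borwein(s, n=30):
    # d_k = n * sum_{j=0}^{k} (n+j-1)! 4**j / ((n-j)! (2j)!)  -- assumed standard form
    d = [n * sum(factorial(n + j - 1) * 4**j / (factorial(n - j) * factorial(2 * j))
                 for j in range(k + 1))
         for k in range(n + 1)]
    eta = -sum((-1)**k * (d[k] - d[n]) / (k + 1)**s for k in range(n)) / d[n]
    return eta / (1 - 2**(1 - s))

print(zeta_borwein(2))   # ≈ 1.6449340668...  (pi**2 / 6)
print(zeta_borwein(3))   # ≈ 1.2020569032...  (Apery's constant)
```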
Here p_n# denotes the primorial sequence and J_k is Jordan's totient function.
The function can be represented, for , by the infinite series
where , is the th branch of the Lambert -function, and is an incomplete poly-Bernoulli number.
The function :formula_75 is iterated to find the coefficients appearing in Engel expansions.
The Mellin transform of the map formula_76 is related to the Riemann zeta function by the formula
For formula_78 , the Riemann zeta function has for fixed formula_79 and for all formula_80 the following representation in terms of three absolutely and uniformly converging series,formula_81where for positive integer formula_82 one has to take the limit value formula_83. The derivatives of formula_84 can be calculated by differentiating the above series termwise. From this follows an algorithm which allows to compute, to arbitrary precision, formula_84 and its derivatives using at most formula_86 summands for any formula_87, with explicit error bounds. For formula_84, these are as follows:
For a given argument formula_89 with formula_90 and formula_91 one can approximate formula_84 to any accuracy formula_93 by summing the first series to formula_94, formula_95 to formula_96 and neglecting formula_97, if one chooses formula_98 as the next higher integer of the unique solution of formula_99 in the unknown formula_100, and from this formula_101. For formula_102 one can neglect formula_95 altogether. Under the mild condition formula_104 one needs at most formula_105 summands. Hence this algorithm is essentially as fast as the Riemann-Siegel formula. Similar algorithms are possible for Dirichlet L-functions.
In February 2020, Sandeep Tyagi showed that a quantum computer can evaluate formula_106 in the critical strip with computational complexity that is polylogarithmic in formula_107. Following work by Ghaith Ayesh Hiary, the required exponential sums may be rescaled as formula_108, for integer formula_109.
The zeta function occurs in applied statistics (see Zipf's law and Zipf–Mandelbrot law).
Zeta function regularization is used as one possible means of regularization of divergent series and divergent integrals in quantum field theory. In one notable example, the Riemann
zeta-function shows up explicitly in one method of calculating the Casimir effect. The zeta function is also useful for the analysis of dynamical systems.
The zeta function evaluated at equidistant positive integers appears in infinite series representations of a number of constants.
In fact the even and odd terms give the two sums
and
Parametrized versions of the above sums are given by
and
with formula_115 and where formula_116 and formula_117 are the polygamma function and Euler's constant, as well as
all of which are continuous at formula_119. Other sums include
where Im(z) denotes the imaginary part of a complex number z.
There are yet more formulas in the article Harmonic number.
There are a number of related zeta functions that can be considered to be generalizations of the Riemann zeta function. These include the Hurwitz zeta function
(the convergent series representation was given by Helmut Hasse in 1930, cf. Hurwitz zeta function), which coincides with the Riemann zeta function when q = 1 (the lower limit of summation in the Hurwitz zeta function is 0, not 1), the Dirichlet L-functions and the Dedekind zeta-function. For other related functions see the articles zeta function and L-function.
The polylogarithm is given by
which coincides with the Riemann zeta function when z = 1.
The Lerch transcendent is given by
which coincides with the Riemann zeta function when z = 1 and q = 1 (the lower limit of summation in the Lerch transcendent is 0, not 1).
The Clausen function Cl_s(θ), which can be chosen as the real or imaginary part of Li_s(e^(iθ)).
The multiple zeta functions are defined by
One can analytically continue these functions to the -dimensional complex space. The special values taken by these functions at positive integer arguments are called multiple zeta values by number theorists and have been connected to many different branches in mathematics and physics. | https://en.wikipedia.org/wiki?curid=25809 |
Rice University
William Marsh Rice University, commonly known as Rice University, is a private research university in Houston, Texas. The university is situated on a 300-acre (121 ha) campus near the Houston Museum District and is adjacent to the Texas Medical Center.
Opened in 1912 after the murder of its namesake William Marsh Rice, Rice is a research university with an undergraduate focus. Its emphasis on education is demonstrated by a small student body and 6:1 student-faculty ratio. The university has a very high level of research activity, with $140.2 million in sponsored research funding in 2016. Rice is noted for its applied science programs in the fields of artificial heart research, structural chemical analysis, signal processing, space science, and nanotechnology. In 2010, it was ranked first in the world in materials science research by Times Higher Education (THE). Rice is a member of the Association of American Universities.
The university is organized into eleven residential colleges and eight schools of academic study, including the Wiess School of Natural Sciences, the George R. Brown School of Engineering, the School of Social Sciences, School of Architecture, Shepherd School of Music and the School of Humanities. Rice's undergraduate program offers more than fifty majors and two dozen minors, and allows a high level of flexibility in pursuing multiple degree programs. Additional graduate programs are offered through the Jesse H. Jones Graduate School of Business and the Susanne M. Glasscock School of Continuing Studies. Rice students are bound by the strict Honor Code, which is enforced by a student-run Honor Council.
Rice competes in 14 NCAA Division I varsity sports and is a part of Conference USA, often competing with its cross-town rival the University of Houston. Intramural and club sports are offered in a wide variety of activities such as jiu jitsu, water polo, and crew.
The university's alumni include more than two dozen Marshall Scholars and a dozen Rhodes Scholars. Given the university's close links to NASA, it has produced a significant number of astronauts and space scientists. In business, Rice graduates include CEOs and founders of Fortune 500 companies; in politics, alumni include congressmen, cabinet secretaries, judges, and mayors. Two alumni have won the Nobel Prize, and numerous others are researchers in science, technology, and engineering.
Rice University's history began with the demise of Massachusetts businessman William Marsh Rice, who had made his fortune in real estate, railroad development and cotton trading in the state of Texas. In 1891, Rice decided to charter a free-tuition educational institute in Houston, bearing his name, to be created upon his death, earmarking most of his estate towards funding the project. Rice's will specified the institution was to be "a competitive institution of the highest grade" and that only white students would be permitted to attend. On the morning of September 23, 1900, Rice, age 84, was found dead by his valet, Charles F. Jones, and was presumed to have died in his sleep. Shortly thereafter, a large check made out to Rice's New York City lawyer, signed by the late Rice, aroused the suspicion of a bank teller, due to the misspelling of the recipient's name. The lawyer, Albert T. Patrick, then announced that Rice had changed his will to leave the bulk of his fortune to Patrick, rather than to the creation of Rice's educational institute. A subsequent investigation led by the District Attorney of New York resulted in the arrests of Patrick and of Rice's butler and valet Charles F. Jones, who had been persuaded to administer chloroform to Rice while he slept. Rice's friend and personal lawyer in Houston, Captain James A. Baker, aided in the discovery of what turned out to be a fake will with a forged signature. Jones was not prosecuted since he cooperated with the district attorney, and testified against Patrick. Patrick was found guilty of conspiring to steal Rice's fortune and he was convicted of murder in 1901 (he was pardoned in 1912 due to conflicting medical testimony). Baker helped Rice's estate direct the fortune, worth $4.6 million in 1904, towards the founding of what was to be called the Rice Institute, later to become Rice University. The board took control of the assets on April 29 of that year.
In 1907, the Board of Trustees selected the head of the Department of Mathematics and Astronomy at Princeton University, Edgar Odell Lovett, to head the Institute, which was still in the planning stages. He came recommended by Princeton's president, Woodrow Wilson. In 1908, Lovett accepted the challenge, and was formally inaugurated as the Institute's first president on October 12, 1912. Lovett undertook extensive research before formalizing plans for the new Institute, including visits to 78 institutions of higher learning across the world on a long tour between 1908 and 1909. Lovett was impressed by such things as the aesthetic beauty of the uniformity of the architecture at the University of Pennsylvania, a theme which was adopted by the Institute, as well as the residential college system at Cambridge University in England, which was added to the Institute several decades later. Lovett called for the establishment of a university "of the highest grade," "an institution of liberal and technical learning" devoted "quite as much to investigation as to instruction." [We must] "keep the standards up and the numbers down," declared Lovett. "The most distinguished teachers must take their part in undergraduate teaching, and their spirit should dominate it all."
In 1911, the cornerstone was laid for the Institute's first building, the Administration Building, now known as Lovett Hall in honor of the founding president. On September 23, 1912, the 12th anniversary of William Marsh Rice's murder, the "William Marsh Rice Institute for the Advancement of Letters, Science, and Art" began course work with 59 enrolled students, who were known as the "59 immortals," and about a dozen faculty. After 18 additional students joined later, Rice's initial class numbered 77, 48 male and 29 female. Unusual for the time, Rice accepted coeducational admissions from its beginning, but on-campus housing would not become co-ed until 1957. Three weeks after opening, a spectacular international academic festival was held, bringing Rice to the attention of the entire academic world.
Per William Marsh Rice's will and Rice Institute's initial charter, the students paid no tuition. Classes were difficult, however, and about half of Rice's students had failed after the first 1912 term. At its first commencement ceremony, held on June 12, 1916, Rice awarded 35 bachelor's degrees and one master's degree. That year, the student body also voted to adopt the Honor System, which still exists today. Rice's first doctorate was conferred in 1918 on mathematician Hubert Evelyn Bray.
The Founder's Memorial Statue, a bronze statue of a seated William Marsh Rice, holding the original plans for the campus, was dedicated in 1930, and installed in the central academic quad, facing Lovett Hall. The statue was crafted by John Angel.
During World War II, Rice Institute was one of 131 colleges and universities nationally that took part in the V-12 Navy College Training Program, which offered students a path to a Navy commission.
The residential college system proposed by President Lovett was adopted in 1958, with the East Hall residence becoming Baker College, South Hall residence becoming Will Rice College, West Hall becoming Hanszen College, and the temporary Wiess Hall becoming Wiess College.
In 1959, the Rice Institute Computer went online. 1960 saw Rice Institute formally renamed William Marsh Rice University. Rice acted as a temporary intermediary in the transfer of land between Humble Oil and Refining Company and NASA, for the creation of NASA's Manned Spacecraft Center (now called Johnson Space Center) in 1962. President John F. Kennedy then made a speech at Rice Stadium reiterating that the United States intended to reach the moon before the end of the decade of the 1960s, and "to become the world's leading space-faring nation". The relationship of NASA with Rice University and the city of Houston has remained strong.
The original charter of Rice Institute dictated that the university admit and educate, tuition-free, "the white inhabitants of Houston, and the state of Texas". In 1963, the governing board of Rice University filed a lawsuit to allow the university to modify its charter to admit students of all races and to charge tuition. Ph.D. student Raymond Johnson became the first black Rice student when he was admitted that year. In 1964, Rice officially amended the university charter to desegregate its graduate and undergraduate divisions. The Trustees of Rice University prevailed in a lawsuit to void the racial language in the trust in 1966. Rice began charging tuition for the first time in 1965. In the same year, Rice launched a $33 million development campaign. $43 million was raised by its conclusion in 1970. In 1974, two new schools were founded at Rice, the Jesse H. Jones Graduate School of Management and the Shepherd School of Music. The Brown Foundation Challenge, a fund-raising program designed to encourage annual gifts, was launched in 1976 and ended in 1996 having raised $185 million. The Rice School of Social Sciences was founded in 1979.
On-campus housing was exclusively for men for the first forty years, until 1957. Jones College was the first women's residence on the Rice campus, followed by Brown College. According to legend, the women's colleges were purposefully situated at the opposite end of campus from the existing men's colleges as a way of preserving campus propriety, which was greatly valued by Edgar Odell Lovett, who did not even allow benches to be installed on campus, fearing that they "might lead to co-fraternization of the sexes". The path linking the north colleges to the center of campus was given the tongue-in-cheek name of "Virgin's Walk". Individual colleges became coeducational between 1973 and 1987, with the single-sex floors of colleges that had them becoming co-ed by 2006. By then, several new residential colleges had been built on campus to handle the university's growth, including Lovett College, Sid Richardson College, and Martel College.
The Economic Summit of Industrialized Nations was held at Rice in 1990. Three years later, in 1993, the James A. Baker III Institute for Public Policy was created. In 1997, the Edythe Bates Old Grand Organ and Recital Hall and the Center for Nanoscale Science and Technology, renamed in 2005 for the late Nobel Prize winner and Rice professor Richard E. Smalley, were dedicated at Rice. In 1999, the Center for Biological and Environmental Nanotechnology was created. The Rice Owls baseball team was ranked #1 in the nation for the first time in that year (1999), holding the top spot for eight weeks.
In 2003, the Owls won their first national championship in baseball, which was the first for the university in any team sport, beating Southwest Missouri State in the opening game and then the University of Texas and Stanford University twice each en route to the title. In 2008, President David Leebron issued a ten-point plan titled "Vision for the Second Century" outlining plans to increase research funding, strengthen existing programs, and increase collaboration. The plan has brought about another wave of campus constructions, including the erection of the newly renamed BioScience Research Collaborative building (intended to foster collaboration with the adjacent Texas Medical Center), a new recreational center and the renovated Autry Court basketball stadium, and the addition of two new residential colleges, Duncan College and McMurtry College.
Beginning in late 2008, the university considered a merger with Baylor College of Medicine, though the merger was ultimately rejected in 2010. Select Rice undergraduates are currently guaranteed admission to Baylor College of Medicine upon graduation as part of the Rice/Baylor Medical Scholars program. According to History Professor John Boles' recent book "University Builder: Edgar Odell Lovett and the Founding of the Rice Institute," the first president's original vision for the university included hopes for future medical and law schools.
In 2018, the university added an online MBA program, MBA@Rice.
In June 2019, the university's president announced plans for a task force on Rice's "past in relation to slave history and racial injustice", stating that "Rice has some historical connections to that terrible part of American history and the segregation and racial disparities that resulted directly from it".
Rice's campus is a heavily wooded tract of land in the museum district of Houston, located close to the city of West University Place.
Five streets demarcate the campus: Greenbriar Street, Rice Boulevard, Sunset Boulevard, Main Street, and University Boulevard. For most of its history, all of Rice's buildings have been contained within this "outer loop". In recent years, new facilities have been built close to campus, but the bulk of administrative, academic, and residential buildings are still located within the original pentagonal plot of land. The new Collaborative Research Center, all graduate student housing, the Greenbriar building, and the Wiess President's House are located off-campus.
Rice prides itself on the amount of green space available on campus; there are only about 50 buildings spread between the main entrance at its easternmost corner, and the parking lots and Rice Stadium at the West end. The Lynn R. Lowrey Arboretum, consisting of more than 4000 trees and shrubs (giving birth to the legend that Rice has a tree for every student), is spread throughout the campus.
The university's first president, Edgar Odell Lovett, intended for the campus to have a uniform architecture style to improve its aesthetic appeal. To that end, nearly every building on campus is noticeably Byzantine in style, with sand and pink-colored bricks, large archways and columns being a common theme among many campus buildings. Noteworthy exceptions include the glass-walled Brochstein Pavilion, Lovett College with its Brutalist-style concrete gratings, Moody Center for the Arts with its contemporary design, and the eclectic-Mediterranean Duncan Hall. In September 2011, Travel+Leisure listed Rice's campus as one of the most beautiful in the United States.
Lovett Hall, named for Rice's first president, is the university's most iconic campus building. Through its Sallyport arch, new students symbolically enter the university during matriculation and depart as graduates at commencement. Duncan Hall, Rice's computational engineering building, was designed to encourage collaboration between the four different departments situated there. The building's foyer, drawn from many world cultures, was designed by the architect to symbolically express this collaborative purpose.
The campus is organized in a number of quadrangles. The Academic Quad, anchored by a statue of founder William Marsh Rice, includes Ralph Adams Cram's masterpiece, the asymmetrical Lovett Hall, the original administrative building; Fondren Library; Herzstein Hall, the original physics building and home to the largest amphitheater on campus; Sewall Hall for the social sciences and arts; Rayzor Hall for the languages; and Anderson Hall of the Architecture department. The Humanities Building, winner of several architectural awards, is immediately adjacent to the main quad. Further west lies a quad surrounded by McNair Hall of the Jones Business School, the Baker Institute, and Alice Pratt Brown Hall of the Shepherd School of Music. These two quads are surrounded by the university's main access road, a one-way loop referred to as the "inner loop". In the Engineering Quad, a trinity of sculptures by Michael Heizer, collectively entitled "45 Degrees, 90 Degrees, 180 Degrees", are flanked by Abercrombie Laboratory, the Cox Building, and the Mechanical Laboratory, housing the Electrical, Mechanical, and Earth Science/Civil Engineering departments, respectively. Duncan Hall is the latest addition to this quad, providing new offices for the Computer Science, Computational and Applied Math, Electrical and Computer Engineering, and Statistics departments.
Roughly three-quarters of Rice's undergraduate population lives on campus. Housing is divided among eleven residential colleges, which form an integral part of student life at the university (see Residential colleges of Rice University). The colleges are named for university historical figures and benefactors, and while there is wide variation in their appearance, facilities, and dates of founding, they are an important source of identity for Rice students, functioning as dining halls, residence halls, and sports teams, among other roles. Rice does not have or endorse a Greek system, with the residential college system taking its place. Five colleges, McMurtry, Duncan, Martel, Jones, and Brown, are located on the north side of campus, across from the "South Colleges", Baker, Will Rice, Lovett, Hanszen, Sid Richardson, and Wiess, on the other side of the Academic Quadrangle. Of the eleven colleges, Baker is the oldest, originally built in 1912, and the twin Duncan and McMurtry colleges are the newest, and opened for the first time for the 2009–10 school year. Will Rice, Baker, and Lovett colleges are undergoing renovation to expand their dining facilities as well as the number of rooms available for students.
The on-campus football facility, Rice Stadium, opened in 1950 with a capacity of 70,000 seats. After improvements in 2006, the stadium is currently configured to seat 47,000 for football but can readily be reconfigured to its original capacity of 70,000, more than the total number of Rice alumni, living and deceased. The stadium was the site of Super Bowl VIII and a speech by John F. Kennedy on September 12, 1962, in which he challenged the nation to send a man to the moon by the end of the decade. The recently renovated Tudor Fieldhouse, formerly known as Autry Court, is home to the basketball and volleyball teams. Other stadia include the Rice Track/Soccer Stadium and the Jake Hess Tennis Stadium. A new Rec Center now houses the intramural sports offices and provides an outdoor pool, training and exercise facilities for all Rice students, while athletics training will solely be held at Tudor Fieldhouse and the Rice Football Stadium.
The university and Houston Independent School District jointly established The Rice School, a kindergarten through 8th grade public magnet school in Houston. The school opened in August 1994. Through Cy-Fair ISD, Rice University offers a credit-based summer school for grades 8 through 12; it also offers skills-based classes during the summer through the Rice Summer School.
Rice University is chartered as a non-profit organization and is governed by a privately appointed board of trustees. The board consists of a maximum of 25 voting members who serve four-year terms. The trustees serve without compensation and a simple majority of trustees must reside in Texas, including at least four within the greater Houston area. The board of trustees delegates its power by appointing a President to serve as the chief executive of the university. David W. Leebron was appointed president in 2004 and succeeded Malcolm Gillis who served since 1993. The provost, six vice presidents, and other university officials report to the President. The President is advised by a University Council composed of the Provost, eight members of the Faculty Council, two staff members, one graduate student, and two undergraduate students. The President presides over a Faculty Council which has the authority to alter curricular requirements, establish new degree programs, and approve candidates for degrees.
Rice's undergraduate students benefit from a centralized admissions process, which admits new students to the university as a whole, rather than a specific school (the schools of Music and Architecture are decentralized). Students are encouraged to select the major path that best suits their desires; a student can later decide that they would rather pursue study in another field, or continue their current coursework and add a second or third major. These transitions are designed to be simple at Rice, with students not required to decide on a specific major until their sophomore year of study.
Rice's academics are organized into six schools which offer courses of study at the graduate and undergraduate level, with two more being primarily focused on graduate education, while offering select opportunities for undergraduate students. Rice offers 360 degrees in over 60 departments. There are 40 undergraduate degree programs, 51 masters programs, and 29 doctoral programs.
Undergraduate tuition for the 2011–2012 school year was $34,900. $651 was charged for fees, and Rice projected an $800 budget for books and $1550 for personal expenses. Rice students were charged $12,270 for room and board. Per year, the total cost of a Rice University education was $50,171.
Faculty members of each of the departments elect chairs to represent the department to each School's dean and the deans report to the Provost who serves as the chief officer for academic affairs.
Rice is a medium-sized, highly residential research university. The majority of enrollments are in the full-time, four-year undergraduate program emphasizing arts & sciences and professions. There is a high graduate coexistence with the comprehensive graduate program and a very high level of research activity. It is accredited by the Southern Association of Colleges and Schools as well as the professional accreditation agencies for engineering, management, and architecture.
Each of Rice's departments is organized into one of three distribution groups, and students whose major lies within the scope of one group must take at least 3 courses of at least 3 credit hours each of approved distribution classes in each of the other two groups, as well as completing one physical education course as part of the LPAP (Lifetime Physical Activity Program) requirement. All new students must take a Freshman Writing Intensive Seminar (FWIS) class, and for students who do not pass the university's writing composition examination (administered during the summer before matriculation), FWIS 100, a writing class, becomes an additional requirement.
The majority of Rice's undergraduate degree programs grant B.S. or B.A. degrees. Rice has recently begun to offer minors in areas such as business, energy and water sustainability, and global health.
As of fall 2014, men make up 52% of the undergraduate body and 64% of the professional and post-graduate student body. The student body consists of students from all 50 states, the District of Columbia, two U.S. territories, and 83 foreign countries. Forty percent of degree-seeking students are from Texas.
Among students enrolled at Rice University in full-time undergraduate programs, the largest group is White Male (20.5%), followed by White Female (15.8%) and Asian Female (12.5%). Among students enrolled in full-time graduate programs, the largest group is White Male (26.2%), followed by White Female (11.7%) and Asian Male (4.98%).
The Rice Honor Code plays an integral role in academic affairs. Almost all Rice exams are unproctored, and professors give timed, closed-book exams that students take home and complete at their own convenience. Potential infractions are reported to the student Honor Council, which is elected by popular vote. The penalty structure is established every year by Council consensus; typically, penalties have ranged from a letter of reprimand to an 'F' in the course and a two-semester suspension. During Orientation Week, students must take and pass a test demonstrating that they understand the Honor System's requirements and sign a Matriculation Pledge. On assignments, Rice students affirm their commitment to the Honor Code by writing "On my honor, I have neither given nor received any unauthorized aid on this (examination, quiz or paper)."
Rice is noted for its applied science programs in the fields of nanotechnology, artificial heart research, structural chemical analysis, signal processing, and space science; in 2010, Times Higher Education (THE) ranked Rice first in the world for materials science research.
Admission to Rice is rated as "most selective" by "U.S. News & World Report".
For fall 2020, Rice received 23,443 freshman applications, of which 2,346 were admitted (10.0%), up from a record-low 8.7% acceptance rate in 2019. The middle 50% range of SAT scores for the class of 2023 was 1470–1560; the middle 50% range of the ACT composite score was 33–35.
Rice was ranked tied at 17th among national universities and 107th among global universities, tied at 8th for "best undergraduate teaching", 13th for "Best Value", and tied for 27th "Most Innovative" among national universities in the U.S. by "U.S. News & World Report" in its 2020 edition. "Forbes" magazine ranked Rice University 21st nationally among 650 liberal arts colleges, universities and service academies in 2019, 19th among research universities and 2nd in the South. "Kiplinger's Personal Finance" places Rice 7th in its 2019 ranking of best value private universities in the United States.
In 2020, Rice was ranked 105th in the world by the "Times Higher Education World University Rankings". In 2020, Rice was ranked tied for 95th internationally (41st nationally) by the "Academic Ranking of World Universities". Rice University was also ranked 85th globally in 2020 by "QS World University Rankings". Rice is noted for its entrepreneurial activity, and has been recognized as the top ranked business incubator in the world by the Stockholm-based UBI Index for both 2013 and 2014.
The "Princeton Review" ranked Rice 4th for "Best Quality of Life", 8th for "Happiest Students", 10th among the most LGBT friendliest colleges, and one of the top 50 best value private colleges in its 2020 edition. Rice was ranked 41st among research universities by the Center for Measuring University Performance in 2007. "Consumer's Digest" ranked Rice 3rd on the list of top 5 values in private colleges in its June 2011 issue. "Fiske Guide to Colleges" ranked Rice as one of the top 25 private "best buy" schools in its 2012 edition.
In 2011, the Leiden Ranking, which evaluates 500 major research universities worldwide using metrics designed to measure research impact, ranked Rice 4th globally for the effectiveness and contribution of its research. In 2013, the university was again ranked first globally for quality of research in the natural sciences and engineering, and 6th globally across all sciences. In 2014, "The Daily Beast" ranked Rice 14th out of nearly 2,000 schools it evaluated. In 2019, "Money Magazine" ranked Rice 24th in the nation.
Situated in the center of Houston's Museum District and across the street from Hermann Park, Rice's campus is a green and leafy refuge, an oasis of learning convenient to the amenities of the nation's fourth-largest city. The campus adjoins Hermann Park, the Texas Medical Center, and a neighborhood commercial center called Rice Village. Hermann Park includes the Houston Museum of Natural Science, the Houston Zoo, Miller Outdoor Theatre, and an 18-hole municipal golf course. NRG Park, home of NRG Stadium and the Astrodome, is two miles (3 km) south of the campus. Among the dozen or so museums in the Museum District is the Rice University Art Gallery, open during the school year. The Houston METRORail system, with a station adjacent to the campus's main gate, provides easy access to downtown's theater and nightlife district and to NRG Park. The campus recently joined the Zipcar program with two vehicles to expand transportation options for students and staff who occasionally need a vehicle but do not keep one on campus.
In 1957, Rice University implemented a residential college system, which was proposed by the university's first president, Edgar Odell Lovett. The system was inspired by existing systems in place at Oxford and Cambridge in England and at several other universities in the United States, most notably Yale University. The existing residences known as East, South, West, and Wiess Halls became Baker, Will Rice, Hanszen, and Wiess Colleges, respectively.
Much of undergraduate social and academic life at Rice centers on the residential colleges. Each college has its own dining hall (servery), its own study groups, and its own social traditions.
Although each college is composed of a full cross-section of students at Rice, they have over time developed their own traditions and "personalities". When students matriculate they are randomly assigned to one of the eleven colleges, although "legacy" exceptions are made for students whose siblings or parents have attended Rice. Students generally remain members of the college to which they are assigned for the duration of their undergraduate careers, even if they move off campus at any point. Students are guaranteed on-campus housing for freshman year and two of the next three years; each college has its own system for determining allocation of the remaining spaces, collectively known as "Room Jacking". Students develop strong loyalties to their college and maintain friendly rivalries with other colleges, especially during events such as the Beer Bike race and O-Week. Colleges keep their rivalries alive by performing "jacks", or pranks, on each other, especially during O-Week and Willy Week. During Matriculation, Commencement, and other formal academic ceremonies, the colleges process in the order in which they were established.
The Baker 13 is a tradition in which students run around campus wearing nothing but shoes and shaving cream at 10 p.m. on the 13th and the 31st of every month, as well as the 26th on months with fewer than 31 days. The event, long sponsored by Baker College, usually attracts a small number of students, but Halloween night and the first and last relevant days of the school year both attract large numbers of revelers.
According to the official website, "Beer Bike is a combination intramural bicycle race and drinking competition dating back to 1957." Ten riders and ten chuggers make up a team. Elaborate rules include details such as a prohibition on "bulky or wet clothing articles designed to absorb beer/water or prevent spilled beer/water from being seen" and regulations for chug-can design. Each residential college, as well as the Graduate Student Association, fields a men's team, a women's team, and an alumni (co-ed) team. Each leg of the race is a relay in which a team's "chugger" must chug a set amount of beer or water (with different quantities specified for the men's and women's divisions) before the team's "rider" may begin to ride. Participants who both ride and chug are referred to as "Ironmen". Willy Week is a term coined in the 1990s for the week preceding Beer Bike, a time of general energy and excitement on campus. Jacks (pranks) are especially common during Willy Week; past examples include removing showerheads and encasing the Hanszen guardian.
The morning of the Beer Bike race itself begins with what is by some estimations the largest annual water balloon fight in the world. Beer Bike is Rice's most prominent student event, and for younger alumni it serves as an unofficial reunion weekend on par with Homecoming. The 2009 Beer Bike race was dedicated to the memory of Dr. Bill Wilson, a popular professor and long-time resident associate of Wiess College who died earlier that year.
In the event of inclement weather, Beer Bike becomes a Beer Run. The rules are nearly identical, except that the Bikers must instead run the length of the track.
A number of on-campus institutions form an integral part of student life at Rice. Many of these organizations have been operating for several decades.
Rice Coffeehouse traces its beginnings to Hanszen College, where students served coffee in the Weenie Loft, a study room on the fourth floor of the college's old section. The coffee house later moved to the Hanszen basement to accommodate more student patrons and became known as Breadsticks and Pomegranates. It eventually closed due to flooding, a consequence of both its basement location and the Houston climate. Demand for an on-campus coffeehouse grew, and in 1990 the Rice Coffeehouse was founded.
The Rice Coffeehouse is a not-for-profit student-run organization serving Rice University and the greater Houston community. Over the past few years, it has introduced fair-trade and organic coffee and loose-leaf teas.
Coffeehouse baristas are referred to as K.O.C.'s, or Keepers of the Coffee. Rice Coffeehouse has also adopted an unofficial mascot, the squirrel, which can be found on T-shirts, mugs, and bumper stickers stuck on laptops across campus. The logo pays tribute to Rice's unusually plump and frighteningly tame squirrel population.
Willy's Pub is Rice's student-run undergraduate pub, located in the basement of the Rice Memorial Center. It opened on April 11, 1975, with Rice President Norman Hackerman pouring the first beer. The name was chosen by students in tribute to the university's founder, William Marsh Rice. After the drinking age in Texas was raised in 1986, the pub entered a period of financial difficulties, and in April 1995 it was destroyed in a fire. The space was gutted but renovated and remains open.
Rice Bikes is a full-service on-campus bicycle sale, rental, and repair shop. It originated in the basement of Sid Richardson College in February 2011. In 2012, Rice Bikes officially became the university's third student-run business. Rice Bikes merged with a student-run bicycle rental business in 2013, and operations moved to the Rice Memorial Center in 2014. In 2017, the business moved to the garage of the Rice Housing and Dining department's headquarters.
Rice Bikes sells refurbished bicycles bought from students and functions as a full bicycle repair shop.
Rice has a weekly student newspaper ("The Rice Thresher"), a yearbook ("The Campanile"), a college radio station (KTRU Rice Radio), and a now-defunct campus-wide student television station (RTV5), all based out of the RMC student center. In addition, Rice hosts several student magazines dedicated to a range of topics; the spring semester of 2008 alone saw the launch of two such magazines, a literary sex journal called "Open" and an undergraduate science research magazine entitled "Catalyst".
"The Rice Thresher" is published every Wednesday and is ranked by Princeton Review as one of the top campus newspapers nationally for student readership. It is distributed around campus, and at a few other local businesses and has a website. The "Thresher" has a small, dedicated staff and is known for its coverage of campus news, open submission opinion page, and the satirical Backpage, which has often been the center of controversy. The newspaper has won several awards from the College Media Association, Associated Collegiate Press and Texas Intercollegiate Press Association.
The Rice Campanile was first published in 1916, celebrating Rice's first graduating class. It has been published continuously since then, producing two volumes in 1944 because the university had two graduating classes due to World War II. Its website was created in the early to mid-2000s. The 2015 edition won the first-place Pinnacle award for best yearbook from the College Media Association.
KTRU Rice Radio is the student-run radio station. Though most DJs are Rice students, anyone is allowed to apply. The station is known for playing genres and artists unavailable on other radio stations in Houston, and often in the US, and it takes requests over the phone or online. In 2000 and 2006, KTRU won the Houston Press award for Best Radio Station in Houston, and in 2003 the hip-hop show of Rice alum and active KTRU DJ DL won the Houston Press award for Best Hip-hop Radio Show. On August 17, 2010, it was announced that Rice University had been negotiating to sell the station's broadcast tower, FM frequency, and license to the University of Houston System, which would convert it into a full-time classical music and fine arts programming station. The new station, KUHA, would be operated as a not-for-profit outlet with listener supporters. The FCC approved the sale and granted the transfer of the license to the University of Houston System on April 15, 2011. KUHA proved unsuccessful, however, and after four and a half years of operation the University of Houston System announced in August 2015 that KUHA's broadcast tower, FM frequency, and license were once again up for sale. KTRU continued to operate much as it had previously, streaming live on the Internet, via apps, and on HD2 radio using the 90.1 signal. Under student leadership, KTRU explored the possibility of returning to FM radio for a number of years, and in spring 2015 it was granted permission by the FCC to begin development of a new broadcast signal via LPFM radio. On October 1, 2015, KTRU made its official return to FM radio on the 96.1 signal. While broadcasting on HD2 radio has been discontinued, KTRU continues to broadcast via the Internet in addition to its LPFM signal.
RTV5 was a student-run television network available as channel 5 on campus. It was created as Rice Broadcast Television in 1997, began broadcasting in 1998, and aired its first live show across campus in 1999. It grew in exposure over the years with successful programs like "Drinking with Phil," a weekly news show, and extensive live coverage in December 2000 of the administration's shutdown of KTRU. In spring 2001, the Rice undergraduate community voted in the general elections to support RBT as a blanket tax organization, providing a yearly income of $10,000 to purchase new equipment and provide the campus with a variety of new programming. In the spring of 2005, RBT members decided the station needed a new image and a new name: Rice Television 5. One of RTV5's most popular shows was the 24-hour show, in which a camera and couch placed in the RMC stayed on air for 24 hours; one such show was held each fall and spring, usually during a weekend allocated for visits by prospective students. RTV5 maintained a video-on-demand site at rtv5.rice.edu. The station went off the air in 2014 and changed its name to Rice Video Productions. In 2015 the group's funding was threatened but ultimately maintained, in 2016 its small student staff requested to no longer be a blanket tax organization, and in the fall of 2017 the group did not register as a student club.
"The Rice Review", also known as R2, is a yearly student-run literary journal at Rice University that publishes prose, poetry, and creative nonfiction written by undergraduate students, as well as interviews. The journal was founded in 2004 by creative writing professor and author Justin Cronin.
"The Rice Standard" was an independent, student-run variety magazine modeled after such publications as "The New Yorker" and "Harper's". Prior to fall 2009, it was regularly published three times a semester with a wide array of content, running from analyses of current events and philosophical pieces to personal essays, short fiction and poetry. In August 2009, the "Standard" transitioned to a completely online format with the launch of their redesigned website, ricestandard.org. The first website of its kind on Rice's campus, the "Standard" featured blog-style content written by and for Rice students. "The Rice Standard" had around 20 regular contributors, and the site features new content every day (including holidays). In 2017 no one registered The Rice Standard as a club within the university.
"Open", a magazine dedicated to "literary sex content," predictably caused a stir on campus with its initial publication in spring 2008. A mixture of essays, editorials, stories and artistic photography brought Open attention both on campus and in the Houston Chronicle. The third and last annual edition of "Open" was released in spring of 2010.
Valhalla is the Graduate Student Association's on-campus bar, located under the steps of the chemistry building.
Rice plays in NCAA Division I athletics and is part of Conference USA. Rice was a member of the Western Athletic Conference before joining Conference USA in 2005. Rice is the second-smallest school, measured by undergraduate enrollment, competing in NCAA Division I FBS football, only ahead of Tulsa.
The Rice baseball team won the 2003 College World Series, defeating Stanford, giving Rice its only national championship in a team sport. The victory made Rice University the smallest school in 51 years to win a national championship at the highest collegiate level of the sport. The Rice baseball team has played on campus at Reckling Park since the 2000 season. As of 2010, the baseball team has won 14 consecutive conference championships in three different conferences: the final championship of the defunct Southwest Conference, all nine championships while a member of the Western Athletic Conference, and five more championships in its first five years as a member of Conference USA. Additionally, Rice's baseball team has finished third in both the 2006 and 2007 College World Series tournaments. Rice now has made six trips to Omaha for the CWS. In 2004, Rice became the first school ever to have three players selected in the first eight picks of the MLB draft when Philip Humber, Jeff Niemann, and Wade Townsend were selected third, fourth, and eighth, respectively. In 2007, Joe Savery was selected as the 19th overall pick.
Rice has been very successful in women's sports in recent years. In 2004–05, Rice sent its women's volleyball, soccer, and basketball teams to their respective NCAA tournaments. The women's swim team has consistently brought at least one member of their team to the NCAA championships since 2013. In 2005–06, the women's soccer, basketball, and tennis teams advanced, with five individuals competing in track and field. In 2006–07, the Rice women's basketball team made the NCAA tournament, while again five Rice track and field athletes received individual NCAA berths. In 2008, the women's volleyball team again made the NCAA tournament.
In 2011, the women's swim team won its first conference championship in the history of the university, notably doing so without a diving team. The team repeated its C-USA success in 2013 and 2014.
In 2017, the women's basketball team, led by second-year head coach Tina Langley, won the Women's Basketball Invitational, defeating UNC-Greensboro 74–62 in the championship game at Tudor Fieldhouse.
Though not a varsity sport, Rice's ultimate frisbee women's team, named Torque, won consecutive Division III national championships in 2014 and 2015.
In 2006, the football team qualified for its first bowl game since 1961, ending the second-longest bowl drought in the country at the time. On December 22, 2006, Rice played in the New Orleans Bowl in New Orleans, Louisiana, against the Sun Belt Conference champion, Troy, and lost 41–17. The bowl appearance came after Rice had endured a 14-game losing streak from 2004–05 and gone 1–10 in 2005. The streak followed an internally authorized 2003 McKinsey report stating that football alone was responsible for a $4 million deficit in 2002. Tensions remained high between the athletic department and the faculty, as a few professors who chose to voice their opinion favored abandoning the football program. The program's success in 2006, dubbed the "Rice Renaissance," proved to be a revival of the Owl football program and quelled those tensions. David Bailiff took over the program in 2007 and has remained head coach. Jarett Dillard set an NCAA record in 2006 by catching a touchdown pass in 13 consecutive games and took a 15-game overall streak into the 2007 season.
In 2008, the football team posted a 9-3 regular season, capping off the year with a 38–14 victory over Western Michigan University in the Texas Bowl. The win over Western Michigan marked the Owls' first bowl win in 45 years.
Rice Stadium also serves as the performance venue for the university's Marching Owl Band, or "MOB." Despite its name, the MOB is a scatter band that focuses on performing humorous skits and routines rather than traditional formation marching.
Rice Owls men's basketball won 10 conference titles in the former Southwest Conference (1918, 1935*, 1940, 1942*, 1943*, 1944*, 1945, 1949*, 1954*, 1970; * denotes shared title). Most recently, guard Morris Almond was drafted in the first round of the 2007 NBA Draft by the Utah Jazz. Rice named former Cal Bears head coach Ben Braun as head basketball coach to succeed Willis Wilson, who was fired after Rice finished the 2007–2008 season with a winless (0–16) conference record and an overall record of 3–27.
Rice's mascot is Sammy the Owl. In previous decades, the university kept several live owls on campus in front of Lovett College, but this practice has been discontinued, due to public pressure over the welfare of the owls.
Rice also has a 12-member coed cheerleading squad and a coed dance team, both of which perform at football and basketball games throughout the year.
As of 2011, Rice has graduated 98 classes of students, comprising 51,961 living alumni. More than 100 Rice students have been Fulbright Scholars; the university has also produced 25 Marshall Scholars, 25 Mellon Fellows, 12 Rhodes Scholars, 6 Udall Scholars, and 65 Watson Fellows, among several other honors and awards.
Rice's distinguished faculty and alumni include three Nobel laureates, two Pulitzer Prize winners, six Fulbright Scholars, 29 Alexander von Humboldt Foundation recipients, eight members of the American Academy of Arts and Sciences, one member of the American Philosophical Society, 35 Guggenheim Fellows, 17 members of the National Academy of Engineering, seven members of the National Academy of Sciences, five fellows of the National Humanities Center, and 86 fellows of the National Science Foundation.
Alumni of Rice have occupied top positions in business, including Thomas H. Cruikshank, the former CEO of Halliburton; John Doerr, billionaire and venture capitalist; Howard Hughes; and Fred C. Koch.
In government and politics, Rice alumni include Alberto Gonzales, former Attorney General; Charles Duncan, former Secretary of Energy; William P. Hobby, Jr.; John Kline; George P. Bush; Josh Earnest, White House Press Secretary for President Obama; Ben Rhodes, Deputy National Security Advisor for President Obama; and Annise Parker, the 61st Mayor of Houston.
Rice alumni who became prominent writers include Larry McMurtry, Pulitzer Prize–winning author and Oscar-winning writer of the screenplay for "Brokeback Mountain"; Joyce Carol Oates, who was once a doctoral candidate in English; John Graves, author of "Goodbye to a River"; and Candace Bushnell, author of "Sex and the City", who attended for three semesters.
Notable entrepreneurs from Rice include Elizabeth Avellán, co-founder of Troublemaker Studios; Tim and Karrie League, founders of the Alamo Drafthouse Cinema and Drafthouse Films; Gus Sorola, one of the founders of Rooster Teeth; and Brock Wagner and Kevin Bartol, founders of Saint Arnold Brewing Company.
In science and technology, Rice alumni include 14 NASA astronauts; Robert Curl, Nobel Prize–winning discoverer of fullerene; Robert Woodrow Wilson, winner of the Nobel Prize in Physics for the discovery of cosmic microwave background radiation; David Eagleman, celebrity neuroscientist and "NYT" bestselling author; and Jerry Woodfill, former NASA warning systems engineer for Apollo 11 and 13 and motivational speaker.
Rice athletes include Lance Berkman, Brock Holt, Bubba Crosby, Harold Solomon, Frank Ryan, Tommy Kramer, Jose Cruz, Jr., O.J. Brigance, Larry Izzo, James Casey, Courtney Hall, Bert Emanuel, Luke Willson, Tony Cingrani, Anthony Rendon, and Leo Rucka, as well as three Olympians (Funmi Jimoh '06, Allison Beckford '04, and William Fred Hansen '63).
Richard Smalley
Richard Errett Smalley (June 6, 1943 – October 28, 2005) was the Gene and Norman Hackerman Professor of Chemistry and a Professor of Physics and Astronomy at Rice University. In 1996, along with Robert Curl, also a professor of chemistry at Rice, and Harold Kroto, a professor at the University of Sussex, he was awarded the Nobel Prize in Chemistry for the discovery of a new form of carbon, buckminsterfullerene, also known as buckyballs. He was an advocate of nanotechnology and its applications.
Smalley, the youngest of four children, was born in Akron, Ohio, on June 6, 1943, to Frank Dudley Smalley, Jr., and Esther Virginia Rhoads. He grew up in Kansas City, Missouri. Smalley credited his father, mother, and aunt as formative influences in industry, science, and chemistry. His father, Frank Dudley Smalley, Jr., worked with mechanical and electrical equipment and eventually became CEO of a trade journal for farm implements called "Implement and Tractor". His mother, Esther Rhoads Smalley, completed her B.A. degree while Richard was a teenager. She was particularly inspired by mathematician Norman N. Royall Jr., who taught Foundations of Physical Science, and communicated her love of science to her son through long conversations and joint activities. Smalley's aunt, pioneering woman chemist Sara Jane Rhoads, interested him in the field of chemistry, letting him work in her organic chemistry laboratory and suggesting that he attend Hope College, which had a strong chemistry program.
Smalley attended Hope College for two years before transferring to the University of Michigan where he received his Bachelor of Science in 1965. Between his studies, he worked in industry, where he developed his unique managerial style. He received his Ph.D. from Princeton University in 1973 after completing a doctoral dissertation, titled "The lower electronic states of 1,3,5 (sym)-triazine", under the supervision of Elliot R. Bernstein. He did postdoctoral work at the University of Chicago from 1973 to 1976, with Donald Levy and Lennard Wharton where he was a pioneer in the development of supersonic beam laser spectroscopy.
In 1976, Smalley joined Rice University. In 1982, he was appointed to the Gene and Norman Hackerman Chair in Chemistry at Rice. He helped to found the Rice Quantum Institute in 1979, serving as its chairman from 1986 to 1996. In 1990, he also became a professor in the Department of Physics, and the same year he helped to found the Center for Nanoscale Science and Technology, of which he was appointed director in 1996.
He became a member of the National Academy of Sciences in 1990, and the American Academy of Arts and Sciences in 1991.
Smalley's research in physical chemistry investigated the formation of inorganic and semiconductor clusters using pulsed molecular beams and time-of-flight mass spectrometry. As a consequence of this expertise, Robert Curl introduced him to Harry Kroto in order to investigate a question about the constituents of astronomical dust, carbon-rich grains expelled by old stars such as R Coronae Borealis. The result of this collaboration was the discovery of C60 (known as buckyballs) and the fullerenes as the third allotropic form of carbon.
The research that earned Kroto, Smalley and Curl the Nobel Prize mostly comprised three articles. First was the discovery of C60 in the Nov. 14, 1985, issue of "Nature", "C60: Buckminsterfullerene". The second article detailed the discovery of the endohedral fullerenes in "Lanthanum Complexes of Spheroidal Carbon Shells" in the "Journal of the American Chemical Society" (1985). The third announced the discovery of the fullerenes in "Reactivity of Large Carbon Clusters: Spheroidal Carbon Shells and Their Possible Relevance to the Formation and Morphology of Soot" in the "Journal of Physical Chemistry" (1986).
Although only three people can be cited for a Nobel Prize, graduate students James R. Heath, Yuan Liu, and Sean C. O'Brien participated in the work. Smalley mentioned Heath and O'Brien in his Nobel Lecture. Heath went on to become a professor at California Institute of Technology (Caltech) and O'Brien joined Texas Instruments and is now at MEMtronics. Yuan Liu is a Senior Staff Scientist at Oak Ridge National Laboratory.
This research is significant for the discovery of a new allotrope of carbon known as a fullerene; other allotropes of carbon include graphite, diamond, and graphene. Harry Kroto's 1985 paper, "C60: Buckminsterfullerene", published with colleagues J. R. Heath, S. C. O'Brien, R. F. Curl, and R. E. Smalley, was honored by a Citation for Chemical Breakthrough Award from the Division of History of Chemistry of the American Chemical Society, presented to Rice University in 2015. The discovery of fullerenes was recognized in 2010 by the designation of a National Historic Chemical Landmark by the American Chemical Society at the Richard E. Smalley Institute for Nanoscale Science and Technology at Rice University in Houston, Texas.
Following nearly a decade's worth of research into the formation of alternate fullerene compounds (e.g. C28, C70), as well as the synthesis of endohedral metallofullerenes (M@C60), reports of the identification of carbon nanotube structures led Smalley to begin investigating their iron-catalyzed synthesis.
As a consequence of this research, Smalley was able to persuade the administration of Rice University, under then-president Malcolm Gillis, to create Rice's Center for Nanoscale Science and Technology (CNST), focusing on all aspects of molecular nanotechnology. It was renamed the Richard E. Smalley Institute for Nanoscale Science and Technology after Smalley's death in 2005, and merged with the Rice Quantum Institute to become the Smalley-Curl Institute (SCI) in 2015.
Smalley's latest research focused on carbon nanotubes, specifically the chemical synthesis side of nanotube research. He is well known for his group's invention of the high-pressure carbon monoxide (HiPco) method of producing large batches of high-quality nanotubes. Smalley spun his work off into a company, Carbon Nanotechnologies Inc., and related nanotechnology ventures.
He was an outspoken skeptic of the idea of molecular assemblers, as advocated by K. Eric Drexler. His main scientific objections, which he termed the "fat fingers problem" and the "sticky fingers problem", argued against the feasibility of molecular assemblers being able to precisely select and place individual atoms. He also believed that Drexler's speculations about apocalyptic dangers of molecular assemblers threatened the public support for development of nanotechnology. He debated Drexler in an exchange of letters which were published in "Chemical & Engineering News" as a point-counterpoint feature.
Starting in the late 1990s, Smalley advocated for the need for cheap, clean energy, which he described as the number one problem facing humanity in the 21st century. He described what he called "The Terawatt Challenge", the need to develop a new power source capable of increasing "our energy output by a minimum factor of two, the generally agreed-upon number, certainly by the middle of the century, but preferably well before that."
He also presented a list entitled "Top Ten Problems of Humanity for Next 50 Years", ordered by priority, which invites comparison with the Ten Threats formulated by the U.N.'s High Level Threat Panel in 2004.
Smalley regarded several problems as interlinked: the lack of people entering the fields of science and engineering, the need for an alternative to fossil fuels, and the need to address global warming. He felt that improved science education was essential, and strove to encourage young students to consider careers in science. His slogan for this effort was "Be a scientist, save the world."
Smalley was a leading advocate of the National Nanotechnology Initiative in 2003. Suffering from hair loss and weakness as a result of his chemotherapy treatments, Smalley testified before Congress, arguing for the potential benefits of nanotechnology in the development of targeted cancer therapies. Bill 189, the 21st Century Nanotechnology Research and Development Act, was introduced in the Senate on January 16, 2003, by Senator Ron Wyden, passed the Senate on November 18, 2003, and passed the House of Representatives the next day with a 405–19 vote. President George W. Bush signed the act into law on December 3, 2003, as Public Law 108-153. Smalley was invited to attend.
Smalley was married four times, to Judith Grace Sampieri (1968-1978), Mary L. Chapieski (1980-1994), JoNell M. Chauvin (1997-1998) and Deborah Sheffield (2005), and had two sons, Chad Richard Smalley (born June 8, 1969) and Preston Reed Smalley (born August 8, 1997).
In 1999, Smalley was diagnosed with cancer. Smalley died of leukemia, variously reported as non-Hodgkin's lymphoma and chronic lymphocytic leukemia, on October 28, 2005, at M.D. Anderson Cancer Center in Houston, Texas, at the age of 62.
Upon Smalley's death, the US Senate passed a resolution to honor Smalley, crediting him as the "Father of Nanotechnology."
Smalley, who had taken classes in religion as well as science at Hope College, rediscovered his Christian foundation in later life, particularly during his final years while battling cancer. During the final year of his life, Smalley wrote: "Although I suspect I will never fully understand, I now think the answer is very simple: it's true. God did create the universe about 13.7 billion years ago, and of necessity has involved Himself with His creation ever since."
At Tuskegee University's 79th Annual Scholarship Convocation/Parents' Recognition Program, he was quoted making the following statement regarding evolution while urging his audience to take seriously their role as the higher species on this planet: "'Genesis' was right, and there was a creation, and that Creator is still involved ... We are the only species that can destroy the Earth or take care of it and nurture all that live on this very special planet. I'm urging you to look on these things. For whatever reason, this planet was built specifically for us. Working on this planet is an absolute moral code. ... Let's go out and do what we were put on Earth to do." Old Earth creationist and astronomer Hugh Ross spoke at Smalley's funeral on November 2, 2005.
Robert Curl
Robert Floyd Curl Jr. (born August 23, 1933) is a University Professor Emeritus, Pitzer–Schlumberger Professor of Natural Sciences Emeritus, and Professor of Chemistry Emeritus at Rice University. He was awarded the Nobel Prize in Chemistry in 1996 for the discovery of the nanomaterial buckminsterfullerene, along with Richard Smalley (also of Rice University) and Harold Kroto of the University of Sussex.
Born in Alice, Texas, United States, Curl was the son of a Methodist minister. Due to his father's missionary work, his family moved several times within southern and southwestern Texas, and the elder Curl was involved in starting the San Antonio Medical Center's Methodist Hospital. Curl attributes his interest in chemistry to a chemistry set he received as a nine-year-old, recalling that he ruined the finish on his mother's porcelain stove when nitric acid boiled over onto it. He is a graduate of Thomas Jefferson High School in San Antonio, Texas. His high school offered only one year of chemistry instruction, but in his senior year his chemistry teacher gave him special projects to work on.
Curl received a bachelor of science from Rice Institute (now Rice University) in 1954. He was attracted to the reputation of both the school's academics and football team, and the fact that at the time it charged no tuition. He earned his doctorate in chemistry from the University of California, Berkeley, in 1957. At Berkeley, he worked in the laboratory of Kenneth Pitzer, then dean of the College of Chemistry, with whom he would become a lifelong collaborator. Curl's graduate research involved performing infrared spectroscopy to determine the bond angle of disiloxane.
Curl was a postdoctoral fellow at Harvard University with E. B. Wilson, where he used microwave spectroscopy to study the bond rotation barriers of molecules. After that, he joined the faculty of Rice University in 1958. He inherited the equipment and graduate students of George Bird, a professor who was leaving for a job at Polaroid. Curl's early research involved the microwave spectroscopy of chlorine dioxide. His research program included both experiment and theory, mainly focused on detection and analysis of free radicals using microwave spectroscopy and tunable lasers. He used these observations to develop the theory of their fine structure and hyperfine structure, as well as information about their structure and the kinetics of their reactions.
Curl's research at Rice involved the fields of infrared and microwave spectroscopy. Curl's research inspired Richard Smalley to come to Rice in 1976 with the intention of collaborating with Curl. In 1985, Curl was contacted by Harold Kroto, who wanted to use a laser beam apparatus built by Smalley to simulate and study the formation of carbon chains in red giant stars. Smalley and Curl had previously used this apparatus to study semiconductors such as silicon and germanium. They were initially reluctant to interrupt their experiments on these semiconductor materials to use their apparatus for Kroto's experiments on carbon, but eventually gave in.
They indeed found the long carbon chains they were looking for, but also found an unexpected product that had 60 carbon atoms. Over the course of 11 days, the team studied and determined its structure and named it buckminsterfullerene after noting its similarity to the geodesic domes for which the architect Buckminster Fuller was known. This discovery was based solely on the single prominent peak in the mass spectrum, implying a chemically inert substance that was geometrically closed with no dangling bonds. Curl was responsible for determining the optimal conditions of the carbon vapor in the apparatus and examining the spectra. Curl noted that James R. Heath and Sean C. O'Brien deserve recognition for the work equal to that of Smalley and Kroto. The existence of this type of molecule had earlier been theorized by others, but Curl and his colleagues were at the time unaware of this. Later experiments confirmed their proposed structure, and the team moved on to synthesize endohedral fullerenes that had a metal atom inside the hollow carbon shell. The fullerenes, a class of molecules of which buckminsterfullerene was the first member discovered, are now considered to have potential applications in nanomaterials and molecular-scale electronics. Robert Curl's 1985 paper, "C60: Buckminsterfullerene", published with colleagues H. Kroto, J. R. Heath, S. C. O'Brien, and R. E. Smalley, was honored by a Citation for Chemical Breakthrough Award from the Division of History of Chemistry of the American Chemical Society, presented to Rice University in 2015. The discovery of fullerenes was recognized in 2010 by the designation of a National Historic Chemical Landmark by the American Chemical Society at the Richard E. Smalley Institute for Nanoscale Science and Technology at Rice University in Houston, Texas.
After winning the Nobel Prize in 1996, Curl took a quieter path than Smalley, who became an outspoken advocate of nanotechnology, and Kroto, who used his fame to further his interest in science education, saying, "After winning a Nobel, you can either become a scientific pontificator, or you can have some idea for a new science project and you can use your newfound notoriety to get the resources to do it. Or you can say, 'Well, I enjoy what I was doing, and I want to keep doing that.'"
Curl's later research interests involved physical chemistry, developing DNA genotyping and sequencing instrumentation, and creating photoacoustic sensors for trace gases using quantum cascade lasers. Within Rice's residential college system, he is known for having been the first master of Lovett College.
Curl retired in 2008 at the age of 74, becoming a University Professor Emeritus, Pitzer-Schlumberger Professor of Natural Sciences Emeritus, and Professor of Chemistry Emeritus at Rice University.
Curl married Jonel Whipple in 1955, with whom he had two children. He plays bridge every week with the Rice Bridge Brigade.