[SOURCE: https://en.wikipedia.org/wiki/Paul_(2011_film)] | [TOKENS: 3276]
Paul (2011 film) Paul is a 2011 British science fiction comedy road film directed by Greg Mottola and written by Simon Pegg and Nick Frost, who star alongside Jason Bateman, Kristen Wiig, Bill Hader, Blythe Danner, John Carroll Lynch, Sigourney Weaver, and Seth Rogen as the voice and motion capture of the title character. The film follows two science fiction geeks who come across an alien and help him escape from the Secret Service agents pursuing him so that he can return to his home planet. The film is a parody of other science-fiction films, especially those of Steven Spielberg, as well as of science fiction fandom in general. The idea was conceived by Pegg and Frost in 2003, during production on Shaun of the Dead. Principal photography primarily took place in the New Mexico desert and at the Albuquerque Convention Center, and wrapped in September 2009. Double Negative provided the animation for Paul. Paul had its world premiere at the Empire Leicester Square in London on 7 February 2011, and was theatrically released in the United Kingdom on 14 February by Universal Pictures. It received mixed reviews from critics and was a box-office success, grossing $98 million worldwide on a $40 million budget. Plot Best friends Graeme Willy and Clive Gollings are British comic book and sci-fi enthusiasts who travel to the United States to attend the annual San Diego Comic-Con. Clive is writing his own sci-fi book, which Graeme is illustrating. In addition to attending the convention, they embark on a road trip through the Southwestern U.S. to visit UFO sites. After an encounter with homophobic rednecks in a diner, they accidentally crash into the rednecks' truck and drive away. Later, on a remote desert highway at night, they see a car driving erratically and crashing. When they stop to offer assistance, the driver is revealed to be Paul, a grey alien. Graeme agrees to give him a ride, despite Clive fainting and wetting his pants upon seeing him. Later, Special Agent Zoil of the Secret Service arrives at the crash site and informs his unseen female superior, known as "the Big Guy," that he is closing in on Paul. She sends rookies Haggard and O'Reilly to assist. Clive remains paranoid about Paul's intentions, considering his stereotypical appearance evidence of a conspiracy. Paul explains that the government fed his image to the public to keep people from panicking should anyone encounter his race. Graeme, Clive, and Paul later camp at an RV park run by Christian fundamentalists: one-eyed Ruth Buggs and her father Moses. Clive and Paul argue, Clive citing Mac and Me. The next day, when Ruth sees Paul, she faints, so they take her with them. During an argument, Paul convinces Ruth to question her beliefs and cures her blind eye by transferring the condition onto himself and healing it immediately. Stopping at a bar, Ruth calls her father, but Zoil intercepts the call. She is accosted by the rednecks and a bar fight ensues; the group escapes when Paul terrifies the rednecks into fainting. Later, at another RV park, Ruth is questioned by Zoil, but plays dumb and escapes. Meanwhile, Haggard and O'Reilly have worked out the truth about Paul. They confront Zoil, who orders them to return to base, but they go behind his back and try to catch the alien on their own. The group soon arrives at the home of Tara, who rescued Paul when he crashed on Earth 60 years ago, a crash (shown in the opening scene) that accidentally killed her dog, from whom Paul takes his name.
As no one believed her story, she has spent her life as a pariah. Although angry at first, she forgives Paul and prepares to make tea for her visitors. Haggard, O'Reilly and Zoil arrive and surround the house. The group flees, but O'Reilly shoots at them, igniting gas from Tara's stove and destroying her house with him inside. Haggard pursues and catches up to the RV but loses control and drives off a cliff. Zoil reassures the Big Guy that he will have Paul within the hour, but the Big Guy, who has grown tired of waiting, orders a "military response". Paul, Graeme, Clive, Ruth and Tara arrive at Devils Tower National Monument, where they set off fireworks to signal Paul's mothership. A helicopter suddenly arrives with agents and the Big Guy. Zoil appears and initiates a stand-off, unexpectedly shooting the agents before being wounded. He is revealed to be Paul's friend, who has been aiding his escape under the guise of capturing him. During the fight, Tara knocks out the Big Guy. Moses arrives unexpectedly and fires at Paul, but hits Graeme instead. Paul once again uses his healing powers, reviving Graeme in spite of the danger to himself and causing Moses to believe Paul to be a messiah. Graeme and Ruth admit their feelings for each other and kiss, but the Big Guy regains consciousness and holds the group at gunpoint. Just as she is about to kill them, she is crushed by the landing transport ship. Paul says goodbye to his friends and offers Tara a chance to go with him, promising to give her a new life after ruining her childhood and accidentally killing her dog. The aliens go home as the remaining humans wave. Two years later, Graeme, Clive and Ruth are at another Comic-Con, where Graeme and Clive are promoting their new bestselling novel titled Paul. Cast In an interview for the DVD release of Paul, Pegg and Frost said they made the film to demonstrate their love for Steven Spielberg's films Close Encounters of the Third Kind and E.T. the Extra-Terrestrial, as well as their other favourite science-fiction films. After they mentioned the project to Spielberg, he suggested he might make a cameo appearance, and a scene was added featuring him as a voice on a speakerphone in 1980, discussing ideas with Paul for his soon-to-be box office hit E.T. the Extra-Terrestrial. According to Robert Kirkman, he, along with Invincible co-creator Cory Walker and Invincible artist Ryan Ottley, had a cameo in the film as the Big Guy's henchmen. Production The idea for Paul came from Simon Pegg and Nick Frost in 2003, while they were filming Shaun of the Dead. To research the script, Pegg and Frost took their own road trip across America and worked ideas from it into the story. According to Greg Mottola, the film was given the green light shortly before the onset of the Great Recession; if it had been delayed, "they probably wouldn't have made the movie." The budget for the film was around US$40 million. Principal photography, including 50 days in the New Mexico desert, wrapped on 9 September 2009, with additional scenes filmed in July 2010 at the Albuquerque Convention Center, which was dressed to look like the 2010 San Diego Comic-Con. During filming, Joe Lo Truglio was a stand-in for the character Paul, who was created with CGI, although Seth Rogen, the voice of Paul, did some motion capture in pre- and post-production.
The animation for Paul was handled by Double Negative, who also aided in the production of other visual effects work, including "Paul's invisibility, the mind-meld sequences, the digital bird and a multitude of greenscreen driving shots." The cover art for the fictional comic book Encounter Briefs was drawn by alternative comics artist Daniel Clowes. Release A teaser trailer for the film was released on 18 October 2010. The film had its world premiere in London on 7 February 2011, and was released the following week in the United Kingdom, on 14 February, by Universal Pictures. It was later released in the United States on 18 March. In the United Kingdom, Paul received a 15 rating from the British Board of Film Classification, whereas in the United States it was rated R by the Motion Picture Association of America for "language including sexual references, and some drug use." The film was released on DVD and Blu-ray in the United Kingdom on 13 June 2011 and in North America on 9 August 2011. Three versions of the film were made. The DVD release features an audio commentary with director Greg Mottola, stars Simon Pegg, Nick Frost, and Bill Hader, and producer Nira Park; two featurettes; "Simon's Silly Faces"; photo galleries; storyboards and posters; and a blooper reel. The United States Blu-ray release features all the DVD supplements along with nine more featurettes and a digital copy. The film was later released in 4K by Kino Lorber on 18 November 2025. Reception Paul grossed $37.4 million in the United States and Canada, and $63.6 million in other territories, for a worldwide total of $98 million. In North America, Paul opened on March 18, 2011, alongside Limitless and The Lincoln Lawyer. It debuted to $13 million, finishing fifth at the box office. On review aggregator Rotten Tomatoes, the film holds an approval rating of 70% based on 205 reviews. The website's critical consensus reads, "It doesn't measure up to Pegg and Frost's best work, but Paul is an amiably entertaining — albeit uneven — road trip comedy with an intergalactic twist." On Metacritic, the film received a score of 57 based on 37 reviews, indicating "mixed or average reviews". Audiences polled by CinemaScore gave the film an average grade of "B+" on an A+ to F scale. Empire rated the film "excellent" (four stars out of five), stating, "Broader and more accessible than either Shaun of the Dead or Hot Fuzz, Paul is pure Pegg and Frost — clever, cheeky, and very, very funny. You'll never look at E.T. in the same way again." SFX also gave the film four stars out of five, saying, "the film veers dangerously close to alienating (no pun intended) all but its geek core audience, [though] the more obvious concessions to a mainstream crowd [are] never enough to derail the film's laugh-a-minute ride"; SFX also called it a "triumph of visual effects, convincing characterisation and bad taste humour." Peter Bradshaw gave the film two stars out of five and called it a "goofy, amiable piece of silliness" exhibiting "self-indulgence" and possessing a "distinct shortage of real gags". On the same scale, Nigel Andrews gave the film only one star, calling it a "faltering extraterrestrial knockabout".
The Independent graded the film two stars out of five, saying, "Pegg is likeable as usual, Frost more doltish than usual, and Kristen Wiig an appealing convert from Bible thumper to ladette", and noted that "from time to time, clever ideas rear their heads – like the idea that 'Paul' has been the brains behind all science-fiction and UFO initiatives for the last 30 years, including Close Encounters and The X-Files – but they soon return to the film's default setting of laddish japes and a conviction that the word 'cocksucker' will always get a laugh." IGN published three reviews of Paul. The first gave the film three stars, stating, "Simon Pegg and Nick Frost send up everything from Star Wars to E.T. in this sci-fi comedy ... As with Pegg and Frost's previous films together, it's derivative stuff, the plot similar to countless sci-fi flicks of the past; paying homage to the good and gently ribbing the bad." Less enthusiastic was its review of the British Blu-ray version, which said, "But unlike previous Pegg and Frost collaborations – Shaun of the Dead and Hot Fuzz – Paul does not generously reward repeat viewing. That's not to say it's a bad film at all; it has a strong central premise, which carries much of the film, loveable central characters, the odd neat idea (it turns out that Paul inspired all major works of SF post-1950, from Close Encounters to The X-Files, and has a direct line to Steven Spielberg), and a couple of genuine laughs, but it never feels more than a rough sketch of a bigger, much funnier movie." In a third review, of the American Blu-ray version, IGN compared the movie with Galaxy Quest and wrote that it is "richly layered with clever homage, a refreshingly original alien hero, delightfully entertaining characters and great performances from our leads and their supporting players." Upon its release in the United States, Roger Ebert gave Paul a mixed review of two and a half stars out of four, saying it is a "movie that teeters on the edge of being really pretty good and loses its way. I'm not sure quite what goes wrong, but you can see that it might have gone right." Manohla Dargis of The New York Times wrote: "As genial, foolish and demographically engineered as it sounds (hailing all fan boys and girls), Paul is at once a buddy flick and a classic American road movie of self (and other) discovery, interspersed with buckets of expletives and some startling (especially for a big-studio release) pokes at Christian fundamentalism ... The movie has its attractions, notably Mr. Pegg and Mr. Frost (and of course Mr. Bateman), whose ductile, (noncomputer) animated and open faces were made for comedy ... Paul proves the weak link. One problem is that Mr. Rogen, however comically inclined, has become overexposed, and there's just something too familiar and predictable about this voice coming out of that body. Yet while Paul seems great conceptually, he's not particularly interesting or surprising, despite a funny recap of what he's been doing on his time on Earth. With his vibe and vocabulary, shorts and weed, juvenilia and sentimentality, Paul turns out to be not much different from a lot of guys who have wreaked comedy havoc on American screens lately, even if this one only wants to beam up, not knock up." At the 2011 National Movie Awards, Paul was nominated for Performance of the Year (Frost and Pegg) and won Best Comedy; it was also nominated for Best Comedy at the 2011 St. Louis Gateway Film Critics Association Awards.
The film was nominated in two categories at the 2011 Golden Trailer Awards: Best Comedy for the trailer "Trailer" (Workshop Creative) and Best Comedy TV Spot for "Dessert" (The Ant Farm). Character animators David Lowry and Mike Hull were nominated for Outstanding Achievement for Character Animation in a Live Action Production at the 39th Annie Awards. Paul's character design was nominated for Outstanding Animated Character in a Live Action Feature Motion Picture at the 10th Visual Effects Society Awards. Soundtrack Paul: Music from the Original Motion Picture was released on 21 February 2011 by Universal Music. It intersperses David Arnold's score with the rock songs appearing in the film. Future Pegg has stated that he would like to make a sequel to Paul, titled Pauls, but that the time and expense involved make it unlikely to happen unless costs decrease. On August 13, 2021, during a live stream on Instagram, Pegg stated that there was "no chance" of a sequel.
========================================
[SOURCE: https://en.wikipedia.org/wiki/University_of_California,_Santa_Barbara] | [TOKENS: 5806]
University of California, Santa Barbara The University of California, Santa Barbara (UC Santa Barbara or UCSB) is a public land-grant research university in Santa Barbara County, California, United States. Tracing its roots back to 1891 as an independent teachers college, UC Santa Barbara joined the University of California system in 1944. It is the third-oldest campus in the system, after UC Berkeley and UCLA. UCSB's campus sits on the oceanfront site of a converted WWII-era Marine Corps air station. UCSB is organized into three undergraduate colleges (Letters and Science, Engineering, and Creative Studies) and two graduate schools (Education and Environment), offering more than 200 degrees and programs. It is classified among "R1: Doctoral Universities – Very high research activity" and is regarded as a Public Ivy. The university has 12 national research centers and institutes, including the Kavli Institute for Theoretical Physics and the NSF Quantum Foundry. According to the National Science Foundation, UC Santa Barbara spent $305.48 million on research and development in fiscal year 2023, ranking it 105th in the nation. UCSB was the No. 3 host on the ARPAnet and was elected to the Association of American Universities in 1995. UCSB alumni, faculty, and researchers have included 11 Nobel Prize laureates, founders of more than 90 companies, 1 Fields Medalist, 50 members of the National Academy of Sciences, 34 members of the National Academy of Engineering, and 56 members of the American Academy of Arts and Sciences. The faculty also includes two Academy and Emmy Award winners and recipients of a Millennium Technology Prize, an IEEE Medal of Honor, a National Medal of Technology and Innovation, and a Breakthrough Prize in Fundamental Physics. History UCSB traces its origins back to the Anna Blake School, which was founded in 1891 and offered training in home economics and industrial arts. The Anna Blake School was taken over by the state in 1909 and became the Santa Barbara State Normal School, which then became Santa Barbara State College in 1921. In 1944, intense lobbying by an interest group in the City of Santa Barbara led by Thomas Storke and Pearl Chase persuaded the State Legislature, Gov. Earl Warren, and the Regents of the University of California to move the State College over to the more research-oriented University of California system. The State College system sued to stop the takeover, but the governor did not support the suit. A state constitutional amendment was passed in 1946 to stop subsequent conversions of State Colleges to University of California campuses. From 1944 to 1958, the school was known as Santa Barbara College of the University of California, before taking on its current name. When the vacated Marine Corps training station in Goleta was purchased for the rapidly growing college, Santa Barbara City College moved into the vacated State College buildings. Originally, the regents envisioned a small, several-thousand-student liberal arts college, a so-called "Williams College of the West", at Santa Barbara. Chronologically, UCSB is the third general-education campus of the University of California, after Berkeley and UCLA (the only other state campus to have been acquired by the UC system). The original campus the regents acquired in Santa Barbara was located on only 100 acres (40 ha) of largely unusable land on a seaside mesa.
The availability of a 400-acre (160 ha) portion of the land used as Marine Corps Air Station Santa Barbara until 1946, on another seaside mesa in Goleta, which the regents could acquire for free from the federal government, led to that site becoming the Santa Barbara campus in 1949. Originally, only 3000–3500 students were anticipated, but the post-WWII baby boom led to the designation of a general campus in 1958, along with a name change from "Santa Barbara College" to "University of California, Santa Barbara," and the discontinuation of the industrial arts program for which the state college was famous. A chancellor, Samuel B. Gould, was appointed in 1959. In 1959, UCSB professor Douwe Stuurman hosted the English writer Aldous Huxley as the university's first visiting professor. Huxley delivered a lecture series called "The Human Situation". In the late 1960s and early 1970s, UCSB became nationally known as one of the main hotbeds of anti–Vietnam War activism. A bombing at the school's faculty club in 1969 killed the caretaker, Dover Sharp. In the spring of 1970, multiple instances of arson occurred, including the burning of the Bank of America branch building in the student community of Isla Vista, during which time one male student, Kevin Moran, was shot and killed by police. UCSB's anti-Vietnam activity impelled then-Gov. Ronald Reagan to impose a curfew and order the National Guard to enforce it. Armed guardsmen were common on campus and in Isla Vista during this time. In 1968, twelve black students occupied North Hall — temporarily renaming it Malcolm X Hall — to force Chancellor Vernon Cheadle and the administration to acknowledge the marginalization of black students. The university answered the demands of the group by creating the Department of Black Studies. In 1995, UCSB was elected to the Association of American Universities, an organization of leading research universities with a membership consisting of 59 universities in the United States (both public and private) and two universities in Canada. On May 23, 2014, a killing spree occurred in Isla Vista, California, a community near the campus. All six people killed during the rampage were students at UCSB. The murderer was a former Santa Barbara City College student who lived in Isla Vista. In 2009, Professor William I. Robinson became the subject of a formal inquiry after circulating course-related material comparing Israeli military actions to Nazi persecution, a controversy that highlighted tensions between academic freedom and the imperative to avoid content that Jewish students found intimidating. Even though the faculty code process eventually dismissed the charges, the episode raised questions about how Jewish concerns are handled within campus governance and highlighted ambiguities in procedural responses to allegations of antisemitism. More recently, the same Multicultural Center where Professor Robinson still teaches was the backdrop for another antisemitic incident. Though the signage was not attributed to any specific individual or entity, one of the posters at the gathering was signed by the Jackson Social Justice Legacy Scholarship (Jackson SJ) interns and MCC faculty and student staff. Santa Barbara State College was under the supervision of a president. In 1944, the college became affiliated with the University of California, and its name was changed to Santa Barbara College of the University of California. The title of the campus leader was changed to Provost.
In September 1958, the Regents of the University of California established Santa Barbara as a full campus of the University of California, and the school was renamed the University of California, Santa Barbara. The official title of the campus leader was changed to Chancellor. Henry T. Yang served as the fifth chancellor of the University of California, Santa Barbara from June 23, 1994, to July 14, 2025. With more than 31 years in office, he is the longest-serving chancellor in University of California history. After leaving the chancellor's office, Yang continues to serve as a professor of mechanical engineering at the UC Santa Barbara College of Engineering. David Marshall, then the executive vice chancellor and provost of UC Santa Barbara, began serving as interim chancellor on July 15, 2025. On July 17, 2025, the UC Board of Regents announced that Dennis Assanis would assume the role of UC Santa Barbara's sixth chancellor on September 1, 2025. Campus UCSB is located on cliffs directly above the Pacific Ocean. UCSB's campus is completely autonomous from local government: it has not been annexed by the city of Santa Barbara and thus is not part of the city. While the campus appears closer to the recently formed city of Goleta, a parcel of the City of Santa Barbara, which forms a strip of "city" through the ocean to the Santa Barbara airport, runs through the east entrance to the university campus. Although UCSB has a Santa Barbara mailing address, as do other unincorporated areas around the city, only this entry parcel is within the Santa Barbara city limits. The campus is divided into four parts: the Main (East) Campus of 708 acres (287 ha), which houses all academic units plus the majority of undergraduate housing; Storke Campus; West Campus; and North Campus. The campuses surround the unincorporated community of Isla Vista. UCSB is one of the few universities in the United States with its own beach. The campus, bordered on two sides by the Pacific Ocean, has miles of coastline, its own lagoon, and a rocky extension, Goleta Point, also known as "Campus Point". The campus has numerous walking and bicycle paths across campus, around the lagoon, and along the beach. It also owns and manages the Coal Oil Point nature preserve on the West Campus. Much of the campus's early architecture was designed by famed architect William Pereira and his partner Charles Luckman and made heavy use of custom-tinted and patterned concrete blocks. This design element was carried over into many of the school's subsequent buildings. The UCSB Libraries, consisting of the Davidson Library and the Arts Library, hold more than three million bound volumes and millions of microforms, government documents, manuscripts, maps, satellite and aerial images, sound recordings, and other materials. Situated at the center of campus, the Davidson Library broke ground in June 2013 on a significant addition and renovation project, which was completed in November 2015 and re-opened to the public in January 2016. Campbell Hall is the university's largest lecture hall, with 862 seats. It is also the main venue for the UCSB Arts & Lectures series, which presents special performances, films, and lectures for the UCSB campus and Santa Barbara community. Storke Tower, completed in 1969, is the tallest steel/cement structure in Santa Barbara County. It can be seen from most places on campus, and it overlooks Storke Plaza. It is home to a five-octave, 61-bell carillon.
KCSB 91.9 and the Daily Nexus have headquarters beneath Storke Tower. The UCSB Family Vacation Center, founded in 1969, is a summer family camp located on campus that draws over 2,000 guests each summer. The staff of over 50 includes many UCSB students who have been extensively trained as camp counselors. UCSB is known for its extensive biking system; one survey found that 53% of UCSB students get around by bicycle. Academics UC Santa Barbara is a large, comprehensive, primarily residential doctoral university. The full-time, four-year undergraduate program comprises the majority of enrollments and has a liberal arts and sciences focus with high graduate coexistence. UCSB is organized into five colleges and schools offering 87 undergraduate degrees and 55 graduate degrees. The campus is the sixth-largest in the UC system by enrollment, with 18,620 undergraduate and 3,065 graduate students. In 2015, UCSB was designated a Hispanic-Serving Institution. Admission to UC Santa Barbara is rated as "most selective" by U.S. News & World Report. UC Santa Barbara no longer uses SAT or ACT scores in admission decisions or for scholarships. UC Santa Barbara had an acceptance rate of 33.0% for the 2024 incoming freshman class: 110,266 applied, 36,347 were admitted, and 5,008 enrolled. The average high school GPA was 4.3. According to the UCSB Office of Research, UC Santa Barbara budgeted $235.3 million for research and development in fiscal 2020, with the National Science Foundation contributing $60.5 million; the Department of Defense, $40 million; the UC General Fund, $28 million; industry, $19.5 million; the National Institutes of Health, $17 million; the Department of Energy, $9 million; non-profits, $8.7 million; and other sources, $20 million. Corporate research partners in the College of Engineering include military contractors Raytheon Vision Systems, Lockheed Martin, and Northrop Grumman. From 2005 to 2009, UCSB was ranked fourth in terms of relative citation impact in the U.S. (behind MIT, Caltech, and Princeton University) according to Thomson Reuters. UCSB hosts 12 national research centers, including the Kavli Institute for Theoretical Physics, the National Center for Ecological Analysis and Synthesis, the Southern California Earthquake Center, the UCSB Center for Spatial Studies, an affiliate of the National Center for Geographic Information and Analysis, and the California NanoSystems Institute. Eight of these centers are supported by the National Science Foundation. UCSB is also home to Microsoft Station Q, a research group working on topological quantum computing directed by American mathematician and Fields Medalist Michael Freedman. The focus of the University of California is on research. Like all University of California campuses, UCSB prioritizes academic development over vocational learning. Undergraduate teaching is centered on lectures, with larger lecture classes divided into sections. Sections may be tutorial style, or they may be set up as seminars or discussions. For undergraduates, UCSB confers both B.A. and B.S. degrees. Music majors may pursue a Bachelor of Music degree. Graduate teaching involves seminar-style classes and an emphasis on research and further study. UCSB confers M.A., M.S., and Ph.D. degrees. Those studying music may pursue an MM or DMA degree. Students pursuing a career in education may receive an MEd or EdD degree. The university granted 5,812 bachelor's, 578 master's, and 354 Ph.D. degrees in 2010–2011. UCSB is considered to be a "Public Ivy". The 2022 edition of U.S.
News & World Report ranked UC Santa Barbara as the 7th best public university and tied for the 32nd best university in the United States. Money magazine ranked UC Santa Barbara 30th in the U.S. out of the 744 schools it evaluated for its 2019 Best Colleges ranking. In 2019, Kiplinger ranked UCSB 30th out of 174 best-value public colleges and universities in the nation, and fifth in California. UC Santa Barbara was ranked 32nd in the United States out of 1,380 colleges and universities by Payscale and CollegeNet's 2018 Social Mobility Index rankings. The Times Higher Education World University Rankings ranked UCSB 48th worldwide for 2016–17, while the Academic Ranking of World Universities (ARWU) in 2016 ranked UCSB 42nd in the world and 28th in the nation, and in 2015 tied it for 17th worldwide in engineering. Washington Monthly named UCSB the 20th best national university in 2020, based on its contribution to the public good as measured by social mobility, research, and promoting public service. U.S. News & World Report's 2016 rankings placed UCSB's graduate programs in materials engineering and chemical engineering second and ninth best in the U.S., respectively; its graduate physics program was ranked 10th best, including the fifth-best program for condensed matter physics, seventh-best for quantum physics, seventh-best for elementary particles/field/string theory, and eighth-best for cosmology/relativity/gravity. In the social sciences, UCSB's graduate program in sociology is ranked first for research in sex and gender, and the history department is ranked seventh for women's history. In 2015, QS World University Rankings ranked UCSB 129th in the world. Forbes magazine ranked the university 24th in the nation (and 5th best public university) in 2024; this ranking focuses mainly on net positive financial impact, in contrast to other rankings, and generally ranks liberal arts colleges above most research universities. In PayScale's 2015–16 College Salary Report (ranking universities in terms of graduates' salary potential), UCSB came in first in computer science, seventh in engineering, 14th in humanities, and 30th in social sciences. UCSB was ranked third in The Princeton Review's 2015 list of top party schools. In the Anti-Defamation League's Campus Antisemitism Report Card, UCSB was assigned one of the lowest grades among the 135 institutions assessed, reflecting serious deficiencies in administrative actions, campus conduct and climate, and support for Jewish student life. Organization Santa Barbara is one of the ten major campuses of the University of California. The University of California is governed by a 26-member board of regents: 18 are appointed by the Governor of California to 12-year terms, seven serve as ex officio members, and one is a student regent. The position of chancellor was created in 1952 to lead individual campuses. The Board of Regents appointed Henry T. Yang to be the fifth chancellor of the university in 1994. UC Santa Barbara has three colleges: the College of Letters & Science, the College of Engineering, and the College of Creative Studies. The College of Creative Studies offers students an alternative approach to education by supporting advanced, independent work in the arts, mathematics, and sciences. The campus also has two professional schools: the Bren School of Environmental Science & Management, located in Bren Hall, and the Gevirtz Graduate School of Education.
Founded in 1973, the Institute for Social, Behavioral, and Economic Research (ISBER), originally the Community and Organization Research Institute (CORI), is the research unit for work in the social sciences. In 1990, it absorbed the Social Process Research Institute (SPRI), and its work now includes the humanities. In 2008, the Institute for Energy Efficiency was founded as a cross-disciplinary institute to integrate the many diverse research projects in energy efficiency and provide a focus for work in this area. Student activities and traditions UCSB is a politically active campus. For the 2008 presidential election, UCSB won a national college competition for student voter registration by registering 10,857 voters, or 51.5% of the student population. Over the years, many political parties and organizations have been active on campus, such as the College Republicans, Campus Democrats, Green Party, Libertarians, NORML, Young Democratic Socialists of America, and Queer Student Union. There are a variety of on-campus centers that offer social, recreational, religious, and preprofessional activities for students. The UCSB Multicultural Center hosts numerous activities yearly to support students of color and promote awareness of diversity issues on campus. Other organizations and centers include The Daily Nexus, a daily newspaper; the school radio station, KCSB 91.9; The Bottom Line, a weekly newspaper; and The Gaucho Free Press, the campus's conservative magazine. There are eight residence halls at UCSB, seven of which are located at the main campus. One, Santa Catalina (formerly Francisco Torres Towers), is located near the entrance to West Campus, north of Isla Vista. The Main Campus residence halls are found in two different locations. On the east end of campus are the residence halls named after five of the Channel Islands: Santa Rosa, Santa Cruz, Anacapa, San Miguel, and San Nicolas. There are two dining commons located near the Channel Islands residence halls: the Ortega Dining Commons, between San Miguel and the University Center (UCen), and the De La Guerra Dining Commons, between Santa Rosa, Santa Cruz, and San Nicolas. The two other residence halls, San Rafael and Manzanita Village, are located on the west side of campus and primarily house continuing and transfer students. The Carrillo Dining Commons is located in Manzanita Village, right next to San Rafael Hall. Manzanita Village was completed in 2002 and is the newest residence hall on campus. In addition, the university has four housing complexes for graduate students (San Clemente Villages for single graduate students, and the Santa Ynez, El Dorado, and Westgate apartments) as well as family student housing at the West Campus Apartments and the Storke apartment complexes. There is also faculty housing at West Campus Point, and new construction is underway at the North Campus. The Sierra Madre Villages, located by the West Campus Apartments, was completed in September 2015 and was the first residential complex in the entire UC system to be certified LEED Platinum. UC Santa Barbara is the only campus in the UC system with any "LEED for Homes" certifications.
Billionaire Charles Munger had promised the university a $200 million donation on the condition that it build an 11-story dormitory, to be called Munger Hall, following his design, which assigned each of 4,536 residents a small individual room, 94% of them without natural light, in order to house more students and to encourage socialization in common areas. UCSB's acceptance of the proposal, presented in October 2021, led to the resignation of architect Dennis McFadden from the campus design review committee, followed by protests from students and others, including the American Institute of Architects. In October 2022, the plan was modified to eliminate two floors, reducing the capacity of the building to 3,500. Plans for the construction of the dormitory were canceled in August 2023. There are several academic resources offered by the university, including a writing center, open computer labs, a machine shop, a career and counseling center, and drop-in academic advising. The UCSB Recreation Center provides classes and facilities for students and faculty, including swimming pools, racquetball courts, a rock wall, and exercise machines. The University Center has facilities for meetings and presentations and also contains a bookstore, restaurants, and a cashier. UCSB has a health clinic where students with ailments or seeking medical assistance may consult a physician; the clinic offers basic healthcare and provides emergency medicine and contraceptives. The university is the only UC campus with its own paramedic rescue unit, staffed by full-time professional paramedics and part-time undergraduate EMTs. SexInfo, which was started in 1976 by professors John and Janice Baldwin, is run by students doing advanced course work and research on sexuality through UCSB's Sociology Department. The site is dedicated to providing accurate information about sexuality in a way that is both informative and personal. SexInfo answers questions sent in by readers from all over the world and regularly updates and posts articles on various topics related to human sexuality. This program helps students complete their degrees in sociology. Athletics The mascot of UCSB is the Gaucho, and the school colors are blue and gold. UCSB's sports teams compete in the Big West Conference, except for the men's water polo, men's and women's swimming, and men's volleyball teams, which are in the Mountain Pacific Sports Federation. Santa Barbara is best known for its men's swimming and men's soccer teams. In 2006, UCSB won its first NCAA men's soccer title and its second NCAA championship overall in school history (the first being the 1979 water polo title). While there are some 400 students in Intercollegiate Athletics (ICA), there are over 700 in club sports teams, including Alpine racing, cycling, fencing, field hockey, lacrosse, roller hockey, rugby, sailing, soccer, ice hockey, triathlon, ultimate frisbee, water skiing, and rowing. Many of these teams are highly regarded and compete against intercollegiate teams across the U.S. For example, rowing has produced several national team members, including nine-time National Rowing Team member Amy Fuller, winner of several Olympic and World Championship medals and currently head of the UCLA Rowing Program. The UCSB cycling team has also produced several national team members, Olympians, and members of numerous U.S. and international professional teams.
Hundreds of students participate in a large intramural program consisting of badminton, basketball, bowling, flag football, golf, floor hockey, indoor and outdoor soccer, racquetball, squash, running, softball, tennis, table tennis, ultimate frisbee, volleyball, inner-tube water polo, and kickball. Surfing also draws many students to UCSB. The on-campus beaches include several surfing sites, including "Poles", "Campus Point", "Depressions", "Sands", and "Devereaux Point" on West Campus. Because Campus Beach faces south and east and is shielded by the Santa Barbara Channel Islands, the surf is usually quite small. However, a large north or west swell can wrap in to create waves that are typically very clean and good for surfing. UCSB has a surf team that competes in National Scholastic Surfing Association competitions and is generally considered one of the best in the nation; the team upheld that reputation by winning a record 14th collegiate national title in the 2010 finals. People Current UCSB faculty have received several prestigious awards, including seven Nobel Prizes and a Fields Medal. In addition, there are 29 members of the National Academy of Sciences, 27 members of the National Academy of Engineering, and 31 members of the American Academy of Arts and Sciences on the faculty. UC Santa Barbara alumni have become notable in many varied fields, both academic and otherwise. Carol Greider, who won the Nobel Prize in Physiology or Medicine (2009), graduated from the College of Creative Studies with a B.A. in biology in 1983. Robert Ballard, an oceanographer who discovered the wreck of the RMS Titanic in 1985, graduated from UCSB in 1965 with a degree in chemistry and geology. Actors who have studied at UCSB include Academy Award winner Michael Douglas, who received a B.A. in drama in 1968 and is honorary president of the UCSB Alumni Association, and Gwyneth Paltrow, who studied anthropology before dropping out to act. Filmmakers who have studied at UCSB include Academy Award nominee Don Hertzfeldt, who received a B.A. in film studies in 1998; Gregg Araki, director of films such as Mysterious Skin and The Doom Generation, who received his B.A. from UCSB in 1982; Brad Silberling, director of films such as Moonlight Mile and Lemony Snicket's A Series of Unfortunate Events; Gavin Garrison, who received a B.A. in global studies in 2007 and now produces the Emmy-nominated television show Whale Wars; and Forrest Galante, wildlife biologist and star of Extinct or Alive on the Animal Planet network. Noah Harpster, a writer, actor, producer, and director best known for writing A Beautiful Day in the Neighborhood, Transparent, and Painkiller and for acting in One Mississippi and For All Mankind, received a B.F.A. in acting. Musicians who have attended include Robby Krieger, guitarist in The Doors; singer-songwriter Jack Johnson; Jeffrey Foskett, singer and guitarist for The Beach Boys; and electro-house musician Steve Aoki. Chairman of the Oracle Corporation Jeffrey O. Henley graduated with a B.A. in economics in 1966, while Knut Vollebæk, former foreign minister of Norway, graduated with a degree in political science in 1973. Athletes who have studied at UCSB include swimmer and four-time Olympic gold medalist Jason Lezak, NBA player and head coach Brian Shaw, and UCLA basketball coach Cori Close. Television journalist Katy Tur of NBC and MSNBC received a degree in 2005, and Elizabeth Wagmeister of Page Six TV and Variety graduated with a B.A. in communications in 2012.
Demographics The United States Census Bureau has designated the UC Santa Barbara campus as a separate census-designated place (CDP) for statistical purposes. It first appeared as a CDP in the 2020 United States census with a population of 9,710.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Joke#cite_note-FOOTNOTEGeorges1997111-85] | [TOKENS: 8460]
Joke A joke is a display of humour in which words are used within a specific and well-defined narrative structure to make people laugh and is usually not meant to be interpreted literally. It usually takes the form of a story, often with dialogue, and ends in a punch line, whereby the humorous element of the story is revealed; this can be done using a pun or other type of word play, irony or sarcasm, logical incompatibility, hyperbole, or other means. Linguist Robert Hetzron offers the definition: A joke is a short humorous piece of oral literature in which the funniness culminates in the final sentence, called the punchline… In fact, the main condition is that the tension should reach its highest level at the very end. No continuation relieving the tension should be added. As for its being "oral," it is true that jokes may appear printed, but when further transferred, there is no obligation to reproduce the text verbatim, as in the case of poetry. It is generally held that jokes benefit from brevity, containing no more detail than is needed to set the scene for the punchline at the end. In the case of riddle jokes or one-liners, the setting is implicitly understood, leaving only the dialogue and punchline to be verbalised. However, subverting these and other common guidelines can also be a source of humour—the shaggy dog story is an example of an anti-joke; although presented as a joke, it contains a long drawn-out narrative of time, place and character, rambles through many pointless inclusions and finally fails to deliver a punchline. Jokes are a form of humour, but not all humour is in the form of a joke. Some humorous forms which are not verbal jokes are: involuntary humour, situational humour, practical jokes, slapstick and anecdotes. Identified as one of the simple forms of oral literature by the Dutch linguist André Jolles, jokes are passed along anonymously. They are told in both private and public settings; a single person tells a joke to his friend in the natural flow of conversation, or a set of jokes is told to a group as part of scripted entertainment. Jokes are also passed along in written form or, more recently, through the internet. Stand-up comics, comedians and slapstick work with comic timing and rhythm in their performance, and may rely on actions as well as on the verbal punchline to evoke laughter. This distinction has been formulated in the popular saying "A comic says funny things; a comedian says things funny".[note 1] History in print Jokes do not belong to refined culture, but rather to the entertainment and leisure of all classes. As such, any printed versions were considered ephemera, i.e., temporary documents created for a specific purpose and intended to be thrown away. Many of these early jokes deal with scatological and sexual topics, entertaining to all social classes but not to be valued and saved. Various kinds of jokes have been identified in ancient pre-classical texts.[note 2] The oldest identified joke is an ancient Sumerian proverb from 1900 BC containing toilet humour: "Something which has never occurred since time immemorial; a young woman did not fart in her husband's lap." Its records were dated to the Old Babylonian period and the joke may go as far back as 2300 BC. The second oldest joke found, discovered on the Westcar Papyrus and believed to be about Sneferu, was from Ancient Egypt c. 1600 BC: "How do you entertain a bored pharaoh?
You sail a boatload of young women dressed only in fishing nets down the Nile and urge the pharaoh to go catch a fish." The tale of the three ox drivers from Adab completes the three known oldest jokes in the world. This is a comic triple dating back to 1200 BC, in Adab. It concerns three men seeking justice from a king on the matter of ownership over a newborn calf, for whose birth they all consider themselves to be partially responsible. The king seeks advice from a priestess on how to rule the case, and she suggests a series of events involving the men's households and wives. The final portion of the story (which included the punch line) has not survived intact, though legible fragments suggest it was bawdy in nature. Jokes can be notoriously difficult to translate from language to language, particularly puns, which depend on specific words and not just on their meanings. For instance, Julius Caesar once sold land at a surprisingly cheap price to his lover Servilia, who was rumoured to be prostituting her daughter Tertia to Caesar in order to keep his favour. Cicero remarked that "conparavit Servilia hunc fundum tertia deducta." The punny phrase "tertia deducta" can be translated as "with one-third off (in price)" or "with Tertia putting out." The earliest extant joke book is the Philogelos (Greek for The Laughter-Lover), a collection of 265 jokes written in crude ancient Greek dating to the fourth or fifth century AD. The author of the collection is obscure, and it has been attributed to a number of different authors, including "Hierokles and Philagros the grammatikos", just "Hierokles", or, in the Suda, "Philistion". British classicist Mary Beard states that the Philogelos may have been intended as a jokester's handbook of quips to say on the fly, rather than a book meant to be read straight through. Many of the jokes in this collection are surprisingly familiar, even though the typical protagonists are less recognisable to contemporary readers: the absent-minded professor, the eunuch, and people with hernias or bad breath. The Philogelos even contains a joke similar to Monty Python's "Dead Parrot Sketch". During the 15th century, the printing revolution spread across Europe following the development of the movable type printing press. This was coupled with the growth of literacy in all social classes. Printers turned out jestbooks along with Bibles to meet both lowbrow and highbrow interests of the populace. One early anthology of jokes was the Facetiae by the Italian Poggio Bracciolini, first published in 1470. The popularity of this jest book can be measured by the twenty editions documented for the 15th century alone. Another popular form was a collection of jests, jokes and funny situations attributed to a single character in a more connected, narrative form of the picaresque novel. Examples of this are the characters of Rabelais in France, Till Eulenspiegel in Germany, Lazarillo de Tormes in Spain and Master Skelton in England. There is also a jest book ascribed to William Shakespeare, the contents of which appear to both inform and borrow from his plays. All of these early jestbooks corroborate both the rise in literacy of the European populations and the general quest for leisure activities during the Renaissance in Europe. The practice of printers using jokes and cartoons as page fillers was also widely used in the broadsides and chapbooks of the 19th century and earlier.
With the increase in literacy in the general population and the growth of the printing industry, these publications were the most common forms of printed material between the 16th and 19th centuries throughout Europe and North America. Along with reports of events, executions, ballads and verse, they also contained jokes. One of the many broadsides archived in the Harvard library is described as "1706. Grinning made easy; or, Funny Dick's unrivalled collection of curious, comical, odd, droll, humorous, witty, whimsical, laughable, and eccentric jests, jokes, bulls, epigrams, &c. With many other descriptions of wit and humour." These cheap publications, ephemera intended for mass distribution, were read alone, read aloud, posted and discarded. There are many types of joke books in print today; a search on the internet provides a plethora of titles available for purchase. They can be read alone for solitary entertainment, or used to stock up on new jokes to entertain friends. Some people try to find a deeper meaning in jokes, as in "Plato and a Platypus Walk into a Bar... Understanding Philosophy Through Jokes".[note 3] However, a deeper meaning is not necessary to appreciate their inherent entertainment value. Magazines frequently use jokes and cartoons as filler for the printed page. Reader's Digest closes out many articles with an (unrelated) joke at the bottom of the article. The New Yorker was first published in 1925 with the stated goal of being a "sophisticated humour magazine" and is still known for its cartoons. Telling jokes Telling a joke is a cooperative effort; it requires that the teller and the audience mutually agree, in one form or another, to understand the narrative which follows as a joke. In a study of conversation analysis, the sociologist Harvey Sacks describes in detail the sequential organisation in the telling of a single joke. "This telling is composed, as for stories, of three serially ordered and adjacently placed types of sequences … the preface [framing], the telling, and the response sequences." Folklorists expand this to include the context of the joking: who is telling what jokes to whom, and why is he telling them when? The context of the joke-telling in turn leads into a study of joking relationships, a term coined by anthropologists to refer to social groups within a culture who engage in institutionalised banter and joking. Framing is done with a (frequently formulaic) expression which keys the audience in to expect a joke. "Have you heard the one…", "Reminds me of a joke I heard…", "So, a lawyer and a doctor…"; these conversational markers are just a few examples of linguistic frames used to start a joke. Regardless of the frame used, it creates a social space and clear boundaries around the narrative which follows. Audience response to this initial frame can be acknowledgement and anticipation of the joke to follow. It can also be a dismissal, as in "this is no joking matter" or "this is no time for jokes". The performance frame serves to label joke-telling as a culturally marked form of communication. Both the performer and audience understand it to be set apart from the "real" world.
"An elephant walks into a bar…"; a person sufficiently familiar with both the English language and the way jokes are told automatically understands that such a compressed and formulaic story, being told with no substantiating details, and placing an unlikely combination of characters into an unlikely setting and involving them in an unrealistic plot, is the start of a joke, and the story that follows is not meant to be taken at face value (i.e. it is non-bona-fide communication). The framing itself invokes a play mode; if the audience is unable or unwilling to move into play, then nothing will seem funny. Following its linguistic framing the joke, in the form of a story, can be told. It is not required to be verbatim text like other forms of oral literature such as riddles and proverbs. The teller can and does modify the text of the joke, depending both on memory and the present audience. The important characteristic is that the narrative is succinct, containing only those details which lead directly to an understanding and decoding of the punchline. This requires that it support the same (or similar) divergent scripts which are to be embodied in the punchline. The punchline is intended to make the audience laugh. A linguistic interpretation of this punchline/response is elucidated by Victor Raskin in his Script-based Semantic Theory of Humour. Humour is evoked when a trigger contained in the punchline causes the audience to abruptly shift its understanding of the story from the primary (or more obvious) interpretation to a secondary, opposing interpretation. "The punchline is the pivot on which the joke text turns as it signals the shift between the [semantic] scripts necessary to interpret [re-interpret] the joke text." To produce the humour in the verbal joke, the two interpretations (i.e. scripts) need to both be compatible with the joke text and opposite or incompatible with each other. Thomas R. Shultz, a psychologist, independently expands Raskin's linguistic theory to include "two stages of incongruity: perception and resolution." He explains that "… incongruity alone is insufficient to account for the structure of humour. […] Within this framework, humour appreciation is conceptualized as a biphasic sequence involving first the discovery of incongruity followed by a resolution of the incongruity." In the case of a joke, that resolution generates laughter. This is the point at which the field of neurolinguistics offers some insight into the cognitive processing involved in this abrupt laughter at the punchline. Studies by the cognitive science researchers Coulson and Kutas directly address the theory of script switching articulated by Raskin in their work. The article "Getting it: Human event-related brain response to jokes in good and poor comprehenders" measures brain activity in response to reading jokes. Additional studies by others in the field support more generally the theory of two-stage processing of humour, as evidenced in the longer processing time they require. In the related field of neuroscience, it has been shown that the expression of laughter is caused by two partially independent neuronal pathways: an "involuntary" or "emotionally driven" system and a "voluntary" system. 
The neuroscience study described above adds credence to the common experience when exposed to an off-colour joke: a laugh is followed in the next breath by a disclaimer, "Oh, that's bad…" Here the multiple steps in cognition are clearly evident in the stepped response, the perception being processed just a breath faster than the resolution of the moral/ethical content of the joke. The expected response to a joke is laughter. The joke teller hopes the audience "gets it" and is entertained. This leads to the premise that a joke is actually an "understanding test" between individuals and groups. If the listeners do not get the joke, they are not understanding the two scripts which are contained in the narrative as they were intended. Or they do "get it" and do not laugh; it might be too obscene, too gross or too dumb for the current audience. A woman might respond differently to a joke told by a male colleague around the water cooler than she would to the same joke overheard in a women's lavatory. A joke involving toilet humour may be funnier told on the playground at elementary school than on a college campus. The same joke will elicit different responses in different settings; the punchline remains the same, but it is more or less appropriate depending on the current context. The context explores the specific social situation in which joking occurs. The narrator automatically modifies the text of the joke to be acceptable to different audiences, while at the same time supporting the same divergent scripts in the punchline. The vocabulary used in telling the same joke at a university fraternity party and to one's grandmother might well vary. In each situation, it is important to identify both the narrator and the audience as well as their relationship with each other. This varies to reflect the complexities of a matrix of different social factors: age, sex, race, ethnicity, kinship, political views, religion, power relationships, etc. When all the potential combinations of such factors between the narrator and the audience are considered, then a single joke can take on infinite shades of meaning for each unique social setting. The context, however, should not be confused with the function of the joking. "Function is essentially an abstraction made on the basis of a number of contexts". In one long-term observation of men coming off the late shift at a local café, joking with the waitresses was used to ascertain sexual availability for the evening. Different types of jokes, moving from general to topical to explicitly sexual humour, signalled openness on the part of the waitress to a connection. This study describes how jokes and joking are used to communicate much more than just good humour. That is a single example of the function of joking in a social setting, but there are others. Sometimes jokes are used simply to get to know someone better: what makes them laugh, what do they find funny? Jokes concerning politics, religion or sexual topics can be used effectively to gauge the attitude of the audience to any one of these topics. They can also be used as a marker of group identity, signalling either inclusion or exclusion for the group. Among pre-adolescents, "dirty" jokes allow them to share information about their changing bodies. And sometimes joking is just simple entertainment for a group of friends.
Relationships The context of joking in turn leads to a study of joking relationships, a term coined by anthropologists to refer to social groups within a culture who take part in institutionalised banter and joking. These relationships can be either one-way or a mutual back and forth between partners. The joking relationship is defined as a peculiar combination of friendliness and antagonism. The behaviour is such that in any other social context it would express and arouse hostility; but it is not meant seriously and must not be taken seriously. There is a pretence of hostility along with a real friendliness. To put it another way, the relationship is one of permitted disrespect. Joking relationships were first described by anthropologists within kinship groups in Africa. But they have since been identified in cultures around the world, where jokes and joking are used to mark and reinforce appropriate boundaries of a relationship. Electronic The advent of electronic communications at the end of the 20th century introduced new traditions into jokes. A verbal joke or cartoon is emailed to a friend or posted on a bulletin board; reactions include a reply email with a :-) or LOL, or forwarding it on to further recipients. Interaction is limited to the computer screen and is, for the most part, solitary. While preserving the text of a joke, both context and variants are lost in internet joking; for the most part, emailed jokes are passed along verbatim. The framing of the joke frequently occurs in the subject line: "RE: laugh for the day" or something similar. The forwarding of an email joke can increase the number of recipients exponentially. Internet joking forces a re-evaluation of social spaces and social groups. They are no longer only defined by physical presence and locality; they also exist in the connectivity of cyberspace. "The computer networks appear to make possible communities that, although physically dispersed, display attributes of the direct, unconstrained, unofficial exchanges folklorists typically concern themselves with". This is particularly evident in the spread of topical jokes, "that genre of lore in which whole crops of jokes spring up seemingly overnight around some sensational event … flourish briefly and then disappear, as the mass media move on to fresh maimings and new collective tragedies". This correlates with the new understanding of the internet as an "active folkloric space" with evolving social and cultural forces and clearly identifiable performers and audiences. A study by the folklorist Bill Ellis documented how an evolving joke cycle circulated over the internet. By accessing message boards that specialised in humour immediately following the 9/11 disaster, Ellis was able to observe in real time both the topical jokes being posted electronically and responses to the jokes. Previous folklore research had been limited to collecting and documenting successful jokes, and only after they had emerged and come to folklorists' attention. Now, an Internet-enhanced collection creates a time machine, as it were, where we can observe what happens in the period before the risible moment, when attempts at humour are unsuccessful. Access to archived message boards also enables us to track the development of a single joke thread in the context of a more complicated virtual conversation. Joke cycles A joke cycle is a collection of jokes about a single target or situation which displays consistent narrative structure and type of humour. 
Some well-known cycles are elephant jokes using nonsense humour, dead baby jokes incorporating black humour, and light bulb jokes, which describe all kinds of operational stupidity. Joke cycles can centre on ethnic groups, professions (viola jokes), catastrophes, settings (…walks into a bar), absurd characters (wind-up dolls), or logical mechanisms which generate the humour (knock-knock jokes). A joke can be reused in different joke cycles; an example of this is the same Head & Shoulders joke refitted to the tragedies of Vic Morrow, Admiral Mountbatten and the crew of the Challenger space shuttle.[note 4] These cycles seem to appear spontaneously, spread rapidly across countries and borders, only to dissipate after some time. Folklorists and others have studied individual joke cycles in an attempt to understand their function and significance within the culture. Several joke cycles circulated in the recent past are discussed below. As with the 9/11 disaster discussed above, cycles attach themselves to celebrities or national catastrophes such as the death of Diana, Princess of Wales, the death of Michael Jackson, and the Space Shuttle Challenger disaster. These cycles arise regularly as a response to terrible unexpected events which command the national news. An in-depth analysis of the Challenger joke cycle documents a change in the type of humour circulated following the disaster, from February to March 1986. "It shows that the jokes appeared in distinct 'waves', the first responding to the disaster with clever wordplay and the second playing with grim and troubling images associated with the event…The primary social function of disaster jokes appears to be to provide closure to an event that provoked communal grieving, by signalling that it was time to move on and pay attention to more immediate concerns". The sociologist Christie Davies has written extensively on ethnic jokes told in countries around the world. In ethnic jokes he finds that the "stupid" ethnic target in the joke is no stranger to the culture, but rather a peripheral social group (geographic, economic, cultural, linguistic) well known to the joke tellers. So Americans tell jokes about Polacks and Italians, Germans tell jokes about Ostfriesens, and the English tell jokes about the Irish. In a review of Davies' theories it is said that "For Davies, [ethnic] jokes are more about how joke tellers imagine themselves than about how they imagine those others who serve as their putative targets…The jokes thus serve to center one in the world – to remind people of their place and to reassure them that they are in it." A third category of joke cycles identifies absurd characters as the butt: for example the grape, the dead baby or the elephant. Beginning in the 1960s, social and cultural interpretations of these joke cycles, spearheaded by the folklorist Alan Dundes, began to appear in academic journals. Dead baby jokes are posited to reflect societal changes and guilt caused by widespread use of contraception and abortion beginning in the 1960s.[note 5] Elephant jokes have been interpreted variously as stand-ins for American blacks during the Civil Rights Era or as an "image of something large and wild abroad in the land captur[ing] the sense of counterculture" of the sixties. These interpretations strive for a cultural understanding of the themes of these jokes which go beyond the simple collection and documentation undertaken previously by folklorists and ethnologists. 
Classification systems As folktales and other types of oral literature became collectables throughout Europe in the 19th century (Brothers Grimm et al.), folklorists and anthropologists of the time needed a system to organise these items. The Aarne–Thompson classification system was first published in 1910 by Antti Aarne, and later expanded by Stith Thompson to become the most renowned classification system for European folktales and other types of oral literature. Its final section addresses anecdotes and jokes, listing traditional humorous tales ordered by their protagonist; "This section of the Index is essentially a classification of the older European jests, or merry tales – humorous stories characterized by short, fairly simple plots. …" Due to its focus on older tale types and obsolete actors (e.g., numbskull), the Aarne–Thompson Index does not provide much help in identifying and classifying the modern joke. A more granular classification system used widely by folklorists and cultural anthropologists is the Thompson Motif Index, which separates tales into their individual story elements. This system enables jokes to be classified according to individual motifs included in the narrative: actors, items and incidents. It does not provide a way to classify a text by more than one element at a time, but because a text usually contains several motifs, it is theoretically possible to file the same text under each of them. The Thompson Motif Index has spawned further specialised motif indices, each of which focuses on a single aspect of one subset of jokes. A sampling of just a few of these specialised indices has been listed under other motif indices. Here one can select an index for medieval Spanish folk narratives, another index for linguistic verbal jokes, and a third one for sexual humour. To assist the researcher with this increasingly confusing situation, there are also multiple bibliographies of indices as well as a how-to guide on creating one's own index. Several difficulties have been identified with these systems of identifying oral narratives according to either tale types or story elements. A first major problem is their hierarchical organisation; one element of the narrative is selected as the major element, while all other parts are arrayed subordinate to this. A second problem with these systems is that the listed motifs are not qualitatively equal; actors, items and incidents are all considered side by side. And because incidents will always have at least one actor and usually have an item, most narratives can be ordered under multiple headings. This leads to confusion about both where to file an item and where to find it. A third significant problem is that the "excessive prudery" common in the middle of the 20th century meant that obscene, sexual and scatological elements were regularly ignored in many of the indices. The folklorist Robert Georges has summed up the concerns with these existing classification systems: …Yet what the multiplicity and variety of sets and subsets reveal is that folklore [jokes] not only takes many forms, but that it is also multifaceted, with purpose, use, structure, content, style, and function all being relevant and important. Any one or combination of these multiple and varied aspects of a folklore example [such as jokes] might emerge as dominant in a specific situation or for a particular inquiry. 
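To make the filing-and-finding problem concrete, here is a minimal sketch in Python of how a motif-based index necessarily files one text under several headings at once; the catalogue entries and motif labels are illustrative inventions, not drawn from the Thompson index itself:

```python
# A toy motif index: each text is filed under every motif it contains,
# so the same joke legitimately appears under several headings at once.
from collections import defaultdict

motif_index = defaultdict(list)  # motif label -> list of text identifiers

def catalogue(text_id, motifs):
    """File one narrative under each of its motifs (actors, items, incidents)."""
    for motif in motifs:
        motif_index[motif].append(text_id)

# Invented catalogue entries for illustration.
catalogue("joke-017", ["fool (actor)", "parson (actor)", "bungled errand (incident)"])
catalogue("joke-018", ["parson (actor)", "magic item (item)"])

# The confusion described above: any one heading finds the text,
# but no single heading is *the* place where it lives.
print(motif_index["parson (actor)"])  # ['joke-017', 'joke-018']
```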
It has proven difficult to organise all the different elements of a joke into a multi-dimensional classification system which could be of real value in the study and evaluation of this (primarily oral) complex narrative form. The General Theory of Verbal Humour or GTVH, developed by the linguists Victor Raskin and Salvatore Attardo, attempts to do exactly this. This classification system was developed specifically for jokes and later expanded to include longer types of humorous narratives. Six different aspects of the narrative, labelled Knowledge Resources or KRs, can be evaluated largely independently of each other, and then combined into a concatenated classification label. These six KRs of the joke structure are: Script Opposition (SO), Logical Mechanism (LM), Situation (SI), Target (TA), Narrative Strategy (NS) and Language (LA). As development of the GTVH progressed, a hierarchy of the KRs was established to partially restrict the options for lower-level KRs depending on the KRs defined above them. For example, a lightbulb joke (SI) will always be in the form of a riddle (NS). Outside of these restrictions, the KRs can create a multitude of combinations, enabling a researcher to select jokes for analysis which contain only one or two defined KRs. It also allows for an evaluation of the similarity or dissimilarity of jokes depending on the similarity of their labels. "The GTVH presents itself as a mechanism … of generating [or describing] an infinite number of jokes by combining the various values that each parameter can take. … Descriptively, to analyze a joke in the GTVH consists of listing the values of the 6 KRs (with the caveat that TA and LM may be empty)." This classification system provides a functional multi-dimensional label for any joke, and indeed any verbal humour. Joke and humour research Many academic disciplines lay claim to the study of jokes (and other forms of humour) as within their purview. Fortunately, there are enough jokes, good, bad and worse, to go around. The studies of jokes from each of the interested disciplines bring to mind the tale of the blind men and an elephant, where the observations, although accurate reflections of their own competent methodological inquiry, frequently fail to grasp the beast in its entirety. This attests to the joke as a traditional narrative form which is indeed complex, concise and complete in and of itself. It requires a "multidisciplinary, interdisciplinary, and cross-disciplinary field of inquiry" to truly appreciate these nuggets of cultural insight.[note 6] Sigmund Freud was one of the first modern scholars to recognise jokes as an important object of investigation. In his 1905 study Jokes and their Relation to the Unconscious, Freud describes the social nature of humour and illustrates his text with many examples of contemporary Viennese jokes. His work is particularly noteworthy in this context because Freud distinguishes in his writings between jokes, humour and the comic. These are distinctions which become easily blurred in many subsequent studies where everything funny tends to be gathered under the umbrella term of "humour", making for a much more diffuse discussion. Since the publication of Freud's study, psychologists have continued to explore humour and jokes in their quest to explain, predict and control an individual's "sense of humour". Why do people laugh? Why do people find something funny? Can jokes predict character, or vice versa, can character predict the jokes an individual laughs at? What is a "sense of humour"? 
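As a rough sketch of how such a concatenated label might be handled in practice, the following toy code represents a joke as a six-KR record, enforces the hierarchy restriction mentioned above (a lightbulb Situation forces riddle form), and compares two jokes by counting shared KR values. The field vocabulary and the similarity measure are our own illustrative inventions, not part of the GTVH itself:

```python
# A joke label as a record of the six Knowledge Resources. Per the caveat
# quoted above, TA and LM may be left empty ("").
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class JokeLabel:
    SO: str  # Script Opposition
    LM: str  # Logical Mechanism (may be empty)
    SI: str  # Situation
    TA: str  # Target (may be empty)
    NS: str  # Narrative Strategy
    LA: str  # Language

    def __post_init__(self):
        # Toy version of the KR hierarchy: a higher-level choice restricts
        # lower-level ones, e.g. a lightbulb joke must take riddle form.
        if self.SI == "lightbulb" and self.NS != "riddle":
            raise ValueError("a lightbulb joke (SI) must be a riddle (NS)")

def similarity(a, b):
    """Number of the six KRs on which two joke labels agree (0-6)."""
    da, db = asdict(a), asdict(b)
    return sum(da[k] == db[k] for k in da)

j1 = JokeLabel("smart/dumb", "faulty reasoning", "lightbulb", "Polish", "riddle", "neutral")
j2 = JokeLabel("smart/dumb", "faulty reasoning", "lightbulb", "Irish", "riddle", "neutral")
print(similarity(j1, j2))  # 5 -- identical labels except for the Target KR
```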
A current review of the popular magazine Psychology Today lists over 200 articles discussing various aspects of humour; in psychological jargon, the subject area has become both an emotion to measure and a tool to use in diagnostics and treatment. A new psychological assessment tool, the Values in Action Inventory developed by the American psychologists Christopher Peterson and Martin Seligman, includes humour (and playfulness) as one of the core character strengths of an individual. As such, it could be a good predictor of life satisfaction. For psychologists, it would be useful to measure both how much of this strength an individual has and how it can be measurably increased. A 2007 survey of existing tools to measure humour identified more than 60 psychological measurement instruments. These measurement tools use many different approaches to quantify humour along with its related states and traits. There are tools to measure an individual's physical response by their smile; the Facial Action Coding System (FACS) is one of several tools used to identify any one of multiple types of smiles. Or the laugh can be measured to calculate the funniness response of an individual; multiple types of laughter have been identified. It must be stressed here that both smiles and laughter are not always a response to something funny. In trying to develop a measurement tool, most systems use "jokes and cartoons" as their test materials. However, because no two tools use the same jokes, and across languages this would not be feasible, how does one determine that the assessment objects are comparable? Moving on, whom does one ask to rate the sense of humour of an individual? Does one ask the person themselves, an impartial observer, or their family, friends and colleagues? Furthermore, has the current mood of the test subjects been considered? Someone with a recent death in the family might not be much inclined to laughter. Given the plethora of variants revealed by even a superficial glance at the problem, it becomes evident that these paths of scientific inquiry are mined with problematic pitfalls and questionable solutions. The psychologist Willibald Ruch has been very active in the research of humour. He has collaborated with the linguists Raskin and Attardo on their General Theory of Verbal Humour (GTVH) classification system. Their goal is to empirically test both the six autonomous classification types (KRs) and the hierarchical ordering of these KRs. Advancement in this direction would be a win-win for both fields of study; linguistics would have empirical verification of this multi-dimensional classification system for jokes, and psychology would have a standardised joke classification with which it could develop verifiably comparable measurement tools. "The linguistics of humor has made gigantic strides forward in the last decade and a half and replaced the psychology of humor as the most advanced theoretical approach to the study of this important and universal human faculty." This recent statement by one noted linguist and humour researcher describes, from his perspective, contemporary linguistic humour research. Linguists study words, how words are strung together to build sentences, how sentences create meaning which can be communicated from one individual to another, and how our interaction with each other using words creates discourse. Jokes have been defined above as oral narratives in which words and sentences are engineered to build toward a punchline. 
The linguist's question is: what exactly makes the punchline funny? This question focuses on how the words used in the punchline create humour, in contrast to the psychologist's concern (see above) with the audience's response to the punchline. The assessment of humour by psychologists "is made from the individual's perspective; e.g. the phenomenon associated with responding to or creating humor and not a description of humor itself." Linguistics, on the other hand, endeavours to provide a precise description of what makes a text funny. Two major new linguistic theories have been developed and tested in recent decades. The first was advanced by Victor Raskin in "Semantic Mechanisms of Humor", published in 1985. While being a variant on the more general concepts of the incongruity theory of humour, it is the first theory to identify its approach as exclusively linguistic. The Script-based Semantic Theory of Humour (SSTH) begins by identifying two linguistic conditions which make a text funny. It then goes on to identify the mechanisms involved in creating the punchline. This theory established the semantic/pragmatic foundation of humour as well as the humour competence of speakers.[note 7] Several years later the SSTH was incorporated into a more expansive theory of jokes put forth by Raskin and his colleague Salvatore Attardo. In the General Theory of Verbal Humour, the SSTH was relabelled as Script Opposition (SO), the requirement that two overlapping but opposed scripts be present in the joke text, and added to five other independent Knowledge Resources (KRs). Together these six KRs could now function as a multi-dimensional descriptive label for any piece of humorous text. Linguistics has developed further methodological tools which can be applied to jokes: discourse analysis and conversation analysis of joking. Both of these subspecialties within the field focus on "naturally occurring" language use, i.e. the analysis of real (usually recorded) conversations. One of these studies has already been discussed above, where Harvey Sacks describes in detail the sequential organisation of telling a single joke. Discourse analysis emphasises the entire context of social joking, the social interaction which cradles the words. Folklore and cultural anthropology have perhaps the strongest claims on jokes as belonging to their bailiwick. Jokes remain one of the few remaining forms of traditional folk literature transmitted orally in western cultures. Identified as one of the "simple forms" of oral literature by André Jolles in 1930, they have been collected and studied since there were folklorists and anthropologists abroad in the lands. As a genre they were important enough at the beginning of the 20th century to be included under their own heading in the Aarne–Thompson index first published in 1910: Anecdotes and jokes. Beginning in the 1960s, cultural researchers began to expand their role from collectors and archivists of "folk ideas" to a more active role of interpreters of cultural artefacts. One of the foremost scholars active during this transitional time was the folklorist Alan Dundes. He started asking questions of tradition and transmission with the key observation that "No piece of folklore continues to be transmitted unless it means something, even if neither the speaker nor the audience can articulate what that meaning might be." In the context of jokes, this then becomes the basis for further research. Why is the joke told right now? 
Only in this expanded perspective is an understanding of its meaning to the participants possible. This questioning resulted in a blossoming of monographs to explore the significance of many joke cycles. What is so funny about absurd nonsense elephant jokes? Why make light of dead babies? In an article on contemporary German jokes about Auschwitz and the Holocaust, Dundes justifies this research: Whether one finds Auschwitz jokes funny or not is not an issue. This material exists and should be recorded. Jokes are always an important barometer of the attitudes of a group. The jokes exist and they obviously must fill some psychic need for those individuals who tell them and those who listen to them. A stimulating generation of new humour theories flourishes like mushrooms in the undergrowth: Elliott Oring's theoretical discussions on "appropriate ambiguity" and Amy Carrell's hypothesis of an "audience-based theory of verbal humor (1993)", to name just a few. In his book Humor and Laughter: An Anthropological Approach, the anthropologist Mahadev Apte presents a solid case for his own academic perspective. "Two axioms underlie my discussion, namely, that humor is by and large culture based and that humor can be a major conceptual and methodological tool for gaining insights into cultural systems." Apte goes on to call for legitimising the field of humour research as "humorology"; this would be a field of study incorporating the interdisciplinary character of humour studies. While the label "humorology" has yet to become a household word, great strides are being made in the international recognition of this interdisciplinary field of research. The International Society for Humor Studies was founded in 1989 with the stated purpose to "promote, stimulate and encourage the interdisciplinary study of humour; to support and cooperate with local, national, and international organizations having similar purposes; to organize and arrange meetings; and to issue and encourage publications concerning the purpose of the society". It also publishes Humor: International Journal of Humor Research and holds yearly conferences to promote and inform its speciality. In 1872, Charles Darwin published one of the first "comprehensive and in many ways remarkably accurate description[s] of laughter in terms of respiration, vocalization, facial action and gesture and posture" in The Expression of the Emotions in Man and Animals. In this early study Darwin raises further questions about who laughs and why they laugh; the myriad responses since then illustrate the complexities of this behaviour. To understand laughter in humans and other primates, the science of gelotology (from the Greek gelos, meaning laughter) has been established; it is the study of laughter and its effects on the body from both a psychological and physiological perspective. While jokes can provoke laughter, laughter cannot be used as a one-to-one marker of jokes because there are multiple stimuli to laughter, humour being just one of them. The other six causes of laughter listed are social context, ignorance, anxiety, derision, apology, and tickling. As such, the study of laughter is a secondary albeit entertaining perspective in an understanding of jokes. Computational humour is a new field of study which uses computers to model humour; it bridges the disciplines of computational linguistics and artificial intelligence. 
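As the next paragraph explains, the field's early joke-generation programs worked off a template with a finite set of pre-defined punning options. A toy sketch of that approach follows; the template and word table are invented for illustration, and real systems of this era were only modestly more elaborate:

```python
# A template pun generator: no semantics, no scripts -- just a fixed
# sentence frame filled from a finite, hand-built table of options.
import random

TEMPLATE = "What do you call a {noun} that {trait}? A {pun}!"

# Pre-defined (noun, trait, pun) options chosen in advance by a human.
PUN_TABLE = [
    ("fish", "plays guitar", "bass player"),
    ("cow", "trembles", "milk shake"),
    ("bear", "has no teeth", "gummy bear"),
]

def make_pun():
    """Pick one pre-defined option and drop it into the template."""
    noun, trait, pun = random.choice(PUN_TABLE)
    return TEMPLATE.format(noun=noun, trait=trait, pun=pun)

print(make_pun())  # e.g. "What do you call a cow that trembles? A milk shake!"
```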
A primary ambition of this field is to develop computer programs which can both generate a joke and recognise a text snippet as a joke. Early programming attempts have dealt almost exclusively with punning because this lends itself to simple straightforward rules. These primitive programs display no intelligence; instead, they work off a template with a finite set of pre-defined punning options upon which to build. More sophisticated computer joke programs have yet to be developed. Based on our understanding of the SSTH / GTVH humour theories, it is easy to see why. The linguistic scripts (a.k.a. frames) referenced in these theories include, for any given word, a "large chunk of semantic information surrounding the word and evoked by it [...] a cognitive structure internalized by the native speaker". These scripts extend much further than the lexical definition of a word; they contain the speaker's complete knowledge of the concept as it exists in his world. As insentient machines, computers lack the encyclopaedic scripts which humans gain through life experience. They also lack the ability to gather the experiences needed to build wide-ranging semantic scripts and understand language in a broader context, a context that any child picks up in daily interaction with his environment. Further development in this field must wait until computational linguists have succeeded in programming a computer with an ontological semantic natural language processing system. It is only "the most complex linguistic structures [which] can serve any formal and/or computational treatment of humor well". Toy systems (i.e. dummy punning programs) are completely inadequate to the task. Despite the fact that the field of computational humour is small and underdeveloped, it is encouraging to note the many interdisciplinary efforts which are currently underway. See also Notes References Further reading
========================================
[SOURCE: https://en.wikipedia.org/wiki/Railway_station] | [TOKENS: 4266]
Train station A train station, railroad station, or railway station is a railway facility where trains stop to load or unload passengers, freight, or both. It generally consists of at least one platform, one track, and a station building providing such ancillary services as ticket sales, waiting rooms, and baggage/freight service. Stations on a single-track line often have a passing loop to accommodate trains travelling in the opposite direction. Locations at which passengers only occasionally board or leave a train, sometimes consisting of a short platform and a waiting area but sometimes indicated by no more than a sign, are variously referred to as "stops", "flag stops", "halts", or "provisional stopping places". The stations themselves may be at ground level, underground, or elevated. Connections may be available to intersecting rail lines or other transport modes such as buses, trams, or other rapid transit systems. Terminology Train station is the terminology typically used in the U.S. In Europe, the terms train station and railway station are both commonly used, with railroad being obsolete. In British Commonwealth usage, where railway station is the traditional term, the word station is commonly understood to mean a railway station unless otherwise specified. In the United States, the term depot is sometimes used as an alternative name for station, along with the compound forms train depot, railway depot, and railroad depot—it is used for both passenger and freight facilities. The term depot is not used in reference to vehicle maintenance facilities in the U.S., whereas it is used as such in Canada and the United Kingdom. History The world's first recorded railway station, for trains drawn by horses rather than hauled by locomotives, began passenger service in 1807. It was The Mount in Swansea, Wales, on the Oystermouth (later the Swansea and Mumbles) Railway. The world's oldest station for locomotive-hauled trains was at Heighington, on the Stockton and Darlington railway in north-east England, built by George Stephenson in the early 19th century and served by the locomotive Locomotion No. 1. The station opened in 1827 and was in use until the 1970s. The building, Grade II*-listed, was in bad condition, but was restored in 1984 as an inn. The inn closed in 2017; in 2024 there were plans to renovate the derelict station in time for the 200th anniversary of the opening of the railway line. The two-storey Mount Clare station in Baltimore, Maryland, United States, which survives as a museum, first saw passenger service as the terminus of the horse-drawn Baltimore and Ohio Railroad on 22 May 1830. The oldest terminal station in the world was Crown Street railway station in Liverpool, England, built in 1830, on the locomotive-hauled Liverpool to Manchester line. The station was slightly older than the still extant Liverpool Road railway station terminal in Manchester. The station was the first to incorporate a train shed. Crown Street station was demolished in 1836, as the Liverpool terminal station moved to Lime Street railway station. Crown Street station was converted to a goods station terminal. The first stations had little in the way of buildings or amenities. The first stations in the modern sense were on the Liverpool and Manchester Railway, opened in 1830. Manchester's Liverpool Road Station, the second oldest terminal station in the world, is preserved as part of the Museum of Science and Industry in Manchester. It resembles a row of Georgian houses. 
Early stations were sometimes built with both passenger and freight facilities, though some railway lines were goods-only or passenger-only, and if a line was dual-purpose there would often be a freight depot apart from the passenger station. This type of dual-purpose station can sometimes still be found today, though in many cases goods facilities are restricted to major stations. Many stations date from the 19th century and reflect the grandiose architecture of the time, lending prestige to the city as well as to railway operations. Countries where railways arrived later may still have such architecture, as later stations often imitated 19th-century styles. Various forms of architecture have been used in the construction of stations, from those boasting grand, intricate, Baroque- or Gothic-style edifices, to plainer utilitarian or modernist styles. Stations in Europe tended to follow British designs and were in some countries, like Italy, financed by British railway companies. Train stations built more recently often have a similar feel to airports, with a simple, abstract style. Examples of modern stations include those on newer high-speed rail networks, such as the Shinkansen in Japan, THSR in Taiwan, TGV lines in France, and ICE lines in Germany. Facilities Stations normally have staffed ticket sales offices, automated ticket machines, or both, although on some lines tickets are sold on board the trains. Many stations include a shop or convenience store. Larger stations usually have fast-food or restaurant facilities. In some countries, stations may also have a bar or pub. Other station facilities may include: toilets, left-luggage, lost-and-found, departures and arrivals schedules, luggage carts, waiting rooms, taxi ranks, bus bays and even car parks. Larger or staffed stations tend to have a greater range of facilities including also a station security office. These are usually open for travellers when there is sufficient traffic over a long enough period of time to warrant the cost. In large cities this may mean facilities available around the clock. A basic station might only have platforms, though it may still be distinguished from a halt, a stopping or halting place that may not even have platforms. Many stations, either larger or smaller, offer interchange with local transportation; this can vary from a simple bus stop across the street to underground rapid-transit urban rail stations. In many African, South American, and Asian countries, stations are also used as a place for public markets and other informal businesses. This is especially true on tourist routes or stations near tourist destinations. As well as providing services for passengers and loading facilities for goods, stations can sometimes have locomotive and rolling stock depots, usually with facilities for storing and refuelling rolling stock and carrying out minor repairs. Configurations The basic configuration of a station and various other features set certain types apart. The first is the level of the tracks. Stations are often sited where a road crosses the railway: unless the crossing is a level crossing, the road and railway will be at different levels. The platforms will often be raised or lowered relative to the station entrance: the station buildings may be on either level, or both. The other arrangement, where the station entrance and platforms are on the same level, is also common, but is perhaps rarer in urban areas, except when the station is a terminus. 
Stations located at level crossings can be problematic if the train blocks the roadway while it stops, causing road traffic to wait for an extended period of time. Stations also exist where the station buildings are above the tracks. An example of this is Arbroath. Occasionally, a station serves two or more railway lines at differing levels. This may be due to the station's position at a point where two lines cross (example: Berlin Hauptbahnhof), or may be to provide separate station capacity for two types of service, such as intercity and suburban (examples: Paris-Gare de Lyon and Philadelphia's 30th Street Station), or for two different destinations. Stations may also be classified according to the layout of the platforms. Apart from single-track lines, the most basic arrangement is a pair of tracks for the two directions; there is then a basic choice of an island platform between the tracks, two separate platforms outside the tracks (side platforms), or a combination of the two. With more tracks, the possibilities expand. Some stations have unusual platform layouts due to space constraints of the station location, or the alignment of the tracks. Examples include staggered platforms, such as at Tutbury and Hatton railway station on the Crewe–Derby line, and curved platforms, such as Cheadle Hulme railway station on the Macclesfield to Manchester Line. Stations at junctions can also have unusual shapes – a Keilbahnhof (or "wedge-shaped" station) is sited where two lines split. Triangular stations also exist where two lines form a three-way junction and platforms are built on all three sides, for example Shipley and Earlestown stations. In a station, there are different types of tracks to serve different purposes. A station may also have a passing loop, with a loop line that comes off the straight main line and merges back into the main line at the other end via railroad switches, to allow trains to pass. A track with a designated spot at the station for boarding and alighting from trains is called a station track or house track, regardless of whether it is a main line or loop line. If such a track is served by a platform, it may be called a platform track. A loop line without a platform, which is used only to allow a train to clear the main line at the station, is called a passing track. A track at the station without a platform which is used for trains to pass the station without stopping is called a through track. There may be other sidings at the station, which are lower-speed tracks for other purposes. A maintenance track or maintenance siding, usually connected to a passing track, is used for parking maintenance equipment, trains not in service, autoracks or sleepers. A refuge track is a dead-end siding that is connected to a station track for the temporary storage of a disabled train. A "terminus" or "terminal" is a station at the end of a railway line. Trains arriving there have to end their journeys (terminate) or reverse out of the station. Depending on the layout of the station, this usually permits travellers to reach all the platforms without the need to cross any tracks – the public entrance to the station and the main reception facilities being at the far end of the platforms. Sometimes the track continues for a short distance beyond the station, and terminating trains continue forward after depositing their passengers, before either proceeding to sidings or reversing to the station to pick up departing passengers. Bondi Junction, Australia and Kristiansand Station, Norway are examples. 
A terminus is frequently, but not always, the final destination of trains arriving at the station. Especially in continental Europe, a city may have a terminus as its main railway station, and all main lines converge on it. In such cases all trains arriving at the terminus must leave in the reverse direction from that of their arrival. There are several ways in which this can be accomplished: the service may be operated by a multiple unit or push-pull train that can be driven from either end, the locomotive may be detached and run around its train to the opposite end, or a second locomotive may be attached to what was the rear of the train. There may also be a bypass line, used by freight trains that do not need to stop at the terminus. Some termini have a newer set of through platforms underneath (or above, or alongside) the terminal platforms on the main level. They are used by a cross-city extension of the main line, often for commuter trains, while the terminal platforms may serve long-distance services. Examples of underground through lines include the Thameslink platforms at St Pancras in London, the Argyle and North Clyde lines of Glasgow's suburban rail network, in Antwerp in Belgium, the RER at the Gare du Nord in Paris, the Milan suburban railway service's Passante railway, and many of the numerous S-Bahn lines at terminal stations in Germany, Austria and Switzerland, such as at Zürich Hauptbahnhof. Due to the disadvantages of terminus stations, there have been multiple cases in which one or several terminus stations were replaced with a new through station, including Berlin Hauptbahnhof, Vienna Hauptbahnhof and numerous examples throughout the first century of railroading. Stuttgart 21 is a controversial project involving the replacement of a terminus station by a through station. An American example of a terminal with this feature is Union Station in Washington, DC, where there are bay platforms on the main concourse level to serve terminating trains and standard island platforms one level below to serve trains continuing southward. The lower tracks run in a tunnel beneath the concourse and emerge a few blocks away to cross the Potomac River into Virginia. Terminus stations in large cities are by far the biggest stations, with the largest being Grand Central Terminal in New York City. Other major cities, such as London, Boston, Paris, Istanbul, Tokyo, and Milan have more than one terminus, rather than routes straight through the city. Train journeys through such cities often require alternative transport (metro, bus, taxi or ferry) from one terminus to the other. For instance, in Istanbul transfers between the Sirkeci Terminal (the European terminus) and the Haydarpaşa Terminal (the Asian terminus) historically required crossing the Bosphorus via alternative means, before the Marmaray railway tunnel linking Europe and Asia was completed. Some cities, including New York, have both termini and through lines. Terminals that have competing rail lines using the station frequently set up a jointly owned terminal railroad to own and operate the station and its associated tracks and switching operations. Stop During a journey, the term station stop may be used in announcements to differentiate halts during which passengers may alight from halts for other reasons, such as a locomotive change. While a junction or interlocking usually divides two or more lines or routes, and thus has remotely or locally operated signals, a station stop does not. A station stop usually does not have any tracks other than the main tracks, and may or may not have switches (points, crossovers). 
An intermediate station does not have any other connecting route, unlike branch-off stations, connecting stations, transfer stations and railway junctions. In a broader sense, an intermediate station is generally any station on the route between its two terminal stations. The majority of stations are, in practice, intermediate stations. They are mostly designed as through stations; there are only a few intermediate stations that take the form of a stub-end station, for example at some zigzags. If there is a station building, it is usually located to the side of the tracks. In the case of intermediate stations used for both passenger and freight traffic, there is a distinction between those where the station building and goods facilities are on the same side of the tracks and those in which the goods facilities are on the opposite side of the tracks from the station building. Intermediate stations also occur on some funicular and cable car routes. An infill station (sometimes in-fill station) is a train station built on an existing passenger rail, rapid transit, or light rail line to address demand in a location between existing stations. Such stations take advantage of existing train service and encourage new riders by providing a more convenient location. Many older transit systems have widely spaced stations and can benefit from infill stations. In some cases, new infill stations are built at sites where a station had once existed many years ago, for example the Cermak–McCormick Place station on the Chicago "L"'s Green Line. A halt, in railway parlance in the Commonwealth of Nations, Ireland and Portugal, is a small passenger station, usually unstaffed or with very few staff, and with few or no facilities. A halt is usually equipped with a platform or platforms on the through track(s) and the appropriate signage, but not with switches. In some cases, trains stop only on request, when passengers on the platform indicate that they wish to board, or passengers on the train inform the crew that they wish to alight. Such stops can appear with or without signals. The Great Western Railway in Great Britain began opening haltes on 12 October 1903; from 1905, the French spelling was Anglicised to "halt". These GWR halts had the most basic facilities, with platforms long enough for just one or two carriages; some had no raised platform at all, necessitating the provision of steps on the carriages. Halts were normally unstaffed, tickets being sold on the train. On 1 September 1904, a larger version, known on the GWR as a "platform" instead of a "halt", was introduced; these had longer platforms, and were usually staffed by a senior grade porter, who sold tickets and sometimes booked parcels or milk consignments. From 1903 to 1947 the GWR built 379 halts and inherited a further 40 from other companies at the Grouping of 1923. Peak building periods were before the First World War (145 built) and 1928–1939 (198 built). Ten more were opened by British Rail on ex-GWR lines. The GWR also built 34 "platforms". Many such stops remain on the national railway networks in the United Kingdom, such as Penmaenmawr in North Wales, Yorton in Shropshire, and The Lakes in Warwickshire, where passengers are requested to inform a member of on-board train staff if they wish to alight, or, if catching a train from the station, to make themselves clearly visible to the driver and use a hand signal as the train approaches. Most have had "Halt" removed from their names. 
Two publicly advertised and publicly accessible National Rail stations retain it: Coombe Junction Halt and St Keyne Wishing Well Halt. A number of other halts are still open and operational on privately owned, heritage, and preserved railways throughout the British Isles. The word is often used informally to describe national rail network stations with limited service and low usage, such as the Oxfordshire Halts on the Cotswold Line. It has also sometimes been used for stations served by public services but accessible only by persons travelling to/from an associated factory (for example IBM near Greenock and British Steel Redcar – although neither of these is any longer served by trains), or military base (such as Lympstone Commando) or railway yard. The only two such "private" stopping places on the national system, where the "halt" designation is still officially used, seem to be Staff Halt (at Durnsford Road, Wimbledon) and Battersea Pier Sidings Staff Halt, both of which are solely for railway staff. In Portugal, railway stops are called halts (Portuguese: apeadeiro). In Ireland, a few small railway stations are designated as "halts" (Irish: stadanna, sing. stad). In some Commonwealth countries the term "halt" is also used. In Australia, with its sparse rural populations, such stopping places were common on lines that were still open for passenger traffic. In the state of Victoria, for example, a location on a railway line where a small diesel railcar or railmotor could stop on request, allowing passengers to board or alight, was called a "rail motor stopping place" (RMSP). Usually situated near a level crossing, it was often designated solely by a sign beside the railway. The passenger could hail the driver to stop, and could buy a ticket from the train guard or conductor. In South Australia, such facilities were called "provisional stopping places". They were often placed on routes on which "school trains" (services conveying children from rural localities to and from school) operated. In West Malaysia, halts are commonplace along the less developed KTM East Coast railway line, to serve rural kampongs (villages) that require train services to stay connected to important nodes but do not have a need for staff. People boarding at halts who have not bought tickets online can buy them from staff on board the train. In rural and remote communities across Canada and the United States, passengers wanting to board the train at such places had to flag the train down to stop it, hence the name "flag stops" or "flag stations". Accessibility Accessibility for disabled people is mandated by law in some countries. In the United Kingdom, rail operators will arrange alternative transport (typically a taxi) at no extra cost to the ticket holder if the station they intend to travel to or from is inaccessible. Goods stations Goods or freight stations deal exclusively or predominantly with the loading and unloading of goods and may well have marshalling yards (classification yards) for the sorting of wagons. The world's first goods terminal was the Park Lane Goods Station at the south end of the Liverpool Docks. Built in 1830, the terminal was reached by a 1.24-mile (2 km) tunnel. As goods are increasingly moved by road, many former goods stations, as well as the goods sheds at passenger stations, have closed. Many are used purely for the cross-loading of freight and may be known as transshipment stations, where they primarily handle containers. 
They are also known as container stations or terminals. Records Busiest Largest Highest See also Bibliography References External links
========================================
[SOURCE: https://en.wikipedia.org/w/index.php?title=Non-player_character&printable=yes] | [TOKENS: 1785]
Non-player character A non-player character (NPC) is a character in a game that is not controlled by a player. The term originated in traditional tabletop role-playing games where it applies to characters controlled by the gamemaster, or referee, rather than by another player. In video games, this usually means a computer-controlled character that has a predetermined set of behaviors that potentially will impact gameplay, but will not necessarily be the product of true artificial intelligence. Role-playing games In traditional tabletop role-playing games (RPG) such as Dungeons & Dragons, an NPC is a character portrayed by the gamemaster (GM). While the player characters (PCs) form the narrative's protagonists, non-player characters can be thought of as the "supporting cast" or "extras" of a roleplaying narrative. Non-player characters populate the fictional world of the game, and can fill any role not occupied by a player character. Non-player characters might be allies, bystanders, or competitors to the PCs. NPCs can also be traders who trade currency for goods such as equipment or gear. NPCs thus vary in their level of detail. Some may be only a brief description ("You see a man in a corner of the tavern"), while others may have complete game statistics and backstories. There is some debate about how much work a gamemaster should put into an important NPC's statistics; some players prefer to have every NPC completely defined with stats, skills, and gear, while others define only what is immediately necessary and fill in the rest as the game proceeds. There is also some debate regarding the importance of fully defined NPCs in any given role-playing game, but there is consensus that the more "real" the NPCs feel, the more fun players will have interacting with them in character. In some games and in some circumstances, a player who is without a player character can temporarily take control of an NPC. Reasons for this vary, but often arise from the player not maintaining a PC within the group and playing the NPC for a session, or from the player's PC being unable to act for some time (for example, because the PC is injured or in another location). Although these characters are still designed and normally controlled by the gamemaster, when players are allowed to temporarily control these non-player characters, it gives them another perspective on the plot of the game. Some systems, such as Nobilis, encourage this in their rules. Many game systems have rules for characters maintaining positive allies in the form of NPC followers, hired hands, or other dependents subordinate in stature to the PC (player character). Characters may sometimes help in the design, recruitment, or development of NPCs. In the Champions game (and related games using the Hero System), a character may have a DNPC, or "dependent non-player character". This is a character controlled by the GM, but for which the player character is responsible in some way, and who may be put in harm's way by the PC's choices. Video games The term "non-player character" is also used in video games to describe entities not under the direct control of a player. The term carries a connotation that the character is not hostile towards players; hostile characters are referred to as enemies, mobs, or creeps. NPC behavior in computer games is usually scripted and automatic, triggered by certain actions or dialogue with the player characters. 
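A minimal sketch of that scripted, trigger-driven behaviour follows; the trigger names and lines are invented, and real engines use much richer event systems, but the principle of a fixed trigger-to-response table is the same:

```python
# Scripted NPC behaviour: a fixed table maps player actions (triggers)
# to canned responses; anything unscripted falls through to a default.
class ScriptedNPC:
    def __init__(self, name, script):
        self.name = name
        self.script = script  # trigger -> canned line or behaviour

    def on_event(self, trigger):
        # No general intelligence: unknown events get the idle default.
        return self.script.get(trigger, f"{self.name} stands idle.")

guard = ScriptedNPC("Guard", {
    "player_approaches": "Halt! Who goes there?",
    "player_shows_pass": "Very well, you may enter.",
})
print(guard.on_event("player_approaches"))  # scripted response
print(guard.on_event("player_dances"))      # falls through to the default
```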
In certain multiplayer games (the Neverwinter Nights and Vampire: The Masquerade series, for example) a player who acts as the GM can "possess" both player and non-player characters, controlling their actions to further the storyline. More complex games, such as the aforementioned Neverwinter Nights, allow the player to customize the NPCs' behavior by modifying their default scripts or creating entirely new ones. In some online games, such as massively multiplayer online role-playing games, NPCs may be entirely unscripted, and are essentially regular character avatars controlled by employees of the game company. These "non-players" are often distinguished from player characters by avatar appearance or other visual designation, and often serve as in-game support for new players. In other cases, these "live" NPCs are virtual actors, playing regular characters that drive a continuing storyline (as in Myst Online: Uru Live). In earlier RPGs, NPCs only had monologues. This is typically represented by a dialogue box, floating text, cutscene, or other means of displaying the NPCs' speech or reaction to the player. NPC speeches of this kind are often designed to give an instant impression of the character of the speaker, providing character vignettes, but they may also advance the story or illuminate the world around the PC. Similar to this is the most common form of storytelling, non-branching dialogue, in which the means of displaying NPC speech are the same as above, but the player character or avatar responds to or initiates speech with NPCs. In addition to the purposes listed above, this enables the development of the player character. More advanced RPGs feature interactive dialogue, or branching dialogue (dialogue trees). Examples are the games produced by Black Isle Studios and White Wolf, Inc.; every one of their games features multiple-choice roleplaying. When talking to an NPC, the player is presented with a list of dialogue options and may choose between them. Each choice may result in a different response from the NPC. These choices may affect the course of the game, as well as the conversation. At the least, they provide a reference point for the player regarding their character's relationship with the game world. Ultima is an example of a game series that has advanced from non-branching (Ultima III: Exodus and earlier) to branching dialogue (from Ultima IV: Quest of the Avatar and on). Other role-playing games with branching dialogues include Cosmic Soldier, Megami Tensei, Fire Emblem, Metal Max, Langrisser, SaGa, Ogre Battle, Chrono, Star Ocean, Sakura Wars, Mass Effect, Dragon Age, Radiant Historia, and several Dragon Quest and Final Fantasy games. Certain video game genres revolve almost entirely around interactions with non-player characters, including visual novels such as Ace Attorney and dating sims such as Tokimeki Memorial, usually featuring complex branching dialogues and often presenting the player's possible responses word-for-word as the player character would say them. Games revolving around relationship-building, including visual novels, dating sims such as Tokimeki Memorial, and some role-playing games such as Persona, often give choices that have a different number of associated "mood points" that influence a player character's relationship and future conversations with a non-player character. 
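The branching dialogue and "mood point" mechanics just described can be sketched as a small tree of nodes and choices; the node names, lines and point values here are invented for illustration:

```python
# A dialogue tree: each node holds an NPC line and the player's options;
# each option routes to the next node and adjusts the NPC's mood points.
from dataclasses import dataclass, field

@dataclass
class Choice:
    text: str            # what the player character says
    next_node: str       # where the conversation goes next
    mood_delta: int = 0  # effect on the NPC's disposition

@dataclass
class Node:
    npc_line: str
    choices: list = field(default_factory=list)

DIALOGUE = {
    "start": Node("Welcome, traveller. What brings you here?", [
        Choice("I need supplies.", "shop", +1),
        Choice("None of your business.", "end", -2),
    ]),
    "shop": Node("Then you have come to the right place."),
    "end": Node("Hmph. Good day to you."),
}

mood = 0
node = DIALOGUE["start"]
print("NPC:", node.npc_line)
picked = node.choices[0]      # suppose the player picks the first option
mood += picked.mood_delta     # accumulated mood can gate later dialogue
node = DIALOGUE[picked.next_node]
print("NPC:", node.npc_line, "| mood points:", mood)
```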
These games often feature a day-night cycle with a time scheduling system that provides context and relevance to character interactions, allowing players to choose when and if to interact with certain characters, which in turn influences their responses during later conversations. In 2023, Replica Studios unveiled its AI-developed NPCs for Unreal Engine 5, in cooperation with OpenAI, which enable players to have an interactive conversation with non-player characters. "NPC streaming"—livestreaming while mimicking the behaviors of an NPC—became popular on TikTok in 2023 and was largely popularized by livestreamer Pinkydoll. Other usage From around 2018, the term NPC became an insult, primarily online, to suggest that a person is unable to form thoughts or opinions of their own. This is sometimes illustrated with a grey-faced, expressionless version of the Wojak meme. Monetization NPC streaming is a type of livestream that allows users to participate in and shape the content they are viewing in real time. It has become widely popular as influencers and users of social media platforms such as TikTok utilize livestreams to act as non-player characters. "Viewers in NPC live streams take on the role of puppeteers, influencing the creator's next move." This phenomenon has been on the rise as viewers are actively involved in what they are watching, by purchasing digital "gifts" and sending them directly to the streamer. In return, the streamer will briefly mimic a character or act. The trend took off in July 2023, as influencers began to profit from this new internet character. Pinkydoll, a TikTok influencer, gained 400,000 followers in the same month that she started NPC streaming, while her livestreams began to earn her as much as $7,000 in a day. NPC streaming gives creators a new avenue to earn money online. Despite this, some creators are quitting due to the stigmas that come with the strategy. For example, Malik Ambersley, a pioneer of the NPC trend, has been robbed, accosted by police, and drawn into fights due to the controversial nature of his act. See also References
========================================
[SOURCE: https://en.wikipedia.org/wiki/100_RMB_note] | [TOKENS: 7853]
Renminbi The renminbi (/ˌrɛnˌmɪnˈbiː/; Chinese: 人民币; pinyin: Rénmínbì; lit. 'People's Currency'; Chinese pronunciation: [ʐən˧˥min˧˥pi˥˩]; symbol: ¥; ISO code: CNY; abbreviation: RMB) is the official currency of China.[a] The renminbi is issued by the People's Bank of China, the monetary authority of China. It is the world's fifth-most-traded currency as of April 2025. The Chinese yuan (元) is the basic unit of the renminbi. One yuan is divided into 10 jiao (角), and the jiao is further subdivided into 10 fen (分). The word yuan is widely used to refer to the Chinese currency generally, especially in international contexts.[b] Exchange rate Until 2005, the exchange rate of the renminbi was pegged to the US dollar. As China pursued reform and opening up to transition from central planning to a market economy and increased its participation in foreign trade, the renminbi was devalued to increase the competitiveness of Chinese industry. It has previously been claimed that the renminbi's official exchange rate was undervalued by as much as 37.5% against its purchasing power parity. In 2011, the International Monetary Fund (IMF) stated that the renminbi was undervalued by 23%. However, appreciation actions by the Chinese government, as well as quantitative easing measures taken by the American Federal Reserve and other major central banks, had brought the renminbi to within as little as 8% of its equilibrium value by the second half of 2012. Since 2006, the renminbi exchange rate has been allowed to float within a narrow margin around a fixed base rate determined with reference to a basket of world currencies. By 2015, the IMF assessed it as no longer undervalued. The Chinese government has announced that it will gradually increase the flexibility of the exchange rate. As a result of the rapid internationalization of the renminbi, it became the world's 8th most traded currency in 2013, 5th by 2015, but 6th in 2019. On 1 October 2016, the renminbi became the first emerging market currency to be included in the IMF's special drawing rights basket, the basket of currencies used by the IMF as a reserve currency. Its initial weighting in the basket was 10.9%. Terminology The ISO code for the renminbi is CNY, the PRC's country code (CN) plus "Y" from "yuan". Renminbi that is traded offshore (internationally) uses the designation CNH, while the onshore (internal) currency is designated CNY. CNY is used only in mainland China, is controlled by the People's Bank of China, and uses a fixed daily exchange rate. CNH is used outside of the mainland, is not restricted like CNY, and has an exchange rate determined by the market. The abbreviation RMB is not an ISO code but is sometimes used like one by banks and financial institutions. The currency symbol for the yuan unit is ¥, but when distinction from the Japanese yen is required, RMB (e.g. RMB 10,000) or ¥ RMB (e.g. ¥10,000 RMB) is used. However, in written Chinese contexts, the Chinese character for yuan (Chinese: 元; lit. 'constituent', 'part') or, in formal contexts, 圆 (lit. 'round') usually follows the number in lieu of a currency symbol. Renminbi is the name of the currency while yuan is the name of the primary unit of the renminbi. This is analogous to the distinction between "sterling" and "pound" when discussing the official currency of the United Kingdom. Jiao and fen are also units of renminbi. 
In everyday Mandarin, kuai (Chinese: 块; pinyin: kuài; lit. 'piece') is usually used when discussing money, and "renminbi" or "yuan" are rarely heard. Similarly, Mandarin speakers typically use mao (Chinese: 毛; pinyin: máo) instead of jiao. For example, ¥8.74 might be read as 八块七毛四 (pinyin: bā kuài qī máo sì) in everyday conversation, but read 八元七角四分 (pinyin: bā yuán qī jiǎo sì fēn) formally. Renminbi is sometimes referred to as the "redback", a play on "greenback", a slang term for the US dollar. History The various currencies called yuan or dollar issued in mainland China as well as Taiwan, Hong Kong, Macau and Singapore were all derived from the Spanish dollar, which China imported in large quantities from Spanish America from the 16th to 20th centuries. The first locally minted silver dollar or yuan accepted all over Qing dynasty China (1644–1912) was the silver dragon dollar introduced in 1889. Various banknotes denominated in dollars or yuan were also introduced, which were convertible to silver dollars until 1935, when the silver standard was discontinued and the Chinese yuan was made fabi (法币; legal tender fiat currency). The renminbi was introduced by the People's Bank of China in December 1948, about a year before the establishment of the People's Republic of China. It was issued only in paper form at first, and replaced the various currencies circulating in the areas controlled by the Communists. One of the first tasks of the new government was to end the hyperinflation that had plagued China in the final years of the Kuomintang (KMT) era. That achieved, a revaluation occurred in 1955 at the rate of 1 new yuan = 10,000 old yuan. In 2019, the People's Bank of China released an updated edition of the fifth series of renminbi banknotes and coins. The update included new versions of the ¥50, ¥20, ¥10, and ¥1 banknotes, as well as the ¥1, ¥0.5, and ¥0.1 coins. These incorporated improved security features, enhanced printing quality, and brighter coloration to combat counterfeiting and improve recognizability. Notably, the ¥100 banknote from the 2015 issue remained unchanged in this release. As the Chinese Communist Party took control of ever larger territories in the latter part of the Chinese Civil War, its People's Bank of China began to issue a unified currency in 1948 for use in Communist-controlled territories. Also denominated in yuan, this currency was identified by different names, including "People's Bank of China banknotes" (simplified Chinese: 中国人民银行钞票; traditional Chinese: 中國人民銀行鈔票; from November 1948), "New Currency" (新币; 新幣; from December 1948), "People's Bank of China notes" (中国人民银行券; 中國人民銀行券; from January 1949), "People's Notes" (人民券, as an abbreviation of the previous name), and finally "People's Currency", or "renminbi", from June 1949. In the early 2020s, China launched the digital renminbi, also known as the digital yuan or e-CNY, a central bank digital currency (CBDC) developed by the People's Bank of China. Pilot programs began in 2020 in cities such as Shenzhen, Suzhou, and Chengdu. By 2023, e-CNY had been integrated into a wide range of applications, including public transportation, government subsidies, retail payments, and cross-border trials. The digital yuan is intended to enhance payment system resilience, offer an alternative to private payment platforms like Alipay and WeChat Pay, and support financial inclusion. From 1949 until the late 1970s, the state fixed China's exchange rate at a highly overvalued level as part of the country's import-substitution strategy.
During this time frame, the focus of the state's central planning was to accelerate industrial development and reduce China's dependence on imported manufactured goods. The overvaluation allowed the government to provide imported machinery and equipment to priority industries at a lower domestic currency cost than otherwise would have been possible. China's transition by the mid-1990s to a system in which the value of its currency was determined by supply and demand in a foreign exchange market was a gradual process spanning 15 years that involved changes in the official exchange rate, the use of a dual exchange rate system, and the introduction and gradual expansion of markets for foreign exchange. The most important move to a market-oriented exchange rate was an easing of controls on trade and other current account transactions, which occurred in several very early steps. In 1979, the State Council approved a system allowing exporters and their provincial and local government owners to retain a share of their foreign exchange earnings, referred to as foreign exchange quotas. At the same time, the government introduced measures to allow retention of part of the foreign exchange earnings from non-trade sources, such as overseas remittances, port fees paid by foreign vessels, and tourism. As early as October 1980, exporting firms that retained foreign exchange above their own import needs were allowed to sell the excess through the state agency responsible for the management of China's exchange controls and its foreign exchange reserves, the State Administration of Exchange Control. Beginning in the mid-1980s, the government sanctioned foreign exchange markets, known as swap centres, eventually in most large cities. The government also gradually allowed market forces to take the dominant role by introducing an "internal settlement rate" of ¥2.8 to 1 US dollar, which was a devaluation of almost 100%. In the process of opening up China to external trade and tourism, transactions with foreign visitors between 1980 and 1994 were done primarily using foreign exchange certificates (外汇券; waihuiquan) issued by the Bank of China. Foreign currencies were exchangeable for FECs and vice versa at the renminbi's prevailing official rate, which ranged from US$1 = ¥2.8 FEC to ¥5.5 FEC. The FEC was issued as banknotes from ¥0.1 to ¥100, and was officially at par with the renminbi. Tourists used FECs to pay for accommodation as well as tourist and luxury goods sold in Friendship Stores. However, given the non-availability of foreign exchange and Friendship Store goods to the general public, as well as the inability of tourists to use FECs at local businesses, an illegal black market developed for FECs, where touts approached tourists outside hotels and offered over ¥1.30 RMB in exchange for ¥1 FEC. In November 1993, the Third Plenum of the 14th Central Committee of the Chinese Communist Party approved a comprehensive reform strategy in which foreign exchange management reforms were highlighted as a key element for a market-oriented economy. A floating exchange rate regime and convertibility for the renminbi were seen as the ultimate goal of the reform. Conditional convertibility under the current account was achieved by allowing firms to surrender their foreign exchange earnings from current account transactions and purchase foreign exchange as needed. Restrictions on foreign direct investment (FDI) were also loosened, and capital inflows to China surged.
During the era of the command economy, the renminbi was set at unrealistic exchange values against Western currency, and severe currency exchange rules were put in place; hence the dual-track currency system from 1980 to 1994, with the renminbi usable only domestically and with Foreign Exchange Certificates (FECs) used by foreign visitors.[citation needed] In the late 1980s and early 1990s, China worked to make the renminbi more convertible. Through the use of swap centres, the exchange rate was eventually brought to more realistic levels of above ¥8/US$1 in 1994 and the FEC was discontinued. It stayed above ¥8/$1 until 2005, when the renminbi's peg to the dollar was loosened and it was allowed to appreciate.[citation needed] As of 2013, the renminbi is convertible on current accounts but not capital accounts. The ultimate goal has been to make the renminbi fully convertible. However, partly in response to the Asian financial crisis in 1998, China has been concerned that the Chinese financial system would not be able to handle the potential rapid cross-border movements of hot money, and as a result, as of 2012, the currency trades within a narrow band specified by the Chinese central government.[citation needed] Following the internationalization of the renminbi, on 30 November 2015, the IMF voted to designate the renminbi as one of several main world currencies, thus including it in the basket of special drawing rights. The renminbi became the first emerging market currency to be included in the IMF's SDR basket on 1 October 2016. The other main world currencies are the dollar, the euro, sterling, and the yen. In October 2019, China's central bank, the PBOC, announced that a digital renminbi was going to be released after years of preparation. This version of the currency, also called DCEP (Digital Currency Electronic Payment), can be "decoupled" from the banking system to give visiting tourists a taste of the nation's burgeoning cashless society. The announcement received a variety of responses: some believe it is more about domestic control and surveillance. Some argue that the real barriers to internationalisation of the renminbi are China's capital controls, which it has no plans to remove. Maximilian Kärnfelt, an expert at the Mercator Institute for China Studies, said that a digital renminbi "would not banish many of the problems holding the renminbi back from more use globally". He went on to say, "Much of China's financial market is still not open to foreigners and property rights remain fragile." The PBOC has filed more than 80 patents surrounding the integration of a digital currency system, choosing to embrace blockchain technology. The patents reveal the extent of China's digital currency plans. The patents, seen and verified by the Financial Times, include proposals related to the issuance and supply of a central bank digital currency, a system for interbank settlements that uses the currency, and the integration of digital currency wallets into existing retail bank accounts. Several of the 84 patents reviewed by the Financial Times indicate that China may plan to algorithmically adjust the supply of a central bank digital currency based on certain triggers, such as loan interest rates. Other patents are focused on building digital currency chip cards or digital currency wallets that banking consumers could potentially use, which would be linked directly to their bank accounts.
The patent filings also point to the proposed 'tokenomics' being considered by the DCEP working group. Some patents show plans towards programmed inflation-control mechanisms. While the majority of the patents are attributed to the PBOC's Digital Currency Research Institute, some are attributed to state-owned corporations or subsidiaries of the Chinese central government. Uncovered by the Chamber of Digital Commerce (an American non-profit advocacy group), their contents shed light on Beijing's mounting efforts to digitise the renminbi, which has sparked alarm in the West and spurred central bankers around the world to begin exploring similar projects. Some commentators have said that the U.S., which has no current plans to issue a government-backed digital currency, risks falling behind China and jeopardizing its dominance in the global financial system. Victor Shih, a China expert and professor at the University of California San Diego, said that merely introducing a digital currency "doesn't solve the problem that some people holding renminbi offshore will want to sell that renminbi and exchange it for the dollar", as the dollar is considered to be a safer asset. Eswar Prasad, an economics professor at Cornell University, said that the digital renminbi "will hardly put a dent in the dollar's status as the dominant global reserve currency" due to the United States' "economic dominance, deep and liquid capital markets, and still-robust institutional framework". The U.S. dollar's share as a reserve currency is above 60%, while that of the renminbi is about 2%. In April 2020, The Guardian reported that the digital currency e-RMB had been adopted into multiple cities' monetary systems and that "some government employees and public servants [will] receive their salaries in the digital currency from May". The Guardian quoted a China Daily report which stated: "A sovereign digital currency provides a functional alternative to the dollar settlement system and blunts the impact of any sanctions or threats of exclusion both at a country and company level. It may also facilitate integration into globally traded currency markets with a reduced risk of politically inspired disruption." There were talks of testing out the digital renminbi at the Beijing Winter Olympics in 2022, but China's overall timetable for rolling out the digital currency was unclear. In May 2023, RMB interest rate swaps were launched. In June 2023, under the Government Green Bond Programme, the Government of the Hong Kong Special Administrative Region of the People's Republic of China (HKSAR) announced a green bond offering of approximately US$6 billion, denominated in USD, EUR and RMB. Issuance As of 2019, renminbi banknotes are available in denominations of ¥0.1, ¥0.5 (1 and 5 jiao), ¥1, ¥5, ¥10, ¥20, ¥50 and ¥100. These denominations have been available since 1955, except for the ¥20 note (added in 1999 with the fifth series) and the ¥50 and ¥100 notes (added in 1987 with the fourth series). Coins are available in denominations from ¥0.01 to ¥1. Thus some denominations exist in both coins and banknotes. On rare occasions, larger yuan coin denominations such as ¥5 have been issued to commemorate events, but use of these outside of collecting has never been widespread.[citation needed] The denomination of each banknote is printed in simplified written Chinese. The numbers themselves are printed in financial[c] Chinese numeral characters, as well as Arabic numerals.
The denomination and the words "People's Bank of China" are also printed in Mongolian, Tibetan, Uyghur and Zhuang on the back of each banknote, in addition to the boldface Hanyu Pinyin "Zhongguo Renmin Yinhang" (without tones). The right front of the note has a tactile representation of the denomination in Chinese Braille, starting from the fourth series.[citation needed] The fen and jiao denominations have become increasingly unnecessary as prices have increased. Coins under ¥0.1 are used infrequently. Chinese retailers tend to avoid fractional values (such as ¥9.99), opting instead to round to the nearest yuan (such as ¥9 or ¥10). In 1955, aluminium ¥0.01, ¥0.02, and ¥0.05 coins began being struck for circulation, and were first introduced in 1957. These depict the national emblem on the obverse (front) and the name and denomination framed by wheat stalks on the reverse (back). In 1980, brass ¥0.1, ¥0.2, and ¥0.5 and cupro-nickel ¥1 coins were added, although the ¥0.1 and ¥0.2 were only produced until 1981, with the last ¥0.5 and ¥1 issued in 1985. All jiǎo coins depicted designs similar to the fēn coins, while the yuán depicted the Great Wall of China. In 1991, a new coinage was introduced, consisting of an aluminium ¥0.1, brass ¥0.5 and nickel-clad steel ¥1. These were smaller than the previous jiǎo and yuán coins and depicted flowers on the obverse and the national emblem on the reverse. Issuance of the aluminium ¥0.01 and ¥0.02 coins ceased in 1991, with that of the ¥0.05 halting in 1994. The small coins were still struck for annual uncirculated mint sets in limited quantities, and from the beginning of 2005 the ¥0.01 coin got a new lease on life, being issued again every year since then up to the present. New designs of the ¥0.1, ¥0.5 (now brass-plated steel), and ¥1 (nickel-plated steel) were again introduced between 1999 and 2002. The ¥0.1 was significantly reduced in size, and in 2005 its composition was changed from aluminium to more durable nickel-plated steel. An updated version of these coins was announced in 2019. While the overall design is unchanged, all coins including the ¥0.5 are now of nickel-plated steel, and the ¥1 coin was reduced in size. The frequency of usage of coins varies between different parts of China, with coins typically being more popular in urban areas (with 5-jiǎo and 1-yuán coins used in vending machines), and small notes being more popular in rural areas. Older fēn and large jiǎo coins are uncommonly still seen in circulation, but are still valid in exchange. As of 2025, five series of renminbi banknotes have been issued by the People's Republic of China. A number of commemorative banknotes have also been issued: In 1999, a commemorative red ¥50 note was issued in honour of the 50th anniversary of the establishment of the People's Republic of China. This note features Chinese Communist Party chairman Mao Zedong on the front and various animals on the back. An orange polymer note, commemorating the new millennium, was issued in 2000 with a face value of ¥100. This features a dragon on the obverse, and the reverse features the China Millennium monument (at the Center for Cultural and Scientific Fairs). For the 2008 Beijing Olympics, a green ¥10 note was issued featuring the Bird's Nest Stadium on the front, with the back showing a classical Olympic discus thrower and various other athletes. On 26 November 2015, the People's Bank of China issued a blue ¥100 commemorative note to honour China's aerospace science and technology.
In commemoration of the 70th anniversary of the renminbi, the People's Bank of China issued 120 million ¥50 banknotes on 28 December 2018. In recognition of the imminent 2022 Winter Olympics, the People's Bank of China issued ¥20 commemorative banknotes in both paper and polymer in December 2021. Since 2024, the People's Bank of China has issued ¥20 commemorative banknotes in polymer in recognition of Chinese New Year celebrations. The renminbi yuan has different names when used in ethnic minority regions of China. Renminbi currency production is carried out by a state-owned corporation, China Banknote Printing and Minting Corporation (CBPMC; 中国印钞造币总公司), headquartered in Beijing. CBPMC uses several printing, engraving and minting facilities around the country to produce banknotes and coins for subsequent distribution. Banknote printing facilities are based in Beijing, Shanghai, Chengdu, Xi'an, Shijiazhuang, and Nanchang. Mints are located in Nanjing, Shanghai, and Shenyang. In addition, high-grade paper for the banknotes is produced at two facilities in Baoding and Kunshan. The Baoding facility is the largest facility in the world dedicated to developing banknote material, according to its website. The People's Bank of China also has its own printing technology research division that researches new techniques for creating banknotes and making counterfeiting more difficult. On 13 March 2006, some delegates to an advisory body at the National People's Congress proposed to include Sun Yat-sen and Deng Xiaoping on the renminbi banknotes. However, the proposal was not adopted. Economics For most of its early history, the renminbi was pegged to the U.S. dollar at ¥2.46 per dollar. During the 1970s, it was revalued until it reached ¥1.50 per dollar in 1980. When China's economy gradually opened in the 1980s, the renminbi was devalued in order to improve the competitiveness of Chinese exports. Thus, the official exchange rate increased from ¥1.50 in 1980 to ¥8.62 by 1994 (the lowest rate on record). An improving current account balance during the latter half of the 1990s enabled the Chinese government to maintain a peg of ¥8.27 per US$1 from 1997 to 2005. The Chinese leadership has been raising the value of the yuan to tame inflation, a step U.S. officials have pushed for years in order to lower the massive trade deficit with China. Strengthening the value of the renminbi also fits with the Chinese transition to a more consumer-led economic growth model. The renminbi reached a record high exchange value of ¥6.0395 to the US dollar on 14 January 2014. In 2015 the People's Bank of China again devalued the country's currency. As of 1 September 2015[update], the exchange rate for US$1 is ¥6.38. On 21 July 2005, the peg was finally lifted, which saw an immediate one-time renminbi revaluation to ¥8.11 per dollar. The exchange rate against the euro stood at ¥10.07060 per euro. However, the peg was reinstituted unofficially when the financial crisis hit: "Under intense pressure from Washington, China took small steps to allow its currency to strengthen for three years starting in July 2005. But China 're-pegged' its currency to the dollar as the financial crisis intensified in July 2008." On 19 June 2010, the People's Bank of China released a statement simultaneously in Chinese and English claiming that it would "proceed further with reform of the renminbi exchange rate regime and increase the renminbi exchange rate flexibility".
The news was greeted with praise by world leaders including Barack Obama, Nicolas Sarkozy and Stephen Harper. The PBoC maintained there would be no "large swings" in the currency. The renminbi rose to its highest level in five years and markets worldwide surged on Monday, 21 June, following China's announcement. In August 2015, Joseph Adinolfi, a reporter for MarketWatch, reported that China had re-pegged the renminbi. In his article, he narrated that "Weak trade data out of China, released over the weekend, weighed on the currencies of Australia and New Zealand on Monday. But the yuan didn't budge. Indeed, the Chinese currency, also known as the renminbi, has been remarkably steady over the past month despite the huge selloff in China's stock market and a spate of disappointing economic data. Market strategists, including Simon Derrick, chief currency strategist at BNY Mellon, and Marc Chandler, head currency strategist at Brown Brothers Harriman, said that is because China's policy makers have effectively re-pegged the yuan. 'When I look at the dollar-renminbi right now, that looks like a fixed exchange rate again. They've re-pegged it,' Chandler said." The renminbi has now moved to a managed floating exchange rate based on market supply and demand with reference to a basket of foreign currencies. In July 2005, the daily trading price of the US dollar against the renminbi in the inter-bank foreign exchange market was allowed to float within a narrow band of 0.3% around the central parity published by the People's Bank of China; in a later announcement published on 18 May 2007, the band was extended to 0.5%. On 14 April 2012, the band was extended to 1.0%. On 17 March 2014, the band was extended to 2%. China has stated that the basket is dominated by the United States dollar, euro, Japanese yen and South Korean won, with a smaller proportion made up of sterling, Thai baht, roubles, Australian dollars, Canadian dollars and Singapore dollars. On 10 April 2008, it traded at ¥6.9920 per US dollar, which was the first time in more than a decade that a dollar had bought less than ¥7, and at ¥11.03630 per euro. Beginning in January 2010, Chinese and non-Chinese citizens have an annual exchange limit of a maximum of US$50,000. Exchanges within this limit require only a passport or Chinese ID and no additional documentation showing the purpose of the exchange. Currency exchange transactions are centrally registered. The maximum dollar withdrawal is $10,000 per day, and the maximum purchase limit of US dollars is $500 per day. This stringent management of the currency leads to a bottled-up demand for exchange in both directions. It is viewed as a major tool to keep the currency peg, preventing inflows of "hot money". A shift of Chinese reserves into the currencies of its other trading partners has caused these nations to shift more of their reserves into dollars, leading to no great change in the value of the renminbi against the dollar. Renminbi futures are traded at the Chicago Mercantile Exchange. The futures are cash-settled at the exchange rate published by the People's Bank of China. Scholarly studies suggest that the yuan is undervalued on the basis of purchasing power parity analysis. One 2011 study suggests a 37.5% undervaluation. The People's Bank of China lowered the renminbi's daily fix to the US dollar by 1.9 per cent to ¥6.2298 on 11 August 2015. The People's Bank of China again lowered the renminbi's daily fix to the US dollar, from ¥6.620 to ¥6.6375, after Brexit on 27 June 2016.
It had not been this low since December 2010. In 2015, the IMF assessed the renminbi's real exchange rate as no longer undervalued, due to "[t]he subsequent sizable REER appreciation appear[ing] to be more than would be suggested by changes in fundamentals (productivity growth and improved ToT)". Before 2009, the renminbi had little to no exposure in the international markets because of strict controls by the central Chinese government that prohibited almost all export of the currency, or use of it in international transactions. Transactions between Chinese companies and a foreign entity were generally denominated in US dollars. With Chinese companies unable to hold US dollars and foreign companies unable to hold Chinese yuan, all transactions would go through the People's Bank of China. Once the sum was paid by the foreign party in dollars, the central bank would pass the settlement in renminbi to the Chinese company at the state-controlled exchange rate. In June 2009, Chinese officials announced a pilot scheme under which business and trade transactions were allowed between limited businesses in Guangdong province and Shanghai, and only counterparties in Hong Kong, Macau, and select ASEAN nations. Proving a success, the program was further extended to 20 Chinese provinces and counterparties internationally in July 2010, and in September 2011 it was announced that the remaining 11 Chinese provinces would be included. In steps intended to establish the renminbi as an international reserve currency, China has agreements with Russia, Vietnam, Sri Lanka, Thailand, and Japan allowing trade with those countries to be settled directly in renminbi instead of requiring conversion to US dollars, with Australia and South Africa to follow soon. In September 2023, the renminbi passed the euro as the second most utilized currency in international trade, its use having tripled over the previous three years. Currency restrictions regarding renminbi-denominated bank deposits and financial products were greatly liberalised in July 2010. In 2010, it was reported that Malaysia's central bank had purchased renminbi-denominated bonds and that McDonald's had issued renminbi-denominated corporate bonds through Standard Chartered Bank of Hong Kong. Such liberalisation makes the yuan more attractive, as it can be held with higher return-on-investment yields, whereas previously those yields were virtually nil. Nevertheless, some national banks, such as the Bank of Thailand (BOT), have expressed serious concern about the renminbi, since the BOT cannot substitute renminbi for the depreciated US dollars in its US$200 billion foreign exchange reserves as much as it wishes. To meet IMF requirements, China gave up some of its tight control over the currency. Countries that are left-leaning in the political spectrum had already begun to use the renminbi as an alternative reserve currency to the United States dollar; the Central Bank of Chile reported in 2011 to have US$91 million worth of renminbi in reserves, and the president of the Central Bank of Venezuela, Nelson Merentes, made statements in favour of the renminbi following the announcement of reserve withdrawals from Europe and the United States. In Africa, the central banks of Ghana, Nigeria, and South Africa either hold renminbi as a reserve currency or have taken steps to purchase bonds denominated in renminbi.
The "Report on the Internationalization of RMB in 2020", which was released by the People's Bank of China in August 2020, said that renminbi's function as international reserve currency has gradually emerged. In the first quarter 2020, the share of renminbi in global foreign exchange reserves rose to 2.02%, a record high. As of the end of 2019, the People's Bank of China has set up renminbi clearing banks in 25 countries and regions outside of mainland China, which has made the use of renminbi more secure and transaction costs have decreased. The two special administrative regions, Hong Kong and Macau, have their own respective currencies, according to the "one country, two systems" principle and the basic laws of the two territories. Therefore, the Hong Kong dollar and the Macanese pataca remain the legal tenders in the two territories, and the renminbi, although sometimes accepted, is not legal tender. Banks in Hong Kong allow people to maintain accounts in RMB. Because of changes in legislation in July 2010, banks around the world offer foreign currency accounts for deposits in Chinese renminbi. The renminbi had a presence in Macau even before the 1999 return to the People's Republic of China from Portugal. Banks in Macau can issue credit cards based on the renminbi, but not loans. Renminbi-based credit cards cannot be used in Macau's casinos. The government of Taiwan believes that wide usage of the renminbi would create an underground economy and undermine its sovereignty. Tourists are allowed to bring in up to ¥20,000 when visiting Taiwan. These renminbi must be converted to Taiwanese currency at trial exchange sites in Matsu and Kinmen. The Chen Shui-bian administration insisted that it would not allow full convertibility until the mainland signs a bilateral foreign exchange settlement agreement, though president Ma Ying-jeou, who served from 2008 to 2016, sought to allow full convertibility as soon as possible. The renminbi circulates in some of China's neighbors, such as Pakistan, Mongolia and northern Thailand. Cambodia welcomes the renminbi as an official currency and Laos and Myanmar allow it in border provinces such as Wa and Kokang and economic zones like Mandalay. Though unofficial, Vietnam recognizes the exchange of the renminbi to the đồng. In 2017, ¥215 billion was circulating in Indonesia. In 2018, a Bilateral Currency Swap Agreement was made by the Bank of Indonesia and the Bank of China which simplified business transactions, and in 2020, about 10% of Indonesia's global trade was in renminbi. Since 2007, renminbi-nominated bonds have been issued outside mainland China; these are colloquially called dim sum bonds. In April 2011, the first initial public offering denominated in renminbi occurred in Hong Kong, when the Chinese property investment trust Hui Xian REIT raised ¥10.48 billion ($1.6 billion) in its IPO. Beijing has allowed renminbi-denominated financial markets to develop in Hong Kong as part of the effort to internationalise the renminbi. There is limited (under 1%) issuing of renminbi bonds in Indonesia. Since currency flows in and out of mainland China are still restricted, renminbi traded in off-shore markets, such as the Hong Kong market, can have a different value to renminbi traded on the mainland. The offshore RMB market is usually denoted as CNH, but there is another renminbi interbank and spot market in Taiwan for domestic trading known as CNT. 
Other renminbi markets include the dollar-settled non-deliverable forward (NDF) and the trade-settlement exchange rate (CNT). Note that the two CNTs mentioned above are different from each other.[how?]
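The daily trading band described above lends itself to a short worked calculation. This is a minimal sketch: the function name is invented for this example, while the 2% band width and the 11 August 2015 central parity of ¥6.2298 are the figures quoted earlier.

    # The onshore (CNY) rate may move at most ±2% around the central parity
    # published by the People's Bank of China each trading day (band width
    # as of 17 March 2014; earlier bands were 0.3%, 0.5% and 1.0%).
    def band_limits(central_parity: float, band_pct: float = 2.0) -> tuple[float, float]:
        """Return the (floor, ceiling) of the allowed daily trading range."""
        width = central_parity * band_pct / 100.0
        return central_parity - width, central_parity + width

    low, high = band_limits(6.2298)    # the 11 August 2015 daily fix
    print(f"{low:.4f} to {high:.4f}")  # 6.1052 to 6.3544

Because the parity itself is reset each day, the band constrains intraday movement only; over successive days the fix can drift well beyond any single day's range.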
========================================
[SOURCE: https://en.wikipedia.org/wiki/Grey_alien#cite_note-33] | [TOKENS: 2835]
Grey alien Grey aliens, also referred to as Zeta Reticulans, Roswell Greys, or simply Greys,[a] are purported extraterrestrial beings. They are frequently featured in claims of close encounters and alien abduction. Greys are typically described as having small, humanoid bodies, smooth, grey skin, disproportionately large, hairless heads, and large, black, almond-shaped eyes. The 1961 Barney and Betty Hill abduction claim was key to the popularization of Grey aliens. Precursor figures have been described in science fiction, and similar descriptions appeared in later accounts of the 1947 Roswell UFO incident and early accounts of the 1948 Aztec UFO hoax. The Grey alien is cited as an archetypal image of an intelligent non-human creature and extraterrestrial life in general, as well as an iconic trope of popular culture in the age of space exploration. Description Greys are typically depicted as grey-skinned, diminutive humanoid beings that possess reduced forms of, or completely lack, external human body parts such as noses, ears, or sex organs. Their bodies are usually depicted as being elongated, having a small chest, and lacking in muscular definition and visible skeletal structure. Their legs are depicted as being shorter and jointed differently from those of humans, with limbs proportioned differently from a human's. Greys are depicted as having unusually large heads in proportion to their bodies, and as having no hair, no noticeable outer ears or noses, and small orifices for ears, nostrils, and mouths. In drawings, Greys are almost always shown with very large, opaque, black eyes, without eye whites. They are frequently described as shorter than average adult humans. The association between Grey aliens and Zeta Reticuli originated with the interpretation of a map drawn by Betty Hill by a schoolteacher named Marjorie Fish sometime in 1969. Betty Hill, under hypnosis, had claimed to have been shown a map that displayed the aliens' home system and nearby stars. Upon learning of this, Fish attempted to create a model from a drawing produced by Hill, eventually determining that the stars marked as the aliens' home were Zeta Reticuli, a binary star system. History In literature, descriptions of beings similar to Grey aliens predate claims of supposed encounters with them. In 1893, H. G. Wells presented a description of humanity's future appearance in the article "The Man of the Year Million", describing humans as having no mouths, noses, or hair, and with large heads. In 1895, Wells also depicted the Eloi, a successor species to humanity, in similar terms in the novel The Time Machine. Both share many characteristics with future perceptions of Greys. As early as 1917, the occultist Aleister Crowley described a meeting with a "preternatural entity" named Lam that was similar in appearance to a modern Grey. Crowley claimed to have contacted Lam through a process called the "Amalantrah Workings," which he believed allowed humans to contact beings from outer space and across dimensions. Other occultists and ufologists, many of whom have retroactively linked Lam to later Grey encounters, have since described their own visitations from him, with one describing the being as a "cold, computer-like intelligence," and utterly beyond human comprehension. ...the creatures did not resemble any race of humans. They were short, shorter than the average Japanese, and their heads were big and bald, with strong, square foreheads, and very small noses and mouths, and weak chins.
What was most extraordinary about them were the eyes—large, dark, gleaming, with a sharp gaze. They wore clothes made of soft grey fabric, and their limbs seemed to be similar to those of humans. In 1933, the Swedish novelist Gustav Sandgren, using the pen name Gabriel Linde, published a science fiction novel called Den okända faran (The Unknown Danger), in which he describes a race of extraterrestrials who wore clothes made of soft grey fabric and were short, with big bald heads, and large, dark, gleaming eyes. The novel, aimed at young readers, included illustrations of the imagined aliens. This description would become the template upon which the popular image of grey aliens is based. The conception remained a niche one until 1965, when newspaper reports of the Betty and Barney Hill abduction made the archetype famous. The alleged abductees, Betty and Barney Hill, claimed that in 1961, humanoid alien beings with greyish skin had abducted them and taken them to a flying saucer. In his 1990 article "Entirely Unpredisposed", Martin Kottmeyer suggested that Barney's memories revealed under hypnosis might have been influenced by an episode of the science-fiction television show The Outer Limits titled "The Bellero Shield", which was broadcast 12 days before Barney's first hypnotic session. The episode featured an extraterrestrial with large eyes, who says, "In all the universes, in all the unities beyond the universes, all who have eyes have eyes that speak." The report from the regression featured a scenario that was in some respects similar to the television show. In part, Kottmeyer wrote: Wraparound eyes are an extreme rarity in science fiction films. I know of only one instance. They appeared on the alien of an episode of an old TV series The Outer Limits entitled "The Bellero Shield." A person familiar with Barney's sketch in "The Interrupted Journey" and the sketch done in collaboration with the artist David Baker will find a "frisson" of "déjà vu" creeping up his spine when seeing this episode. The resemblance is much abetted by an absence of ears, hair, and nose on both aliens. Could it be by chance? Consider this: Barney first described and drew the wraparound eyes during the hypnosis session dated 22 February 1964. "The Bellero Shield" was first broadcast on 10 February 1964. Only twelve days separate the two instances. If the identification is admitted, the commonness of wraparound eyes in the abduction literature falls to cultural forces. — Martin Kottmeyer, Entirely Unpredisposed: The Cultural Background of UFO Reports Carl Sagan echoed Kottmeyer's suspicions in his 1995 book, The Demon-Haunted World: Science as a Candle in the Dark, where Invaders from Mars was cited as another potential inspiration. After the Hills' encounter, Greys would go on to become an integral part of ufology and other extraterrestrial-related folklore. This is particularly true in the case of the United States: according to journalist C. D. B. Bryan, 73% of all reported alien encounters in the United States describe Grey aliens, a significantly higher proportion than in other countries.: 68 During the early 1980s, Greys were linked to the alleged crash-landing of a flying saucer in Roswell, New Mexico, in 1947. A number of publications contained statements from individuals who claimed to have seen the U.S. military handling a number of unusually proportioned, bald, child-sized beings.
These individuals claimed, during and after the incident, that the beings had oversized heads and slanted eyes, but scant other distinguishable facial features. In 1987, novelist Whitley Strieber published the book Communion, which, unlike his previous works, was categorized as non-fiction, and in which he describes a number of close encounters he alleges to have experienced with Greys and other extraterrestrial beings. The book became a New York Times bestseller, and New Line Cinema released a 1989 film adaptation that starred Christopher Walken as Strieber. In 1988, Christophe Dechavanne interviewed the French science-fiction writer and ufologist Jimmy Guieu on TF1's Ciel, mon mardi !. Besides mentioning Majestic 12, Guieu described the existence of what he called "the little greys", which later became better known in French under the name les Petits-Gris. Guieu later wrote two docudramas, using as a plot the Grey aliens / Majestic-12 conspiracy theory as described by John Lear and Milton William Cooper: the series "E.B.E." (for "Extraterrestrial Biological Entity"): E.B.E.: Alerte rouge (first part) (1990) and E.B.E.: L'entité noire d'Andamooka (second part) (1991).[citation needed] Greys have since become the subject of many conspiracy theories. Many conspiracy theorists believe that Greys represent part of a government-led disinformation or plausible deniability campaign, or that they are a product of government mind-control experiments. During the 1990s, popular culture also began to increasingly link Greys to a number of military-industrial complex and New World Order conspiracy theories. In 1995, filmmaker Ray Santilli claimed to have obtained 22 reels of 16 mm film that depicted the autopsy of a "real" Grey supposedly recovered from the site of the 1947 incident in Roswell. In 2006, though, Santilli announced that the film was not original, but was instead a "reconstruction" created after the original film was found to have degraded. He maintained that a real Grey had been found and autopsied on camera in 1947, and that the footage released to the public contained a percentage of that original footage. Analysis Greys are often involved in alien abduction claims. Among reports of alien encounters, Greys make up about 50% in Australia, 73% in the United States, 48% in continental Europe, and around 12% in the United Kingdom.: 68 These reports include two distinct groups of Greys that differ in height.: 74 Abduction claims are often described as extremely traumatic, similar to an abduction by humans or even a sexual assault in the level of trauma and distress. The emotional impact of perceived abductions can be as great as that of combat, sexual abuse, and other traumatic events. The eyes are often a focus of abduction claims, which often describe a Grey staring into the eyes of an abductee when conducting mental procedures. This staring is claimed to induce hallucinogenic states or directly provoke different emotions. Neurologist Steven Novella proposes that Grey aliens are a byproduct of the human imagination, with the Greys' most distinctive features representing everything that modern humans traditionally link with intelligence: "The aliens, however, do not just appear as humans, they appear like humans with those traits we psychologically associate with intelligence." In 2005, Frederick V. Malmstrom, writing in Skeptic magazine, Volume 11, issue 4, presented his idea that Greys are actually residual memories of early childhood development.
Malmstrom reconstructs the face of a Grey through transformation of a mother's face based on our best understanding of early-childhood sensation and perception. Malmstrom's study offers an alternative explanation for the existence of Greys, for the intense instinctive response many people experience when presented with an image of a Grey, and for the common themes that emerge when regression hypnosis and recovered-memory therapy are used to "recover" memories of alien abduction experiences. According to biologist Jack Cohen, the typical image of a Grey, assuming that it would have evolved on a world with environmental and ecological conditions different from Earth's, is too physiologically similar to a human to be credible as a representation of an alien. The interdimensional hypothesis, the cryptoterrestrial hypothesis, and the time-traveller hypothesis attempt to provide alternative explanations for the humanoid anatomy and behavior of these alleged beings. In popular culture Depictions of Grey aliens have gone on to appear in a number of films and television shows, supplanting the previously popular little green men. As early as 1966, for example, the superhero character Ultraman was explicitly based on them, and in 1977 they were featured in Close Encounters of the Third Kind. Greys have also been worked into space opera and other interstellar settings: in Babylon 5, the Greys are referred to as the "Vree", and are depicted as being allies and trade partners of 23rd-century Earth, while in the Stargate franchise they are called the "Asgard" and depicted as ancient astronauts allied with modern-day Earth.[citation needed] South Park refers to them as "visitors". During the 1990s, plotlines wherein Greys were linked to conspiracy theories became common. A well-known example is the Fox television series The X-Files, which first aired in 1993. It combined the quest to find proof of the existence of Grey-like extraterrestrials with a number of UFO conspiracy theory subplots to form its primary story arc. Other notable examples include the XCOM video game franchise (where they are called "Sectoids"); Dark Skies, first broadcast in 1996, which expanded upon the MJ-12 conspiracy;[citation needed] and American Dad!, which features a Grey-like alien named Roger, whose backstory draws from both the Roswell incident and Area 51 conspiracy theories. The 2011 film Paul tells the story of a Grey named Paul who attributes the Greys' frequent presence in science fiction pop culture to the US government deliberately inserting the stereotypical Grey alien image into mainstream media; this is done so that if humanity came into contact with Paul's species, no immediate shock would occur as to their appearance. Child abduction by Greys is a key plot point in the 2013 film Dark Skies. Greys appear in Syfy's 2021 science fiction dramedy series Resident Alien. The Greys appear as the main antagonistic faction in the 2023 independent game Greyhill Incident.
========================================
[SOURCE: https://en.wikipedia.org/wiki/TIOBE_Programming_Community_Index] | [TOKENS: 556]
TIOBE index The TIOBE programming community index is a measure of the popularity of programming languages, created and maintained by TIOBE Software BV, based in Eindhoven, the Netherlands. TIOBE stands for The Importance of Being Earnest, the title of an 1895 comedy play by Oscar Wilde, to emphasize the organization's "sincere and professional attitude towards customers, suppliers and colleagues". The index is calculated from the number of search engine results for queries containing the name of the language. The index covers searches in Google, Google Blogs, MSN, Yahoo!, Baidu, Wikipedia, and YouTube. The index is updated once a month. The current information is free, but the long-term statistical data is for sale. The index authors have stated that it may be valuable when making various strategic decisions. TIOBE focuses on Turing-complete programming languages, and provides no information on the popularity of markup languages such as HTML or XML. History The TIOBE index is sensitive to the ranking policy of the search engines on which it is based. For example, in April 2004, Google performed a cleanup action to get rid of unfair attempts to promote the search rank of many websites. As a consequence, there was a large drop for several languages, such as C++ and JavaScript, yet those languages have stayed at the top of the index. To avoid such fluctuations, TIOBE now uses multiple search engines. In August 2016, C reached its lowest ratings score since the index was launched, but was still the second most popular language after Java. In May 2020, C regained the top position, and Java's popularity subsequently declined substantially, although it maintained the number two position until November 2020, when Python overtook Java and took the number two position. In 2021, Java regained its number two position, and in 2022, Python overtook both Java and C to become the most popular programming language. The TIOBE programming language of the year award goes to the language with the biggest annual popularity gain in the index; e.g., Go won it in 2016, and Python won it in 2020. Criticisms The maintainers specify that the TIOBE index is "not about the best programming language or the language in which most lines of code have been written", but do claim that the number of web pages may reflect the number of skilled engineers, courses and jobs worldwide. In 2012, TIOBE's naming of Objective-C as the "programming language of the year" was challenged, with noted critics advocating for Microsoft's C Sharp. Tim Bunce, author of the Perl DBI, has been critical of the index and its methods of ranking.
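The hit-count basis of the index can be illustrated with a toy calculation. This is a minimal sketch, not TIOBE's actual methodology: the hit counts below are invented, and the real index applies query rules and per-engine weightings that are not reproduced here.

    # Toy version of a search-hit popularity index: each language's rating is
    # its share of the total number of search-engine hits, in percent.
    hits = {  # hypothetical hit counts, summed over all covered search engines
        "Python": 1_200_000,
        "C": 900_000,
        "Java": 850_000,
    }
    total = sum(hits.values())
    ratings = {lang: 100.0 * n / total for lang, n in hits.items()}
    for lang, pct in sorted(ratings.items(), key=lambda kv: -kv[1]):
        print(f"{lang}: {pct:.2f}%")

Computed this way, a rating shifts whenever one engine's hit volumes change, which is one reason TIOBE now averages over multiple search engines.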
========================================
[SOURCE: https://en.wikipedia.org/wiki/Bianconi%E2%80%93Barab%C3%A1si_model] | [TOKENS: 2948]
Bianconi–Barabási model The Bianconi–Barabási model is a model in network science that explains the growth of complex evolving networks. The model explains how nodes with different characteristics acquire links at different rates. It predicts that a node's growth depends on its fitness and can calculate the degree distribution. The Bianconi–Barabási model is named after its inventors Ginestra Bianconi and Albert-László Barabási. This model is a variant of the Barabási–Albert model. The model can be mapped to a Bose gas and this mapping can predict a topological phase transition between a "rich-get-richer" phase and a "winner-takes-all" phase. Concepts The Barabási–Albert (BA) model uses two concepts: growth and preferential attachment. Here, growth indicates the increase in the number of nodes in the network with time, and preferential attachment means that more connected nodes receive more links. The Bianconi–Barabási model, on top of these two concepts, uses another concept called fitness. This model makes use of an analogy with evolutionary models. It assigns an intrinsic fitness value to each node, which embodies all the properties other than the degree. The higher the fitness, the higher the probability of attracting new edges. Fitness can be defined as the ability to attract new links – "a quantitative measure of a node's ability to stay in front of the competition". While the Barabási–Albert (BA) model explains the "first mover advantage" phenomenon, the Bianconi–Barabási model explains how latecomers can also win. In a network where fitness is an attribute, a node with higher fitness will acquire links at a higher rate than less fit nodes. This model explains that age is not the best predictor of a node's success; rather, latecomers also have the chance to attract links and become a hub. The Bianconi–Barabási model can reproduce the degree correlations of the Internet Autonomous Systems. This model can also show condensation phase transitions in the evolution of complex networks. The BB model can predict the topological properties of the Internet. Algorithm The fitness network begins with a fixed number of interconnected nodes. Each node has a fitness, described by a fitness parameter η_j, which is chosen from a fitness distribution ρ(η). The assumption here is that a node's fitness is independent of time and fixed. A new node j with m links and a fitness η_j is added with each time step. The probability Π_i that the new node connects one of its links to an existing node i depends on the degree k_i and on the fitness η_i of node i, such that Π_i = η_i k_i / Σ_j η_j k_j. Each node's evolution with time can be predicted using the continuum theory. If the initial number of nodes is m, then the degree of node i changes at the rate ∂k_i/∂t = m η_i k_i / Σ_j η_j k_j. Assuming the evolution of k_i follows a power law with a fitness exponent β(η_i), k_i(t, t_i) = m (t/t_i)^β(η_i), where t_i is the time at which node i was created. Here β(η) = η/C, where C = ∫ ρ(η) η/(1 − β(η)) dη.
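A minimal simulation sketch may help make the growth rule above concrete. Everything here is an illustrative assumption: the uniform fitness distribution ρ(η) = U(0,1), the choice m = 2, the seed, and the function name are choices of this example, not part of the model's definition.

    import random

    # Grow a Bianconi–Barabási network: each new node attaches m links to
    # existing nodes chosen with probability proportional to fitness * degree.
    def bianconi_barabasi(n: int, m: int = 2, seed: int = 0) -> dict[int, int]:
        """Return a {node: degree} map after growing the network to n nodes."""
        rng = random.Random(seed)
        fitness = {i: rng.random() for i in range(m + 1)}  # eta_i drawn from U(0,1)
        degree = {i: m for i in range(m + 1)}              # complete graph on m+1 nodes
        for new in range(m + 1, n):
            nodes = list(degree)
            weights = [fitness[i] * degree[i] for i in nodes]  # Pi_i ∝ eta_i * k_i
            targets: set[int] = set()
            while len(targets) < m:                        # m distinct link targets
                targets.add(rng.choices(nodes, weights=weights)[0])
            fitness[new] = rng.random()
            degree[new] = m
            for t in targets:
                degree[t] += 1
        return degree

    degrees = bianconi_barabasi(5_000)
    print(max(degrees.values()))  # hubs emerge among high-fitness nodes

Because the attachment weight multiplies fitness by degree, a late node with unusually high fitness can overtake older but less fit nodes, which is the latecomer effect described above.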
Properties If all fitnesses are equal in a fitness network, the Bianconi–Barabási model reduces to the Barabási–Albert model; when the degree is not considered, the model reduces to the fitness model (network theory). When fitnesses are equal, the probability Π_i that the new node is connected to node i, where k_i is the degree of node i, is Π_i = k_i / Σ_j k_j. The degree distribution of the Bianconi–Barabási model depends on the fitness distribution ρ(η). There are two scenarios that can happen based on the probability distribution. If the fitness distribution has a finite domain, then the degree distribution will follow a power law, just like the BA model. In the second case, if the fitness distribution has an infinite domain, then the node with the highest fitness value will attract a large number of nodes and show a winner-takes-all scenario. There are various statistical methods to measure node fitnesses η_i in the Bianconi–Barabási model from real-world network data. From the measurement, one can investigate the fitness distribution ρ(η) or compare the Bianconi–Barabási model with various competing network models in that particular network. The Bianconi–Barabási model has been extended to weighted networks displaying linear and superlinear scaling of the strength with the degree of the nodes, as observed in real network data. This weighted model can lead to condensation of the weights of the network when a few links acquire a finite fraction of the weight of the entire network. Recently it has been shown that the Bianconi–Barabási model can be interpreted as a limiting case of the model for emergent hyperbolic network geometry called Network Geometry with Flavor. The Bianconi–Barabási model can also be modified to study static networks where the number of nodes is fixed. Bose-Einstein condensation Bose–Einstein condensation in networks is a phase transition observed in complex networks that can be described by the Bianconi–Barabási model. This phase transition predicts a "winner-takes-all" phenomenon in complex networks and can be mathematically mapped to the mathematical model explaining Bose–Einstein condensation in physics. In physics, a Bose–Einstein condensate is a state of matter that occurs in certain gases at very low temperatures. Any elementary particle, atom, or molecule can be classified as one of two types: a boson or a fermion. For example, an electron is a fermion, while a photon or a helium atom is a boson. In quantum mechanics, the energy of a (bound) particle is limited to a set of discrete values, called energy levels. An important characteristic of a fermion is that it obeys the Pauli exclusion principle, which states that no two fermions may occupy the same state. Bosons, on the other hand, do not obey the exclusion principle, and any number can exist in the same state. As a result, at very low energies (or temperatures), a great majority of the bosons in a Bose gas can be crowded into the lowest energy state, creating a Bose–Einstein condensate. Bose and Einstein established that the statistical properties of a Bose gas are governed by Bose–Einstein statistics. In Bose–Einstein statistics, any number of identical bosons can be in the same state.
In particular, given an energy state ε, the number of non-interacting bosons in thermal equilibrium at temperature T = 1/β is given by the Bose occupation number n(ε) = 1/(e^(β(ε − μ)) − 1), where the constant μ is determined by an equation describing the conservation of the number of particles, N = ∫ g(ε) n(ε) dε, with g(ε) being the density of states of the system. This last equation may lack a solution at low enough temperatures when g(ε) → 0 for ε → 0. In this case a critical temperature Tc is found such that for T < Tc the system is in a Bose–Einstein condensed phase and a finite fraction of the bosons are in the ground state. The density of states g(ε) depends on the dimensionality of the space; in particular, g(ε) ∼ ε^((d−2)/2), therefore g(ε) → 0 for ε → 0 only in dimensions d > 2. Therefore, a Bose–Einstein condensation of an ideal Bose gas can only occur for dimensions d > 2. The evolution of many complex systems, including the World Wide Web, business, and citation networks, is encoded in the dynamic web describing the interactions between the system's constituents. The evolution of these networks is captured by the Bianconi–Barabási model, which includes two main characteristics of growing networks: their constant growth by the addition of new nodes and links, and the heterogeneous ability of each node to acquire new links, described by the node fitness. Therefore the model is also known as the fitness model. Despite their irreversible and nonequilibrium nature, these networks follow Bose statistics and can be mapped to a Bose gas. In this mapping, each node is mapped to an energy state determined by its fitness, and each new link attached to a given node is mapped to a Bose particle occupying the corresponding energy state. This mapping predicts that the Bianconi–Barabási model can undergo a topological phase transition in correspondence with the Bose–Einstein condensation of the Bose gas. This phase transition is therefore called Bose-Einstein condensation in complex networks. Consequently, addressing the dynamical properties of these nonequilibrium systems within the framework of equilibrium quantum gases predicts that the "first-mover-advantage", "fit-get-rich (FGR)", and "winner-takes-all" phenomena observed in competitive systems are thermodynamically distinct phases of the underlying evolving networks. Starting from the Bianconi–Barabási model, the mapping of a Bose gas to a network can be done by assigning an energy ε_i to each node, determined by its fitness through the relation ε_i = −(1/β) ln η_i, where β = 1/T. In particular, when β = 0 all the nodes have equal fitness, while when β ≫ 1 nodes with different "energy" have very different fitness. We assume that the network evolves through a modified preferential attachment mechanism. At each time step a new node i, with energy ε_i drawn from a probability distribution p(ε), enters the network and attaches a new link to a node j chosen with probability Π_j = e^(−βε_j) k_j / Z_t. In the mapping to a Bose gas, we assign to every new link attached by preferential attachment to node j a particle in the energy state ε_j. The continuum theory predicts that the rate at which links accumulate on node i with "energy" ε_i is given by ∂k_i(ε_i, t, t_i)/∂t = m e^(−βε_i) k_i(ε_i, t, t_i) / Z_t, where k_i(ε_i, t, t_i) indicates the number of links attached to node i, which was added to the network at time step t_i.
The continuum theory predicts that the rate at which links accumulate on node $i$ with "energy" $\varepsilon_i$ is given by

$\frac{\partial k_i(\varepsilon_i, t, t_i)}{\partial t} = m\,\frac{e^{-\beta\varepsilon_i}\, k_i(\varepsilon_i, t, t_i)}{Z_t},$

where $k_i(\varepsilon_i, t, t_i)$ indicates the number of links attached to node $i$, which was added to the network at time step $t_i$, $m$ is the number of links carried by each new node, and $Z_t$ is the partition function, defined as

$Z_t = \sum_j e^{-\beta\varepsilon_j}\, k_j(\varepsilon_j, t, t_j).$

The solution of this differential equation is

$k_i(\varepsilon_i, t, t_i) = m\left(\frac{t}{t_i}\right)^{f(\varepsilon_i)},$

where the dynamic exponent $f(\varepsilon)$ satisfies $f(\varepsilon) = e^{-\beta(\varepsilon - \mu)}$, and $\mu$ plays the role of the chemical potential, satisfying the equation

$\int p(\varepsilon)\,\frac{1}{e^{\beta(\varepsilon - \mu)} - 1}\,d\varepsilon = 1,$

where $p(\varepsilon)$ is the probability that a node has "energy" $\varepsilon$ and "fitness" $\eta = e^{-\beta\varepsilon}$. In the limit $t \to \infty$, the occupation number, giving the number of links attached to nodes with "energy" $\varepsilon$, follows the familiar Bose statistics

$n(\varepsilon) = \frac{1}{e^{\beta(\varepsilon - \mu)} - 1}.$

The definition of the constant $\mu$ in the network models is surprisingly similar to the definition of the chemical potential in a Bose gas. In particular, for probabilities $p(\varepsilon)$ such that $p(\varepsilon) \to 0$ for $\varepsilon \to 0$, at a high enough value of $\beta$ a condensation phase transition occurs in the network model. When this happens, one node, the one with the highest fitness, acquires a finite fraction of all the links. The Bose–Einstein condensation in complex networks is therefore a topological phase transition after which the network has a star-like dominant structure.

The mapping of a Bose gas predicts the existence of two distinct phases as a function of the energy distribution. In the fit-get-rich phase, describing the case of a uniform fitness distribution, the fitter nodes acquire edges at a higher rate than older but less fit nodes. In the end the fittest node will have the most edges, but the richest node is not the absolute winner, since its share of the edges (i.e., the ratio of its edges to the total number of edges in the system) reduces to zero in the limit of large system sizes. The unexpected outcome of this mapping is the possibility of Bose–Einstein condensation for $T < T_{\mathrm{BE}}$, when the fittest node acquires a finite fraction of the edges and maintains this share of edges over time. A representative fitness distribution $\rho(\eta)$ that leads to condensation is given by

$\rho(\eta) = (1 - \eta)^{\lambda},$

where $\lambda = 1$. However, the existence of the Bose–Einstein condensation or of the fit-get-rich phase does not depend on the temperature, or $\beta$, of the system; it depends only on the functional form of the fitness distribution $\rho(\eta)$. In the end, $\beta$ falls out of all topologically important quantities. In fact, it can be shown that Bose–Einstein condensation exists in the fitness model even without the mapping to a Bose gas. A similar gelation can be seen in models with superlinear preferential attachment; however, it is not clear whether this is an accident or whether a deeper connection lies between this and the fitness model.
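As a numerical illustration of the condensation criterion, the sketch below evaluates the constraint integral above for the family of fitness distributions just given, normalised here as $(\lambda+1)(1-\eta)^{\lambda}$ so that it integrates to one (an assumption made for this sketch, not taken from the source). Writing the Bose factor in terms of $\eta$ makes $\beta$ cancel, consistent with the remark that $\beta$ falls out of all topologically important quantities; a limiting value below 1 as $\mu \to 0^-$ signals the condensed phase.

import math

def link_integral(mu_scaled, lam, n_steps=100_000):
    """Approximate I(mu) = integral of rho(eta) * n(eps(eta)) over eta in (0, 1),
    where rho(eta) = (lam + 1) * (1 - eta)**lam and, since eps = -ln(eta)/beta,
    the Bose factor exp(beta*(eps - mu)) equals exp(-mu_scaled)/eta with
    mu_scaled = beta*mu -- beta itself cancels out."""
    step = 1.0 / n_steps
    total = 0.0
    for k in range(1, n_steps):
        eta = k * step
        occupation = 1.0 / (math.exp(-mu_scaled) / eta - 1.0)
        total += (lam + 1) * (1.0 - eta) ** lam * occupation * step
    return total

# The chemical potential must satisfy I(mu) = 1 with mu <= 0. If even
# mu -> 0- gives I < 1, the missing link fraction condenses onto the
# fittest node (the Bose-Einstein condensed phase).
for lam in (0.5, 1.0, 2.0):
    print(f"lambda = {lam}: I(mu -> 0-) ~ {link_integral(-1e-9, lam):.3f}")

For this family the limit works out to $1/\lambda$, so larger $\lambda$ (fitness distributions vanishing faster near $\eta = 1$) pushes the system towards condensation.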
========================================
[SOURCE: https://en.wikipedia.org/wiki/Python_(programming_language)#cite_note-44] | [TOKENS: 4314]
Python (programming language)

Python is a high-level, general-purpose programming language. Its design philosophy emphasizes code readability with the use of significant indentation. Python is dynamically type-checked and garbage-collected. It supports multiple programming paradigms, including structured (particularly procedural), object-oriented and functional programming. Guido van Rossum began working on Python in the late 1980s as a successor to the ABC programming language. Python 3.0, released in 2008, was a major revision and not completely backward-compatible with earlier versions. Beginning with Python 3.5, capabilities and keywords for typing were added to the language, allowing optional static typing. As of 2026, the Python Software Foundation supports Python 3.10, 3.11, 3.12, 3.13, and 3.14, following the project's annual release cycle and five-year support policy. Python 3.15 is currently in the alpha development phase, and its stable release is expected in October 2026. Earlier versions in the 3.x series have reached end-of-life and no longer receive security updates. Python has gained widespread use in the machine learning community. It is widely taught as an introductory programming language. Since 2003, Python has consistently ranked in the top ten of the most popular programming languages in the TIOBE Programming Community Index, which ranks languages based on searches across 24 platforms.

History

Python was conceived in the late 1980s by Guido van Rossum at Centrum Wiskunde & Informatica (CWI) in the Netherlands. It was designed as a successor to the ABC programming language, which was inspired by SETL and capable of exception handling and interfacing with the Amoeba operating system. Python implementation began in December 1989, and Van Rossum first released it in 1991 as Python 0.9.0. Van Rossum assumed sole responsibility for the project, as the lead developer, until 12 July 2018, when he announced his "permanent vacation" from his responsibilities as Python's "benevolent dictator for life" (BDFL), a title bestowed on him by the Python community to reflect his long-term commitment as the project's chief decision-maker. (He has since come out of retirement and is self-titled "BDFL-emeritus".) In January 2019, active Python core developers elected a five-member Steering Council to lead the project. The name Python derives from the British comedy series Monty Python's Flying Circus. (See § Naming.) Python 2.0 was released on 16 October 2000, with many new features such as list comprehensions, cycle-detecting garbage collection, reference counting, and Unicode support. Python 2.7's end-of-life was initially set for 2015, then postponed to 2020 out of concern that a large body of existing code could not easily be forward-ported to Python 3. It no longer receives security patches or other updates. While Python 2.7 and older versions are officially unsupported, a different, unofficial Python implementation, PyPy, continues to support Python 2, i.e. "2.7.18+" (plus 3.11), with the plus signifying (at least some) "backported security updates". Python 3.0 was released on 3 December 2008; it was a major revision, not completely backward-compatible with earlier versions, with some new semantics and changed syntax. Python 2.7.18, released in 2020, was the last release of Python 2. Several releases in the Python 3.x series have added new syntax to the language and made a few (considered very minor) backward-incompatible changes.
As of January 2026, Python 3.14.3 is the latest stable release. Older 3.x branches received final security updates before reaching end-of-life; Python 3.9.25 was the last release in the 3.9 series. Python 3.10 has been the oldest supported branch since November 2025. Python 3.15 has had an alpha release, and an official downloadable Android build is available for Python 3.14. Releases receive two years of full support followed by three years of security support.

Design philosophy and features

Python is a multi-paradigm programming language. Object-oriented programming and structured programming are fully supported, and many of their features support functional programming and aspect-oriented programming, including metaprogramming and metaobjects. Many other paradigms are supported via extensions, including design by contract and logic programming. Python is often referred to as a 'glue language' because it is purposely designed to be able to integrate components written in other languages. Python uses dynamic typing and a combination of reference counting and a cycle-detecting garbage collector for memory management. It uses dynamic name resolution (late binding), which binds method and variable names during program execution. Python's design offers some support for functional programming in the "Lisp tradition". It has filter, map, and reduce functions; list comprehensions, dictionaries, sets, and generator expressions, as illustrated in the sketch below. The standard library has two modules (itertools and functools) that implement functional tools borrowed from Haskell and Standard ML.
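A short, self-contained illustration of these functional features (a sketch with arbitrary values):

from functools import reduce  # reduce moved from builtins to functools in Python 3

nums = [1, 2, 3, 4, 5]

squares = [n * n for n in nums]                    # list comprehension
evens = list(filter(lambda n: n % 2 == 0, nums))   # filter
doubled = list(map(lambda n: n * 2, nums))         # map
total = reduce(lambda a, b: a + b, nums)           # reduce
unique_mods = {n % 3 for n in nums}                # set comprehension
index = {n: n * n for n in nums}                   # dict comprehension
lazy_squares = (n * n for n in nums)               # generator expression (lazy)

print(squares, evens, doubled, total, unique_mods, index[4], next(lazy_squares))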
Python's core philosophy is summarized in the Zen of Python (PEP 20), written by Tim Peters, which includes aphorisms such as "Beautiful is better than ugly", "Simple is better than complex", and "Readability counts". However, Python has received criticism for violating these principles and adding unnecessary language bloat. Responses to these criticisms note that the Zen of Python is a guideline rather than a rule. The addition of some new features has been controversial: Guido van Rossum resigned as Benevolent Dictator for Life after conflict over adding the assignment expression operator in Python 3.8. Nevertheless, rather than building all functionality into its core, Python was designed to be highly extensible via modules. This compact modularity has made it particularly popular as a means of adding programmable interfaces to existing applications. Van Rossum's vision of a small core language with a large standard library and easily extensible interpreter stemmed from his frustrations with ABC, which represented the opposite approach. Python claims to strive for a simpler, less-cluttered syntax and grammar, while giving developers a choice in their coding methodology. Python lacks do .. while loops, which Van Rossum considered harmful. In contrast to Perl's motto "there is more than one way to do it", Python advocates the approach that "there should be one – and preferably only one – obvious way to do it". In practice, however, Python provides many ways to achieve a given goal; there are at least three ways to format a string literal, with no certainty as to which one a programmer should use. Alex Martelli, a Fellow at the Python Software Foundation and Python book author, wrote that "To describe something as 'clever' is not considered a compliment in the Python culture." Python's developers typically prioritize readability over performance. For example, they reject patches to non-critical parts of the CPython reference implementation that would offer increases in speed that do not justify the cost of clarity and readability. Execution speed can be improved by moving speed-critical functions to extension modules written in languages such as C, or by using a just-in-time compiler like PyPy. It is also possible to transpile to other languages, but this approach either fails to achieve the expected speed-up, since Python is a very dynamic language, or compiles only a restricted subset of Python (with potential minor semantic changes). Python is meant to be a fun language to use. This goal is reflected in the name – a tribute to the British comedy group Monty Python – and in playful approaches to some tutorials and reference materials. For instance, some code examples use the terms "spam" and "eggs" (in reference to a Monty Python sketch), rather than the typical terms "foo" and "bar". A common neologism in the Python community is pythonic, which has a broad range of meanings related to program style: Pythonic code may use Python idioms well; be natural or show fluency in the language; or conform with Python's minimalist philosophy and emphasis on readability.
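A small, hypothetical contrast between non-idiomatic and pythonic iteration (both loops print the same output):

names = ["spam", "eggs", "bacon"]

# Non-idiomatic: C-style indexing
for i in range(len(names)):
    print(i, names[i])

# Pythonic: iterate directly, using enumerate when the index is needed
for i, name in enumerate(names):
    print(i, name)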
Syntax and semantics

Python is meant to be an easily readable language. Its formatting is visually uncluttered and often uses English keywords where other languages use punctuation. Unlike many other languages, it does not use curly brackets to delimit blocks, and semicolons after statements are allowed but rarely used. It has fewer syntactic exceptions and special cases than C or Pascal. Python uses whitespace indentation, rather than curly brackets or keywords, to delimit blocks. An increase in indentation comes after certain statements; a decrease in indentation signifies the end of the current block. Thus, the program's visual structure accurately represents its semantic structure. This feature is sometimes termed the off-side rule. Some other languages use indentation this way, but in most languages indentation has no semantic meaning. The recommended indent size is four spaces.

Among Python's statements is the assignment statement (=), which binds a name as a reference to a separate, dynamically allocated object. Variables may subsequently be rebound at any time to any object. In Python, a variable name is a generic reference holder without a fixed data type; however, it always refers to some object with a type. This is called dynamic typing, in contrast to statically-typed languages, where each variable may contain only a value of a certain type. Python does not support tail call optimization or first-class continuations; according to Van Rossum, the language never will. However, better support for coroutine-like functionality is provided by extending Python's generators. Before 2.5, generators were lazy iterators; data was passed unidirectionally out of the generator. From Python 2.5 on, it is possible to pass data back into a generator function; and from version 3.3, data can be passed through multiple stack levels.

In Python, a distinction between expressions and statements is rigidly enforced, in contrast to languages such as Common Lisp, Scheme, or Ruby. This distinction leads to some duplicated functionality. A statement cannot be part of an expression; because of this restriction, expressions such as list and dict comprehensions (and lambda expressions) cannot contain statements. As a particular case, an assignment statement such as a = 1 cannot be part of the conditional expression of a conditional statement.

Python uses duck typing, and it has typed objects but untyped variable names. Type constraints are not checked at definition time; rather, operations on an object may fail at usage time, indicating that the object is not of an appropriate type. Despite being dynamically typed, Python is strongly typed, forbidding operations that are poorly defined (e.g., adding a number and a string) rather than quietly attempting to interpret them. Python allows programmers to define their own types using classes, most often for object-oriented programming. New instances of classes are constructed by calling the class, for example SpamClass() or EggsClass(); the classes are instances of the metaclass type (which is an instance of itself), thereby allowing metaprogramming and reflection. Before version 3.0, Python had two kinds of classes, both using the same syntax: old-style and new-style. Current Python versions support the semantics of only the new style. Python supports optional type annotations. These annotations are not enforced by the language, but may be used by external tools such as mypy to catch errors. The standard library includes a typing module that provides several names for use in type annotations. Also, mypy includes a Python compiler called mypyc, which leverages type annotations for optimization.

Python includes conventional symbols for arithmetic operators (+, -, *, /), the floor-division operator //, and the modulo operator %. (With the modulo operator, a remainder can be negative, e.g., 4 % -3 == -2.) Python also offers the ** symbol for exponentiation, e.g. 5**3 == 125 and 9**0.5 == 3.0, and the matrix-multiplication operator @. These operators work as in traditional mathematics, with the same precedence rules; the infix operators + and - can also be unary, to represent positive and negative numbers respectively. Division between integers produces floating-point results. The behavior of division has changed significantly over time: in current Python terms, the / operator represents true division (or simply division), while the // operator represents floor division; before version 3.0, the / operator represented classic division. Rounding towards negative infinity, though a different method than in most languages, adds consistency to Python. For instance, this rounding implies that the equation (a + b)//b == a//b + 1 is always true, and that the equation b*(a//b) + a%b == a is valid for both positive and negative values of a. As expected, the result of a%b lies in the half-open interval [0, b), where b is a positive integer; maintaining the validity of the latter equation requires that the result lie in the interval (b, 0] when b is negative. Python provides a round function for rounding a float to the nearest integer. For tie-breaking, Python 3 uses the round-to-even method: round(1.5) and round(2.5) both produce 2. Python versions before 3 used the round-away-from-zero method: round(0.5) is 1.0, and round(-0.5) is −1.0. Python allows Boolean expressions that contain multiple equality relations, in a manner consistent with general usage in mathematics. For example, the expression a < b < c tests whether a is less than b and b is less than c. C-derived languages interpret this expression differently: in C, the expression would first evaluate a < b, resulting in 0 or 1, and that result would then be compared with c. These arithmetic and comparison rules are demonstrated in the sketch below.
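A runnable demonstration of the points above; the assertions encode the floor-division and modulo identities just described:

# Floor division rounds toward negative infinity, so these identities hold
# for negative operands too:
for a, b in [(7, 3), (-7, 3), (7, -3)]:
    assert (a + b) // b == a // b + 1
    assert b * (a // b) + a % b == a

print(4 % -3)                 # -2: the remainder takes the sign of the divisor
print(5 ** 3, 9 ** 0.5)       # 125 3.0
print(7 / 2, 7 // 2)          # 3.5 3: true division vs. floor division
print(round(1.5), round(2.5)) # 2 2: Python 3 rounds ties to even

a, b, c = 1, 2, 3
print(a < b < c)              # True: chained comparison, (a < b) and (b < c)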
Python uses arbitrary-precision arithmetic for all integer operations. The Decimal type/class in the decimal module provides decimal floating-point numbers to a pre-defined arbitrary precision with several rounding modes. The Fraction class in the fractions module provides arbitrary precision for rational numbers. Due to Python's extensive mathematics library and the third-party library NumPy, the language is frequently used for scientific scripting in tasks such as numerical data processing and manipulation.

Functions are created in Python by using the def keyword. A function is defined similarly to how it is called, by first providing the function name and then the required parameters. To assign a default value to a function parameter in case no actual value is provided at run time, variable-definition syntax can be used inside the function header. An example of a function that prints its inputs, including a default parameter value, appears in the sketch below.

Code examples

Two canonical examples, a "Hello, World!" program and a program to calculate the factorial of a non-negative integer, are also included in the sketch below.
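The examples referenced above; the article's original listings were not preserved in extraction, so these are the conventional forms (the helper names greet and factorial are illustrative):

# "Hello, World!" program
print("Hello, World!")

# A function that prints its inputs; `greeting` has a default value
def greet(name, greeting="Hello"):
    print(greeting, name)

greet("World")            # Hello World
greet("World", "Howdy")   # Howdy World

# Factorial of a non-negative integer
def factorial(n):
    if n < 0:
        raise ValueError("n must be non-negative")
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

print(factorial(5))       # 120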
Libraries

Python's large standard library is commonly cited as one of its greatest strengths. For Internet-facing applications, many standard formats and protocols such as MIME and HTTP are supported. The language includes modules for creating graphical user interfaces, connecting to relational databases, generating pseudorandom numbers, arithmetic with arbitrary-precision decimals, manipulating regular expressions, and unit testing. Some parts of the standard library are covered by specifications—for example, the Web Server Gateway Interface (WSGI) implementation wsgiref follows PEP 333—but most parts are specified by their code, internal documentation, and test suites. However, because most of the standard library is cross-platform Python code, only a few modules must be altered or rewritten for variant implementations. As of 13 March 2025, the Python Package Index (PyPI), the official repository for third-party Python software, contains over 614,339 packages.

Development environments

Most Python implementations (including CPython) include a read–eval–print loop (REPL), permitting the environment to function as a command-line interpreter with which users enter statements sequentially and receive results immediately. CPython is also bundled with an integrated development environment (IDE) called IDLE, which is oriented toward beginners. Other shells, including IDLE and IPython, add further capabilities such as improved auto-completion, session-state retention, and syntax highlighting. Standard desktop IDEs include PyCharm, Spyder, and Visual Studio Code; there are also web browser-based IDEs.

Implementations

CPython is the reference implementation of Python. It is written in C, meeting the C11 standard since version 3.11. Older versions used the C89 standard with several select C99 features; third-party extensions are not limited to older C versions and can be implemented using C11 or C++. CPython compiles Python programs into an intermediate bytecode, which is then executed by a virtual machine, and is distributed with a large standard library written in a combination of C and native Python. CPython is available for many platforms, including Windows and most modern Unix-like systems, including macOS (and Apple M1 Macs, since Python 3.9.1, using an experimental installer). Starting with Python 3.9, the Python installer deliberately fails to install on Windows 7 and 8; Windows XP was supported until Python 3.5, with unofficial support for VMS. Platform portability was one of Python's earliest priorities. During the development of Python 1 and 2, even OS/2 and Solaris were supported, but support has since been dropped for many platforms. All current Python versions (since 3.7) support only operating systems that feature multithreading, so Python now supports far fewer operating systems than in the past, many outdated platforms having been dropped. All alternative implementations have at least slightly different semantics; for example, an alternative may include unordered dictionaries, in contrast to other current Python versions. As another example in the larger Python ecosystem, PyPy does not support the full C Python API. Creating an executable with Python is often done by bundling an entire Python interpreter into the executable, which makes binary sizes massive for small programs, yet there exist implementations capable of truly compiling Python. Alternative implementations include Stackless Python, a significant fork of CPython that implements microthreads; this implementation uses the call stack differently, allowing massively concurrent programs. PyPy also offers a stackless version. Just-in-time Python compilers have been developed, but are now unsupported. There are several compilers and transpilers to high-level object languages, whose source language is unrestricted Python, a subset of Python, or a language similar to Python, as well as specialized compilers; some older projects existed as well, including compilers not designed for use with Python 3.x and related syntax. A performance comparison among various Python implementations, using a non-numerical (combinatorial) workload, was presented at EuroSciPy '13. In addition, Python's performance relative to other programming languages is benchmarked by The Computer Language Benchmarks Game. There are several approaches to optimizing Python performance, despite the inherent slowness of an interpreted language, using a range of strategies and tools.

Language Development

Python's development is conducted mostly through the Python Enhancement Proposal (PEP) process; this process is the primary mechanism for proposing major new features, collecting community input on issues, and documenting Python design decisions. Python coding style is covered in PEP 8. Outstanding PEPs are reviewed and commented on by the Python community and the steering council. Enhancement of the language corresponds with development of the CPython reference implementation. The mailing list python-dev is the primary forum for the language's development. Specific issues were originally discussed in the Roundup bug tracker hosted by the foundation; in 2022, all issues and discussions were migrated to GitHub. Development originally took place on a self-hosted source-code repository running Mercurial, until Python moved to GitHub in January 2017. CPython's public releases have three types, distinguished by which part of the version number is incremented: backwards-incompatible versions, feature releases, and bugfix releases. Many alpha, beta, and release candidates are also released as previews and for testing before final releases.
Although there is a rough schedule for releases, they are often delayed if the code is not ready. Python's development team monitors the state of the code by running a large unit test suite during development. The major academic conference on Python is PyCon. There are also special Python mentoring programs, such as PyLadies.

Naming

Python's name is inspired by the British comedy group Monty Python, whom Python creator Guido van Rossum enjoyed while developing the language. Monty Python references appear frequently in Python code and culture; for example, the metasyntactic variables often used in Python literature are spam and eggs, rather than the traditional foo and bar. The official Python documentation also contains various references to Monty Python routines. Python users are sometimes referred to as "Pythonistas".
========================================
[SOURCE: https://en.wikipedia.org/wiki/GNU_E] | [TOKENS: 52]
GNU E

GNU E is an extension of C++ designed for writing software systems to support persistent applications. It was designed as part of the Exodus project.
========================================
[SOURCE: https://en.wikipedia.org/wiki/PlayStation_(console)#cite_note-maher20231208-64] | [TOKENS: 10728]
PlayStation (console)

The PlayStation (codenamed PSX, abbreviated as PS, and retroactively PS1 or PS one) is a home video game console developed and marketed by Sony Computer Entertainment. It was released in Japan on 3 December 1994, followed by North America on 9 September 1995, Europe on 29 September 1995, and other regions thereafter. As a fifth-generation console, the PlayStation primarily competed with the Nintendo 64 and the Sega Saturn. Sony began developing the PlayStation after a failed venture with Nintendo to create a CD-ROM peripheral for the Super Nintendo Entertainment System in the early 1990s. The console was primarily designed by Ken Kutaragi and Sony Computer Entertainment in Japan, while additional development was outsourced in the United Kingdom. An emphasis on 3D polygon graphics was placed at the forefront of the console's design. PlayStation game production was designed to be streamlined and inclusive, enticing the support of many third-party developers. The console proved popular for its extensive game library, popular franchises, low retail price, and aggressive youth marketing which advertised it as the preferable console for adolescents and adults. Critically acclaimed games that defined the console include Gran Turismo, Crash Bandicoot, Spyro the Dragon, Tomb Raider, Resident Evil, Metal Gear Solid, Tekken 3, and Final Fantasy VII. Sony ceased production of the PlayStation on 23 March 2006—over eleven years after it had been released, and in the same year the PlayStation 3 debuted. More than 4,000 PlayStation games were released, with cumulative sales of 962 million units. The PlayStation signaled Sony's rise to power in the video game industry. It received acclaim and sold strongly; in less than a decade, it became the first computer entertainment platform to ship over 100 million units. Its use of compact discs heralded the game industry's transition from cartridges. The PlayStation's success led to a line of successors, beginning with the PlayStation 2 in 2000. In the same year, Sony released a smaller and cheaper model, the PS one.

History

The PlayStation was conceived by Ken Kutaragi, a Sony executive who managed a hardware engineering division and was later dubbed "the Father of the PlayStation". Kutaragi's interest in working with video games stemmed from seeing his daughter play games on Nintendo's Famicom. Kutaragi convinced Nintendo to use his SPC-700 sound processor in the Super Nintendo Entertainment System (SNES) through a demonstration of the processor's capabilities. His willingness to work with Nintendo derived from both his admiration of the Famicom and his conviction that video game consoles would become the main home-use entertainment systems. Although Kutaragi was nearly fired because he had worked with Nintendo without Sony's knowledge, president Norio Ohga recognised the potential in Kutaragi's chip and decided to keep him as a protégé. The inception of the PlayStation dates back to a 1988 joint venture between Nintendo and Sony. Nintendo had produced floppy disk technology to complement cartridges in the form of the Family Computer Disk System, and wanted to continue this complementary storage strategy for the SNES. Since Sony was already contracted to produce the SPC-700 sound processor for the SNES, Nintendo contracted Sony to develop a CD-ROM add-on, tentatively titled the "Play Station" or "SNES-CD".
The PlayStation name had already been trademarked by Yamaha, but Nobuyuki Idei liked it so much that he agreed to acquire it for an undisclosed sum rather than search for an alternative. Sony was keen to obtain a foothold in the rapidly expanding video game market. Having been the primary manufacturer of the MSX home computer format, Sony had wanted to use their experience in consumer electronics to produce their own video game hardware. Although the initial agreement between Nintendo and Sony was about producing a CD-ROM drive add-on, Sony had also planned to develop an SNES-compatible Sony-branded console. This iteration was intended to be more of a home entertainment system, playing both SNES cartridges and a new CD format named the "Super Disc", which Sony would design. Under the agreement, Sony would retain sole international rights to every Super Disc game, giving them a large degree of control despite Nintendo's leading position in the video game market. Furthermore, Sony would also be the sole beneficiary of licensing related to music and film software, which it had been aggressively pursuing as a secondary application. The Play Station was to be announced at the 1991 Consumer Electronics Show (CES) in Las Vegas. However, Nintendo president Hiroshi Yamauchi was wary of Sony's increasing leverage at this point and deemed the original 1988 contract unacceptable upon realising it essentially handed Sony control over all games written on the SNES CD-ROM format. Although Nintendo was dominant in the video game market, Sony possessed a superior research and development department. Wanting to protect Nintendo's existing licensing structure, Yamauchi cancelled all plans for the joint Nintendo–Sony SNES CD attachment without telling Sony. He sent Nintendo of America president Minoru Arakawa (his son-in-law) and chairman Howard Lincoln to Amsterdam to form a more favourable contract with Dutch conglomerate Philips, Sony's rival. This contract would give Nintendo total control over their licences on all Philips-produced machines. Kutaragi and Nobuyuki Idei, Sony's director of public relations at the time, learned of Nintendo's actions two days before the CES was due to begin. Kutaragi telephoned numerous contacts, including Philips, to no avail. On the first day of the CES, Sony announced their partnership with Nintendo and their new console, the Play Station. At 9 am the next day, in what has been called "the greatest ever betrayal" in the industry, Howard Lincoln stepped onto the stage and revealed that Nintendo was now allied with Philips and would abandon their work with Sony. Incensed by Nintendo's renouncement, Ohga and Kutaragi decided that Sony would develop their own console. Nintendo's contract-breaking was met with consternation in the Japanese business community, as they had broken an "unwritten law" of native companies not turning against each other in favour of foreign ones. Sony's American branch considered allying with Sega to produce a CD-ROM-based machine called the Sega Multimedia Entertainment System, but the Sega board of directors in Tokyo vetoed the idea when Sega of America CEO Tom Kalinske presented them with the proposal. Kalinske recalled them saying: "That's a stupid idea, Sony doesn't know how to make hardware. They don't know how to make software either. Why would we want to do this?" Sony halted their research, but ultimately decided to develop what it had begun with Nintendo and Sega into a console of their own, based on the SNES.
Despite the tumultuous events at the 1991 CES, negotiations between Nintendo and Sony were still ongoing. A deal was proposed: the Play Station would still have a port for SNES games, on the condition that it would still use Kutaragi's audio chip and that Nintendo would own the rights and receive the bulk of the profits. Roughly two hundred prototype machines were created, and some software entered development. Many within Sony were still opposed to their involvement in the video game industry, with some resenting Kutaragi for jeopardising the company. Kutaragi remained adamant that Sony not retreat from the growing industry and that a deal with Nintendo would never work. Knowing that they had to take decisive action, Sony severed all ties with Nintendo on 4 May 1992. To determine the fate of the PlayStation project, Ohga chaired a meeting in June 1992, consisting of Kutaragi and several senior Sony board members. Kutaragi unveiled a proprietary CD-ROM-based system he had been secretly working on, which played games with immersive 3D graphics. Kutaragi was confident that his LSI chip could accommodate one million logic gates, which exceeded the capabilities of Sony's semiconductor division at the time. Despite gaining Ohga's enthusiasm, there remained opposition from a majority present at the meeting, including older Sony executives who saw Nintendo and Sega as "toy" manufacturers. The opponents felt the game industry was too culturally offbeat and asserted that Sony should remain a central player in the audiovisual industry, where companies were familiar with one another and could conduct "civili[s]ed" business negotiations. After Kutaragi reminded him of the humiliation he had suffered from Nintendo, Ohga retained the project and became one of Kutaragi's staunchest supporters. Ohga shifted Kutaragi and nine of his team from Sony's main headquarters to Sony Music Entertainment Japan (SMEJ), a subsidiary of the main Sony group, so as to retain the project and maintain relationships with Philips for the MMCD development project. The involvement of SMEJ proved crucial to the PlayStation's early development, as the process of manufacturing games on CD-ROM was similar to that used for audio CDs, with which Sony's music division had considerable experience. While at SMEJ, Kutaragi worked with Epic/Sony Records founder Shigeo Maruyama and Akira Sato; both later became vice-presidents of the division that ran the PlayStation business. Sony Computer Entertainment (SCE) was jointly established by Sony and SMEJ to handle the company's ventures into the video game industry. On 27 October 1993, Sony publicly announced that it was entering the game console market with the PlayStation. According to Maruyama, there was uncertainty over whether the console should primarily focus on 2D, sprite-based graphics or 3D polygon graphics. After Sony witnessed the success of Sega's Virtua Fighter (1993) in Japanese arcades, the direction of the PlayStation became "instantly clear" and 3D polygon graphics became the console's primary focus. SCE president Teruhisa Tokunaka expressed gratitude for Sega's timely release of Virtua Fighter, as it proved "just at the right time" that making games with 3D imagery was possible. Maruyama claimed that Sony further wanted to emphasise the new console's ability to utilise Red Book audio from the CD-ROM format in its games, alongside high-quality visuals and gameplay.
Wishing to distance the project from the failed enterprise with Nintendo, Sony initially branded the PlayStation the "PlayStation X" (PSX). Sony formed their European and North American divisions, known as Sony Computer Entertainment Europe (SCEE) and Sony Computer Entertainment America (SCEA), in January and May 1995 respectively. The divisions planned to market the new console under the alternative branding "PSX", following negative feedback regarding "PlayStation" in focus group studies. Early advertising prior to the console's launch in North America referenced PSX, but the term was scrapped before launch. In contrast to Nintendo's consoles, the PlayStation was not marketed under Sony's name. According to Phil Harrison, much of Sony's upper management feared that the Sony brand would be tarnished if associated with the console, which they considered a "toy". Since Sony had no experience in game development, it had to rely on the support of third-party game developers. This was in contrast to Sega and Nintendo, which had versatile and well-equipped in-house software divisions for their arcade games and could easily port successful games to their home consoles. Recent consoles like the Atari Jaguar and 3DO had suffered low sales due to a lack of developer support, prompting Sony to redouble their efforts to gain the endorsement of arcade-savvy developers. A team from Epic Sony visited more than a hundred companies throughout Japan in May 1993 in hopes of attracting game creators with the PlayStation's technological appeal. Sony found that many disliked Nintendo's practices, such as favouring their own games over others. Through a series of negotiations, Sony acquired initial support from Namco, Konami, and Williams Entertainment, as well as 250 other development teams in Japan alone. Namco in particular was interested in developing for the PlayStation, since Namco rivalled Sega in the arcade market. Attaining these companies secured influential games such as Ridge Racer (1993) and Mortal Kombat 3 (1995). Ridge Racer was one of the most popular arcade games at the time, and by December 1993 it had already been confirmed behind closed doors as the PlayStation's first game, despite Namco being a longstanding Nintendo developer. Namco's research managing director Shegeichi Nakamura met with Kutaragi in 1993 to discuss the preliminary PlayStation specifications, with Namco subsequently basing the Namco System 11 arcade board on PlayStation hardware and developing Tekken to compete with Virtua Fighter. The System 11 launched in arcades several months before the PlayStation's release, with the arcade release of Tekken in September 1994. Despite securing the support of various Japanese studios, Sony had no developers of their own while the PlayStation was in development. This changed in 1993 when Sony acquired the Liverpudlian company Psygnosis (later renamed SCE Liverpool) for US$48 million, securing their first in-house development team. The acquisition meant that Sony could have more launch games ready for the PlayStation's release in Europe and North America. Ian Hetherington, Psygnosis' co-founder, was disappointed after receiving early builds of the PlayStation and recalled that the console "was not fit for purpose" until his team got involved with it. Hetherington frequently clashed with Sony executives over broader ideas; at one point it was suggested that a television with a built-in PlayStation be produced.
In the months leading up to the PlayStation's launch, Psygnosis had around 500 full-time staff working on games and assisting with software development. The purchase of Psygnosis marked another turning point for the PlayStation, as it played a vital role in creating the console's development kits. While Sony had provided MIPS R4000-based Sony NEWS workstations for PlayStation development, Psygnosis employees disliked the thought of developing on these expensive workstations and asked Bristol-based SN Systems to create an alternative PC-based development system. Andy Beveridge and Martin Day, owners of SN Systems, had previously supplied development hardware for other consoles such as the Mega Drive, the Atari ST, and the SNES. When Psygnosis arranged an audience for SN Systems with Sony's Japanese executives at the January 1994 CES in Las Vegas, Beveridge and Day presented their prototype of the condensed development kit, which could run on an ordinary personal computer with two extension boards. Impressed, Sony decided to abandon its plans for a workstation-based development system in favour of SN Systems's, thus securing a cheaper and more efficient method for designing software. An order of over 600 systems followed, and SN Systems supplied Sony with additional software such as an assembler, a linker, and a debugger. SN Systems produced development kits for future PlayStation systems, including the PlayStation 2, and was bought out by Sony in 2005. Sony strived to make game production as streamlined and inclusive as possible, in contrast to the relatively isolated approach of Sega and Nintendo. Phil Harrison, representative director of SCEE, believed that Sony's emphasis on developer assistance reduced the most time-consuming aspects of development. As well as providing programming libraries, SCE headquarters in London, California, and Tokyo housed technical support teams that could work closely with third-party developers if needed. Unlike Nintendo, Sony did not favour their own products over non-Sony ones; Peter Molyneux of Bullfrog Productions admired Sony's open-handed approach to software developers and lauded their decision to use PCs as a development platform, remarking that "[it was] like being released from jail in terms of the freedom you have". Another strategy that helped attract software developers was the PlayStation's use of the CD-ROM format instead of traditional cartridges. Nintendo cartridges were expensive to manufacture, and the company controlled all production, prioritising their own games, while inexpensive compact disc manufacturing occurred at dozens of locations around the world. The PlayStation's architecture and interconnectability with PCs were beneficial to many software developers. The use of the programming language C proved useful, as it safeguarded the future compatibility of the machine should developers decide to make further hardware revisions. Despite the inherent flexibility, some developers found themselves restricted by the console's lack of RAM. While working on beta builds of the PlayStation, Molyneux observed that its MIPS processor was not "quite as bullish" compared to that of a fast PC, and said that it took his team two weeks to port their PC code to the PlayStation development kits and another fortnight to achieve a four-fold speed increase. An engineer from Ocean Software, one of Europe's largest game developers at the time, thought that allocating RAM was a challenging aspect given the 3.5 megabyte restriction.
Kutaragi said that while it would have been easy to double the amount of RAM in the PlayStation, the development team refrained from doing so to keep the retail cost down. Kutaragi saw the biggest challenge in developing the system as balancing the conflicting goals of high performance, low cost, and ease of programming, and felt he and his team were successful in this regard. The console's technical specifications were finalised in 1993 and its design during 1994. The PlayStation name and its final design were confirmed at a press conference on 10 May 1994, although the price and release dates had not yet been disclosed. Sony released the PlayStation in Japan on 3 December 1994, a week after the release of the Sega Saturn, at a price of ¥39,800. Sales in Japan began with a "stunning" success, with long queues in shops. Ohga later recalled that he realised how important the PlayStation had become for Sony when friends and relatives begged him for consoles for their children. The PlayStation sold 100,000 units on the first day and two million units within six months, although the Saturn outsold the PlayStation in the first few weeks due to the success of Virtua Fighter. By the end of 1994, 300,000 PlayStation units had been sold in Japan, compared to 500,000 Saturn units. A grey market emerged for PlayStations shipped from Japan to North America and Europe, with buyers of such consoles paying up to £700.

"When September 1995 arrived and Sony's Playstation roared out of the gate, things immediately felt different than [sic] they did with the Saturn launch earlier that year. Sega dropped the Saturn $100 to match the Playstation's $299 debut price, but sales weren't even close—Playstations flew out the door as fast as we could get them in stock."

Before the release in North America, Sega and Sony presented their consoles at the first Electronic Entertainment Expo (E3) in Los Angeles on 11 May 1995. At their keynote presentation, Sega of America CEO Tom Kalinske revealed that the Saturn would be released immediately to select retailers at a price of $399. Next came Sony's turn: Olaf Olafsson, the head of SCEA, summoned Steve Race, the head of development, to the conference stage, who said "$299" and left the stage to a round of applause. The attention on the Sony conference was further bolstered by the surprise appearance of Michael Jackson and the showcase of highly anticipated games, including Wipeout (1995), Ridge Racer, and Tekken (1994). In addition, Sony announced that no games would be bundled with the console. Although the Saturn had been released early in the United States to gain an advantage over the PlayStation, the surprise launch upset many retailers who were not informed in time, harming sales. Some retailers, such as KB Toys, responded by dropping the Saturn entirely. The PlayStation went on sale in North America on 9 September 1995. It sold more units within two days than the Saturn had in five months, with almost all of the initial shipment of 100,000 units sold in advance and shops across the country running out of consoles and accessories. The well-received Ridge Racer contributed to the PlayStation's early success — with some critics considering it superior to Sega's arcade counterpart Daytona USA (1994) — as did Battle Arena Toshinden (1995). There were over 100,000 pre-orders placed, and 17 games were available on the market by the time of the PlayStation's American launch, compared to the Saturn's six launch games.
The PlayStation was released in Europe on 29 September 1995 and in Australia on 15 November 1995. By November it had already outsold the Saturn by three to one in the United Kingdom, where Sony had allocated a £20 million marketing budget for the Christmas season, compared to Sega's £4 million. Sony found early success in the United Kingdom by securing listings with independent shop owners as well as prominent High Street chains such as Comet and Argos. Within its first year, the PlayStation secured over 20% of the entire American video game market. From September to the end of 1995, sales in the United States amounted to 800,000 units, giving the PlayStation a commanding lead over the other fifth-generation consoles, though the SNES and Mega Drive from the fourth generation still outsold it. Sony reported an attach rate of four games sold for every console. To meet increasing demand, Sony chartered jumbo jets and ramped up production in Europe and North America. By early 1996, the PlayStation had grossed $2 billion (equivalent to $4.106 billion in 2025) from worldwide hardware and software sales. By late 1996, sales in Europe totalled 2.2 million units, including 700,000 in the UK. Approximately 400 PlayStation games were in development, compared to around 200 games being developed for the Saturn and 60 for the Nintendo 64. In India, the PlayStation was test-marketed through Sony showrooms during 1999–2000, selling 100 units. Sony then launched the console countrywide, in its PS One model, on 24 January 2002, at a price of Rs 7,990 and with 26 games available from the start. The PlayStation also did well in markets where it was never officially released. In Brazil, the registration of the trademark by a third company prevented an official release; the officially distributed Sega Saturn initially took over the market, but as Sega withdrew, PlayStation imports and widespread piracy increased. In China, the most popular 32-bit console was initially the Sega Saturn, but after the Saturn left the market the PlayStation grew to a base of 300,000 users by January 2000, even though Sony China had no plans to release it officially. The PlayStation was backed by a successful marketing campaign, allowing Sony to gain an early foothold in Europe and North America. Initially, PlayStation demographics were skewed towards adults, but the audience broadened after the first price drop. While the Saturn was positioned towards 18- to 34-year-olds, the PlayStation was initially marketed exclusively towards teenagers. Executives from both Sony and Sega reasoned that because younger players typically looked up to older, more experienced players, advertising targeted at teens and adults would draw them in too. Additionally, Sony found that adults reacted best to advertising aimed at teenagers; Lee Clow surmised that people who had started to grow into adulthood regressed and became "17 again" when they played video games. The console was marketed with advertising slogans in which the controller's four geometric button symbols stood in for letters: "Live in Your World. Play in Ours." and "U R NOT E" (with a red E). Clow thought that by invoking such provocative statements, gamers would respond to the contrary and say "'Bullshit.
Let me show you how ready I am.'" As the console's appeal broadened, Sony's marketing efforts expanded from their earlier focus on mature players to specifically target younger children as well. Shortly after the PlayStation's release in Europe, Sony tasked marketing manager Geoff Glendenning with assessing the desires of a new target audience. Sceptical of Nintendo's and Sega's reliance on television campaigns, Glendenning theorised that young adults transitioning from fourth-generation consoles would feel neglected by marketing directed at children and teenagers. Recognising the influence that early 1990s underground clubbing and rave culture had on young people, especially in the United Kingdom, Glendenning felt that the culture had become mainstream enough to help cultivate the PlayStation's emerging identity. Sony partnered with prominent nightclub owners, such as Ministry of Sound, and festival promoters to organise dedicated PlayStation areas where demonstrations of select games could be tested. Sheffield-based graphic design studio The Designers Republic was contracted by Sony to produce promotional materials aimed at a fashionable, club-going audience. Psygnosis' Wipeout in particular became associated with nightclub culture, as it was widely featured in venues. By 1997, there were 52 nightclubs in the United Kingdom with dedicated PlayStation rooms. Glendenning recalled that he had discreetly used at least £100,000 a year in slush fund money to invest in impromptu marketing. In 1996, Sony expanded their CD production facilities in the United States due to the high demand for PlayStation games, increasing monthly output from 4 million to 6.5 million discs. This was necessary because PlayStation sales were running at twice the rate of Saturn sales, and its lead increased dramatically when both consoles dropped in price to $199 that year. The PlayStation also outsold the Saturn at a similar ratio in Europe during 1996, with 2.2 million consoles sold in the region by the end of the year. Sales figures for PlayStation hardware and software only increased following the launch of the Nintendo 64. Tokunaka speculated that the Nintendo 64 launch had actually helped PlayStation sales by raising public awareness of the gaming market through Nintendo's added marketing efforts. Despite this, the PlayStation took longer to achieve dominance in Japan. Tokunaka said that, even after the PlayStation and Saturn had been on the market for nearly two years, the competition between them was still "very close", and neither console had led in sales for any meaningful length of time. By 1998, Sega, spurred by their declining market share and significant financial losses, launched the Dreamcast as a last-ditch attempt to stay in the industry. Although its launch was successful, the technically superior 128-bit console was unable to subdue Sony's dominance in the industry. Sony still held 60% of the overall video game market share in North America at the end of 1999. Sega's initial confidence in their new console was undermined when Japanese sales were lower than expected, with disgruntled Japanese consumers reportedly returning their Dreamcasts in exchange for PlayStation software. On 2 March 1999, Sony officially revealed details of the PlayStation 2, which Kutaragi announced would feature a graphics processor designed to push more raw polygons than any console in history, effectively rivalling most supercomputers.
The PlayStation continued to sell strongly at the turn of the new millennium: in mid-2000, Sony released the PS one, a smaller, redesigned variant which went on to outsell all other consoles that year, including the PlayStation 2. In 2005, the PlayStation became the first console to ship 100 million units, with the PlayStation 2 later achieving this faster than its predecessor. The combined successes of both PlayStation consoles led Sega to retire the Dreamcast in 2001 and abandon the console business entirely. The PlayStation was eventually discontinued on 23 March 2006—over eleven years after its release, and less than a year before the debut of the PlayStation 3.

Hardware

The main microprocessor is an R3000 CPU made by LSI Logic, operating at a clock rate of 33.8688 MHz and delivering 30 MIPS. This 32-bit CPU relies heavily on the "cop2" 3D and matrix-math coprocessor on the same die to provide the speed necessary to render complex 3D graphics. The role of the separate GPU chip is to draw 2D polygons and apply shading and textures to them: the rasterisation stage of the graphics pipeline. Sony's custom 16-bit sound chip supports ADPCM sources with up to 24 sound channels, and offers a sampling rate of up to 44.1 kHz and music sequencing. The console features 2 MB of main RAM, with an additional 1 MB of video RAM. The PlayStation has a maximum colour depth of 16.7 million true colours, with 32 levels of transparency and unlimited colour look-up tables. The PlayStation can output composite, S-Video, or RGB video signals through its AV Multi connector (older models also having RCA connectors for composite), displaying resolutions from 256×224 to 640×480 pixels. Different games can use different resolutions; a rough check of how these display modes fit into video RAM appears in the sketch below. Earlier models also had proprietary parallel and serial ports that could be used to connect accessories or multiple consoles together; these were later removed due to lack of use. The PlayStation uses a proprietary video compression unit, the MDEC, which is integrated into the CPU and allows for the presentation of full-motion video at a higher quality than other consoles of its generation. Unusually for the time, the PlayStation lacks a dedicated 2D graphics processor; 2D elements are instead calculated as polygons by the Geometry Transfer Engine (GTE) so that they can be processed and displayed on screen by the GPU. While running, the GPU can generate a total of 4,000 sprites and 180,000 textured and shaded polygons per second, or 360,000 flat-shaded polygons per second.
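The back-of-the-envelope sketch referenced above, assuming a 16-bits-per-pixel framebuffer (a common choice; the console's 24-bit true-colour modes use proportionally more). This is an illustrative capacity calculation, not a description of the actual VRAM layout.

VRAM_BYTES = 1 * 1024 * 1024  # the console's 1 MB of video RAM

def framebuffer_bytes(width, height, bits_per_pixel=16):
    """Size of one framebuffer at the given resolution and colour depth."""
    return width * height * bits_per_pixel // 8

# Display modes range from 256x224 to 640x480.
for w, h in [(256, 224), (320, 240), (640, 480)]:
    single = framebuffer_bytes(w, h)
    double = 2 * single  # double buffering leaves less room for textures
    print(f"{w}x{h}: one buffer = {single / 1024:.0f} KiB, "
          f"double-buffered = {double / 1024:.0f} KiB of {VRAM_BYTES // 1024} KiB VRAM")

Under these assumptions a double-buffered 640×480 frame would exceed the 1 MB of video RAM, which is consistent with the highest-resolution modes being impractical for typical texture-heavy 3D games.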
The PlayStation went through a number of variants during its production run. Externally, the most notable change was the gradual reduction in the number of external connectors on the rear of the unit. This started with the original Japanese launch units: the SCPH-1000, released on 3 December 1994, was the only model with an S-Video port, which was removed from the next model. Subsequent models saw a reduction in the number of parallel ports, with the final version retaining only one serial port. Sony also marketed a development kit for amateur developers known as the Net Yaroze (meaning "Let's do it together" in Japanese). It was launched in June 1996 in Japan and, following public interest, was released the next year in other countries. The Net Yaroze allowed hobbyists to create their own games and upload them via an online forum run by Sony. The console was only available to buy through an ordering service, and came with the documentation and software necessary to program PlayStation games and applications using C compilers.

On 7 July 2000, Sony released the PS One (stylised as "PS one" or "PSone"), a smaller, redesigned version of the original PlayStation. It was the highest-selling console through the end of the year, outselling all other consoles—including the PlayStation 2. In 2002, Sony released a 5-inch (130 mm) LCD screen add-on for the PS One, referred to as the "Combo pack". It also included a car cigarette-lighter adaptor, adding an extra layer of portability. Production of the LCD "Combo pack" ceased in 2004, when the popularity of the PlayStation began to wane in markets outside Japan. A total of 28.15 million PS One units had been sold by the time it was discontinued in March 2006.

Three iterations of the PlayStation's controller were released over the console's lifespan. The first, the PlayStation controller, was released alongside the console in December 1994. It features four individual directional buttons (as opposed to a conventional D-pad), a pair of shoulder buttons on each side, Start and Select buttons in the centre, and four face buttons consisting of simple geometric shapes: a green triangle, a red circle, a blue cross, and a pink square (△, ○, ✕, □). Rather than depicting traditionally used letters or numbers on its buttons, the PlayStation controller established a trademark that would be incorporated heavily into the PlayStation brand. Teiyu Goto, the designer of the original PlayStation controller, said that the circle and cross represent "yes" and "no", respectively (though this layout is reversed in Western versions); the triangle symbolises a point of view, and the square is equated to a sheet of paper, to be used to access menus. The European and North American models of the original PlayStation controller are roughly 10% larger than the Japanese variant, to account for the fact that the average person in those regions has larger hands than the average Japanese person. Sony's first analogue gamepad, the PlayStation Analog Joystick (often erroneously referred to as the "Sony Flightstick"), was first released in Japan in April 1996. Featuring two parallel joysticks, it uses potentiometer technology previously used on consoles such as the Vectrex; instead of relying on binary eight-way switches, the controller detects minute angular changes through the entire range of motion. The stick also features a thumb-operated digital hat switch on the right joystick, corresponding to the traditional D-pad and used for instances when simple digital movements were necessary. The Analog Joystick sold poorly in Japan due to its high cost and cumbersome size. The increasing popularity of 3D games prompted Sony to add analogue sticks to its controller design, to give users more freedom over their movements in virtual 3D environments. The first official analogue controller, the Dual Analog Controller, was revealed to the public in a small glass booth at the 1996 PlayStation Expo in Japan, and released in April 1997 to coincide with the Japanese releases of the analogue-capable games Tobal 2 and Bushido Blade. In addition to the two analogue sticks (which also introduced two new buttons, mapped to clicking in the sticks), the Dual Analog controller features an "Analog" button and LED beneath the "Start" and "Select" buttons, which toggles analogue functionality on or off. The controller also features rumble support, though Sony decided that haptic feedback would be removed from all overseas iterations before the United States release.
A Sony spokesman stated that the feature was removed for "manufacturing reasons", although rumours circulated that Nintendo had attempted to legally block the release of the controller outside Japan due to similarities with the Nintendo 64 controller's Rumble Pak. However, a Nintendo spokesman denied that Nintendo took legal action. Next Generation's Chris Charla theorised that Sony dropped vibration feedback to keep the price of the controller down. In November 1997, Sony introduced the DualShock controller. Its name derives from its use of two (dual) vibration motors (shock). Unlike its predecessor, its analogue sticks feature textured rubber grips, longer handles and slightly different shoulder buttons, and rumble feedback is included as standard on all versions. The DualShock later replaced its predecessors as the default controller. Sony released a series of peripherals to add extra layers of functionality to the PlayStation. Such peripherals include memory cards, the PlayStation Mouse, the PlayStation Link Cable, the Multiplayer Adapter (a four-player multitap), the Memory Drive (a disk drive for 3.5-inch floppy disks), the GunCon (a light gun), and the Glasstron (a monoscopic head-mounted display). Released exclusively in Japan, the PocketStation is a memory card peripheral which acts as a miniature personal digital assistant. The device features a monochrome liquid crystal display (LCD), infrared communication capability, a real-time clock, built-in flash memory, and sound capability. Sharing similarities with the Dreamcast's VMU peripheral, the PocketStation was typically distributed with certain PlayStation games, enhancing them with added features. The PocketStation proved popular in Japan, selling over five million units. Sony planned to release the peripheral outside Japan, but the release was cancelled despite promotion in Europe and North America. In addition to playing games, most PlayStation models are equipped to play CD Audio. The Asian model SCPH-5903 can also play Video CDs. Like most CD players, the PlayStation can play songs in a programmed order, shuffle the playback order of the disc, and repeat one song or the entire disc. Later PlayStation models include a music visualisation function called SoundScope. This function, as well as a memory card manager, is accessed by starting the console without a game inserted or with the CD tray open, which brings up a graphical user interface (GUI) for the PlayStation BIOS. The GUIs of the PlayStation and PS One differ depending on the firmware version: the original PlayStation GUI had a dark blue background with rainbow graffiti used as buttons, while the early PAL PlayStation and PS One GUI had a grey blocked background with two icons in the middle. PlayStation emulation is versatile and can be run on numerous modern devices. Bleem! was a commercial emulator released for IBM-compatible PCs and the Dreamcast in 1999. It was notable for being aggressively marketed during the PlayStation's lifetime, and was the centre of multiple controversial lawsuits filed by Sony. Bleem! was programmed in assembly language, which allowed it to emulate PlayStation games with improved visual fidelity, enhanced resolutions, and filtered textures that were not possible on original hardware. Sony sued Bleem! two days after its release, citing copyright infringement and accusing the company of engaging in unfair competition and patent infringement by allowing the use of PlayStation BIOSes on a Sega console. Bleem!
was subsequently forced to shut down in November 2001. Sony was aware that using CDs for game distribution could leave games vulnerable to piracy, due to the growing popularity of CD-R media and optical disc drives with burning capability. To preclude illegal copying, a proprietary process for PlayStation disc manufacturing was developed which, in conjunction with an augmented optical drive assembly, prevented burned copies of games from booting on an unmodified console. Specifically, all genuine PlayStation discs were pressed with a small section of deliberately irregular data, which the PlayStation's optical pick-up was capable of detecting and decoding. Consoles would not boot game discs without a specific wobble frequency contained in the data of the disc's pregap sector (the same system was also used to encode discs' regional lockouts; a toy software model of this check appears below). This signal was within Red Book CD tolerances, so the actual content of PlayStation discs could still be read by a conventional disc drive; however, a conventional drive could not detect the wobble frequency, and therefore produced duplicates that omitted it, since the laser pick-up system of any optical disc drive interprets this wobble as an oscillation of the disc surface and compensates for it during reading. Early PlayStations, particularly early SCPH-1000 models, can exhibit skipping during full-motion video or physical "ticking" noises from the unit. The problems stem from poorly placed vents leading to overheating in some environments, causing the plastic mouldings inside the console to warp slightly and create knock-on effects with the laser assembly. The solution is to sit the console on a surface which dissipates heat efficiently, in a well-ventilated area, or to raise the unit slightly from its resting surface. Sony representatives also recommended unplugging the PlayStation when not in use, as the system draws a small amount of power (and therefore generates heat) even when turned off. The first batch of PlayStations use a KSM-440AAM laser unit, whose case and movable parts are all built out of plastic. Over time, the plastic lens sled rail wears out, usually unevenly, due to friction. The placement of the laser unit close to the power supply accelerates wear, as the additional heat makes the plastic more vulnerable to friction. Eventually, one side of the lens sled becomes so worn that the laser can tilt, no longer pointing directly at the CD; after this, games will no longer load due to data read errors. Sony fixed the problem by making the sled out of die-cast metal and placing the laser unit further away from the power supply on later PlayStation models. Due to an engineering oversight, the PlayStation does not produce a proper signal on several older models of television, causing the display to flicker or bounce around the screen. Sony decided not to change the console design, since only a small percentage of PlayStation owners used such televisions, and instead gave consumers the option of sending their PlayStation unit to a Sony service centre to have an official modchip installed, allowing play on older televisions. Game library The PlayStation featured a diverse game library which grew to appeal to all types of players. Critically acclaimed PlayStation games included Final Fantasy VII (1997), Crash Bandicoot (1996), Spyro the Dragon (1998), and Metal Gear Solid (1998), all of which became established franchises.
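Returning to the disc-authentication scheme described above, the following toy model is purely illustrative: the names and values are hypothetical, and the real check lives in the drive mechanics and firmware rather than in software. It captures the essential asymmetry, namely that a pressing plant can mould the wobble signal into a disc, while a conventional burner's servo treats the same wobble as mechanical run-out and compensates it away, so the signal never reaches the copy.

# Illustrative toy model of the PlayStation disc check (hypothetical names
# and values; the real mechanism is implemented in the drive hardware).
from dataclasses import dataclass
from typing import Optional

@dataclass
class Disc:
    data: bytes                 # ordinary CD data, readable by any drive
    wobble_code: Optional[str]  # out-of-band signal moulded into the groove

def press_original(data: bytes, region: str) -> Disc:
    # A licensed pressing plant physically moulds the wobble signal in.
    return Disc(data=data, wobble_code=region)

def burn_copy(source: Disc) -> Disc:
    # A conventional CD writer copies the data perfectly, but its servo
    # "corrects" the wobble away, so the authentication signal is lost.
    return Disc(data=source.data, wobble_code=None)

def console_boots(disc: Disc, console_region: str) -> bool:
    # The console's pick-up decodes the wobble; a missing or foreign
    # region code (the regional lockout) means the disc will not boot.
    return disc.wobble_code == console_region

original = press_original(b"GAME.BIN", region="REGION-A")  # hypothetical code
copy = burn_copy(original)
print(console_boots(original, "REGION-A"))  # True:  pressed disc boots
print(console_boots(copy, "REGION-A"))      # False: burned copy rejected
print(console_boots(original, "REGION-B"))  # False: regional lockout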
Final Fantasy VII is credited with allowing role-playing games to gain mass-market appeal outside Japan, and is considered one of the most influential and greatest video games ever made. The PlayStation's best-selling game is Gran Turismo (1997), which sold 10.85 million units. After the PlayStation's discontinuation in 2006, the cumulative software shipment stood at 962 million units. Following its 1994 launch in Japan, early games included Ridge Racer, Crime Crackers, King's Field, Motor Toon Grand Prix, Toh Shin Den (released in the West as Battle Arena Toshinden), and Kileak: The Blood. The first two games available at its later North American launch were Jumping Flash! (1995) and Ridge Racer, with Jumping Flash! heralded as a forerunner of 3D graphics in console gaming. Wipeout, Air Combat, Twisted Metal, Warhawk and Destruction Derby were among the popular first-year games, and the first to be reissued as part of Sony's Greatest Hits or Platinum range. At the time of the PlayStation's first Christmas season, Psygnosis had produced around 70% of its launch catalogue; their breakthrough racing game Wipeout was acclaimed for its techno soundtrack and helped raise awareness of Britain's underground music community. Eidos Interactive's action-adventure game Tomb Raider contributed substantially to the success of the console in 1996, with its main protagonist Lara Croft becoming an early gaming icon and garnering unprecedented media promotion. Licensed tie-in video games of popular films were also prevalent; Argonaut Games' 2001 adaptation of Harry Potter and the Philosopher's Stone went on to sell over eight million copies late in the console's lifespan. Third-party developers remained largely committed to the console's wide-ranging game catalogue even after the launch of the PlayStation 2; notable exclusives in this era include Harry Potter and the Philosopher's Stone, Fear Effect 2: Retro Helix, Syphon Filter 3, C-12: Final Resistance, Dance Dance Revolution Konamix and Digimon World 3.[c] Sony assisted with game reprints as late as 2008 with Metal Gear Solid: The Essential Collection, the last PlayStation game officially released and licensed by Sony. Initially, in the United States, PlayStation games were packaged in long cardboard boxes, similar to non-Japanese 3DO and Saturn games. Sony later switched to the jewel case format typically used for audio CDs and Japanese video games, as this format took up less retailer shelf space (which was at a premium due to the large number of PlayStation games being released), and focus testing showed that most consumers preferred this format. Reception The PlayStation was mostly well received upon release. Critics in the West generally welcomed the new console. The staff of Next Generation reviewed the PlayStation a few weeks after its North American launch, commenting that, while the CPU is "fairly average", the supplementary custom hardware, such as the GPU and sound processor, is stunningly powerful. They praised the PlayStation's focus on 3D, and complimented the comfort of its controller and the convenience of its memory cards. Giving the system 4½ out of 5 stars, they concluded, "To succeed in this extremely cut-throat market, you need a combination of great hardware, great games, and great marketing. Whether by skill, luck, or just deep pockets, Sony has scored three out of three in the first salvo of this war." Albert Kim from Entertainment Weekly praised the PlayStation as a technological marvel rivalling the hardware of Sega and Nintendo.
In May 1995, Famicom Tsūshin scored the console 19 out of 40, lower than the Saturn's 24 out of 40. In a 1997 year-end review, a team of five Electronic Gaming Monthly editors gave the PlayStation scores of 9.5, 8.5, 9.0, 9.0, and 9.5; for every editor, this was the highest score they gave to any of the five consoles reviewed in the issue. They lauded the breadth and quality of the games library, saying it had vastly improved over previous years due to developers mastering the system's capabilities, in addition to Sony revising its stance on 2D and role-playing games. They also complimented the low price point of the games compared to the Nintendo 64's, and noted that it was the only console on the market that could be relied upon to deliver a solid stream of games for the coming year, primarily because third-party developers almost unanimously favoured it over its competitors. Legacy SCE was an upstart in the video game industry in late 1994, as the video game market in the early 1990s was dominated by Nintendo and Sega. Nintendo had been the clear leader in the industry since the introduction of the Nintendo Entertainment System in 1985, and the Nintendo 64 was initially expected to maintain this position. The PlayStation's target audience included the generation which was the first to grow up with mainstream video games, along with 18- to 29-year-olds who were not the primary focus of Nintendo. By the late 1990s, Sony had become a highly regarded console brand due to the PlayStation, with a significant lead over second-place Nintendo, while Sega was relegated to a distant third. The PlayStation became the first "computer entertainment platform" to ship over 100 million units worldwide, with many critics attributing the console's success to third-party developers. It remains the sixth best-selling console of all time as of 2025[update], with a total of 102.49 million units sold. Around 7,900 individual games were published for the console during its 11-year lifespan, the second-most games ever produced for a console. Its success was a significant financial boon for Sony, with profits from its video game division coming to account for 23% of the company's overall profits. Sony's next-generation PlayStation 2, which is backward compatible with the PlayStation's DualShock controller and games, was announced in 1999 and launched in 2000. The PlayStation's lead in installed base and developer support paved the way for the success of its successor, which overcame the earlier launch of Sega's Dreamcast and then fended off competition from Microsoft's newcomer Xbox and Nintendo's GameCube. The PlayStation 2's immense success and the Dreamcast's failure were among the main factors which led to Sega abandoning the console market. To date, five PlayStation home consoles have been released, continuing the same numbering scheme, as well as two portable systems. The PlayStation 3 also maintained backward compatibility with original PlayStation discs. Hundreds of PlayStation games have been digitally re-released on the PlayStation Portable, PlayStation 3, PlayStation Vita, PlayStation 4, and PlayStation 5. The PlayStation has often ranked among the best video game consoles. In 2018, Retro Gamer named it the third best console, crediting its sophisticated 3D capabilities as one of the key factors in its mass success, and lauding it as a "game-changer in every sense possible".
In 2009, IGN ranked the PlayStation the seventh best console on its list, noting its appeal to older audiences as a crucial factor in propelling the video game industry, as well as its role in transitioning the industry to the CD-ROM format. Keith Stuart from The Guardian likewise named it the seventh best console in 2020, declaring that its success was so profound it "ruled the 1990s". In January 2025, Lorentio Brodesco announced the nsOne project, an attempt to reverse-engineer the PlayStation's motherboard. Brodesco stated that "detailed documentation on the original motherboard was either incomplete or entirely unavailable". The project was successfully crowdfunded via Kickstarter. In June, Brodesco manufactured the first working motherboard, promising a fully routed version with multilayer routing, as well as documentation and design files, in the near future. The success of the PlayStation contributed to the demise of cartridge-based home consoles. While not the first system to use an optical disc format, it was the first highly successful one, and ended up going head-to-head with the cartridge-based Nintendo 64,[d] which the industry had expected to adopt CDs like the PlayStation. After the demise of the Sega Saturn, Nintendo was left as Sony's main competitor in Western markets. Nintendo chose not to use CDs for the Nintendo 64, likely out of concern for the proprietary cartridge format's ability to help enforce copy protection, given its substantial reliance on licensing and exclusive games for revenue. Besides their larger capacity, CD-ROMs could be produced in bulk at a much faster rate than ROM cartridges: about a week, compared to two to three months. Further, per-unit production costs were far lower, allowing Sony to offer games at roughly 40% lower prices than ROM cartridge titles while still making the same net revenue. In Japan, Sony published fewer copies of a wide variety of games for the PlayStation as a risk-limiting step, a model that had been used by Sony Music for CD audio discs. The production flexibility of CD-ROMs meant that Sony could produce larger volumes of popular games to get onto the market quickly, something that could not be done with cartridges due to their manufacturing lead time. The lower production costs of CD-ROMs also gave publishers an additional source of profit: budget-priced reissues of games which had already recouped their development costs. Tokunaka remarked in 1996: Choosing CD-ROM is one of the most important decisions that we made. As I'm sure you understand, PlayStation could just as easily have worked with masked ROM [cartridges]. The 3D engine and everything—the whole PlayStation format—is independent of the media. But for various reasons (including the economies for the consumer, the ease of the manufacturing, inventory control for the trade, and also the software publishers) we deduced that CD-ROM would be the best media for PlayStation. The increasing complexity of developing games pushed cartridges to their storage limits and gradually discouraged some third-party developers. Part of the CD format's appeal to publishers was that discs could be produced at a significantly lower cost and offered more production flexibility to meet demand.
As a result, some third-party developers switched to the PlayStation, including Square and Enix, whose Final Fantasy VII and Dragon Quest VII respectively had been planned for the Nintendo 64 (the two companies later merged to form Square Enix). Other developers released fewer games for the Nintendo 64; Konami, for example, released only thirteen N64 games but over fifty on the PlayStation. Nintendo 64 game releases were less frequent than the PlayStation's, with many being developed either by Nintendo itself or by second parties such as Rare. The PlayStation Classic is a dedicated video game console made by Sony Interactive Entertainment that emulates PlayStation games. It was announced in September 2018 at the Tokyo Game Show, and released on 3 December 2018, the 24th anniversary of the release of the original console. As a dedicated console, the PlayStation Classic features 20 pre-installed games, which run on the open-source emulator PCSX. The console is bundled with two replica wired PlayStation controllers (the version without analogue sticks), an HDMI cable, and a USB Type-A cable. Internally, the console uses a MediaTek MT8167a system on a chip with four Cortex-A35 central processing cores clocked at 1.5 GHz and a PowerVR GE8300 graphics processing unit. It includes 16 GB of eMMC flash storage and 1 GB of DDR3 SDRAM. The PlayStation Classic is 45% smaller than the original console. It received negative reviews from critics and was compared unfavourably to Nintendo's rival Nintendo Entertainment System Classic Edition and Super Nintendo Entertainment System Classic Edition. Criticism was directed at its meagre game library, user interface, emulation quality, use of PAL versions for certain games, use of the original controller, and high retail price, though the console's design received praise. The console sold poorly.
========================================
[SOURCE: https://github.com/features/copilot/copilot-business] | [TOKENS: 701]
Build what’s next with GitHub Copilot GitHub Copilot equips you to build the future, whether you're charged with scaling operations or boosting developer productivity. AI that grows with you. Use your code as context while setting boundaries for what to exclude and governance on use. Velocity with quality. Developers want tools without toil, and GitHub Copilot provides AI assistance from the IDE to GitHub to the CLI and more, with agents to review and suggest. Choose your AI adventure. From choice of model to third-party integrations, GitHub Copilot meets your challenges your way. The competitive advantage developers ask for by name Since bringing GitHub Copilot to market, we’ve conducted several lab studies to discover its impact on developer efficiency, developer satisfaction, and overall code quality. For the second year in a row, Gartner has recognized GitHub as highest and furthest on both Ability to Execute and Completeness of Vision among AI code assistants. GitHub is committed to building secure defaults for developers and organizations. 55% faster coding. 39% improvement in code quality. 68% had a positive experience. Resources and insights See how our recent and upcoming releases can help your organization drive efficiency, security, and innovation. Many enterprises quite reasonably ask, “How do I know Copilot is conferring these benefits for my team?” To answer that question, this guide will walk you through a framework for evaluating impact across four stages. Developers tell us how GitHub Copilot and other AI coding tools are transforming their work and changing how they spend their days. Insights, best practices, and knowledge to help you adopt GitHub quickly and efficiently. Leading organizations choose GitHub to plan, build, secure and ship software. Thought leadership from subject matter experts that extends beyond tooling into business impact. Whether you're charged with scaling enterprise operations or boosting developer productivity, GitHub Copilot equips you to build what’s next. Yes. GitHub Copilot functionality works in code editors regardless of code hosting platform. Some features are enhanced with the use of GitHub because Copilot can directly draw context and knowledge from repositories, pull requests, issues, and other data structures in the GitHub platform. No. GitHub does not use either Copilot Business or Enterprise data to train its models. Yes, GitHub Copilot includes an optional code-referencing filter to detect and suppress certain suggestions that match public code on GitHub. GitHub has created a duplication detection filter to detect and suppress suggestions that contain code segments over a certain length that match public code on GitHub. This filter can be enabled by the administrator for your enterprise and applied across all organizations within your enterprise, or the administrator can defer control to individual organizations. With the filter enabled, Copilot checks code suggestions for matches or near-matches against public code on GitHub of 65 lexemes or more (on average, 150 characters). If there is a match, the suggestion will not be shown to the user.
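The duplication-detection behaviour described above can be pictured with a small sketch. This is an illustration only, not GitHub's implementation; the only parameters taken from the description are the 65-lexeme match length and the suppression behaviour, and the crude tokenizer here is an assumption.

# Toy duplication-detection filter (illustrative; GitHub's actual matcher
# is not public). Suppresses a suggestion when it contains a run of 65
# lexemes that exactly matches indexed public code.
import re

LEXEME_THRESHOLD = 65  # stated match length (roughly 150 characters on average)

def lexemes(code):
    # Crude lexer: identifiers, numbers, and individual symbol characters.
    return re.findall(r"[A-Za-z_]\w*|\d+|\S", code)

def build_index(public_sources):
    # Index every 65-lexeme window found in the public corpus.
    index = set()
    for src in public_sources:
        toks = lexemes(src)
        for i in range(len(toks) - LEXEME_THRESHOLD + 1):
            index.add(tuple(toks[i:i + LEXEME_THRESHOLD]))
    return index

def should_suppress(suggestion, index):
    # Any 65-lexeme window shared with public code suppresses the suggestion
    # (a longer shared run necessarily contains a matching 65-lexeme window).
    toks = lexemes(suggestion)
    return any(tuple(toks[i:i + LEXEME_THRESHOLD]) in index
               for i in range(len(toks) - LEXEME_THRESHOLD + 1))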
In addition to off-topic, harmful, and offensive output filters, GitHub Copilot also scans the outputs for vulnerable code. Yes. GitHub and customers can enter a Data Protection Agreement that supports compliance with the GDPR and similar legislation.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Subbotniks] | [TOKENS: 2449]
Contents Subbotniks Subbotniks (Russian: Субботники, IPA: [sʊˈbotnʲɪkʲɪ], "Sabbatarians") is a common name for adherents of Russian religious movements that split from Sabbatarian sects in the late 18th century. The majority of Subbotniks were converts to Rabbinic or Karaite Judaism from Christianity. Other groups included Judaizing Christians and Spiritual Christians. There are three main groups of people described as Subbotniks: A 1912 religious census in Russia recorded 12,305 "Judaizing Talmudists", 4,092 "Russian Karaites", and 8,412 Subbotniks who "had fallen away from Orthodoxy". On the whole, the Subbotniks probably differed little from other Judaizing societies in their early years. They first appeared toward the end of the 18th century, during the reign of Catherine the Great. According to official reports of the Russian Empire, most[citation needed] of the sect's followers circumcised their boys, believed in a unitary God rather than in the Christian Trinity, accepted only the Hebrew Bible, and observed the Sabbath on Saturday rather than on Sunday as in Christian practice (and hence were called "sabbatarians"). There were variations among their beliefs in relation to Jesus, the Second Coming, and other elements of Eastern Orthodox doctrine. Prior to the First Partition of Poland in 1772, few Jews had settled in the Russian Empire. The Subbotniks were originally Christian peasants of the Russian Orthodox Church. During the reign of Catherine the Great (1729–1796), they adopted elements of Mosaic Law from the Old Testament and were known as "Sabbatarians", part of the Spiritual Christianity movement. Subbotnik families settled in the Holy Land, at the time part of the Ottoman Empire, in the 1880s as part of the Zionist First Aliyah, in order to escape oppression in the Russian Empire; they later mostly intermarried with Jews. Examples of Israeli Jews descended from Subbotniks include Alexander Zaïd; Major-General Alik Ron; and former Israeli foreign minister, prime minister, and general Ariel Sharon. History Subbotniks, called sabbatarians for their observance of the Sabbath on Saturday, as in the Hebrew Bible, rather than on Sunday, arose as part of the Spiritual Christian movement in the 18th century. Imperial Russian officials and Orthodox clergy considered the Subbotniks heretical to the Russian Orthodox religion, and tried to suppress their sects and other Judaizers. The Subbotniks emphasized individual interpretation of the law rather than accepting the authority of the Talmud or clergy, and they concealed their religious beliefs and rites from Orthodox Christians. The Russian government eventually deported the Subbotniks, isolating them from Orthodox Christians and Jews. The Subbotniks observed the Sabbath on Saturday, and were also known as sabbatarians. On the Sabbath they avoided work and tried to avoid discussing worldly affairs. Apart from practicing circumcision of boys, many began to slaughter their food animals according to the laws of shechita when they could learn the necessary rules. Some clandestinely used phylacteries, tzitzit (ritual tassels), and mezuzot (doorpost markings), and prayed in private houses of prayer. As their practice deepened, some acquired Jewish "siddur" prayer books with Russian translation for their prayers. The hazzan (cantor) read the prayers aloud, and the congregants prayed silently; during prayers a solemn silence was observed throughout the house.
According to the testimony, private and official, of all those who studied their mode of life in tsarist times, the Subbotniks were remarkably industrious, literate, and hospitable, and not given to drunkenness, poverty, or prostitution. Up to 1820 the Subbotniks lived for the most part in the governorates of Voronezh, Oryol, Moscow, Tula, and Saratov. After that year, the government deported those who openly acknowledged their membership in the sect to the foothills of the Caucasus, to Transcaucasia, and to the Siberian governorates of Irkutsk, Tobolsk, and Yeniseisk. In 1912, the government's Interior Ministry recorded 8,412 Subbotniks; 12,305 Judaizing Talmudists; and 4,092 Russian Karaites. Under Alexander I's policies of general tolerance, the Subbotniks enjoyed a great deal of freedom. But the Russian clergy opposed them and killed about 100 Subbotniks and their spiritual leaders in Mogilev, in present-day Belarus, including the former archbishop Romantzov[citation needed]. In addition, Romantzov's young son was tortured with red-hot irons before being burned at the stake. The Subbotniks came to an agreement with the Russian Orthodox priests and succeeded in gaining a measure of peace for a period. To compensate the Church for any loss of income due to the Subbotniks leaving their congregations, the members of the sect undertook to pay the Church the usual fee of two Russian rubles for every birth and three rubles for every marriage. The tsar permitted the Subbotniks to profess their faith openly, but prohibited them from hiring rabbis or proselytizing among Christians. Under Nicholas I, the Subbotniks grew restless. Some wanted to embrace Judaism and traveled into the Pale of Settlement in order to learn more about it. Upon learning this, the Russian government sent a number of priests to the Subbotniks to try to persuade them to return to Russian Orthodoxy. When the priests did not meet with any appreciable success, the government decided to suppress the Subbotniks by force. In 1826, it began deporting those who lived openly as Subbotniks into internal exile in the above-mentioned regions of the Caucasus, Transcaucasia, and Siberia. At the same time, it prohibited Jews and members of the Russian Orthodox Church from settling among any Subbotniks. Subbotnik communities were among the early supporters of Zionism. During the First Aliyah at the end of the 19th century, thousands of Subbotniks settled in Ottoman Palestine to escape religious persecution arising from their differences with the Russian Orthodox Church. Some Subbotniks had immigrated to Ottoman Palestine even prior to the First Aliyah. The Subbotniks faced hurdles when intermarrying into the wider Jewish population, as they were not considered Jews according to halakha. They were noted for often being more religiously observant than the mostly secular Jewish Zionist population of that period. They Hebraized their surnames to assimilate. Within a short period, the descendants of the Subbotnik Jews who arrived in Ottoman Palestine in the late 19th century had blended completely, through intermarriage, into the wider Jewish population of Israel. Subbotniks in Nazi-occupied areas of Ukraine were killed by SS Einsatzgruppen troops and local Ukrainian collaborators due to their Jewish self-identity. They were relatively recent migrants to Ukraine from the Voronezh area and were considered outsiders by the local peasants, who noted their practice of some Jewish customs. During the Holocaust, the Nazis killed thousands of Subbotniks.
By contrast, they did not attack the Crimean Karaites, accepting the state's records that they were ethnic Tatars (or Khazars). Following their massacre in the Holocaust, the Subbotniks came to have an increasingly nationalist self-identification as Jews. However, after the war, the Soviet government ceased to recognize "Subbotnik" as a legal ethnic category, counting these people instead as a subset of the ethnic Russian population. Between 1973 and 1991, the Subbotniks of Ilyinka in Voronezh Oblast emigrated to Israel. After the fall of the Soviet Union, a few thousand Subbotniks left Russia for Israel. This coincided with the 1990s post-Soviet aliyah to Israel of more than a million Russian Jews and members of their immediate families. Since that period, Subbotniks remaining in Russia have encountered status-related problems. In the 21st century, Shavei Israel, an organization for outreach to "lost Jews" and related communities, appointed a rabbi for the Subbotniks at Vysoky in Voronezh Oblast, with the objective of teaching them Judaism and facilitating their formal conversion to Orthodox Judaism, which would make them eligible for aliyah to Israel. State of Israel In the early 21st century, the issue arose of the Jewish identity of some members of Moshav Yitav, located in the Jordan Valley north of Jericho in the West Bank, who were Subbotniks, immigrants from former Soviet Georgia. In 2004, the Sephardic Chief Rabbi of Israel, Shlomo Amar, ruled that the Subbotniks were not defined as Jewish and would have to undergo an Orthodox conversion. The Interior Ministry classified the Subbotniks as a Christian sect ineligible for aliyah to Israel, because no one knew whether their ancestors had formally converted to Judaism. The ruling was abolished in 2014, with an attempt by the Interior Ministry to allow the remaining Subbotnik families to immigrate to Israel. Statistics It has been difficult to estimate the exact number of Subbotniks in Russia at any given time, and the discrepancies between government statistics and the movement's own membership figures have been wide. Official data from tsarist times placed the membership of the sect at several thousand. The writer E. Deinard, who was in personal contact with the Subbotniks, said in 1887 that there were 2,500,000. Deinard may have included in his figures all of the Judaizing sects, and not just the Subbotniks, as this estimate is not supported by any other historians. Apart from their religious rites, the Subbotniks were generally indistinguishable from Russian Orthodox or secular Russians in terms of dress and lifestyle. Subbotnik Karaites Besides Tambov, Subbotnik Karaites also lived in Saratov Oblast, Astrakhan Oblast, Volgograd Oblast, Stavropol Krai, Samara Oblast, Khakassia, and Irkutsk Oblast; along the Molochna River in Novorossiya; in Krasnodar Krai, Armenia, and Azerbaijan; and along the Russian Empire's borders with Iran. While not all statistics for all provinces are readily available, there are more than 2,500 in Privolnoye, Azerbaijan, alone. From 1870 they began to use the "Everyday Prayers for Karaites" by Abraham Firkovich (1870, Vilnius) for their liturgy, which in 1882 they were allowed to publish in Russian as "Порядок молитв для караимов" (tr. Poryadok molitv dlya karaimov). It was based on the Siddur Tefillot keMinhag haKaraim by Isaak ben Solomon Ickowicz.
The Subbotnik Karaites' contacts with the Crimean Karaites, who to a degree exemplified for them "a Jewish model to be imitated", "were occasional and never formally arranged since, in particular, normative Karaism denied the acceptance of proselytes and regarded the very existence of a community of Karaites of non-Jewish origin senseless". Distribution Due to tsarist persecution, the Subbotniks spread out, creating a wide diaspora; since the 19th century they have lived in a number of countries and regions.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Scavenger] | [TOKENS: 3094]
Contents Scavenger Scavengers are animals that feed on dead and decaying organic matter. Often the term is used to describe the consumption of carrion, the bodies of animals that have died from causes other than predation or the bodies of animals that have been killed by other predators. However, the term is also used to describe animals that feed on rotting plant matter or refuse. Vultures and burying beetles are examples of scavengers that feed on carrion, pink bud moth and stag beetle larvae are examples of scavengers that feed on rotting plant matter, and raccoons and squirrels are examples of scavengers that feed on refuse. Carrion-eating scavengers are called necrophages. Scavengers play an important role in ecosystems by preventing the accumulation of decaying matter and helping to recycle nutrients. The process and rate at which dead plant and animal material is scavenged is affected by both biotic and abiotic factors, such as plant species, carcass size, habitat, temperature, moisture levels, and seasons. Detritivores and decomposers complete this process by consuming the remains left by scavengers. Etymology Scavenger is an alteration of scavager, from Middle English skawager meaning "customs collector", from skawage meaning "customs", from Old North French escauwage meaning "inspection", from schauwer meaning "to inspect", of Germanic origin; akin to Old English scēawian and German schauen meaning "to look at", and modern English "show" (with semantic drift). Related terminology Animals that subsist entirely or mainly on decaying biomass (e.g. dead animals, dead plants) are called obligate scavengers, while those capable of obtaining food via other methods are termed facultative scavengers. Animals that rely specifically on carrion as a food source are called obligate necrophages. Animals that feed on particulate plant or animal matter (e.g. humus, marine snow) are typically categorized as detritivores rather than scavengers. The midge fly Propsilocerus akamusi, which feeds on detritus in the sediment of freshwater lakes, is an example of a detritivore. Types of scavengers Obligate scavenging of carrion (obligate necrophagy) is rare among vertebrates, due to the difficulty of finding enough carrion without expending too much energy. New World vultures such as the black vulture, and Old World vultures such as the griffon vulture, white-backed vulture and lappet-faced vulture, are examples of obligate carrion scavengers. Most of the vertebrates that eat carrion are facultative scavengers, capable of obtaining food via predation or other methods, and eating carrion opportunistically. Many large carnivores that hunt regularly, such as hyenas and jackals, but also animals rarely thought of as scavengers, such as African lions, leopards, and wolves will scavenge if given the chance. They may also use their size and ferocity to intimidate the original hunters into abandoning their kills (the cheetah is a notable victim, rather than a perpetrator). Gulls, crows and magpies frequently scavenge roadkill. Other vertebrates, for example Egyptian mastigures, scavenge to survive during times of food scarcity. Aquatic and semi-aquatic vertebrates feed on carrion too. Carrion-eating scavengers found in marine settings include hagfish, great white sharks, northern wolffish and abyssal grenadiers, and carrion-eating scavengers found in freshwater settings include American alligators, Eurasian otters and common midwife toads. 
Burying beetles, vulture bees and bone skipper flies are examples of obligately necrophagous invertebrates. They are all dependent on carrion during the larval stages of their life cycles. Adult burying beetles and vulture bees feed on carrion too. Other invertebrates, such as blow flies, flesh flies and yellowjackets, also feed on carrion but are not reliant on it for survival. Blow fly and flesh fly larvae can feed on excrement, and some species, for example Chrysomya putoria and Sarcophaga crassipalpis, can feed on living tissue. Yellowjackets, for their part, can hunt caterpillars and other insects and feed on nectar, sap and fruit. In addition to the terrestrial examples above, many aquatic invertebrates consume carrion. The common octopus, European green crab and seven-armed starfish are all marine invertebrates that feed on carrion, while the ribbon leech Erpobdella obscura and the red swamp crayfish are freshwater invertebrates that feed on carrion. Carrion-eating scavengers have numerous adaptations to help them find food (e.g. excellent eyesight and hearing, a strong sense of smell), protect themselves from infection and intoxication (e.g. strong immune systems, toxin-resistant physiologies), gorge themselves when food is available (e.g. expandable intestines), and conserve energy between meals (e.g. gliding flight). Animals that feed on dead plant material are called herbivorous scavengers. Some stag beetles are obligate scavengers of dead plant material. For example, Lucanus cervus is dependent on dead wood during the larval stages of its life cycle. Adult Lucanus cervus beetles lay their eggs near the stumps of dead trees, and the larvae then spend the next 4 to 7 years feeding and growing in size. Types of wood eaten include oak, ash, elm, sycamore, lime and hornbeam. Pink bud moth larvae (also known as pink scavenger caterpillars) are facultative scavengers of dead plant material, feeding on rotting fruits, decaying flowers and leaves, but also the fruits and grains of live plants. Termites are facultative scavengers too: they feed on dead trees and wood, but also on live plants and detritus such as humus and excrement. Darkling beetles (tenebrionids), woodlice, and banana moth larvae are also facultative scavengers of dead plant material. In urban settings, some animals regularly explore public parks and garbage cans for discarded food items that they can eat. Vertebrate examples of this type of scavenger include gulls, crows, feral pigeons, raccoons, baboons, opossums, brown rats, and squirrels. Invertebrate examples include ants and blow flies. In areas where there are municipal dumps, polar bears, elephants, raccoon dogs, red foxes, martens and polecats sometimes scavenge for food. Hyenas also scavenge from municipal dumps in some prey-depleted districts of East Africa. Prehistoric scavengers In prehistoric eras, the species Tyrannosaurus rex may have been an apex predator, preying upon hadrosaurs, ceratopsians, and possibly juvenile sauropods, although some experts have suggested the dinosaur was primarily a scavenger. The debate about whether Tyrannosaurus was an apex predator or a scavenger was among the longest-running feuds in paleontology; however, most scientists now agree that Tyrannosaurus was an opportunistic carnivore, acting mostly as a predator but also scavenging when the opportunity arose. Recent research also shows that while an adult T.
rex would energetically gain little through scavenging, smaller theropods of approximately 500 kg (1,100 lb) might have gained levels similar to those of hyenas, though not enough for them to rely on scavenging. Other research suggests that carcasses of giant sauropods may have made scavenging much more profitable to carnivores than it is now. For example, a single 40-tonne Apatosaurus carcass would have been worth roughly 6 years of calories for an average allosaur. As a result of this resource oversupply, it is possible that some theropods evolved to get most of their calories by scavenging giant sauropod carcasses, and may not have needed to hunt consistently in order to survive. The same study suggested that theropods in relatively sauropod-free environments, such as tyrannosaurs, were not exposed to the same type of carrion oversupply, and were therefore forced to hunt in order to survive. Ecological function Scavengers play a fundamental role in the environment through the removal of decaying organisms, serving as a natural sanitation service. While microscopic and invertebrate decomposers break down dead organisms into simple organic matter that is used by nearby autotrophs, scavengers help conserve the energy and nutrients obtained from carrion within the upper trophic levels, and are able to disperse that energy and those nutrients farther from the site of the carrion than decomposers can. Scavenging unites animals which normally would not come into contact, and results in the formation of highly structured and complex communities which engage in nonrandom interactions. Scavenging communities function in redistributing the energy obtained from carcasses and in reducing the diseases associated with decomposition. Scavenger communities often differ in composition according to carcass size and type, as well as seasonal effects stemming from differing invertebrate and microbial activity. Competition for carrion results in the inclusion or exclusion of certain scavengers from access to carrion, shaping the scavenger community. When carrion decomposes at a slower rate during cooler seasons, competition between scavengers decreases, while the number of scavenger species present increases. Alterations in scavenging communities may result in drastic changes to community structure, reduce ecosystem services, and have detrimental effects on animals and humans. The reintroduction of gray wolves (Canis lupus) into Yellowstone National Park in the United States caused drastic changes to the prevalent scavenging community, resulting in the provision of carrion to many mammalian and avian species. Likewise, the reduction of vulture species in India led to an increase in opportunistic species such as feral dogs and rats. The presence of both species at carcasses resulted in an increase of diseases such as rabies and bubonic plague in wildlife and livestock, as feral dogs and rats are transmitters of such diseases. Furthermore, the decline of vulture populations in India has been linked to increased rates of anthrax in humans due to the handling and ingestion of infected livestock carcasses. An increase in disease transmission has been observed among mammalian scavengers in Kenya following the decrease in vulture populations in the area, as the decline increased both the number of mammalian scavengers at a given carcass and the time they spent there.
Scavenging may provide direct and indirect methods for transmitting disease between animals. Scavengers of infected carcasses may become hosts for certain pathogens and consequently vectors of disease themselves. An example of this phenomenon is the increased transmission of tuberculosis observed when scavengers eat infected carcasses. Likewise, the ingestion of bat carcasses infected with rabies by striped skunks (Mephitis mephitis) resulted in increased infection of these organisms with the virus. Various bird species are major vectors of disease transmission, with outbreaks influenced by the carrier birds and their environment. An avian cholera outbreak from 2006 to 2007 off the coast of Newfoundland, Canada, resulted in the mortality of many marine bird species. The transmission, perpetuation and spread of the outbreak was mainly restricted to gull species that scavenge for food in the area. Similarly, an increase in transmission of avian influenza virus to chickens by domestic ducks from Indonesian farms permitted to scavenge surrounding areas was observed in 2007. The scavenging of ducks in rice paddy fields in particular resulted in increased contact with other bird species feeding on leftover rice, which may have contributed to increased infection and transmission of the avian influenza virus. The domestic ducks may not have demonstrated symptoms of infection themselves, though they were observed to excrete high concentrations of the virus. Threats Many species that scavenge face persecution globally.[citation needed] Vultures, in particular, have faced intense persecution and threats from humans. Before its ban by regional governments in 2006, the veterinary drug diclofenac resulted in at least a 95% decline of Gyps vultures in Asia. Habitat loss and food shortage have contributed to the decline of vulture species in West Africa due to the growing human population and over-hunting of vulture food sources, as well as changes in livestock husbandry. Poisoning certain predators to increase the number of game animals is still a common hunting practice in Europe and contributes to the poisoning of vultures when they consume the carcasses of poisoned predators. Benefits to humans Highly efficient scavengers, also known as dominant or apex scavengers, can have benefits to humans. Increases in dominant scavenger populations, such as vultures, can reduce populations of smaller opportunistic scavengers, such as rats. These smaller scavengers are often pests and disease vectors. In humans In the 1980s, Lewis Binford suggested that early humans primarily obtained meat via scavenging, not through hunting. In 2010, Dennis Bramble and Daniel Lieberman proposed that early carnivorous human ancestors subsequently developed long-distance running behaviors which improved the ability to scavenge and hunt: they could reach scavenging sites more quickly and also pursue a single animal until it could be safely killed at close range due to exhaustion and hyperthermia. In Tibetan Buddhism, the practice of excarnation, that is, the exposure of dead human bodies to carrion birds and other scavenging animals, is the distinctive characteristic of sky burial, which involves the dismemberment of human cadavers whose remains are fed to vultures; it is traditionally the main funerary rite (alongside cremation) used to dispose of the human body.
A similar funerary practice featuring excarnation can be found in Zoroastrianism; in order to prevent the pollution of the sacred elements (fire, earth, and water) through contact with decomposing bodies, human cadavers are exposed on the Towers of Silence to be eaten by vultures and wild dogs. Studies in behavioral ecology and ecological epidemiology have shown that cannibalistic necrophagy, although rare, has been observed as a survival behavior in several social species, including anatomically modern humans; however, episodes of human cannibalism occur rarely in most human societies.[Note 1] Many instances have occurred in human history, especially in times of war and famine, when necrophagy and human cannibalism emerged as survival behaviors, although anthropologists report the use of ritual cannibalism in funerary practices and as the preferred means of disposing of the dead in some tribal societies.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Google_Books] | [TOKENS: 6235]
Contents Google Books Google Books (previously known as Google Book Search, Google Print, and by its code-name Project Ocean) is a service from Google that searches the full text of books and magazines that Google has scanned, converted to text using optical character recognition (OCR), and stored in its digital database. Books are provided either by publishers and authors through the Google Books Partner Program, or by Google's library partners through the Library Project. Additionally, Google has partnered with a number of magazine publishers to digitize their archives. The Publisher Program was first known as Google Print when it was introduced at the Frankfurt Book Fair in October 2004. The Google Books Library Project, which scans works in the collections of library partners and adds them to the digital inventory, was announced in December 2004. The Google Books initiative has been hailed for its potential to offer unprecedented access to what may become the largest online body of human knowledge and for promoting the democratization of knowledge. However, it has also been criticized for potential copyright violations and for the lack of editing to correct the many errors introduced into the scanned texts by the OCR process. As of October 2019[update], Google celebrated 15 years of Google Books and put the number of scanned books at more than 40 million titles. Google estimated in 2010 that there were about 130 million distinct titles in the world, and stated that it intended to scan all of them. However, the scanning process in American academic libraries has slowed since the 2000s. Google Books' scanning efforts have been subject to litigation, including Authors Guild v. Google, a class-action lawsuit in the United States, decided in Google's favor (see below). This was a major case that came close to changing copyright practices for orphan works in the United States. A 2023 study by scholars from the University of California, Berkeley, and Northeastern University's business schools found that Google Books' digitization of books has led to increased sales of the physical versions of those books. Details Results from Google Books show up in both the universal Google Search and in the dedicated Google Books search website (books.google.com). In response to search queries, Google Books allows users to view full pages from books in which the search terms appear if the book is out of copyright or if the copyright owner has given permission. If Google believes the book is still under copyright, a user sees "snippets" of text around the queried search terms. All instances of the search terms in the book text appear with a yellow highlight. The four access levels used on Google Books are full view, preview, snippet view, and no preview available. In response to criticism from groups such as the American Association of Publishers and the Authors Guild, Google announced an opt-out policy in August 2005, through which copyright owners could provide a list of titles that they did not want scanned, and the request would be respected. The company also stated that it would not scan any in-copyright books between August and 1 November 2005, to provide the owners with the opportunity to decide which books to exclude from the Project. Thus, copyright owners have three choices with respect to any work: Most scanned works are no longer in print or commercially available. In addition to procuring books from libraries, Google also obtains books from its publisher partners, through the "Partner Program" – designed to help publishers and authors promote their books.
Publishers and authors submit either a digital copy of their book in EPUB or PDF format, or a print copy to Google, which is made available on Google Books for preview. The publisher can control the percentage of the book available for preview, with the minimum being 20%. They can also choose to make the book fully viewable, and even allow users to download a PDF copy. Books can also be made available for sale on Google Play. Unlike the Library Project, this does not raise any copyright concerns, as it is conducted pursuant to an agreement with the publisher. The publisher can choose to withdraw from the agreement at any time. For many books, Google Books displays the original page numbers. However, Tim Parks, writing in The New York Review of Books in 2014, noted that Google had stopped providing page numbers for many recent publications (likely the ones acquired through the Partner Program) "presumably in alliance with the publishers, in order to force those of us who need to prepare footnotes to buy paper editions." Scanning of books The project began in 2002 under the codename Project Ocean. Google co-founder Larry Page had always had an interest in digitizing books. When he and Marissa Mayer began experimenting with book scanning in 2002, it took 40 minutes for them to digitize a 300-page book. But soon after, the technology had been developed to the extent that scanning operators could scan up to 6,000 pages an hour. Google established designated scanning centers to which books were transported by trucks. The stations could digitize at the rate of 1,000 pages per hour. The books were placed in a custom-built mechanical cradle that held the book spine in place while an array of lights and optical instruments scanned the two open pages. Each page would have two cameras directed at it capturing the image, while a range-finding LIDAR overlaid a three-dimensional laser grid on the book's surface to capture the curvature of the paper. A human operator would turn the pages by hand, using a foot pedal to take the photographs. With no need to flatten the pages or align them perfectly, Google's system not only reached a remarkable efficiency and speed but also helped protect the fragile collections from being over-handled. Afterwards, the crude images went through three levels of processing: first, de-warping algorithms used the LIDAR data to fix the pages' curvature. Then, optical character recognition (OCR) software transformed the raw images into text, and, lastly, another round of algorithms extracted page numbers, footnotes, illustrations and diagrams. Many of the books are scanned using a customized Elphel 323 camera at a rate of 1,000 pages per hour. A patent awarded to Google in 2009 revealed that Google had come up with an innovative system for scanning books that uses two cameras and infrared light to automatically correct for the curvature of pages in a book. By constructing a 3D model of each page and then "de-warping" it, Google is able to present flat-looking pages without actually making the pages flat, which would require destructive methods such as unbinding, or glass plates to flatten each page individually, which is inefficient for large-scale scanning. Google decided to omit color information in favor of better spatial resolution, as most out-of-copyright books at the time did not contain color. Each page image was passed through algorithms that distinguished text regions from illustration regions. Text regions were then processed via OCR to enable full-text searching.
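For illustration, the three-stage pipeline described above can be sketched with open-source stand-ins. This is an assumption-laden toy, not Google's proprietary system: OpenCV's remap substitutes for the LIDAR-driven de-warping (the remapping grids are assumed to be precomputed from the 3D surface model), Tesseract stands in for the OCR stage, and the structure pass is reduced to a trivial heuristic.

# Toy scan-processing pipeline in the spirit of the description above,
# using open-source stand-ins (OpenCV + Tesseract); illustrative only.
import cv2
import pytesseract

def dewarp(page, map_x, map_y):
    # Stage 1: flatten the page image. map_x/map_y are float32 grids the
    # same shape as the image, assumed precomputed from the 3D page model.
    return cv2.remap(page, map_x, map_y, interpolation=cv2.INTER_LINEAR)

def recognise(page):
    # Stage 2: OCR the flattened image into raw text.
    return pytesseract.image_to_string(page)

def extract_structure(text):
    # Stage 3: a trivial structure pass; the real system also finds
    # headers, footers, footnotes, verse vs. prose, and illustrations.
    lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
    page_number = lines[-1] if lines and lines[-1].isdigit() else None
    return {"page_number": page_number, "body": lines}

def process_page(image, map_x, map_y):
    return extract_structure(recognise(dewarp(image, map_x, map_y)))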
Google expended considerable resources in coming up with optimal compression techniques, aiming for high image quality while keeping file sizes minimal to enable access by internet users with low bandwidth. Website functionality For each work, Google Books automatically generates an overview page. This page displays information extracted from the book (its publishing details, a high-frequency word map, the table of contents) as well as secondary material, such as summaries, reader reviews (not readable in the mobile version of the website), and links to other relevant texts. A visitor to the page, for instance, might see a list of books that share a similar genre and theme, or they might see a list of current scholarship on the book. This content, moreover, offers interactive possibilities for users signed into their Google account. They can export the bibliographic data and citations in standard formats, write their own reviews, and add the book to their library to be tagged, organized, and shared with other people. Thus, Google Books collects these more interpretive elements from a range of sources, including the users, third-party sites like Goodreads, and often the book's author and publisher. In fact, to encourage authors to upload their own books, Google has added several functionalities to the website. Authors can allow visitors to download their ebook for free, or they can set their own purchase price. They can change the price back and forth, offering discounts whenever it suits them. Also, if a book's author chooses to add an ISBN, LCCN or OCLC record number, the service will update the book's URL to include it. The author can then set a specific page as the link's anchor. This option makes their book more easily discoverable. Ngram Viewer The Ngram Viewer is a service connected to Google Books that graphs the frequency of word usage across the book collection (a toy sketch of this computation appears below). The service is important for historians and linguists, as it can provide an inside look at human culture through word use across time periods. The program has drawn criticism because of errors in the metadata it relies on. Content issues and criticism The project has received criticism that its stated aim of preserving orphaned and out-of-print works is at risk because the scanned data contains errors that are not being corrected. The scanning process is subject to errors. For example, some pages may be unreadable, upside down, or in the wrong order. Scholars have even reported crumpled pages, obscuring thumbs and fingers, and smeared or blurry images. On this issue, a declaration from Google at the end of scanned books says: The digitization at the most basic level is based on page images of the physical books. To make this book available as an ePub formatted file we have taken those page images and extracted the text using Optical Character Recognition (or OCR for short) technology. The extraction of text from page images is a difficult engineering task. Smudges on the physical books' pages, fancy fonts, old fonts, torn pages, etc. can all lead to errors in the extracted text. Imperfect OCR is only the first challenge in the ultimate goal of moving from collections of page images to extracted-text based books. Our computer algorithms also have to automatically determine the structure of the book (what are the headers and footers, where images are placed, whether text is verse or prose, and so forth). Getting this right allows us to render the book in a way that follows the format of the original book.
Content issues and criticism The project has been criticized on the grounds that its stated aim of preserving orphaned and out-of-print works is undermined by errors in the scanned data that go unfixed. The scanning process is subject to errors. For example, some pages may be unreadable, upside down, or in the wrong order. Scholars have even reported crumpled pages, obscuring thumbs and fingers, and smeared or blurry images. On this issue, a declaration from Google at the end of scanned books says: The digitization at the most basic level is based on page images of the physical books. To make this book available as an ePub formatted file we have taken those page images and extracted the text using Optical Character Recognition (or OCR for short) technology. The extraction of text from page images is a difficult engineering task. Smudges on the physical books' pages, fancy fonts, old fonts, torn pages, etc. can all lead to errors in the extracted text. Imperfect OCR is only the first challenge in the ultimate goal of moving from collections of page images to extracted-text based books. Our computer algorithms also have to automatically determine the structure of the book (what are the headers and footers, where images are placed, whether text is verse or prose, and so forth). Getting this right allows us to render the book in a way that follows the format of the original book. Despite our best efforts you may see spelling mistakes, garbage characters, extraneous images, or missing pages in this book. Based on our estimates, these errors should not prevent you from enjoying the content of the book. The technical challenges of automatically constructing a perfect book are daunting, but we continue to make enhancements to our OCR and book structure extraction technologies.

In 2009, Google stated that it would start using reCAPTCHA to help fix errors found in Google Books scans. This method can only improve scanned words that are hard to recognize because of the scanning process; it cannot solve errors such as turned pages or blocked words. Scanning errors have inspired works of art, such as published collections of anomalous pages and a Tumblr blog. Scholars have frequently reported rampant errors in the metadata on Google Books – including misattributed authors and erroneous dates of publication. Geoffrey Nunberg, a linguist researching changes in word usage over time, noticed that a search for books published before 1950 and containing the word "internet" turned up an unlikely 527 results. Woody Allen is mentioned in 325 books ostensibly published before he was born. Google responded to Nunberg by blaming the bulk of the errors on outside contractors. Other reported metadata errors include publication dates before the author's birth (e.g. 182 works by Charles Dickens prior to his birth in 1812); incorrect subject classifications (an edition of Moby Dick found under "computers", a biography of Mae West classified under "religion"); conflicting classifications (10 editions of Whitman's Leaves of Grass all classified as both "fiction" and "nonfiction"); incorrectly spelled titles, authors, and publishers (Moby Dick: or the White "Wall"); and metadata for one book incorrectly appended to a completely different book (the metadata for an 1818 mathematical work leads to a 1963 romance novel). In one study, the author, title, publisher, and publication-year metadata elements were reviewed for 400 randomly selected Google Books records. The results showed that 36.75% of the sampled books contained metadata errors. This error rate is higher than one would expect to find in a typical library online catalog, suggesting that Google Books' metadata has a high rate of error. While "major" and "minor" errors are a subjective distinction based on the somewhat indeterminate concept of "findability", the errors found in the four metadata elements examined in this study should all be considered major. Metadata errors based on incorrectly scanned dates have made research using the Google Books database difficult. According to a 2009 article by academic Geoffrey Nunberg, Google was aware of these errors and was working towards fixing them.
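To put the sampling study in perspective, a quick sanity check on the reported figure (assuming simple random sampling; the raw count of 147 errored records is implied by 36.75% of 400, not stated separately in the text above) shows the estimate is fairly tight:

```python
from math import sqrt

n, errored = 400, 147          # 147/400 = 36.75%, the rate reported above
p = errored / n
se = sqrt(p * (1 - p) / n)     # binomial standard error of a proportion
low, high = p - 1.96 * se, p + 1.96 * se
print(f"error rate {p:.4f}, 95% CI [{low:.3f}, {high:.3f}]")
# -> error rate 0.3675, 95% CI [0.320, 0.415]
```

Even at the low end of that interval, roughly a third of records would carry an error in at least one of the four fields examined, which is the substance of the criticism above.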
Some European politicians and intellectuals have criticized Google's effort on linguistic-imperialism grounds. They argue that, because the vast majority of books proposed for scanning are in English, the project will produce a disproportionate representation of natural languages in the digital world. German, Russian, French, and Spanish, for instance, are popular languages in scholarship. The disproportionate online emphasis on English, however, could shape access to historical scholarship and, ultimately, the growth and direction of future scholarship. Among these critics is Jean-Noël Jeanneney, the former president of the Bibliothèque nationale de France.

While Google Books has digitized large numbers of journal back issues, its scans do not include the metadata required for identifying specific articles in specific issues. This has led the makers of Google Scholar to start their own program to digitize and host older journal articles (in agreement with their publishers).

Library partners The Google Books Library Project is aimed at scanning and making searchable the collections of several major research libraries. Along with bibliographic information, snippets of text from a book are often viewable. If a book is out of copyright and in the public domain, it is fully available to read or download. In-copyright books scanned through the Library Project are made available on Google Books for snippet view. Regarding the quality of scans, Google acknowledges that they are "not always of sufficiently high quality" to be offered for sale on Google Play. Also, because of supposed technical constraints, Google does not replace scans with higher-quality versions that may be provided by the publishers. The project is the subject of the Authors Guild v. Google lawsuit, filed in 2005 and ruled in favor of Google in 2013, and again, on appeal, in 2015. Copyright owners can claim the rights to a scanned book and make it available for preview or full view (by "transferring" it to their Partner Program account), or request that Google prevent the book's text from being searched. The number of institutions participating in the Library Project has grown since the partnership was first announced.

History 2002: A group of team members at Google officially launch the "secret 'books' project." Google founders Sergey Brin and Larry Page came up with the idea that later became Google Books while still graduate students at Stanford in 1996. The history page on the Google Books website describes their initial vision for this project: "in a future world in which vast collections of books are digitized, people would use a 'web crawler' to index the books' content and analyze the connections between them, determining any given book's relevance and usefulness by tracking the number and quality of citations from other books." This team visited the sites of some of the larger digitization efforts of the time, including the Library of Congress's American Memory Project, Project Gutenberg, and the Universal Library, to find out how they worked, as well as the University of Michigan, Page's alma mater and the base for such digitization projects as JSTOR and Making of America. In a conversation with then-University President Mary Sue Coleman, when Page found out that the university's current estimate for scanning all the library's volumes was 1,000 years, Page reportedly told Coleman that he "believes Google can help make it happen in six."

2003: The team works to develop a high-speed scanning process as well as software for resolving issues with odd type sizes, unusual fonts, and "other unexpected peculiarities."

December 2004: Google signaled an extension to its Google Print initiative known as the Google Print Library Project. Google announced partnerships with several high-profile university and public libraries, including the University of Michigan, Harvard (Harvard University Library), Stanford (Green Library), Oxford (Bodleian Library), and the New York Public Library.
According to press releases and university librarians, Google planned to digitize and make available through its Google Books service approximately 15 million volumes within a decade. The announcement soon triggered controversy, as publisher and author associations challenged Google's plans to digitize not just books in the public domain but also titles still under copyright.

September–October 2005: Two lawsuits against Google charge that the company has not respected copyrights and has failed to properly compensate authors and publishers. One is a class action suit on behalf of authors (Authors Guild v. Google, September 20, 2005) and the other is a civil lawsuit brought by five large publishers and the Association of American Publishers (McGraw Hill v. Google, October 19, 2005).

November 2005: Google changed the name of this service from Google Print to Google Book Search. Its program enabling publishers and authors to include their books in the service was renamed Google Books Partner Program, and the partnership with libraries became the Google Books Library Project.

2006: Google added a "download a pdf" button to all its out-of-copyright, public domain books. It also added a new browsing interface along with new "About this Book" pages.

August 2006: The University of California System announced that it would join the Books digitization project. This includes a portion of the 34 million volumes within the approximately 100 libraries managed by the System.

September 2006: The Complutense University of Madrid became the first Spanish-language library to join the Google Books Library Project.

October 2006: The University of Wisconsin–Madison announced that it would join the Book Search digitization project along with the Wisconsin Historical Society Library. Combined, the libraries have 7.2 million holdings.

November 2006: The University of Virginia joined the project. Its libraries contain more than five million volumes and more than 17 million manuscripts, rare books, and archives.

January 2007: The University of Texas at Austin announced that it would join the Book Search digitization project. At least one million volumes would be digitized from the university's 13 library locations.

March 2007: The Bavarian State Library announced a partnership with Google to scan more than a million public domain and out-of-print works in German as well as English, French, Italian, Latin, and Spanish.

May 2007: A book digitizing project partnership was announced jointly by Google and the Cantonal and University Library of Lausanne.

May 2007: The Boekentoren Library of Ghent University announced that it would participate with Google in digitizing and making digitized versions of 19th-century books in the French and Dutch languages available online.

May 2007: Mysore University announced that Google would digitize over 800,000 books and manuscripts, including around 100,000 manuscripts written in Sanskrit or Kannada on both paper and palm leaves.

June 2007: The Committee on Institutional Cooperation (rebranded as the Big Ten Academic Alliance in 2016) announced that its twelve member libraries would participate in scanning 10 million books over the course of the next six years.

July 2007: Keio University became Google's first library partner in Japan with the announcement that it would digitize at least 120,000 public domain books.

August 2007: Google announced that it would digitize up to 500,000 items, both copyrighted and public domain, from Cornell University Library.
Google would also provide a digital copy of all works scanned to be incorporated into the university's own library system.

September 2007: Google added a feature that allows users to share snippets of books that are in the public domain. The snippets may appear exactly as they do in the scan of the book, or as plain text.

September 2007: Google debuted a new feature called "My Library", which allows users to create personal customized libraries: selections of books that they can label, review, rate, or full-text search.

December 2007: Columbia University was added as a partner in digitizing public domain works.

May 2008: Microsoft tapered off its scanning project, which had reached 750,000 books and 80 million journal articles, and planned to end it.

October 2008: A settlement was reached between the publishing industry and Google after two years of negotiation. Google agreed to compensate authors and publishers in exchange for the right to make millions of books available to the public.

October 2008: The HathiTrust "Shared Digital Repository" (later known as the HathiTrust Digital Library) is launched jointly by the Committee on Institutional Cooperation and the 11 university libraries of the University of California system, all of which were Google partner libraries, in order to archive and provide academic access to books from their collections scanned by Google and others.

November 2008: Google reached the 7 million book mark for items scanned by Google and by its publishing partners. One million were in full preview mode and one million were fully viewable and downloadable public domain works. About five million were out of print.

December 2008: Google announced the inclusion of magazines in Google Books. Titles include New York Magazine, Ebony, and Popular Mechanics.

February 2009: Google launched a mobile version of Google Book Search, allowing iPhone and Android phone users to read over 1.5 million public domain works in the US (and over 500,000 outside the US) using a mobile browser. Instead of page images, the plain text of the book is displayed.

May 2009: At the annual BookExpo convention in New York, Google signaled its intent to introduce a program that would enable publishers to sell digital versions of their newest books directly to consumers through Google.

December 2009: A French court shut down the scanning of copyrighted books published in France, saying this violated copyright laws. It was the first major legal loss for the scanning project.

April 2010: Visual artists, who were not included in the previous lawsuit and settlement, became the plaintiff group in another lawsuit, saying they intended to bring more than just Google Books under scrutiny. "The new class action," read the statement, "goes beyond Google's Library Project, and includes Google's other systematic and pervasive infringements of the rights of photographers, illustrators and other visual artists."

May 2010: It was reported that Google would launch a digital book store called Google Editions. It would compete with Amazon, Barnes & Noble, Apple, and other electronic book retailers with its own e-book store. Unlike the others, Google Editions would be completely online and would not require a specific device (such as the Kindle, Nook, or iPad).

June 2010: Google passed 12 million books scanned.

August 2010: It was announced that Google intended to scan all 129,864,880 known existing books within a decade, amounting to over 4 billion digital pages and 2 trillion words in total.
December 2010: Google eBooks (Google Editions) was launched in the US.

December 2010: Google launched the Ngram Viewer, which collects and graphs data on word usage across its book collection.

March 2011: A federal judge rejected the settlement reached between the publishing industry and Google.

March 2012: Google passed 20 million books scanned.

March 2012: Google reached a settlement with publishers.

January 2013: The documentary Google and the World Brain was shown at the Sundance Film Festival.

November 2013: Ruling in Authors Guild v. Google, US District Judge Denny Chin sided with Google, citing fair use. The authors said they would appeal.

October 2015: The appeals court sided with Google, declaring that Google did not violate copyright law. According to the New York Times, Google has scanned more than 25 million books.

April 2016: The US Supreme Court declined to hear the Authors Guild's appeal, meaning the lower court's decision stood and Google would be allowed to scan library books and display snippets in search results without violating the law.

Google has been quite secretive about its plans for the future of the Google Books project. Scanning operations had been slowing down since at least 2012, as confirmed by librarians at several of Google's partner institutions. At the University of Wisconsin, the pace had fallen to less than half of what it was in 2006. However, the librarians have said that the dwindling pace could be a natural result of the project's maturation: initially, entire stacks of books were taken up for scanning, whereas later only titles that had not already been scanned needed to be considered. The company's own Google Books timeline page did not mention anything after 2007 even in 2017, and the Google Books blog was merged into the Google Search blog in 2012. Despite Google's victory in the decade-long litigation, The Atlantic wrote in 2017 that Google had "all but shut down its scanning operation." In April 2017, Wired reported that only a few Google employees were still working on the project, and that new books were still being scanned, but at a significantly lower rate. It commented that the decade-long legal battle had caused Google to lose its ambition.

Legal issues Through the project, library books were being digitized somewhat indiscriminately regardless of copyright status, which led to a number of lawsuits against Google. By the end of 2008, Google had reportedly digitized over seven million books, of which only about one million were works in the public domain. Of the rest, one million were in copyright and in print, and five million were in copyright but out of print. In 2005, a group of authors and publishers brought a major class-action lawsuit against Google for infringement of copyrighted works. Google argued that it was preserving "orphaned works" – books still under copyright, but whose copyright holders could not be located. The Authors Guild and the Association of American Publishers separately sued Google in 2005 over its book project, citing "massive copyright infringement." Google countered that its project represented a fair use and was the digital-age equivalent of a card catalog, with every word in the publication indexed. The lawsuits were consolidated, and eventually a settlement was proposed. The settlement received significant criticism on a wide variety of grounds, including antitrust, privacy, and inadequacy of the proposed classes of authors and publishers.
The settlement was eventually rejected, and the publishers settled with Google soon after. The Authors Guild continued its case, and in 2011 its proposed class was certified. Google appealed that decision, with a number of amici asserting the inadequacy of the class, and the Second Circuit rejected the class certification in July 2013, remanding the case to the District Court for consideration of Google's fair use defense. In 2015, the Authors Guild filed another appeal against Google, to be considered by the 2nd U.S. Circuit Court of Appeals in New York. Google won the case unanimously, on the grounds that it was showing people only snippets rather than full texts and was not enabling people to read the books illegally. The court held that Google did not infringe copyright law, as its use was protected under the fair use doctrine. The Authors Guild tried again in 2016, petitioning the Supreme Court to hear the case. The petition was rejected, leaving the Second Circuit's decision intact and confirming that Google had not violated copyright law. The case also set a precedent for similar cases concerning fair use, further clarifying and expanding the doctrine in ways that affect other scanning projects like Google's. Other lawsuits followed the Authors Guild's lead. In 2006, a previously filed German lawsuit was withdrawn. In June 2006, Hervé de la Martinière, a French publisher known as La Martinière and Éditions du Seuil, announced its intention to sue Google France. In 2009, the Paris Civil Court awarded 300,000 EUR (approximately 430,000 USD) in damages and interest and ordered Google to pay 10,000 EUR a day until it removed the publisher's books from its database. The court wrote that "Google violated author copyright laws by fully reproducing and making accessible" books that Seuil owned without its permission, and that Google "committed acts of breach of copyright, which are of harm to the publishers". Google said it would appeal. Syndicat National de l'Edition, which joined the lawsuit, said Google had scanned about 100,000 French works still under copyright. In December 2009, Chinese author Mian Mian filed a civil lawsuit for $8,900 against Google for scanning her novel Acid Lovers. This was the first such lawsuit to be filed against Google in China. In November of that year, the China Written Works Copyright Society (CWWCS) had accused Google of scanning 18,000 books by 570 Chinese writers without authorization. Google agreed on 20 November to provide a list of Chinese books it had scanned, but the company refused to admit having "infringed" copyright laws. In March 2007, Thomas Rubin, associate general counsel for copyright, trademark, and trade secrets at Microsoft, accused Google of violating copyright law with its book search service. Rubin specifically criticized Google's policy of freely copying any work until notified by the copyright holder to stop. Google's licensing of public domain works is also an area of concern, due to its use of digital watermarking techniques with the books. Some published works that are in the public domain, such as all works created by the U.S. Federal government, are still treated like other works under copyright and are therefore locked if dated after 1922. Since at least 2014, Google has allowed authors and publishers to remove book previews from Google Books upon request.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Noahidism] | [TOKENS: 2856]
Contents Noahidism Noahidism (/ˈnoʊəhaɪdɪzəm/) or Noachidism (/ˈnoʊəxaɪdɪzəm/) is a monotheistic Judaic religious movement aimed at non-Jews, based upon the Seven Laws of Noah and their traditional interpretations within Orthodox Judaism. According to Jewish law, non-Jews (gentiles) are not obligated to convert to Judaism, but they are required to observe the Seven Laws of Noah to be assured of a place in the World to Come (Olam Ha-Ba), the final reward of the righteous. The penalty for violating any of the Noahide laws is discussed in the Talmud, but in practical terms it is subject to the working legal system established by society at large. Those who subscribe to the observance of the Noahic Covenant are referred to as Bnei Noach (Hebrew: בני נח, "Sons of Noah") or Noahides (/ˈnoʊ.əhaɪdz/). The modern Noahide movement was founded in the 1990s by Orthodox Jewish rabbis from Israel, mainly tied to Chabad-Lubavitch and religious Zionist organizations, including the Temple Institute. Historically, the Hebrew term Bnei Noach has been applied to all non-Jews as descendants of Noah. Today, however, it is used primarily to refer to those "Righteous Gentiles" who observe the Seven Laws of Noah. Noahide communities have spread and developed primarily in the United States, United Kingdom, Latin America, Nigeria, the Philippines, and Russia. According to a Noahide source in 2018, there are over 20,000 official Noahides around the world, and the country with the greatest number is the Philippines.

Noahic Covenant The scriptural and theological basis for the seven commandments of the Noahic Covenant is said to be derived interpretatively from demands addressed to Adam and to Noah, who are believed in Judaism to be the progenitors of humankind; the commandments are therefore regarded as universal moral laws. The seven commandments of the Noahic Covenant enumerated in the Babylonian Talmud (Avodah Zarah 8:4, Sanhedrin 56a–b) are the prohibitions of idolatry, blasphemy, murder, sexual immorality, theft, and eating flesh torn from a living animal, together with the positive commandment to establish courts of justice. According to the American Roman Catholic priest and dogmatic theologian Bruce R. Barnes, the obligation to follow the Noahic Covenant and its seven commandments was incumbent upon the Jewish people as well, and remained effective for them until the Ten Commandments were given to Moses on Mount Sinai: With the giving of the Torah, God chose a people to live by His Commandments. This is a critical moment for those who believe that revelation is the only authentic expression of law. Such individuals think that the Revealed Law predominates and that the Noahide Laws are absorbed into the Mosaic Laws, thereby losing their independence. This unification of the two sets of law during the revelation at Sinai strengthened and confirmed (rather than diminished) the obligation for non-Jews to follow the Noahide Laws. Righteous Gentiles were obliged to follow the Seven Commandments and, by association, the Sinaitic Commandments because the Noahide Laws were now considered subsumed into the Sinai Laws. This did not alter the distinction between the two sets of people who followed the respective laws. [...] The relationship between the Noahites and the Jews would always be similar to the relationship between a priest and a faithful layman. The obligation to follow the Noahide Laws was incumbent upon the Jews from Adam to the Revelation at Sinai. Virtually all Jewish thinkers who dealt with this issue kept this in mind.
Historical precedents The concept of "Righteous Gentiles" (gerim toshavim) has a few precedents in the history of Judaism, primarily during Biblical times and the Roman domination of the Mediterranean. In the Hebrew Bible, it is reported that the legal status of ger toshav (Biblical Hebrew: גר תושב, ger: "foreigner" or "alien" + toshav: "resident", lit. 'resident alien') was granted to those Gentiles (non-Jews) living in the Land of Israel who did not want to convert to Judaism but agreed to observe the Seven Laws of Noah. The Sebomenoi or God-fearers of the Roman Empire were another ancient example of non-Jews being included within the Jewish community without converting to Judaism. During the Golden Age of Jewish culture in the Iberian Peninsula, the medieval Jewish philosopher and rabbi Moses Maimonides (1135–1204) wrote in the halakhic legal code Mishneh Torah that Gentiles (non-Jews) must perform exclusively the Seven Laws of Noah and refrain from studying the Torah or performing any Jewish commandment, including resting on the Shabbat; however, Maimonides also states that if Gentiles want to perform any Jewish commandment besides the Seven Laws of Noah according to the correct halakhic procedure, they are not prevented from doing so. According to Maimonides, teaching non-Jews to follow the Seven Laws of Noah is incumbent on all Jews, a commandment in and of itself. Nevertheless, the majority of rabbinic authorities over the centuries have rejected Maimonides' opinion, and the dominant halakhic consensus has always been that Jews are not required to spread the Noahide laws to non-Jews. During the 1860s in Western Europe, the idea of Noahidism as a universal Judaic religion for non-Jews was developed by Elijah Benamozegh, an Italian Sephardic Orthodox rabbi and renowned Jewish Kabbalist. During the 1920s and 1930s, the French writer Aimé Pallière adopted the Noahide laws at the suggestion of his teacher Elijah Benamozegh; afterwards, Pallière spread Benamozegh's doctrine in Europe, though he never formally converted to Judaism. Modern historians argue that Benamozegh's role in the debate on Jewish universalism in the history of Jewish philosophy centered on the Seven Laws of Noah as a means of shifting Jewish ethics from particularism to universalism, although the arguments he used to support his universalistic viewpoint were neither original nor unprecedented in the history of this debate. According to Clémence Boulouque, Carl and Bernice Witten Associate Professor of Jewish and Israel Studies at Columbia University in the City of New York, Benamozegh ignored the ethnocentric biases contained in the Noahide laws, whereas some contemporary right-wing Jewish political movements have embraced them.

Modern Noahide movement Menachem Mendel Schneerson, the Lubavitcher Rebbe, encouraged his followers on many occasions to preach the Seven Laws of Noah, devoting some of his addresses to the subtleties of this code. Since the 1990s, Orthodox Jewish rabbis from Israel, most notably those affiliated with Chabad-Lubavitch and religious Zionist organizations, including The Temple Institute, have set up the modern Noahide movement. These Noahide organizations, led by religious Zionist and Orthodox Jewish rabbis, aim to proselytize among non-Jews and commit them to following the Noahide laws. According to Rachel Z.
Feldman, American anthropologist and Assistant Professor of Religious Studies at Dartmouth College, many of the Orthodox Jewish rabbis involved in mentoring Noahides are supporters of the Third Temple movement who believe that the messianic era shall begin with the establishment of a Jewish theocratic state in Israel, supported by communities of Noahides worldwide: Today, nearly 2,000 Filipinos consider themselves members of the "Children of Noah", a new Judaic faith that is growing into the tens of thousands worldwide as ex-Christians encounter forms of Jewish learning online. Under the tutelage of Orthodox Jewish rabbis, Filipino "Noahides", as they call themselves, study Torah, observe the Sabbath, and passionately support a form of messianic Zionism. Filipino Noahides believe that Jews are a racially superior people, with an innate ability to access divinity. According to their rabbi mentors, they are forbidden from performing Jewish rituals and even reading certain Jewish texts. These restrictions have necessitated the creation of new, distinctly Noahide ritual practices and prayers modeled after Jewish ones. Filipino Noahides are practicing a new faith that also affirms the superiority of Judaism and Jewish biblical right to the Land of Israel, in line with the aims of the growing messianic Third Temple Movement in Jerusalem. Feldman describes Noahidism as a "new world religion" that "carv[es] out a place for non-Jews in the messianic Zionist project". She characterizes Noahide ideology in the Philippines and elsewhere in the global south as having a "markedly racial dimension" constructed around "an essential categorical difference between Jews and Noahides". David Novak, professor of Jewish theology and ethics at the University of Toronto, has denounced the modern Noahide movement by stating that "If Jews are telling Gentiles what to do, it's a form of imperialism". In 2005 a "High Council of Bnei Noah", set up to represent Noahide communities around the world, was endorsed by a group that claimed to be the new Sanhedrin. The High Council of Bnei Noah consists of a group of Noahides who, at the request of the nascent Sanhedrin, gathered in Jerusalem on 10 January 2006 to be recognized as an international Noahide organization for the purpose of serving as a bridge between the nascent Sanhedrin and Noahides worldwide. There were ten initial members who flew to Israel and pledged to uphold the Seven Laws of Noah and to conduct themselves under the authority of the Noahide beth din (religious court) of the nascent Sanhedrin. Acknowledgment Meir Kahane and Shlomo Carlebach organized one of the first Noahide conferences in the 1980s. In 1990, Kahane was the keynote speaker at the First International Conference of the Descendants of Noah, the first Noahide gathering, in Fort Worth, Texas. After the assassination of Meir Kahane that same year, The Temple Institute, which advocates to rebuild the Third Jewish Temple on the Temple Mount in Jerusalem, started to promote the Noahide laws as well. The Chabad-Lubavitch movement has been one of the most active in Noahide outreach, believing that there is spiritual and societal value for non-Jews in at least simply acknowledging the Noahide laws. In 1982, Chabad-Lubavitch had a reference to the Noahide laws enshrined in a U.S. Presidential proclamation: the "Proclamation 4921", signed by the then-U.S. President Ronald Reagan. 
The United States Congress, recalling House Joint Resolution 447 and in celebration of Menachem Mendel Schneerson's 80th birthday, proclaimed 4 April 1982 a "National Day of Reflection". In 1989 and 1990, a reference to the Noahide laws was again enshrined in a U.S. Presidential proclamation: "Proclamation 5956", signed by then-President George H. W. Bush. The United States Congress, recalling House Joint Resolution 173 and in celebration of Menachem Mendel Schneerson's 87th birthday, proclaimed 16 April 1989 and 6 April 1990 as "Education Day, U.S.A." In January 2004, the spiritual leader of the Druze community in Israel, Sheikh Mowafak Tarif, met with a representative of Chabad-Lubavitch to sign a declaration calling on all non-Jews in Israel to observe the Noahide laws; the mayor of the Arab city of Shefa-'Amr (Shfaram) also signed the document. In March 2016, the Sephardic Chief Rabbi of Israel, Yitzhak Yosef, declared during a sermon that, under Jewish law, the only non-Jews allowed to live in Israel are those who observe the Noahide laws:

According to Jewish law, it's forbidden for a non-Jew to live in the Land of Israel – unless he has accepted the seven Noahide laws, [...] If the non-Jew is unwilling to accept these laws, then we can send him to Saudi Arabia, [...] When there will be full, true redemption, we will do this.

Yosef further added: [N]on-Jews shouldn't live in the land of Israel. [...] If our hand were firm, if we had the power to rule, then non-Jews must not live in Israel. But, our hand is not firm. [...] Who, otherwise, will be the servants? Who will be our helpers? This is why we leave them in Israel.

Yosef's sermon sparked outrage in Israel and was fiercely criticized by several human rights associations, NGOs, and members of the Knesset. Jonathan Greenblatt, the Anti-Defamation League's CEO and national director, and Carole Nuriel, acting director of the Anti-Defamation League's Israel Office, issued a strong denunciation of Yosef's sermon:

The statement by Chief Rabbi Yosef is shocking and unacceptable. It is unconscionable that the Chief Rabbi, an official representative of the State of Israel, would express such intolerant and ignorant views about Israel's non-Jewish population – including the millions of non-Jewish citizens. As a spiritual leader, Rabbi Yosef should be using his influence to preach tolerance and compassion towards others, regardless of their faith, and not seek to exclude and demean a large segment of Israelis. We call upon the Chief Rabbi to retract his statements and apologize for any offense caused by his comments.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Watts%E2%80%93Strogatz_model] | [TOKENS: 1398]
Contents Watts–Strogatz model The Watts–Strogatz model is a random graph generation model that produces graphs with small-world properties, including short average path lengths and high clustering. It was proposed by Duncan J. Watts and Steven Strogatz in their article published in 1998 in the scientific journal Nature. The model also became known as the (Watts) beta model after Watts used $\beta$ to formulate it in his popular science book Six Degrees.

Rationale for the model The formal study of random graphs dates back to the work of Paul Erdős and Alfréd Rényi. The graphs they considered, now known as the classical or Erdős–Rényi (ER) graphs, offer a simple and powerful model with many applications. However, the ER graphs lack two important properties observed in many real-world networks: they do not generate local clustering and triadic closures (because edges are placed independently at random, ER graphs have a low clustering coefficient), and they do not account for the formation of hubs (their degree distribution converges to a Poisson distribution, rather than the heavy-tailed distributions observed in many real-world networks). The Watts and Strogatz model was designed as the simplest possible model that addresses the first of the two limitations. It accounts for clustering while retaining the short average path lengths of the ER model. It does so by interpolating between a randomized structure close to ER graphs and a regular ring lattice. Consequently, the model is able to at least partially explain the "small-world" phenomena in a variety of networks, such as the power grid, the neural network of C. elegans, networks of movie actors, or fat-metabolism communication in budding yeast.

Algorithm Given the desired number of nodes $N$, the mean degree $K$ (assumed to be an even integer), and a parameter $\beta$, all satisfying $0 \leq \beta \leq 1$ and $N \gg K \gg \ln N \gg 1$, the model constructs an undirected graph with $N$ nodes and $NK/2$ edges in the following way. First, construct a regular ring lattice: a graph with $N$ nodes, each connected to its $K$ nearest neighbours, $K/2$ on each side. Then, for every node $i = 0, \dots, N-1$, take each lattice edge $(i, j)$ with $i < j$ and rewire it with probability $\beta$; rewiring replaces $(i, j)$ with $(i, k)$, where $k$ is chosen uniformly at random from all nodes while avoiding self-loops ($k \neq i$) and link duplication.
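A minimal, self-contained sketch of this construction in pure Python (variable names follow the article's notation; this illustrates the algorithm above rather than serving as a reference implementation):

```python
import random

def watts_strogatz(N, K, beta, seed=None):
    """Build a Watts-Strogatz graph as an adjacency dict {node: set of neighbours}."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(N)}

    # Step 1: regular ring lattice -- each node linked to K/2 neighbours
    # on each side (indices taken modulo N).
    for i in range(N):
        for offset in range(1, K // 2 + 1):
            j = (i + offset) % N
            adj[i].add(j)
            adj[j].add(i)

    # Step 2: visit each rightward lattice edge (i, j) once and, with
    # probability beta, rewire it to (i, k) for a uniformly random k,
    # avoiding self-loops and duplicate edges.
    for i in range(N):
        for offset in range(1, K // 2 + 1):
            j = (i + offset) % N
            if j in adj[i] and rng.random() < beta:
                k = rng.randrange(N)
                while k == i or k in adj[i]:
                    k = rng.randrange(N)
                adj[i].discard(j); adj[j].discard(i)
                adj[i].add(k); adj[k].add(i)
    return adj

# beta = 0 keeps the ring lattice; beta = 1 approaches (but never equals) an ER graph.
graph = watts_strogatz(N=1000, K=10, beta=0.1, seed=42)
assert sum(len(nbrs) for nbrs in graph.values()) == 1000 * 10  # NK/2 edges
```

Each rewiring removes one edge and adds one, so the edge count stays exactly $NK/2$; and since only one endpoint of each edge is ever rewired, every node keeps at least its $K/2$ "outgoing" lattice links, which is why the $\beta = 1$ graph never quite matches the ER model.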
Properties The underlying lattice structure of the model produces a locally clustered network, while the randomly rewired links dramatically reduce the average path lengths. The algorithm introduces about $\beta NK/2$ such non-lattice edges. Varying $\beta$ makes it possible to interpolate between a regular lattice ($\beta = 0$) and a structure close to an Erdős–Rényi random graph $G(N, p)$ with $p = K/(N-1)$ at $\beta = 1$. The model does not approach the actual ER model, since every node will be connected to at least $K/2$ other nodes. The three properties of interest are the average path length, the clustering coefficient, and the degree distribution.

For a ring lattice, the average path length is $\ell(0) \approx N/2K \gg 1$ and scales linearly with the system size. In the limiting case $\beta \to 1$, the graph approaches a random graph with $\ell(1) \approx \ln N / \ln K$, while not actually converging to it. In the intermediate region $0 < \beta < 1$, the average path length falls very rapidly with increasing $\beta$, quickly approaching its limiting value.

For the ring lattice, the clustering coefficient is $C(0) = \frac{3(K-2)}{4(K-1)}$, which tends to $3/4$ as $K$ grows, independently of the system size. In the limiting case $\beta \to 1$, the clustering coefficient is of the same order as the clustering coefficient for classical random graphs, $C = K/(N-1)$, and is thus inversely proportional to the system size. In the intermediate region, the clustering coefficient remains quite close to its value for the regular lattice, and only falls at relatively high $\beta$. This results in a region where the average path length falls rapidly but the clustering coefficient does not, explaining the "small-world" phenomenon.

The degree distribution in the case of the ring lattice is just a Dirac delta function centered at $K$. The degree distribution for a large number of nodes and $0 < \beta < 1$ can be written as

$$P(k) = \sum_{n=0}^{f(k,K)} \binom{K/2}{n} (1-\beta)^{n} \beta^{K/2-n} \frac{(\beta K/2)^{k-K/2-n}}{(k-K/2-n)!} e^{-\beta K/2},$$

where $k$ is the degree of a node, $k \geq K/2$, and $f(k,K) = \min(k-K/2, K/2)$. The shape of the degree distribution is similar to that of a random graph: it has a pronounced peak at $k = K$ and decays exponentially for large $|k - K|$. The topology of the network is relatively homogeneous, meaning that all nodes have similar degree.
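As a numerical check on the closed-form degree distribution above (a sketch under the article's assumptions; the distribution mixes a binomial over surviving lattice edges with a Poisson over incoming rewired edges):

```python
from math import comb, exp, factorial

def ws_degree_pmf(k, K, beta):
    """Theoretical Watts-Strogatz degree distribution P(k), valid for k >= K/2."""
    half = K // 2
    if k < half:
        return 0.0
    total = 0.0
    for n in range(min(k - half, half) + 1):       # n runs up to f(k, K)
        total += (comb(half, n)                    # n lattice edges kept ...
                  * (1 - beta) ** n * beta ** (half - n)
                  * (beta * half) ** (k - half - n)  # ... plus Poisson-many
                  / factorial(k - half - n)          #     rewired edges
                  * exp(-beta * half))
    return total

K, beta = 10, 0.3
pmf = {k: ws_degree_pmf(k, K, beta) for k in range(K // 2, 3 * K)}
assert max(pmf, key=pmf.get) == K            # pronounced peak at k = K
assert abs(sum(pmf.values()) - 1.0) < 1e-6   # probabilities sum to 1
```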
Limitations The major limitation of the model is that it produces an unrealistic degree distribution. In contrast, real networks are often scale-free networks, inhomogeneous in degree, having hubs and a scale-free degree distribution. Such networks are better described in that respect by the preferential attachment family of models, such as the Barabási–Albert (BA) model. (On the other hand, the Barabási–Albert model fails to produce the high levels of clustering seen in real networks, a shortcoming not shared by the Watts and Strogatz model. Thus, neither the Watts and Strogatz model nor the Barabási–Albert model should be viewed as fully realistic.) The Watts and Strogatz model also implies a fixed number of nodes and thus cannot be used to model network growth.

========================================
[SOURCE: https://en.wikipedia.org/wiki/Minecraft#cite_ref-264] | [TOKENS: 12858]
Contents Minecraft Minecraft is a sandbox game developed and published by Mojang Studios. Following its initial public alpha release in 2009, it was formally released in 2011 for personal computers. The game has since been ported to numerous platforms, including mobile devices and various video game consoles. In Minecraft, players explore a procedurally generated world with virtually infinite terrain made up of voxels (cubes). They can discover and extract raw materials, craft tools and items, build structures, fight hostile mobs, and cooperate with or compete against other players in multiplayer. The game's large community offers a wide variety of user-generated content, such as modifications, servers, player skins, texture packs, and custom maps, which add new game mechanics and possibilities. The game was originally created by Markus "Notch" Persson using the Java programming language; Jens "Jeb" Bergensten was handed control over its development following its full release. In 2014, Mojang and the Minecraft intellectual property were purchased by Microsoft for US$2.5 billion; Xbox Game Studios hold the publishing rights for the Bedrock Edition, the unified cross-platform version which evolved from the Pocket Edition codebase and replaced the legacy console versions. Bedrock is updated concurrently with Mojang's original Java Edition, although with numerous, generally small, differences. Minecraft is the best-selling video game in history, with over 350 million copies sold. It has received critical acclaim, winning several awards and being cited as one of the greatest video games of all time. Social media, parodies, adaptations, merchandise, and the annual Minecon conventions have played prominent roles in popularizing it. The wider Minecraft franchise includes several spin-off games, such as Minecraft: Story Mode, Minecraft Dungeons, and Minecraft Legends. A film adaptation, titled A Minecraft Movie, was released in 2025 and became the second highest-grossing video game film of all time.

Gameplay Minecraft is a 3D sandbox video game that has no required goals to accomplish, giving players a large amount of freedom in choosing how to play the game. The game features an optional achievement system. Gameplay is in the first-person perspective by default, but players have the option of third-person perspectives. The game world is composed of rough 3D objects—mainly cubes, referred to as blocks—representing various materials, such as dirt, stone, ores, tree trunks, water, and lava. The core gameplay revolves around picking up and placing these objects. These blocks are arranged in a voxel grid, while players can move freely around the world. Players can break, or mine, blocks and then place them elsewhere, enabling them to build things. Very few blocks are affected by gravity; the rest maintain their voxel position in the air. Players can also craft a wide variety of items, such as armor, which mitigates damage from attacks; weapons (such as swords or bows and arrows), which allow monsters and animals to be killed more easily; and tools (such as pickaxes or shovels), which break certain types of blocks more quickly. Some items have multiple tiers depending on the material used to craft them, with higher-tier items being more effective and durable. Players may also freely craft helpful blocks—such as furnaces, which can cook food and smelt ores, and torches, which produce light—or exchange items with villager NPCs by trading emeralds for different goods and vice versa.
The game has an inventory system, allowing players to carry a limited number of items. The in-game time system follows a day and night cycle, with one full cycle lasting 20 real-time minutes. The game also contains a material called redstone, which can be used to make primitive mechanical devices, electrical circuits, and logic gates, allowing for the construction of many complex systems. New players are given a randomly selected default character skin out of nine possibilities, including Steve and Alex, but are able to create and upload their own skins. Players encounter various mobs (short for mobile entities), including animals, villagers, and hostile creatures. Passive mobs, such as cows, pigs, and chickens, spawn during the daytime and can be hunted for food and crafting materials, while hostile mobs—including large spiders, witches, skeletons, and zombies—spawn during nighttime or in dark places such as caves. Some hostile mobs, such as zombies and skeletons, burn under the sun if they have no headgear and are not standing in water. Other creatures unique to Minecraft include the creeper (an exploding creature that sneaks up on the player) and the enderman (a creature with the ability to teleport as well as pick up and place blocks). There are also variants of mobs that spawn in different conditions; for example, zombies have husk and drowned variants that spawn in deserts and oceans, respectively. The Minecraft environment is procedurally generated as players explore it, using a map seed that is randomly chosen at the time of world creation (or manually specified by the player). Divided into biomes representing different environments with unique resources and structures, worlds are designed to be effectively infinite in traditional gameplay, though technical limits on the player have existed throughout development, both intentionally and not. The implementation of horizontally infinite generation initially resulted in a glitch termed the "Far Lands" at over 12 million blocks away from the world center, where terrain generated as wall-like, fissured patterns. The Far Lands and associated glitches were considered the effective edge of the world until they were resolved; the current horizontal limit is instead a special impassable barrier called the world border, located 30 million blocks away. Vertical space is comparatively limited, with an unbreakable bedrock layer at the bottom and a building limit several hundred blocks into the sky.
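The key property of seed-based procedural generation is that every chunk of terrain is a pure function of the map seed and the chunk's coordinates, so the world can be generated lazily as the player explores, yet the same seed always reproduces the same world. A minimal illustration of that idea using hash-based value noise (a toy model, not Mojang's actual terrain algorithm; every name and constant here is invented):

```python
import hashlib
from math import floor

def block_height(seed, x, z, scale=16.0, amplitude=32, base=64):
    """Toy seeded terrain: surface height as a pure function of (seed, x, z).

    Hashes the seed together with the surrounding noise-lattice points and
    interpolates between them (value noise), giving smooth, deterministic
    terrain that can be generated in any order, chunk by chunk.
    """
    def lattice(ix, iz):
        h = hashlib.sha256(f"{seed}:{ix}:{iz}".encode()).digest()
        return int.from_bytes(h[:4], "big") / 0xFFFFFFFF   # value in [0, 1]

    cx, cz = x / scale, z / scale
    ix, iz = floor(cx), floor(cz)
    fx, fz = cx - ix, cz - iz
    # Smoothstep-weighted bilinear interpolation of the four lattice corners.
    sx, sz = fx * fx * (3 - 2 * fx), fz * fz * (3 - 2 * fz)
    top = lattice(ix, iz) * (1 - sx) + lattice(ix + 1, iz) * sx
    bottom = lattice(ix, iz + 1) * (1 - sx) + lattice(ix + 1, iz + 1) * sx
    return base + int(amplitude * (top * (1 - sz) + bottom * sz))

# Determinism: the same seed gives the same terrain, however far the player roams.
assert block_height(12345, 10_000_000, -3_000_000) == \
       block_height(12345, 10_000_000, -3_000_000)
print(block_height(12345, 0, 0), block_height(12345, 17, 4))
```

Glitches like the Far Lands can arise in exactly this kind of scheme when the coordinate arithmetic feeding the noise function (here, the `x / scale` step) overflows or loses floating-point precision at extreme distances from the origin.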
Minecraft features three independent dimensions, accessible through portals and providing alternate game environments. The Overworld is the starting dimension and represents the real world, with a terrestrial surface setting including plains, mountains, forests, oceans, caves, and small sources of lava. The Nether is a hell-like underworld dimension accessed via an obsidian portal and composed mainly of lava. Mobs that populate the Nether include shrieking, fireball-shooting ghasts, alongside anthropomorphic pigs called piglins and their zombified counterparts. Piglins in particular have a bartering system, where players can give them gold ingots and receive items in return. Structures known as Nether Fortresses generate in the Nether, containing mobs such as wither skeletons and blazes, which can drop blaze rods needed to access the End dimension. The player can also choose to build an optional boss mob known as the Wither, using skulls obtained from wither skeletons and soul sand. The End can be reached through an end portal, consisting of twelve end portal frames. End portals are found in underground structures in the Overworld known as strongholds. To find strongholds, players must craft eyes of ender using an ender pearl and blaze powder. Eyes of ender can then be thrown, traveling in the direction of the stronghold. Once the player reaches the stronghold, they can place eyes of ender into each portal frame to activate the end portal. The dimension consists of islands floating in a dark, bottomless void. A boss enemy called the Ender Dragon guards the largest, central island. Killing the dragon opens access to an exit portal, which, when entered, cues the game's ending credits and the End Poem, a roughly 1,500-word work written by Irish novelist Julian Gough which takes about nine minutes to scroll past and is the game's only narrative text, as well as the only text of significant length directed at the player. At the conclusion of the credits, the player is teleported back to their respawn point and may continue the game indefinitely.

In Survival mode, players have to gather natural resources such as wood and stone found in the environment in order to craft certain blocks and items. Depending on the difficulty, monsters spawn in darker areas outside a certain radius of the character, requiring players to build a shelter in order to survive at night. The mode also has a health bar, which is depleted by attacks from mobs, falls, drowning, falling into lava, suffocation, starvation, and other events. Players also have a hunger bar, which must be periodically refilled by eating food in-game unless the player is playing on peaceful difficulty. If the hunger bar is empty, the player starves. Health replenishes when players have a full hunger bar, or continuously on peaceful difficulty. Upon losing all health, players die. The items in the player's inventory are dropped unless the game is reconfigured not to do so. Players then respawn at their spawn point, which by default is where players first spawn in the game and can be changed by sleeping in a bed or using a respawn anchor. Dropped items can be recovered if players can reach them before they despawn five minutes later. Players may acquire experience points (commonly referred to as "xp" or "exp") by killing mobs and other players, mining, smelting ores, breeding animals, and cooking food. Experience can then be spent on enchanting tools, armor, and weapons. Enchanted items are generally more powerful, last longer, or have other special effects. The game features two more game modes based on Survival, known as Hardcore mode and Adventure mode. Hardcore mode plays identically to Survival mode, but with the game's difficulty setting locked to "Hard" and with permadeath, forcing the player to delete the world or explore it as a spectator after dying. Adventure mode was added to the game in a post-launch update and prevents the player from directly modifying the game's world. It was designed primarily for use in custom maps, allowing map designers to let players experience the map as intended. In Creative mode, players have access to an infinite number of all resources and items in the game through the inventory menu and can place or mine them instantly. Players can toggle the ability to fly freely around the game world at will, and their characters usually do not take any damage and are not affected by hunger. The game mode helps players focus on building and creating projects of any size without disturbance.
Multiplayer in Minecraft enables multiple players to interact and communicate with each other in a single world. It is available through direct game-to-game multiplayer, local area network (LAN) play, local split screen (console-only), and servers (player-hosted and business-hosted). Players can run their own server by creating a Realm, using a hosting provider, or hosting one themselves, or they can connect directly to another player's game via Xbox Live, PlayStation Network, or Nintendo Switch Online. Single-player worlds have LAN support, allowing players to join a world on locally interconnected computers without a server setup. Minecraft multiplayer servers are guided by server operators, who have access to server commands such as setting the time of day and teleporting players. Operators can also set up restrictions concerning which usernames or IP addresses are allowed or disallowed to enter the server. Multiplayer servers host a wide range of activities, and some servers have their own unique rules and customs. The largest and most popular server is Hypixel, which has been visited by over 14 million unique players. Player versus player combat (PvP) can be enabled to allow fighting between players. In 2013, Mojang announced Minecraft Realms, a server hosting service intended to enable players to run server multiplayer games easily and safely without having to set up their own. Unlike a standard server, only invited players can join Realms servers, and these servers do not use server addresses. Minecraft: Java Edition Realms server owners can invite up to twenty people to play on their server, with up to ten players online at a time. On the Bedrock Edition, Minecraft Realms server owners can invite up to 3,000 people to play on their server, with up to ten players online at one time. The Minecraft: Java Edition Realms servers do not support user-made plugins, but players can play custom Minecraft maps. Minecraft Bedrock Realms servers support user-made add-ons, resource packs, behavior packs, and custom Minecraft maps. At Electronic Entertainment Expo 2016, it was announced that Realms would support cross-platform play between Windows 10, iOS, and Android starting in June 2016, with Xbox One and Nintendo Switch support to follow in 2017, along with support for virtual reality devices. On 31 July 2017, Mojang released the beta version of the update allowing cross-platform play. Nintendo Switch support for Realms was released in July 2018. The modding community consists of fans, users, and third-party programmers. Using a variety of application programming interfaces that have arisen over time, they have produced a wide variety of downloadable content for Minecraft, such as modifications, texture packs, and custom maps. Modifications of the Minecraft code, called mods, add a variety of gameplay changes, ranging from new blocks, items, and mobs to entire arrays of mechanisms. The modding community is responsible for a substantial supply of mods, from ones that enhance gameplay, such as minimaps, waypoints, and durability counters, to ones that add elements from other video games and media. While a variety of mod frameworks were independently developed by reverse engineering the code, Mojang has also enhanced vanilla Minecraft with official frameworks for modification, allowing the production of community-created resource packs, which alter certain game elements including textures and sounds.
Players can also create their own "maps" (custom world save files) that often contain specific rules, challenges, puzzles, and quests, and share them for others to play. Mojang added an adventure mode in August 2012 and "command blocks" in October 2012, which were created specially for custom maps in Java Edition. Data packs, introduced in version 1.13 of the Java Edition, allow further customization, including the ability to add new achievements, dimensions, functions, loot tables, predicates, recipes, structures, tags, and world generation. The Xbox 360 Edition supported downloadable content, which was available to purchase via the Xbox Games Store; these content packs usually contained additional character skins. It later received support for texture packs in its twelfth title update, while introducing "mash-up packs", which combined texture packs with skin packs and changes to the game's sounds, music, and user interface. The first mash-up pack (and by extension, the first texture pack) for the Xbox 360 Edition was released on 4 September 2013 and was themed after the Mass Effect franchise. Unlike Java Edition, however, the Xbox 360 Edition did not support player-made mods or custom maps. A cross-promotional resource pack based on the Super Mario franchise by Nintendo was released exclusively for the Wii U Edition worldwide on 17 May 2016, and was later bundled free with the Nintendo Switch Edition at launch. Another, based on Fallout, was released on consoles that December, and for Windows and mobile in April 2017. In April 2018, malware was discovered in several downloadable user-made Minecraft skins for use with the Java Edition of the game. Avast stated that nearly 50,000 accounts were infected and that, when activated, the malware would attempt to reformat the user's hard drive. Mojang promptly patched the issue and released a statement explaining that "the code would not be run or read by the game itself" and would run only when the image containing the skin was opened. In June 2017, Mojang released the "1.1 Discovery Update" to the Pocket Edition of the game, which later became the Bedrock Edition. The update introduced the "Marketplace", a catalogue of purchasable user-generated content intended to give Minecraft creators "another way to make a living from the game". Various skins, maps, texture packs, and add-ons from different creators can be bought with "Minecoins", a digital currency purchased with real money. Additionally, users can access specific content with a subscription service titled "Marketplace Pass". Alongside content from independent creators, the Marketplace also houses items published by Mojang and Microsoft themselves, as well as official collaborations between Minecraft and other intellectual properties. By 2022, the Marketplace had over 1.7 billion content downloads, generating over $500 million in revenue.

Development Before creating Minecraft, Markus "Notch" Persson was a game developer at King, where he worked until March 2009. At King, he primarily developed browser games and learned several programming languages. During his free time, he prototyped his own games, often drawing inspiration from other titles, and was an active participant on the TIGSource forums for independent developers. One such project was "RubyDung", a base-building game inspired by Dwarf Fortress, but with an isometric, three-dimensional perspective similar to RollerCoaster Tycoon.
Among the features in RubyDung that he explored was a first-person view similar to Dungeon Keeper, though he ultimately discarded this idea, feeling the graphics were too pixelated at the time. Around March 2009, Persson left King and joined jAlbum, while continuing to work on his prototypes. Infiniminer, a block-based open-ended mining game first released in April 2009, inspired Persson's vision for RubyDung's future direction. Infiniminer heavily influenced the style of gameplay, including the return of the first-person mode, the "blocky" visual style, and the block-building fundamentals. However, unlike Infiniminer, Persson wanted Minecraft to have RPG elements. The first public alpha build of Minecraft was released on 17 May 2009 on TIGSource. Over the years, Persson regularly released test builds that added new features, including tools, mobs, and entire new dimensions. Partly due to the game's rising popularity, Persson decided to release the full 1.0 version—the second part of the "Adventure Update"—on 18 November 2011. Shortly after, Persson stepped down from development, handing the project's lead to Jens "Jeb" Bergensten. On 15 September 2014, Microsoft, the developer behind the Microsoft Windows operating system and Xbox video game console, announced a $2.5 billion acquisition of Mojang, which included the Minecraft intellectual property. Persson had suggested the deal on Twitter, asking a corporation to buy his stake in the game after receiving criticism for enforcing terms in the game's end-user license agreement (EULA), which had been in place for the previous three years. According to Persson, Mojang CEO Carl Manneh received a call from a Microsoft executive shortly after the tweet, asking if Persson was serious about a deal. Mojang was also approached by other companies including Activision Blizzard and Electronic Arts. The deal with Microsoft was finalized on 6 November 2014 and led to Persson becoming one of Forbes' "World's Billionaires". After 2014, Minecraft's primary versions usually received annual major updates—free to players who had purchased the game—each primarily centered around a specific theme. For instance, version 1.13, the Update Aquatic, focused on ocean-related features, while version 1.16, the Nether Update, introduced significant changes to the Nether dimension. However, in late 2024, Mojang announced a shift in its update strategy; rather than releasing large updates annually, it opted for a more frequent release schedule with smaller, incremental updates, stating, "We know that you want new Minecraft content more often." The Bedrock Edition has also received regular updates, now matching the themes of the Java Edition updates. Other versions of the game, such as various console editions and the Pocket Edition, were either merged into Bedrock or discontinued and have not received further updates. On 7 May 2019, coinciding with Minecraft's 10th anniversary, a JavaScript recreation of an old 2009 Java Edition build named Minecraft Classic was made available to play online for free. On 16 April 2020, a Bedrock Edition-exclusive beta version of Minecraft, called Minecraft RTX, was released by Nvidia. It introduced physically-based rendering, real-time path tracing, and DLSS for RTX-enabled GPUs. The public release was made available on 8 December 2020.
Path tracing can only be enabled in supported worlds, which can be downloaded for free via the in-game Minecraft Marketplace, or in worlds using a texture pack from Nvidia's website or a compatible third-party texture pack; it cannot be enabled by default in an ordinary world with an ordinary texture pack. Initially, Minecraft RTX was affected by many bugs, display errors, and instability issues. On 22 March 2025, a new visual mode called Vibrant Visuals, an optional graphical overhaul similar to Minecraft RTX, was announced. It promises modern rendering features—such as dynamic shadows, screen space reflections, volumetric fog, and bloom—without the need for RTX-capable hardware. Vibrant Visuals was released as a part of the Chase the Skies update on 17 June 2025 for Bedrock Edition and is planned to release on Java Edition at a later date. Development of the original edition of Minecraft—then known as Cave Game, and now known as the Java Edition—began in May 2009;[k] on 13 May, Persson released a test video on YouTube of an early version of the game, dubbed the "Cave game tech test" or the "Cave game tech demo". The game was named Minecraft: Order of the Stone the next day, after a suggestion made by a player. "Order of the Stone" came from the webcomic The Order of the Stick, and "Minecraft" was chosen "because it's a good name". The title was later shortened to just Minecraft, omitting the subtitle. Persson completed the game's base programming over a weekend in May 2009, and private testing began on TigIRC on 16 May. The first public release followed on 17 May 2009 as a developmental version shared on the TIGSource forums. Based on feedback from forum users, Persson continued updating the game. This initial public build later became known as Classic. Further developmental phases—dubbed Survival Test, Indev, and Infdev—were released throughout 2009 and 2010. The first major update, known as Alpha, was released on 30 June 2010. At the time, Persson was still working a day job at jAlbum but later resigned to focus on Minecraft full-time as sales of the alpha version surged. Updates were distributed automatically, introducing new blocks, items, mobs, and changes to game mechanics such as water flow. With revenue generated from the game, Persson founded Mojang, a video game studio, alongside former colleagues Jakob Porser and Carl Manneh. On 11 December 2010, Persson announced that Minecraft would enter its beta phase on 20 December. He assured players that bug fixes and all pre-release updates would remain free. As development progressed, Mojang expanded, hiring additional employees to work on the project. The game officially exited beta and launched in full on 18 November 2011. On 1 December 2011, Jens "Jeb" Bergensten took full creative control over Minecraft, replacing Persson as lead designer. On 28 February 2012, Mojang announced the hiring of the developers behind Bukkit, a popular developer API for Minecraft servers, to improve Minecraft's support of server modifications. This move included Mojang taking apparent ownership of the CraftBukkit server mod, though the acquisition later became controversial and its legitimacy was questioned, owing to CraftBukkit's open-source nature and its licensing under the GNU General Public License and Lesser General Public License. In August 2011, Minecraft: Pocket Edition was released as an early alpha for the Xperia Play via the Android Market, later expanding to other Android devices on 8 October 2011. The iOS version followed on 17 November 2011.
A port was made available for Windows Phones shortly after Microsoft acquired Mojang. Unlike Java Edition, Pocket Edition initially focused on Minecraft's creative building and basic survival elements but lacked many features of the PC version. Bergensten confirmed on Twitter that the Pocket Edition was written in C++ rather than Java, as iOS does not support Java. On 10 December 2014, a port of Pocket Edition was released for Windows Phone 8.1. In July 2015, a port of the Pocket Edition to Windows 10 was released as the Windows 10 Edition, with full cross-play with other Pocket versions. In January 2017, Microsoft announced that it would no longer maintain the Windows Phone versions of Pocket Edition. On 20 September 2017, with the "Better Together Update", the Pocket Edition was ported to the Xbox One and renamed the Bedrock Edition. The console versions of Minecraft debuted with the Xbox 360 Edition, developed by 4J Studios and released on 9 May 2012. Announced as part of the Xbox Live Arcade NEXT promotion, this version introduced a redesigned crafting system, a new control interface, in-game tutorials, split-screen multiplayer, and online play via Xbox Live. Unlike the PC version, its worlds were finite, bordered by invisible walls. Initially, the Xbox 360 version resembled outdated PC versions but received updates to bring it closer to Java Edition before eventually being discontinued. The Xbox One version launched on 5 September 2014, featuring larger worlds and support for more players. Minecraft expanded to PlayStation platforms with PlayStation 3 and PlayStation 4 editions released on 17 December 2013 and 4 September 2014, respectively. Originally planned as a PS4 launch title, it was delayed before its eventual release. A PlayStation Vita version followed in October 2014. Like the Xbox versions, the PlayStation editions were developed by 4J Studios. Nintendo platforms received Minecraft: Wii U Edition on 17 December 2015, with a physical release in North America on 17 June 2016 and in Europe on 30 June. The Nintendo Switch version launched via the eShop on 11 May 2017. During a Nintendo Direct presentation on 13 September 2017, Nintendo announced that Minecraft: New Nintendo 3DS Edition, based on the Pocket Edition, would be available for download immediately after the livestream, with a physical copy available at a later date. The game is compatible only with the New Nintendo 3DS or New Nintendo 2DS XL systems and does not work with the original 3DS or 2DS systems. On 20 September 2017, the Better Together Update introduced Bedrock Edition across Xbox One, Windows 10, VR, and mobile platforms, enabling cross-play between these versions. Bedrock Edition later expanded to Nintendo Switch and PlayStation 4, with the latter receiving the update in December 2019, allowing cross-platform play for users with a free Xbox Live account. A native PlayStation 5 version of the Bedrock Edition was released on 22 October 2024, while the Xbox Series X/S version launched on 17 June 2025. On 18 December 2018, the PlayStation 3, PlayStation Vita, Xbox 360, and Wii U versions of Minecraft received their final update and would later become known as "Legacy Console Editions". On 15 January 2019, the New Nintendo 3DS version of Minecraft received its final update, effectively becoming discontinued as well. An educational version of Minecraft, designed for use in schools, launched on 1 November 2016. It is available on Android, ChromeOS, iPadOS, iOS, MacOS, and Windows.
On 20 August 2018, Mojang announced that it would bring Education Edition to iPads in autumn 2018. It was released to the App Store on 6 September 2018. On 27 March 2019, it was announced that it would be operated by JD.com in China. On 26 June 2020, a public beta for the Education Edition was made available to Google Play Store-compatible Chromebooks. The full game was released to the Google Play Store for Chromebooks on 7 August 2020. On 20 May 2016, China Edition (also known as My World) was announced as a localized edition for China, where it was released under a licensing agreement between NetEase and Mojang. The PC edition was released for public testing on 8 August 2017. The iOS version was released on 15 September 2017, and the Android version was released on 12 October 2017. The PC edition is based on the original Java Edition, while the iOS and Android mobile versions are based on the Bedrock Edition. The edition is free-to-play and had over 700 million registered accounts by September 2023. The Windows version of Bedrock Edition is exclusive to Microsoft's Windows 10 and Windows 11 operating systems. The beta release for Windows 10 launched on the Windows Store on 29 July 2015. After nearly a year and a half in beta, Microsoft fully released the version on 19 December 2016. Called the "Ender Update", this release added new features to this version of Minecraft, such as world templates and add-on packs. On 7 June 2022, the Java and Bedrock Editions of Minecraft were merged into a single bundle for purchase on Windows; those who owned one version would automatically gain access to the other version. Both game versions would otherwise remain separate. Around 2011, prior to Minecraft's full release, Mojang collaborated with The Lego Group to create a Lego brick-based Minecraft game called Brickcraft. This would have modified the base Minecraft game to use Lego bricks, which meant adapting the basic 1×1 block to account for larger pieces typically used in Lego sets. Persson worked on an early version called "Project Rex Kwon Do", named after the character Rex Kwon Do from the film Napoleon Dynamite. Although Lego approved the project and Mojang assigned two developers for six months, it was canceled due to the Lego Group's demands, according to Mojang's Daniel Kaplan. Lego considered buying Mojang to complete the game, but when Microsoft offered over $2 billion for the company, Lego stepped back, unsure of Minecraft's potential. On 26 June 2025, a build of Brickcraft dated 28 June 2012 was published on the community archive website Omniarchive. Initially, Markus Persson planned to support the Oculus Rift with a Minecraft port. However, after Facebook acquired Oculus in 2014, he abruptly canceled the plans, stating, "Facebook creeps me out." In 2016, a community-made mod, Minecraft VR, added VR support for Java Edition, followed by Vivecraft for HTC Vive. Later that year, Microsoft introduced official Oculus Rift support for Windows 10 Edition, leading to the discontinuation of the Minecraft VR mod due to trademark complaints. Vivecraft was endorsed by Minecraft VR contributors for its Rift support. Also available is a Gear VR version, titled Minecraft: Gear VR Edition. Windows Mixed Reality support was added in 2017. On 7 September 2020, Mojang Studios announced that the PlayStation 4 Bedrock version would receive PlayStation VR support later that month.
In September 2024, the Minecraft team announced they would no longer support PlayStation VR, which received its final update in March 2025. Music and sound design Minecraft's music and sound effects were produced by German musician Daniel Rosenfeld, better known as C418. To create the sound effects for the game, Rosenfeld made extensive use of Foley techniques. On learning the process, he remarked, "Foley's an interesting thing, and I had to learn its subtleties. Early on, I wasn't that knowledgeable about it. It's a whole trial-and-error process. You just make a sound and eventually you go, 'Oh my God, that's it! Get the microphone!' There's no set way of doing anything at all." He reminisced about creating the in-game sound for grass blocks, stating "It turns out that to make grass sounds you don't actually walk on grass and record it, because grass sounds like nothing. What you want to do is get a VHS, break it apart, and just lightly touch the tape." According to Rosenfeld, his favorite sound to design for the game was the hisses of spiders. He elaborates, "I like the spiders. Recording that was a whole day of me researching what a spider sounds like. Turns out, there are spiders that make little screeching sounds, so I think I got this recording of a fire hose, put it in a sampler, and just pitched it around until it sounded like a weird spider was talking to you." Many of Rosenfeld's sound design decisions were made accidentally or spontaneously. The creeper notably lacks any specific noises apart from a loud fuse-like sound when about to explode; Rosenfeld later recalled "That was just a complete accident by Markus and me [sic]. We just put in a placeholder sound of burning a matchstick. It seemed to work hilariously well, so we kept it." On other sounds, such as those of the zombie, Rosenfeld remarked, "I actually never wanted the zombies so scary. I intentionally made them sound comical. It's nice to hear that they work so well [...]." Rosenfeld remarked that the sound engine was "terrible" to work with, remembering "If you had two song files at once, it [the game engine] would actually crash. There were so many more weird glitches like that the guys never really fixed because they were too busy with the actual game and not the sound engine." The background music in Minecraft consists of instrumental ambient music. To compose the music of Minecraft, Rosenfeld used Ableton Live, along with several additional plug-ins. Speaking of the plug-ins, Rosenfeld said "They can be pretty much everything from an effect to an entire orchestra. Additionally, I've got some synthesizers that are attached to the computer. Like a Moog Voyager, Dave Smith Prophet 08 and a Virus TI." On 4 March 2011, Rosenfeld released a soundtrack titled Minecraft – Volume Alpha; it includes most of the tracks featured in Minecraft, as well as other music not featured in the game. Kirk Hamilton of Kotaku chose the music in Minecraft as one of the best video game soundtracks of 2011. On 9 November 2013, Rosenfeld released the second official soundtrack, titled Minecraft – Volume Beta, which included the music that was added in a 2013 "Music Update" for the game. A physical release of Volume Alpha, consisting of CDs, black vinyl, and limited-edition transparent green vinyl LPs, was issued by indie electronic label Ghostly International on 21 August 2015.
On 14 August 2020, Ghostly released Volume Beta on CD and vinyl, with alternate color LPs and lenticular cover pressings released in limited quantities. The final update Rosenfeld worked on was 2018's 1.13 Update Aquatic. His music remained the only music in the game until 2020's "Nether Update", which introduced pieces by Lena Raine. Since then, other composers have made contributions, including Kumi Tanioka, Samuel Åberg, Aaron Cherof, and Amos Roddy, with Raine serving as the new primary composer. Ownership of all music besides Rosenfeld's independently released albums has been retained by Microsoft, with its label publishing all of the other artists' releases. Gareth Coker also composed some of the music for the mini games in the Legacy Console editions. Rosenfeld had stated his intent to create a third album of music for the game in a 2015 interview with Fact, and confirmed its existence in a 2017 tweet, stating that his work on the record had by then grown longer than the previous two albums combined, which together run over 3 hours and 18 minutes. However, due to licensing issues with Microsoft, the third volume has yet to see release. On 8 January 2021, Rosenfeld was asked in an interview with Anthony Fantano whether there was still a third volume of his music intended for release. Rosenfeld responded, saying, "I have something—I consider it finished—but things have become complicated, especially as Minecraft is now a big property, so I don't know." Reception Minecraft has received critical acclaim, with praise for the creative freedom it grants players in-game, as well as the ease of enabling emergent gameplay. Critics have expressed enjoyment in Minecraft's complex crafting system, commenting that it is an important aspect of the game's open-ended gameplay. Most publications were impressed by the game's "blocky" graphics, with IGN describing them as "instantly memorable". Reviewers also liked the game's adventure elements, noting that the game creates a good balance between exploring and building. The game's multiplayer feature has generally been received favorably, with IGN commenting that "adventuring is always better with friends". Jaz McDougall of PC Gamer said Minecraft is "intuitively interesting and contagiously fun, with an unparalleled scope for creativity and memorable experiences". It has been regarded as having introduced millions of children to the digital world, insofar as its basic game mechanics are logically analogous to computer commands. IGN was disappointed by the troublesome steps needed to set up multiplayer servers, calling it a "hassle". Critics also said that visual glitches occur periodically. Despite its release out of beta in 2011, GameSpot said the game had an "unfinished feel", adding that some game elements seem "incomplete or thrown together in haste". A review of the alpha version, by Scott Munro of the Daily Record, called it "already something special" and urged readers to buy it. Jim Rossignol of Rock Paper Shotgun also recommended the alpha of the game, calling it "a kind of generative 8-bit Lego Stalker". On 17 September 2010, gaming webcomic Penny Arcade began a series of comics and news posts about the addictiveness of the game. The Xbox 360 version was generally received positively by critics, but did not receive as much praise as the PC version.
Although reviewers were disappointed by the lack of features such as mod support and content from the PC version, they praised the port's addition of a tutorial, in-game tips, and crafting recipes, saying that these make the game more user-friendly. The Xbox One Edition was one of the best-received ports, being praised for its relatively large worlds. The PlayStation 3 Edition also received generally favorable reviews, being compared to the Xbox 360 Edition and praised for its well-adapted controls. The PlayStation 4 edition was the best-received port to date, praised for having worlds 36 times larger than those of the PlayStation 3 edition and described as nearly identical to the Xbox One edition. The PlayStation Vita Edition received generally positive reviews from critics but was noted for its technical limitations. The Wii U version received generally positive reviews from critics but was noted for a lack of GamePad integration. The 3DS version received mixed reviews, being criticized for its high price, technical issues, and lack of cross-platform play. The Nintendo Switch Edition received fairly positive reviews from critics, being praised, like other modern ports, for its relatively larger worlds. Minecraft: Pocket Edition initially received mixed reviews from critics. Although reviewers appreciated the game's intuitive controls, they were disappointed by the lack of content. The inability to collect resources and craft items, as well as the limited types of blocks and lack of hostile mobs, were especially criticized. After updates added more content, Pocket Edition started receiving more positive reviews. Reviewers complimented the controls and the graphics, but still noted a lack of content. Minecraft surpassed a million purchases less than a month after entering its beta phase in early 2011. At the same time, the game had no publisher backing and had never been commercially advertised except through word of mouth, and various unpaid references in popular media such as the Penny Arcade webcomic. By April 2011, Persson estimated that Minecraft had made €23 million (US$33 million) in revenue, with 800,000 sales of the alpha version of the game, and over 1 million sales of the beta version. In November 2011, prior to the game's full release, Minecraft beta surpassed 16 million registered users and 4 million purchases. By March 2012, Minecraft had become the 6th best-selling PC game of all time. As of 10 October 2014, the game had sold 17 million copies on PC, becoming the best-selling PC game of all time. On 25 February 2014, the game reached 100 million registered users. By May 2019, 180 million copies had been sold across all platforms, making it the single best-selling video game of all time. The free-to-play Minecraft China version had over 700 million registered accounts by September 2023. By 2023, the game had sold over 300 million copies. As of April 2025, Minecraft has sold over 350 million copies. The Xbox 360 version of Minecraft became profitable within the first day of the game's release in 2012, when the game broke Xbox Live sales records with 400,000 players online. Within a week of being on the Xbox Live Marketplace, Minecraft sold a million copies. GameSpot announced in December 2012 that Minecraft had sold over 4.48 million copies since the game debuted on Xbox Live Arcade in May 2012. In 2012, Minecraft was the most purchased title on Xbox Live Arcade; it was also the fourth most played title on Xbox Live based on average unique users per day.
As of 4 April 2014, the Xbox 360 version had sold 12 million copies. In addition, Minecraft: Pocket Edition had reached 21 million in sales. The PlayStation 3 Edition sold one million copies in five weeks. The release of the game's PlayStation Vita version boosted Minecraft sales by 79%, outselling both PS3 and PS4 debut releases and becoming the largest Minecraft launch on a PlayStation console. The PS Vita version sold 100,000 digital copies in Japan within the first two months of release, according to an announcement by SCE Japan Asia. By January 2015, 500,000 digital copies of Minecraft had been sold in Japan across all PlayStation platforms, with a surge in primary school children purchasing the PS Vita version. As of 2022, the Vita version has sold over 1.65 million physical copies in Japan, making it the best-selling Vita game in the country. Minecraft helped improve Microsoft's total first-party revenue by $63 million for the 2015 second quarter. The game, including all of its versions, had over 112 million monthly active players by September 2019. On its 11th anniversary in May 2020, the company announced that Minecraft had reached over 200 million copies sold across platforms with over 126 million monthly active players. By April 2021, the number of active monthly users had climbed to 140 million. In July 2010, PC Gamer listed Minecraft as the fourth-best game to play at work. In December of that year, Good Game selected Minecraft as their choice for Best Downloadable Game of 2010, Gamasutra named it the eighth best game of the year as well as the eighth best indie game of the year, and Rock, Paper, Shotgun named it the "game of the year". Indie DB awarded the game the 2010 Indie of the Year award as chosen by voters, in addition to two out of five Editor's Choice awards for Most Innovative and Best Singleplayer Indie. It was also awarded Game of the Year by PC Gamer UK. The game was nominated for the Seumas McNally Grand Prize, Technical Excellence, and Excellence in Design awards at the March 2011 Independent Games Festival and won the Grand Prize and the community-voted Audience Award. At the Game Developers Choice Awards 2011, Minecraft won awards in the categories for Best Debut Game, Best Downloadable Game and Innovation Award, winning every award for which it was nominated. It also won GameCity's video game arts award. On 5 May 2011, Minecraft was selected as one of the 80 games that would be displayed at the Smithsonian American Art Museum as part of The Art of Video Games exhibit that opened on 16 March 2012. At the 2011 Spike Video Game Awards, Minecraft won the award for Best Independent Game and was nominated in the Best PC Game category. In 2012, at the British Academy Video Games Awards, Minecraft was nominated in the GAME Award of 2011 category and Persson received The Special Award. In 2012, Minecraft XBLA was awarded a Golden Joystick Award in the Best Downloadable Game category, and a TIGA Games Industry Award in the Best Arcade Game category. In 2013, it was nominated as the family game of the year at the British Academy Video Games Awards. During the 16th Annual D.I.C.E. Awards, the Academy of Interactive Arts & Sciences nominated the Xbox 360 version of Minecraft for "Strategy/Simulation Game of the Year". Minecraft Console Edition won the TIGA Game of the Year award in 2014. In 2015, the game placed 6th on USgamer's The 15 Best Games Since 2000 list. In 2016, Minecraft placed 6th on Time's The 50 Best Video Games of All Time list.
Minecraft was nominated for the 2013 Kids' Choice Awards for Favorite App, but lost to Temple Run. It was nominated for the 2014 Kids' Choice Awards for Favorite Video Game, but lost to Just Dance 2014. The game later won the award for the Most Addicting Game at the 2015 Kids' Choice Awards. In addition, the Java Edition was nominated for "Favorite Video Game" at the 2018 Kids' Choice Awards, while the game itself won the "Still Playing" award at the 2019 Golden Joystick Awards, as well as the "Favorite Video Game" award at the 2020 Kids' Choice Awards. Minecraft also won "Stream Game of the Year" at the inaugural Streamer Awards in 2021. The game later garnered a Nickelodeon Kids' Choice Award nomination for Favorite Video Game in 2021, and won the same category in 2022 and 2023. At the Golden Joystick Awards 2025, it won the Still Playing Award – PC and Console. Minecraft has been subject to several notable controversies. In June 2014, Mojang announced that it would begin enforcing the portion of Minecraft's end-user license agreement (EULA) which prohibits servers from giving in-game advantages to players in exchange for donations or payments. Spokesperson Owen Hill stated that servers could still require players to pay a fee to access the server and could sell in-game cosmetic items. The change was supported by Persson, citing emails he received from parents of children who had spent hundreds of dollars on servers. The Minecraft community and server owners protested, arguing that the EULA's terms were broader than Mojang claimed, that the crackdown would force smaller servers to shut down for financial reasons, and that Mojang was suppressing competition for its own Minecraft Realms subscription service. The controversy contributed to Persson's decision to sell Mojang. In 2020, Mojang announced an eventual change to the Java Edition to require a login from a Microsoft account rather than a Mojang account, the latter of which would be sunsetted. This also required Java Edition players to create Xbox network Gamertags. Mojang defended the move to Microsoft accounts by saying that improved security could be offered, including two-factor authentication, blocking cyberbullies in chat, and improved parental controls. The community responded with intense backlash, citing various technical difficulties encountered in the process and how account migration would be mandatory, even for those who do not play on servers. As of 10 March 2022, Microsoft required that all players migrate in order to maintain access to the Java Edition of Minecraft. Mojang announced a deadline of 19 September 2023 for account migration, after which all legacy Mojang accounts became inaccessible and unable to be migrated. In June 2022, Mojang added a player-reporting feature to Java Edition. Players could report other players on multiplayer servers for sending messages prohibited by the Xbox Live Code of Conduct; report categories included profane language,[l] substance abuse, hate speech, threats of violence, and nudity. If a player was found to be in violation of Xbox Community Standards, they would be banned from all servers for a specific period of time or permanently. The update containing the report feature (1.19.1) was released on 27 July 2022. Mojang received substantial backlash and protest from community members, one of the most common complaints being that banned players would be forbidden from joining any server, even private ones.
Others took issue with what they saw as Microsoft increasing control over its player base and exercising censorship, leading some to start the hashtag #saveminecraft and dub the version "1.19.84", a reference to the dystopian novel Nineteen Eighty-Four. The "Mob Vote" was an online event organized by Mojang in which the Minecraft community voted between three original mob concepts. Initially, the winning mob was to be implemented in a future update while the losing mobs were scrapped; after the first Mob Vote this was changed, and losing mobs would have a chance to come to the game in the future. The first Mob Vote was held during Minecon Earth 2017 and became an annual event starting with Minecraft Live 2020. The Mob Vote was often criticized for forcing players to choose one mob instead of implementing all three, causing divisions and flaming within the community, and potentially allowing internet bots and Minecraft content creators with large fanbases to conduct vote brigading. The Mob Vote was also blamed for a perceived lack of new content added to Minecraft since Microsoft's acquisition of Mojang in 2014. The 2023 Mob Vote featured three passive mobs—the crab, the penguin, and the armadillo—with voting scheduled to start on 13 October. In response, a Change.org petition was created on 6 October, demanding that Mojang eliminate the Mob Vote and instead implement all three mobs going forward. The petition received approximately 445,000 signatures by 13 October and was joined by calls to boycott the Mob Vote, as well as a partially tongue-in-cheek "revolutionary" propaganda campaign in which sympathizers created anti-Mojang and pro-boycott posters in the vein of real 20th-century propaganda posters. Mojang did not release an official response to the boycott, and the Mob Vote otherwise proceeded normally, with the armadillo winning the vote. In September 2024, as part of a blog post detailing their future plans for Minecraft's development, Mojang announced the Mob Vote would be retired. Cultural impact In September 2019, The Guardian classified Minecraft as the best video game of the 21st century to date, and in November 2019, Polygon called it the "most important game of the decade" in its 2010s "decade in review". In June 2020, Minecraft was inducted into the World Video Game Hall of Fame. Minecraft is recognized as one of the first successful games to use an early access model, drawing in sales prior to its full release to help fund development. Minecraft helped to bolster indie game development in the early 2010s and also helped to popularize the use of the early access model within it. Social media sites such as YouTube, Facebook, and Reddit have played a significant role in popularizing Minecraft. Research conducted by the Annenberg School for Communication at the University of Pennsylvania showed that one-third of Minecraft players learned about the game via Internet videos. In 2010, Minecraft-related videos, often made by commentators, began to gain influence on YouTube. The videos usually contain screen-capture footage of the game and voice-overs. Common coverage in the videos includes creations made by players, walkthroughs of various tasks, and parodies of works in popular culture. By May 2012, over four million Minecraft-related YouTube videos had been uploaded. The game would go on to be a prominent fixture within YouTube's gaming scene during the entire 2010s; in 2014, it was the second-most searched term on the entire platform.
As of 2018, it was still YouTube's biggest game globally. Some popular commentators have received employment at Machinima, a now-defunct gaming video company that owned a highly watched entertainment channel on YouTube. The Yogscast is a British company that regularly produces Minecraft videos; their YouTube channel has attained billions of views, and their panel at Minecon 2011 had the highest attendance. Another well-known YouTube personality is Jordan Maron, known online as CaptainSparklez, who has also created many Minecraft music parodies, including "Revenge", a parody of Usher's "DJ Got Us Fallin' in Love". Minecraft's popularity on YouTube was described by Polygon as quietly dominant, although in 2019, thanks in part to PewDiePie's playthroughs of the game, Minecraft experienced a visible uptick in popularity on the platform. Longer-running series include Far Lands or Bust, dedicated to reaching the obsolete "Far Lands" glitch on foot in an older version of the game. On 14 December 2021, YouTube announced that the total number of Minecraft-related views on the website had exceeded one trillion. Minecraft has been referenced by other video games, such as Torchlight II, Team Fortress 2, Borderlands 2, Choplifter HD, Super Meat Boy, The Elder Scrolls V: Skyrim, The Binding of Isaac, The Stanley Parable, and FTL: Faster Than Light. Minecraft is officially represented in downloadable content for the crossover fighter Super Smash Bros. Ultimate, with Steve as a playable character with a moveset including references to building, crafting, and redstone, alongside an Overworld-themed stage. It was also referenced by electronic music artist Deadmau5 in his performances. The game is also referenced heavily in "Informative Murder Porn", the second episode of the seventeenth season of the animated television series South Park. In 2025, A Minecraft Movie was released. It made $313 million at the box office in its first week, a record-breaking opening for a video game adaptation. Minecraft has been noted as a cultural touchstone for Generation Z, as many of the generation's members played the game at a young age. The possible applications of Minecraft have been discussed extensively, especially in the fields of computer-aided design (CAD) and education. In a panel at Minecon 2011, a Swedish developer discussed the possibility of using the game to redesign public buildings and parks, stating that rendering using Minecraft was much more user-friendly for the community, making it easier to envision the functionality of new buildings and parks. In 2012, a member of the Human Dynamics group at the MIT Media Lab, Cody Sumter, said: "Notch hasn't just built a game. He's tricked 40 million people into learning to use a CAD program." Various software has been developed to allow virtual designs to be printed using professional 3D printers or personal printers such as MakerBot and RepRap. In September 2012, Mojang began the Block by Block project in cooperation with UN Habitat to create real-world environments in Minecraft. The project allows young people who live in those environments to participate in designing the changes they would like to see. Using Minecraft, the community has helped reconstruct the areas of concern, and citizens are invited to enter the Minecraft servers and modify their own neighborhood.
Carl Manneh, Mojang's managing director, called the game "the perfect tool to facilitate this process", adding "The three-year partnership will support UN-Habitat's Sustainable Urban Development Network to upgrade 300 public spaces by 2016." Mojang signed the Minecraft building community FyreUK to help render the environments into Minecraft. The first pilot project began in Kibera, one of Nairobi's informal settlements, starting with a planning phase. The Block by Block project is based on an earlier initiative started in October 2011, Mina Kvarter (My Block), which gave young people in Swedish communities a tool to visualize how they wanted to change their part of town. According to Manneh, the project was a helpful way to visualize urban planning ideas without necessarily having training in architecture. The ideas presented by the citizens were a template for political decisions. In April 2014, the Danish Geodata Agency generated all of Denmark at full scale in Minecraft based on its own geodata. This was possible because Denmark is one of the flattest countries, with its highest point at 171 meters (ranking as the country with the 30th-smallest elevation span), while the height limit in default Minecraft was around 192 meters above in-game sea level when the project was completed. Taking advantage of the game's accessibility where other websites are censored, the non-governmental organization Reporters Without Borders has used an open Minecraft server to create the Uncensored Library, a repository within the game of journalism by authors from countries (including Egypt, Mexico, Russia, Saudi Arabia and Vietnam) who have been censored and arrested, such as Jamal Khashoggi. The neoclassical virtual building was created over about 250 hours by an international team of 24 people. Despite its unpredictable nature, Minecraft speedrunning, where players time themselves from spawning into a new world to reaching The End and defeating the Ender Dragon boss, is popular. Some speedrunners use a combination of mods, external programs, and debug menus, while other runners play the game in a more vanilla or more consistency-oriented way. Minecraft has been used in educational settings through initiatives such as MinecraftEdu, founded in 2011 to make the game affordable and accessible for schools in collaboration with Mojang. MinecraftEdu provided features allowing teachers to monitor student progress, including screenshot submissions as evidence of lesson completion, and by 2012 reported that approximately 250,000 students worldwide had access to the platform. Mojang also developed Minecraft: Education Edition with pre-built lesson plans for up to 30 students in a closed environment. Educators have used Minecraft to teach subjects such as history, language arts, and science through custom-built environments, including reconstructions of historical landmarks and large-scale models of biological structures such as animal cells. The introduction of redstone blocks enabled the construction of functional virtual machines such as a hard drive and an 8-bit computer. Mods have been created to use these mechanics for teaching programming. In 2014, the British Museum announced a project to reproduce its building and exhibits in Minecraft in collaboration with the public. Microsoft and Code.org have offered Minecraft-based tutorials and activities designed to teach programming, reporting by 2018 that more than 85 million children had used their resources.
In 2025, the Musée de Minéralogie in Paris held a temporary exhibition titled "Minerals in Minecraft." Following the initial surge in popularity of Minecraft in 2010, other video games were criticized for their similarities to Minecraft, and some were described as "clones", whether due to direct inspiration from Minecraft or superficial similarity. Examples include Ace of Spades, CastleMiner, CraftWorld, FortressCraft, Terraria, BlockWorld 3D, Total Miner, and Luanti (formerly Minetest). David Frampton, designer of The Blockheads, reported that one failure of his 2D game was the "low resolution pixel art" that too closely resembled the art in Minecraft, which resulted in "some resistance" from fans. A homebrew adaptation of the alpha version of Minecraft for the Nintendo DS, titled DScraft, has been released; it has been noted for its similarity to the original game considering the technical limitations of the system. In response to Microsoft's acquisition of Mojang and its Minecraft IP, various developers announced further clone titles developed specifically for Nintendo's consoles, as they were the only major platforms not to officially receive Minecraft at the time. These clone titles include UCraft (Nexis Games), Cube Life: Island Survival (Cypronia), Discovery (Noowanda), Battleminer (Wobbly Tooth Games), Cube Creator 3D (Big John Games), and Stone Shire (Finger Gun Games). Ultimately, fans' fears proved unfounded, as official Minecraft releases on Nintendo consoles eventually resumed. Markus Persson made another similar game, Minicraft, for a Ludum Dare competition in 2011. In 2025, Persson announced through a poll on his X account that he was considering developing a spiritual successor to Minecraft. He later clarified that he was "100% serious", and that he had "basically announced Minecraft 2". Within days, however, Persson cancelled the plans after speaking to his team. In November 2024, artificial intelligence companies Decart and Etched released Oasis, an artificially generated version of Minecraft, as a proof of concept. Every in-game element is completely AI-generated in real time and the model does not store world data, leading to "hallucinations" such as items and blocks appearing that were not there before. In January 2026, indie game developer Unomelon announced that their voxel sandbox game Allumeria would be playable in Steam Next Fest that year. On 10 February, Mojang issued a DMCA takedown of Allumeria on Steam through Valve, alleging the game was infringing on Minecraft's copyright. Some reports suggested that the takedown may have used an automatic AI copyright claiming service. The DMCA was later withdrawn. Minecon was an annual official fan convention dedicated to Minecraft. The first full Minecon was held in November 2011 at the Mandalay Bay Hotel and Casino in Las Vegas. The event included the official launch of Minecraft; keynote speeches, including one by Persson; building and costume contests; Minecraft-themed breakout classes; exhibits by leading gaming and Minecraft-related companies; commemorative merchandise; and autograph and picture times with Mojang employees and well-known contributors from the Minecraft community. In 2016, Minecon was held in person for the last time, with the following years featuring annual "Minecon Earth" livestreams on minecraft.net and YouTube instead. These livestreams, later rebranded as "Minecraft Live", included the mob and biome votes and announcements of new game updates.
In 2025, "Minecraft Live" became a biannual event as part of Minecraft's changing update schedule.[citation needed] Notes References External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Namco] | [TOKENS: 9014]
Contents Namco Namco Limited[a] (formerly known as Nakamura Seisakusho, Nakamura Manufacturing Company and Nakamura Amusement Machine Company) was a Japanese multinational video game and entertainment company founded in 1955. It operated video arcades and amusement parks globally, and produced video games, films, toys, and arcade cabinets. Namco was one of the most influential companies in the coin-op and arcade game industry, producing multi-million-selling game franchises such as Pac-Man, Galaxian, Tekken, Soulcalibur, Tales, Ridge Racer, and Ace Combat. The name Namco comes from Nakamura Manufacturing Company, derived from the name of its founder, Masaya Nakamura. In the 1960s, Nakamura Manufacturing built electro-mechanical arcade games such as the 1965 hit Periscope. It entered the video game industry after acquiring the struggling Japanese division of Atari in 1974, distributing games such as Breakout in Japan. The company renamed itself Namco in 1977 and published Gee Bee, its first original video game, a year later. Among Namco's first major hits was the fixed shooter Galaxian in 1979, followed by Pac-Man in 1980. Namco prospered during the golden age of arcade video games in the early 1980s, releasing popular games such as Galaga, Xevious, and Pole Position. Namco entered the home market in 1984 with conversions of its arcade games for the MSX and the Nintendo Family Computer, later expanding to competing platforms, such as the Sega Genesis, TurboGrafx-16, and PlayStation. It continued to produce hit games in the 1990s, including Ridge Racer, Tekken, and Taiko no Tatsujin, but endured financial difficulties due to the struggling Japanese economy and diminishing arcade market. In 2006, Namco merged with Bandai to form Bandai Namco Holdings. The standalone Namco brand continues to be used for video arcade and other entertainment products by the group's Bandai Namco Amusements division. Namco's video games division was merged into the subsidiary Bandai Namco Entertainment. Namco is remembered for its unique corporate model, its importance to the industry, and its advancements in technology. History On June 1, 1955, Japanese businessman Masaya Nakamura founded Nakamura Seisakusho Co., Ltd.,[b] in Ikegami, Tokyo. The son of a shotgun repair business owner, Nakamura proved unable to find work in his chosen profession of shipbuilding in the struggling post-World War II economy. Nakamura established his own company after his father's business saw success with producing pop cork guns. Beginning with only ¥300,000 (US$12,000), Nakamura spent the money on two hand-cranked rocking horses that he installed on the roof garden of a Matsuya department store in Yokohama. The horses were loved by children and turned a decent profit for Nakamura, who began expanding his business to cover other smaller locations. A 1959 business reorganization renamed the company Nakamura Seisakusho Company, Ltd. The Mitsukoshi department store chain noticed his success in 1963, and approached him with the idea of constructing a rooftop amusement space for its store in Nihonbashi, Tokyo. It consisted of horse rides, a picture viewing machine, and a goldfish scooping pond, with the centerpiece being a moving train named Roadaway Race. The space was a hit and led to Mitsukoshi requesting rooftop amusement parks for all of its stores. Along with Taito, Rosen Enterprises, and Nihon Goraku Bussan, Nakamura Seisakusho became one of Japan's leading amusement companies.
As the business grew in size, it used its clout to purchase amusement machines in bulk from other manufacturers at a discount, and then sell them to smaller outlets at full price. While its machines sold well, Nakamura Seisakusho lacked the manufacturing lines and distribution networks of its competitors, which made producing them slower and more expensive. The company was unable to place its machines inside stores because other manufacturers already had exclusive rights to these locations. In response, Nakamura Seisakusho opened a production plant in February 1966, moving its corporate office to a four-story building in Ōta, Tokyo. The company secured a deal with Walt Disney Productions to produce children's rides in the likenesses of its characters, in addition to those using popular anime characters like Q-Taro; this move allowed the business to further expand its operations and become a driving force in the Japanese coin-op market. Though the manufacturing facility was largely reserved for its Disney and anime rides, Nakamura also used it to construct larger, more elaborate electro-mechanical games. The first of these was Torpedo Launcher (1965), a submarine warfare shooting gallery later titled Periscope. Its other products included Ultraman-themed gun games and pinball-like games branded with Osomatsu-kun characters. The name Namco was introduced in 1971 as a brand for several of its machines. The company grew to ten employees, including Nakamura himself. It saw continued success with its arcade games, which had become commonplace in bowling alleys and grocery stores. The company also established a robotics division to produce robots for entertainment centers and festivals, such as robots that distributed pamphlets, ribbon-making machines, and a robot named Putan that solved pre-built mazes. In August 1973, American game company Atari began establishing a series of divisions in Asia, one of which was named Atari Japan. Its president, Kenichi Takumi, approached Nakamura in early 1974 to have his business become the distributor of Atari games across Japan. Nakamura, already planning global expansion following his company's success, agreed to the deal. In part due to employee theft, Atari Japan was a financial disaster and nearly collapsed in its first few years of operation. When Takumi stopped showing up to work, the company was handed to Hideyuki Nakajima, a former employee of the Japan Art Paper Company. Atari co-founder Nolan Bushnell, whose company was already struggling in America, chose to sell the Japanese division. His fixer, Ron Gordon, was given the task of finding a buyer for Atari Japan. After being turned down by Sega and Taito, Gordon's offer was accepted by Nakamura for ¥296 million ($1.18M), though Nakamura informed Bushnell his company was unable to pay the money by the deadline. With no other takers for Atari Japan, Bushnell ultimately allowed Nakamura to pay only $550,000 and then $250,000 a year for three years. The acquisition allowed Nakamura Seisakusho to distribute Atari games across Japan, and would make it one of the country's largest arcade game companies. The Atari Japan purchase was not an immediate success, in part due to the medal game fad of the 1970s. While Nakamura Seisakusho saw some success with imports such as Kee Games's Tank, the decline in the Japanese video game industry's popularity kept them from being as profitable as hoped.
The market became more viable once restrictions on medal games were imposed by the Japanese government in 1976, as Nakamura Seisakusho began returning higher profits; its import of Atari's Breakout was so successful that it led to rampant piracy in the industry. By the end of the year, Nakamura Seisakusho was one of Japan's leading video game companies. Nakamura Seisakusho changed its corporate name to Namco in June 1977. It opened a division in Hong Kong named Namco Enterprises Asia, which maintained video arcades and amusement centers. As Namco's presence in Japan was steadily rising, Nakajima suggested to Nakamura that he open a division in the United States to increase worldwide brand awareness. Nakamura agreed to the proposal, and on September 1, 1978, established Namco-America in Sunnyvale, California. With Nakajima as its president and Satish Bhutani as vice president, Namco-America's aim was to import games and license them to companies such as Atari and Bally Manufacturing. Namco-America would release a few non-video arcade games itself, such as Shoot Away (1977). As the video game industry prospered in Japan during the 1970s with the release of Taito's Space Invaders, Namco turned its attention towards making its own video games. While its licensed Atari games were still profitable, sales were decreasing and the quality of the hardware used began deteriorating. Per the recommendation of company engineer Shigekazu Ishimura, the company retrofitted its Ōta manufacturing facility into a small game division and purchased old stock computers from NEC for employees to study. Namco released Gee Bee, its first original game, in October 1978. Designed by new hire Toru Iwatani, it is a video pinball game that incorporates elements from Breakout and similar "block breaker" clones. Though Gee Bee fell short of the company's sales expectations and was unable to compete with games such as Space Invaders, it allowed Namco to gain a stronger foothold in the video game market. In 1979, Namco published its first major hit, Galaxian, one of the first video games to incorporate RGB color graphics, score bonuses, and a tilemap hardware model. Galaxian is considered historically important for these innovations, and for its mechanics building off those in Space Invaders. It was released in North America by Midway Manufacturing, the video game division of Bally, where it became one of its best-selling games and formed a relationship between Midway and Namco. The space shooter genre became ubiquitous by the end of the decade, with games such as Galaxian and Space Invaders becoming commonplace in Japanese amusement centers. As video games often depicted the killing of enemies and shooting of targets, the industry possessed a predominantly male playerbase. Toru Iwatani began work on a maze video game that was targeted primarily towards women, with simplistic gameplay and recognizable characters. Alongside a small team, he created a game named Puck Man, where players controlled a character that had to eat dots in an enclosed maze while avoiding four ghosts that pursued them. Iwatani based the gameplay on eating and designed its characters with soft colors and simplistic facial features. Puck Man was test-marketed in Japan on May 22, 1980 and given a wide-scale release in July. It was only a modest success; players were more accustomed to the shooting gameplay of Galaxian than to Puck Man's visually distinctive characters and gameplay style.
In North America, it was released as Pac-Man in November 1980. Pac-Man's simplicity and abstract characters made it a fixture in popular culture, spawning a multi-million-selling media franchise. Namco regularly released successful games throughout the early 1980s. It published Galaga, the follow-up to Galaxian, in 1981 to critical acclaim, surpassing its predecessor in popularity with its fast-paced action and power-ups. 1982 saw the release of Pole Position, a racing game that was the first to use a real racetrack (the Fuji Speedway) and helped lay down the foundations for the racing genre. It released Dig Dug the same year, a maze chaser that allowed players to create their own mazes. Namco's biggest post-Pac-Man success was the vertical-scrolling shooter Xevious in 1983, designed by new-hire Masanobu Endō. Xevious's early usage of pre-rendered visuals, boss fights, and a cohesive world made it an astounding success in Japan, posting record-breaking sales figures that had not been seen since Space Invaders. The game's success led to merchandise, tournament play, and the first video game soundtrack album. The same year, Namco released Mappy, an early side-scrolling platformer, and the Pole Position sequel Pole Position II. Endō went on to design The Tower of Druaga a year later, a maze game that helped establish the concept for the action role-playing game. Druaga's design influenced games such as Nintendo's The Legend of Zelda. 1984 also saw the release of Pac-Land, a Pac-Man-themed platform game that paved the way for similar games such as Super Mario Bros., and Gaplus, a moderately successful update to Galaga. The success of Namco's arcade games prompted it to launch its own print publication, Namco Community Magazine NG, to allow its fans to connect with developers. In July 1983, Nintendo released the Family Computer, a video game console that utilized interchangeable cartridges to play games. The console's launch came with ports of some of Nintendo's popular arcade games, like Donkey Kong, which at the time were considered high quality. Though Namco recognized the system's potential to allow consumers to play accurate versions of its games, the company chose to hold off on the idea after its ports for platforms such as the Sord M5 flopped. Nakamura suggested that his son-in-law, Shigeichi Ishimura, work with a team to reverse-engineer and study the Famicom's hardware in the meantime. His team created a conversion of Galaxian with their newfound knowledge of the console's capabilities, which exceeded the quality of previous home releases. The port was presented to Nintendo president Hiroshi Yamauchi alongside notification that Namco intended to release it with or without Nintendo's approval. Namco's demonstration was the impetus for Nintendo's decision to create a licensing program for the console. Namco signed a five-year royalties contract that included several preferential terms, such as the ability to produce its own cartridges. A subsidiary named Namcot[c] was established in 1984 to act as Namco's console game division. According to former Namco video game music composer Norio Nakagata, "T" means "Tomorrow" and was capitalized for emphasis. Tomorrow was derived from EPCOT (Experimental Prototype Community of Tomorrow). It released its first four games in September: Galaxian, Pac-Man, Xevious, and Mappy. Xevious sold over 1.5 million copies and became the Famicom's first "killer app". Namcot also began releasing games for the MSX, a popular Japanese computer.
Namco's arcade game ports were considered high quality and helped increase sales of the console. Namcot was financially successful and became an important pillar within the company; when Namco moved its headquarters to Ōta, Tokyo in 1985, it used the profits generated from the Famicom conversion of Xevious to fund its construction (the building was nicknamed "Xevious" as a result). The Talking Aid, a communication device for people with speech impairments, was part of the company's attempts to venture into other markets. By the time the video game crash of 1983 concluded in 1985 with the release of the Nintendo Entertainment System (NES), Atari had effectively collapsed. After enduring numerous financial difficulties and losing its control in the industry, parent Warner Communications sold the company's personal computer and home console divisions to Commodore International founder Jack Tramiel, whose company Tramel Technology was renamed Atari Corporation. Warner was left with Atari's arcade game and computer software divisions, which it renamed Atari Games. Namco America purchased a 60% stake in Atari Games on February 4, 1985, through its AT Games subsidiary, with Warner holding the remaining 40%. The acquisition gave Namco the exclusive rights to distribute Atari games in Japan. Nakamura began losing interest in and patience with Atari Games not long after the acquisition. As he started viewing Atari as a competitor to Namco, he was hesitant to pour additional funds and resources into the company. Nakamura also disliked having to share ownership with Warner Communications. Nakajima grew frustrated with Nakamura's attempts at marketing Atari video games in Japan, and had constant disagreements with him over which direction to take the company. Viewing the majority acquisition as a failure, Namco America sold 33% of its ownership stake in 1987 to a group of Atari Games employees led by Nakajima. This prompted Nakajima to resign from Namco America and become president of Atari Games. He established Tengen, a publisher that challenged Nintendo's licensing restrictions for the NES by selling several unlicensed games, which included ports of Namco arcade games. Though the sell-off made Atari Games an independent entity, Namco still held a minority stake in the company and Nakamura retained his position as its board chairman until the middle of 1988. In Japan, Namco continued to see rapid growth. It published Pro Baseball: Family Stadium for the Famicom, which was critically acclaimed and sold over 2.5 million copies. Its sequel, Pro Baseball: Family Stadium '87, sold an additional two million. In 1986, Namco entered the restaurant industry by acquiring the Italian Tomato café chain. It also released Sweet Land, a popular candy-themed prize machine. One of Namco's biggest hits from the era was the racing game Final Lap from 1987. It is credited as the first arcade game to allow multiple machines to be connected—or "linked"—together to allow for additional players. Final Lap was one of the most profitable coin-operated games of the era in Japan, remaining towards the top of sales charts for the rest of the decade. Namco's continued success in arcades provided its arcade division with the revenue and resources needed to fund its research and development (R&D) departments. Among their first creations was the helicopter shooter Metal Hawk in 1988, fitted in a motion simulator arcade cabinet. Its high development costs prevented it from being mass-produced.
While most of these efforts were commercially unsuccessful, Namco grew interested in motion-based arcade games and began designing them on a larger scale. In 1988, Namco became involved in film production when it distributed the film Mirai Ninja in theaters, with a tie-in video game coinciding with its release. Namco also developed the beat 'em up Splatterhouse, which attracted attention for its fixation on gore and dismemberment, and Gator Panic, a derivative of Whack-a-Mole that became a mainstay in Japanese arcades and entertainment centers. In early 1989, Namco unveiled its System 21 arcade system, one of the earliest arcade boards to utilize true 3D polygonal graphics. The company demonstrated the power of the board, nicknamed "Polygonizer", through the Formula One racer Winning Run. With an arcade cabinet that shook and swayed the player as they drove, the game was seen as "a breakthrough product in terms of programming technique" and garnered significant attention from the press. Winning Run was commercially successful, convincing Namco to continue researching 3D video game hardware. Video arcades under the Namco banner continued opening in Japan and overseas, such as the family-friendly Play City Carrot chain. Namco saw continued success in the consumer game market as a result of the "Famicom boom" in the late 1980s. By 1989, sales of games for the Famicom and NES accounted for 40% of its annual revenue. During the same time frame, the company's licensing contract with Nintendo expired; when Namco attempted to renew its license, Nintendo chose to revoke many of the preferential terms it originally possessed. Hiroshi Yamauchi insisted that all companies, including Namco, had to follow the same guidelines. The revocation of Namco's terms enraged Nakamura, who announced the company would abandon Nintendo hardware and focus on producing games for competing systems such as the PC Engine. Executives resisted the idea, fearing it would severely impact the company financially. Over Nakamura's protests, Namco signed Nintendo's new licensee contract anyway. While it continued to produce games for Nintendo hardware, most of Namco's quality releases appeared on the PC Engine and Mega Drive. In 1989, it was reported that Namco was developing its own video game console to compete with companies such as Nintendo and NEC. Electronic Gaming Monthly claimed that the system, which was nearing completion, featured hardware comparable to the then-upcoming Nintendo Super Famicom. According to company engineer Yutaka Isokawa, it was produced to compete against the Mega Drive, a 16-bit console by Namco's arcade rival Sega. With the console market crowded by competing systems, publications were unsure how well it would perform. While the console was never released, it allowed Namco to familiarize itself with designing home video game hardware. Tadashi Manabe replaced Nakamura as president of Namco on May 2, 1990. Manabe, who had been the company's representative director since 1981, was tasked with strengthening relationships and teamwork within management. Two months later, the company dissolved its remaining connections with Atari Games when Time Warner reacquired Namco America's remaining 40% stake in the company. In return, Namco America was given Atari's video arcade management division, Atari Operations, allowing the company to operate video arcades across the United States.
Namco began distributing games in North America directly from its US office, rather than through Atari. Namco Hometek was established as the home console game division of Namco America; the latter's relations with Atari Games and Tengen made the company ineligible to become a Nintendo third-party licensee, so it instead relied on publishers such as Bandai to release its games in North America. In Japan, Namco developed two theme park attractions, which were demonstrated at the 1990 International Garden and Greenery Exposition (Expo '90): Galaxian3: Project Dragoon, a 3D rail shooter that supported 28 players, and a dark ride based on The Tower of Druaga. As part of the company's idea of "hyperentertainment" video games, Namco engineers had drafted ideas for a possible theme park based on Namco's experience with designing and operating indoor play areas and entertainment complexes. Both attractions were commercially successful and among the most popular of Expo '90's exhibitions. In arcades, Namco released Starblade, a 3D rail shooter noteworthy for its cinematic presentation. This led to Namco dominating the Japanese dedicated arcade cabinet charts by October 1991, holding the top six positions that month with Starblade at the top. In February 1992, Namco opened its own theme park, Wonder Eggs, in the Futakotamagawa Time Spark area in Setagaya, Tokyo. Described as an "urban amusement center", Wonder Eggs was the first amusement park operated by a video game company. In addition to Galaxian3 and The Tower of Druaga, the park featured carnival games, carousels, motion simulators, and Fighter Camp, the first flight simulator available to the public. The park saw consistently high attendance; 500,000 visitors attended in its first few months of operation and over one million by the end of the year. Namco created the park out of its interest in designing a Disneyland-inspired theme park that featured the same kind of stories and characters present in its games. Wonder Eggs contributed to Namco's 34% increase in revenue by December 1992. Namco also designed smaller, indoor theme parks for its larger entertainment complexes across the country, such as Plabo Sennichimae Tempo in Osaka. Manabe resigned as president on May 1, 1992, due to a serious anxiety disorder, and Nakamura once again assumed the role. Manabe instead served as the company's vice chairman until his death in 1994. The company's arcade division, in the meantime, began work on a new 3D arcade board named System 22, capable of displaying polygonal 3D models with fully textured graphics. Namco enlisted the help of Evans & Sutherland, a designer of combat flight simulators for the Pentagon, to assist in the board's development. The System 22 powered Ridge Racer, a racing game, in 1993. Ridge Racer's use of textured 3D polygons and drifting made it a popular game in arcades and one of Namco's most successful releases; it is regarded as a milestone in 3D computer graphics. The company followed this success with Tekken, a 3D fighting game, a year later. Designed by Seiichi Ishii, a co-creator of Sega's landmark fighting game Virtua Fighter, Tekken offered a wide array of playable characters and a consistent framerate that helped it outperform Sega's game in popularity, launching a multi-million-selling franchise. The company continued expanding its operations overseas, such as the acquisition of Bally's Aladdin's Castle, Inc., the owner of the Aladdin's Castle chain of mall arcades.
In December 1993, Namco acquired Nikkatsu, Japan's oldest surviving film studio, which was undergoing bankruptcy procedures at the time. The purchase allowed Nikkatsu to utilize Namco's computer graphics hardware for its films, while Namco was able to gain a foothold in the Japanese film industry. In October 1993, Sony announced that it would establish its video game and entertainment division, Sony Computer Entertainment (now Sony Interactive Entertainment), the following month. In early 1994, Sony announced that it was developing its own video game console, the 32-bit PlayStation. The console began as a collaboration between Nintendo and Sony to create a CD-based peripheral for the Super Nintendo Entertainment System in 1988. Fearing that Sony would assume control of the entire project, Nintendo silently scrapped the add-on. Sony refocused its efforts on designing the PlayStation in-house as its own console. As it lacked the resources to produce its own games, Sony called for the support of third-party companies to develop PlayStation software. Namco, frustrated with Nintendo and Sega's licensing conditions for their consoles, agreed to support the PlayStation and became its first third-party developer. The company began work on a conversion of Ridge Racer, its most popular arcade game at the time. The PlayStation was released in Japan on December 3, 1994, with Ridge Racer as one of its first games. Sony moved 100,000 units on launch day alone; publications attributed the PlayStation's early success to Ridge Racer, which gave the console an edge over its competitor, the Sega Saturn. For a time, it was the best-selling PlayStation game in Japan. The Namcot brand, which Namco had formerly used for its home video games, was consolidated into Namco in 1995; its final game was a PlayStation port of Tekken, published in March in Japan and in November worldwide. Tekken was designed for Namco's System 11 arcade system board, which was based on raw PlayStation hardware; this allowed the home version to be a near-perfect rendition of its arcade counterpart. Tekken became the first PlayStation game to sell one million copies and played a vital role in the console's mainstream success. Sony recognized Namco's commitment to the console; Namco received special treatment from Sony, and early promotional material adopted the tagline "PlayStation: Powered by Namco". Namco was also given the rights to produce controllers, such as the NeGcon, which it designed with the knowledge it gained through developing its cancelled console. Though it had signed contracts to produce games for systems such as the Sega Saturn and 3DO Interactive Multiplayer, Namco concentrated its consumer software efforts on the PlayStation for the remainder of the decade. As a means to draw players into its video arcades, Namco's arcade game division began releasing games that featured unique and novel control styles and gameplay. In 1995, the company released Alpine Racer, an alpine skiing game that was awarded "Best New Equipment" during the year's Amusement and Music Operators Association (AMOA) exposition. Time Crisis, a light gun shooter noteworthy for its pedal-operated ducking mechanic, helped set the standard for the genre as a whole, while Prop Cycle gained attention for its use of a bicycle controller that the player pedaled. The photo booth machine Star Audition, which offered players the chance of becoming a star in show business, became a media sensation in Japan.
Namco Operations, which was renamed Namco Cybertainment in 1996, acquired the Edison Brothers Stores arcade chain in April of that year. Namco also introduced the Postpaid System, a centralized card payment system, as a means to combat the piracy of IC cards in Japanese arcades. In September 1997, Namco announced it would begin development of games for the Nintendo 64, a console struggling to receive support from third-party developers. Namco signed a contract with Nintendo that allowed the company to produce two games for the console: Famista 64, a version of its Family Stadium series, and an untitled RPG for the 64DD peripheral. The RPG was never released, and the 64DD went on to become a commercial failure. In October 1998, Namco announced a partnership with long-time rival Sega to bring some of its games to the newly unveiled Dreamcast, a deal one publication described as "the most stunning alliance this industry has seen in a long while". As Namco primarily developed games for Sony hardware and was among the biggest third-party developers for the PlayStation, the announcement surprised news outlets. Namco had released the weapon-based fighting game Soul Edge in arcades in 1996; its sequel, Soulcalibur, ran on the PlayStation-based System 12 arcade board. Soulcalibur's 1999 Dreamcast version, which features multiple graphical enhancements and new game modes, is an early instance of a console game surpassing its arcade original. It sold over one million units, won multiple awards, and contributed to the early success of the Dreamcast. Namco began experiencing a decline in its consumer software sales by 1998 as a result of the Japanese recession, which reduced the demand for video games. Namco's arcade game Tekken 3, launched in March 1997, had been well received, and the console version of Tekken 2 also became a hit, selling over three million units. The company's arcade division had similar struggles, having slumped by 21% at the end of its fiscal year ending March 1998. Namco's US subsidiary Namco Cybertainment filed for Chapter 11 bankruptcy protection on January 29, 1998, citing reduced mall traffic, though it planned to close fewer than 50 of its 370 mall locations during the bankruptcy reorganization and even open new ones. In its 1998 annual report, Namco reported a 26.3% drop in net sales, which it partly blamed on low consumer spending. A further 55% drop was reported in November 1999 when its home console game output decreased. As a means to diversify beyond its arcade and consumer game markets, Namco entered the mobile phone game market with the Namco Station, a marketplace for i-Mode cellular devices that featured ports of its arcade games like Pac-Man and Galaxian. In October 1999, the company teamed up with former Square developer Tetsuya Takahashi to establish a development studio called Monolith Soft, which became an action role-playing game developer best known for the Xenosaga series; Namco funded the new franchise and took a majority stake in the studio, making it a subsidiary. Namco continued introducing novel concepts for arcades to help attract players, such as the Cyber Lead II, an arcade cabinet that features PlayStation and Dreamcast VMU memory card slots. Namco's financial losses worsened in the 2000s.
In October 2000, the Japanese newspaper Nihon Keizai Shimbun reported that the company projected a loss of ¥2.1 billion ($19.3M) for the fiscal year ending March 2001. Namco had previously hinted at this during an event with industry analysts, blaming its struggles on the depressed Japanese economy and dwindling arcade game market. The company closed its Wonder Eggs park on December 31, 2000, by which point it had drawn six million visitors, and shuttered many of its video arcades that returned substandard profits. In February 2001, Namco updated its projections, now expecting a ¥6.5 billion ($56.3M) net loss and a 95% drop in revenue for the fiscal year ending March 2001, which severely impacted the company's release schedule and corporate structure. The company's earnings forecasts were lowered to accommodate its losses, its development strategy was reorganized to focus largely on established franchises, and 250 of its employees were laid off in what it described as "early retirement". Namco underwent restructuring to increase its income, which included a management reshuffle and the announcement that it would produce games for Nintendo's GameCube and Microsoft's Xbox. Following its financial struggles, Namco's arcade division underwent mass reorganization. The division achieved strong success with Taiko no Tatsujin, a popular drum-based rhythm game where players hit a taiko drum controller to the beat of a song. Taiko no Tatsujin became a best-seller and created one of the company's most popular and prolific franchises. Namco's North American divisions, in the meantime, underwent reorganization and restructuring as a result of decreasing profits. Namco Hometek was stripped of its research and development divisions following Namco's disappointment in the quality of its releases. Namco's continuing expansion into non-video game businesses, including rehabilitation electronics and travel agency websites, prompted the creation of the Namco Incubation Center to control them. The Incubation Center also hosted the Namco Digital Hollywood Game Laboratory game school, which designed the sleeper hit Katamari Damacy (2004). Nakamura resigned as company president later in the year, replaced by Kyushiro Takagi. Anxious about the company's continuing financial struggles, Nakamura suggested that Namco begin looking into the possibility of merging with another company. Namco first looked to Final Fantasy developer Square and Dragon Quest publisher Enix, offering to combine the three companies into one. Yoichi Wada, the president of Square, disliked Namco's financial showing and declined the offer. Square instead agreed to a business alliance with Namco. Namco then approached Sega, a company struggling to stay afloat after the commercial failure of the Dreamcast. Sega's development teams and extensive catalog of properties caught Namco's interest, and the company believed a merger could allow the two to increase their competitiveness. Sega was already discussing a merger with pachinko manufacturer Sammy Corporation; executives at Sammy were infuriated at Sega's consideration of Namco's offer. After a failed attempt to overturn the Sega-Sammy deal, Namco withdrew its offer the same day Sega announced it had turned down Sammy's. While Namco stated it was willing to negotiate with Sega on a future deal, Sega turned down the idea.
Shigeichi Ishimura, the son-in-law of Nakamura, succeeded Takagi as Namco president on April 1, 2005; Nakamura retained his role as the company's executive chairman. This was part of Namco's continuing efforts at reorganizing itself to be in line with changing markets. On July 26, as part of its 50th anniversary event, Namco published NamCollection—a compilation of several of its PlayStation games—for the PlayStation 2 in Japan. Namco also opened the Riraku no Mori, a companion to its Namja Town park that held massage parlors for visitors; Namco believed it would help make relaxation a source of entertainment. The Idolmaster, a rhythm game that incorporated elements of life simulations, was widely successful in Japan and resulted in the creation of a multi-million-grossing franchise. In early 2005, Namco began merger talks with Bandai, a toy manufacturing and anime production company. The two had discussed a possible business alliance a year earlier, after Namco collaborated with Bandai subsidiary Banpresto to develop Mobile Suit Gundam: One Year War, an arcade and PlayStation 2 game based on Mobile Suit Gundam. Bandai showed interest in Namco's game development skills and believed that combining them with its wide library of profitable characters and franchises, such as Sailor Moon and Tamagotchi, could increase their competitiveness in the industry. Nakamura and Namco's content development division advisors pushed against the idea, as they felt Bandai's corporate model would not blend well with Namco's work environment. Namco's advisors were also critical of Bandai for focusing on promotion and marketing over quality. As Namco's financial state continued to deteriorate, Ishimura pressured Nakamura into supporting the merger. Bandai's offer was accepted on May 2, with the two companies saying in a joint statement that financial difficulties were the reason for the merger. The business takeover, in which Bandai acquired Namco for ¥175.3 billion ($1.7bn), was finalized on September 29. An entertainment conglomerate named Namco Bandai Holdings was established the same day; while their executive departments merged, Bandai and Namco became independently operating subsidiaries of the new umbrella holding company. Kyushiro Takagi, Namco's vice chairman, was appointed chairman and director of Namco Bandai Holdings. The combined revenues of the new company were estimated to be ¥458 billion ($4.34bn), making Namco Bandai the third-largest Japanese game company after Nintendo and Sega Sammy Holdings. As its parent company was preparing for a full business integration, Namco continued its normal operations, such as releasing Ridge Racer 6 as a launch game for the newly unveiled Xbox 360 in October and collaborating with Nintendo to produce the arcade game Mario Kart Arcade GP. The company honored the 25th anniversary of its Pac-Man series with Pac-Pix, a puzzle game for the Nintendo DS, and entered the massively multiplayer online game market with Tales of Eternia Online, an action role-playing game based on its Tales franchise. On January 4, 2006, Namco's American game development division Namco Hometek was merged with Bandai's American consumer game division Bandai Games to create Namco Bandai Games America Inc., which absorbed Namco America's subsidiaries, completed Namco and Bandai's merger in North America, and was housed within Namco Hometek's former premises.
Namco's console game, business program, mobile phone, and research facility divisions were merged with Bandai's console division to create a new company, Namco Bandai Games, on March 31, 2006, and Namco was effectively dissolved. The Namco name was repurposed for a new Namco Bandai subsidiary the same day, which absorbed its predecessor's amusement facility and theme park operations. On October 30, Namco's European video game division merged with Bandai's European game division as well, forming Namco Bandai Games Europe S.A.S. Namco's European division was folded into Namco Bandai Networks Europe on January 1, 2007, as it was reorganized into the company's mobile game and website division. Until April 2014, Namco Bandai Games used the Namco logo on its games to represent the brand's legacy. The Namco Cybertainment division was renamed Namco Entertainment in January 2012, and Namco USA in 2015. A division of Bandai Namco Holdings USA, Namco USA worked with chains such as AMC Theatres to host its video arcades in their respective locations. The second Namco company was renamed Bandai Namco Amusement on April 1, 2018, following a corporate restructuring by its parent. Bandai Namco Amusement took over the arcade game development branch of Bandai Namco Games, which had renamed itself Bandai Namco Entertainment in 2015. Namco USA was absorbed into Bandai Namco Amusement's North American branch in 2021 following its parent company's decision to exit the arcade management industry in the United States. This leaves Namco Enterprises Asia and Namco Funscape, Bandai Namco's arcade division in Europe, as the last companies to use the original Namco trademark in their names. Bandai Namco Holdings and its subsidiaries continue to use the Namco name for a variety of products, including mobile phone applications, streaming programs, and eSports-focused arcade centers in Japan. Legacy Namco was one of the world's largest producers of arcade games, having published over 300 since 1978. Many are considered some of the greatest games of all time, including Pac-Man, Galaga, Xevious, Ridge Racer, Tekken 3, and Katamari Damacy. Pac-Man is considered one of the most important video games ever made, having helped encourage originality and creative thinking within the industry. Namco was recognized for the game's worldwide success in 2005 by Guinness World Records; by then, Pac-Man had sold over 300,000 arcade units and grossed over $1 billion in quarters globally. In an obituary for Masaya Nakamura in 2017, Nintendo Life's Damien McFerran wrote: "without Namco and Pac-Man, the video game arena would be very different today." Namco's corporate philosophy and innovation have received recognition from publications. In a 1994 retrospective on the company, a writer for Edge described Namco as being "among the true pioneers of the coin-op business", a developer with a catalog of well-received and historically significant games. The writer believed that Namco's success lay in its forward thinking and insistence on quality, which they argued made it stand out from other developers. A staff member of Edge's sister publication, Next Generation, wrote in 1998: "In a world where today's stars almost always become tomorrow's has-beens, Namco has produced consistently excellent games throughout most of its history."
The writer credited the company's connections with its players and its influential releases, namely Pac-Man, Xevious, and Winning Run, as the keys to its success in a rapidly changing industry. Publications and industry journalists have highlighted Namco's importance to the industry. Hirokazu Hamamura, chief editor of Famitsu, credited the company's quality releases with the rise in popularity of video game consoles and, in turn, the growth of Japan's entire video game industry. Writers for Ultimate Future Games and Official UK PlayStation Magazine have credited the company and its games with the early success of the PlayStation, one of the most iconic entertainment brands worldwide. In addition, Official UK PlayStation Magazine wrote that Namco serves as "the godfather of game developers" and is one of the most important video game developers in history. Staff for IGN in 1997 claimed that Namco represents the industry as a whole, with games like Pac-Man and Galaga synonymous with video games themselves. They wrote: "Tracing the history of Namco is like tracing the history of the industry itself. From its humble beginnings on the roof of a Yokohama department store, to the impending release of Tekken 3 for the PlayStation, Namco has always stayed ahead of the pack." In 2012, IGN listed Namco among the greatest video game companies of all time, writing that many of its games—including Galaga, Pac-Man, Dig Dug, and Ridge Racer—were of consistent quality and helped define the industry as a whole. See also Notes References External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Salvatore_Attardo] | [TOKENS: 332]
Contents Salvatore Attardo Salvatore Attardo is a full professor at Texas A&M University–Commerce and was the editor-in-chief of Humor, the journal of the International Society for Humor Studies, from 2002 to 2011. He studied at Purdue University under Victor Raskin and extended Raskin's script-based semantic theory of humor (SSTH) into the general theory of verbal humor (GTVH). He publishes in the field of humor in literature and is considered to be one of the top authorities in the area. He is also the author of Humor 2.0: How the Internet Changed Humor, published by Anthem Press in 2023. He was born March 14, 1962, in Anderlecht, Belgium, to an Italian State Railways employee and a Belgian mother, living thereafter in Como, Italy, until adulthood. He has been a permanent resident of the United States since 1991. He has one daughter, Gaia, born in 1994. Attardo is a native speaker of Italian and French. He has served on the thesis and dissertation committees for other humor scholars, including Christian F. Hempelmann and Katrina Triezenberg. Education Experience Major publications Trivia As a teenager, Attardo attended a high school specializing in the humanities (Liceo Ginnasio Statale Alessandro Volta, Como), where, along with fellow students, he published "Giravolta," a satirical magazine about school life, its teachers, and its principal. In these early days, he was known by the nickname of "Pidou." References Further reading External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Mars_Reconnaissance_Orbiter] | [TOKENS: 7484]
Contents Mars Reconnaissance Orbiter The Mars Reconnaissance Orbiter (MRO) is a spacecraft designed to search for the existence of water on Mars and provide support for missions to Mars, as part of NASA's Mars Exploration Program. It was launched from Cape Canaveral on August 12, 2005, at 11:43 UTC and reached Mars on March 10, 2006, at 21:24 UTC. In November 2006, after six months of aerobraking, it entered its final science orbit and began its primary science phase. Mission objectives include observing the climate of Mars, investigating geologic forces, providing reconnaissance of future landing sites, and relaying data from surface missions back to Earth. To support these objectives, the MRO carries a variety of scientific instruments, including three cameras, two spectrometers and a subsurface radar. As of July 29, 2023, the MRO has returned over 450 terabits of data, helped choose safe landing sites for NASA's Mars landers, and discovered pure water ice in new craters as well as further evidence that water once flowed on the surface of Mars. The spacecraft continues to operate at Mars, far beyond its intended design life. Due to its critical role as a high-speed data relay for ground missions, NASA intends to continue the mission as long as possible, at least through the late 2020s. As of February 20, 2026, the MRO has been active at Mars for 7092 sols, or 19 years, 11 months and 10 days, and is the third longest-lived spacecraft to orbit Mars, after 2001 Mars Odyssey and Mars Express. Pre-launch After the failures of the Mars Climate Orbiter and the Mars Polar Lander missions in 1999, NASA reorganized and replanned its Mars Exploration Program. In October 2000, NASA announced its reformulated Mars plans, which reduced the number of planned missions and introduced a new theme, "follow the water". The plans included the Mars Reconnaissance Orbiter (MRO), to be launched in 2005. On October 3, 2001, NASA chose Lockheed Martin as the primary contractor for the spacecraft's fabrication. By the end of 2001, all of the mission's instruments were selected. There were no major setbacks during the MRO's construction, and the spacecraft arrived at John F. Kennedy Space Center on April 30, 2005, for launch preparations. Mission objectives MRO has both scientific and "mission support" objectives, which were carried out during the mission's phases. The Primary Science Phase lasted until November 2008, at which time NASA declared the mission a success.: 18 The Extended Science Phase, lasting from 2008 to 2010, was initially planned to support the Phoenix lander and the Mars Science Laboratory, but the former became uncontactable and the latter was delayed, freeing up the MRO to further study Mars.: 19–20 After 2010, the mission consisted of Extended Mission (EM) phases, each lasting two years up to EM4, and three years from then on.: 28 As of 2024, the MRO is on its sixth extended mission.: 13 The formal science objectives of MRO are to observe the present climate, particularly its atmospheric circulation and seasonal variations; search for signs of water, both past and present, and understand how it altered the planet's surface; and map and characterize the geological forces that shaped the surface. To support other missions to Mars, the MRO also has mission support objectives.
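The sol count quoted in the introduction is straightforward to reproduce: a Martian sol lasts 88,775.244 seconds (about 2.7% longer than an Earth day), so the figure is simply the time elapsed since orbit insertion divided by the sol length. A minimal sketch in Python; the dates are those given above, and the sol length is the standard value, not stated in the article:

```python
from datetime import datetime, timezone

SOL_SECONDS = 88_775.244  # mean length of a Martian sol in SI seconds (standard value)

def elapsed_sols(start: datetime, end: datetime) -> float:
    """Martian sols elapsed between two Earth timestamps."""
    return (end - start).total_seconds() / SOL_SECONDS

orbit_insertion = datetime(2006, 3, 10, 21, 24, tzinfo=timezone.utc)
as_of = datetime(2026, 2, 20, tzinfo=timezone.utc)
print(round(elapsed_sols(orbit_insertion, as_of)))  # ~7091, matching the ~7092 quoted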
The mission support objectives are to provide data relay services from ground missions back to Earth, characterize the safety and feasibility of potential future landing sites and Mars rover traverses, and capture data from the entry, descent and landing phase of rovers.: 12 MRO played a key role in choosing safe landing sites for the Phoenix lander in 2008, the Mars Science Laboratory / Curiosity rover in 2012, the InSight lander in 2018, and the Mars 2020 / Perseverance rover in 2021. Launch and orbital insertion On August 12, 2005, MRO was launched aboard an Atlas V-401 rocket from Space Launch Complex 41 at Cape Canaveral Air Force Station. The Centaur upper stage of the rocket completed its burns over a 56-minute period and placed MRO into an interplanetary transfer orbit towards Mars. MRO cruised through interplanetary space for seven and a half months before reaching Mars. While the spacecraft was en route, most of the scientific instruments and experiments were tested and calibrated. To ensure proper orbital insertion upon reaching Mars, four trajectory correction maneuvers were planned and a fifth emergency maneuver was discussed. However, only three trajectory correction maneuvers were necessary, which saved 27 kilograms (60 lb) of fuel that would be usable during MRO's extended mission. MRO began orbital insertion by approaching Mars on March 10, 2006, and passing above its southern hemisphere at an altitude of 370–400 kilometers (230–250 miles). All six of MRO's main engines burned for 27 minutes to slow the probe by 1,000 meters per second (3,300 ft/s). The burn was remarkably accurate, as the insertion route had been designed more than three months prior; the achieved change in speed fell only 0.01% short of the design, necessitating an additional 35 seconds of burn time. Completion of the orbital insertion placed the orbiter in a highly elliptical polar orbit with a period of approximately 35.5 hours. Shortly after insertion, the periapsis – the point in the orbit closest to Mars – was 426 km (265 mi) from the surface (3,806 km (2,365 mi) from the planet's center). The apoapsis – the point in the orbit farthest from Mars – was 44,500 km (27,700 mi) from the surface (47,972 km (29,808 mi) from the planet's center). When MRO entered orbit, it joined five other active spacecraft that were either in orbit or on the planet's surface: Mars Global Surveyor, Mars Express, 2001 Mars Odyssey, and the two Mars Exploration Rovers (Spirit and Opportunity). This set a new record for the most operational spacecraft in the immediate vicinity of Mars. On March 30, 2006, MRO began the process of aerobraking, a three-step procedure that halved the fuel needed to achieve a lower, more circular orbit with a shorter period. First, during its first five orbits of the planet (one Earth week), MRO used its thrusters to drop the periapsis of its orbit to aerobraking altitude. Second, while using its thrusters to make minor corrections to its periapsis altitude, MRO maintained aerobraking altitude for 445 planetary orbits (about five Earth months) to reduce the apoapsis of the orbit to 450 kilometers (280 mi). The altitude was chosen so as not to heat the spacecraft too much while still dipping deep enough into the atmosphere to slow it down. Third, after the process was complete, MRO used its thrusters to move its periapsis out of the edge of the atmosphere on August 30, 2006.
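The quoted 35.5-hour period can be sanity-checked with Kepler's third law, using the periapsis and apoapsis distances measured from the planet's center. A rough sketch; the gravitational parameter of Mars is a standard value not given in the article:

```python
import math

GM_MARS = 4.2828e13  # gravitational parameter of Mars, m^3/s^2 (standard value)

def orbital_period_hours(periapsis_m: float, apoapsis_m: float) -> float:
    """Period of an elliptical orbit, T = 2*pi*sqrt(a^3 / GM)."""
    a = (periapsis_m + apoapsis_m) / 2  # semi-major axis
    return 2 * math.pi * math.sqrt(a**3 / GM_MARS) / 3600

# Apsis distances from the planet's center quoted above: 3,806 km and 47,972 km
print(orbital_period_hours(3.806e6, 47.972e6))  # ~35.1 h, close to the quoted 35.5 h
```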
In September 2006, MRO fired its thrusters twice more to adjust its final, nearly circular orbit to approximately 250 to 316 km (155 to 196 mi) above the surface, with a period of about 112 minutes and a polar inclination of around 93°.: 6 The SHARAD radar antennas were deployed on September 16. All of the scientific instruments were tested and most were turned off prior to the solar conjunction that occurred from October 7 to November 6, 2006. This was done to prevent charged particles from the Sun from interfering with signals and potentially endangering the spacecraft. After the conjunction ended, the "primary science phase" began. Timeline On September 29, 2006 (sol 402), MRO took its first high resolution image from its science orbit. This image is said to resolve items as small as 90 cm (3 feet) in diameter. On October 6, NASA released detailed pictures from the MRO of Victoria crater along with the Opportunity rover on the rim above it. In November, problems began to surface in the operation of two MRO spacecraft instruments. A stepping mechanism in the Mars Climate Sounder (MCS) skipped on multiple occasions, resulting in a field of view that was slightly out of position. By December, normal operations of the instrument had been suspended, although a mitigation strategy allowed the instrument to continue making most of its intended observations. An increase in noise and a growing number of bad pixels were also observed in several CCDs of the High Resolution Imaging Science Experiment (HiRISE). Operating the camera with a longer warm-up time[a] has alleviated the issue, although the cause is still unknown and the problem may recur. On November 17, 2006, NASA announced the successful test of the MRO as an orbital communications relay. Using the NASA rover Spirit as the point of origin for the transmission, the MRO acted as a relay for transmitting data back to Earth. HiRISE was able to photograph the Phoenix lander during its parachute descent to Vastitas Borealis on May 25, 2008 (sol 990). The orbiter continued to experience recurring problems in 2009, including four spontaneous resets, culminating in a four-month shutdown of the spacecraft from August to December. Though engineers were not able to determine the cause of the recurrent resets, they suspected a piece of electronics had been affected by radiation. While investigating, the engineers discovered and fixed a flaw that could have deleted all critical information onboard the MRO.: 7 Another spontaneous reset occurred in September 2010. On March 3, 2010, the MRO passed another significant milestone, having transmitted over 100 terabits of data back to Earth, which was more than all other interplanetary probes sent from Earth combined. In December 2010, the first Extended Mission began. Goals included exploring seasonal processes, searching for surface changes, and providing support for other Martian spacecraft. This lasted until October 2012, after which NASA started the MRO's second Extended Mission, which lasted until October 2014. As of 2023, the MRO has completed five extended missions and is currently on its sixth. On August 6, 2012 (sol 2483), the orbiter passed over Gale crater, the landing site of the Mars Science Laboratory mission, during its EDL phase. Using the HiRISE camera, it captured an image of the Curiosity rover descending with its backshell and supersonic parachute. In December 2014 and April 2015, Curiosity was photographed again by HiRISE inside Gale crater.
Another computer anomaly occurred on March 9, 2014, when the MRO put itself into safe mode after an unscheduled swap from one computer to another. The MRO resumed normal science operations four days later. This occurred again on April 11, 2015, after which the MRO returned to full operational capabilities a week later. NASA reported that the MRO, as well as the Mars Odyssey and MAVEN orbiters, had a chance to study the Comet Siding Spring flyby on October 19, 2014. To minimize the risk of damage from the material shed by the comet, the MRO made orbital adjustments on July 2, 2014, and August 27, 2014. During the flyby, the MRO took the best pictures yet of a comet from the Oort cloud and was not damaged. In January 2015, the MRO discovered and identified the wreckage of Britain's Beagle 2, which was lost during its landing phase in 2003 and was thought to have crashed. The images revealed that Beagle 2 had actually landed safely, but one or two of its solar panels had failed to fully deploy, which blocked the radio antenna. In October 2016, the crash site of another lost spacecraft, Schiaparelli EDM, was photographed by the MRO, using both the CTX and HiRISE cameras. On July 29, 2015, the MRO was placed into a new orbit to provide communications support during the anticipated arrival of the InSight Mars lander mission in September 2016. The maneuver's engine burn lasted for 75 seconds. InSight was delayed and missed the 2016 launch window, but was successfully launched during the next window on May 5, 2018, and landed on November 26, 2018. Due to the longevity of the mission, a number of MRO components have started deteriorating. From the start of the mission in 2005 to 2017, the MRO used a miniature inertial measurement unit (MIMU) for attitude and orientation control. After 58,000 hours of use, with the unit showing signs of limited remaining life, the orbiter switched over to a backup, which, as of 2018, had reached 52,000 hours of use. To conserve the life of the backup, NASA switched from MIMUs to an "all-stellar" mode for routine operations in 2018. The "all-stellar" mode uses cameras and pattern recognition software to determine the location of stars, which can then be used to identify the MRO's orientation. Problems with blurring in pictures from HiRISE and battery degradation also arose in 2017 but have since been resolved. In August 2023, electronic units within HiRISE's RED4 CCD sensor began to fail as well, causing visual artifacts in its pictures. In 2017, the cryocoolers used by CRISM reached the end of their service life, limiting the instrument to visible wavelengths instead of its full wavelength range. In 2022, NASA announced the shutdown of CRISM in its entirety, and the instrument was formally retired on April 3, 2023, after creating two final, near-global maps using prior data and a more limited second spectrometer that did not require cryocoolers. As of January 2024, the MRO has around 132 kg of fuel remaining, enough to support operations until 2035.: 3 Instruments Three cameras, two spectrometers and a radar are included on the orbiter along with three engineering instruments and two "science-facility experiments", which use data from engineering subsystems to collect science data. Two of the engineering instruments are being used to test and demonstrate new equipment for future missions. The MRO takes around 29,000 images per year.
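The fuel margin quoted above implies a modest consumption budget. A back-of-the-envelope estimate; the implied annual rate is derived from the article's two numbers, not an official figure:

```python
fuel_remaining_kg = 132        # as of January 2024, per the article
years_of_support = 2035 - 2024

# Average annual consumption implied if the remaining fuel lasts until 2035
print(f"~{fuel_remaining_kg / years_of_support:.0f} kg per year")  # ~12 kg/year
```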
The High Resolution Imaging Science Experiment (HiRISE) camera is a 0.5 m (1 ft 8 in) reflecting telescope, the largest ever carried on a deep space mission, and has a resolution of 1 microradian, or 0.3 m (1 ft 0 in) from an altitude of 300 km (190 mi). In comparison, satellite images of Earth are generally available with a resolution of 0.5 m (1 ft 8 in). HiRISE collects images in three color bands, 400 to 600 nm (blue–green or B–G), 550 to 850 nm (red) and 800 to 1,000 nm (near infrared). Red color images are 20,264 pixels across (6 km (3.7 mi) wide), and B–G and NIR are 4,048 pixels across (1.2 km (0.75 mi) wide). HiRISE's onboard computer reads these lines in time with the orbiter's ground speed, and images are potentially unlimited in length. In practice, however, their length is limited by the computer's 28 Gb memory capacity, and the nominal maximum sizes are 20,000 × 40,000 pixels (800 megapixels) for red images and 4,000 × 40,000 pixels (160 megapixels) for B–G and NIR images. Each 16.4 Gb image is compressed to 5 Gb before transmission and release to the general public on the HiRISE website in JPEG 2000 format. To facilitate the mapping of potential landing sites, HiRISE can produce stereo pairs of images from which topography can be calculated to an accuracy of 0.25 m (9.8 in). HiRISE was built by Ball Aerospace & Technologies Corp. The Context Camera (CTX) provides grayscale images (500 to 800 nm) with a pixel resolution up to about 6 m (20 ft). CTX is designed to provide context maps for the targeted observations of HiRISE and CRISM, and is also used to mosaic large areas of Mars, monitor a number of locations for changes over time, and to acquire stereo (3D) coverage of key regions and potential future landing sites. The optics of CTX consist of a 350 mm (14 in) focal length Maksutov Cassegrain telescope with a 5,064 pixel wide line array CCD. The instrument takes pictures 30 km (19 mi) wide and has enough internal memory to store an image 160 km (99 mi) long before loading it into the main computer. The camera was built, and is operated by, Malin Space Science Systems. CTX had mapped more than 99% of Mars by March 2017 and helped create an interactive map of Mars in 2023. The Mars Color Imager (MARCI) is a wide-angle, relatively low-resolution camera that views the surface of Mars in five visible and two ultraviolet bands. Each day, MARCI collects about 84 images and produces a global map with pixel resolutions of 1 to 10 km (0.62 to 6.21 mi). This map provides a weekly weather report for Mars, helps to characterize its seasonal and annual variations, and maps the presence of water vapor and ozone in its atmosphere. The camera was built and is operated by Malin Space Science Systems. It has a 180-degree fisheye lens with the seven color filters bonded directly on a single CCD sensor. The same MARCI design was flown on the Mars Climate Orbiter, launched in 1998. The Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) instrument is a visible and near infrared spectrometer that is used to produce detailed maps of the surface mineralogy of Mars. It operates from 362 to 3920 nm, measures the spectrum in 544 channels (each 6.55 nm wide), and has a resolution of 18 m (59 ft) at an altitude of 300 km (190 mi). CRISM is being used to identify minerals and chemicals indicative of the past or present existence of water on the surface of Mars. These materials include iron oxides, phyllosilicates, and carbonates, which have characteristic spectral signatures at visible and infrared wavelengths.
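The ground resolutions quoted for HiRISE and CRISM both follow from the small-angle relation: ground sample distance is the instrument's angular resolution (in radians) times the orbital altitude. A quick sketch checking the figures above; note that CRISM's ~60 microradian value is back-derived here from the quoted 18 m, and is not stated in the article:

```python
def ground_resolution_m(angular_res_rad: float, altitude_m: float) -> float:
    """Small-angle approximation: ground footprint of one resolution element."""
    return angular_res_rad * altitude_m

altitude = 300e3  # 300 km science orbit
print(ground_resolution_m(1e-6, altitude))   # HiRISE: 1 microradian -> 0.3 m
print(ground_resolution_m(60e-6, altitude))  # CRISM: ~60 microradians -> 18 m
print(20_264 * 0.3 / 1000)                   # HiRISE red swath: ~6.1 km wide
```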
The CRISM instrument was shut down on April 3, 2023. The Mars Climate Sounder (MCS) is a radiometer that looks both down and horizontally through the atmosphere in order to quantify the atmosphere's vertical variations. It has one visible/near infrared channel (0.3 to 3.0 μm) and eight far infrared (12 to 50 μm) channels selected for the purpose. MCS observes the atmosphere on the horizon of Mars (as viewed from MRO) by breaking it up into vertical slices and taking measurements within each slice in 5 km (3.1 mi) increments. These measurements are assembled into daily global weather maps to show the basic variables of Martian weather: temperature, pressure, humidity, and dust density. The MCS weighs roughly 9 kg (20 lb) and began operation in November 2006. Since beginning operation, it has helped create maps of mesospheric clouds, study and categorize dust storms, and provide direct evidence of carbon dioxide snow on Mars. This instrument, supplied by NASA's Jet Propulsion Laboratory (JPL), is an updated version of a heavier, larger instrument originally developed at JPL for the 1992 Mars Observer and 1998 Mars Climate Orbiter missions, which both failed. The Shallow Radar (SHARAD) sounder experiment onboard MRO is designed to probe the internal structure of the Martian polar ice caps. It also gathers planet-wide information about underground layers of regolith, rock, and ice that might be accessible from the surface. SHARAD emits HF radio waves between 15 and 25 MHz, a range that allows it to resolve layers as thin as 7 m (23 ft) to a maximum depth of 3 km (1.9 mi). It has a horizontal resolution of 0.3 to 3 km (0.2 to 1.9 mi). SHARAD is designed to complement the Mars Express MARSIS instrument, which has coarser resolution but penetrates to a much greater depth. Both SHARAD and MARSIS were made by the Italian Space Agency. In addition to its imaging equipment, MRO carries three engineering instruments. The Electra communications package is a UHF software-defined radio that provides a flexible platform for evolving relay capabilities. It is designed to communicate with other spacecraft as they approach, land, and operate on Mars. In addition to protocol-controlled inter-spacecraft data links of 1 kbit/s to 2 Mbit/s, Electra also provides Doppler data collection, open-loop recording, and a highly accurate timing service based on an ultra-stable oscillator. Doppler information for approaching vehicles can be used for final descent targeting or descent and landing trajectory recreation. Doppler information on landed vehicles allows scientists to accurately determine the surface location of Mars landers and rovers. The two Mars Exploration Rover (MER) spacecraft utilized an earlier-generation UHF relay radio providing similar functions through the Mars Odyssey orbiter. The Electra radio has relayed information to and from the MER spacecraft, the Phoenix lander, and the Curiosity rover. During the cruise phase, the MRO also used the Ka band Telecommunications Experiment Package to demonstrate a less power-intensive way to communicate with Earth. The Optical Navigation Camera images the Martian moons, Phobos and Deimos, against background stars to precisely determine MRO's orbit. Although this capability is not mission-critical, it was included as a technology test for future orbiting and landing of spacecraft. The Optical Navigation Camera was tested successfully in February and March 2006.
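SHARAD's quoted ~7 m layer resolution is consistent with standard radar sounding arithmetic: the 15 to 25 MHz sweep gives a 10 MHz bandwidth, and the free-space range resolution c/(2B) is compressed inside a dielectric by the square root of its relative permittivity. A sketch, assuming a permittivity typical of water ice (about 3.1, an assumption not given in the article):

```python
C = 299_792_458.0  # speed of light, m/s

def range_resolution_m(bandwidth_hz: float, rel_permittivity: float = 1.0) -> float:
    """Radar vertical resolution: c / (2 * B * sqrt(eps_r))."""
    return C / (2 * bandwidth_hz * rel_permittivity ** 0.5)

bandwidth = 25e6 - 15e6  # SHARAD sweep: 10 MHz
print(range_resolution_m(bandwidth))       # ~15 m in free space
print(range_resolution_m(bandwidth, 3.1))  # ~8.5 m in water ice, near the quoted 7 m
```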
The camera was subsequently turned off, but was turned back on in 2022 to collect data for a potential NASA-ESA Mars Sample Return mission.: 11 Two additional science investigations are also on the spacecraft. The Gravity Field Investigation Package measures variations in the Martian gravitational field through variations in the spacecraft's speed. Speed changes are detected by measuring Doppler shifts in MRO's radio signals received on Earth. Data from this investigation can be used to understand the subsurface geology of Mars, determine the density of the atmosphere, and track seasonal changes in the location of carbon dioxide deposited on the surface. Due to decreased budgets, data collection ended in 2022.: 8 The Atmospheric Structure Investigation used sensitive onboard accelerometers to deduce the in situ atmospheric density of Mars during aerobraking. The measurements helped provide greater understanding of seasonal wind variations, the effects of dust storms, and the structure of the atmosphere. Spacecraft systems Workers at Lockheed Martin Space Systems in Denver assembled the spacecraft structure and attached the instruments. Instruments were constructed at the Jet Propulsion Laboratory, the University of Arizona Lunar and Planetary Laboratory in Tucson, Arizona, Johns Hopkins University Applied Physics Laboratory in Laurel, Maryland, the Italian Space Agency in Rome, and Malin Space Science Systems in San Diego. The structure is made mostly of carbon composites and aluminum-honeycombed plates. The titanium fuel tank takes up most of the volume and mass of the spacecraft and provides most of its structural integrity. The spacecraft's total mass is less than 2,180 kg (4,810 lb), with an unfueled dry mass of less than 1,031 kg (2,273 lb). MRO gets all of its electrical power from two solar panels, each of which can move independently around two axes (up-down and left-right rotation). Each solar panel measures 5.35 m × 2.53 m (17.6 ft × 8.3 ft) and has 9.5 m2 (102 sq ft) covered with 3,744 individual photovoltaic cells. The high-efficiency solar cells are able to convert more than 26% of the energy they receive from the Sun directly into electricity, and are connected together to produce a total output of 32 volts. While orbiting Mars, the panels together produce 600–2000[b] watts of power; in a comparable Earth orbit, closer to the Sun, they would generate 6,000 watts. MRO has two rechargeable nickel-hydrogen batteries used to power the spacecraft when it is not facing the Sun. Each battery has an energy storage capacity of 50 ampere hours (180 kC). The full capacity of the batteries cannot be used due to voltage constraints on the spacecraft, but this restraint allows the operators to extend the battery life, a valuable capability given that battery drain is one of the most common causes of long-term satellite failure. Planners anticipate that only 40% of the batteries' capacities will be required during the lifetime of the spacecraft. MRO's main computer is a 133 MHz, 10.4 million transistor, 32-bit RAD750 processor, a radiation-hardened version of the PowerPC 750 or G3 processor, with a purpose-built motherboard. The operating system software is VxWorks and has extensive fault protection protocols and monitoring. Data is stored in a 160 Gbit (20 GB) flash memory module consisting of over 700 memory chips, each with a 256 Mbit capacity.
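The flash module's numbers are easy to cross-check: 700 chips at 256 Mbit each give roughly 175 Gbit of raw capacity, slightly above the 160 Gbit usable figure (the gap is presumably spare or error-correction capacity; that is an assumption). Using the 16.4 Gb raw HiRISE image size quoted earlier, only a handful of full-size images fit at once:

```python
chips = 700             # "over 700 memory chips", per the article
chip_gbit = 256 / 1024  # 256 Mbit per chip, expressed in Gbit

print(chips * chip_gbit)  # 175 Gbit raw, vs the 160 Gbit usable quoted
print(int(160 // 16.4))   # ~9 full-size raw HiRISE images fit at once
```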
This memory capacity is modest relative to the volume of data acquired; for example, a single image from the HiRISE camera can be as large as 28 Gb. When it was launched, the Telecom Subsystem on MRO was the most capable digital communication system yet sent into deep space, and the first to use capacity-approaching turbo codes. It is able to transmit data more than ten times faster than previous Mars missions. Along with the Electra communications package, the system consists of a very large (3 m (9.8 ft)) high-gain antenna, which is used to transmit data to the Deep Space Network on Earth via X-band frequencies at 8.41 GHz. It also demonstrates the use of the Ka band at 32 GHz for higher data rates. Maximum transmission speed from Mars can be as high as 6 Mbit/s, but averages between 0.5 and 4 Mbit/s. The spacecraft carries two 100-watt X-band Travelling Wave Tube Amplifiers (TWTA) (one of which is a backup), one 35-watt Ka-band amplifier, and two Small Deep Space Transponders (SDSTs). Two smaller low-gain antennas are also present for lower-rate communication during emergencies and special events. These antennas do not have focusing dishes and can transmit and receive from any direction. They are an important backup system to ensure that MRO can always be reached, even if its main antenna is pointed away from the Earth. The Ka band subsystem was used to show how such a system could be used by spacecraft in the future. Due to the lack of spectrum at 8.41 GHz X-band, future high-rate deep space missions will use 32 GHz Ka-band. The NASA Deep Space Network (DSN) implemented Ka-band receiving capabilities at all three of its complexes (Goldstone, Canberra, and Madrid) over its 34 m beam-waveguide (BWG) antenna subnet. Ka-band tests were also planned during the science phase, but during aerobraking a switch failed, limiting the X-band high-gain antenna to a single amplifier. If this amplifier fails, all high-speed X-band communications will be lost. The Ka downlink is the only remaining backup for this functionality, and since the Ka-band capability of one of the SDST transponders has already failed (and the other might have the same problem), JPL decided to halt all Ka-band demonstrations and hold the remaining capability in reserve. By November 2013, the MRO had returned more than 200 terabits of science data. This is more than three times the total data returned via NASA's Deep Space Network for all the other missions managed by NASA's Jet Propulsion Laboratory over the previous 10 years. The spacecraft uses a 1,175 L (258 imp gal; 310 US gal) fuel tank filled with 1,187 kg (2,617 lb) of hydrazine monopropellant. Fuel pressure is regulated by adding pressurized helium gas from an external tank. Seventy percent of the propellant was used for orbital insertion, and the spacecraft has enough propellant remaining to keep functioning into the 2030s. MRO has 20 rocket engine thrusters on board. Six large thrusters each produce 170 N (38 lbf) of thrust for a total of 1,020 N (230 lbf), meant mainly for orbital insertion. These thrusters were originally designed for the Mars Surveyor 2001 Lander. Six medium thrusters each produce 22 N (4.9 lbf) of thrust for trajectory correction maneuvers and attitude control during orbit insertion. Finally, eight small thrusters each produce 0.9 N (0.20 lbf) of thrust for attitude control during normal operations.
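The link rates quoted above put the image sizes in perspective: even at the maximum X-band rate, one large raw HiRISE observation occupies the downlink for over an hour. A rough sketch using the article's figures (idealized, ignoring compression, protocol overhead, and DSN pass scheduling):

```python
def downlink_hours(size_bits: float, rate_bps: float) -> float:
    """Idealized transfer time for a data product."""
    return size_bits / rate_bps / 3600

image_bits = 28e9  # a large raw HiRISE image, per the article
for rate_bps in (0.5e6, 4e6, 6e6):  # average-to-maximum X-band rates
    print(f"{rate_bps / 1e6:.1f} Mbit/s -> {downlink_hours(image_bits, rate_bps):.1f} h")
```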
The spacecraft uses a 1,175 L (258 imp gal; 310 US gal) fuel tank filled with 1,187 kg (2,617 lb) of hydrazine monopropellant. Fuel pressure is regulated by adding pressurized helium gas from an external tank. Seventy percent of the propellant was used for orbital insertion, and enough remains to keep the spacecraft functioning into the 2030s. MRO has 20 rocket engine thrusters on board. Six large thrusters each produce 170 N (38 lbf) of thrust, for a total of 1,020 N (230 lbf), meant mainly for orbital insertion. These thrusters were originally designed for the Mars Surveyor 2001 Lander. Six medium thrusters each produce 22 N (4.9 lbf) of thrust for trajectory correction maneuvers and attitude control during orbit insertion. Finally, eight small thrusters each produce 0.9 N (0.20 lbf) of thrust for attitude control during normal operations. Four reaction wheels are also used for precise attitude control during activities requiring a highly stable platform, such as high-resolution imaging, in which even small motions can cause blurring of the image. Each wheel is used for one axis of motion; the fourth wheel is a backup in case one of the other three fails. Each wheel weighs 10 kg (22 lb) and can be spun as fast as 100 Hz, or 6,000 rpm. In order to determine the spacecraft's orbit and facilitate maneuvers, 16 Sun sensors – eight primaries and eight backups – are placed around the spacecraft to calibrate solar direction relative to the orbiter's frame. Two star trackers, digital cameras used to map the position of catalogued stars, provide NASA with full three-axis knowledge of the spacecraft's orientation. A primary and a backup Miniature Inertial Measurement Unit (MIMU), provided by Honeywell, measure changes to the spacecraft's attitude as well as any non-gravitationally induced changes to its linear velocity. Each MIMU is a combination of three accelerometers and three ring-laser gyroscopes. These systems are all critically important to MRO, as it must be able to point its camera with very high precision in order to take the high-quality pictures that the mission requires. The spacecraft has also been specifically designed to minimize vibrations, so as to allow its instruments to take images without distortion. Cost The total cost of the MRO through the end of its prime mission was $716.6 million. Of this amount, $416.6 million was spent on spacecraft development, approximately $90 million on its launch, and $210 million on 5 years of mission operations. Since 2011, MRO's operations have cost an average of $31 million per year, adjusted for inflation. Like those of other long-term missions, MRO's science budget has been declining, leading to reduced science activity. Discoveries An article in the journal Science in September 2009 reported that some new craters on Mars have excavated relatively pure water ice. After being exposed, the ice gradually fades as it sublimates away. The new craters were found and dated by the CTX camera, and the identification of the ice was confirmed using CRISM. The ice was found in five locations, three of which were in the Cebrenia quadrangle, at 55.57°N, 150.62°E; 43.28°N, 176.9°E; and 45°N, 164.5°E. The other two are in the Diacria quadrangle, at 46.7°N, 176.8°E and 46.33°N, 176.9°E.
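The coordinates above are given in decimal degrees; maps and older sources often quote degrees and arcminutes instead. A minimal converter (arcseconds omitted, since the values above do not use them) shows how the two forms correspond:

```python
# Convert degrees + arcminutes to the decimal degrees used above.

def dm_to_decimal(degrees: int, minutes: float) -> float:
    """Degrees and arcminutes -> decimal degrees (north/east positive)."""
    return degrees + minutes / 60.0

# 55 deg 34 min N, 150 deg 37 min E -> the first Cebrenia site above
print(round(dm_to_decimal(55, 34), 2))    # 55.57
print(round(dm_to_decimal(150, 37), 2))   # 150.62
```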
Radar results from SHARAD suggested that features termed lobate debris aprons (LDAs) contain large amounts of water ice. Of interest since the days of the Viking orbiters, these LDAs are aprons of material surrounding cliffs. They have a convex topography and a gentle slope, suggesting flow away from the steep source cliff. In addition, lobate debris aprons can show surface lineations, just as rock glaciers on Earth do. SHARAD provided strong evidence that the LDAs in Hellas Planitia are glaciers covered with a thin layer of debris (i.e. rocks and dust): a strong reflection from both the top and the base of the LDAs was observed, suggesting that pure water ice makes up the bulk of the formation between the two reflections. Based on experiments by the Phoenix lander and orbital studies by Mars Odyssey, water ice is also known to exist just under the surface of Mars at high latitudes in the far north and south. Using data from Mars Global Surveyor, Mars Odyssey, and MRO, scientists have found widespread deposits of chloride minerals. Evidence suggests that the deposits were formed from the evaporation of mineral-enriched waters. The research suggests that lakes may once have been scattered over large areas of the Martian surface. Usually, chlorides are the last minerals to come out of solution; carbonates, sulfates, and silica should precipitate out ahead of them. Sulfates and silica have been found by the Mars rovers on the surface. Places with chloride minerals may once have held various life forms, and such areas could preserve traces of ancient life. In 2009, a group of scientists from the CRISM team reported on nine to ten different classes of minerals formed in the presence of water. Different types of clays (also called phyllosilicates) were found in many locations. The phyllosilicates identified included aluminum smectite, iron/magnesium smectite, kaolinite, prehnite, and chlorite. Rocks containing carbonate were found around the Isidis basin; carbonates are one class of minerals associated with environments in which life could have developed. Areas around Valles Marineris were found to contain hydrated silica and hydrated sulfates, and the researchers identified hydrated sulfates and ferric minerals in Terra Meridiani and in Valles Marineris. Other minerals found on Mars were jarosite, alunite, hematite, opal, and gypsum. Two to five of the mineral classes were formed with the right pH and sufficient water to permit life to grow. On August 4, 2011 (sol 2125), NASA announced that MRO had detected dark streaks on slopes, known as recurring slope lineae, apparently caused by flowing salty water on or just below the surface of Mars. On September 28, 2015, this finding was confirmed at a special NASA news conference. In 2017, however, further research suggested that the dark streaks were created by grains of sand and dust slipping down slopes, and not by water darkening the ground.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Internet#cite_note-31] | [TOKENS: 9291]
Contents Internet The Internet (or internet)[a] is the global system of interconnected computer networks that uses the Internet protocol suite (TCP/IP)[b] to communicate between networks and devices. It is a network of networks that comprises private, public, academic, business, and government networks of local to global scope, linked by electronic, wireless, and optical networking technologies. The Internet carries a vast range of information services and resources, such as the interlinked hypertext documents and applications of the World Wide Web (WWW), electronic mail, discussion groups, internet telephony, streaming media and file sharing. Most traditional communication media, including telephone, radio, television, paper mail, newspapers, and print publishing, have been transformed by the Internet, giving rise to new media such as email, online music, digital newspapers, news aggregators, and audio and video streaming websites. The Internet has enabled and accelerated new forms of personal interaction through instant messaging, Internet forums, and social networking services. Online shopping has also grown to occupy a significant market across industries, enabling firms to extend brick and mortar presences to serve larger markets. Business-to-business and financial services on the Internet affect supply chains across entire industries. The origins of the Internet date back to research that enabled the time-sharing of computer resources, the development of packet switching, and the design of computer networks for data communication. The set of communication protocols to enable internetworking on the Internet arose from research and development commissioned in the 1970s by the Defense Advanced Research Projects Agency (DARPA) of the United States Department of Defense in collaboration with universities and researchers across the United States and in the United Kingdom and France. The Internet has no single centralized governance in either technological implementation or policies for access and usage. Each constituent network sets its own policies. The overarching definitions of the two principal name spaces on the Internet, the Internet Protocol address (IP address) space and the Domain Name System (DNS), are directed by a maintainer organization, the Internet Corporation for Assigned Names and Numbers (ICANN). The technical underpinning and standardization of the core protocols is an activity of the non-profit Internet Engineering Task Force (IETF). Terminology The word internetted was used as early as 1849, meaning interconnected or interwoven. The word Internet was used in 1945 by the United States War Department in a radio operator's manual, and in 1974 as the shorthand form of Internetwork. Today, the term Internet most commonly refers to the global system of interconnected computer networks, though it may also refer to any group of smaller networks. The word Internet may be capitalized as a proper noun, although this is becoming less common. This reflects the tendency in English to capitalize new terms and move them to lowercase as they become familiar. The word is sometimes still capitalized to distinguish the global internet from smaller networks, though many publications, including the AP Stylebook since 2016, recommend the lowercase form in every case. In 2016, the Oxford English Dictionary found that, based on a study of around 2.5 billion printed and online sources, "Internet" was capitalized in 54% of cases. 
The terms Internet and World Wide Web are often used interchangeably; it is common to speak of "going on the Internet" when using a web browser to view web pages. However, the World Wide Web, or the Web, is only one of a large number of Internet services. It is the global collection of web pages, documents and other web resources linked by hyperlinks and URLs. History In the 1960s, computer scientists began developing systems for time-sharing of computer resources. J. C. R. Licklider proposed the idea of a universal network while working at Bolt Beranek & Newman and, later, leading the Information Processing Techniques Office at the Advanced Research Projects Agency (ARPA) of the United States Department of Defense. Research into packet switching,[c] one of the fundamental Internet technologies, started in the work of Paul Baran at RAND in the early 1960s and, independently, Donald Davies at the United Kingdom's National Physical Laboratory in 1965. After the Symposium on Operating Systems Principles in 1967, packet switching from the proposed NPL network was incorporated into the design of the ARPANET, an experimental resource sharing network proposed by ARPA. ARPANET development began with two network nodes which were interconnected between the University of California, Los Angeles and the Stanford Research Institute on 29 October 1969. The third site was at the University of California, Santa Barbara, followed by the University of Utah. By the end of 1971, 15 sites were connected to the young ARPANET. Thereafter, the ARPANET gradually developed into a decentralized communications network, connecting remote centers and military bases in the United States. Other user networks and research networks, such as the Merit Network and CYCLADES, were developed in the late 1960s and early 1970s. Early international collaborations for the ARPANET were rare. Connections were made in 1973 to Norway (NORSAR and, later, NDRE) and to Peter Kirstein's research group at University College London, which provided a gateway to British academic networks, the first internetwork for resource sharing. ARPA projects, the International Network Working Group and commercial initiatives led to the development of various protocols and standards by which multiple separate networks could become a single network, or a network of networks. In 1974, Vint Cerf at Stanford University and Bob Kahn at DARPA published a proposal for "A Protocol for Packet Network Intercommunication". Cerf and his graduate students used the term internet as a shorthand for internetwork in RFC 675. The Internet Experiment Notes and later RFCs repeated this use. The work of Louis Pouzin and Robert Metcalfe had important influences on the resulting TCP/IP design. National PTTs and commercial providers developed the X.25 standard and deployed it on public data networks. The ARPANET initially served as a backbone for the interconnection of regional academic and military networks in the United States to enable resource sharing. Access to the ARPANET was expanded in 1981 when the National Science Foundation (NSF) funded the Computer Science Network (CSNET). In 1982, the Internet Protocol Suite (TCP/IP) was standardized, which facilitated worldwide proliferation of interconnected networks. TCP/IP network access expanded again in 1986 when the National Science Foundation Network (NSFNet) provided access to supercomputer sites in the United States for researchers, first at speeds of 56 kbit/s and later at 1.5 Mbit/s and 45 Mbit/s. 
The NSFNet expanded into academic and research organizations in Europe, Australia, New Zealand and Japan in 1988–89. Although other network protocols such as UUCP and PTT public data networks had global reach well before this time, this marked the beginning of the Internet as an intercontinental network. Commercial Internet service providers emerged in 1989 in the United States and Australia. The ARPANET was decommissioned in 1990. The linking of commercial networks and enterprises by the early 1990s, as well as the advent of the World Wide Web, marked the beginning of the transition to the modern Internet. Steady advances in semiconductor technology and optical networking created new economic opportunities for commercial involvement in the expansion of the network in its core and for delivering services to the public. In mid-1989, MCI Mail and CompuServe established connections to the Internet, delivering email and public access products to the Internet's half million users. Just months later, on 1 January 1990, PSInet launched an alternate Internet backbone for commercial use, one of the networks that would form the core of the commercial Internet in later years. In March 1990, the first high-speed T1 (1.5 Mbit/s) link between the NSFNET and Europe was installed between Cornell University and CERN, allowing much more robust communications than were possible with satellites. Later in 1990, Tim Berners-Lee began writing WorldWideWeb, the first web browser, after two years of lobbying CERN management. By Christmas 1990, Berners-Lee had built all the tools necessary for a working Web: the HyperText Transfer Protocol (HTTP) 0.9, the HyperText Markup Language (HTML), the first web browser (which was also an HTML editor and could access Usenet newsgroups and FTP files), the first HTTP server software (later known as CERN httpd), the first web server, and the first web pages that described the project itself. In 1991 the Commercial Internet eXchange was founded, allowing PSInet to communicate with the other commercial networks CERFnet and Alternet. In October 1994, Stanford Federal Credit Union became the first financial institution to offer online Internet banking services to all of its members. In 1996, OP Financial Group, also a cooperative bank, became the second online bank in the world and the first in Europe. By 1995, the Internet was fully commercialized in the U.S. when the NSFNet was decommissioned, removing the last restrictions on use of the Internet to carry commercial traffic. As technology advanced and commercial opportunities fueled reciprocal growth, the volume of Internet traffic began to exhibit growth characteristics similar to the scaling of MOS transistors, exemplified by Moore's law: doubling every 18 months. This growth, formalized as Edholm's law, was catalyzed by advances in MOS technology, laser lightwave systems, and noise performance. Since 1995, the Internet has tremendously impacted culture and commerce, including the rise of near-instant communication by email, instant messaging, telephony (Voice over Internet Protocol or VoIP), two-way interactive video calls, and the World Wide Web. Increasing amounts of data are transmitted at higher and higher speeds over fiber optic networks operating at 1 Gbit/s, 10 Gbit/s, or more. The Internet continues to grow, driven by ever-greater amounts of online information and knowledge, commerce, entertainment and social networking services.
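A doubling time of 18 months compounds faster than intuition suggests. A quick check of the multiplier it implies over a few time spans:

```python
# Cumulative traffic growth implied by a fixed 18-month doubling time.

DOUBLING_MONTHS = 18

def growth_factor(years: float) -> float:
    """Traffic multiplier after the given number of years."""
    return 2 ** (years * 12 / DOUBLING_MONTHS)

for years in (3, 5, 10):
    print(f"{years:>2} years -> x{growth_factor(years):,.0f}")
# 3 years -> x4; 5 years -> x10; 10 years -> x102
```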
During the late 1990s, it was estimated that traffic on the public Internet grew by 100 percent per year, while the mean annual growth in the number of Internet users was thought to be between 20% and 50%. This growth is often attributed to the lack of central administration, which allows organic growth of the network, as well as to the non-proprietary nature of the Internet protocols, which encourages vendor interoperability and prevents any one company from exerting too much control over the network. In November 2006, the Internet was included on USA Today's list of the New Seven Wonders. As of 31 March 2011, the estimated total number of Internet users was 2.095 billion (30% of the world population). It is estimated that in 1993 the Internet carried only 1% of the information flowing through two-way telecommunication. By 2000 this figure had grown to 51%, and by 2007 more than 97% of all telecommunicated information was carried over the Internet. Modern smartphones can access the Internet through cellular carrier networks, and Internet usage by mobile and tablet devices exceeded desktop usage worldwide for the first time in October 2016. As of 2018, 80% of the world's population were covered by a 4G network. The International Telecommunication Union (ITU) estimated that, by the end of 2017, 48% of individual users regularly connected to the Internet, up from 34% in 2012. Mobile Internet connectivity has played an important role in expanding access in recent years, especially in Asia and the Pacific and in Africa. The number of unique mobile cellular subscriptions increased from 3.9 billion in 2012 to 4.8 billion in 2016, two-thirds of the world's population, with more than half of subscriptions located in Asia and the Pacific. The limits that users face on accessing information via mobile applications coincide with a broader process of fragmentation of the Internet. Fragmentation restricts access to media content and tends to affect the poorest users the most. One solution, zero-rating, is the practice of Internet service providers allowing users free connectivity to specific content or applications. Social impact The Internet has enabled new forms of social interaction, activities, and social associations, giving rise to the scholarly study of the sociology of the Internet. Between 2000 and 2009, the number of Internet users globally rose from 390 million to 1.9 billion. By 2010, 22% of the world's population had access to computers, with 1 billion Google searches every day, 300 million Internet users reading blogs, and 2 billion videos viewed daily on YouTube. In 2014 the world's Internet users surpassed 3 billion, or 44 percent of the world population, but two-thirds came from the richest countries, with 78 percent of Europeans using the Internet, followed by 57 percent of the Americas. However, by 2018, Asia alone accounted for 51% of all Internet users, with 2.2 billion out of the 4.3 billion Internet users in the world. China's Internet users surpassed a major milestone in 2018, when the country's Internet regulatory authority, the China Internet Network Information Centre, announced that China had 802 million users. China was followed by India, with some 700 million users, and the United States third, with 275 million users. However, in terms of penetration, in 2022 China had a 70% penetration rate, compared to India's 60% and the United States's 90%.
In 2022, 54% of the world's Internet users were based in Asia, 14% in Europe, 7% in North America, 10% in Latin America and the Caribbean, 11% in Africa, 4% in the Middle East and 1% in Oceania. In 2019, Kuwait, Qatar, the Falkland Islands, Bermuda and Iceland had the highest Internet penetration by the number of users, with 93% or more of the population having access. As of 2022, it was estimated that 5.4 billion people use the Internet, more than two-thirds of the world's population. Early computer systems were limited to the characters in the American Standard Code for Information Interchange (ASCII), a subset of the Latin alphabet. After English (27%), the most requested languages on the World Wide Web are Chinese (25%), Spanish (8%), Japanese (5%), Portuguese and German (4% each), Arabic, French and Russian (3% each), and Korean (2%). Modern character encoding standards, such as Unicode, allow for development and communication in the world's widely used languages. However, some glitches, such as mojibake (the incorrect display of some languages' characters), still remain.
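Mojibake is easy to reproduce: it appears whenever bytes written in one character encoding are decoded with another. A minimal sketch:

```python
# Mojibake in miniature: UTF-8 bytes decoded with the wrong codec.

raw = "café".encode("utf-8")   # b'caf\xc3\xa9': the accented e takes two bytes
print(raw.decode("latin-1"))   # 'cafÃ©' -- each byte misread as its own character
print(raw.decode("utf-8"))     # 'café'  -- the correct codec restores the text
```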
Several neologisms refer to Internet users: netizen (as in "citizen of the net") refers to those actively involved in improving online communities, the Internet in general, or surrounding political affairs and rights such as free speech; internaut refers to operators or technically highly capable users of the Internet; and digital citizen refers to a person using the Internet in order to engage in society, politics, and government participation. The Internet allows greater flexibility in working hours and location, especially with the spread of unmetered high-speed connections. The Internet can be accessed almost anywhere by numerous means, including through mobile Internet devices; mobile phones, datacards, handheld game consoles and cellular routers allow users to connect wirelessly. Educational material at all levels, from pre-school (e.g. CBeebies) to post-doctoral (e.g. scholarly literature through Google Scholar), is available on websites. The Internet has facilitated the development of virtual universities and distance education, enabling both formal and informal education. The Internet also allows researchers to conduct research remotely via virtual laboratories, with profound changes in the reach and generalizability of findings as well as in communication between scientists and in the publication of results. By the late 2010s the Internet had been described as "the main source of scientific information for the majority of the global North population". Wikis have also been used in the academic community for the sharing and dissemination of information across institutional and international boundaries. In those settings, they have been found useful for collaboration on grant writing, strategic planning, departmental documentation, and committee work. The United States Patent and Trademark Office uses a wiki to allow the public to collaborate on finding prior art relevant to the examination of pending patent applications. Queens, New York, has used a wiki to allow citizens to collaborate on the design and planning of a local park. The English Wikipedia has the largest user base among wikis on the World Wide Web and ranks in the top 10 among all sites in terms of traffic. The Internet has been a major outlet for leisure activity since its inception, with entertaining social experiments such as MUDs and MOOs being conducted on university servers, and humor-related Usenet groups receiving much traffic. Many Internet forums have sections devoted to games and funny videos. Another area of leisure activity on the Internet is multiplayer gaming. This form of recreation creates communities, where people of all ages and origins enjoy the fast-paced world of multiplayer games. These range from MMORPGs to first-person shooters, from role-playing video games to online gambling. While online gaming has been around since the 1970s, modern modes of online gaming began with subscription services such as GameSpy and MPlayer. Streaming media is the real-time delivery of digital media for immediate consumption or enjoyment by end users. Streaming companies (such as Netflix, Disney+, Amazon's Prime Video, Mubi, Hulu, and Apple TV+) now dominate the entertainment industry, eclipsing traditional broadcasters. Audio streamers such as Spotify and Apple Music also have significant market share in the audio entertainment market. Video sharing websites are also a major factor in the entertainment ecosystem. YouTube was founded on 15 February 2005 and is now the leading website for free streaming video, with more than two billion users. It uses a web player to stream and show video files. YouTube users watch hundreds of millions of videos daily and upload hundreds of thousands. Other video sharing websites include Vimeo, Instagram and TikTok. Although many governments have attempted to restrict both Internet pornography and online gambling, this has generally failed to stop their widespread popularity. A number of advertising-funded ostensible video sharing websites known as "tube sites" have been created to host shared pornographic video content. Due to laws requiring the documentation of the origin of pornography, these websites now largely operate in conjunction with pornographic movie studios and their own independent creator networks, acting as de facto video streaming services. Major players in this field include the market leader Aylo, the operator of PornHub and numerous other branded sites, as well as other independent operators such as xHamster and Xvideos. As of 2023, Internet traffic to pornographic video sites rivalled that of mainstream video streaming and sharing services. Remote work is facilitated by tools such as groupware, virtual private networks, conference calling, videotelephony, and VoIP, so that work may be performed from any location, such as the worker's home. The spread of low-cost Internet access in developing countries has opened up new possibilities for peer-to-peer charities, which allow individuals to contribute small amounts to charitable projects for other individuals. Websites such as DonorsChoose and GlobalGiving allow small-scale donors to direct funds to individual projects of their choice. A popular twist on Internet-based philanthropy is the use of peer-to-peer lending for charitable purposes. Kiva pioneered this concept in 2005, offering the first web-based service to publish individual loan profiles for funding. The low cost and nearly instantaneous sharing of ideas, knowledge, and skills have made collaborative work dramatically easier, with the help of collaborative software, which allows groups to form easily, communicate cheaply, and share ideas.
A notable product of such collaboration is the free software movement, which has created, among other things, Linux, Mozilla Firefox, and OpenOffice.org (later forked into LibreOffice). Content management systems allow collaborating teams to work on shared sets of documents simultaneously without accidentally destroying each other's work. The Internet also enables cloud computing, virtual private networks, remote desktops, and remote work. The online disinhibition effect describes the tendency of many individuals to behave more stridently or offensively online than they would in person. A significant number of feminist women have been the target of various forms of harassment, ranging from insults and hate speech to, in extreme cases, rape and death threats, in response to posts they have made on social media. Social media companies have been criticized in the past for not doing enough to aid victims of online abuse. Children also face dangers online, such as cyberbullying and approaches by sexual predators, who sometimes pose as children themselves. Out of naivety, children may also post personal information about themselves online, which could put them or their families at risk unless they are warned not to do so. Many parents choose to enable Internet filtering or supervise their children's online activities in an attempt to protect their children from pornography or violent content on the Internet. The most popular social networking services commonly forbid users under the age of 13. However, these policies can be circumvented by registering an account with a false birth date, and a significant number of children aged under 13 join such sites. Social networking services for younger children, which claim to provide better levels of protection, also exist. Internet usage has been correlated with loneliness: lonely people tend to use the Internet as an outlet for their feelings and to share their stories with others, as in the "I am lonely will anyone speak to me" thread. Cyberslacking can become a drain on corporate resources, as employees spend a significant amount of time surfing the Web while at work. Internet addiction disorder is excessive computer use that interferes with daily life. Nicholas G. Carr believes that Internet use has other effects on individuals, for instance improving skills of scan-reading and interfering with the deep thinking that leads to true creativity. Electronic business encompasses business processes spanning the entire value chain: purchasing, supply chain management, marketing, sales, customer service, and business relationships. E-commerce seeks to add revenue streams by using the Internet to build and enhance relationships with clients and partners. According to the International Data Corporation, worldwide e-commerce, combining global business-to-business and business-to-consumer transactions, equated to $16 trillion in 2013. A report by Oxford Economics added those two figures together to estimate the total size of the digital economy at $20.4 trillion, equivalent to roughly 13.8% of global sales. While much has been written about the economic advantages of Internet-enabled commerce, there is also evidence that some aspects of the Internet, such as maps and location-aware services, may serve to reinforce economic inequality and the digital divide.
Electronic commerce may be responsible for consolidation and the decline of mom-and-pop, brick-and-mortar businesses, resulting in increases in income inequality. A 2013 Institute for Local Self-Reliance report states that brick-and-mortar retailers employ 47 people for every $10 million in sales, while Amazon employs only 14. Similarly, the 700-employee room rental start-up Airbnb was valued at $10 billion in 2014, about half as much as Hilton Worldwide, which employs 152,000 people. At that time, Uber employed 1,000 full-time employees and was valued at $18.2 billion, about the same valuation as Avis Rent a Car and The Hertz Corporation combined, which together employed almost 60,000 people. Advertising on popular web pages can be lucrative, and e-commerce, the sale of products and services directly via the Web, continues to grow. Online advertising is a form of marketing and advertising which uses the Internet to deliver promotional marketing messages to consumers. It includes email marketing, search engine marketing (SEM), social media marketing, many types of display advertising (including web banner advertising), and mobile advertising. In 2011, Internet advertising revenues in the United States surpassed those of cable television and nearly exceeded those of broadcast television. Many common online advertising practices are controversial and increasingly subject to regulation. The Internet has achieved new relevance as a political tool. The presidential campaign of Howard Dean in 2004 in the United States was notable for its success in soliciting donations via the Internet. Many political groups use the Internet as a new method of organizing to carry out their missions, giving rise to Internet activism. Social media websites, such as Facebook and Twitter, helped people organize during the Arab Spring by helping activists organize protests, communicate grievances, and disseminate information. Many have understood the Internet as an extension of the Habermasian notion of the public sphere, observing how network communication technologies provide something like a global civic forum. However, incidents of politically motivated Internet censorship have now been recorded in many countries, including western democracies. E-government is the use of technological communications devices, such as the Internet, to provide public services to citizens and other persons in a country or region. E-government offers opportunities for more direct and convenient citizen access to government, and for government provision of services directly to citizens. Cybersectarianism is a new organizational form that involves highly dispersed small groups of practitioners who may remain largely anonymous within the larger social context and operate in relative secrecy, while still linked remotely to a larger network of believers who share a set of practices and texts, and often a common devotion to a particular leader. Overseas supporters provide funding and support; domestic practitioners distribute tracts, participate in acts of resistance, and share information on the internal situation with outsiders. Collectively, members and practitioners of such sects construct viable virtual communities of faith, exchanging personal testimonies and engaging in collective study via email, online chat rooms, and web-based message boards.
In particular, the British government has raised concerns about the prospect of young British Muslims being indoctrinated into Islamic extremism by material on the Internet, being persuaded to join terrorist groups such as the so-called "Islamic State", and then potentially committing acts of terrorism on returning to Britain after fighting in Syria or Iraq. Applications and services The Internet carries many applications and services, most prominently the World Wide Web, including social media, electronic mail, mobile applications, multiplayer online games, Internet telephony, file sharing, and streaming media services. The World Wide Web is a global collection of documents, images, multimedia, applications, and other resources, logically interrelated by hyperlinks and referenced with Uniform Resource Identifiers (URIs), which provide a global system of named references. URIs symbolically identify services, web servers, databases, and the documents and resources that they can provide. HyperText Transfer Protocol (HTTP) is the main access protocol of the World Wide Web. Web services also use HTTP for communication between software systems, for information transfer and for sharing and exchanging business data and logistics; it is one of many protocols that can be used for communication on the Internet. World Wide Web browser software, such as Microsoft Edge, Mozilla Firefox, Opera, Apple's Safari, and Google Chrome, enables users to navigate from one web page to another via the hyperlinks embedded in the documents. These documents may also contain computer data, including graphics, sounds, text, video, multimedia and interactive content. Client-side scripts can include animations, games, office applications and scientific demonstrations. Email is an important communications service available via the Internet. The concept of sending electronic text messages between parties, analogous to mailing letters or memos, predates the creation of the Internet. Internet telephony is a common communications service realized with the Internet. The name of the principal internetworking protocol, the Internet Protocol, lends its name to voice over Internet Protocol (VoIP). VoIP systems now dominate many markets, being as easy and convenient as a traditional telephone while offering substantial cost savings, especially over long distances. File sharing is the practice of transferring large amounts of data in the form of computer files across the Internet, for example via file servers. The load of bulk downloads to many users can be eased by the use of "mirror" servers or peer-to-peer networks. Access to a file may be controlled by user authentication, the transit of the file over the Internet may be obscured by encryption, and money may change hands for access to the file. The price can be paid by the remote charging of funds from, for example, a credit card whose details are also passed, usually fully encrypted, across the Internet. The origin and authenticity of the file received may be checked by a digital signature.
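The request-response pattern underlying the Web is simple to demonstrate. A minimal sketch using Python's standard library, with example.org standing in as a placeholder host: the client dereferences a URL, and the server answers with a status code, headers, and the resource itself.

```python
# A minimal HTTP GET using only the standard library. example.org is a
# placeholder host reserved for documentation and examples.

from urllib.request import urlopen

with urlopen("https://example.org/") as response:
    print(response.status)                     # e.g. 200 (OK)
    print(response.headers["Content-Type"])    # e.g. text/html; charset=UTF-8
    body = response.read()                     # the resource itself, as bytes
    print(f"{len(body)} bytes of HTML")
```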
Governance The Internet is a global network that comprises many voluntarily interconnected autonomous networks. It operates without a central governing body. The technical underpinning and standardization of the core protocols (IPv4 and IPv6) is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with by contributing technical expertise. While the hardware components in the Internet infrastructure can often be used to support other software systems, it is the design and the standardization process of the software that characterizes the Internet and provides the foundation for its scalability and success. The responsibility for the architectural design of the Internet software systems has been assumed by the IETF. The IETF conducts standard-setting working groups, open to any individual, on the various aspects of Internet architecture. The resulting contributions and standards are published as Request for Comments (RFC) documents on the IETF web site. The principal methods of networking that enable the Internet are contained in specially designated RFCs that constitute the Internet Standards. Other, less rigorous documents are simply informative, experimental, or historical, or document the best current practices when implementing Internet technologies. To maintain interoperability, the principal name spaces of the Internet are administered by the Internet Corporation for Assigned Names and Numbers (ICANN). ICANN is governed by an international board of directors drawn from across the Internet technical, business, academic, and other non-commercial communities. The organization coordinates the assignment of unique identifiers for use on the Internet, including domain names, IP addresses, application port numbers in the transport protocols, and many other parameters. Globally unified name spaces are essential for maintaining the global reach of the Internet. This role of ICANN distinguishes it as perhaps the only central coordinating body for the global Internet. The National Telecommunications and Information Administration, an agency of the United States Department of Commerce, had final approval over changes to the DNS root zone until the IANA stewardship transition on 1 October 2016. Regional Internet registries (RIRs) were established for five regions of the world to assign IP address blocks and other Internet parameters to local registries, such as Internet service providers, from a designated pool of addresses set aside for each region: AFRINIC for Africa, APNIC for the Asia-Pacific region, ARIN for North America, LACNIC for Latin America and the Caribbean, and RIPE NCC for Europe, the Middle East, and Central Asia. The Internet Society (ISOC) was founded in 1992 with a mission to "assure the open development, evolution and use of the Internet for the benefit of all people throughout the world". Its members include individuals as well as corporations, organizations, governments, and universities. Among other activities, ISOC provides an administrative home for a number of less formally organized groups that are involved in developing and managing the Internet, including the Internet Engineering Task Force (IETF), Internet Architecture Board (IAB), Internet Engineering Steering Group (IESG), Internet Research Task Force (IRTF), and Internet Research Steering Group (IRSG). On 16 November 2005, the United Nations-sponsored World Summit on the Information Society in Tunis established the Internet Governance Forum (IGF) to discuss Internet-related issues. Infrastructure The communications infrastructure of the Internet consists of its hardware components and a system of software layers that control various aspects of the architecture. As with any computer network, the Internet physically consists of routers, media (such as cabling and radio links), repeaters, and modems. However, as an example of internetworking, many of the network nodes are not necessarily Internet equipment per se.
Internet packets are carried by other full-fledged networking protocols, with the Internet acting as a homogeneous networking standard, running across heterogeneous hardware, with the packets guided to their destinations by IP routers. Internet service providers (ISPs) establish worldwide connectivity between individual networks at various levels of scope. At the top of the routing hierarchy are the tier 1 networks, large telecommunication companies that exchange traffic directly with each other via very high speed fiber-optic cables, governed by peering agreements. Tier 2 and lower-level networks buy Internet transit from other providers to reach at least some parties on the global Internet, though they may also engage in peering. End-users who only access the Internet when needed to perform a function or obtain information represent the bottom of the routing hierarchy. An ISP may use a single upstream provider for connectivity, or implement multihoming to achieve redundancy and load balancing. Internet exchange points are major traffic exchanges with physical connections to multiple ISPs. Large organizations, such as academic institutions, large enterprises, and governments, may perform the same function as ISPs, engaging in peering and purchasing transit on behalf of their internal networks. Research networks tend to interconnect with large subnetworks such as GEANT, GLORIAD, Internet2, and the UK's national research and education network, JANET. Common methods of Internet access by users include broadband over coaxial cable, fiber optics or copper wires, Wi-Fi, satellite, and cellular telephone technology. Grassroots efforts have led to wireless community networks. Commercial Wi-Fi services that cover large areas are available in many cities, such as New York, London, Vienna, Toronto, San Francisco, Philadelphia, Chicago and Pittsburgh. Most servers that provide Internet services are today hosted in data centers, and content is often accessed through high-performance content delivery networks. Colocation centers often host private peering connections between their customers, Internet transit providers, cloud providers, meet-me rooms for connecting customers together, Internet exchange points, and landing points and terminal equipment for the fiber-optic submarine communication cables that tie the Internet together. Internet Protocol Suite The Internet standards describe a framework known as the Internet protocol suite (also called TCP/IP, after its first two components). This is a suite of protocols that are ordered into a set of four conceptual layers by the scope of their operation, originally documented in RFC 1122 and RFC 1123. The most prominent component of the Internet model is the Internet Protocol (IP). IP enables internetworking and, in essence, establishes the Internet itself. Two versions of the Internet Protocol exist, IPv4 and IPv6. Aside from the complex array of physical connections that make up its infrastructure, the Internet is facilitated by bi- or multi-lateral commercial contracts (e.g., peering agreements) and by technical specifications or protocols that describe the exchange of data over the network. For locating individual computers on the network, the Internet provides IP addresses. IP addresses are used by the Internet infrastructure to direct Internet packets to their destinations.
They consist of fixed-length numbers, which are found within the packet. IP addresses are generally assigned to equipment either automatically via the Dynamic Host Configuration Protocol (DHCP) or configured manually. The Domain Name System (DNS) converts user-entered domain names (e.g. "en.wikipedia.org") into IP addresses. Internet Protocol version 4 (IPv4) defines an IP address as a 32-bit number. IPv4 is the initial version used on the first generation of the Internet and is still in dominant use. It was designed in 1981 to address up to ≈4.3 billion (10⁹) hosts. However, the explosive growth of the Internet has led to IPv4 address exhaustion, which entered its final stage in 2011, when the global IPv4 address allocation pool was exhausted. Because of the growth of the Internet and the depletion of available IPv4 addresses, a new version of IP, IPv6, was developed in the mid-1990s; it provides vastly larger addressing capabilities and more efficient routing of Internet traffic. IPv6 uses 128 bits for the IP address and was standardized in 1998. IPv6 deployment has been ongoing since the mid-2000s and continues to grow around the world, since Internet address registries began to urge all resource managers to plan rapid adoption and conversion. By design, IPv6 is not directly interoperable with IPv4. Instead, it establishes a parallel version of the Internet not directly accessible with IPv4 software. Thus, translation facilities exist for internetworking, and some nodes have duplicate networking software for both networks. Essentially all modern computer operating systems support both versions of the Internet Protocol; network infrastructure, however, has been lagging in this development. A subnet or subnetwork is a logical subdivision of an IP network. Computers that belong to a subnet are addressed with an identical most-significant bit-group in their IP addresses. This results in the logical division of an IP address into two fields, the network number or routing prefix and the rest field or host identifier. The rest field is an identifier for a specific host or network interface. The routing prefix may be expressed in Classless Inter-Domain Routing (CIDR) notation, written as the first address of a network, followed by a slash character (/), and ending with the bit-length of the prefix. For example, 198.51.100.0/24 is the prefix of the Internet Protocol version 4 network starting at the given address, having 24 bits allocated for the network prefix and the remaining 8 bits reserved for host addressing. Addresses in the range 198.51.100.0 to 198.51.100.255 belong to this network. The IPv6 address specification 2001:db8::/32 is a large address block with 2⁹⁶ addresses, having a 32-bit routing prefix. For IPv4, a network may also be characterized by its subnet mask or netmask, which is the bitmask that, when applied by a bitwise AND operation to any IP address in the network, yields the routing prefix. Subnet masks are also expressed in dot-decimal notation like an address. For example, 255.255.255.0 is the subnet mask for the prefix 198.51.100.0/24. Computers and routers use routing tables in their operating system to forward IP packets to reach a node on a different subnetwork. Routing tables are maintained by manual configuration or automatically by routing protocols.
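These notations are directly computable. A short sketch with Python's standard ipaddress module reproduces the 198.51.100.0/24 example above and adds a toy longest-prefix-match lookup of the kind a routing table performs; the table entries are illustrative, not real routes.

```python
import ipaddress

# The worked example from the text: prefix, netmask, range, membership.
net = ipaddress.ip_network("198.51.100.0/24")
print(net.netmask)                                  # 255.255.255.0
print(net[0], "-", net[-1])                         # 198.51.100.0 - 198.51.100.255
print(ipaddress.ip_address("198.51.100.7") in net)  # True

# Toy routing table: forwarding picks the most specific matching prefix.
table = [ipaddress.ip_network(p)
         for p in ("0.0.0.0/0", "198.51.0.0/16", "198.51.100.0/24")]

def lookup(destination: str) -> ipaddress.IPv4Network:
    """Longest-prefix match: the containing entry with the longest prefix."""
    addr = ipaddress.ip_address(destination)
    return max((n for n in table if addr in n), key=lambda n: n.prefixlen)

print(lookup("198.51.100.7"))  # 198.51.100.0/24, the most specific match
print(lookup("203.0.113.5"))   # 0.0.0.0/0, the default route
```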
End-nodes typically use a default route that points toward an ISP providing transit, while ISP routers use the Border Gateway Protocol to establish the most efficient routing across the complex connections of the global Internet. The default gateway is the node that serves as the forwarding host (router) to other networks when no other route specification matches the destination IP address of a packet. Security Internet resources, hardware, and software components are the target of criminal or malicious attempts to gain unauthorized control to cause interruptions, commit fraud, engage in blackmail or access private information. Malware is malicious software used and distributed via the Internet. It includes computer viruses, which are copied with the help of humans; computer worms, which copy themselves automatically; software for denial-of-service attacks; ransomware; botnets; and spyware that reports on the activity and typing of users. Usually, these activities constitute cybercrime. Defense theorists have also speculated about the possibility of hackers waging cyber warfare with similar methods on a large scale. Malware poses serious problems to individuals and businesses on the Internet. According to Symantec's 2018 Internet Security Threat Report (ISTR), the number of malware variants rose to 669,947,865 in 2017, twice as many as in 2016. Cybercrime, which includes malware attacks as well as other crimes committed by computer, was predicted to cost the world economy US$6 trillion in 2021, and is increasing at a rate of 15% per year. Since 2021, malware has been designed to target computer systems that run critical infrastructure such as the electricity distribution network. Malware can be designed to evade antivirus software detection algorithms. The vast majority of computer surveillance involves the monitoring of data and traffic on the Internet. In the United States, for example, under the Communications Assistance For Law Enforcement Act, all phone calls and broadband Internet traffic (emails, web traffic, instant messaging, etc.) are required to be available for unimpeded real-time monitoring by federal law enforcement agencies. Under the Act, all U.S. telecommunications providers are required to install packet sniffing technology to allow federal law enforcement and intelligence agencies to intercept all of their customers' broadband Internet and VoIP traffic. The large amount of data gathered from packet capture requires surveillance software that filters and reports relevant information, such as the use of certain words or phrases, access to certain types of web sites, or communication via email or chat with certain parties. Agencies such as the Information Awareness Office, NSA, GCHQ and the FBI spend billions of dollars per year to develop, purchase, implement, and operate systems for the interception and analysis of data. Similar systems are operated by the Iranian secret police to identify and suppress dissidents. The required hardware and software were allegedly installed by Germany's Siemens AG and Finland's Nokia. Some governments, such as those of Myanmar, Iran, North Korea, mainland China, Saudi Arabia and the United Arab Emirates, restrict access to content on the Internet within their territories, especially to political and religious content, with domain name and keyword filters.
In Norway, Denmark, Finland, and Sweden, major Internet service providers have voluntarily agreed to restrict access to sites listed by authorities. While this list of forbidden resources is supposed to contain only known child pornography sites, the content of the list is secret. Many countries, including the United States, have enacted laws against the possession or distribution of certain material, such as child pornography, via the Internet, but do not mandate filter software. Many free or commercially available software programs, called content-control software, are available to users to block specific offensive content on individual computers or networks, in order to limit children's access to pornographic material or depictions of violence. Performance As the Internet is a heterogeneous network, its physical characteristics, including, for example, the data transfer rates of connections, vary widely. It exhibits emergent phenomena that depend on its large-scale organization. [Chart: global Internet traffic volume in petabytes per month, 1990–2015.] The volume of Internet traffic is difficult to measure because no single point of measurement exists in the multi-tiered, non-hierarchical topology. Traffic data may be estimated from the aggregate volume through the peering points of the Tier 1 network providers, but traffic that stays local in large provider networks may not be accounted for. An Internet blackout or outage can be caused by local signaling interruptions. Disruptions of submarine communications cables may cause blackouts or slowdowns to large areas, such as in the 2008 submarine cable disruption. Less-developed countries are more vulnerable due to their small number of high-capacity links. Land cables are also vulnerable, as in 2011 when a woman digging for scrap metal severed most connectivity for the nation of Armenia. Internet blackouts affecting almost entire countries can be achieved by governments as a form of Internet censorship, as in the blockage of the Internet in Egypt, whereby approximately 93% of networks were without access in 2011 in an attempt to stop mobilization for anti-government protests. Estimates of the Internet's electricity usage have been the subject of controversy: a 2014 peer-reviewed research paper found claims differing by a factor of 20,000 published in the literature during the preceding decade, ranging from 0.0064 kilowatt-hours per gigabyte transferred (kWh/GB) to 136 kWh/GB. The researchers attributed these discrepancies mainly to the year of reference (i.e. whether efficiency gains over time had been taken into account) and to whether "end devices such as personal computers and servers are included" in the analysis.
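The size of that disagreement is worth making concrete. Using the two extreme published intensities, the energy attributed to the very same transfer differs by more than four orders of magnitude:

```python
# Energy attributed to one transfer under the two extreme published
# electricity-intensity estimates quoted above.

LOW_KWH_PER_GB = 0.0064   # lowest published estimate
HIGH_KWH_PER_GB = 136.0   # highest published estimate

transfer_gb = 100         # an arbitrary example transfer

print(f"Low estimate:  {LOW_KWH_PER_GB * transfer_gb:.2f} kWh")    # 0.64 kWh
print(f"High estimate: {HIGH_KWH_PER_GB * transfer_gb:,.0f} kWh")  # 13,600 kWh
print(f"Spread: {HIGH_KWH_PER_GB / LOW_KWH_PER_GB:,.0f}x")         # 21,250x
```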
In 2011, academic researchers estimated the overall energy used by the Internet to be between 170 and 307 GW, less than two percent of the energy used by humanity. This estimate included the energy needed to build, operate, and periodically replace the estimated 750 million laptops, a billion smartphones and 100 million servers worldwide, as well as the energy that routers, cell towers, optical switches, Wi-Fi transmitters and cloud storage devices use when transmitting Internet traffic. According to a non-peer-reviewed study published in 2018 by The Shift Project (a French think tank funded by corporate sponsors), nearly 4% of global CO2 emissions could be attributed to global data transfer and the necessary infrastructure. The study also said that online video streaming alone accounted for 60% of this data transfer and therefore contributed over 300 million tons of CO2 emissions per year, and it argued for new "digital sobriety" regulations restricting the use and size of video files.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Konami] | [TOKENS: 4364]
Contents Konami Konami Group Corporation (Japanese: コナミグループ株式会社, Hepburn: Konami Gurūpu kabushiki-gaisha), commonly known as Konami, is a Japanese multinational entertainment company and video game developer and publisher headquartered in Chūō, Tokyo. The company also produces and distributes trading cards, anime, tokusatsu, pachinko machines, slot machines, and arcade cabinets. It operates casinos around the world and health and physical fitness clubs across Japan. The company originated in 1969 as a jukebox rental and repair business in Toyonaka, Osaka, Japan, founded by Kagemasa Kōzuki, who remains the company's chairman. In addition to its flagship development subsidiary, Konami owns Bemani, known for Dance Dance Revolution and Beatmania, as well as the assets of former game developer Hudson Soft, known for Bomberman, Adventure Island, Bonk, Bloody Roar, and Star Soldier. Konami is the twentieth-largest game company in the world by revenue. Konami also publishes the Yu-Gi-Oh! Trading Card Game, one of the best-selling trading card games in history. Konami's video game franchises include Metal Gear, Silent Hill, Power Pros, Castlevania, Contra, Frogger, Tokimeki Memorial, Gradius, Parodius, Yu-Gi-Oh!, Suikoden, and eFootball (including its predecessors International Superstar Soccer and Pro Evolution Soccer). History The company was founded on 21 March 1969 as a jukebox rental and repair business in Toyonaka, Osaka by three business partners: Kagemasa Kōzuki, Nakama, and Miyasako. Kōzuki and Miyasako met while working at Nippon Columbia's Osaka branch; Nakama, who was Miyasako's acquaintance, also worked in the music industry. The name Konami is a portmanteau of their names. By 1973, the Japanese jukebox industry was in decline, which caused the business to transition into a manufacturer of electro-mechanical arcade games. On 19 March 1973, the company was officially incorporated under the name Konami Industry Co., Ltd. (コナミ工業株式会社, Konami Kōgyō kabushiki gaisha). In the late 1970s, Konami began developing video games as a contractor for the Leijac Corporation, an early video game publisher; their first video game was Block Yard, a coin-operated Breakout clone, which was released in August 1977. In January 1979, they began exporting products to the United States. Konami began to achieve success with arcade games in the early 1980s, starting with Scramble (1981), followed by hits such as Frogger (1981), Super Cobra (1981), Time Pilot (1982), Roc'n Rope (1983), Track & Field (1983), and Yie Ar Kung-Fu (1985). Many of their early games were licensed to other companies for US release, including Centuri, Stern Electronics, Sega, and Gremlin Industries. They established their U.S. subsidiary, Konami Inc. (later Konami of America Inc., and Konami Digital Entertainment Inc.), in November 1982; initially based in Torrance, California, they later moved to Buffalo Grove, Illinois, in 1984, following their acquisition of arcade distributor Interlogic, Inc., with Interlogic founder and president Ben Harel serving as president of Konami Inc. It was during this period that Konami began expanding their video game business into the home consumer market, following a brief stint releasing Atari 2600 games for the U.S. market in 1982.
The company released numerous games for the MSX home computer standard in 1983, followed by the Nintendo Entertainment System in 1985. Numerous Konami franchises were established during this period on both platforms, as well as in the arcades, such as Gradius, Castlevania, TwinBee, Ganbare Goemon, Contra, and Metal Gear, in addition to success with hit licensed games such as Teenage Mutant Ninja Turtles (TMNT). Due to the success of their arcade and NES games, Konami's earnings grew from $10 million in 1987 to $300 million in 1991. In June 1991, Konami's legal name was changed to Konami Co., Ltd. (コナミ株式会社, Konami kabushiki gaisha), and their headquarters were relocated to Minato, Tokyo, in April 1993. The company started supporting the 16-bit video game consoles during this period, starting with the Super NES in 1990, followed by the PC Engine in 1991 and the Sega Genesis in 1992.

In 1991, Konami also introduced a new approach to combating piracy in Teenage Mutant Ninja Turtles III: The Manhattan Project, released for the NES that year. If the game detected that it was an unauthorized copy, it subtly altered its gameplay mechanics: the player's attack damage was reduced, enemy attacks became significantly stronger, and the final boss, Shredder, was made invincible, rendering the game impossible to complete. The measure deterred piracy by making the game frustratingly difficult on unauthorized copies (a short illustrative sketch of this scheme appears below).

After the launch of the Sega Saturn and PlayStation in 1994, Konami adopted a business divisional organization with the formation of various Konami Computer Entertainment (KCE) subsidiaries, starting with KCE Tokyo and KCE Osaka (later known as KCE Studios) in April 1995, followed by KCE Japan (later known as Kojima Productions) in April 1996. Each KCE subsidiary created different intellectual properties, such as KCE Tokyo's Silent Hill series and KCE Japan's Metal Gear Solid series (a revival of the Metal Gear series from the MSX). In 1997, Konami started producing rhythm games for arcades under the Bemani brand and branched into the collectible card game business with the launch of the Yu-Gi-Oh! Trading Card Game. Konami also expanded into the pachinko business, which played a significant role in the company's success by helping to popularize new, original characters.

In July 2000, the company's legal English name was changed to Konami Corporation, though the Japanese legal name remained the same. As the company transitioned into developing video games for the sixth-generation consoles, it branched out into the health and fitness business, acquiring People Co., Ltd. and Daiei Olympic Sports Club, Inc., which became Konami subsidiaries. In August 2001, the company invested in another video game publisher, Hudson Soft, which became a consolidated subsidiary after Konami took up new shares issued to it by Hudson Soft. In January 2003, Avranches Automatique began handling sales of Konami's arcade games in Europe outside the U.K. and Ireland. On 7 February 2003, Betson Enterprises took over distribution and service for Konami's arcade games in the U.S. Some time later, PMT Sales started handling Konami arcade game sales in the U.K. and Ireland. In March 2006, Konami merged all of its video game development divisions into a new subsidiary known as Konami Digital Entertainment Co., Ltd. (KDE), as the parent company became a pure holding company.
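The anti-piracy scheme described above lends itself to a short sketch. The following is an illustrative reconstruction, not Konami's actual code: the original game was written in 6502 assembly, the detect_unauthorized_copy() routine stands in for whatever cartridge check the real game performed, and the specific damage multipliers are invented for the example.

```c
#include <stdbool.h>
#include <stdio.h>

/* Stand-in for the game's real copy check (e.g. a checksum over ROM
 * banks); the actual detection mechanism is not documented here. */
static bool detect_unauthorized_copy(void) {
    return true; /* pretend we are running a pirated copy */
}

static bool pirated; /* latched once at boot */

void on_boot(void) {
    pirated = detect_unauthorized_copy();
}

/* Player attacks deal reduced damage on a pirated copy. */
int player_attack_damage(int base) {
    return pirated ? base / 2 : base;
}

/* Enemy attacks hit significantly harder on a pirated copy. */
int enemy_attack_damage(int base) {
    return pirated ? base * 2 : base;
}

/* The final boss (Shredder) ignores all damage on a pirated copy,
 * making the game impossible to complete. */
void damage_boss(int *boss_hp, int damage, bool is_final_boss) {
    if (pirated && is_final_boss)
        return;
    *boss_hp -= damage;
}

int main(void) {
    on_boot();
    int shredder_hp = 100;
    damage_boss(&shredder_hp, 50, true); /* no effect: boss is invincible */
    printf("player damage dealt: %d, Shredder HP: %d\n",
           player_attack_damage(10), shredder_hp);
    return 0;
}
```

The subtlety is the point of such a design: a pirated copy still boots and plays, so the sabotage is far harder to attribute to copy protection than an outright lock-out would be.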
Their headquarters were relocated to Minato, Tokyo, in 2007. On 20 January 2009, Electrocoin became the exclusive distributor and after-sale agent of Konami's arcade games in Europe, Russia, the Middle East, and Africa. The absorption of Hudson Soft in 2012 brought several other franchises into the company, including Adventure Island, Bonk, Bloody Roar, Bomberman, Far East of Eden, and Star Soldier.

In April 2015, Konami delisted itself from the New York Stock Exchange following the dissolution of their Kojima Productions subsidiary. In a translated interview with Nikkei Trendy Net published the following month, the newly appointed president of Konami's gaming division, Konami Digital Entertainment, Hideki Hayakawa, announced that Konami would shift its focus towards mobile gaming for the time being, claiming that "mobile is where the future of gaming lies." The same month, the trade name of the company was changed from Konami Corporation to Konami Holdings Corporation. That year, Konami also consolidated into its headquarters the production teams it had established in 2004, including Pawapuro Production, BEMANI Production, Virtual Kiss Production, Loveplus Production, Kojima Productions, and others.

In 2017, Konami announced that it would be reviving some of the company's other well-known video game titles following the success of its Nintendo Switch launch title Super Bomberman R. In early 2020, Konami moved its headquarters to the Ginza district of Tokyo, which includes a facility for holding esports events as well as a school for esports players. Konami announced a major restructuring of Konami Digital Entertainment on 25 January 2021, which included the dissolution of its Product Divisions 1, 2, and 3, to be reconsolidated into a new structure to be announced at a later time. Konami affirmed this would not affect its commitment to video games and was only an internal restructuring. On 1 July 2022, Konami changed its corporate name again, from Konami Holdings Corporation to Konami Group Corporation.

In April 2023, Konami announced that it had opened a new studio in Osaka, Japan. The new offices, located in the south building of the Umeda Sky Building, will support the developer in its efforts to both grow and endure over the coming decades. Konami suggested that the new building would be a core entity in the studio's current and future projects, noting that it hopes Konami Osaka will encourage "sustainable growth" over the next 50 years. In February 2024, Konami Digital Entertainment announced the establishment of its own anime studio, Konami Animation. The studio will invest the CG technology and know-how fostered through game development into animation, and it plans to work not only on Konami's own intellectual properties but on other properties as well. Its first work was a promotional video for the Yu-Gi-Oh! 25th anniversary.

In May 2025, Konami announced that it would split off its arcade game business into a new subsidiary known as Konami Arcade Games (led by Bemani musician Yoshitaka Nishimura), leaving Konami Amusement to focus on pachinko and pachislot machines. In November 2025, following the successful releases of Metal Gear Solid Delta: Snake Eater and Silent Hill f, Konami signed strategic alliance and cooperation agreements focused on the video game business with CyberAgent (including Cygames) and Electronic Arts, and also signed official partnerships with the JOC/JPC, NPB, and the J-League.
Corporate structure

Konami is headquartered in Tokyo. In the United States, Konami manages its digital, arcade, and trading card game business from Hawthorne, California, and its casino gaming business from Paradise, Nevada. Its Australian gaming operations are in Sydney. As of March 2019, it owns 22 consolidated subsidiaries around the world.

On 7 November 2005, Konami Corporation announced that it would restructure into a holding company by moving its Japanese Digital Entertainment Business segment into a new subsidiary, Konami Digital Entertainment Co., Ltd., which was expected to begin operations on 31 March 2006. Konami Digital Entertainment Co., Ltd. (株式会社コナミデジタルエンタテインメント, Kabushiki-gaisha Konami Dejitaru Entateinmento) is Konami's Japanese video game development and publishing subsidiary, founded on 31 March 2006. Before Konami Corporation formally changed into a holding company in 2006, various Konami Digital Entertainment companies had been established, either as holding companies or as publishers; the last of these, the Japan-based Konami Digital Entertainment Co., Ltd., was split from Konami Corporation during the holding company restructuring process.

Konami Computer Entertainment Nagoya, Inc. (KCEN), founded on 1 October 1996, was dissolved along with Konami Computer Entertainment Kobe, Inc. (KCEK) in December 2002. On 16 December 2004, Konami Corporation announced that Konami Online, Inc., Konami Computer Entertainment Studios, Inc., Konami Computer Entertainment Tokyo, Inc., and Konami Computer Entertainment Japan, Inc. would merge into Konami Corporation, effective 1 April 2005. On 22 February 2005, Konami Corporation announced that Konami Media Entertainment, Inc. would merge into Konami Corporation, effective 1 March 2005. On 11 March 2005, Konami Corporation announced that Konami Traumer, Inc. would be merged back into Konami Corporation, effective 1 June 2005.

On 5 January 2006, Konami Corporation announced the merger of Konami Sports Corporation with its parent company, Konami Sports Life Corporation. The parent would be dissolved under the merger, and Konami Sports would become a wholly owned subsidiary of Konami Corporation after a share exchange between the two companies, following which Konami Sports would be renamed Konami Sports & Life Co., Ltd. On 28 February 2006, Konami Sports Corporation merged with its parent company, Konami Sports Life Corporation, with Konami Sports Corporation as the surviving entity.

On 21 September 2010, Konami Corporation announced it had signed an agreement to acquire Abilit Corporation via a share exchange. After the transaction, Abilit Corporation became a wholly owned subsidiary of Konami Corporation, effective 1 January 2011, and was renamed Takasago Electric Industry Co., Ltd. on the same date. As part of the acquisition, Biz Share Corporation also became a subsidiary of Konami Corporation. On 2 October 2006, Konami Corporation announced it had completed the acquisition of mobile phone content developer Megacyber Corporation; on 6 February 2007, it announced that Megacyber would be merged into Konami Digital Entertainment Co., Ltd., with the latter as the surviving company, effective 1 April 2007. On 1 April 2011, Konami acquired video game developer Hudson Soft, a company in which Konami had held a controlling stake since 11 April 2005.
On 1 March 2012, Hudson Soft merged with Konami Digital Entertainment, with the latter emerging as the surviving entity.

Video games

Major titles by Konami include the action Castlevania series, the survival horror Silent Hill series, the action shooter Contra series, the platform adventure Ganbare Goemon series, the stealth action Metal Gear series, the role-playing Suikoden series, the Bemani rhythm game series (which includes Dance Dance Revolution, Beatmania IIDX, GuitarFreaks, DrumMania, and Pop'n Music, among others), Dancing with the Stars, the dating simulation Tokimeki Memorial series, and the football simulation Pro Evolution Soccer. Konami has produced shoot 'em up arcade games such as Gradius, Life Force, Time Pilot, Gyruss, Parodius, Axelay, and TwinBee.

Konami has also licensed games based on cartoons, especially Batman: The Animated Series, Teenage Mutant Ninja Turtles, Tiny Toon Adventures, and Animaniacs, while other American productions such as The Simpsons, Bucky O'Hare, G.I. Joe, X-Men, and The Goonies, as well as the French comic Asterix, have all seen Konami releases on arcade machines or video game consoles. Some cinematically styled franchises from Konami are the Silent Hill survival horror franchise and the Metal Gear series. Another successful franchise is Winning Eleven, the spiritual sequel to International Superstar Soccer. In Japan, the company is known for the popular Jikkyō Powerful Pro Yakyū baseball series and the Zone of the Enders games. The company obtained the rights to Saw from Brash Entertainment when that game's production was suspended due to financial issues. Konami is also known for the Konami Code, a cheat code that traditionally grants many power-ups in its games (a short illustrative sketch of such sequence detection appears at the end of this section).

In 2024, FIFA announced Konami as its new official esports partner. This collaboration allows FIFA to host the FIFAe World Cup using Konami's eFootball instead of EA Sports FC. Players can now participate in qualifying matches for two tournaments scheduled for 2024: one for mobile and one for consoles. This partnership aims to enhance eFootball's visibility and attract new players, particularly those who were deterred by previous issues with the game.
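Returning to the cheat code mentioned above: one common way to detect a fixed button sequence such as the classic Up, Up, Down, Down, Left, Right, Left, Right, B, A is to keep a sliding window of the most recent presses and compare it to the target after each input. The sketch below is hypothetical; the button names and the power-up message are invented for illustration and do not come from any Konami code base.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

typedef enum { BTN_UP, BTN_DOWN, BTN_LEFT, BTN_RIGHT, BTN_A, BTN_B } Button;

/* The classic sequence: Up, Up, Down, Down, Left, Right, Left, Right, B, A. */
static const Button KONAMI_CODE[] = {
    BTN_UP, BTN_UP, BTN_DOWN, BTN_DOWN,
    BTN_LEFT, BTN_RIGHT, BTN_LEFT, BTN_RIGHT,
    BTN_B, BTN_A
};
#define CODE_LEN (sizeof KONAMI_CODE / sizeof KONAMI_CODE[0])

/* Sliding window of the most recent CODE_LEN presses. */
static Button history[CODE_LEN];
static size_t pressed = 0;

/* Feed one button press; returns true when the last CODE_LEN presses
 * exactly match the target sequence. */
static bool feed_input(Button b) {
    if (pressed < CODE_LEN) {
        history[pressed++] = b;
    } else {
        memmove(history, history + 1, (CODE_LEN - 1) * sizeof history[0]);
        history[CODE_LEN - 1] = b;
    }
    return pressed == CODE_LEN &&
           memcmp(history, KONAMI_CODE, sizeof KONAMI_CODE) == 0;
}

int main(void) {
    /* Simulate a player entering the code after one stray press. */
    const Button presses[] = {
        BTN_A, /* stray press; the window simply slides past it */
        BTN_UP, BTN_UP, BTN_DOWN, BTN_DOWN,
        BTN_LEFT, BTN_RIGHT, BTN_LEFT, BTN_RIGHT,
        BTN_B, BTN_A
    };
    for (size_t i = 0; i < sizeof presses / sizeof presses[0]; i++)
        if (feed_input(presses[i]))
            puts("Konami Code entered: granting power-ups");
    return 0;
}
```

A sliding-window comparison stays correct even after stray or repeated presses, which a naive reset-on-mismatch matcher can get wrong for sequences with repeated prefixes; in Contra, entering the code famously granted the player 30 lives.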
Film production

In 2006, Konami began producing films based on its video game franchises. Konami produced the Silent Hill film (released in 2006) and announced that it would produce a Metal Gear Solid film. On 4 December 2020, Deadline reported that Oscar Isaac would star as Solid Snake in the adaptation in development at Sony Pictures, with Jordan Vogt-Roberts on board to direct.

Personal computing

In 2020, Konami launched a PC gaming brand in Japan known as Arespear, which includes desktop computers, keyboards, and headsets (the last of which were designed in collaboration with Konami's Bemani musicians). The computers have been used in newer Bemani arcade cabinets as a showcase for their capabilities.

Controversies

Silent Hills, set to be the ninth installment in the Silent Hill video game series, was abruptly cancelled in April 2015 without explanation, despite the critical acclaim and success of its playable teaser, P.T. Hours after the announcement, Konami delisted itself from the New York Stock Exchange. Game co-director and writer Guillermo del Toro publicly criticized the cancellation as not making any sense and questioned what he described as a "scorched earth" approach to removing the trailer. Due to the experience, del Toro stated that he would never work on another video game.

In 2015, Konami Digital Entertainment CEO Hideki Hayakawa announced that, with few exceptions, Konami would stop making console games and instead focus on the mobile gaming platform. The decision was heavily criticized by the video gaming community. Konami UK community manager Graham Day soon afterwards pushed back against reports that Konami would cease AAA game production, stating that he believed the root of the problem to be either a mistranslation or a misinterpretation of Hayakawa's remarks.

On 3 March 2015, Konami announced that it would be shifting focus away from individual studios, notably Kojima Productions. Internal sources claimed the restructuring was due to a clash between Hideo Kojima and Konami. References to Kojima were soon stripped from marketing material, and Kojima's position as an executive vice president of Konami Digital Entertainment was removed from the company's official listing of executives. Later that year, Konami's legal department barred Kojima from accepting the award for Best Action-Adventure for his work on Metal Gear Solid V: The Phantom Pain at The Game Awards 2015. When this was announced during the event, the audience booed in disapproval of Konami's actions, and host Geoff Keighley expressed his disappointment. After actor Kiefer Sutherland accepted the award in Kojima's stead, a choir sang "Quiet's Theme" from The Phantom Pain as a tribute to the absent Kojima. Kojima left Konami several days afterwards, re-opening Kojima Productions as an independent company.

In August 2015, The Nikkei criticized Konami for its unethical treatment of employees. In June 2017, The Nikkei further reported on Konami's continued clashes with Kojima Productions, including obstructing the studio's application for health insurance, as well as Konami's efforts to make it difficult for former employees to find future jobs; they are notably forbidden from mentioning their work with Konami on their résumés. Konami also started filing complaints against other game companies that hired ex-Konami employees, leading an unspecified major game company to warn its staff against doing so. A former Konami employee stated: "If an ex-[Konami employee] is interviewed by the media, the company will send that person a letter through a legal representative, in some cases indicating that Konami is willing to take them to court"; the company also pressured an ex-employee into closing their new business.
========================================