https://en.wikipedia.org/wiki/Httpd
HTTPd is a software program that usually runs in the background, as a process, and plays the role of a server in a client–server model using the HTTP and/or HTTPS network protocol(s). The process waits for the incoming client requests and for each request it answers by replying with requested information, including the sending of the requested web resource, or with an HTTP error message. HTTPd stands for Hypertext Transfer Protocol daemon. It usually is the main software part of an HTTP server better known as a web server. Some commonly used implementations are: Apache HTTP Server BusyBox httpd Lighttpd HTTP server Nginx HTTP and reverse proxy server OpenBSD's httpd (since OpenBSD 5.6) See also HTTP server Web server Comparison of web server software References External links Example of an HTTPd: httpd - Apache Hypertext Transfer Protocol Server
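The server role described above (a long-running process answering each HTTP request with the requested resource or an error message) can be illustrated with a minimal sketch using Python's standard library; the port and handler below are arbitrary choices for illustration, not taken from any of the httpd implementations listed.

# Minimal sketch of an HTTPd-style server using only Python's standard library.
# It waits for incoming client requests and answers each one with the requested
# file from the current directory, or with an HTTP error status such as 404.
from http.server import HTTPServer, SimpleHTTPRequestHandler

def run(port=8080):
    server = HTTPServer(("0.0.0.0", port), SimpleHTTPRequestHandler)
    print(f"httpd-like server listening on port {port} ...")
    server.serve_forever()  # loop forever, handling one request after another

if __name__ == "__main__":
    run()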
https://en.wikipedia.org/wiki/List%20of%20programs%20broadcast%20by%20NBC
This is a list of programs currently broadcast by the American television network NBC. Current programming Dramas Law & Order (1990–2010; 2022) Law & Order: Special Victims Unit (1999) Chicago Fire (2012) Chicago P.D. (2014) Chicago Med (2015) Transplant (2020) Law & Order: Organized Crime (2021) La Brea (2021) Quantum Leap (2022) Magnum P.I. (2023) (moved from CBS) The Irrational (2023) Found (2023) Comedies Lopez vs Lopez (2022) Night Court (2023) Docuseries LA Fire & Rescue (2023) Reality/non-scripted America's Got Talent (2006) The Voice (2011) American Ninja Warrior (2012) Baking It (2021; also on Peacock) America's Got Talent: All-Stars (2023) Hot Wheels: Ultimate Challenge (2023) Game shows Weakest Link (2001–02; 2020) The Wall (2016) Capital One College Bowl (1963–70, previously aired on NBC Radio Network from 1953–55; 2021) That's My Jam (2021) Password (2022) Award shows The Golden Globe Awards (1958–68; 1974) People's Choice Awards (2021) Talk shows 3rd Hour Today (2018) Today with Hoda & Jenna (2019) Late-night shows Saturday Night Live (1975) The Tonight Show Starring Jimmy Fallon (2014) Late Night with Seth Meyers (2014) News Meet the Press (1947) Today (1952) NBC Nightly News (1970) Saturday Today (1992) Dateline NBC (1992) Early Today (1999) 1st Look (2008) Sunday Today with Willie Geist (2016) NBC News Daily (2022) Specials Macy's Thanksgiving Day Parade (experimental local broadcasts in 1939, then again starting from 1945; broadcast nationally since 1953) How the Grinch Stole Christmas! (acquired the broadcast rights in 2015) The National Dog Show (since 2002) Macy's Fourth of July Spectacular (earliest records are from 1996) Miss America Competition (livestreamed since 2019) Shrek the Halls (acquired the broadcast rights in 2023) Saturday morning A New Leaf (2019) Consumer 101 (2018) Earth Odyssey with Dylan Dreyer (2019) One Team: The Power of Sports (2021) Roots Less Traveled (2020) Vets Saving Pets (2018) Wild Child (2021) Mutual of Omaha's Wild Kingdom: Protecting the Wild (2023) Sports Olympics on NBC, which includes: Summer Olympic Games Winter Olympic Games NFL on NBC, which includes: Football Night in America Sunday Night Football NFL Kickoff Game The NFL on Thanksgiving Day Select playoff games The Super Bowl (every four years) Golf Channel on NBC, which includes: The Open Championship The Players Championship The Ryder Cup Presidents Cup Scottish Open Senior PGA Championship Thoroughbred Racing on NBC, which includes the following races: Kentucky Derby Preakness Stakes Belmont Stakes Breeders' Cup Classic Santa Anita Derby College Football on NBC Sports and high school football, including: Notre Dame Football on NBC Big Ten football The Bayou Classic The All-American Bowl US Olympic Trials Tennis on NBC, which includes the French Open Boxing on NBC, which includes Premier Boxing Champions bouts World Athletics Championships Select Diamond League meetings, which includes the Prefontaine Classic USA
https://en.wikipedia.org/wiki/List%20of%20programs%20broadcast%20by%20CBS
This is a list of programs currently or formerly broadcast by CBS. Current programming Note: Titles are listed according to their year of debut on the network in parentheses. Dramas NCIS (2003) Blue Bloods (2010) S.W.A.T. (2017) FBI (2018) FBI: Most Wanted (2020) The Equalizer (2021) NCIS: Hawaiʻi (2021) FBI: International (2021) CSI: Vegas (2021) So Help Me Todd (2022) Fire Country (2022) Comedies Young Sheldon (2017) The Neighborhood (2018) Bob Hearts Abishola (2019) Ghosts (2021) Reality/non-scripted Survivor (2000) Big Brother (2000) The Amazing Race (2001) Celebrity Big Brother (2018) Tough as Nails (2020) The Greatest @AtHome Videos (2020) Secret Celebrity Renovation (2021) The Challenge: USA (2022) Buddy Games (2023) Awards shows Grammy Awards (1973) Tony Awards (1978) Kennedy Center Honors (1978) CMT Music Awards (2022) Game shows The Price Is Right (1972) Let's Make a Deal (2009) Lingo (2023) Superfan (2023) Lotería Loca (2023) Raid the Cage (2023) Talk shows The Talk (2010) Late night shows The Late Show with Stephen Colbert (2015) Comics Unleashed (2023) Specials Rudolph the Red-Nosed Reindeer (1972) Frosty the Snowman (1969) Frosty Returns (1992) The Story of Santa Claus (1996) Robbie the Reindeer in Hooves of Fire (2002) News CBS Evening News (1948) Face the Nation (1954) 60 Minutes (1968) CBS News Sunday Morning (1979) CBS Morning News (1982) 48 Hours (1988) CBS Overnight News (2015) CBS Reports (2017) CBS Mornings (2021) Film presentations CBS Sunday Movie (1989–2006; 2007–15; 2020) Saturday morning Lucky Dog (2013) Recipe Rehab (2013–15; 2023) The Henry Ford Innovation Nation (2014) Hope In the Wild (2018) Mission Unstoppable (2019) Tails of Valor (2019; 2023) Soap operas The Young and the Restless (1973) The Bold and the Beautiful (1987) Sports NFL on CBS (1956) AFC games (and inter-conference games when the AFC team is the road team) The AFC Championship Game The Super Bowl (every four years) The NFL Today (1961) PGA Tour on CBS (1970) Masters Tournament (shared with ESPN) PGA Championship (shared with ESPN) PGA Tour (shared with NBC Sports) College Basketball on CBS (1981) Select weekend regular season games CBS Sports Classic Missouri Valley Conference men's basketball tournament championship Mountain West Conference men's basketball tournament championship Atlantic 10 men's basketball tournament championship Big Ten Conference men's basketball tournament semifinals and Championship Big Ten Conference women's basketball tournament Championship College Football (1996) Southeastern Conference and Big Ten Conference Football, including: Saturday Game of the Week The SEC Championship Game (until 2023) The Big Ten Championship Game (in 2024 and 2028) The Sun Bowl The Army-Navy Game Formula E (2021–present) New York City ePrix, as well as 1 additional race NCAA March Madness (2011) Selection Sunday NCAA Division I men's basketball tournament (shared with Warner Bros. Discovery Sports) Final Four and National Cham
https://en.wikipedia.org/wiki/List%20of%20programs%20broadcast%20by%20Fox
The Fox Broadcasting Company is an American commercial free-to-air television network owned and operated by the Fox Corporation. Though it was officially launched on October 9, 1986, Fox began its official primetime setup on April 5, 1987, with the series Married... with Children and The Tracey Ullman Show airing that night. Overview As of October 2012, Fox maintains 19.5 hours of network programming per week. The animated comedy series The Simpsons is one of Fox's most popular shows, becoming the network's first series to rank among the top 30-highest-rated shows of a television season after its original debut, and is the longest running sitcom, as well as animated series, of all time, contributing to the channel's success. According to Lanford Beard of Entertainment Weekly, "The Simpsons have transformed Fox from a small, ignored network into a global network that cannot be ignored. The science fiction television series The X-Files also contributed to the network's success, which led to two spin-offs Millennium and The Lone Gunmen. Fox began airing in high-definition on September 12, 2004, with a series of National Football League (NFL) American football games. Fox had a programming block for children titled Fox Kids, which ran from September 8, 1990 to September 7, 2002. Unlike the "three larger networks", which aired primetime programming from 8 to 11 p.m. (EST) Mondays to Saturdays and 7 to 11 p.m. (EST) Sundays, Fox has traditionally avoided programming in the 10 p.m. (EST) time interval, leaving that hour to affiliates to program locally (primarily with local newscasts). On April 21, 2012, Fox celebrated its 25th-anniversary, with a two-hour television special featuring people related to Fox and its shows. It presented Fox's programs 24, American Idol, Cops, Family Guy, Married... with Children, The Simpsons, and The X-Files, among other programs. The network's adult cartoons are listed under the Animation Domination banner, which is a Sunday night programming block. Fox is a full member of the North American Broadcasters Association (NABA) and the National Association of Broadcasters (NAB). Current programming Dramas 9-1-1: Lone Star (2020) The Cleaning Lady (2022) Alert: Missing Persons Unit (2023) Accused (2023) Comedy Animal Control (2023) Animation The Simpsons (1989) Family Guy (1999–2002; 2005) Bob's Burgers (2011) The Great North (2021) HouseBroken (2021) Krapopolis (2023) Unscripted Reality Hell's Kitchen (2005) Kitchen Nightmares (2007–14; 2023) MasterChef (2010) MasterChef Junior (2013) The Masked Singer (2019) Lego Masters (2020) Crime Scene Kitchen (2021) Next Level Chef (2022) Special Forces: World's Toughest Test (2023) Farmer Wants a Wife (2023; moved from The CW) Gordon Ramsay's Food Stars (2023) Stars on Mars (2023) Game shows Don't Forget the Lyrics! (2007–09; 2022) Beat Shazam (2017) I Can See Your Voice (2020) Name That Tune (2021) Snake Oil (2023) Awards shows/beauty pageants Teen Choice Awards (1999)
https://en.wikipedia.org/wiki/List%20of%20programs%20broadcast%20by%20UPN
The following is a list of programs broadcast by UPN. Some programs were carried over to The CW, a network formed through a partnership between WB parent company Time Warner and UPN corporate parent CBS Corporation, in September 2006 following the closure of The WB. Titles are listed in alphabetical order followed by the year of debut in parentheses. Dramas Comedies Adult Animation Dilbert (1999–00) Game Over (2004) Gary & Mike (2001) Home Movies (1999; moved to Adult Swim) Reality/other Children's programming UPN Kids Originals Beetleborgs (1998–99) Bureau of Alien Detectors (1996–97) Fantastic Four (1998–99) Iron Man (1998–99) Jumanji (1996–98) Spider-Man (1998–99) Spider-Man and His Amazing Friends (1998–99) Sweet Valley High (1997–98) The Incredible Hulk (1996–99) X-Men (1998–99) Disney's One Too Originals Buzz Lightyear of Star Command (2000–03) Disney's Doug (1999–2001) Hercules (1999–2000) The Mouse and the Monster (1996–97) Recess (1999–2003) Sabrina: The Animated Series (1999–2002) The Legend of Tarzan (2001–03) The Weekenders (2001–02) Pepper Ann (2000–01) Acquired (Both Blocks) Breaker High (1997–98) Digimon (2002–03) Space Strikers (1995–96) Teknoman (1995–96) See also List of programs broadcast by The CW References UPN
https://en.wikipedia.org/wiki/1946%20in%20television
The year 1946 in television involved some significant events. Below is a list of television-related events during 1946. The number of television programming was increasing after World War II. Events February 4 – RCA demonstrates an all-electronic color television system. February 18 – The first Washington, D.C. – New York City telecast through AT&T corporation's coaxial cable, in which General Dwight Eisenhower places a wreath at the base of the statue in the Lincoln Memorial and others make brief speeches, is termed a success by engineers, although Time magazine calls it "as blurred as an early Chaplin movie." February 25 – The prewar U.S. 18-channel VHF allocation is officially ended in favor of a new 13-channel VHF allocation due to the appropriation of some frequencies by the military and the relocation of FM radio. Only five of the old channels are the same as new channels in terms of frequency and none have the same number as before. April 22 – CBS transmits a Technicolor movie short and color slides by coaxial cable from Manhattan to Washington (332 kilometers) and return. June 7 – The BBC Television Service begins broadcasting again for the first time since 1939. The first words heard are "Good afternoon everybody. How are you? Do you remember me, Jasmine Bligh?". Twenty minutes later, the Mickey Mouse cartoon Mickey's Gala Premiere, last programme transmitted seven years earlier at the start of World War II, is reshown. June 19 – The first televised heavyweight boxing title fight between Joe Louis and Billy Conn is broadcast from Yankee Stadium. The fight is seen by 141,000 people, the largest television audience to see a boxing match to this date. July 7 – Broadcasting of the BBC's children's programme For The Children is resumed, one of the few pre-war programmes to resume after reintroduction of the service. August 4 – Children's puppet "Muffin the Mule" debuts in an episode of the series For the Children. He is so popular he is given his own show later that same year. September 6 – Chicago's WBKB-TV (now WBBM-TV) commences broadcasting as the first U.S. television station outside the Eastern Time Zone. September 15 – DuMont Television Network begins broadcasting regularly in the United States. October 2 – The first television network soap opera, Faraway Hill, is broadcast by DuMont. October 22 – Telecrime, the first television crime series from the 1930s, is resumed by the BBC, retitled Telecrimes. December 24 – The first Christmas church service is telecast, Grace Episcopal Church in New York, on WABD. Tokyo Tsushin Kogyo founds a company, which would later become Sony. Zoomar introduces the first professional zoom lens for television cameras. The first postwar television sets are released by the companies RCA, DuMont, Crosley, and Belmont. Debuts January 4 - You Be the Judge premieres on CBS May 9 – The first regularly scheduled American variety show, Hour Glass, premieres on NBC (1946–1947). May 23 - Let's Play Reporter
https://en.wikipedia.org/wiki/THX
THX is a suite of high fidelity audiovisual reproduction standards for movie theaters, screening rooms, home theaters, computer speakers, video game consoles, car audio systems, and video games. The THX trailer that precedes movies is based on the Deep Note, with a distinctive glissando up from a rumbling low pitch. THX was developed by Tomlinson Holman at George Lucas's company Lucasfilm in 1983 to ensure that the soundtrack for the third Star Wars film, Return of the Jedi, would be accurately reproduced in the best venues. THX was named after Holman's initials, with the "X" standing for "crossover" or "experiment". The name is also an homage to Lucas's first film, THX 1138 (1971). Deep Note was created by Holman's co-worker James A. Moorer. THX Ltd. was founded on May 20, 1983 by Lucas and Holman, and headquartered in San Francisco, California. THX is a quality assurance system, not a recording technology. All sound recording formats, whether digital (Dolby Digital, DTS, SDDS) or analog (Dolby Stereo, Ultra Stereo), can be reproduced in a THX system. THX-certified theaters provide a high-quality, predictable playback environment to ensure that any film soundtrack mixed in THX will sound as near as possible to the intentions of the mixing engineer. THX also provides certified theaters with a special crossover circuit whose use is part of the standard. Certification of an auditorium entails specific acoustic and other technical requirements; architectural requirements include a floating floor, baffled and acoustically treated walls, non-parallel walls (to reduce standing waves), a perforated screen (to allow center channel continuity), and NC30 rating for background noise ("ensures noise from air conditioning units and projection equipment does not mask the subtle effects in a movie's soundtrack"). On June 12, 2002, THX was spun off as a separate company from Lucasfilm and sold to sound card manufacturer Creative Technology Limited, which held a 60% share of the company. Under Creative Technology, the company developed several further innovations, such as the first THX-certified audio card for computers, the Sound Blaster Audigy 2. In 2016, THX was acquired by video game hardware company Razer Inc. History In 1982, George Lucas and his company, Lucasfilm, were preparing to release Return of the Jedi, the third film in the Star Wars trilogy. The C Building had been constructed in San Rafael, California, where Industrial Light & Magic and much of Lucasfilm occupied a series of work bays and office complexes along Kerner Boulevard. The C Building boasted a shooting stage, editing facilities, computer server rooms, and a large theater as a state-of-the-art sound mixing room. That year, Lucas and his team were working on Return of the Jedi when a major situation began to arise. They brought their film to test in a commercial theater to find out that much of the audio detail and quality they mixed for countless hours in the studio was simply missin
https://en.wikipedia.org/wiki/BITNET%20Relay
BITNET Relay, also known as the Inter Chat Relay Network, was a chat network setup over BITNET nodes. It predated Internet Relay Chat and other online chat systems. The program that made the network possible was called "Relay" and was developed by Jeff Kell of the University of Tennessee at Chattanooga in 1985 using the REXX programming language. This system drew its name from "relay race" which shares a comparable behavior, where messages travel hop-by-hop along the network of Relay servers until they reached their destination. Messages sent within the United States would take a few seconds to reach their destinations, but communication times varied in other countries or internationally. If one or more network links were down, BITNET would store and forward the messages when the network links recovered, minutes or even hours later. Background Before BITNET Relay was implemented, any form of communication over BITNET required identifying the remote user and host. Relay ran on a special ID using several BITNET hosts. To use it, a message was sent to a user ID called RELAY. The Relay program running on that user ID would then provide multi-user chat functions, primarily in the form of "channels" (chat rooms). The message could contain either a command for Relay (preceded by the popular "/" slash character command prefix, still in use today), or a message at the remote host (typically a mainframe computer). Computers connected to BITNET were generally located at universities and government agencies, due to limited access to computer network bandwidth. It was not uncommon for a university's entire network connection to run over a single leased telephone line or even a 4800 baud dial-up connection. Thus using scarce computing and network resources for "frivolous" purposes, such as chat, was often discouraged. Popularity One of the reasons Relay gained acceptance was that its system of peer servers decreased the network bandwidth consumed by group chat, due to no longer having to send multiple copies of the same message individually to each server. Because of this efficiency and the limited bandwidth at the time, users were often not allowed to use or develop alternate chat systems. Experimental chats like Galaxy Network and VM/Shell were asked to shut down before they achieved noteworthy success. Bitnet Relay gained popularity in the late 1980s when Valdis Kletnieks at Virginia Tech created a Pascal version that consumed far less CPU time, and again in the early 1990s when Smart Relay improved handling of message delivery. Though Jeff Kell himself had made observations about the possible demise of BITNET Relay, only TCP/IP and the Internet brought about the end of BITNET and Relay. Jarkko Oikarinen, the creator of Internet Relay Chat, says that he was inspired by BITNET Relay Usage The following is an example of a session: /SIGNUP robert harper * Thank you for signing up, robert harper. * Now use the /SIGNON <nickname> command to * estab
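The hop-by-hop, store-and-forward delivery described above can be sketched as a toy model; the node names, topology, and queueing policy below are invented for illustration and are not taken from the original REXX Relay code.

# Toy illustration (not the original REXX Relay) of hop-by-hop, store-and-forward
# message delivery over a network of relay servers: if the next link is down,
# the message is queued and forwarded once the link recovers.
from collections import deque

# Hypothetical static routing table: (current node, destination) -> next hop.
NEXT_HOP = {("NODE_A", "NODE_C"): "NODE_B"}

class RelayNode:
    def __init__(self, name):
        self.name = name
        self.links_up = set()        # neighbours currently reachable
        self.pending = deque()       # store-and-forward queue for down links

    def send(self, dest, text, network):
        hop = NEXT_HOP.get((self.name, dest), dest)
        if hop in self.links_up:
            network[hop].receive(dest, text, network)
        else:
            self.pending.append((dest, text))    # hold until the link recovers

    def receive(self, dest, text, network):
        if dest == self.name:
            print(f"{self.name}: delivered -> {text}")
        else:
            self.send(dest, text, network)       # relay one hop closer

    def link_restored(self, neighbour, network):
        self.links_up.add(neighbour)
        retry, self.pending = self.pending, deque()
        for dest, text in retry:
            self.send(dest, text, network)       # re-queues anything still unreachable

# Example: A -> C must pass through B; the A-B link starts down.
net = {n: RelayNode(n) for n in ("NODE_A", "NODE_B", "NODE_C")}
net["NODE_B"].links_up.add("NODE_C")
net["NODE_A"].send("NODE_C", "hello", net)       # queued at A
net["NODE_A"].link_restored("NODE_B", net)       # now delivered via B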
https://en.wikipedia.org/wiki/Happy%20Days
Happy Days is an American television sitcom that aired first-run on the ABC network from January 15, 1974, to July 19, 1984, with a total of 255 half-hour episodes spanning 11 seasons. Created by Garry Marshall, it was one of the most successful series of the 1970s. The series presented an idealized vision of life in the 1950s and early 1960s Midwestern United States, and it starred Ron Howard as Richie Cunningham, Henry Winkler as his friend Fonzie, and Tom Bosley and Marion Ross as Richie's parents, Howard and Marion Cunningham. Although it opened to mixed reviews from critics, Happy Days became successful and popular over time. The series began as an unsold pilot starring Howard, Ross and Anson Williams, which aired in 1972 as a segment titled "Love and the Television Set" (later retitled "Love and the Happy Days" for syndication) on ABC's anthology show Love, American Style. Based on the pilot, director George Lucas cast Howard as the lead in his 1973 film American Graffiti, causing ABC to take a renewed interest in the pilot. The first two seasons of Happy Days focused on the experiences and dilemmas of "innocent teenager" Richie Cunningham, his family, and his high school friends, attempting to "honestly depict a wistful look back at adolescence". Initially a moderate success, the series' ratings began to fall during its second season, causing Marshall to retool it. The new format emphasized broad comedy and spotlighted the previously minor character of Fonzie, a "cool" biker and high school dropout. Following these changes, Happy Days became the number-one program in television in 1976–1977, Fonzie became one of the most merchandised characters of the 1970s, and Henry Winkler became a major star. The series also spawned a number of spin-offs, including Laverne & Shirley and Mork & Mindy. Plot Set in Milwaukee, Wisconsin, during the 1950s, the series revolves around teenager Richie Cunningham and his family: his father, Howard, who owns a hardware store; traditional homemaker and mother, Marion; younger sister Joanie Cunningham; Richie's older brother Chuck (briefly in seasons 1 and 2 only, disappearing from storylines afterward); and high school dropout, leather jacket clad greaser, mechanic and suave ladies' man Fonzie, who would eventually become Richie's best friend and the Cunninghams' over-the-garage tenant. The earliest episodes revolve around Richie and his friends, Potsie Weber and Ralph Malph, with Fonzie as a secondary character. However, as the series progressed, Fonzie proved to be a favorite with viewers, and soon more story lines were written to reflect his growing popularity, Winkler was top billed in the opening credits alongside Howard by season 3. Fonzie befriended Richie and the Cunningham family and, when Richie left the series for military service, Fonzie became the central figure of the show, with Winkler receiving sole top billing. In later seasons, other characters were introduced including Fonzie's young cousin
https://en.wikipedia.org/wiki/PNP
PNP may refer to: Science and technology Purine nucleoside phosphorylase, an enzyme 4-Nitrophenol or p-nitrophenol PNP transistor Theoretical computer science P versus NP problem Computing Plug and play, not requiring configuration Legacy Plug and Play or Legacy PnP Perspective-n-Point in computer vision Organizations New Progressive Party (Puerto Rico) Partido Nashonal di Pueblo, a Curaçaoan political party National Patriots' Party, a Burkinabé political party National Popular Party (Romania) Parti national populaire, a 1970s political party in Quebec, Canada People's New Party, Japan Peoples National Party (disambiguation) Peruvian National Police (Policía Nacional del Perú) Peruvian Nationalist Party Philippine National Police Princess Naoko Planning, Naoko Takeuchi's studio Progressive National Party (disambiguation) Princeton Newport Partners, a hedge fund Other uses Party and play (PnP), sex with drug use See also P&P (disambiguation) Universal Plug and Play (UPnP), networking protocols
https://en.wikipedia.org/wiki/Optical%20mouse
An optical mouse is a computer mouse which uses a light source, typically a light-emitting diode (LED), and a light detector, such as an array of photodiodes, to detect movement relative to a surface. Variations of the optical mouse have largely replaced the older mechanical mouse design, which uses moving parts to sense motion. The earliest optical mice detected movement on pre-printed mousepad surfaces. Modern optical mice work on most opaque diffusely reflective surfaces like paper, but most of them do not work properly on specularly reflective surfaces like polished stone or transparent surfaces like glass. Optical mice that use dark field illumination can function reliably even on such surfaces. Mechanical mice Though not commonly referred to as optical mice, nearly all mechanical mice tracked movement using LEDs and photodiodes to detect when beams of infrared light did and didn't pass through holes in a pair of incremental rotary encoder wheels (one for left/right, another for forward/back), driven by a rubberized ball. Thus, the primary distinction of “optical mice” is not their use of optics, but their complete lack of moving parts to track mouse movement, instead employing an entirely solid-state system. Early optical mice The first two optical mice, first demonstrated by two independent inventors in December 1980, had different basic designs: One of these, invented by Steve Kirsch of MIT and Mouse Systems Corporation, used an infrared LED and a four-quadrant infrared sensor to detect grid lines printed with infrared absorbing ink on a special metallic surface. Predictive algorithms in the CPU of the mouse calculated the speed and direction over the grid. The other type, invented by Richard F. Lyon of Xerox, used a 16-pixel visible-light image sensor with integrated motion detection on the same ntype (5µm) MOS integrated circuit chip, and tracked the motion of light dots in a dark field of a printed paper or similar mouse pad. The Kirsch and Lyon mouse types had very different behaviors, as the Kirsch mouse used an x-y coordinate system embedded in the pad, and would not work correctly when the pad was rotated, while the Lyon mouse used the x-y coordinate system of the mouse body, as mechanical mice do. The optical mouse ultimately sold with the Xerox STAR office computer used an inverted sensor chip packaging approach patented by Lisa M. Williams and Robert S. Cherry of the Xerox Microelectronics Center. The Mouse Systems (Kirsch) design was commercialised and sold in PC compatible form by the company itself alongside variants rebranded for OEM use with Sun Microsystems workstations and by Data General. Modern optical mice Optical sensor Modern surface-independent optical mice work by using an optoelectronic sensor (essentially, a tiny low-resolution video camera) to take successive images of the surface on which the mouse operates. As computing power grew cheaper, it became possible to embed more powerful special-purpos
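A minimal sketch of the idea behind the modern sensor described above: compare two successive low-resolution images of the surface and find the shift that best aligns them. NumPy is assumed, the frame size and search window are invented, and real sensors perform this correlation in dedicated hardware rather than in software.

# Sketch: estimate the integer (dx, dy) motion between two successive surface
# images by trying small shifts and keeping the one with the best overlap score,
# roughly what an optical mouse sensor does in hardware.
import numpy as np

def estimate_shift(prev, curr, max_shift=4):
    h, w = prev.shape
    best, best_score = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # Overlapping regions of the two frames under the candidate shift.
            a = prev[max(0, -dy):h + min(0, -dy), max(0, -dx):w + min(0, -dx)]
            b = curr[max(0, dy):h + min(0, dy), max(0, dx):w + min(0, dx)]
            score = np.sum(a * b)
            if score > best_score:
                best, best_score = (dx, dy), score
    return best

# Example: a single bright surface feature that moved 1 pixel right and 2 down.
prev = np.zeros((16, 16)); prev[5, 5] = 1.0
curr = np.zeros((16, 16)); curr[7, 6] = 1.0
print(estimate_shift(prev, curr))   # -> (1, 2)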
https://en.wikipedia.org/wiki/Autovon
The Automatic Voice Network (AUTOVON, military designation 490-L) was a worldwide American military telephone system. The system was built starting in 1963, based on the Army's existing Switch Communications Automated Network (SCAN) system. In June 1966 the Air Defense Command voice network was cut over to the new service. In 1969, AUTOVON switching centers opened in the United Kingdom, and later in other European countries, Asia, the Middle East, and Panama. It was a major part of the Defense Communications System (DCS), providing non-secure switched voice services. The system was replaced in the early 1990s by the Defense Switched Network. Circuits AUTOVON used a combination of its own constructed circuits and other lines operated by AT&T Corporation and smaller independent telephone companies, connected by high-speed switching centers produced by Automatic Electric Company to exchanges located far from other civilian or military targets. In the US the cables were predominantly L-carrier coaxial multiplex built by AT&T, who also used them to carry about one third of all civilian long-distance calls, as their capacity was much higher than the military needed. Although unused, some of the cables remain today and the routes are visible on satellite photos. The system's traffic was transported over many media other than underground cable, including microwave links, open wire and, near the end of the system's life, fiber optic. Contrary to stories of underground concrete cable ducts, most of the cable was directly buried without added concrete, relying instead on the natural protection of soil. In some areas, however, cables from the AUTOSEVOCOM network were laid in parallel. These were often concrete-encased when the traffic they were carrying was not encrypted. Most of the cable repeater huts have been sold to private interests, to round out existing parcels, or as possible build-to-suit tower sites, etc. AT&T has been filling the small underground portion before sale, unless they sell to a major company. The junctions for AUTOVON are also being sold into private ownership, with a few exceptions. Most are stripped of all the equipment, although the AUTOVON junction in Mounds, Oklahoma was sold with all the old equipment in place. The telephone switches used were initially a 4 wire version of Number Five Crossbar Switching System, replaced in the early 1970s after the more versatile 1ESS switch had shown its reliability. (Note that this paragraph disagrees in part with an earlier paragraph regarding the manufacturer of switching equipment. In general, switches in Bell operating territories were made by AT&T, and most others by Automatic Electric; see the Autovon switch location map cited in External Links, below, for a snapshot listing of locations (some of the names in the map have been anonymized) and equipment manufacturer.) Multilevel precedence and preemption The AUTOVON system provided a facility for placing calls with multileve
https://en.wikipedia.org/wiki/Chinese%20social%20relations
Chinese social relations are typified by a reciprocal social network. Often social obligations within the network are characterized in familial terms. The individual link within the social network is known by guanxi (关系/關係) and the feeling within the link is known by the term ganqing (感情). An important concept within Chinese social relations is the concept of face, as in many other Asian cultures. A Buddhist-related concept is yuanfen (缘分/緣分). As articulated in the sociological works of leading Chinese academic Fei Xiaotong, the Chinese—in contrast to other societies—tend to see social relations in terms of networks rather than boxes. Hence, people are perceived as being "near" or "far" rather than "in" or "out". See also Culture of China Chinese tea culture Kowtow Red envelope Chinese marriage Sifu Chinese culture Culture of Hong Kong Taiwanese culture Society of China Reputation management Information society Social influence Social information processing Social networks Social status
https://en.wikipedia.org/wiki/Autoconf
GNU Autoconf is a tool for producing configure scripts for building, installing, and packaging software on computer systems where a Bourne shell is available. Autoconf is agnostic about the programming languages used, but it is often used for projects using C, C++, Fortran, Fortran 77, Erlang, or Objective-C. A configure script configures a software package for installation on a particular target system. After running a series of tests on the target system, the configure script generates header files and a makefile from templates, thus customizing the software package for the target system. Together with Automake and Libtool, Autoconf forms the GNU Build System, which comprises several other tools, notably Autoheader. Usage overview The developer specifies the desired behaviour of the configure script by writing a list of instructions in the GNU m4 language in a file called "configure.ac". A library of pre-defined m4 macros is available to describe common configure script instructions. Autoconf transforms the instructions in "configure.ac" into a portable configure script. The system that will be doing the building need not have autoconf installed: autoconf is needed only to build the configure script, that is usually shipped with the software. History Autoconf was begun in the summer of 1991 by David Mackenzie to support his work at the Free Software Foundation. In the subsequent years it grew to include enhancements from a variety of authors and became the most widely used build configuration system for writing portable free or open-source software. Approach Autoconf is similar to the Metaconfig package used by Perl. The imake system formerly used by the X Window System (up to X11R6.9) is closely related, but has a different philosophy. The Autoconf approach to portability is to test for features, not for versions. For example, the native C compiler on SunOS 4 did not support ISO C. However, it is possible for the user or administrator to have installed an ISO C-compliant compiler. A pure version-based approach would not detect the presence of the ISO C compiler, but a feature-testing approach would be able to discover the ISO C compiler the user had installed. The rationale of this approach is to gain the following advantages: the configure script can get reasonable results on newer or unknown systems it allows administrators to customize their machines and have the configure script take advantage of the customizations there is no need to keep track of minute details of versions, patch numbers, etc., to figure out whether a particular feature is supported or not Autoconf provides extensive documentation around the non-portability of many POSIX shell constructs to older shells and bugs therein. It also provides M4SH, a macro-based replacement for shell syntax. Criticism There is some criticism that states that Autoconf uses dated technologies, has a lot of legacy restrictions, and complicates simple scenarios unnecessaril
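Autoconf itself is driven by m4 macros in configure.ac, but the "test for features, not versions" idea can be shown in any language; the sketch below is a Python analogy of that philosophy, not Autoconf code.

# Analogy only (not Autoconf/m4): probe for the feature itself rather than
# comparing version numbers, and substitute a fallback when it is absent.
import math

if hasattr(math, "log2"):             # feature test: does log2 exist here?
    log2 = math.log2
else:                                  # fallback, like substituting a shim
    def log2(x):
        return math.log(x) / math.log(2.0)

print(log2(8))                         # 3.0 either way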
https://en.wikipedia.org/wiki/Metropolis%20light%20transport
Metropolis light transport (MLT) is a global illumination application of a variant of the Monte Carlo method called the Metropolis–Hastings algorithm to the rendering equation for generating images from detailed physical descriptions of three-dimensional scenes. The procedure constructs paths from the eye to a light source using bidirectional path tracing, then constructs slight modifications to the path. Some careful statistical calculation (the Metropolis algorithm) is used to compute the appropriate distribution of brightness over the image. This procedure has the advantage, relative to bidirectional path tracing, that once a path has been found from light to eye, the algorithm can then explore nearby paths; thus difficult-to-find light paths can be explored more thoroughly with the same number of simulated photons. In short, the algorithm generates a path and stores the path's 'nodes' in a list. It can then modify the path by adding extra nodes and creating a new light path. While creating this new path, the algorithm decides how many new 'nodes' to add and whether or not these new nodes will actually create a new path. Metropolis light transport is an unbiased method that, in some cases (but not always), converges to a solution of the rendering equation faster than other unbiased algorithms such as path tracing or bidirectional path tracing. Energy Redistribution Path Tracing (ERPT) uses Metropolis sampling-like mutation strategies instead of an intermediate probability distribution step. See also Nicholas Metropolis – The physicist after whom the algorithm is named Renderers using MLT: Arion – A commercial unbiased renderer based on path tracing and providing an MLT sampler Iray (external link) – An unbiased renderer that has an option for MLT Kerkythea – A free unbiased 3D renderer that uses MLT LuxRender – An open source unbiased renderer that uses MLT Mitsuba Renderer (web site) – A research-oriented renderer which implements several MLT variants Octane Render – A commercial unbiased renderer that uses MLT Unicorn Render (web site) – A commercial unbiased renderer providing an MLT sampler and Caustic sampler References External links Metropolis project at Stanford Homepage of the Mitsuba renderer LuxRender - an open source render engine that supports MLT Kerkythea 2008 - a freeware rendering system that uses MLT A Practical Introduction to Metropolis Light Transport Unbiased physically based rendering on the GPU Monte Carlo methods Global illumination algorithms
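The accept/reject step at the heart of MLT is the standard Metropolis–Hastings rule; the sketch below applies that rule to a one-dimensional toy "brightness" function rather than to real light paths, so the target function, mutation width, and sample count are all illustrative assumptions.

# Toy Metropolis–Hastings sampler: the same accept/reject rule MLT applies to
# mutated light paths, shown here on a 1-D stand-in "brightness" function.
import math
import random

def brightness(x):
    # Arbitrary non-negative target; in MLT this would be the image contribution
    # of a complete light path, not a simple analytic function.
    return math.exp(-x * x) + 0.3 * math.exp(-(x - 3.0) ** 2)

def metropolis(n_samples, step=0.8, x=0.0):
    samples = []
    fx = brightness(x)
    for _ in range(n_samples):
        y = x + random.uniform(-step, step)      # symmetric "mutation" of the state
        fy = brightness(y)
        # Accept with probability min(1, f(y)/f(x)); otherwise keep the old state.
        if random.random() < min(1.0, fy / fx):
            x, fx = y, fy
        samples.append(x)
    return samples

s = metropolis(10000)
print(sum(s) / len(s))    # samples concentrate where the "brightness" is large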
https://en.wikipedia.org/wiki/Computational%20archaeology
Computational archaeology describes computer-based analytical methods for the study of long-term human behaviour and behavioural evolution. As with other sub-disciplines that have prefixed 'computational' to their name (e.g., computational biology, computational physics and computational sociology), the term is reserved for (generally mathematical) methods that could not realistically be performed without the aid of a computer. Computational archaeology may include the use of geographical information systems (GIS), especially when applied to spatial analyses such as viewshed analysis and least-cost path analysis as these approaches are sufficiently computationally complex that they are extremely difficult if not impossible to implement without the processing power of a computer. Likewise, some forms of statistical and mathematical modelling, and the computer simulation of human behaviour and behavioural evolution using software tools such as Swarm or Repast would also be impossible to calculate without computational aid. The application of a variety of other forms of complex and bespoke software to solve archaeological problems, such as human perception and movement within built environments using software such as University College London's Space Syntax program, also falls under the term 'computational archaeology'. The acquisition, documentation and analysis of archaeological finds at excavations and in museums is an important field having pottery analysis as one of the major topics. In this area 3D-acquisition techniques like structured light scanning (SLS), photogrammetric methods like "structure from motion" (SfM), computed tomography as well as their combinations provide large data-sets of numerous objects for digital pottery research. These techniques are increasingly integrated into the in-situ workflow of excavations. The Austrian subproject of the Corpus vasorum antiquorum (CVA) is seminal for digital research on finds within museums. Computational archaeology is also known as "archaeological informatics" (Burenhult 2002, Huggett and Ross 2004) or "archaeoinformatics" (sometimes abbreviated as "AI", but not to be confused with artificial intelligence). Origins and objectives In recent years, it has become clear that archaeologists will only be able to harvest the full potential of quantitative methods and computer technology if they become aware of the specific pitfalls and potentials inherent in the archaeological data and research process. AI science is an emerging discipline that attempts to uncover, quantitatively represent and explore specific properties and patterns of archaeological information. Fundamental research on data and methods for a self-sufficient archaeological approach to information processing produces quantitative methods and computer software specifically geared towards archaeological problem solving and understanding. AI science is capable of complementing and enhancing almost any area of scientific archaeo
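Least-cost path analysis, one of the computationally heavy methods mentioned above, can be sketched with Dijkstra's algorithm on a small invented cost grid; real GIS analyses run the same idea over large terrain rasters.

# Sketch of a least-cost path computation: Dijkstra's algorithm over a grid in
# which each cell's value is the cost of stepping into it. The grid is invented.
import heapq

def least_cost_path(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    dist, prev = {start: 0}, {}
    heap = [(0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            break
        if d > dist.get((r, c), float("inf")):
            continue                              # stale queue entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + grid[nr][nc]             # cost of entering the neighbour
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)], prev[(nr, nc)] = nd, (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    path, node = [goal], goal
    while node != start:                          # walk back to recover the route
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[goal]

terrain = [[1, 1, 5, 5],
           [1, 9, 9, 1],
           [1, 1, 1, 1]]
print(least_cost_path(terrain, (0, 0), (0, 3)))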
https://en.wikipedia.org/wiki/Phong%20shading
In 3D computer graphics, Phong shading, Phong interpolation, or normal-vector interpolation shading is an interpolation technique for surface shading invented by computer graphics pioneer Bui Tuong Phong. Phong shading interpolates surface normals across rasterized polygons and computes pixel colors based on the interpolated normals and a reflection model. Phong shading may also refer to the specific combination of Phong interpolation and the Phong reflection model. History Phong shading and the Phong reflection model were developed at the University of Utah by Bui Tuong Phong, who published them in his 1973 Ph.D. dissertation and a 1975 paper. Phong's methods were considered radical at the time of their introduction, but have since become the de facto baseline shading method for many rendering applications. Phong's methods have proven popular due to their generally efficient use of computation time per rendered pixel. Phong interpolation Phong shading improves upon Gouraud shading and provides a better approximation of the shading of a smooth surface. Phong shading assumes a smoothly varying surface normal vector. The Phong interpolation method works better than Gouraud shading when applied to a reflection model with small specular highlights such as the Phong reflection model. The most serious problem with Gouraud shading occurs when specular highlights are found in the middle of a large polygon. Since these specular highlights are absent from the polygon's vertices and Gouraud shading interpolates based on the vertex colors, the specular highlight will be missing from the polygon's interior. This problem is fixed by Phong shading. Unlike Gouraud shading, which interpolates colors across polygons, in Phong shading, a normal vector is linearly interpolated across the surface of the polygon from the polygon's vertex normals. The surface normal is interpolated and normalized at each pixel and then used in a reflection model, e.g. the Phong reflection model, to obtain the final pixel color. Phong shading is more computationally expensive than Gouraud shading since the reflection model must be computed at each pixel instead of at each vertex. In modern graphics hardware, variants of this algorithm are implemented using pixel or fragment shaders. Phong reflection model Phong shading may also refer to the specific combination of Phong interpolation and the Phong reflection model, which is an empirical model of local illumination. It describes the way a surface reflects light as a combination of the diffuse reflection of rough surfaces with the specular reflection of shiny surfaces. It is based on Bui Tuong Phong's informal observation that shiny surfaces have small intense specular highlights, while dull surfaces have large highlights that fall off more gradually. The reflection model also includes an ambient term to account for the small amount of light that is scattered about the entire scene. See also List of common shading algorith
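A small sketch of the per-pixel work described above, assuming NumPy: a normal is interpolated between two vertex normals and then fed to the Phong reflection model (ambient, diffuse, and specular terms). The material constants, light, and view directions are arbitrary illustrative values.

# Sketch: interpolate a surface normal, then evaluate the Phong reflection model
# with it, as Phong shading does for every pixel. Constants are arbitrary.
import numpy as np

def normalize(v):
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

def phong(normal, light_dir, view_dir, ka=0.1, kd=0.7, ks=0.4, shininess=32.0):
    n, l, v = normalize(normal), normalize(light_dir), normalize(view_dir)
    ndotl = np.dot(n, l)
    diffuse = kd * max(ndotl, 0.0)
    specular = 0.0
    if ndotl > 0.0:                         # highlight only on the lit side
        r = 2.0 * ndotl * n - l             # mirror reflection of the light direction
        specular = ks * max(np.dot(r, v), 0.0) ** shininess
    return ka + diffuse + specular          # scalar intensity for one channel

# Per-pixel normal linearly interpolated between two vertex normals.
n0, n1 = np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.7, 0.7])
for t in (0.0, 0.5, 1.0):
    n = (1 - t) * n0 + t * n1               # phong() renormalizes it per pixel
    print(round(phong(n, light_dir=[0.0, 0.5, 1.0], view_dir=[0.0, 0.0, 1.0]), 3))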
https://en.wikipedia.org/wiki/Bump%20mapping
Bump mapping is a texture mapping technique in computer graphics for simulating bumps and wrinkles on the surface of an object. This is achieved by perturbing the surface normals of the object and using the perturbed normal during lighting calculations. The result is an apparently bumpy surface rather than a smooth surface although the surface of the underlying object is not changed. Bump mapping was introduced by James Blinn in 1978. Normal mapping is the most common variation of bump mapping used. Principles Bump mapping is a technique in computer graphics to make a rendered surface look more realistic by simulating small displacements of the surface. However, unlike displacement mapping, the surface geometry is not modified. Instead only the surface normal is modified as if the surface had been displaced. The modified surface normal is then used for lighting calculations (using, for example, the Phong reflection model) giving the appearance of detail instead of a smooth surface. Bump mapping is much faster and consumes less resources for the same level of detail compared to displacement mapping because the geometry remains unchanged. There are also extensions which modify other surface features in addition to increasing the sense of depth. Parallax mapping and horizon mapping are two such extensions. The primary limitation with bump mapping is that it perturbs only the surface normals without changing the underlying surface itself. Silhouettes and shadows therefore remain unaffected, which is especially noticeable for larger simulated displacements. This limitation can be overcome by techniques including displacement mapping where bumps are applied to the surface or using an isosurface. Methods There are two primary methods to perform bump mapping. The first uses a height map for simulating the surface displacement yielding the modified normal. This is the method invented by Blinn and is usually what is referred to as bump mapping unless specified. The steps of this method are summarized as follows. Before a lighting calculation is performed for each visible point (or pixel) on the object's surface: Look up the height in the heightmap that corresponds to the position on the surface. Calculate the surface normal of the heightmap, typically using the finite difference method. Combine the surface normal from step two with the true ("geometric") surface normal so that the combined normal points in a new direction. Calculate the interaction of the new "bumpy" surface with lights in the scene using, for example, the Phong reflection model. The result is a surface that appears to have real depth. The algorithm also ensures that the surface appearance changes as lights in the scene are moved around. The other method is to specify a normal map which contains the modified normal for each point on the surface directly. Since the normal is specified directly instead of derived from a height map this method usually leads to more predictable
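The steps listed above can be sketched directly, assuming NumPy; the height map, bump strength, and light direction are invented for illustration, and the underlying surface is taken to be a flat patch with a +Z geometric normal.

# Sketch of bump mapping from a height map: finite differences give the local
# slope, the slope perturbs the geometric normal, and the perturbed normal is
# used in a simple Lambertian lighting term. All values are invented.
import numpy as np

def bumped_normal(height, x, y, strength=1.0):
    # Steps 1-2: look up neighbouring heights and take central finite differences.
    dhdx = (height[y, x + 1] - height[y, x - 1]) * 0.5
    dhdy = (height[y + 1, x] - height[y - 1, x]) * 0.5
    # Step 3: perturb the true (here: flat, +Z) surface normal by the slope.
    n = np.array([-strength * dhdx, -strength * dhdy, 1.0])
    return n / np.linalg.norm(n)

def diffuse(n, light_dir):
    # Step 4: use the "bumpy" normal in the lighting calculation.
    l = np.asarray(light_dir, dtype=float)
    l = l / np.linalg.norm(l)
    return max(float(np.dot(n, l)), 0.0)

height = np.zeros((5, 5))
height[2, 2] = 1.0                          # a single bump on a flat patch
for x in (1, 2, 3):
    n = bumped_normal(height, x, 2)
    print(x, n.round(3), round(diffuse(n, [1.0, 0.0, 1.0]), 3))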
https://en.wikipedia.org/wiki/Valleys%20%26%20Cardiff%20Local%20Routes
Valleys & Cardiff Local Routes () (formerly Valley Lines) is the network of passenger suburban railway services radiating from Cardiff, Wales. It includes lines within the city itself, the Vale of Glamorgan and the South Wales Valleys. The services are currently operated by Transport for Wales Rail. In total, it serves 81 stations in six unitary authority areas: 20 in the city of Cardiff, 11 in the Vale of Glamorgan, 25 in Rhondda Cynon Taf, 15 in Caerphilly, 8 in Bridgend and 5 in Merthyr Tydfil. Services on these routes are provided by Class 150 DMUs and Class 769 bi-mode multiple units in Diesel mode. They are typically end-to-end, in that they run from one branch terminus, through Cardiff Queen Street station, to another branch terminus, e.g. from Pontypridd to Barry Island. The major hubs of the network are and . Other hubs are , and . History A stretch of the Vale of Glamorgan Line, on which passenger services were closed under the Beeching Axe, re-opened for passenger service, with services from to , via , Rhoose Cardiff Intl. Airport and Llantwit Major. These services were originally advertised to start in April 2005, but commenced on 12 June 2005. Previously services only went as far as Barry. On 28 March 2020, ownership of the lines between Cardiff and Treherbert, Aberdare, Merthyr Tydfil, Coryton, Rhymney and Cwmbargoed (the "Core Valley Lines") was transferred from Network Rail to Transport for Wales, who leased them to operator AKIL. Electrification On 16 July 2012 the UK Government announced plans to extend the electrification of the network at a cost of £350 million. This was at the same time of the announcement of electrification of the South Wales Main Line from Cardiff to Swansea. This would also see investment in new trains and continued improvements to stations. The investment will require new trains and should result in reduced journey times and cheaper maintenance of the network. Work was expected to start between 2014 and 2019, but has since been pushed back to between 2019 and 2024. Lines The colours used below are from the official network map (see External links). Stations in bold are major interchanges for the network. Routes Generally trains run from one line to another, joining at Cardiff Central eliminating the need for changing trains there. However they may not run for the whole length of the line. Services run between: Bridgend/Barry Island and Merthyr Tydfil/Aberdare - incorporating the Vale of Glamorgan and Merthyr Lines Penarth and Rhymney/Bargoed - incorporating the Vale of Glamorgan and Rhymney Lines Radyr and Coryton - incorporating the City and Coryton Lines Cardiff Central and Treherbert - incorporating the Rhondda Line only Cardiff Queen Street and Cardiff Bay - incorporating the Butetown Branch Line only Surrounding lines The following lines also serve Cardiff and the South Wales Valleys but are not considered part of the network by Transport for Wales and use more "mainline" rolling sto
https://en.wikipedia.org/wiki/XLISP
XLISP is a family of Lisp implementations written by David Betz and first released in 1983. The first version was a Lisp with object-oriented extensions for computers with limited power. The second version (XLISP 2.0) moved toward Common Lisp, but was by no means a complete implementation. After a long period of inactivity, the author released a new version based on XSCHEME, his Scheme implementation. The most current version follows the Scheme R3RS standard. Derivatives AutoLISP, a programming and scripting language for AutoCAD, is based on a very old version of XLISP. XLISP-PLUS is a derivative of XLISP 2.0 that continues to add Common Lisp features. Winterp is a derivative of XLISP-PLUS. XLISP-STAT is an implementation of Lisp-Stat, an environment for dynamic graphics and statistics with objects. Nyquist is an extension of XLISP for sound synthesis. ANIMAL (AN IMage ALgebra) is an image manipulation environment created by Carla Maria Modena and Roberto Brunelli. A 1989 entry to the IOCCC identifies itself as "XLISP 4.0". References External links Lisp programming language family Scheme (programming language) Object-oriented programming languages CP/M software
https://en.wikipedia.org/wiki/List%20of%20cities%20and%20towns%20in%20Russia
This is a list of cities and towns in Russia. According to the data of the 2010 Russian Census, there are 1,117 cities and towns in Russia. List Gallery See also Types of inhabited localities in Russia List of renamed cities and towns in Russia List of cities in Asia List of cities in Europe References External links List of all places in Russia (2002 census) Russian places with 2002 census population data (Excel file) Cities and towns Russia Russia
https://en.wikipedia.org/wiki/Rc
rc (for "run commands") is the command line interpreter for Version 10 Unix and Plan 9 from Bell Labs operating systems. It resembles the Bourne shell, but its syntax is somewhat simpler. It was created by Tom Duff, who is better known for an unusual C programming language construct ("Duff's device"). A port of the original rc to Unix is part of Plan 9 from User Space. A rewrite of rc for Unix-like operating systems by Byron Rakitzis is also available but includes some incompatible changes. Rc uses C-like control structures instead of the original Bourne shell's ALGOL-like structures, except that it uses an if not construct instead of else, and has a Bourne-like for loop to iterate over lists. In rc, all variables are lists of strings, which eliminates the need for constructs like "$@". Variables are not re-split when expanded. The language is described in Duff's paper. Influences es es (for "extensible shell") is an open source, command line interpreter developed by Rakitzis and Paul Haahr that uses a scripting language syntax influenced by the rc shell. It was originally based on code from Byron Rakitzis's clone of rc for Unix. Extensible shell is intended to provide a fully functional programming language as a Unix shell. It does so by introducing "program fragments" in braces as a new datatype, lexical scoping via let, and some more minor improvements. The bulk of es development occurred in the early 1990s, after the shell was introduced at the Winter 1993 USENIX conference in San Diego, Official releases appear to have ceased after 0.9-beta-1 in 1997, and es lacks features as compared to more popular shells, such as zsh and bash. A public domain fork of is active as of 2019. Examples The Bourne shell script: if [ "$1" = "hello" ]; then echo hello, world else case "$2" in 1) echo $# 'hey' "jude's"$3;; 2) echo `date` :$*: :"$@":;; *) echo why not 1>&2 esac for i in a b c; do echo $i done fi is expressed in rc as: if(~ $1 hello) echo hello, world if not { switch($2) { case 1 echo $#* 'hey' 'jude''s'^$3 case 2 echo `{date} :$"*: :$*: case * echo why not >[1=2] } for(i in a b c) echo $i } Rc also supports more dynamic piping: a |[2] b # pipe only standard error of a to b — equivalent to '3>&2 2>&1 >&3 | b' in Bourne shell a <>b # opens file b as a's standard input and standard output a <{b} <{c} # becomes a {standard output of b} {standard output of c}. Better known as "process substitution" References External links - Plan 9 manual page. Plan 9 from User Space - Includes rc and other Plan 9 tools for Linux, Mac OS X and other Unix-like systems. Byron Rakitzis' rewrite for Unix (article ) es Official website Free system software Inferno (operating system) Plan 9 from Bell Labs Procedural programming languages Programming languages created in 1989 Scripting languages Text-oriented programming languages Unix shells
https://en.wikipedia.org/wiki/GNU%20Autotools
The GNU Autotools, also known as the GNU Build System, is a suite of programming tools designed to assist in making source code packages portable to many Unix-like systems. It can be difficult to make a software program portable: the C compiler differs from system to system; certain library functions are missing on some systems; header files may have different names; shared libraries may be compiled and installed in different ways. One way to handle platform differences is to write conditional code, with code blocks selected by means of preprocessor directives (#ifdef); but because of the wide variety of build environments this approach quickly becomes unmanageable. Autotools is designed to address this problem more manageably. Autotools is part of the GNU toolchain and is widely used in many free software and open source packages. Its component tools are free software, licensed under the GNU General Public License with special license exceptions permitting its use with proprietary software. The GNU Build System makes it possible to build many programs using a two-step process: configure followed by make. Components Autotools consists of the GNU utility programs Autoconf, Automake, and Libtool. Other related tools frequently used alongside it include GNU's make program, GNU gettext, pkg-config, and the GNU Compiler Collection, also called GCC. GNU Autoconf Autoconf generates a configure script based on the contents of a configure.ac file, which characterizes a particular body of source code. The configure script, when run, scans the build environment and generates a subordinate config.status script which, in turn, converts other input files and most commonly Makefile.in into output files (Makefile), which are appropriate for that build environment. Finally, the make program uses Makefile to generate executable programs from source code. The complexity of Autotools reflects the variety of circumstances under which a body of source code may be built. If a source code file is changed then it suffices to re-run make, which only re-compiles that part of the body of the source code affected by the change. If a .in file has changed then it suffices to re-run config.status and make. If the body of source code is copied to another computer then it is sufficient to re-run configure (which runs config.status) and make. (For this reason source code using Autotools is normally distributed without the files that configure generates.) If the body of source code is changed more fundamentally, then configure.ac and the .in files need to be changed and all subsequent steps also followed. To process files, autoconf uses the GNU implementation of the m4 macro system. Autoconf comes with several auxiliary programs such as autoheader, which is used to help manage C header files; autoscan, which can create an initial input file for Autoconf; and ifnames, which can list C pre-processor identifiers used in the program. GNU Automake Automake helps t
https://en.wikipedia.org/wiki/Classful%20network
A classful network is an obsolete network addressing architecture used in the Internet from 1981 until the introduction of Classless Inter-Domain Routing (CIDR) in 1993. The method divides the IP address space for Internet Protocol version 4 (IPv4) into five address classes based on the leading four address bits. Classes A, B, and C provide unicast addresses for networks of three different network sizes. Class D is for multicast networking and the class E address range is reserved for future or experimental purposes.

Since its discontinuation, remnants of classful network concepts have remained in practice only in limited scope in the default configuration parameters of some network software and hardware components, most notably in the default configuration of subnet masks.

Background
In the original address definition, the most significant eight bits of the 32-bit IPv4 address formed the network number field, which specified the particular network a host was attached to. The remaining 24 bits specified the local address, also called the rest field (the rest of the address), which uniquely identified a host connected to that network. This format was sufficient at a time when only a few large networks existed, such as the ARPANET (network number 10), and before the wide proliferation of local area networks (LANs). As a consequence of this architecture, the address space supported only a low number (254) of independent networks.

Before the introduction of address classes, the only address blocks available were these large blocks, which later became known as Class A networks. As a result, some organizations involved in the early development of the Internet received address space allocations far larger than they would ever need (16,777,216 IP addresses each). It became clear early in the growth of the network that this would be a critical scalability limitation.

Introduction of address classes
Expansion of the network had to ensure compatibility with the existing address space and the IPv4 packet structure, and avoid the renumbering of the existing networks. The solution was to expand the definition of the network number field to include more bits, allowing more networks to be designated, each potentially having fewer hosts. Since all existing network numbers at the time were smaller than 64, they had only used the 6 least-significant bits of the network number field. Thus it was possible to use the most-significant bits of an address to introduce a set of address classes while preserving the existing network numbers in the first of these classes.

The new addressing architecture was introduced in 1981 as a part of the specification of the Internet Protocol. It divided the address space primarily into three address formats, henceforth called address classes, and left a fourth range reserved to be defined later. The first class, designated as Class A, contained all addresses in which the most significant bit is zero. The network number for this class
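As a quick illustration of how the class follows from the leading address bits, the following Python sketch (not part of the historical specification, names are illustrative) classifies an IPv4 address written in dotted-decimal form:

def ipv4_class(address):
    """Return the classful category (A-E) implied by the leading bits of an IPv4 address."""
    first_octet = int(address.split(".")[0])
    if not 0 <= first_octet <= 255:
        raise ValueError("not a valid IPv4 octet")
    if first_octet < 128:
        return "A"      # leading bit 0
    if first_octet < 192:
        return "B"      # leading bits 10
    if first_octet < 224:
        return "C"      # leading bits 110
    if first_octet < 240:
        return "D"      # leading bits 1110 (multicast)
    return "E"          # leading bits 1111 (reserved)

print(ipv4_class("10.0.0.1"), ipv4_class("172.16.0.1"), ipv4_class("224.0.0.5"))   # A B D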
https://en.wikipedia.org/wiki/Logit
In statistics, the logit function is the quantile function associated with the standard logistic distribution. It has many uses in data analysis and machine learning, especially in data transformations.

Mathematically, the logit is the inverse of the standard logistic function σ(x) = 1 / (1 + exp(−x)), so the logit is defined as

logit(p) = ln(p / (1 − p)) for p in (0, 1).

Because of this, the logit is also called the log-odds, since it is equal to the logarithm of the odds p / (1 − p), where p is a probability. Thus, the logit is a type of function that maps probability values from (0, 1) to real numbers in (−∞, +∞), akin to the probit function.

Definition
If p is a probability, then p / (1 − p) is the corresponding odds; the logit of the probability is the logarithm of the odds, i.e.:

logit(p) = ln(p / (1 − p)) = ln(p) − ln(1 − p)

The base of the logarithm function used is of little importance in the present article, as long as it is greater than 1, but the natural logarithm with base e is the one most often used. The choice of base corresponds to the choice of logarithmic unit for the value: base 2 corresponds to a shannon, base e to a "nat", and base 10 to a hartley; these units are particularly used in information-theoretic interpretations. For each choice of base, the logit function takes values between negative and positive infinity.

The "logistic" function of any number α is given by the inverse-logit:

logistic(α) = 1 / (1 + exp(−α))

The difference between the logits of two probabilities is the logarithm of the odds ratio, thus providing a shorthand for writing the correct combination of odds ratios only by adding and subtracting:

ln(OR) = logit(p1) − logit(p2) = ln((p1 / (1 − p1)) / (p2 / (1 − p2)))

History
There have been several efforts to adapt linear regression methods to a domain where the output is a probability value instead of any real number. In many cases, such efforts have focused on modeling this problem by mapping the range (0, 1) to (−∞, +∞) and then running the linear regression on these transformed values. In 1934 Chester Ittner Bliss used the cumulative normal distribution function to perform this mapping and called his model probit, an abbreviation for "probability unit". However, this is computationally more expensive. In 1944, Joseph Berkson used the log of odds and called this function logit, an abbreviation for "logistic unit", following the analogy with probit. Log odds was used extensively by Charles Sanders Peirce (late 19th century). G. A. Barnard in 1949 coined the commonly used term log-odds; the log-odds of an event is the logit of the probability of the event. Barnard also coined the term lods as an abstract form of "log-odds", but suggested that "in practice the term 'odds' should normally be used, since this is more familiar in everyday life".

Uses and properties
The logit in logistic regression is a special case of a link function in a generalized linear model: it is the canonical link function for the Bernoulli distribution. The logit function is the negative of the derivative of the binary entropy function. The logit is also central to the probabilistic Rasch model for measurement, which has applications in psychological and educational assessment, among other areas. The i
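A short Python transcription of the definitions above, using the natural logarithm (the function names are illustrative):

import math

def logit(p):
    """Log-odds (natural log) of a probability p in the open interval (0, 1)."""
    return math.log(p / (1.0 - p))

def logistic(x):
    """Inverse of logit: maps any real number back to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

p = 0.8
print(logit(p))             # ln(0.8 / 0.2) = ln(4), about 1.386
print(logistic(logit(p)))   # recovers 0.8 up to floating-point rounding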
https://en.wikipedia.org/wiki/Adaptive%20filter
An adaptive filter is a system with a linear filter that has a transfer function controlled by variable parameters and a means to adjust those parameters according to an optimization algorithm. Because of the complexity of the optimization algorithms, almost all adaptive filters are digital filters. Adaptive filters are required for some applications because some parameters of the desired processing operation (for instance, the locations of reflective surfaces in a reverberant space) are not known in advance or are changing. The closed loop adaptive filter uses feedback in the form of an error signal to refine its transfer function.

Generally speaking, the closed loop adaptive process involves the use of a cost function, which is a criterion for optimum performance of the filter, to feed an algorithm, which determines how to modify the filter transfer function to minimize the cost on the next iteration. The most common cost function is the mean square of the error signal.

As the power of digital signal processors has increased, adaptive filters have become much more common and are now routinely used in devices such as mobile phones and other communication devices, camcorders and digital cameras, and medical monitoring equipment.

Example application
The recording of a heart beat (an ECG) may be corrupted by noise from the AC mains. The exact frequency of the power and its harmonics may vary from moment to moment. One way to remove the noise is to filter the signal with a notch filter at the mains frequency and its vicinity, but this could excessively degrade the quality of the ECG since the heart beat would also likely have frequency components in the rejected range. To circumvent this potential loss of information, an adaptive filter could be used. The adaptive filter would take input both from the patient and from the mains and would thus be able to track the actual frequency of the noise as it fluctuates and subtract the noise from the recording. Such an adaptive technique generally allows for a filter with a smaller rejection range, which means, in this case, that the quality of the output signal is more accurate for medical purposes.

Block diagram
The idea behind a closed loop adaptive filter is that a variable filter is adjusted until the error (the difference between the filter output and the desired signal) is minimized. The Least Mean Squares (LMS) filter and the Recursive Least Squares (RLS) filter are types of adaptive filter.

There are two input signals to the adaptive filter, sometimes called the primary input and the reference input. The adaptation algorithm attempts to filter the reference input into a replica of the desired input by minimizing the residual signal. When the adaptation is successful, the output of the filter is effectively an estimate of the desired signal. The primary input includes the desired signal plus undesired interference, and the reference input includes the signals that are correlated to
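As a concrete sketch of the closed-loop idea, the following Python function implements the basic least mean squares (LMS) update for an adaptive FIR filter; the variable names, tap count and step size are illustrative rather than taken from any particular device.

import numpy as np

def lms_filter(x, d, num_taps=8, mu=0.01):
    """Basic LMS adaptive FIR filter.

    x  -- reference input (for example, a sample of the mains noise)
    d  -- primary input (desired signal plus interference)
    mu -- step size trading adaptation speed against stability
    Returns the residual (error) signal, which approximates the cleaned-up desired signal.
    """
    w = np.zeros(num_taps)                    # adaptive filter weights
    e = np.zeros(len(x))                      # residual signal
    for n in range(num_taps, len(x)):
        window = x[n - num_taps:n][::-1]      # most recent reference samples, newest first
        y = np.dot(w, window)                 # current estimate of the interference
        e[n] = d[n] - y                       # error: primary input minus interference estimate
        w = w + 2 * mu * e[n] * window        # steepest-descent step on the squared error
    return e

In the ECG example, d would be the electrode recording and x a reference tapped from the mains; the returned residual then approximates the heartbeat with the mains interference subtracted.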
https://en.wikipedia.org/wiki/Application%20layer
An application layer is an abstraction layer that specifies the shared communication protocols and interface methods used by hosts in a communications network. An application layer abstraction is specified in both the Internet Protocol Suite (TCP/IP) and the OSI model. Although both models use the same term for their respective highest-level layer, the detailed definitions and purposes are different. Internet protocol suite In the Internet protocol suite, the application layer contains the communications protocols and interface methods used in process-to-process communications across an Internet Protocol (IP) computer network. The application layer only standardizes communication and depends upon the underlying transport layer protocols to establish host-to-host data transfer channels and manage the data exchange in a client–server or peer-to-peer networking model. Though the TCP/IP application layer does not describe specific rules or data formats that applications must consider when communicating, the original specification (in ) does rely on and recommend the robustness principle for application design. OSI model In the OSI model, the definition of the application layer is narrower in scope. The OSI model defines the application layer as only the interface responsible for communicating with host-based and user-facing applications. OSI then explicitly distinguishes the functionality of two additional layers, the session layer and presentation layer, as separate levels below the application layer and above the transport layer. OSI specifies a strict modular separation of functionality at these layers and provides protocol implementations for each. In contrast, the Internet Protocol Suite compiles these functions into a single layer. Sublayers Originally the OSI model consisted of two kinds of application layer services with their related protocols. These two sublayers are the common application service element (CASE) and specific application service element (SASE). Generally, an application layer protocol is realized by the use of the functionality of a number of application service elements. Some application service elements invoke different procedures based on the version of the session service available. CASE The common application service element sublayer provides services for the application layer and request services from the session layer. It provides support for common application services, such as: ACSE (Association Control Service Element) ROSE (Remote Operation Service Element) CCR (Commitment Concurrency and Recovery) RTSE (Reliable Transfer Service Element) SASE The specific application service element sublayer provides application-specific services (protocols), such as: FTAM (File Transfer, Access and Manager) VT (Virtual Terminal) MOTIS (Message Oriented Text Interchange Standard) CMIP (Common Management Information Protocol) JTM (Job Transfer and Manipulation) MMS (Manufacturing Messaging Specification) RDA (Rem
https://en.wikipedia.org/wiki/Schengen%20Information%20System
The Schengen Information System (SIS) is a governmental database maintained by the European Commission. The SIS is used by 31 European countries to find information about individuals and entities for the purposes of national security, border control and law enforcement since 2001. A second technical version of this system, SIS II, went live on 9 April 2013. An upgraded Schengen Information System entered into operation on 7 March 2023. Participating nations Information in SIS is shared among the institutions of countries participating in the Schengen Agreement Application Convention (SAAC). The five original participating countries were France, Germany, Belgium, the Netherlands, and Luxembourg. Twenty-two additional countries joined the system since its creation: Spain, Portugal, Italy, Austria, Greece, Finland, Sweden, Switzerland, Denmark, Iceland, Norway, Estonia, the Czech Republic, Hungary, Latvia, Lithuania, Malta, Poland, Slovakia, Slovenia, Liechtenstein and Croatia. Among the current participants, Iceland, Liechtenstein, Norway, and Switzerland are members of the European Free Trade Association but not of the European Union. Although Ireland and the United Kingdom operate a Common Travel Area and had not signed the Schengen Agreement Application Convention while the United Kingdom was still an EU member, they had the right to take part in Schengen co-operation under the terms of the Treaty of Amsterdam, which introduced the provisions of Schengen acquis into European Union law. Schengen allowed the United Kingdom and Ireland to take part in all or part of the Schengen convention arrangements. The United Kingdom ceased to have access at the end of a transition period on 31 December 2020; Ireland was connected to SIS II on 15 March 2021. Since 1 August 2018, Bulgaria and Romania have full access to SIS; before that they had access to SIS only for law enforcement purposes. Cyprus joined SIS on 25 July 2023. Ireland joined the law enforcement aspect on 1 January 2021 and has "full operational capacity" since March 2021. The United Kingdom did 571 million searches in the database in 2019. Introduction SIS information is stored according to the legislation of each participating country. There are more than 46 million entries (called "alerts") in SIS, most about lost identity documents. Person alerts make up around 1.9 percent of the database (about 885,000 records). Each alert contains information such as: name, date of birth, gender, nationality, aliases, arms or history of violence, the reason for the alert and the action to be taken if the person is encountered. SIS does not record travellers' entries and exits from the Schengen Area. History On 25 March 1957, the Treaty of Rome was completed. On 3 February 1958, the economic union of the Benelux countries was formed. Both agreements aimed to enable the free movement of people and goods across national borders. The Benelux countries, as a smaller group, were able to quickly implem
https://en.wikipedia.org/wiki/Stride
Stride or STRIDE may refer to: Computing STRIDE (security), spoofing, tampering, repudiation, information disclosure, denial of service, elevation of privilege Stride (software), a successor to the cloud-based HipChat, a corporate cloud-based collaboration tool Stride (game engine), a free and open-source 2D and 3D cross-platform game engine STRIDE (algorithm), an algorithm for identifying secondary structures in proteins Stride of an array, in computer programming Stride scheduling, a soft real-time scheduling algorithm System to Retrieve Information from Drug Evidence, a United States Drug Enforcement Administration database used to track the prices of drugs obtained in sting operations Music Stride (composition), a 2019 orchestral composition by Tania León Stride (music), a type of piano playing "Stride", a song by Avail from their 1992 album Satiate "Stride," a song by Canadian musician Hayden from his 1996 EP Moving Careful People Stride (surname) Other uses Stride (gum), a chewing gum produced by Cadbury Adams Stride Rite Corporation, a footwear company Strides, a British & Australian slang term for trousers Strides Pharma, a pharmaceutical company headquartered in India Stride, formerly K12, an online education company
https://en.wikipedia.org/wiki/QuakeNet
QuakeNet is an Internet Relay Chat (IRC) network, and was one of the largest IRC networks. The network was founded in 1997 by Garfield (Henrik Rasmussen, Denmark) and Oli (Oli Gustafsson, Sweden) as a new home for their respective countries' Quake channels. At its peak on February 8, 2005, the network recorded 243,394 simultaneous connections. , there are 9 servers and about 12,000 users remaining. About QuakeNet Founded in 1997 as an IRC network for QuakeWorld players, QuakeNet saw huge growth over the coming years as it attracted many other gamers. As interest in IRC started to decline, QuakeNet's userbase followed suit. Services Channels often feature QuakeNet's requestable bespoke channel service 'Q'. Q is the main channel service and manages account authentication similar to nickname registration on servers with Nickserv; although there is no nickname protection service, instead operating on a first come first served basis. The other popular channel service seen in the larger channels is 'S'. S is SpamScan, a service used to detect spam from channels and warn or later punish the offending users. Since April 2014, D was also added as a channel service which collects various statistic metrics about a channel, such as word counts and popular phrases. Additional services include O as an operserv reference bot to the server operators on QuakeNet, and R (RequestBot) which allows users to request both Q and S if their channel meets their requirements. There are many other backend services which help QuakeNet staff administer the network. QuakeNet also is the home to many other third-party bot-operated services that can be used for various purposes to assist channel operators to run their channels or provide light entertainment. Many of these channels can be found using the channel search facility on the QuakeNet website. Webchat QuakeNet has a webchat client which allows users to connect to the network without the use of a dedicated IRC client. The client software, called qwebirc, was created by the QuakeNet development team. It is often embedded into other websites and also used by other IRC networks. References External links QuakeNet open-sources core services Interview with QuakeNet staff Internet Relay Chat networks Quake (series)
https://en.wikipedia.org/wiki/Aperture%20grille
An aperture grille is one of two major technologies used to manufacture color cathode-ray tube (CRT) televisions and computer displays; the other is the shadow mask. Fine vertical wires behind the front glass of the display screen separate the different colors of phosphors into strips. These wires are positioned such that an electron beam from one of three guns at the rear of the tube is only able to strike phosphors of the appropriate color. That is, the blue electron gun will strike blue phosphors, but will find a wire blocks the path to red and green phosphors. The fine wires allow for a finer dot pitch as they can be spaced much closer together than the perforations of a shadow mask, and there need be no gap between adjacent horizontal pixels. During the display of bright images, a shadow mask warms, and expands outward in all directions (sometimes called blooming). Aperture grilles do not exhibit this behavior; when the wires heat up, they expand vertically. Because there are no defined holes, this expansion does not affect the image, and the wires do not move horizontally. The vertical wires of the aperture grille have a resonant frequency and will vibrate in sympathetic resonance with loud sounds near the display, resulting in fluttering and shimmering of colors on the display. To reduce these resonant effects, one or two horizontal stabilizing wires, called "damping wires", are welded across the grille wires, and may be visible as fine dark lines across the face of the screen. These stabilizing wires provide the easiest way to distinguish aperture grille and shadow mask displays at a glance. The stabilized grille can still vibrate but the sounds need to be loud and be emitted close to the display. Additionally, aperture grille displays tend to be vertically flat and are often horizontally flat as well, while shadow mask displays usually have a spherical curvature. The first patented aperture grille televisions were manufactured by Sony in the late 1960s under the Trinitron brand name, which the company carried over to its line of CRT computer monitors. Subsequent designs, whether licensed from Sony or manufactured after the patent's expiration, tend to use the -tron suffix, such as Mitsubishi's DiamondTron and ViewSonic's SonicTron. Aperture grilles are not as mechanically stable as shadow or slot masks; a tap can cause the image to briefly become distorted, even with damping/support wires. See also Shadow mask Cromaclear Chromatron Trinitron References External links Aperture grille details Cathode ray tube Japanese inventions
https://en.wikipedia.org/wiki/Shadow%20mask
The shadow mask is one of the two technologies used in the manufacture of cathode-ray tube (CRT) televisions and computer monitors which produce clear, focused color images. The other approach is the aperture grille, better known by its trade name, Trinitron. All early color televisions and the majority of CRT computer monitors used shadow mask technology. Both of these technologies are largely obsolete, having been increasingly replaced since the 1990s by the liquid-crystal display (LCD). A shadow mask is a metal plate punched with tiny holes that separate the colored phosphors in the layer behind the front glass of the screen. Shadow masks are made by photochemical machining, a technique that allows for the drilling of small holes on metal sheets. Three electron guns at the back of the screen sweep across the mask, with the beams only reaching the screen if they pass through the holes. As the guns are physically separated at the back of the tube, their beams approach the mask from three slightly different angles, so after passing through the holes they hit slightly different locations on the screen. The screen is patterned with dots of colored phosphor positioned so that each can only be hit by one of the beams coming from the three electron guns. For instance, the blue phosphor dots are hit by the beam from the "blue gun" after passing through a particular hole in the mask. The other two guns do the same for the red and green dots. This arrangement allows the three guns to address the individual dot colors on the screen, even though their beams are much too large and too poorly aimed to do so without the mask in place. A red, a green, and a blue phosphor are generally arranged in a triangular shape (sometimes called a "triad"). For television use, modern displays (starting in the late 1960s) use rectangular slots instead of circular holes, improving brightness. This variation is sometimes referred to as a slot mask. Development Color television Color television had been studied even before commercial broadcasting became common, but it was not until the late 1940s that the problem was seriously considered. At the time, a number of systems were being proposed that used separate red, green and blue signals (RGB), broadcast in succession. Most experimental systems broadcast entire frames in sequence, with a colored filter (or "gel") that rotated in front of an otherwise conventional black and white television tube. Each frame encoded one color of the picture, and the wheel spun in sync with the signal so the correct gel was in front of the screen when that colored frame was being displayed. Because they broadcast separate signals for the different colors, all of these systems were incompatible with existing black and white sets. Another problem was that the mechanical filter made them flicker unless very high refresh rates were used. (This is conceptually similar to a DLP based projection display where a single DLP device is used for all th
https://en.wikipedia.org/wiki/Modchip
A modchip (short for modification chip) is a small electronic device used to alter or disable artificial restrictions of computers or entertainment devices. Modchips are mainly used in video game consoles, but also in some DVD or Blu-ray players. They introduce various modifications to its host system's function, including the circumvention of region coding, digital rights management, and copy protection checks for the purpose of using media intended for other markets, copied media, or unlicensed third-party (homebrew) software. Function and construction Modchips operate by replacing or overriding a system's protection hardware or software. They achieve this by either exploiting existing interfaces in an unintended or undocumented manner, or by actively manipulating the system's internal communication, sometimes to the point of re-routing it to substitute parts provided by the modchip. Most modchips consist of one or more integrated circuits (microcontrollers, FPGAs, or CPLDs), often complemented with discrete parts, usually packaged on a small PCB to fit within the console system it is designed for. Although there are modchips that can be reprogrammed for different purposes, most modchips are designed to work within only one console system or even only one specific hardware version. Modchips typically require some degree of technical skill to install since they must be connected to a console's circuitry, most commonly by soldering wires to select traces or chip legs on a system's circuit board. Some modchips allow for installation by directly soldering the modchip's contacts to the console's circuit ("quicksolder"), by the precise positioning of electrical contacts ("solderless"), or, in rare cases, by plugging them into a system's internal or external connector. Memory cards or cartridges that offer functions similar to modchips work on a completely different concept, namely by exploiting flaws in the system's handling of media. Such devices are not referred to as modchips, even if they are frequently traded under this umbrella term. The diversity of hardware modchips operate on and varying methods they use mean that while modchips are often used for the same goal, they may work in vastly different ways, even if they are intended for use on the same console. Some of the first modchips for the Nintendo Wii, known as drive chips, modify the behaviour and communication of the optical drive to bypass security. On the Xbox 360, a common modchip took advantage of the fact short periods of instability in the CPU could be used to fairly reliably lead it to incorrectly compare security signatures. The precision required in this attack meant that the modchip had to make use of a CPLD. Other modchips, such as the XenoGC and clones for the GameCube, invoke a debug mode where security measures are reduced or absent (in which case a stock Atmel AVR microcontroller was used). A more recent innovation are optical disk drive emulators or ODDE, which repla
https://en.wikipedia.org/wiki/Chinese%20postman%20problem
In graph theory, a branch of mathematics and computer science, Guan's route problem, the Chinese postman problem, postman tour or route inspection problem is to find a shortest closed path or circuit that visits every edge of an (connected) undirected graph at least once. When the graph has an Eulerian circuit (a closed walk that covers every edge once), that circuit is an optimal solution. Otherwise, the optimization problem is to find the smallest number of graph edges to duplicate (or the subset of edges with the minimum possible total weight) so that the resulting multigraph does have an Eulerian circuit. It can be solved in polynomial time. The problem was originally studied by the Chinese mathematician Kwan Mei-Ko in 1960, whose Chinese paper was translated into English in 1962. The original name "Chinese postman problem" was coined in his honor; different sources credit the coinage either to Alan J. Goldman or Jack Edmonds, both of whom were at the U.S. National Bureau of Standards at the time. A generalization is to choose any set T of evenly many vertices that are to be joined by an edge set in the graph whose odd-degree vertices are precisely those of T. Such a set is called a T-join. This problem, the T-join problem, is also solvable in polynomial time by the same approach that solves the postman problem. Undirected solution and T-joins The undirected route inspection problem can be solved in polynomial time by an algorithm based on the concept of a T-join. Let T be a set of vertices in a graph. An edge set J is called a T-join if the collection of vertices that have an odd number of incident edges in J is exactly the set T. A T-join exists whenever every connected component of the graph contains an even number of vertices in T. The T-join problem is to find a T-join with the minimum possible number of edges or the minimum possible total weight. For any T, a smallest T-join (when it exists) necessarily consists of paths that join the vertices of T in pairs. The paths will be such that the total length or total weight of all of them is as small as possible. In an optimal solution, no two of these paths will share any edge, but they may have shared vertices. A minimum T-join can be obtained by constructing a complete graph on the vertices of T, with edges that represent shortest paths in the given input graph, and then finding a minimum weight perfect matching in this complete graph. The edges of this matching represent paths in the original graph, whose union forms the desired T-join. Both constructing the complete graph, and then finding a matching in it, can be done in O(n3) computational steps. For the route inspection problem, T should be chosen as the set of all odd-degree vertices. By the assumptions of the problem, the whole graph is connected (otherwise no tour exists), and by the handshaking lemma it has an even number of odd vertices, so a T-join always exists. Doubling the edges of a T-join causes the given graph t
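A compact Python sketch of the approach described above, suitable only for small instances: it collects the odd-degree vertices T, computes all-pairs shortest paths, and finds a minimum-weight pairing of T by brute force rather than by a polynomial-time matching algorithm; extraction of the Eulerian circuit itself is omitted. It assumes a simple, connected, undirected graph given as an adjacency dictionary (the names are illustrative).

from itertools import permutations

def all_pairs_shortest(graph):
    """Floyd-Warshall shortest path lengths; graph is {vertex: {neighbour: weight}}."""
    nodes = list(graph)
    dist = {u: {v: float("inf") for v in nodes} for u in nodes}
    for u in nodes:
        dist[u][u] = 0
        for v, w in graph[u].items():
            dist[u][v] = min(dist[u][v], w)
    for k in nodes:
        for i in nodes:
            for j in nodes:
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

def postman_tour_length(graph):
    """Length of a shortest closed walk covering every edge (small graphs only)."""
    total = sum(w for u in graph for w in graph[u].values()) / 2   # each undirected edge counted twice
    odd = [v for v in graph if len(graph[v]) % 2 == 1]             # the set T of odd-degree vertices
    if not odd:
        return total                                               # graph is already Eulerian
    dist = all_pairs_shortest(graph)
    best = float("inf")
    first, rest = odd[0], odd[1:]
    for perm in permutations(rest):                                # brute-force perfect matching on T
        pairing = [first] + list(perm)
        cost = sum(dist[pairing[i]][pairing[i + 1]] for i in range(0, len(pairing), 2))
        best = min(best, cost)
    return total + best                                            # duplicated shortest paths make the graph Eulerian

square_with_diagonal = {"a": {"b": 1, "c": 1, "d": 1}, "b": {"a": 1, "c": 1},
                        "c": {"a": 1, "b": 1, "d": 1}, "d": {"a": 1, "c": 1}}
print(postman_tour_length(square_with_diagonal))   # 6.0: five unit edges plus one repeated edge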
https://en.wikipedia.org/wiki/Rete%20algorithm
The Rete algorithm ( , , rarely , ) is a pattern matching algorithm for implementing rule-based systems. The algorithm was developed to efficiently apply many rules or patterns to many objects, or facts, in a knowledge base. It is used to determine which of the system's rules should fire based on its data store, its facts. The Rete algorithm was designed by Charles L. Forgy of Carnegie Mellon University, first published in a working paper in 1974, and later elaborated in his 1979 Ph.D. thesis and a 1982 paper. Overview A naive implementation of an expert system might check each rule against known facts in a knowledge base, firing that rule if necessary, then moving on to the next rule (and looping back to the first rule when finished). For even moderate sized rules and facts knowledge-bases, this naive approach performs far too slowly. The Rete algorithm provides the basis for a more efficient implementation. A Rete-based expert system builds a network of nodes, where each node (except the root) corresponds to a pattern occurring in the left-hand-side (the condition part) of a rule. The path from the root node to a leaf node defines a complete rule left-hand-side. Each node has a memory of facts that satisfy that pattern. This structure is essentially a generalized trie. As new facts are asserted or modified, they propagate along the network, causing nodes to be annotated when that fact matches that pattern. When a fact or combination of facts causes all of the patterns for a given rule to be satisfied, a leaf node is reached and the corresponding rule is triggered. Rete was first used as the core engine of the OPS5 production system language, which was used to build early systems including R1 for Digital Equipment Corporation. Rete has become the basis for many popular rule engines and expert system shells, including CLIPS, Jess, Drools, IBM Operational Decision Management, BizTalk Rules Engine, and Soar. The word 'Rete' is Latin for 'net' or 'comb'. The same word is used in modern Italian to mean 'network'. Charles Forgy has reportedly stated that he adopted the term 'Rete' because of its use in anatomy to describe a network of blood vessels and nerve fibers. The Rete algorithm is designed to sacrifice memory for increased speed. In most cases, the speed increase over naïve implementations is several orders of magnitude (because Rete performance is theoretically independent of the number of rules in the system). In very large expert systems, however, the original Rete algorithm tends to run into memory and server consumption problems. Other algorithms, both novel and Rete-based, have since been designed that require less memory (e.g. Rete* or Collection Oriented Match). Description The Rete algorithm provides a generalized logical description of an implementation of functionality responsible for matching data tuples ("facts") against productions ("rules") in a pattern-matching production system (a category of rule engine)
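The following Python sketch is a drastic simplification intended only to illustrate one idea from the description above: each node keeps a memory of the facts satisfying its pattern, so a test shared by several rules is evaluated once per fact rather than once per rule per cycle. A real Rete network additionally builds beta (join) nodes with their own memories so that combined conditions are satisfied by consistent facts, and it handles variable bindings, retractions and much more; none of that is shown here.

class AlphaNode:
    """A single condition test (attribute equals value) with a memory of matching facts."""
    def __init__(self, attribute, value):
        self.attribute, self.value = attribute, value
        self.memory = []                      # facts already known to satisfy this condition

    def assert_fact(self, fact):
        if fact.get(self.attribute) == self.value:
            self.memory.append(fact)

class Rule:
    def __init__(self, name, conditions):
        self.name, self.conditions = name, conditions

    def satisfied(self):
        return all(node.memory for node in self.conditions)

# Two rules share the "kind == animal" node, so that test is evaluated once per fact.
is_animal = AlphaNode("kind", "animal")
is_large = AlphaNode("size", "large")
is_small = AlphaNode("size", "small")
rules = [Rule("large-animal", [is_animal, is_large]),
         Rule("small-animal", [is_animal, is_small])]
network = [is_animal, is_large, is_small]

for fact in [{"kind": "animal", "size": "large"}, {"kind": "plant", "size": "large"}]:
    for node in network:
        node.assert_fact(fact)                # each new fact propagates through the network once

print([rule.name for rule in rules if rule.satisfied()])   # ['large-animal']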
https://en.wikipedia.org/wiki/AmigaOne
AmigaOne is a series of computers intended to run AmigaOS 4 developed by Hyperion Entertainment, as a successor to the Amiga series by Commodore International. Earlier models were produced by Eyetech, and were based on the Teron series of PowerPC POP mainboards. In September 2009, Hyperion Entertainment secured an exclusive licence for the AmigaOne name and subsequently new AmigaOne computers were released by A-Eon Technology and Acube Systems. History AmigaOne by Eyetech (2000–05) Originally in 2000, AmigaOne was the name of a project for new computer hardware to run the Amiga Digital Environment (DE), later plans replaced by AmigaOS 4. Initially it was managed by Eyetech and designed by the German company Escena GmbH. The AmigaOne motherboard was to be available in two models, the AmigaOne-1200 and the AmigaOne-4000 as expansions for the Amiga 1200 and Amiga 4000 computers. This would probably not have been actually possible. This AmigaOne project was cancelled in the design stage in 2001, mostly due to the inability to find or design a suitable northbridge chip. Eyetech, who at this point had invested funds into the project, was forced instead to license the Teron CX board from Mai to form the basis of the new AmigaONE computer range. The first fruit of this partnership with Mai, AmigaOne SE, was announced with a connector for an optionally attached Amiga 1200, in order to use the old custom chips of an Amiga for backwards compatibility. However, no such solution was ever introduced. The main difference between the ATX-format AmigaOne SE and AmigaOne XE was that the SE had a soldered-on 600 MHz PowerPC 750CXe processor, whereas the XE used a CPU board attached to a MegArray connector on the motherboard. While the MegArray connector is physically similar to the Apple Power Mac G4 CPU daughtercard connector, it is not electrically compatible. There were G3 and G4 options with a maximum clock frequency of 800 MHz and 933 MHz. The G4 module originally used a Freescale 7451 processor which was later changed to a Freescale 7455, both without level 3 cache. The G4 CPU runs hotter and requires a better heatsink than that supplied on some machines. Consequently, the G4 was often supplied underclocked to run at 800 MHz. In 2007 Acube offered 1.267 GHz 7457. The Micro-A1 was announced in two configurations, under the Micro-A1 I (Industrial) and Micro-A1 C (Consumer) labels. Only the C configuration was produced. Both AmigaOneG3-XE and AmigaOneG4-XE has four 32-bit PCI-slots (3× 33 MHz, 1× 66 MHz) and one AGP-2x slot. The Micro-A1 has only one 32-bit PCI-slot and an integrated Radeon 7000 via AGP with dedicated 32 MB VRAM. AmigaOne (SE and XE) motherboards had several hardware issues including conflicts between the onboard IDE and Ethernet controllers, problems with USB device detection and initially no support for the on-board AC97 audio. Due to the mistaken belief that the on-board AC97 audio could not be supported, the AC97 codec was removed from
https://en.wikipedia.org/wiki/Iterator
In computer programming, an iterator is an object that enables a programmer to traverse a container, particularly lists. Various types of iterators are often provided via a container's interface. Though the interface and semantics of a given iterator are fixed, iterators are often implemented in terms of the structures underlying a container implementation and are often tightly coupled to the container to enable the operational semantics of the iterator. An iterator performs traversal and also gives access to data elements in a container, but does not itself perform iteration (i.e., not without some significant liberty taken with that concept or with trivial use of the terminology). An iterator is behaviorally similar to a database cursor. Iterators date to the CLU programming language in 1974. Description Internal Iterators Internal iterators are higher order functions (often taking anonymous functions, but not necessarily) such as map(), reduce() etc., implementing the traversal across a container, applying the given function to every element in turn. An example might be Python's map function: digits = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] squared_digits = map(lambda x: x**2, digits) # Iterating over this iterator would result in 0, 1, 4, 9, 16, ..., 81. External iterators and the iterator pattern An external iterator may be thought of as a type of pointer that has two primary operations: referencing one particular element in the object collection (called element access), and modifying itself so it points to the next element (called element traversal). There must also be a way to create an iterator so it points to some first element as well as some way to determine when the iterator has exhausted all of the elements in the container. Depending on the language and intended use, iterators may also provide additional operations or exhibit different behaviors. The primary purpose of an iterator is to allow a user to process every element of a container while isolating the user from the internal structure of the container. This allows the container to store elements in any manner it wishes while allowing the user to treat it as if it were a simple sequence or list. An iterator class is usually designed in tight coordination with the corresponding container class. Usually, the container provides the methods for creating iterators. A loop counter is sometimes also referred to as a loop iterator. A loop counter, however, only provides the traversal functionality and not the element access functionality. Generators One way of implementing iterators is to use a restricted form of coroutine, known as a generator. By contrast with a subroutine, a generator coroutine can yield values to its caller multiple times, instead of returning just once. Most iterators are naturally expressible as generators, but because generators preserve their local state between invocations, they're particularly well-suited for complicated, stateful iterators, such as tre
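The external-iterator and generator ideas can be shown in Python alongside the map() example above (the names are illustrative):

colors = ["red", "green", "blue"]
it = iter(colors)            # obtain an external iterator from the container
print(next(it))              # 'red'   -- element access plus traversal in one step
print(next(it))              # 'green' -- next() raises StopIteration once the container is exhausted

def countdown(n):
    """A generator: a restricted coroutine that yields a value each time it is resumed."""
    while n > 0:
        yield n              # suspends here, preserving local state between resumptions
        n -= 1

print(list(countdown(3)))    # [3, 2, 1]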
https://en.wikipedia.org/wiki/InterBase
InterBase is a relational database management system (RDBMS) currently developed and marketed by Embarcadero Technologies. InterBase is distinguished from other RDBMSs by its small footprint, close to zero administration requirements, and multi-generational architecture. InterBase runs on the Microsoft Windows, macOS, Linux, Solaris operating systems as well as iOS and Android. Technology InterBase is a SQL-92-compliant relational database and supports standard interfaces such as JDBC, ODBC, and ADO.NET. Small footprint A full InterBase server installation requires around 40 MB on disk. A minimum InterBase client install requires about 400 KB of disk space. Embedded or server InterBase can be run as an embedded database or regular server. Data controller friendly inbuilt encryption Since InterBase XE, InterBase includes 256-bit AES-strength encryption that offers full database, table or column data encryption. This assists data controllers conform with data protection laws around at-rest data by separating encryption and access to the database, ensuring the database file is encrypted wherever it resides. The separation of the encryption also enables developers to just develop the application rather than worry about the data visible from a specific user login. Multi-generational architecture Concurrency control To avoid blocking during updates, Interbase uses multiversion concurrency control instead of locks. Each transaction will create a version of the record. Upon the write step, the update will fail rather than be blocked initially. Rollbacks and recovery InterBase also uses multi-generational records to implement rollbacks rather than transaction logs. Drawbacks Certain operations are more difficult to implement in a multi-generational architecture, and hence perform slowly relative to a more traditional implementation. One example is the SQL COUNT verb. Even when an index is available on the column or columns included in the COUNT, all records must be visited in order to see if they are visible under the current transaction isolation. History Multiversion concurrency control before InterBase Multiversion concurrency control is described in some detail in sections 4.3 and 5.5 of the 1981 paper "Concurrency Control in Distributed Database Systems" by Philip Bernstein and Nathan Goodman—then employed by the Computer Corporation of America. Bernstein and Goodman's paper cites a 1978 dissertation by D.P. Reed which quite clearly describes MVCC and claims it as an original work. Early years Jim Starkey was working at DEC on their DATATRIEVE 4th generation language 4GL product when he came up with an idea for a system to manage concurrent changes by many users. The idea dramatically simplified the existing problems of locking which were proving to be a serious problem for the new relational database systems being developed at the time. Starkey, however, had the idea after he had spun off his original relational database project to ano
https://en.wikipedia.org/wiki/OpenGL%20Utility%20Library
The OpenGL Utility Library (GLU) is a computer graphics library for OpenGL. It consists of a number of functions that use the base OpenGL library to provide higher-level drawing routines from the more primitive routines that OpenGL provides. It is usually distributed with the base OpenGL package. GLU is not implemented in the embedded version of the OpenGL package, OpenGL ES. Among these features are mapping between screen- and world-coordinates, generation of texture mipmaps, drawing of quadric surfaces, NURBS, tessellation of polygonal primitives, interpretation of OpenGL error codes, an extended range of transformation routines for setting up viewing volumes and simple positioning of the camera, generally in more human-friendly terms than the routines presented by OpenGL. It also provides additional primitives for use in OpenGL applications, including spheres, cylinders and disks. All GLU functions start with the glu prefix. An example function is gluOrtho2D which defines a two dimensional orthographic projection matrix. The GLU specification was last updated in 1998, and it depends on features which were deprecated with the release of OpenGL 3.1 in 2009. Specifications for GLU are still available here See also FreeGLUT OpenGL User Interface Library (GLUI) OpenGL Utility Toolkit (GLUT) References OpenGL
https://en.wikipedia.org/wiki/Call%20Level%20Interface
The Call Level Interface (CLI) is an application programming interface (API) and software standard to embed Structured Query Language (SQL) code in a host program as defined in a joint standard by the International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC): ISO/IEC 9075-3:2003. The Call Level Interface defines how a program should send SQL queries to the database management system (DBMS) and how the returned recordsets should be handled by the application in a consistent way. Developed in the early 1990s, the API was defined only for the programming languages C and COBOL. The interface is part of what The Open Group, publishes in a part of the X/Open Portability Guide, termed the Common Application Environment, which is intended to be a wide standard for programming open applications, i.e., applications from different programming teams and different vendors that can interoperate efficiently. SQL/CLI provides an international standard implementation-independent CLI to access SQL databases. Client–server tools can easily access databases through dynamic-link libraries (DLL). It supports and encourages a rich set of client–server tools. The most widespread use of the CLI standard is the basis of the Open Database Connectivity (ODBC) specification, which is widely used to allow applications to transparently access database systems from different vendors. The current version of the API, ODBC 3.52, incorporates features from both the ISO and X/Open standards. Examples of languages that support Call Level Interface are ANSI C, C#, Visual Basic .NET (VB.NET), Java, Pascal, and Fortran. History The work with the Call Level Interface began in a subcommittee of the US-based SQL Access Group (SAG) In 1992, it was initially published and marketed as Microsoft's ODBC API. The CLI specification was submitted as to the ISO and American National Standards Institute (ANSI) standards committees in 1993. The standard has the book number and the internal document number is C451. ISO SQL/CLI is an addendum to 1992 SQL standard (SQL-92). It was completed as ISO standard ISO/IEC 9075-3:1995 Information technology—Database languages—SQL—Part 3: Call-Level Interface (SQL/CLI). The current SQL/CLI effort is adding support for SQL3. In the fourth quarter of 1994, control over the standard was transferred to the X/Open Company, which significantly expanded and updated it. The X/Open CLI interface is a superset of the ISO SQL CLI. References External links Online definition of CLI at The Open Group webpage SQL Open Group standards
https://en.wikipedia.org/wiki/Junior%20%28chess%29
Junior is a computer chess program written by the Israeli programmers Amir Ban and Shai Bushinsky. Grandmaster Boris Alterman assisted, in particular with the opening book. Junior can take advantage of multiple processors, taking the name Deep Junior when competing this way in tournaments. According to Bushinsky, one of the innovations of Junior over other chess programs is the way it counts moves. Junior counts orthodox, ordinary moves as two moves, while it counts interesting moves as only one move, or even less. In this way interesting variations are analyzed more meticulously than less promising lines. This seems to be a generalization of search extensions already used by other programs. Another approach its designers claim to use is 'opponent modeling'; Junior might play moves that are not objectively the strongest but that exploit the weaknesses of the opponent. According to Don Dailey ″It has some evaluation that can sting if it's in the right situation—that no other program has.″ Results In 2003 Deep Junior played a six-game match against Garry Kasparov, which resulted in a 3–3 tie. It won a 2006 rapid game against Teimour Radjabov. In June 2007, Deep Junior won the "ultimate computer chess challenge" organized by FIDE, defeating Deep Fritz 4–2. These programs opted out of the World Computer Chess Championship, which was held at the same time and won by Rybka with a score of 10/11. Junior won the World Microcomputer Chess Championship in 1997 and 2001 and the World Computer Chess Championship in 2002, 2004, 2006, 2009, 2011 and 2013; both organized by the International Computer Games Association. References External links Deep Junior's 2006 Computer Chess Championship games Chess engines
https://en.wikipedia.org/wiki/Carmageddon
Carmageddon is a vehicular combat video game released for personal computers in 1997. It was produced by Stainless Games and published by Interplay Productions and Sales Curve Interactive. It was ported to other platforms, and spawned a series. In 2011, Stainless Games obtained the rights to Carmageddon from Square Enix Europe. iOS and Android ports were released in 2012 and 2013, respectively. THQ Nordic acquired the rights to the Carmageddon series from Stainless Games in December 2018. Gameplay The player races a vehicle against several other computers controlled competitors in various settings, including city, mine, and industrial areas. The player has a certain amount of time to complete each race, more time may be gained by collecting bonuses, damaging the competitors' cars, or by running over pedestrians. Races are completed by either completing the course as one would a normal racing game, "wasting" (wrecking) all other race cars, or killing all pedestrians on the level. The game includes thirty-six racetracks, played across eleven different locations. The game features three instrumental remixes from Fear Factory's album of 1995, Demanufacture. Development The game that became Carmageddon started out as 3D Destruction Derby, a banger racing sim prototyped by Stainless Software. This was signed by SCi in 1995, with the condition that it be made into a licensed game to guarantee popularity. Initially, SCi wanted to use the Mad Max license, but was unable to find out who owned the rights to the franchise. It instead secured the Death Race 2000 license, as a sequel to the original film was at that time planned. According to head programmer Patrick Buckland, the initial concept stemmed from team members getting bored while playing racing games, leading them to ultimately drive in the wrong direction and crash into other cars. They decided it made sense to create a game where this was the objective to begin with. Shortly after, Psygnosis released a game with this same concept, Destruction Derby. The notion of running over pedestrians was added to distinguish the game from Destruction Derby and arouse controversy. However, there had been a number of recent games which involved running over pedestrians, such as Quarantine and Die Hard Trilogy. Rob Henderson from SCi suggested increasing the potential for controversy by awarding the player points for the pedestrian kills. The sequel to Death Race 2000 was later cancelled, but by this point SCi were impressed enough by Stainless's work that they felt Stainless could try creating their own intellectual property. The name Carmageddon was coined, and development proceeded with the designers allowed unusually free rein with regard to the content of the game. The game uses the BRender engine, which Stainless Software were already thoroughly familiar with; one of their previous contracts was to port BRender to Macintosh and build the corresponding tools and demos. The PlayStation conversion was
https://en.wikipedia.org/wiki/Groff
Groff may refer to: Groff (surname) groff (software), a typesetting computer program Groff (lychee), a variety of lychee fruit tree Groff v. DeJoy, a United States Supreme Court case regarding religious liberty decided as part of the 2022 term See also Graf (disambiguation) Graff (disambiguation) Grof (disambiguation)
https://en.wikipedia.org/wiki/Nroff
nroff (short for "new roff") is a text-formatting program on Unix and Unix-like operating systems. It produces output suitable for simple fixed-width printers and terminal windows. It is an integral part of the Unix help system, being used to format man pages for display. nroff and the related troff were both developed from the original roff. While nroff was intended to produce output on terminals and line printers, troff was intended to produce output on typesetting systems. Both used the same underlying markup and a single source file could normally be used by nroff or troff without change. History nroff was written by Joe Ossanna for Version 2 Unix, in Assembly language and then ported to C. It was a descendant of the RUNOFF program from CTSS, the first computerized text-formatting program, and is a predecessor of the Unix troff document processing system. There is also a free software version of nroff in the groff package. Variants The Minix operating system, among others, uses a clone of nroff called cawf by Vic Abell, based on awf, the Amazingly Workable Formatter designed in awk by Henry Spencer. These are not full replacements for the nroff/troff suite of tools, but are sufficient for display and printing of basic documents and manual pages. In addition, a simplified version of nroff is available in Ratfor source code form as an example in the book Software Tools by Brian Kernighan and P. J. Plauger. See also troff groff TeX LaTeX man page Lout References External links source code for Henry Spencer's AWF troff/nroff quick reference nroff source code in Illumos. Explanation by Bryan Cantrill Troff Assembly language software Markup languages Unix text processing utilities
https://en.wikipedia.org/wiki/Perceptron
In machine learning, the perceptron (or McCulloch-Pitts neuron) is an algorithm for supervised learning of binary classifiers. A binary classifier is a function which can decide whether or not an input, represented by a vector of numbers, belongs to some specific class. It is a type of linear classifier, i.e. a classification algorithm that makes its predictions based on a linear predictor function combining a set of weights with the feature vector. History The perceptron was invented in 1943 by Warren McCulloch and Walter Pitts. The first hardware implementation was Mark I Perceptron machine built in 1957 at the Cornell Aeronautical Laboratory by Frank Rosenblatt, funded by the Information Systems Branch of the United States Office of Naval Research and the Rome Air Development Center. It was first publicly demonstrated on 23 June 1960. The machine was "part of a previously secret four-year NPIC [the US' National Photographic Interpretation Center] effort from 1963 through 1966 to develop this algorithm into a useful tool for photo-interpreters". Rosenblatt described the details of the perceptron in a 1958 paper. His organization of a perceptron is constructed of three kinds of cells ("units"): AI, AII, R, which stand for "projection", "association" and "response". Mark I Perceptron machine The perceptron was intended to be a machine, rather than a program, and while its first implementation was in software for the IBM 704, it was subsequently implemented in custom-built hardware as the "Mark I perceptron", designed for image recognition. The machine is currently in Smithsonian National Museum of American History. The Mark I Perceptron has 3 layers. An array of 400 photocells arranged in a 20x20 grid, named "sensory units" (S-units), or "input retina". Each S-unit can connect to up to 40 A-units. A hidden layer of 512 perceptrons, named "association units" (A-units). An output layer of 8 perceptrons, named "response units" (R-units). Rosenblatt called this three-layered perceptron network the alpha-perceptron, to distinguish it from other perceptron models he experimented with. The S-units are connected to the A-units randomly (according to a table of random numbers) via a plugboard (see photo), to "eliminate any particular intentional bias in the perceptron". The connection weights are fixed, not learned. The A-units are connected to the R-units, with adjustable weights encoded in potentiometers, and weight updates during learning were performed by electric motors.The hardware details are in an operators' manual. In a 1958 press conference organized by the US Navy, Rosenblatt made statements about the perceptron that caused a heated controversy among the fledgling AI community; based on Rosenblatt's statements, The New York Times reported the perceptron to be "the embryo of an electronic computer that [the Navy] expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence." Rosenblatt des
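The perceptron learning rule itself is small enough to sketch in Python; this is a generic textbook version of a single threshold unit, not a description of the Mark I hardware or of Rosenblatt's multi-unit alpha-perceptron, and the parameter values are illustrative.

def train_perceptron(samples, epochs=10, learning_rate=1.0):
    """Train one threshold unit; samples is a list of (feature_vector, label) pairs with labels 0 or 1."""
    weights = [0.0] * len(samples[0][0])
    bias = 0.0
    for _ in range(epochs):
        for features, label in samples:
            activation = sum(w * x for w, x in zip(weights, features)) + bias
            prediction = 1 if activation > 0 else 0
            error = label - prediction                      # 0 if correct, otherwise +1 or -1
            weights = [w + learning_rate * error * x for w, x in zip(weights, features)]
            bias += learning_rate * error
    return weights, bias

# Learning logical AND, a linearly separable function the perceptron can represent.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train_perceptron(data)
print([1 if sum(w * x for w, x in zip(weights, f)) + bias > 0 else 0 for f, _ in data])   # [0, 0, 0, 1]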
https://en.wikipedia.org/wiki/Joe%20Ossanna
Joseph Frank Ossanna, Jr. (December 10, 1928 – November 28, 1977) was an electrical engineer and computer programmer who worked as a member of the technical staff at the Bell Telephone Laboratories in Murray Hill, New Jersey. He became actively engaged in the software design of Multics (Multiplexed Information and Computing Service), a general-purpose operating system used at Bell. Education and career Ossanna received his Bachelor of Engineering (B.S.E.E.) from Wayne State University in 1952. At Bell Telephone Labs, Ossanna was concerned with low-noise amplifier design, feedback amplifier design, satellite look-angle prediction, mobile radio fading theory, and statistical data processing. He was also concerned with the operation of the Murray Hill Computation Center and was actively engaged in the software design of Multics. After learning how to program the PDP-7 computer, Ken Thompson, Dennis Ritchie, Joe Ossanna, and Rudd Canaday began to program the operating system that was designed earlier by Thompson (Unics, later named Unix). After writing the file system and a set of basic utilities, and assembler, a core of the Unix operating system was established. Doug McIlroy later wrote, "Ossanna, with the instincts of a motor pool sergeant, equipped our first lab and attracted the first outside users." When the team got a Graphic Systems CAT phototypesetter for making camera-ready copy of professional articles for publication and patent applications, Ossanna wrote a version of nroff that would drive it. It was dubbed troff, for typesetter roff. So it was that in 1973 he authored the first version of troff for Unix entirely written in PDP-11 assembly language. However, two years later, Ossanna re-wrote the code in the C programming language. He had planned another rewrite which was supposed to improve its usability but this work was taken over by Brian Kernighan. Ossanna was a member of the Association for Computing Machinery, Sigma Xi, and Tau Beta Pi. He died as a consequence of heart disease. Sometimes he is described as having died in a car accident, but this is a mistake. Selected publications Bogert, Bruce P.; Ossanna, Joseph F., "The heuristics of cepstrum analysis of a stationary complex echoed Gaussian signal in stationary Gaussian noise", IEEE Transactions on Information Theory, v.12, issue 3, July 19, 1966, pp. 373 – 380 Ossanna, Joseph F.; Kernighan, Brian W., Troff user's manual, UNIX Vol. II, W. B. Saunders Company, March 1990 Kernighan, B W; Lesk, M E; Ossanna, J F, Jr., Document preparation, in UNIX:3E system readings and applications. Volume I: UNIX:3E time-sharing system, Prentice-Hall, Inc., December 1986 Ossanna, Joseph F., "The current state of minicomputer software", AFIPS '72 (Spring): Proceedings of the May 16–18, 1972, spring joint computer conference, Publisher: ACM, May 1972 Ossanna, Joseph F., "Identifying terminals in terminal-oriented systems", Proceedings of the ACM second symposium on Problems in the op
https://en.wikipedia.org/wiki/JT%20Storage
JT Storage, Inc. (also known as JTS Corporation) was a maker of inexpensive IDE hard drives for personal computers based in San Jose, California. It was founded in 1994 by "Jugi" Tandon—the inventor of the double-sided floppy disk drive and founder of Tandon Corporation—and Tom Mitchell, a co-founder of Seagate and former president and Chief Operating Officer of both Seagate and Conner Peripherals. The company later reverse-merged with Jack Tramiel's Atari Corporation in 1996, sold all Atari assets to Hasbro Interactive in 1998 and was finally declared bankrupt in 1999. History Early years and products JTS initially focused on a new 3" form-factor drive for laptops. The 3" form factor allowed a larger drive capacity for laptops with the existing technology. Compaq was actively engaged in qualifying these drives and built several laptops with this form factor drive. Lack of a second source was a major obstacle for this new form factor to gain a foothold; JTS licensed the form factor to Western Digital to attempt to remedy this problem. Eventually, as 2.5" drives became cheaper to build, interest in the 3" form factor waned, and JTS and WD stopped the project in 1998. JTS by then had become a source of cheap, medium-performance 3.5" drives with 5400 RPM spindles. The drives, produced in a factory in India (the factory was in the Madras Export Processing Zone in the suburbs of the Southern Indian city of Madras, now known as Chennai), were known for poor reliability. Failure rates were very high and quality control was inconsistent: good drives were very good, still running after 5 years, whereas bad drives almost always failed within a few weeks. Because of their low-tier reputation, JTS drives were rare in brand-name PCs and most frequently turned up in home-built and whitebox PCs. Product lines included Palladium and Champion internal IDE hard drives. The basic design of their drives was done by Kalok for TEAC in the early 1990s. TEAC used the design as part of a removable HDD system, which was also sold under the Kalok name. After Kalok failed in 1994, JTS hired its founder as their chief technical officer, and licensed the patents involved from TEAC and Pont Peripherals. Merger with Atari Corporation and demise On February 13, 1996, JTS announced a reverse merger with former video game and home computer manufacturer, Atari Corporation. It was primarily a marriage of convenience; JTS had products but little cashflow, while Atari had money, primarily from a series of successful lawsuits earlier in the decade followed by good investments. However, with the failure of its Jaguar game console, losses mounting, and no other products to sell, Atari expected to run out of money within two years. Within a few months of the merger becoming official on July 30, all former Atari employees were either dismissed or relocated to JTS's headquarters. Atari's remaining inventory of Jaguar products proved difficult to get rid of, even at liquidation price
https://en.wikipedia.org/wiki/Roff%20%28software%29
roff is a typesetting markup language. As the first Unix text-formatting computer program, it is a predecessor of the nroff and troff document processing systems. Roff was a Unix version of the runoff text-formatting program from Multics, which was a descendant of RUNOFF for CTSS (the first computerized text-formatting application). History CTSS roff is a descendant of the RUNOFF program by Jerry Saltzer, which ran on CTSS. Douglas McIlroy and Robert Morris wrote runoff for Multics in BCPL based on Saltzer's program written in MAD assembler. Their program in turn was "transliterated" by Ken Thompson into PDP-7 assembler language for his early Unix operating system, circa 1970. When the first PDP-11 was acquired for Unix in late 1970, the justification cited to management for the funding required was that it was to be used as a word processing system, and so roff was quickly transliterated again, into PDP-11 assembly, in 1971. roff printed the man pages for Versions 1 through 3 of Unix, and when the Bell Labs patent department began using it, it became the first Unix application with an outside client. Dennis Ritchie noted that the ability to rapidly modify roff (because it was locally written software) to provide special features was an important factor in leading to the adoption of Unix by the patent department to fill its word processing needs. This in turn gave UNIX enough credibility inside Bell Labs to secure the funding to purchase one of the first PDP-11/45s produced. See also nroff troff groff References Sources D. M. Ritchie, The Evolution of the UNIX Time-sharing System (AT&T Bell Laboratories Technical Journal, Vol. 63, No. 8, October 1984) External links roff - Concepts and history of roff typesetting Typesetting software
https://en.wikipedia.org/wiki/GNU%20Libtool
In computer programming, GNU Libtool is a software development tool, part of the GNU build system, consisting of a shell script created to address the software portability problem when compiling shared libraries from source code. It hides the differences between computing platforms for the commands which compile shared libraries. It provides a command-line interface that is identical across platforms and it executes the platform's native commands. Rationale Different operating systems handle shared libraries differently. Some platforms do not use shared libraries at all. It can be difficult to make a software program portable: the C compiler differs from system to system; certain library functions are missing on some systems; header files may have different names. Libtool helps manage the creation of static and dynamic libraries on various Unix-like operating systems. Libtool accomplishes this by abstracting the library-creation process, hiding differences between various systems (e.g. Linux systems vs. Solaris). GNU Libtool is designed to simplify the process of compiling a computer program on a new system, by "encapsulating both the platform-specific dependencies, and the user interface, in a single script". When porting a program to a new system, Libtool is designed so the porter need not read low-level documentation for the shared libraries to be built, rather just run a configure script (or equivalent). Use Libtool is used by Autoconf and Automake, two other portability tools in the GNU build system. It can also be used directly. Clones and derivatives Since GNU Libtool was released, other free software projects have created drop-in replacements under different software licenses. See also GNU Compiler Collection GNU build system pkg-config References External links Autobook homepage Autotools Tutorial Avoiding libtool minefields when cross-compiling Autotools Mythbuster Compiling tools Libtool Free computer libraries Cross-platform software
https://en.wikipedia.org/wiki/Pan%20%28newsreader%29
Pan is a news client for multiple operating systems, developed by Charles Kerr and others. It supports offline reading, multiple servers, multiple connections, fast (indexed) article header filtering and mass saving of multi-part attachments encoded in uuencode, yEnc and base64; images in common formats can be viewed inline. Pan is free software available for Linux, FreeBSD, NetBSD, OpenBSD, OpenSolaris, and Windows. Pan is popular for its large feature set. It passes the Good Netkeeping Seal of Approval 2.0 set of standards for newsreaders. Name The name Pan originally stood for Pimp-ass newsreader. As Pan became an increasingly popular and polished application, the full name was perceived to be unprofessional and in poor taste, so references to it have been removed from the program and its website. See also List of Usenet newsreaders Comparison of Usenet newsreaders References External links Free Usenet clients BSD software GNOME Applications News aggregators that use GTK Software using the GPL license
https://en.wikipedia.org/wiki/News%20server
A news server is a collection of software used to handle Usenet articles. It may also refer to a computer itself which is primarily or solely used for handling Usenet. Access to Usenet is only available through news server providers. Articles and posts End users often use the term "posting" to refer to a single message or file posted to Usenet. For articles containing plain text, this is synonymous with an article. For binary content such as pictures and files, it is often necessary to split the content among multiple articles. Typically through the use of numbered Subject: headers, the multiple-article postings are automatically reassembled into a single unit by the newsreader. Most servers do not distinguish between single and multiple-part postings, dealing only at the level of the individual component articles. Headers and overviews Each news article contains a complete set of header lines, but in common use the term "headers" is also used when referring to the News Overview database. The overview is a list of the most frequently used headers, and additional information such as article sizes, typically retrieved by the client software using the NNTP command. Overviews make reading a newsgroup faster for both the client and server by eliminating the need to open each individual article to present them in list form. If non-overview headers are required, such as for when using a kill file, it may still be necessary to use the slower method of reading all the complete article headers. Many clients are unable to do this, and limit filtering to what is available in the summaries. News server attributes Among the operators and users of commercial news servers, common concerns are the continually increasing storage and network capacity requirements and their effects. Completion (the ability of a server to successfully receive all traffic), retention (the amount of time articles are made available to readers) and overall system performance. With the increasing demands, it is common for the transit and reader server roles to be subdivided further into numbering, storage and front end systems. These server farms are continually monitored by both insiders and outsiders, and measurements of these characteristics are often used by consumers when choosing a commercial news service. Speed Speed, in relation to Usenet, is how quickly a server can deliver an article to the user. The server that the user connects to is typically part of a server farm that has many servers dedicated to multiple tasks. How fast the data can move throughout this farm is the first thing that affects the speed of delivery. The speed of data traveling throughout the farm can be severely bottlenecked through hard drive operations. Retrieving the article and overview information can cause massive stress on hard drives. To combat this, caching technology and cylindrical file storage systems have been developed. Once the farm is able to deliver the data to the network,
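As a concrete illustration of overview retrieval, the sketch below uses Python's nntplib (part of the standard library up to Python 3.12) to fetch overview summaries the way a newsreader would; overview data is normally served via the NNTP OVER/XOVER command. The host and newsgroup names are placeholders, not a real service.

```python
# Minimal sketch: fetch overview ("headers") records for the last 50 articles in a group.
# "news.example.com" and the group name are placeholders, not a real provider.
import nntplib

with nntplib.NNTP("news.example.com") as server:
    resp, count, first, last, name = server.group("comp.lang.python")
    # .over() issues the NNTP OVER/XOVER command and returns one summary per article:
    # subject, author, date, message-id, references, byte count, line count, ...
    resp, overviews = server.over((max(first, last - 49), last))
    for art_num, fields in overviews:
        print(art_num, fields.get("subject", ""))
```

This is exactly the pattern the passage describes: the client lists articles from the compact overview database and only opens individual articles (a much slower operation on the server side) when the user selects one.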
https://en.wikipedia.org/wiki/GNAT
GNAT is a free-software compiler for the Ada programming language which forms part of the GNU Compiler Collection (GCC). It supports all versions of the language, i.e. Ada 2012, Ada 2005, Ada 95 and Ada 83. Originally its name was an acronym that stood for GNU NYU Ada Translator, but that name no longer applies. The front-end and run-time are written in Ada. History The GNAT project started in 1992 when the United States Air Force awarded New York University (NYU) a contract to build a free compiler for Ada to help with the Ada 9X standardization process. The 3-million-dollar contract required the use of the GNU GPL for all development, and assigned the copyright to the Free Software Foundation. The first official validation of GNAT occurred in 1995. In 1994 and 1996, the original authors of GNAT founded two sister companies, Ada Core Technologies in New York City and ACT-Europe (later AdaCore SAS) in Paris, to provide continuing development and commercial support of GNAT. The two companies always operated as one entity, but did not formally unify until 2012 as AdaCore. GNAT was initially released separately from the main GCC sources. On October 2, 2001, the GNAT sources were contributed to the GCC CVS repository. The last version to be released separately was GNAT 3.15p, based on GCC 2.8.1, on October 2, 2002. Starting with GCC 3.4, on major platforms the official GCC release is able to pass 100% of the ACATS Ada tests included in the GCC testsuite. By GCC 4.0, more exotic platforms were also able to pass 100% of the ACATS tests. License The compiler is licensed under the terms of the GNU GPL 3+ with GCC Runtime Library Exception. All versions leading up to and including 3.15p are licensed under the GMGPL offering similar runtime exceptions. The GMGPL license is GNU GPL 2 with a linking exception that permits software with licenses that are incompatible with the GPL to be linked with the output of Ada standard generic libraries that are supplied with GNAT without breaching the license agreement. Versions FSF GNAT is part of most major Linux or BSD distributions and is included in the main GCC Sources. GNAT Pro is a supported version of GNAT from AdaCore. In addition to FSF GNAT and AdaCore's GNAT Pro, AdaCore releases additional versions (GNAT-GPL, a public older version of GNAT Pro, and GNAT GAP, a version for AdaCore's educational programs). These AdaCore versions have the runtime exceptions removed, this requires software that is linked with the standard libraries to have GPL-compatible licenses to avoid being in breach of the license agreement. JGNAT was a GNAT version that compiled from the Ada programming language to Java bytecode. GNAT for dotNET is a GNAT version that compiles from the Ada programming language to Common Language Infrastructure for the .NET Framework and the free and open source implementations Mono and Portable.NET. See also APSE – a specification for a programming environment to support software deve
https://en.wikipedia.org/wiki/Monkey%27s%20Audio
Monkey's Audio is an algorithm and file format for lossless audio data compression. Lossless data compression does not discard data during the process of encoding, unlike lossy compression methods such as Advanced Audio Coding, MP3, Vorbis, and Opus. Therefore, it may be decompressed to a file that is identical to the source material. Similar to other lossless audio codecs, files encoded to Monkey's Audio are typically reduced to about half of the original size, with data transfer time and storage requirements being reduced accordingly. Comparisons Like any lossless compression scheme, Monkey's Audio format takes up several times as much space as lossy compression formats - typically, about twice as much as a 320 kbit/s bitrate MP3 file. The upside is that no data is lost compared to the input file, making lossless codecs suitable for transcoding, or simply taking up approximately half as much space as raw PCM data. Relative to FLAC, Apple Lossless Audio Codec, or WavPack, Monkey's Audio is slow to encode or decode files. While Monkey's Audio can achieve high compression ratios, the cost is a dramatic increase in requirements on the decoding end. Many older portable media players, and even older smartphones, have difficulty handling this. In comparison, most lossless codecs are asymmetric, meaning that the work done to achieve higher compression ratios, if selected by the user, slows down the encoding process, but has essentially no effect on the decoding requirements. Licensing On 10 August 2023, with the release of version 10.18, Monkey's Audio switched to the Open Source Initiative-approved 3-Clause BSD Licence. Other lossless codecs such as FLAC and WavPack are also available under open source licences, and are well supported in Linux distributions and in many applications. Since all of these formats are lossless, users can transcode between formats without generation loss. Supported platforms Officially, Monkey's Audio is available only for the Microsoft Windows platform. As of version 4.02 (19 January 2009) a DirectShow filter is distributed with the installer, allowing for compatibility with most media players running on the Windows operating system. Monkey's Audio is also supported on Linux and OS X using JRiver Media Center or Plex. A GPL-licensed version of the Monkey's Audio decoder has been independently written for Rockbox and is included in FFmpeg. This code also provides playback support in applications that use GStreamer, as well as DeaDBeeF. A number of players and rippers support the format as well. It is also available as a port and package on FreeBSD. Monkey's Audio files can be encoded and decoded on any platform which has a J2SE implementation, by the means of the unofficial JMAC library, which is free software licensed under the GNU LGPL. Hardware support Monkey's Audio is supported natively on all modern Cowon multimedia media players, the FiiO X Series and some Cayin digital audio players. On other hardwa
https://en.wikipedia.org/wiki/Demos
Demos may refer to: Computing DEMOS, a Soviet Unix-like operating system DEMOS (ISP), the first internet service provider in the USSR Demos Commander, an Orthodox File Manager for Unix-like systems plural for Demo (computer programming) Organizations Demos (UK think tank), London-based public policy research organisation and publisher Demos (U.S. think tank), a public policy research and advocacy organization DEMOS (Republika Srpska), a political party in Republika Srpska DEMOS (Montenegro), a parliamentary political party in Montenegro DEMOS (Slovenia), a coalition of democratic political parties in Slovenia Demos Helsinki, a think tank in Finland Demos Medical Publishing, a publisher of books on medical subjects Solidary Democracy, a political party in Italy Democracy and Solidarity Party, a political party in Romania Arts and entertainment Demos (film), a 1921 silent film Demos (novel), an 1886 novel by George Gissing Demos Journal, an Australian literary and political journal Music Demos, 1982-86, a set of recordings by Swedish pop artist Per Gessle Demos (Crosby, Stills & Nash album), a 2009 album by Crosby, Stills & Nash Demos (Edith Frost album), 2004 album by Edith Frost Demos (Imperial Drag album), 2005 album by Imperial Drag Demos (Matt Skiba album), a 2010 album by Matt Skiba The Demos (Jess Moskaluke album), 2021 album by Jess Moskaluke The Demos (Rebecca Hollweg album) Demos, an EP by the Ramones which features four of the five demos recorded with Alan Betrock in the summer of 1975 The Demos (Father John Misty EP) Cowboys from Hell: The Demos, by Pantera People Demos Chiang (born 1976), Taiwanese businessman Demos Goumenos (born 1978), Cypriot football midfielder Demos Shakarian (1913–1993), Christian businessman of Armenian origin from Los Angeles George Demos (born 1976), former U.S. Securities and Exchange Commission prosecutor, and former congressional candidate Other uses Demos, the ruling body of free citizens in ancient Greek city-states, such as Athens, a root of the word democracy Demos, the personification of the previous meaning, treated as a deity Demos, the Greek meaning as a rhetorical term in English Deme, (Greek: demos) a municipal subdivision of ancient Attica, Greece See also The Demos Remastered: Anthology 1, a Black 'n Blue compilation album Unlabeled - The Demos, a 2006 EP by the American singer Leah Andreone Eindhovense Studentenvereniging Demos, a student association in the Netherlands Deimos (moon), one of the two moons of Mars Demo (disambiguation) Deimos (disambiguation) Demoz (disambiguation) Demonstration (disambiguation)
https://en.wikipedia.org/wiki/Demogroup
Demogroups are teams of demosceners, who make computer based audio-visual works of art known as demos. Demogroups form a subculture collectively known as the demoscene. Groups frequently consist of students, young computer enthusiasts who spend days coding their demos. They often have a pseudonym (called a "handle" or "nick"), usually chained together with the name of their group (in formats like "Scener of Demo Group" or "Scener/DG"). Demosceners rarely use their real names in demoscene contexts. This is a tradition originating from the demoscene's roots, where small demos were distributed along with cracked software, usually computer games. Many demogroups have been founded by friends who already knew each other in real life. However, there have also been groups that have taken their form online via Bulletin Board Systems or the Internet. Perhaps the most important way for demogroups to communicate is IRC. Demosceners from different groups also meet each other in real life at demoparties and smaller meetings. Demogroups often bear resemblances to corporate companies: demogroups incorporate wordmarks, logos, catchphrases, and slogans for their promotion. It is very important for a demogroup to have good PR, and major groups have dedicated group organisers who are responsible for "managing the group's human resources", i.e. nag the members who slack off. Some groups also treat the recruitment of new members with great care, often applying "trial periods" in which the new member has to prove themself to be worthy. However these practices are often just intentional exaggeration (often tongue-in-cheek), to maintain an "elite" image for the group. A group is perhaps the most important social unit in the demoscene, and belonging to a group is often considered more or less synonymous to being a demoscener. Even individual productions, with no group activity involved, are typically associated with the group of the creative individual. There have even been several "one-man groups" when an individual demomaker with no group has wanted to release a demo or intro. Demography , the countries with the most active demogroups and demoparties are the Nordic countries (Norway, Sweden, Finland, and Denmark), Germany, the Netherlands, Hungary and France. Due to the community-like nature of the demoscene, multi-national demogroups are not uncommon. Demoscener functions Demosceners specialize themselves into various categories to be able to take part in the demomaking process. A few people are able to cross over between multiple archetypes (e.g. coder-musician, musician-designer), but this is by no means a trend. Coder The coder is the demogroup's programmer who creates the demo's software framework and is responsible for the actual realtime state of the demo. While some coders specialize in developing system-level functionality (such as providing wrappers and APIs for other coders to base their code on), others code effects which are usually visual represent
https://en.wikipedia.org/wiki/Demoscene
The demoscene is an international computer art subculture focused on producing demos: self-contained, sometimes extremely small, computer programs that produce audiovisual presentations. The purpose of a demo is to show off programming, visual art, and musical skills. Demos and other demoscene productions (graphics, music, videos, games) are shared at festivals known as demoparties, voted on by those who attend and released online. The scene started with the home computer revolution of the early 1980s, and the subsequent advent of software cracking. Crackers altered the code of computer games to remove copy protection, claiming credit by adding introduction screens of their own ("cracktros"). They soon started competing for the best visual presentation of these additions. Through the making of intros and stand-alone demos, a new community eventually evolved, independent of the gaming and software sharing scenes. Demos are informally classified into several categories, mainly of size-restricted intros. The most typical competition categories for intros are the 64k intro and the 4K intro, where the size of the executable file is restricted to 65536 and 4096 bytes, respectively. In other competitions the choice of platform is restricted; only 8-bit computers like the Atari 800 or Commodore 64, or the 16-bit Amiga or Atari ST. Such restrictions provide a challenge for coders, musicians, and graphics artists, to make a device do more than was intended in its original design. History The earliest computer programs that have some resemblance to demos and demo effects can be found among the so-called display hacks. Display hacks predate the demoscene by several decades, with the earliest examples dating back to the early 1950s. Demos in the demoscene sense began as software crackers' "signatures", that is, crack screens and crack intros attached to software whose copy protection was removed. The first crack screens appeared on the Apple II in the early 1980s, and they were often nothing but plain text screens crediting the cracker or their group. Gradually, these static screens evolved into increasingly impressive-looking introductions containing animated effects and music. Eventually, many cracker groups started to release intro-like programs separately, without being attached to unlicensed software. These programs were initially known by various names, such as letters or messages, but they later came to be known as demos. In 1980, Atari, Inc. began using a looping demo with visual effects and music to show the features of the Atari 400/800 computers in stores. At the 1985 Consumer Electronics Show, Atari showed a demoscene-style demo for its latest 8-bit computers that alternated between a 3D walking robot and a flying spaceship, each with its own music, and animating larger objects than typically seen on those systems; the two sections were separated by the Atari logo. The program was released to the public. Also in 1985, a large, spinning, chec
https://en.wikipedia.org/wiki/Henry%20Spencer
Henry Spencer (born 1955) is a Canadian computer programmer and space enthusiast. He wrote "regex", a widely used software library for regular expressions, and co-wrote C News, a Usenet server program. He also wrote The Ten Commandments for C Programmers. He is coauthor, with David Lawrence, of the book Managing Usenet. While working at the University of Toronto he ran the first active Usenet site outside the U.S., starting in 1981. His records from that period were eventually acquired by Google to provide an archive of Usenet in the 1980s. The first international Usenet site was run in Ottawa, in 1981; however, it is generally not remembered, as it served merely as a read-only medium. Later in 1981, Spencer acquired a Usenet feed from Duke University, and brought "utzoo" online; the earliest public archives of Usenet date from May 1981 as a result. The small size of Usenet in its youthful days, and Spencer's early involvement, made him a well-recognised participant; this is commemorated in Vernor Vinge's 1992 novel A Fire Upon the Deep. The novel featured an interstellar communications medium remarkably similar to Usenet, down to the author including spurious message headers; one of the characters who appeared solely through postings to this was modeled on Spencer (and, slightly obliquely, named for him). He is also credited with the claim that "Those who do not understand Unix are condemned to reinvent it, poorly." Preserving Usenet In mid-December 2001, Google unveiled its improved Usenet archives, which now go more than a decade deeper into the Internet's past than did the millions of posts that the company had originally acquired when it bought an existing archive called Deja News. Between 1981 and 1991, while running the zoology department's computer system at the University of Toronto, Spencer copied more than 2 million Usenet messages onto magnetic tapes. The 141 tapes wound up at the University of Western Ontario, where Google's Michael Schmidt tracked them down and, with the help of David Wiseman and others, got them transferred onto disks and into Google's archives. Free software contributions Henry Spencer helped Geoff Collyer write C News in 1987. At around the same time he wrote a non-proprietary replacement for regex(3), the Unix library for handling regular expressions, and made it freely available; his API followed that of Eighth Edition Research Unix. Spencer's library has been used in many software packages, including Tcl, MySQL (prior to MySQL 8.0.4), and PostgreSQL, as well as being adapted for others, including early versions of Perl. Circa 1993, Spencer donated a second version of his RE library to 4.4BSD, following the POSIX standard for regular expressions. Spencer was technical lead on the FreeS/WAN project, implementing an IPsec cryptographic protocol stack for Linux. He also wrote 'aaa' (Amazing Awk Assembler), which is one of the longest and most complex programs ever written in the awk programming language.
https://en.wikipedia.org/wiki/Network%20News%20Transfer%20Protocol
The Network News Transfer Protocol (NNTP) is an application protocol used for transporting Usenet news articles (netnews) between news servers, and for reading/posting articles by the end user client applications. Brian Kantor of the University of California, San Diego, and Phil Lapsley of the University of California, Berkeley, wrote RFC 977, the specification for the Network News Transfer Protocol, in March 1986. Other contributors included Stan O. Barber from the Baylor College of Medicine and Erik Fair of Apple Computer. Usenet was originally designed based on the UUCP network, with most article transfers taking place over direct point-to-point telephone links between news servers, which were powerful time-sharing systems. Readers and posters logged into these computers reading the articles directly from the local disk. As local area networks and Internet participation proliferated, it became desirable to allow newsreaders to be run on personal computers connected to local networks. The resulting protocol was NNTP, which resembled the Simple Mail Transfer Protocol (SMTP) but was tailored for exchanging newsgroup articles. A newsreader, also known as a news client, is a software application that reads articles on Usenet, either directly from the news server's disks or via NNTP. The well-known TCP port 119 is reserved for NNTP. Well-known TCP port 433 (NNSP) may be used when doing a bulk transfer of articles from one server to another. When clients connect to a news server with Transport Layer Security (TLS), TCP port 563 is often used. This is sometimes referred to as NNTPS. Alternatively, a plain-text connection over port 119 may be changed to use TLS via the STARTTLS command. In October 2006, the IETF released RFC 3977, which updates NNTP and codifies many of the additions made over the years since RFC 977. At the same time, the IETF also released RFC 4642, which specifies the use of Transport Layer Security (TLS) via NNTP over STARTTLS. Network News Reader Protocol During an abortive attempt to update the NNTP standard in the early 1990s, a specialized form of NNTP intended specifically for use by clients, NNRP, was proposed. This protocol was never completed or fully implemented, but the name persisted in InterNetNews's (INN) nnrpd program. As a result, the subset of standard NNTP commands useful to clients is sometimes still referred to as "NNRP". NNTP server software Leafnode InterNetNews C News Apache James Synchronet yProxy DIABLO, a backbone news transit system, designed to replace INND on backbone machines. See also List of Usenet newsreaders External links Kantor, Brian and Phil Lapsley. "Network News Transfer Protocol: A Proposed Standard for the Stream-Based Transmission of News." 1986. Horton, Mark, and R. Adams. "Standard for Interchange of USENET Messages." 1987. Barber, Stan, et al. "Common NNTP Extensions." 2000 IETF nntpext Working Group Feather, Clive. "Network News Transfer Protocol (NNTP)." 2006 Murchison, K., J.
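Because NNTP is a line-oriented text protocol much like SMTP, a basic exchange on TCP port 119 can be shown with nothing more than a socket. The sketch below is illustrative only: the hostname is a placeholder, only single-line responses are read, and a real client should prefer TLS on port 563 (or STARTTLS on 119) where the server supports it.

```python
# Sketch of a raw NNTP exchange over TCP port 119; the hostname is a placeholder.
import socket

HOST = "news.example.com"

def send(sock, line):
    sock.sendall((line + "\r\n").encode("ascii"))   # NNTP commands are CRLF-terminated text

with socket.create_connection((HOST, 119), timeout=10) as sock:
    reader = sock.makefile("rb")                        # buffered line reader for replies
    print(reader.readline().decode().rstrip())          # greeting: "200 ..." or "201 ..."

    send(sock, "GROUP comp.lang.python")                # select a newsgroup
    print(reader.readline().decode().rstrip())          # "211 <count> <first> <last> <group>"

    send(sock, "QUIT")
    print(reader.readline().decode().rstrip())          # "205 closing connection"
```

Commands such as GROUP, ARTICLE and OVER return multi-line bodies terminated by a lone "." line, which a complete client must also parse; the sketch deliberately sticks to single-line status replies.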
https://en.wikipedia.org/wiki/Radio%20broadcasting
Radio broadcasting is the broadcasting of audio (sound), sometimes with related metadata, by radio waves to radio receivers belonging to a public audience. In terrestrial radio broadcasting the radio waves are broadcast by a land-based radio station, while in satellite radio the radio waves are broadcast by a satellite in Earth orbit. To receive the content the listener must have a broadcast radio receiver (radio). Stations are often affiliated with a radio network that provides content in a common radio format, either in broadcast syndication or simulcast, or both. Radio stations broadcast with several different types of modulation: AM radio stations transmit in AM (amplitude modulation), FM radio stations transmit in FM (frequency modulation), which are older analog audio standards, while newer digital radio stations transmit in several digital audio standards: DAB (Digital Audio Broadcasting), HD radio, DRM (Digital Radio Mondiale). Television broadcasting is a separate service that also uses radio frequencies to broadcast television (video) signals. History The earliest radio stations were radiotelegraphy systems and did not carry audio. For audio broadcasts to be possible, electronic detection and amplification devices had to be incorporated. The thermionic valve (a kind of vacuum tube) was invented in 1904 by the English physicist John Ambrose Fleming. He developed a device he called an "oscillation valve" (because it passes current in only one direction). The heated filament, or cathode, was capable of thermionic emission of electrons that would flow to the plate (or anode) when it was at a higher voltage. Electrons, however, could not pass in the reverse direction because the plate was not heated and thus not capable of thermionic emission of electrons. Later known as the Fleming valve, it could be used as a rectifier of alternating current and as a radio wave detector. This greatly improved the crystal set which rectified the radio signal using an early solid-state diode based on a crystal and a so-called cat's whisker. However, what was still required was an amplifier. The triode (mercury-vapor filled with a control grid) was created on March 4, 1906, by the Austrian Robert von Lieben independent from that, on October 25, 1906, Lee De Forest patented his three-element Audion. It was not put to practical use until 1912 when its amplifying ability became recognized by researchers. By about 1920, valve technology had matured to the point where radio broadcasting was quickly becoming viable. However, an early audio transmission that could be termed a broadcast may have occurred on Christmas Eve in 1906 by Reginald Fessenden, although this is disputed. While many early experimenters attempted to create systems similar to radiotelephone devices by which only two parties were meant to communicate, there were others who intended to transmit to larger audiences. Charles Herrold started broadcasting in California in 1909 and was carryi
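The two analog modulation schemes named in this passage can be illustrated numerically: AM varies the carrier's envelope with the program audio, while FM varies the carrier's instantaneous frequency. The sketch below assumes NumPy; the carrier frequency, deviation, and sample rate are arbitrary illustrative values, not broadcast-band parameters.

```python
# Illustrative only: synthesize AM and FM waveforms for a 1 kHz audio tone.
import numpy as np

fs = 200_000                                   # sample rate (Hz)
t = np.arange(0, 0.01, 1 / fs)                 # 10 ms of signal
audio = np.sin(2 * np.pi * 1_000 * t)          # baseband "program" audio, 1 kHz
fc = 20_000                                    # carrier frequency (Hz), illustrative

# Amplitude modulation: the audio shapes the carrier's envelope.
m = 0.5                                        # modulation index
am = (1 + m * audio) * np.cos(2 * np.pi * fc * t)

# Frequency modulation: the audio shifts the carrier's instantaneous frequency.
kf = 5_000                                     # peak frequency deviation (Hz)
phase = 2 * np.pi * np.cumsum(audio) / fs      # running integral of the audio
fm = np.cos(2 * np.pi * fc * t + kf * phase)

print(am.shape, fm.shape)                      # two waveforms of identical length
```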
https://en.wikipedia.org/wiki/Spin%20network
In physics, a spin network is a type of diagram which can be used to represent states and interactions between particles and fields in quantum mechanics. From a mathematical perspective, the diagrams are a concise way to represent multilinear functions and functions between representations of matrix groups. The diagrammatic notation can thus greatly simplify calculations. Roger Penrose described spin networks in 1971. Spin networks have since been applied to the theory of quantum gravity by Carlo Rovelli, Lee Smolin, Jorge Pullin, Rodolfo Gambini and others. Spin networks can also be used to construct a particular functional on the space of connections which is invariant under local gauge transformations. Definition Penrose's definition A spin network, as described in Penrose (1971), is a kind of diagram in which each line segment represents the world line of a "unit" (either an elementary particle or a compound system of particles). Three line segments join at each vertex. A vertex may be interpreted as an event in which either a single unit splits into two or two units collide and join into a single unit. Diagrams whose line segments are all joined at vertices are called closed spin networks. Time may be viewed as going in one direction, such as from the bottom to the top of the diagram, but for closed spin networks the direction of time is irrelevant to calculations. Each line segment is labelled with an integer called a spin number. A unit with spin number n is called an n-unit and has angular momentum nħ/2, where ħ is the reduced Planck constant. For bosons, such as photons and gluons, n is an even number. For fermions, such as electrons and quarks, n is odd. Given any closed spin network, a non-negative integer can be calculated which is called the norm of the spin network. Norms can be used to calculate the probabilities of various spin values. A network whose norm is zero has zero probability of occurrence. The rules for calculating norms and probabilities are beyond the scope of this article. However, they imply that for a spin network to have nonzero norm, two requirements must be met at each vertex. Suppose a vertex joins three units with spin numbers a, b, and c. Then, these requirements are stated as: Triangle inequality: a must be less than or equal to b + c, b less than or equal to a + c, and c less than or equal to a + b. Fermion conservation: a + b + c must be an even number. For example, a = 3, b = 4, c = 6 is impossible since 3 + 4 + 6 = 13 is odd, and a = 3, b = 4, c = 9 is impossible since 9 > 3 + 4. However, a = 3, b = 4, c = 5 is possible since 3 + 4 + 5 = 12 is even, and the triangle inequality is satisfied. Some conventions use labellings by half-integers, with the condition that the sum a + b + c must be a whole number. Formal approach to definition Formally, a spin network may be defined as a (directed) graph whose edges are associated with irreducible representations of a compact Lie group and whose vertice
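The two vertex conditions just stated are easy to encode directly; the following sketch checks them for integer spin numbers and reproduces the article's own examples.

```python
def vertex_allowed(a, b, c):
    """Check Penrose's two conditions for a spin-network vertex joining
    units with integer spin numbers a, b, c (angular momentum n*hbar/2)."""
    triangle = a <= b + c and b <= a + c and c <= a + b   # triangle inequality
    even_sum = (a + b + c) % 2 == 0                       # "fermion conservation"
    return triangle and even_sum

# Examples from the text:
print(vertex_allowed(3, 4, 6))   # False: 3 + 4 + 6 = 13 is odd
print(vertex_allowed(3, 4, 9))   # False: 9 > 3 + 4 violates the triangle inequality
print(vertex_allowed(3, 4, 5))   # True:  the sum is even and the triangle inequality holds
```

Under the half-integer labelling convention also mentioned above, the same check applies with the spins doubled, since requiring a + b + c to be a whole number of half-integers is equivalent to requiring an even sum of the doubled values.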
https://en.wikipedia.org/wiki/Donald%20Becker
Donald Becker is an American computer programmer who wrote Ethernet drivers for the Linux operating system. Becker, in collaboration with Thomas Sterling, created the Beowulf clustering software while at NASA, to connect many inexpensive PCs to solve complex math problems typically reserved for classic supercomputers. For this work, Becker received the Gordon Bell Prize in 1997. Becker became the Chief Technology Officer (CTO) of Scyld Computer Corporation, a wholly owned subsidiary of Penguin Computing, a developer and supplier of Beowulf clusters. References External links Scyld Computer Corporation Free software programmers American chief technology officers Year of birth missing (living people) Living people
https://en.wikipedia.org/wiki/Bolo%20%281987%20video%20game%29
Bolo is a video game initially created for the BBC Micro computer by Stuart Cheshire in 1987, and was later ported by Cheshire to the Apple Macintosh. Although offered for sale for the BBC Micro, this version is now regarded as lost. It is a networked multiplayer game that simulates a tank battlefield. Currently, a Windows version known as Winbolo remains in operation and continues to have a small but active player base. Name According to the Bolo Frequently Asked Questions page, "Bolo is the Hindi word for communication. Bolo is about computers communicating on the network, and more importantly about humans communicating with each other, as they argue, negotiate, form alliances, agree strategies, etc." Another tank game with the same name was created for the Apple II in 1982. In the user manual, Cheshire wrote that this was "an unfortunate coincidence". Description Players are divided into two teams. Each player commands a tank that can be driven around a battlefield within an orthogonal, top-down view. The tank has a cannon, which fires forward, and it carries mines as a secondary weapon, which can be dropped while moving or be placed somewhere on the map. Tanks have a certain amount of "armor" (hit points), which is reduced by enemy shots. A tank is destroyed if its armor reaches zero or if it is driven into the sea. Cannon ammunition and mines can be refilled by going to a friendly "base". The bases also repair damage to tanks, but this depletes the base's armor and enemy tanks can kill you faster than a base can heal. Bases' ammunition and armor regenerate slowly, however wood must be gathered by the engineer. The goal of the game is to capture all of the bases on the map (and pillboxes). Neutral bases may be captured by driving one's tank over them. Hostile bases can be made neutral again by shooting them until their armor supply is reduced to zero. Another game element is the "pillbox". Pillboxes are initially neutral and will shoot at any tank that approaches them. Like the supply bases, pillboxes can be shot at until destroyed, after which a player can restore it, making it friendly. Unlike the bases, pillboxes can be moved around the map by the players. Inside the tank is an engineer aka builder, who places mines and moves pillboxes and is also nick-named "little green men" by the creator. The engineer can also perform building tasks, after collecting wood in a forest. The structures that can be built are roads, which speed up travel, boats, and walls, which act as a barrier. The engineer can be killed by enemies while out of the tank. Networking The Macintosh version of Bolo supported up to sixteen concurrent networked players, using AppleTalk over a Local Area Network, or UDP over the Internet. All AppleTalk network connection types were supported, including LocalTalk, EtherTalk, TokenTalk, and AppleTalk Remote Access. The current Windows version continues to support 16 players, who join via an active games page or the gam
https://en.wikipedia.org/wiki/UUCP
UUCP (Unix-to-Unix Copy) is a suite of computer programs and protocols allowing remote execution of commands and transfer of files, email and netnews between computers. A command named uucp is one of the programs in the suite; it provides a user interface for requesting file copy operations. The UUCP suite also includes uux (user interface for remote command execution), uucico (the communication program that performs the file transfers), uustat (reports statistics on recent activity), uuxqt (execute commands sent from remote machines), and uuname (reports the UUCP name of the local system). Some versions of the suite include uuencode/uudecode (convert 8-bit binary files to 7-bit text format and vice versa). Although UUCP was originally developed on Unix in the 1970s and 1980s, and is most closely associated with Unix-like systems, UUCP implementations exist for several non-Unix-like operating systems, including DOS, OS/2, OpenVMS (for VAX hardware only), AmigaOS, classic Mac OS, and even CP/M. History UUCP was originally written at AT&T Bell Laboratories by Mike Lesk. By 1978 it was in use on 82 UNIX machines inside the Bell system, primarily for software distribution. It was released in 1979 as part of Version 7 Unix. The first UUCP emails from the U.S. arrived in the United Kingdom in 1979 and email between the UK, the Netherlands and Denmark started in 1980, becoming a regular service via EUnet in 1982. The original UUCP was rewritten by AT&T researchers Peter Honeyman, David A. Nowitz, and Brian E. Redman around 1983. The rewrite is referred to as HDB or HoneyDanBer uucp, which was later enhanced, bug fixed, and repackaged as BNU UUCP ("Basic Network Utilities"). Each of these versions was distributed as proprietary software, which inspired Ian Lance Taylor to write a new free software version from scratch in 1991. Taylor UUCP was released under the GNU General Public License. Taylor UUCP addressed security holes which allowed some of the original network worms to remotely execute unexpected shell commands. Taylor UUCP also incorporated features of all previous versions of UUCP, allowing it to communicate with any other version and even use similar config file formats from other versions. UUCP was also implemented for non-UNIX operating systems, most notably DOS systems. Packages such as UUSLAVE/GNUUCP (John Gilmore, Garry Paxinos, Tim Pozar), UUPC/extended (Drew Derbyshire of Kendra Electronic Wonderworks) and FSUUCP (Christopher Ambler of IODesign), brought early Internet connectivity to personal computers, expanding the network beyond the interconnected university systems. FSUUCP formed the basis for many bulletin board system (BBS) packages such as Galacticomm's Major BBS and Mustang Software's Wildcat! BBS to connect to the UUCP network and exchange email and Usenet traffic. As an example, UFGATE (John Galvin, Garry Paxinos, Tim Pozar) was a package that provided a gateway between networks running Fidonet and UUCP protocols. FSUUCP was the only other implementation of
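The uuencode/uudecode step mentioned above — turning arbitrary 8-bit binary into 7-bit printable text that can survive text-only links — can be illustrated with the corresponding routines in Python's standard binascii module. This shows only the per-line transformation, not the full uuencode file format with its "begin"/"end" header lines.

```python
# Sketch of the 8-bit-to-7-bit transformation used by uuencode/uudecode,
# using Python's standard binascii module (which implements the same line encoding).
import binascii

payload = bytes(range(16)) * 2          # arbitrary 8-bit binary data (32 bytes, <= 45 per line)
encoded = binascii.b2a_uu(payload)      # one uuencoded line of printable ASCII
decoded = binascii.a2b_uu(encoded)

print(encoded)                          # length character followed by encoded text
assert decoded == payload               # the round trip is lossless
```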
https://en.wikipedia.org/wiki/MIX
MIX is a hypothetical computer used in Donald Knuth's monograph, The Art of Computer Programming (TAOCP). MIX's model number is 1009, which was derived by combining the model numbers and names of several contemporaneous, commercial machines deemed significant by the author. Also, "MIX" read as a Roman numeral is 1009. The 1960s-era MIX has since been superseded by a new (also hypothetical) computer architecture, MMIX, to be incorporated in forthcoming editions of TAOCP. Software implementations for both the MIX and MMIX architectures have been developed by Knuth and made freely available (named "MIXware" and "MMIXware", respectively). Several derivatives of Knuth's MIX/MMIX emulators also exist. GNU MDK is one such software package; it is free and runs on a wide variety of platforms. Their purpose for education is quite similar to John L. Hennessy's and David A. Patterson's DLX architecture, from Computer Organization and Design - The Hardware Software Interface. Architecture MIX is a hybrid binary–decimal computer. When programmed in binary, each byte has 6 bits (values range from 0 to 63). In decimal, each byte has 2 decimal digits (values range from 0 to 99). Bytes are grouped into words of five bytes plus a sign. Most programs written for MIX will work in either binary or decimal, so long as they do not try to store a value greater than 63 in a single byte. A word has the range −1,073,741,823 to 1,073,741,823 (inclusive) in binary mode, and −9,999,999,999 to 9,999,999,999 (inclusive) in decimal mode. The sign-and-magnitude representation of integers in the MIX architecture distinguishes between “−0” and “+0.” This contrasts with modern computers, whose two's-complement representation of integer quantities includes a single representation for zero, but whose range for a given number of bits includes one more negative integer than the number of representable positive integers. Registers There are 9 registers in MIX: rA: Accumulator (full word, five bytes and a sign). rX: Extension (full word, five bytes and a sign). rI1, rI2, rI3, rI4, rI5, rI6: Index registers (two bytes and a sign). rJ: Jump address (two bytes, always positive). A byte is assumed to be at least 6 bits. Most instructions can specify which of the "fields" (bytes) of a register are to be altered, using a suffix of the form (first:last). The zeroth field is the one-bit sign. MIX also records whether the previous operation overflowed, and has a one-trit comparison indicator (less than, equal to, or greater than). Memory and input/output The MIX machine has 4000 words of storage (each with 5 bytes and a sign), addressed from 0 to 3999. A variety of input and output devices are also included: Tape units (devices 0...7). Disk or drum units (devices 8...15). Card reader (device 16). Card punch (device 17). Line printer (device 18). Typewriter terminal (device 19). Paper tape (device 20). Instructions Each machine instruction in memory occupies one word, and co
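The word layout described above — a sign plus five 6-bit bytes, with fields addressed as (first:last) where field 0 is the sign — can be sketched in a few lines. The class below is an illustrative model, not part of Knuth's MIXware; note that the largest magnitude it can hold, 64^5 − 1, matches the ±1,073,741,823 binary-mode range quoted above.

```python
# Sketch of a binary-mode MIX word: a sign and five 6-bit bytes (each 0..63),
# with the (first:last) field convention in which field 0 is the sign.
class MixWord:
    def __init__(self, sign=+1, bytes_=(0, 0, 0, 0, 0)):
        assert sign in (+1, -1) and len(bytes_) == 5
        assert all(0 <= b <= 63 for b in bytes_)
        self.sign, self.bytes = sign, list(bytes_)

    def value(self):
        """Whole-word value: sign times the base-64 interpretation of the five bytes."""
        mag = 0
        for b in self.bytes:
            mag = mag * 64 + b
        return self.sign * mag

    def field(self, first, last):
        """Extract the (first:last) field; including field 0 includes the sign."""
        sign = self.sign if first == 0 else +1
        mag = 0
        for b in self.bytes[max(first, 1) - 1:last]:
            mag = mag * 64 + b
        return sign * mag

w = MixWord(-1, (1, 2, 3, 4, 5))
print(w.value())        # -17314053, i.e. -(((1*64+2)*64+3)*64+4)*64 - 5
print(w.field(0, 2))    # sign and bytes 1..2 -> -(1*64 + 2) = -66
print(w.field(4, 5))    # bytes 4..5 only     ->   4*64 + 5  =  261
```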
https://en.wikipedia.org/wiki/Overfitting
In mathematical modeling, overfitting is "the production of an analysis that corresponds too closely or exactly to a particular set of data, and may therefore fail to fit to additional data or predict future observations reliably". An overfitted model is a mathematical model that contains more parameters than can be justified by the data. In a mathematical sense, these parameters represent the degree of a polynomial. The essence of overfitting is to have unknowingly extracted some of the residual variation (i.e., the noise) as if that variation represented underlying model structure. Underfitting occurs when a mathematical model cannot adequately capture the underlying structure of the data. An under-fitted model is a model where some parameters or terms that would appear in a correctly specified model are missing. Under-fitting would occur, for example, when fitting a linear model to non-linear data. Such a model will tend to have poor predictive performance. The possibility of over-fitting exists because the criterion used for selecting the model is not the same as the criterion used to judge the suitability of a model. For example, a model might be selected by maximizing its performance on some set of training data, and yet its suitability might be determined by its ability to perform well on unseen data; then over-fitting occurs when a model begins to "memorize" training data rather than "learning" to generalize from a trend. As an extreme example, if the number of parameters is the same as or greater than the number of observations, then a model can perfectly predict the training data simply by memorizing the data in its entirety. (For an illustration, see Figure 2.) Such a model, though, will typically fail severely when making predictions. The potential for overfitting depends not only on the number of parameters and data but also the conformability of the model structure with the data shape, and the magnitude of model error compared to the expected level of noise or error in the data. Even when the fitted model does not have an excessive number of parameters, it is to be expected that the fitted relationship will appear to perform less well on a new data set than on the data set used for fitting (a phenomenon sometimes known as shrinkage). In particular, the value of the coefficient of determination will shrink relative to the original data. To lessen the chance or amount of overfitting, several techniques are available (e.g., model comparison, cross-validation, regularization, early stopping, pruning, Bayesian priors, or dropout). The basis of some techniques is either (1) to explicitly penalize overly complex models or (2) to test the model's ability to generalize by evaluating its performance on a set of data not used for training, which is assumed to approximate the typical unseen data that a model will encounter. Statistical inference In statistics, an inference is drawn from a statistical model, which has been selected via
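The "parameters versus observations" point can be made numerically: fit polynomials of increasing degree to one half of a noisy sample and evaluate them on the held-out half. The sketch below uses only NumPy, and the data and degrees are illustrative; when the degree-9 fit has as many coefficients as training points, it essentially memorizes them (near-zero training error) while its error on the held-out points grows.

```python
# Numerical illustration of overfitting: higher-degree polynomials keep lowering
# training error while error on held-out data eventually gets worse.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 20)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=x.size)   # noisy observations

train, test = np.arange(0, 20, 2), np.arange(1, 20, 2)           # 10 points each
for degree in (1, 3, 9):          # degree 9 has 10 coefficients = number of training points
    coeffs = np.polyfit(x[train], y[train], degree)              # fit on the training half
    pred = np.polyval(coeffs, x)
    train_mse = np.mean((pred[train] - y[train]) ** 2)
    test_mse = np.mean((pred[test] - y[test]) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.4f}, test MSE {test_mse:.4f}")
```

The degree-1 fit underfits (high error everywhere), degree 3 tracks the underlying sine reasonably on both halves, and degree 9 interpolates the noise exactly, which is the memorization behaviour described above.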
https://en.wikipedia.org/wiki/Paul%20Graham%20%28programmer%29
Paul Graham (; born 1964) is an English computer scientist, essayist, entrepreneur, investor, and author. He is best known for his work on the programming language Lisp, his former startup Viaweb (later renamed Yahoo! Store), co-founding the influential startup accelerator and seed capital firm Y Combinator, his essays, and Hacker News. He is the author of several computer programming books, including: On Lisp, ANSI Common Lisp, and Hackers & Painters. Technology journalist Steven Levy has described Graham as a "hacker philosopher". Graham was born in England, where he and his family maintain permanent residence. However he is also a citizen of the United States, where he was educated, lived, and worked until 2016. Education and early life Graham and his family moved to Pittsburgh, Pennsylvania in 1968, where he later attended Gateway High School. Graham gained interest in science and mathematics from his father who was a nuclear physicist. Graham received a Bachelor of Arts in philosophy from Cornell University (1986). He then attended Harvard University, earning Master of Science (1988) and Doctor of Philosophy (1990) degrees in computer science. He has also studied painting at the Rhode Island School of Design and at the Accademia di Belle Arti in Florence. Career In 1996, Graham and Robert Morris founded Viaweb and recruited Trevor Blackwell shortly after. They believed that Viaweb was the first application service provider. Viaweb's software, written mostly in Common Lisp, allowed users to make their own Internet stores. In the summer of 1998, after Jerry Yang received a strong recommendation from Ali Partovi, Viaweb was sold to Yahoo! for 455,000 shares of Yahoo! stock, valued at $49.6 million. After the acquisition, the product became Yahoo! Store. Graham later gained notice for his essays, which he posts on his personal website. Essay subjects range from Beating the Averages, which compares Lisp to other programming languages and introduced the hypothetical programming language Blub, to Why Nerds are Unpopular, a discussion of nerd life in high school. A collection of his essays has been published as Hackers & Painters by O'Reilly Media, which includes a discussion of the growth of Viaweb and what Graham perceives to be the advantages of Lisp to program it. In 2001, Graham announced that he was working on a new dialect of Lisp named Arc. It was released on 29 January 2008. Over the years since, he has written several essays describing features or goals of the language, and some internal projects at Y Combinator have been written in Arc, most notably the Hacker News web forum and news aggregator program. In 2005, after giving a talk at the Harvard Computer Society later published as How to Start a Startup, Graham along with Trevor Blackwell, Jessica Livingston, and Robert Morris started Y Combinator to provide seed funding to a large number of startups, particularly those started by younger, more technically oriented founders. Y
https://en.wikipedia.org/wiki/GNU%20Units
GNU Units is a cross-platform computer program for conversion of units of quantities. It has a database of measurement units, including esoteric and historical units. This, for instance, allows conversion of velocities specified in furlongs per fortnight, and pressures specified in tons per acre. Output units are checked for consistency with the input, allowing verification of conversion of complex expressions. History GNU Units was written by Adrian Mariano as an implementation of the units utility included with the Unix operating system. It was originally available under a permissive license. The GNU variant is distributed under the GPL, although the FreeBSD project maintains a free fork of units from before the license change. units (Unix utility) The original units program has been a standard part of Unix since the early Bell Laboratories versions. Source code for a version very similar to the original is available from the Heirloom Project. The GNU implementation GNU units includes several extensions to the original version, including Exponents can be written with ^ or **. Exponents can be larger than 9 if written with ^ or **. Rational and decimal exponents are supported. Sums of units can be converted. Conversions can be made to sums of units, termed unit lists (e.g., from degrees to degrees, minutes, and seconds). Units that measure reciprocal dimensions can be converted (e.g., S to megohm). Parentheses for grouping are supported. This sometimes allows more natural expressions, such as in the example given in Complex units expressions. Roots of units can be computed. Nonlinear units conversions (e.g., °F to °C) are supported. Functions such as sin, cos, ln, log, and log2 are included. A script for updating the currency conversions is included; the script requires Python. Units definitions, including nonlinear conversions and unit lists, are user extensible. The plain text database definitions.units is a good reference in itself, as it is extensively commented and cites numerous sources. Other implementations UDUNITS is a similar utility program, except that it has an additional programming library interface and date conversion abilities. UDUNITS is considered the de facto program and library for variable unit conversion for netCDF files. Version history GNU Units version 2.19 was released on 31 May 2019, to reflect the new 2019 revision of the SI; Version 2.14 released on 8 March 2017 fixed several minor bugs and improved support for building on Windows. Version 2.10, released on 26 March 2014, added support for rational exponents greater than one, and added the ability to save an interactive session in a file to provide a record of the conversions performed. Beginning with version 2.10, a 32-bit Windows binary distribution has been available on the project Web page (a 32-bit Windows port of version 1.87 has been available since 2008 as part of the GnuWin32 project). Version 2.02, released on 11 J
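The consistency checking described above amounts to dimensional analysis: a conversion is refused unless the input and output expressions reduce to the same dimensions. The following is a toy sketch of that idea, not GNU Units' actual implementation, using the standard definitions of the furlong (660 ft = 201.168 m) and the fortnight (14 days).

```python
# Toy dimensional-analysis sketch in the spirit of units(1): quantities carry exponents
# for (metre, kilogram, second), and conversion is refused when the dimensions differ.
from dataclasses import dataclass

@dataclass
class Quantity:
    factor: float                  # magnitude expressed in SI base units
    dims: tuple                    # exponents of (m, kg, s)

    def __mul__(self, other):
        return Quantity(self.factor * other.factor,
                        tuple(a + b for a, b in zip(self.dims, other.dims)))

    def __truediv__(self, other):
        return Quantity(self.factor / other.factor,
                        tuple(a - b for a, b in zip(self.dims, other.dims)))

metre     = Quantity(1.0, (1, 0, 0))
second    = Quantity(1.0, (0, 0, 1))
furlong   = Quantity(201.168, (1, 0, 0))        # 660 ft exactly
fortnight = Quantity(14 * 86_400.0, (0, 0, 1))  # 14 days

def convert(quantity, target):
    if quantity.dims != target.dims:
        raise ValueError("conformability error: dimensions do not match")
    return quantity.factor / target.factor

speed = furlong / fortnight
print(convert(speed, metre / second))   # ~0.000166 m/s per furlong/fortnight
```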
https://en.wikipedia.org/wiki/APC
APC most often refers to: Armoured personnel carrier, an armoured fighting vehicle APC or Apc may also refer to: Computing and technology Auto Power Control, a system of powering e.g. laser diodes Adaptive predictive coding, an analog-to-digital conversion system Advanced process control, a concept in control theory Alternative PHP Cache, a PHP accelerator program Angled physical contact, a technique used in optical fiber connections APC (magazine), Australian Personal Computer magazine APC III or Advanced Personal Computer, a 1983 NEC microcomputer American Power Conversion Corporation or simply APC, a manufacturer of uninterruptible power supplies, electronics peripherals and data center products, nowadays called APC by Schneider Electric APC-7 connector, a coaxial connector used for high frequency applications Application Program Command, a C1 control code Asynchronous procedure call, a function that executes asynchronously in the context of a specific thread on Microsoft Windows Atari Punk Console, a simple DIY noisemaker circuit VIA APC, a low-cost Android PC computer Science General Article processing charge, a fee charged to authors for publication in an open access journal Biology and medicine Activated protein C, an anti-coagulant and anti-inflammatory protein Adenomatous polyposis coli, a tumor suppressor protein encoded by the APC gene, mutations in which can cause colon cancer Anaphase-promoting complex, a ubiquitin ligase cell cycle protein Antigen-presenting cell, a type of cell that displays foreign antigens Amino Acid-Polyamine-Organocation (APC) Family of transport proteins APC Superfamily, a superfamily of transport proteins APC tablet, analgesic compound of aspirin, phenacetin, and caffeine Argon plasma coagulation, an endoscopic technique for controlling hemorrhage Atrial premature complexes, a type of premature heart beat or irregular heart beat or arrhythmia which start in the upper two chambers of the heart Chemistry Allophycocyanin, a protein from the light-harvesting phycobiliprotein family Allylpalladium chloride dimer, a chemical compound Ammonium perchlorate, a powerful oxidizer used in solid rocket motors Military Armoured personnel carrier, type of armoured military vehicle Armour-piercing capped, an anti-armor shell type Army Proficiency Certificate, the training syllabus of the Army Cadet Force B&T APC9 (Advanced Police Carbine), a submachine gun produced by B&T AG B&T APC45, a variant of the B&T APC9 Organizations Astroparticle and Cosmology Laboratory, a research laboratory located in Paris A.P.C., A French design group and clothing retailer African, Caribbean and Pacific Group of States, a group of countries African Paralympic Committee, a sports organization based in Cairo, Egypt Alianza Popular Conservadora, a political party in Nicaragua Alien Property Custodian, a former office within the Government of the United States All People's Congress, a political party i
https://en.wikipedia.org/wiki/Time%20domain
Time domain refers to the analysis of mathematical functions, physical signals or time series of economic or environmental data, with respect to time. In the time domain, the signal or function's value is known for all real numbers, for the case of continuous time, or at various separate instants in the case of discrete time. An oscilloscope is a tool commonly used to visualize real-world signals in the time domain. A time-domain graph shows how a signal changes with time, whereas a frequency-domain graph shows how much of the signal lies within each given frequency band over a range of frequencies. Though most precisely referring to time in physics, the term time domain may occasionally informally refer to position in space when dealing with spatial frequencies, as a substitute for the more precise term spatial domain. Origin of term The use of the contrasting terms time domain and frequency domain developed in U.S. communication engineering in the late 1940s, with the terms appearing together without definition by 1950. When an analysis uses the second or one of its multiples as a unit of measurement, then it is in the time domain. When analysis concerns the reciprocal units such as Hertz, then it is in the frequency domain. See also Frequency domain Fourier transform Laplace transform Blackman–Tukey transform References Time domain analysis
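A minimal sketch of the time-domain/frequency-domain distinction described above, assuming NumPy and an arbitrary 5 Hz test signal: the same samples are viewed once as values over time and once, via a Fourier transform, as content per frequency band.

```python
# Sketch: a 5 Hz sine wave viewed in the time domain (value at each instant)
# and in the frequency domain (magnitude per frequency bin). Values are illustrative.
import numpy as np

fs = 1000                       # sampling rate in hertz (assumed)
t = np.arange(0, 1.0, 1 / fs)   # one second of discrete time instants
x = np.sin(2 * np.pi * 5 * t)   # time-domain signal

spectrum = np.fft.rfft(x)                  # frequency-domain representation
freqs = np.fft.rfftfreq(len(x), 1 / fs)    # frequency of each bin in hertz

peak = freqs[np.argmax(np.abs(spectrum))]
print(f"time-domain samples: {len(x)}, dominant frequency: {peak:.1f} Hz")  # ~5.0 Hz
```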
https://en.wikipedia.org/wiki/Trinitron
Trinitron was Sony's brand name for its line of aperture-grille-based CRTs used in television sets and computer monitors, one of the first television systems to enter the market since the 1950s. Constant improvement in the basic technology and attention to overall quality allowed Sony to charge a premium for Trinitron devices into the 1990s. Patent protection on the basic Trinitron design ran out in 1996, and it quickly faced a number of competitors at much lower prices. The name Trinitron was derived from trinity, meaning the union of three, and tron from electron tube, after the way that the Trinitron combined the three separate electron guns of other CRT designs into one. History Color television Color television had been demoed since the 1920s starting with John Logie Baird's system. However, it was only in the late 1940s that it was perfected by both CBS and RCA. At the time, a number of systems were being proposed that used separate red, green and blue signals (RGB), broadcast in succession. Most systems broadcast entire frames in sequence, with a colored filter (or "gel") that rotated in front of an otherwise conventional black and white television tube. Because they broadcast separate signals for the different colors, all of these systems were incompatible with existing black and white sets. Another problem was that the mechanical filter made them flicker unless very high refresh rates were used. In spite of these problems, the United States Federal Communications Commission selected a sequential-frame 144 frame/s standard from CBS as their color broadcast in 1950. RCA worked along different lines entirely, using the luminance-chrominance system. This system did not directly encode or transmit the RGB signals; instead it combined these colors into one overall brightness figure, the "luminance". Luminance closely matched the black and white signal of existing broadcasts, allowing it to be displayed on existing televisions. This was a major advantage over the mechanical systems being proposed by other groups. Color information was then separately encoded and folded into the signal as a high-frequency modification to produce a composite video signal – on a black and white television this extra information would be seen as a slight randomization of the image intensity, but the limited resolution of existing sets made this invisible in practice. On color sets the signal would be extracted, decoded back into RGB, and displayed. Although RCA's system had enormous benefits, it had not been successfully developed because it was difficult to produce the display tubes. Black and white TVs used a continuous signal and the tube could be coated with an even deposit of phosphor. With the compatible color encoding scheme originally developed by Georges Valensi in 1938, the color was changing continually along the line, which was far too fast for any sort of mechanical filter to follow. Instead, the phosphor had to be broken down into a discrete pat
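The luminance-chrominance scheme described above can be sketched numerically: R, G and B are folded into a single luminance value (what a black-and-white set displays) plus two color-difference signals. The BT.601 weighting coefficients below are an assumption for illustration; they are not given in the article.

```python
# Sketch of luminance-chrominance encoding: one brightness signal plus two
# color-difference signals. BT.601 weights are illustrative assumptions.
def rgb_to_luma_chroma(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance: the compatible "brightness" signal
    cb = b - y                              # blue color-difference
    cr = r - y                              # red color-difference
    return y, cb, cr

print(rgb_to_luma_chroma(1.0, 0.0, 0.0))    # pure red: low luma, strong Cr
```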
https://en.wikipedia.org/wiki/Egon%20Zakraj%C5%A1ek
Egon Zakrajšek (July 7, 1941 – September 2002) was a Slovene mathematician and computer scientist. Zakrajšek was born in Ljubljana, SFR Yugoslavia (today Slovenia). He was orphaned before he started school. He attended elementary school and gymnasium in Jesenice, where he was a good student and soon showed his talent. He graduated in technical mathematics at the Department of Mathematics and Physics of the then Faculty of Natural Sciences and Technology (FNT) of the University of Ljubljana. He received his master's degree from the University of Zagreb for his work Numerična realizacija Ritzovega procesa (Numerical realization of the Ritz process) and his doctorate in 1978 in Ljubljana with his dissertation O invariantni vložitvi pri reševanju diferencialnih enačb (On invariant embedding in the solution of differential equations). Professor Zakrajšek was one of the pioneers of computer science in Slovenia. He became an expert on the first computers of the University of Ljubljana, the Zuse Z-23 and its successor, the IBM 1130, and later participated in the development of programming languages, tools and operating systems. At the same time he wrote textbooks and manuals for them: for Z-23 assembly, Algol, Fortran, Algol 68, Pascal, and for the domestically developed Structran. In 1982 he moved to the United States, where he became the software manager at the Cromemco company. In 1994 he returned to his homeland, where he again worked as a professor. With his advocacy of the C language and open operating systems, that is, Unix and Linux, he helped to modernize the teaching of computer science. He also became an expert in TeX, LaTeX and MATLAB. Besides his computer science skills, he was an excellent mathematician with a broad range of interests. He taught and solved problems in many fields: the use of mathematics in the natural and social sciences, statistics, mechanics, classical applied mathematics, discrete mathematics, graph and network theory, linear programming, operations research, and numerical analysis. References Marija Vencelj, Umrl je prof. dr. Egon Zakrajšek (Professor Dr. Egon Zakrajšek has died), Obzornik mat. fiz. 49 (2002) 6, pp. 184–186. 1941 births 2002 deaths Scientists from Ljubljana Slovenian computer scientists 20th-century Slovenian mathematicians Faculty of Science, University of Zagreb alumni University of Ljubljana alumni University of Zagreb alumni Academic staff of the University of Ljubljana
https://en.wikipedia.org/wiki/Mighty%20Morphin%20Power%20Rangers
Mighty Morphin Power Rangers (MMPR) is an American superhero television series that premiered on August 28, 1993, on the Fox Kids programming block. It is the first entry of the Power Rangers franchise, and became a 1990s pop culture phenomenon along with a large line of toys, action figures, and other merchandise. The show adapted stock footage from the Japanese TV series Kyōryū Sentai Zyuranger (1992–1993), which was the 16th installment of Toei's Super Sentai franchise. The second and third seasons of the show drew elements and stock footage from Gosei Sentai Dairanger and Ninja Sentai Kakuranger, respectively, though the Zyuranger costumes were still used for the lead cast in these two seasons. Only the mecha and the Kiba Ranger (White Ranger) costume from Dairanger were featured in the second season, while only the Kakuranger mecha was featured in the third season, though the Kakuranger costumes were later used for the mini-series Mighty Morphin Alien Rangers. The series was produced by MMPR Productions and distributed by Saban Entertainment, while the show's merchandise was produced and distributed by Bandai Entertainment. While a global storyline would continue in Power Rangers Zeo, Power Rangers Turbo, Power Rangers in Space, and Power Rangers Lost Galaxy (which could be considered respectively and unofficially as the fourth, fifth, sixth, and seventh seasons of the original series), the subsequent series would not be sequels or spin-offs in the traditional sense, having self-contained plots with no strong connection to the original series beyond taking place in the same universe rather than being reboots. One exception would be Power Rangers Dino Thunder, which could be considered a continuation of the original classic series because the character Tommy Oliver (the Green Ranger and later White Ranger, portrayed by Jason David Frank) is part of the regular team of Rangers of that series' generation (in some of the other series the character made only guest appearances). Another series connected to the original classic series would be Power Rangers Operation Overdrive, as one of the main villains of this series, Thrax, is the son of Rita Repulsa and Lord Zedd, main villains of the classic series. In 2010, a remake of Mighty Morphin Power Rangers, with a revised new look of the original 1993 logo, comic book-referenced graphics, and extra alternative visual effects, was broadcast on ABC Kids, and Bandai produced brand new toys to coincide with the series. Only the first 32 of season one's 60 episodes were remade. It was the final Power Rangers season to air on ABC Kids as Haim Saban re-acquired the franchise from Disney, who took over the rights in 2002. With the beginning of Power Rangers Samurai in 2011, the franchise moved to Nickelodeon. The original series also spawned the feature film Mighty Morphin Power Rangers: The Movie, released by 20th Century Fox on June 30, 1995. Despite mixed reviews, it was a mod
https://en.wikipedia.org/wiki/Differentiated%20services
Differentiated services or DiffServ is a computer networking architecture that specifies a mechanism for classifying and managing network traffic and providing quality of service (QoS) on modern IP networks. DiffServ can, for example, be used to provide low-latency to critical network traffic such as voice or streaming media while providing best-effort service to non-critical services such as web traffic or file transfers. DiffServ uses a 6-bit differentiated services code point (DSCP) in the 8-bit differentiated services field (DS field) in the IP header for packet classification purposes. The DS field replaces the outdated IPv4 TOS field. Background Modern data networks carry many different types of services, including voice, video, streaming music, web pages and email. Many of the proposed QoS mechanisms that allowed these services to co-exist were both complex and failed to scale to meet the demands of the public Internet. In December 1998, the IETF replaced the TOS and IP precedence fields in the IPv4 header with the DS field. In the IPv6 header the DS field is part of the Traffic Class field where it occupies the 6 most significant bits. In the DS field, a range of eight values (class selectors) is used for backward compatibility with the former IPv4 IP precedence field. Today, DiffServ has largely supplanted TOS and other layer-3 QoS mechanisms, such as integrated services (IntServ), as the primary architecture routers use to provide QoS. Traffic management mechanisms DiffServ is a coarse-grained, class-based mechanism for traffic management. In contrast, IntServ is a fine-grained, flow-based mechanism. DiffServ relies on a mechanism to classify and mark packets as belonging to a specific class. DiffServ-aware routers implement per-hop behaviors (PHBs), which define the packet-forwarding properties associated with a class of traffic. Different PHBs may be defined to offer, for example, low-loss or low-latency service. Rather than differentiating network traffic based on the requirements of an individual flow, DiffServ operates on the principle of traffic classification, placing each data packet into one of a limited number of traffic classes. Each router on the network is then configured to differentiate traffic based on its class. Each traffic class can be managed differently, ensuring preferential treatment for higher-priority traffic on the network. The premise of Diffserv is that complicated functions such as packet classification and policing can be carried out at the edge of the network by edge routers. Since no classification and policing is required in the core routers, functionality there can then be kept simple. Core routers simply apply PHB treatment to packets based on their markings. PHB treatment is achieved by core routers using a combination of scheduling policy and queue management policy. A group of routers that implement common, administratively defined DiffServ policies are referred to as a DiffServ domain. Whi
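As a rough illustration of the DS field layout described above, the following sketch reads and writes the 6-bit DSCP inside the 8-bit DS byte. The helper names are illustrative, and the Expedited Forwarding code point value 46 is standard background knowledge rather than something stated in this excerpt.

```python
# The DS field is one byte; the DSCP occupies its 6 most significant bits and
# the remaining 2 bits carry ECN. Helper names are illustrative.
def get_dscp(ds_byte: int) -> int:
    return (ds_byte >> 2) & 0x3F                      # upper 6 bits

def set_dscp(ds_byte: int, dscp: int) -> int:
    return ((dscp & 0x3F) << 2) | (ds_byte & 0x03)    # preserve the 2 ECN bits

EF = 46                                 # Expedited Forwarding code point
ds = set_dscp(0x00, EF)
print(hex(ds), get_dscp(ds))            # 0xb8 46
```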
https://en.wikipedia.org/wiki/An%20Wang
An Wang (; February 7, 1920 – March 24, 1990) was a Chinese–American computer engineer and inventor, and cofounder of computer company Wang Laboratories, which was known primarily for its dedicated word processing machines. An Wang was an important contributor to the development of magnetic-core memory. Early life and career A native of Kunshan County in Suzhou (Soochow) Prefecture, he was born in Shanghai, China. His father taught English at an elementary school outside Shanghai, while his mother Zen Wan (Chien) Wang was a homemaker. He graduated from Shanghai Jiao Tong University with a degree in electrical engineering in 1940. He immigrated to the United States in June 1945 to attend Harvard University for graduate school, earning a PhD in applied physics in 1948. After graduation, he worked at Harvard with Howard Aiken on the design of the Mark IV, Aiken's first fully electronic computer. Wang coinvented the pulse transfer controlling device with Way-Dong Woo, a schoolmate from China who fell ill before their patent was issued. The new device implemented write-after-read which made magnetic core memory possible. Harvard reduced its commitment to computer research in 1951, prompting Wang to start his own engineering business. Wang Laboratories Wang founded Wang Laboratories in June 1951 as a sole proprietorship. The first years were lean and Wang raised working capital by selling one third of the company to a machine tools manufacturer Warner & Swasey Company. In 1955, when the core memory patent was issued, Wang sold it to IBM for and incorporated Wang Laboratories with Ge-Yao Chu, a schoolmate. The company grew slowly and in 1964 sales reached . Wang began making desktop electronic calculators with digital displays, including a centralised calculator with remote terminals for group use. By 1970, the company had sales of and 1,400 employees. They began manufacturing word processors in 1974, copying the already popular Xerox Redactron word processor, a single-user, cassette-based product. In 1976, they started marketing a multi-user, display-based product, based on the Zilog Z80 processor. Typical installations had a master unit (supplying disk storage) connected to intelligent diskless slaves which the operators used. Connections were via dual coax using differential signaling in an 11-bit asynchronous ASCII format clocked at 4.275 MHz. This product became the market leader in the word processing industry. In addition to calculators and word processors, Wang's company diversified into minicomputers in the early 1970s. The Wang 2200 was one of the first desktop computers with a large CRT display and ran a fast hardwired BASIC interpreter. The Wang VS system was a multiuser minicomputer whose instruction set was very close to the design of IBM's System/370. It was not binary-compatible because register usage conventions and system call interfaces were different. The Wang VS serial terminals could be used in data processing mode and wo
https://en.wikipedia.org/wiki/Competitive%20analysis
Competitive analysis may refer to: Competitor analysis Competitive analysis (online algorithm)
https://en.wikipedia.org/wiki/Wang%20Laboratories
Wang Laboratories was a US computer company founded in 1951 by An Wang and G. Y. Chu. The company was successively headquartered in Cambridge, Massachusetts (1954–1963), Tewksbury, Massachusetts (1963–1976), and finally in Lowell, Massachusetts (1976–1997). At its peak in the 1980s, Wang Laboratories had annual revenues of US$3 billion and employed over 33,000 people. It was one of the leading companies during the time of the Massachusetts Miracle. The company was directed by An Wang, who was described as an "indispensable leader" and played a personal role in setting business and product strategy until his death in 1990. The company went through transitions between different product lines, beginning with typesetters, calculators, and word processors, then adding computers, copiers, and laser printers. Wang Laboratories filed for bankruptcy protection in August 1992. After emerging from bankruptcy, the company changed its name to Wang Global. It was acquired by Getronics of the Netherlands in 1999, becoming Getronics North America, then was sold to KPN in 2007 and CompuCom in 2008. Public stock listing Wang went public on August 26, 1967, with the issuance of 240,000 shares at $12.50 per share on the American Stock Exchange. The stock closed the day above $40, valuing the company's equity at approximately $77 million, of which An Wang and his family owned about 63%. An Wang took steps to ensure that the Wang family would retain control of the company even after going public. He created a second class of stock, class B, with higher dividends but only one-tenth the voting power of class C. The public mostly bought class B shares; the Wang family retained most of the class C shares. The letters B and C were used to ensure that brokerages would fill any Wang stock orders with class B shares unless class C was specifically requested. Wang stock had been listed on the New York Stock Exchange, but this maneuver was not quite acceptable under NYSE's rules, and Wang was forced to delist with NYSE and relist on the more liberal American Stock Exchange. After Wang's 1992 bankruptcy, holders of class B and C common stock were treated the same. Products Typesetters The company's first major project was the Linasec in 1964, an electronic special-purpose computer designed to justify paper tape for use on automated Linotype machines. It was developed under contract to phototypesetter manufacturer Compugraphic, which retained the manufacturing rights of the Linasec. The success of the machine led Compugraphic to decide to manufacture it themselves, causing Wang to lose out on a million dollars in revenue. Calculators The Wang LOCI-2 (Logarithmic Computing Instrument) desktop calculator (the earlier LOCI-1 in September 1964 was not a real product) was introduced in January 1965. Using factor combining, it was the first desktop calculator capable of computing logarithms, which was quite an achievement for a machine without any integrated circuits. The electr
https://en.wikipedia.org/wiki/Ludic
Ludic may refer to: Ludic language, a Finnic language in the Uralic language family Ludic fallacy, the misuse of games to model real-life situations Ludic interface, a type of computer interface that is inherently "playful" Ludology, the study of games; see Game studies (not to be confused with game theory)
https://en.wikipedia.org/wiki/Tracy%20Kidder
John Tracy Kidder (born November 12, 1945) is an American writer of nonfiction books. He received the Pulitzer Prize for his The Soul of a New Machine (1981), about the creation of a new computer at Data General Corporation. He has received praise and awards for other works, including his biography of Paul Farmer, a physician and anthropologist, titled Mountains Beyond Mountains (2003). Kidder is considered a literary journalist because of the strong story line and personal voice in his writing. He has cited as his writing influences John McPhee, A. J. Liebling, and George Orwell. In a 1984 interview he said, "McPhee has been my model. He's the most elegant of all the journalists writing today, I think." Kidder wrote in a 1994 essay, "In fiction, believability may have nothing to do with reality or even plausibility. It has everything to do with those things in nonfiction. I think that the nonfiction writer's fundamental job is to make what is true believable." Early life and education John Tracy Kidder was born November 12, 1945, in New York City. He graduated from Phillips Academy in 1963. He attended Harvard College, originally majoring in political science, but switching to English after taking a course in creative writing from Robert Fitzgerald. He received an AB degree from Harvard in 1967. Kidder served in the United States Army as a first lieutenant, Military Intelligence, Vietnam, from 1967 to 1969. After returning from Vietnam, he wrote for some time and was admitted to the Iowa Writers' Workshop. He received an MFA degree from the University of Iowa in 1974. Career Kidder wrote his first book, The Road to Yuba City: a Journey into the Juan Corona Murders, while at the University of Iowa. The Atlantic Monthly commissioned the work, and he continued writing as a freelancer for the magazine during the 1970s. The Road to Yuba City was a critical failure, and Kidder said in a 1995 interview that I can't say anything intelligent about that book, except that I learned never to write about a murder case. The whole experience was disgusting, so disgusting, in fact, that in 1981 I went to Doubleday and bought back the rights to the book. I don't want The Road to Yuba City to see the light of day again. Kidder has said that, unlike many other writers, he was not much influenced by his Vietnam experience: "Of course, whenever you're in an experience like Vietnam, it is bound to influence your work; it's inevitable, but I really don't think it greatly shaped me as a writer." His works for The Atlantic Monthly include several essays and short stories about the Vietnam War, including "The Death of Major Great" (1974), "Soldiers of Misfortune" (1978), and "In Quarantine" (1980). Writing in 1997, David Bennett rated these three pieces "among the finest reporting to come out of Vietnam." Kidder's second book, The Soul of a New Machine (1981), was much more successful than his first. His account of the complex community and environment of progr
https://en.wikipedia.org/wiki/Dr.%20Dobb%27s%20Journal
Dr. Dobb's Journal (DDJ) was a monthly magazine published in the United States by UBM Technology Group, part of UBM. It covered topics aimed at computer programmers. When launched in 1976, DDJ was the first regular periodical focused on microcomputer software, rather than hardware. In its last years of publication, it was distributed as a PDF monthly, although the principal delivery of Dr. Dobb's content was through the magazine's website. Publication ceased at the end of 2014, with the archived website continuing to be available online. History Origins Bob Albrecht edited an eccentric newspaper about computer games programmed in the BASIC computer language, with the same name as the tiny nonprofit educational corporation that he had founded, People's Computer Company (PCC). Dennis Allison was a longtime computer consultant on the San Francisco Peninsula and sometime instructor at Stanford University. The Dobbs title was based on a mashup of the first letters of their names: Dennis and Bob. First issues In the first three quarterly issues of the PCC newspaper published in 1975, Albrecht had published articles written by Allison, describing how to design and implement a stripped-down version of an interpreter for the BASIC language, with limited features to be easier to implement. He called it Tiny BASIC. At the end of the final part, Allison asked computer hobbyists who implemented it to send their implementations to PCC, and they would circulate copies of any implementations to anyone who sent a self-addressed stamped envelope. Allison said, Let us stand on each others' shoulders; not each others' toes. The journal was originally intended to be a three-issue xerographed publication. Titled dr. dobb's journal of Tiny BASIC Calisthenics & Orthodontia (with the subtitle Running Light Without Overbyte) it was created to distribute the implementations of Tiny BASIC. The original title was created by Eric Bakalinsky, who did occasional paste-up work for PCC. Dobb's was a contraction of Dennis and Bob. It was at a time when computer memory was very expensive, so compact coding was important. Microcomputer hobbyists needed to avoid using too many bytes of memory. After the first photocopies were mailed to those who had sent stamped addressed envelopes, PCC was flooded with requests that the publication become an ongoing periodical devoted to general microcomputer software. PCC agreed, and hired Jim Warren as its first editor. He immediately changed the title to Dr. Dobb's Journal of Computer Calisthenics & Orthodontia prior to publishing the first issue in January 1976. The title refers to "jumping through hoops" (calisthenics) and "pulling teeth" (orthodontia). Early years Jim Warren was DDJ's editor for about a year and a half. While he went on to make a splash with his series of West Coast Computer Faires, subsequent DDJ editors like Marlin Ouverson, Hank Harrison, Michael Swaine and Jonathan Erickson appear to have focused on the jour
https://en.wikipedia.org/wiki/Harvard%20Mark%20I
The Harvard Mark I, or IBM Automatic Sequence Controlled Calculator (ASCC), was one of the earliest general-purpose electromechanical computers used in the war effort during the last part of World War II. One of the first programs to run on the Mark I was initiated on 29 March 1944 by John von Neumann. At that time, von Neumann was working on the Manhattan Project, and needed to determine whether implosion was a viable choice to detonate the atomic bomb that would be used a year later. The Mark I also computed and printed mathematical tables, which had been the initial goal of British inventor Charles Babbage for his "analytical engine" in 1837. The Mark I was disassembled in 1959; part of it was given to IBM, part went to the Smithsonian Institution, and part entered the Harvard Collection of Historical Scientific Instruments. For decades, Harvard's portion was on display in the lobby of the Aiken Computation Lab. About 1997, it was moved to the Harvard Science Center. In 2021, it was moved again, to the lobby of Harvard's new Science and Engineering Complex in Allston, Massachusetts. Origins The original concept was presented to IBM by Howard Aiken in November 1937. After a feasibility study by IBM engineers, the company chairman Thomas Watson Sr. personally approved the project and its funding in February 1939. Howard Aiken had started to look for a company to design and build his calculator in early 1937. After two rejections, he was shown a demonstration set that Charles Babbage’s son had given to Harvard University 70 years earlier. This led him to study Babbage and to add references to the Analytical Engine to his proposal; the resulting machine "brought Babbage’s principles of the Analytical Engine almost to full realization, while adding important new features." The ASCC was developed and built by IBM at their Endicott plant and shipped to Harvard in February 1944. It began computations for the US Navy Bureau of Ships in May and was officially presented to the university on August 7, 1944. Although not the first working computer, the machine was the first to automate the execution of complex calculations, making it a significant step forward for computing. Design and construction The ASCC was built from switches, relays, rotating shafts, and clutches. It used 765,000 electromechanical components and hundreds of miles of wire, comprising a volume of – in length, in height, and deep. It weighed about . The basic calculating units had to be synchronized and powered mechanically, so they were operated by a drive shaft coupled to a electric motor, which served as the main power source and system clock. From the IBM Archives: The Automatic Sequence Controlled Calculator (Harvard Mark I) was the first operating machine that could execute long computations automatically. A project conceived by Harvard University’s Dr. Howard Aiken, the Mark I was built by IBM engineers in Endicott, N.Y. A steel frame 51 feet long and 8 feet hig
https://en.wikipedia.org/wiki/Non-volatile%20random-access%20memory
Non-volatile random-access memory (NVRAM) is random-access memory that retains data without applied power. This is in contrast to dynamic random-access memory (DRAM) and static random-access memory (SRAM), which both maintain data only for as long as power is applied, or forms of sequential-access memory such as magnetic tape, which cannot be randomly accessed but which retains data indefinitely without electric power. Read-only memory devices can be used to store system firmware in embedded systems such as an automotive ignition system control or home appliance. They are also used to hold the initial processor instructions required to bootstrap a computer system. Read-write memory can be used to store calibration constants, passwords, or setup information, and may be integrated into a microcontroller. If the main memory of a computer system were non-volatile, it would greatly reduce the time required to start a system after a power interruption. Current existing types of semiconductor non-volatile memory have limitations in memory size, power consumption, or operating life that make them impractical for main memory. Development is going on for the use of non-volatile memory chips as a system's main memory, as persistent memory. A standard for persistent memory known as NVDIMM-P has been published in 2021. Early NVRAMs Early computers used core and drum memory systems which were non-volatile as a byproduct of their construction. The most common form of memory through the 1960s was magnetic-core memory, which stored data in the polarity of small magnets. Since the magnets held their state even with the power removed, core memory was also non-volatile. Other memory types required constant power to retain data, such as vacuum tube or solid-state flip-flops, Williams tubes, and semiconductor memory (static or dynamic RAM). Advances in semiconductor fabrication in the 1970s led to a new generation of solid state memories that magnetic-core memory could not match on cost or density. Today dynamic RAM forms the vast majority of a typical computer's main memory. Many systems require at least some non-volatile memory. Desktop computers require permanent storage of the instructions required to load the operating system. Embedded systems, such as an engine control computer for a car, must retain their instructions when power is removed. Many systems used a combination of RAM and some form of ROM for these roles. Custom ROM integrated circuits were one solution. The memory contents were stored as a pattern of the last mask used for manufacturing the integrated circuit, and so could not be modified once completed. PROM improved on this design, allowing the chip to be written electrically by the end-user. PROM consists of a series of diodes that are initially all set to a single value, "1" for instance. By applying higher power than normal, a selected diode can be "burned out" (like a fuse), thereby permanently setting that bit to "0". PROM facili
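The PROM behaviour described above — every bit starts as "1" and individual bits can only ever be burned to "0" — can be modelled with a short sketch; the class and method names are invented for illustration.

```python
# Toy model of PROM programming: bits start as 1 and can only be irreversibly
# cleared to 0 (a "blown fuse"). Names are illustrative.
class ToyPROM:
    def __init__(self, n_bits: int):
        self.bits = [1] * n_bits          # blank PROM: all ones

    def burn(self, index: int) -> None:
        self.bits[index] = 0              # irreversibly clear one bit

    def write_byte(self, value: int) -> None:
        for i in range(8):
            if not (value >> i) & 1:      # only 1 -> 0 transitions are possible
                self.burn(i)

prom = ToyPROM(8)
prom.write_byte(0b10110101)
print(prom.bits)                          # [1, 0, 1, 0, 1, 1, 0, 1]  (LSB first)
```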
https://en.wikipedia.org/wiki/GiFT
giFT Internet File Transfer (giFT) is a computer software daemon that allows several file sharing protocols to be used with a simple client having a graphical user interface (GUI). The client dynamically loads plugins implementing the protocols, as they are required. General Clients implementing frontends for the giFT daemon communicate with its process using a lightweight network protocol. This allows the networking protocol code to be completely abstracted from the user interface. The giFT daemon is written using relatively cross-platform C code, which means that it can be compiled for and executed on a big variety of operating systems. There are several giFT GUI front-ends for Microsoft Windows, Apple Macintosh, and Unix-like operating systems. The name giFT (giFT Internet File Transfer) is a so-called recursive acronym, which means that it refers to itself in the expression for which it stands. One of the biggest drawbacks of the giFT engine is that it currently lacks Unicode support, which prevents sharing files with Unicode characters in their file names (such as "ø","ä", "å", "é" etc.). Also, giFT lacks many features needed to use the gnutella network effectively. Available plugins Available protocols are: Stable OpenFT, giFT's own file sharing protocol gnutella (used by FrostWire, Shareaza) Turtle F2F Beta version FastTrack (used by Kazaa). The giFT plugin is giFT-FastTrack. Alpha version OpenNap eDonkey network Soulseek OpenFT protocol giFT's sibling project is OpenFT, a peer-to-peer file-sharing network protocol that has a structure in which nodes are divided into 'search' nodes and 'index' supernodes in addition to common nodes. Since both projects are related very closely, when one says 'OpenFT', one can mean either one of two different things: the OpenFT protocol, or the implementation in the form of a plugin for giFT. Although the name OpenFT stands for "Open FastTrack", the OpenFT protocol is an entirely new protocol design: only a few ideas in the OpenFT protocol are drawn from what little was known about the FastTrack protocol at the time OpenFT was designed. OpenFT file-sharing protocol Like FastTrack and Napster, OpenFT is a network where nodes submit lists of shared files to other nodes to keep track of which files are available on the network. This reduces the bandwidth consumed from search requests at the price of additional memory and processing power on the nodes that store that information. The transmission of shared lists is not fully recursive: a node will only transmit its list of shared files to a single search node randomly chosen as that node's "parent", and the list of those files will not be further transmitted to other nodes. OpenFT is also similar to the gnutella network in that search requests are recursively forwarded in between the nodes that keep track of the shared files. There are three different kinds of nodes on the OpenFT network: USER Most nodes are USER nodes; these have no special
https://en.wikipedia.org/wiki/Inductive%20bias
The inductive bias (also known as learning bias) of a learning algorithm is the set of assumptions that the learner uses to predict outputs of given inputs that it has not encountered. Inductive bias is anything which makes the algorithm learn one pattern instead of another pattern (e.g. step-functions in decision trees instead of continuous function in a linear regression model). In machine learning, one aims to construct algorithms that are able to learn to predict a certain target output. To achieve this, the learning algorithm is presented some training examples that demonstrate the intended relation of input and output values. Then the learner is supposed to approximate the correct output, even for examples that have not been shown during training. Without any additional assumptions, this problem cannot be solved since unseen situations might have an arbitrary output value. The kind of necessary assumptions about the nature of the target function are subsumed in the phrase inductive bias. A classical example of an inductive bias is Occam's razor, assuming that the simplest consistent hypothesis about the target function is actually the best. Here consistent means that the hypothesis of the learner yields correct outputs for all of the examples that have been given to the algorithm. Approaches to a more formal definition of inductive bias are based on mathematical logic. Here, the inductive bias is a logical formula that, together with the training data, logically entails the hypothesis generated by the learner. However, this strict formalism fails in many practical cases, where the inductive bias can only be given as a rough description (e.g. in the case of artificial neural networks), or not at all. Types The following is a list of common inductive biases in machine learning algorithms. Maximum conditional independence: if the hypothesis can be cast in a Bayesian framework, try to maximize conditional independence. This is the bias used in the Naive Bayes classifier. Minimum cross-validation error: when trying to choose among hypotheses, select the hypothesis with the lowest cross-validation error. Although cross-validation may seem to be free of bias, the "no free lunch" theorems show that cross-validation must be biased. Maximum margin: when drawing a boundary between two classes, attempt to maximize the width of the boundary. This is the bias used in support vector machines. The assumption is that distinct classes tend to be separated by wide boundaries. Minimum description length: when forming a hypothesis, attempt to minimize the length of the description of the hypothesis. Minimum features: unless there is good evidence that a feature is useful, it should be deleted. This is the assumption behind feature selection algorithms. Nearest neighbors: assume that most of the cases in a small neighborhood in feature space belong to the same class. Given a case for which the class is unknown, guess that it belongs to the same cla
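A hedged sketch of the core idea above: two learners trained on identical data generalize differently because of their inductive biases. A straight-line fit assumes a linear trend and extrapolates it, while a 1-nearest-neighbour predictor assumes unseen inputs behave like the closest seen one. The data and model choices are illustrative assumptions.

```python
# Two learners, same training data, different inductive biases, and therefore
# different predictions at an unseen input. Data is illustrative.
import numpy as np

x_train = np.array([0.0, 1.0, 2.0, 3.0])
y_train = np.array([0.1, 1.1, 1.9, 3.2])      # roughly y = x

# Bias 1: the relationship is linear in x (fit a straight line).
slope, intercept = np.polyfit(x_train, y_train, 1)
linear_pred = lambda x: slope * x + intercept

# Bias 2: nearby inputs have similar outputs (1-nearest neighbour).
def nn_pred(x):
    return y_train[np.argmin(np.abs(x_train - x))]

x_new = 10.0                                  # far outside the training range
print("linear bias:", round(float(linear_pred(x_new)), 2))   # extrapolates the trend
print("neighbour bias:", float(nn_pred(x_new)))              # returns the nearest seen output (3.2)
```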
https://en.wikipedia.org/wiki/Edwin%20Catmull
Edwin Earl Catmull (born March 31, 1945) is an American computer scientist who is the co-founder of Pixar and was the President of Walt Disney Animation Studios. He has been honored for his contributions to 3D computer graphics, including the 2019 ACM Turing Award. Early life Edwin Catmull was born on March 31, 1945, in Parkersburg, West Virginia. His family later moved to Salt Lake City, Utah, where his father first served as principal of Granite High School and then of Taylorsville High School. Early in his life, Catmull found inspiration in Disney movies, including Peter Pan and Pinocchio, and wanted to be an animator; however, after finishing high school, he had no idea how to get there as there were no animation schools around that time. Because he also liked math and physics, he chose a scientific career instead. He also made animation using flip-books. Catmull graduated in 1969, with a B.S. in physics and computer science from the University of Utah. Initially interested in designing programming languages, Catmull encountered Ivan Sutherland, who had designed the computer drawing program Sketchpad, and changed his interest to digital imaging. As a student of Sutherland, he was part of the university's DARPA program, sharing classes with James H. Clark, John Warnock and Alan Kay. From that point, his main goal and ambition were to make digitally realistic films. During his time at the university, he made two new fundamental computer-graphics discoveries: texture mapping and bicubic patches; and invented algorithms for spatial anti-aliasing and refining subdivision surfaces. Catmull says the idea for subdivision surfaces came from mathematical structures in his mind when he applied B-splines to non-four sided objects. He also independently discovered Z-buffering, which had been described eight months before by Wolfgang Straßer in his PhD thesis. In 1972, Catmull made his earliest contribution to the film industry: a one-minute animated version of his left hand, titled A Computer Animated Hand, created with Fred Parke at the University of Utah. This short sequence was eventually picked up by a Hollywood producer and incorporated in the 1976 film Futureworld, which was the first film to use 3D computer graphics and a science-fiction sequel to the 1973 film Westworld, itself being the first to use a pixelated image generated by a computer. A Computer Animated Hand was selected for preservation in the National Film Registry of the Library of Congress in December 2011. Career Early career In 1974, Catmull earned his doctorate in computer science, and was hired by a company called Applicon. By November of that year, he had been contacted by Alexander Schure, the founder of the New York Institute of Technology, who offered him the position as the director of the institute's new Computer Graphics Lab. In that position, in 1977, he invented Tween, software for 2D animation that automatically produced frames of motion in between two frames. Ho
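The Z-buffering idea credited to Catmull above can be sketched in a few lines: keep, per pixel, the nearest depth drawn so far, and overwrite the stored color only when a new fragment is closer. The array sizes and values below are illustrative.

```python
# Minimal Z-buffer sketch: per pixel, store the smallest depth seen so far and
# its color; nearer fragments overwrite farther ones. Values are illustrative.
import math

WIDTH, HEIGHT = 4, 3
depth = [[math.inf] * WIDTH for _ in range(HEIGHT)]   # "infinitely far" to start
color = [[None] * WIDTH for _ in range(HEIGHT)]

def plot(x, y, z, c):
    if z < depth[y][x]:        # closer than anything drawn at this pixel so far?
        depth[y][x] = z
        color[y][x] = c

plot(1, 1, 5.0, "blue")        # far fragment
plot(1, 1, 2.0, "red")         # nearer fragment wins
plot(1, 1, 9.0, "green")       # farther fragment is discarded
print(color[1][1])             # red
```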
https://en.wikipedia.org/wiki/Franklin%20C.%20Crow
Franklin C. (Frank) Crow is a computer scientist who has made important contributions to computer graphics, including some of the first practical spatial anti-aliasing techniques. Crow also proposed the shadow volume technique for generating geometrically accurate shadows. Education Crow studied electrical engineering at the University of Utah College of Engineering under Ivan Sutherland, a pioneer in computer graphics. Career Crow taught at the University of Texas, NYIT and Ohio State University and was involved with research at Xerox PARC, Apple Computer's Advanced Technology Group, and Interval Research. From 2001 to 2008, he worked for NVIDIA as a GPU architect designing rasterization algorithms. Publications "Parallel Computing for Graphics." Advances in Computer Graphics, 1990:113-140. "Parallelism in rendering algorithms." in Graphics Interface 88, June 6–10, 1988, Edmonton, Alberta, Canada. p. 87-96 "Advanced Image Synthesis - Anti-Aliasing." Advances in Computer Graphics, 1985:419-440. "Advanced Image Synthesis - Surfaces." Advances in Computer Graphics, 1985:457-467. "Computational Issues in Rendering Anti-Aliased Detail." COMPCON, 1982:238-244. "Toward more complicated computer imagery." Computers & Graphics, 5(2-4):61-69 (1980). "The Aliasing Problem in Computer-Generated Shaded Images." Commun. ACM, 20(11):799-805 (1977). "Shadow Algorithms for Computer Graphics", Computer Graphics (SIGGRAPH '77 Proceedings), vol. 11, no. 2, 242–248. See also University of Texas at Austin Texture mapping References Computer graphics professionals Living people University of Utah alumni Ohio State University faculty New York Institute of Technology faculty Nvidia people Scientists at PARC (company) Year of birth missing (living people)
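Crow's anti-aliasing work addressed the stair-stepping of computer-generated edges; the simplest illustration of the remedy is supersampling, shown in the hedged sketch below, where a pixel's value becomes the fraction of sample points covered by a shape rather than a single hard in/out decision. The edge equation and sampling grid are assumptions for the demonstration.

```python
# Supersampling sketch: a pixel crossed by an edge gets a gray value proportional
# to coverage instead of a hard black/white decision. Scene and grid are illustrative.
def inside(x, y):
    return y < 0.37 * x + 0.2            # an arbitrary straight edge dividing the plane

def pixel_value(px, py, n=4):
    hits = 0
    for i in range(n):
        for j in range(n):
            sx = px + (i + 0.5) / n      # n*n sample points inside the unit pixel
            sy = py + (j + 0.5) / n
            hits += inside(sx, sy)
    return hits / (n * n)                # fractional coverage in [0, 1]

print(pixel_value(0, 0))                 # 0.375: partially covered pixel rendered as gray
```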
https://en.wikipedia.org/wiki/Herbert%20Freeman
Dr. Herbert Freeman (born Herbert Freimann, December 13, 1925 – November 15, 2020) was an American computer scientist who made important contributions to the fields of automatic label placement, computer graphics (including spatial anti-aliasing), and machine vision. Personal life Herbert Freeman was born Herbert Freimann in Frankfurt, Germany on December 13, 1925. Freeman's parents, Leo and Johanna, and his brother, Henry, emigrated to the United States in 1936. Herbert was diagnosed with tuberculosis and was unable to join his family in the United States until 1938. He received his B.S.E.E. degree from Union College, NY, and his master's and Eng.Sc.D. degrees from Columbia University, NY. He married Joan Sleppin in 1955 and they had three children, Nancy, Susan, and Robert. Freeman died on November 15, 2020, at his home in New Jersey, USA. Career in Computer Science Freeman held professorial posts at RPI (Rensselaer Polytechnic Institute), NYU, and Rutgers University. He was the recipient of several awards, including the IEEE Computer Society's Computer Pioneer Award (1999), and was a Fellow of the ACM, a Life Fellow of the IEEE, and a Guggenheim Fellow. Professor Freeman also founded MapText, Inc., in 1997. See also Dr. Freeman's homepage at Rutgers University Dr. Freeman's White Paper on Automated Cartographic Text Placement Guide to the Herbert Freeman Family Collection, Leo Baeck Institute, New York, NY. Freeman's memoir Cobblestones. References Computer vision researchers Fellows of the Association for Computing Machinery Fellow Members of the IEEE 2020 deaths Polytechnic Institute of New York University faculty Fellows of the International Association for Pattern Recognition 1925 births
https://en.wikipedia.org/wiki/SDF
SDF may refer to: Computing File formats Simple Data Format, for binary data Spatial Data File, for geodatabases Standard Delay Format, for timing data SQL Server Compact Edition Database File (filename extension: .sdf) Structure data file, for chemical tables Scientific Data Format, a Hierarchical Data Format implementation Formal computer science Signed distance function (or field), in mathematical applications Syntax Definition Formalism, to describe formal languages Other uses in computing Software development folder, a physical or virtual container for software project artifacts Synchronous Data Flow, a restriction of Kahn process networks SDF Public Access Unix System, a shared shell provider Entertainment and media Südtirol Digital Fernsehen, a TV station in South Tyrol, Italy Super Dimensional Fortress, warships in the Robotech/Macross franchise Settlement Defense Front (SDF), an enemy faction of Call of Duty: Infinite Warfare Organizations Military forces Japan Self-Defense Forces State defense force, US Sudan Defence Force, 1925–1955 Syrian Democratic Forces, North and East Syria Political parties Sikkim Democratic Front, India Social Democratic Federation (United States), 1936–1956 Social Democratic Federation, UK, 1884–1911 Social Democratic Forum, Hong Kong, 2000 Social Democratic Front (disambiguation), Cameroon and Ghana Socialist Democratic Federation (Japan), 1978–1994 Other organizations Scouts de France, youth group Serb Democratic Forum, Serbs of Croatia Places Louisville International Airport, Kentucky, US (by IATA code) Soquel Demonstration State Forest, California, US Science Silver diammine fluoride, in dentistry Stromal cell-derived factor (disambiguation), SDF1, SDF1a, etc. Subwavelength-diameter optical fibre Synchronous diaphragmatic flutter or hiccup Other uses SDF Group (SAME Deutz-Fahr), an Italian-based agricultural machine manufacturer Simplified directional facility, an aviation instrument approach navigational aid Stochastic discount factor, in econometrics
https://en.wikipedia.org/wiki/Compiled%20language
A compiled language is a programming language whose implementations are typically compilers (translators that generate machine code from source code), and not interpreters (step-by-step executors of source code, where no pre-runtime translation takes place). The term is somewhat vague. In principle, any language can be implemented with a compiler or with an interpreter. A combination of both solutions is also common: a compiler can translate the source code into some intermediate form (often called p-code or bytecode), which is then passed to an interpreter which executes it. Advantages and disadvantages Programs compiled into native code at compile time tend to be faster than those translated at runtime due to the translation process's overhead. Newer technologies such as just-in-time compilation, and general improvements in the translation process are starting to narrow this gap, though. Mixed solutions using bytecode tend toward intermediate efficiency. Low-level programming languages are typically compiled, especially when efficiency is the main concern, rather than cross-platform support. For such languages, there are more one-to-one correspondences between the programmed code and the hardware operations performed by machine code, making it easier for programmers to control the use of central processing unit (CPU) and memory in fine detail. With some effort, it is always possible to write compilers even for traditionally interpreted languages. For example, Common Lisp can be compiled to Java bytecode (then interpreted by the Java virtual machine), C code (then compiled to native machine code), or directly to native code. Programming languages that support multiple compiling targets give developers more control to choose either execution speed or cross-platform compatibility or usage. Languages Some languages that are commonly considered to be compiled: Tools ANTLR Lex Flex GNU bison Yacc See also Compiler List of compiled languages Interpreter (computing) Scripting language References External links Programming language classification
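The compile-to-intermediate-form-then-interpret arrangement described above can be illustrated with a toy pipeline: an arithmetic expression is translated once into a postfix "bytecode", which a small stack machine then executes. The instruction format and function names are invented for illustration.

```python
# Toy compile-then-interpret pipeline for expressions like "2 + 3 * 4".
# The bytecode format and stack machine are invented for illustration only.
def compile_expr(tokens):
    """Shunting-yard style: translate infix tokens to a postfix 'bytecode' once."""
    prec = {"+": 1, "-": 1, "*": 2, "/": 2}
    output, ops = [], []
    for tok in tokens:
        if tok in prec:
            while ops and prec[ops[-1]] >= prec[tok]:
                output.append(ops.pop())
            ops.append(tok)
        else:
            output.append(("PUSH", float(tok)))
    while ops:
        output.append(ops.pop())
    return output

def run(bytecode):
    """The 'interpreter' half: execute the compiled form on a stack machine."""
    stack = []
    apply = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
             "*": lambda a, b: a * b, "/": lambda a, b: a / b}
    for instr in bytecode:
        if isinstance(instr, tuple):
            stack.append(instr[1])
        else:
            b, a = stack.pop(), stack.pop()
            stack.append(apply[instr](a, b))
    return stack[0]

code = compile_expr("2 + 3 * 4".split())
print(code)        # [('PUSH', 2.0), ('PUSH', 3.0), ('PUSH', 4.0), '*', '+']
print(run(code))   # 14.0
```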
https://en.wikipedia.org/wiki/SATA
SATA (Serial AT Attachment) is a computer bus interface that connects host bus adapters to mass storage devices such as hard disk drives, optical drives, and solid-state drives. Serial ATA succeeded the earlier Parallel ATA (PATA) standard to become the predominant interface for storage devices. Serial ATA industry compatibility specifications originate from the Serial ATA International Organization (SATA-IO) which are then released by the INCITS Technical Committee T13, AT Attachment (INCITS T13). History SATA was announced in 2000 in order to provide several advantages over the earlier PATA interface such as reduced cable size and cost (seven conductors instead of 40 or 80), native hot swapping, faster data transfer through higher signaling rates, and more efficient transfer through an (optional) I/O queuing protocol. Revision 1.0 of the specification was released in January 2003. Serial ATA industry compatibility specifications originate from the Serial ATA International Organization (SATA-IO). The SATA-IO group collaboratively creates, reviews, ratifies, and publishes the interoperability specifications, the test cases and plugfests. As with many other industry compatibility standards, the SATA content ownership is transferred to other industry bodies: primarily INCITS T13 and an INCITS T10 subcommittee (SCSI), a subgroup of T10 responsible for Serial Attached SCSI (SAS). The remainder of this article strives to use the SATA-IO terminology and specifications. Before SATA's introduction in 2000, PATA was simply known as ATA. The "AT Attachment" (ATA) name originated after the 1984 release of the IBM Personal Computer AT, more commonly known as the IBM AT. The IBM AT's controller interface became a de facto industry interface for the inclusion of hard disks. "AT" was IBM's abbreviation for "Advanced Technology"; thus, many companies and organizations indicate SATA is an abbreviation of "Serial Advanced Technology Attachment". However, the ATA specifications simply use the name "AT Attachment", to avoid possible trademark issues with IBM. SATA host adapters and devices communicate via a high-speed serial cable over two pairs of conductors. In contrast, parallel ATA (the redesignation for the legacy ATA specifications) uses a 16-bit wide data bus with many additional support and control signals, all operating at a much lower frequency. To ensure backward compatibility with legacy ATA software and applications, SATA uses the same basic ATA and ATAPI command sets as legacy ATA devices. The world's first SATA hard disk drive is the Seagate Barracuda SATA V, which was released in Jan 2003. SATA has replaced parallel ATA in consumer desktop and laptop computers; SATA's market share in the desktop PC market was 99% in 2008. PATA has mostly been replaced by SATA for any use; with PATA in declining use in industrial and embedded applications that use CompactFlash (CF) storage, which was designed around the legacy PATA standard. A 2008 standard, C
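For orientation on the higher signaling rates mentioned above, the short calculation below derives usable payload throughput from the line rate, assuming the 8b/10b line coding used by early SATA generations; the coding scheme and the 1.5/3/6 Gbit/s figures are general background knowledge, not taken from this excerpt.

```python
# Usable payload rate of early SATA generations, assuming 8b/10b line coding
# (10 transmitted bits carry 8 data bits). Figures are general background.
def usable_mb_per_s(line_rate_gbit: float) -> float:
    data_bits_per_s = line_rate_gbit * 1e9 * 8 / 10   # strip 8b/10b overhead
    return data_bits_per_s / 8 / 1e6                  # bits -> megabytes per second

for gen, rate in (("SATA 1.5 Gbit/s", 1.5), ("SATA 3 Gbit/s", 3.0), ("SATA 6 Gbit/s", 6.0)):
    print(gen, usable_mb_per_s(rate), "MB/s")         # 150.0, 300.0, 600.0
```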
https://en.wikipedia.org/wiki/SQL%20Slammer
SQL Slammer is a 2003 computer worm that caused a denial of service on some Internet hosts and dramatically slowed general Internet traffic. It also crashed routers around the world, causing even more slowdowns. It spread rapidly, infecting most of its 75,000 victims within 10 minutes. The program exploited a buffer overflow bug in Microsoft's SQL Server and Desktop Engine database products. Although the MS02-039 (CVE-2002-0649) patch had been released six months earlier, many organizations had not yet applied it. The most infected regions were Europe, North America, and Asia (including East Asia and southern Asia (India) etc. ). Technical details The worm was based on proof of concept code demonstrated at the Black Hat Briefings by David Litchfield, who had initially discovered the buffer overflow vulnerability that the worm exploited. It is a small piece of code that does little other than generate random IP addresses and send itself out to those addresses. If a selected address happens to belong to a host that is running an unpatched copy of Microsoft SQL Server Resolution Service listening on UDP port 1434, the host immediately becomes infected and begins spraying the Internet with more copies of the worm program. Home PCs are generally not vulnerable to this worm unless they have MSDE installed. The worm is so small that it does not contain code to write itself to disk, so it only stays in memory, and it is easy to remove. For example, Symantec provides a free of charge removal utility, or it can even be removed by restarting SQL Server (although the machine would likely be reinfected immediately). The worm was made possible by a software security vulnerability in SQL Server first reported by Microsoft on 24 July 2002. A patch had been available from Microsoft for six months prior to the worm's launch, but many installations had not been patched – including many at Microsoft. The worm began to be noticed early on 25 January 2003 as it slowed systems worldwide. The slowdown was caused by the collapse of numerous routers under the burden of extremely high bombardment traffic from infected servers. Normally, when traffic is too high for routers to handle, the routers are supposed to delay or temporarily stop network traffic. Instead, some routers crashed (became unusable), and the "neighbour" routers would notice that these routers had stopped and should not be contacted (aka "removed from the routing table"). Routers started sending notices to this effect to other routers they knew about. The flood of routing table update notices caused some additional routers to fail, compounding the problem. Eventually the crashed routers' maintainers restarted them, causing them to announce their status, leading to another wave of routing table updates. Soon a significant portion of Internet bandwidth was consumed by routers communicating with each other to update their routing tables, and ordinary data traffic slowed or in some cases stopped altogeth
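The random-address scanning described above is why most victims were reached within minutes. The sketch below is a purely illustrative epidemic model of that dynamic; it involves no networking code, and the population size, per-host probe rate, and time step are invented parameters rather than measured Slammer figures.

```python
# Illustrative random-scanning epidemic model (no real networking): each infected
# host probes uniformly random addresses, and probes that land on a still-vulnerable
# host infect it. All parameters are assumptions for the demonstration.
ADDRESS_SPACE = 2 ** 32          # IPv4-sized address space
VULNERABLE = 75_000              # roughly the number of Slammer victims
SCAN_RATE = 4_000                # assumed probes per infected host per second

infected = 1.0
t = 0
while infected < 0.9 * VULNERABLE:
    susceptible = VULNERABLE - infected
    # expected new infections this second = probes * P(a probe hits a susceptible host)
    infected += infected * SCAN_RATE * susceptible / ADDRESS_SPACE
    t += 1

print(f"90% of vulnerable hosts infected after ~{t} simulated seconds")   # a few minutes
```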
https://en.wikipedia.org/wiki/Computer%20addiction
Computer addiction is a form of behavioral addiction that can be described as the excessive or compulsive use of the computer, which persists despite serious negative consequences for personal, social, or occupational function. Another clear conceptualization is made by Block, who stated that "Conceptually, the diagnosis is a compulsive-impulsive spectrum disorder that involves online and/or offline computer usage and consists of at least three subtypes: excessive gaming, sexual preoccupations, and e-mail/text messaging". Computer addiction is not currently included in the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) as an official disorder. The concept of computer addiction is broadly divided into two types, namely offline computer addiction, and online computer addiction. Offline computer addiction is normally used when speaking about excessive gaming behavior, which can be practiced both offline and online. Online computer addiction, also known as Internet addiction, gets more attention in general from scientific research than offline computer addiction, mainly because most cases of computer addiction are related to the excessive use of the Internet. Experts on Internet addiction have described this syndrome as an individual working intensely on the Internet, prolonged use of the Internet, uncontrollable use of the Internet, unable to use the Internet in an efficient, timely matter, not being interested in the outside world, not spending time with people from the outside world, and an increase in their loneliness and dejection. Symptoms Being drawn by the computer as soon as one wakes up and before one goes to bed Replacing old hobbies with excessive use of the computer and using the computer as one's primary source of entertainment and procrastination Lacking physical exercise and/or outdoor exposure because of constant use of the computer, which could contribute to many health problems such as obesity Backache Headaches Weight gain or loss Disturbances in sleep Carpal tunnel syndrome Blurred or strained vision Depression and marital infidelity Effects Excessive computer use may result in, or occur with: Lack of face to face social interaction Computer vision syndrome Causes Kimberly Young indicates that previous research links internet/computer addiction with existing mental health issues, most notably depression. She states that computer addiction has significant effects socially, such as low self-esteem, psychologically and occupationally, which led many subjects to academic failure. According to a Korean study on internet/computer addiction, pathological use of the internet results in negative life impacts such as job loss, marriage breakdown, financial debt, and academic failure. 70% of internet users in Korea are reported to play online games, 18% of whom are diagnosed as game addicts, which relates to internet/computer addiction. The authors of the article conducted a study using Kimberly Young's questionnaire.
https://en.wikipedia.org/wiki/Yonhap%20News%20Agency
Yonhap News Agency () is a major South Korean news agency. It is based in Seoul, South Korea. Yonhap provides news articles, pictures and other information to newspapers, TV networks and other media in South Korea. History Yonhap (, ) was established on 19 December 1980, through the merger of Hapdong News Agency and Orient Press. The Hapdong News Agency itself emerged in late 1945 out of the short-lived Kukje News, which had operated for two months out of the office of Domei, the former Japanese news agency that had functioned in Korea during the Japanese colonial era. In 1999, Yonhap took over the Naewoe News Agency. Naewoe was a South Korean government-affiliated organization, created in the mid-1970s, tasked with publishing information and analysis on North Korea from a South Korean perspective through books and journals. Naewoe was known to have close links with South Korea's intelligence agency, and according to the British academic and historian James Hoare, Naewoe's publications became "less partisan after the late 1980s and are often useful source of information on North Korea". After the 1999 merger with Yonhap, materials on North Korea continued to be "distributed for free as part of the government's propaganda effort". According to the U.S. Library of Congress, "Originally a propaganda vehicle that followed the government line on unification policy issues, Naewoe Press became increasingly objective and moderate in tone in the mid-1980s in interpreting political, social, and economic developments in North Korea". Naewoe's principal publication was the monthly magazine Vantage Point: Developments in North Korea, which continued to be published by Yonhap until its discontinuation in 2016. Yonhap launched the television news channel YTN on 2 December 1996, but spun the channel off in 1998; it returned to television news in 2011 with its namesake channel Yonhap News TV. Activity Yonhap maintains various agreements with 90 non-Korean news agencies, and also has a services-exchange agreement with North Korea's Korean Central News Agency (KCNA), signed in 2002. It is the only Korean wire service that works with foreign news agencies, and provides a limited but freely available selection of news on its website in Korean, English, Chinese, Japanese, Spanish, Arabic, and French. Yonhap was the host news agency of the 1988 Seoul Summer Olympics and was elected twice to the board of the Organization of Asia-Pacific News Agencies (OANA). Yonhap is South Korea's only news agency large enough to have some 60 correspondents abroad and 600 reporters across the nation. Its largest shareholder is the Korea News Agency Commission (KONAC). In 2003, the South Korean government passed a law giving financial and systematic assistance to the agency, to reinforce staff and provide equipment. In the legislation, it was also given the role of "promoting the country's image" to an international audience. The head of the Yonhap agency is usually affiliated
https://en.wikipedia.org/wiki/BASIC%20Programming
BASIC Programming is an Atari Video Computer System (later called the Atari 2600) cartridge that teaches simple computer programming using a dialect of BASIC. Written by Warren Robinett and released by Atari, Inc. in 1979, this BASIC interpreter is one of the few non-game cartridges for the console. The Atari VCS's RAM size of 128 bytes severely restricts the programs that can be written. Details The BASIC Programming display is divided into six regions: Program is where instructions are typed. It has a maximum of eleven lines of code. Stack shows temporary results of what the program does. Variables stores the values of any variables that the program is using. Output displays any output values that the program creates. Status shows the amount of available memory remaining. Graphics contains two colored squares that can be manipulated by the program. Input is given through two Atari keypad controllers, which came with special overlays showing how to type the different commands and letters. Programs are restricted to 64 characters in size and normally nine lines of code, limiting the programs that can be written (users can disable all windows except Program and keep selecting "New Line" until eleven lines of code are present). Language features VCS BASIC supports the following keywords: Statements: Print Structure: Goto, If-Then-Else Graphics: Clear Functions: Hit, Key Math: + - × ÷ Mod Relational operators: < > = Unlike most BASIC implementations of the time: VCS BASIC uses ← instead of = for assignment; e.g., A←A+1. Statements can be strung together on a line without a delimiter; e.g., Note←A Print A. An If statement can be used as a function, returning a value. If statements can take an Else clause. Special variable names: Note sounds a musical note, assigned numbers from 0 to 7 Numbers assigned to Note are implicitly reduced modulo 8, so 8 becomes 0, 9 becomes 1, etc. Hor1, Hor2 - the horizontal coordinate of one of two squares Ver1, Ver2 - the vertical coordinate of one of two squares The language supports 26 unsigned integer variables, A to Z. VCS BASIC supports integers from 0 to 99. Math operations wrap, so 99+1 becomes 0, 99+2 becomes 1, etc. Sample code The following example of a Pong game is provided. See also List of Atari 2600 games Spectravideo CompuMate Family BASIC References External links BASIC Programming at Atari Mania 1979 software Atari 2600 BASIC interpreters BASIC programming language family Discontinued BASICs Video game development software
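The arithmetic just described — unsigned values from 0 to 99 that wrap around, and Note values reduced modulo 8 — can be modelled with a short Python sketch. This is only an illustration of the documented behaviour, not an emulation of the cartridge.

def wrap99(value):
    # VCS BASIC variables hold 0..99; arithmetic wraps past 99
    return value % 100

def note(value):
    # values assigned to Note are reduced modulo 8
    return value % 8

assert wrap99(99 + 1) == 0   # 99+1 becomes 0
assert wrap99(99 + 2) == 1   # 99+2 becomes 1
assert note(8) == 0          # 8 becomes 0
assert note(9) == 1          # 9 becomes 1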
https://en.wikipedia.org/wiki/Peek
Peek or PEEK may refer to: Computing Peek (data type operation), an operation on data types such as stacks and queues PEEK and POKE, low-level commands of the BASIC programming language Peek (mobile Internet device), an email-only mobile handheld device Peek, an ADABAS/NATURAL utility Peek (software), a Linux application to create GIF animations People with the surname Antwan Peek (born 1979), American football player Ben Peek (born 1976), Australian author Bertrand Meigh Peek (1891–1965), British astronomer Burton Peek (1872–1960), former president of Deere & Company Dan Peek (1950–2011), musician Frank William Peek (1881–1933), American electrical engineer and inventor Kim Peek (1951–2009), American savant Paul Peek (1937–2001), musician Paul Peek (politician) (1904–1987) Peek baronets Henry Peek (1825–1898), 1st Baronet, importer of spices and tea Cuthbert Peek (1855–1901), 2nd Baronet, astronomer and meteorologist Other uses Polyether ether ketone (PEEK), a thermoplastic polymer Peek, Oklahoma, a US ghost town Peek (crater), a small lunar impact crater in the northern part of the Mare Smythii near the eastern limb of the Moon See also Peak (disambiguation)
https://en.wikipedia.org/wiki/Code%20Red%20%28computer%20worm%29
Code Red was a computer worm observed on the Internet on July 15, 2001. It attacked computers running Microsoft's IIS web server. It was the first large-scale, mixed-threat attack to successfully target enterprise networks. The Code Red worm was first discovered and researched by eEye Digital Security employees Marc Maiffret and Ryan Permeh when it exploited a vulnerability discovered by Riley Hassell. They named it "Code Red" because they were drinking the Mountain Dew flavor of the same name at the time of discovery. Although the worm had been released on July 13, the largest group of infected computers was seen on July 19, 2001. On that day, the number of infected hosts reached 359,000. It spread worldwide, becoming particularly prevalent in North America, Europe and Asia (including China and India). Concept Exploited vulnerability The worm exploited a vulnerability in the indexing software distributed with IIS, described in Microsoft Security Bulletin MS01-033, for which a patch had become available a month earlier. The worm spread itself using a common type of vulnerability known as a buffer overflow. It did this by using a long string of the repeated letter 'N' to overflow a buffer, allowing the worm to execute arbitrary code and infect the machine with the worm. Kenneth D. Eichman was the first to discover how to block it, and was invited to the White House for his discovery. Worm payload The payload of the worm included: Defacing the affected web site to display: HELLO! Welcome to http://www.worm.com! Hacked By Chinese! Other activities based on the day of the month: Days 1–19: Trying to spread itself by looking for more IIS servers on the Internet. Days 20–27: Launching denial-of-service attacks on several fixed IP addresses. The IP address of the White House web server was among these. Days 28–end of month: Sleeping; no active attacks. When scanning for vulnerable machines, the worm did not test whether the server running on a remote machine was running a vulnerable version of IIS, or even whether it was running IIS at all. Apache access logs from this time frequently recorded the worm's characteristic request (a shortened, illustrative example is given below). The worm's payload is the string following the last 'N'. Due to a buffer overflow, a vulnerable host interpreted this string as computer instructions, propagating the worm. Similar worms On August 4, 2001, Code Red II appeared. Although it used the same injection vector, it had a completely different payload. It pseudo-randomly chose targets on the same or different subnets as the infected machines according to a fixed probability distribution, favoring targets on its own subnet more often than not. Additionally, it used the pattern of repeating 'X' characters instead of 'N' characters to overflow the buffer. eEye believed that the worm originated in Makati, Philippines, the same origin as the VBS/Loveletter (aka "ILOVEYOU") worm. See also Nimda Worm Timeline of computer viruses and worms References External links Code Red II
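For illustration, the request Code Red sent would appear in a web server's access log roughly as follows. This is a reconstructed, shortened example rather than a verbatim capture: the long run of 'N' characters is abbreviated and only part of the %u-encoded shellcode is shown.

GET /default.ida?NNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNN...%u9090%u6858%ucbd3%u7801%u9090%u9090%u8190%u00c3%u0003%u8b00%u531b%u53ff%u0078%u0000%u00=a HTTP/1.0

On a non-IIS server such as Apache, the request simply failed, but the log entry still records the overflow string and the payload that follows the final 'N'.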
https://en.wikipedia.org/wiki/Timeline%20of%20computer%20viruses%20and%20worms
This timeline of computer viruses and worms presents a chronological timeline of noteworthy computer viruses, computer worms, Trojan horses, similar malware, related research and events. 1960s John von Neumann's article on the "Theory of self-reproducing automata" is published in 1966. The article is based on lectures given by von Neumann at the University of Illinois about the "Theory and Organization of Complicated Automata" in 1949. 1970s 1970 The first story written about a computer virus is The Scarred Man by Gregory Benford. 1971 The Creeper system, an experimental self-replicating program, is written by Bob Thomas at BBN Technologies to test John von Neumann's theory. Creeper infected DEC PDP-10 computers running the TENEX operating system. Creeper gained access via the ARPANET and copied itself to the remote system, where the message "I'm the creeper, catch me if you can!" was displayed. The Reaper program was later created to delete Creeper. At the University of Illinois at Urbana-Champaign, a graduate student named Alan Davis (working for Prof. Donald Gillies) created a process on a PDP-11 that (a) checked to see if an identical copy of itself was currently running as an active process, and if not, created a copy of itself and started it running; (b) checked to see if any disk space (which all users shared) was available, and if so, created a file the size of that space; and (c) looped back to step (a). As a result, the process stole all available disk space. When users tried to save files, the operating system advised them that the disk was full and that they needed to delete some existing files. Of course, if they did delete a file, this process would immediately snatch up the available space. When users called in a system administrator (A. Ian Stocks) to fix the problem, he examined the active processes, discovered the offending process, and deleted it. Of course, before he left the room, the still-existing process would create another copy of itself, and the problem wouldn't go away. The only way to make the computer work again was to reboot. 1972 The science fiction novel When HARLIE Was One, by David Gerrold, contains one of the first fictional representations of a computer virus, as well as one of the first uses of the word "virus" to denote a program that infects a computer. 1973 In fiction, the 1973 Michael Crichton movie Westworld made an early mention of the concept of a computer virus as a central plot theme that causes androids to run amok. Alan Oppenheimer's character summarizes the problem by stating that "...there's a clear pattern here which suggests an analogy to an infectious disease process, spreading from one...area to the next." Other characters reply: "Perhaps there are superficial similarities to disease" and, "I must confess I find it difficult to believe in a disease of machinery." (Crichton's earlier work, the 1969 novel The Andromeda Strain and its 1971 film adaptation, concerned an extraterrestrial microorganism that threatens the human race.)
https://en.wikipedia.org/wiki/Backslash
The backslash \ is a typographical mark used mainly in computing and mathematics. It is the mirror image of the common slash /. It is a relatively recent mark, first documented in the 1930s. It is sometimes called a hack, whack, escape (from C/UNIX), reverse slash, slosh, downwhack, backslant, backwhack, bash, reverse slant, reverse solidus, and reversed virgule. History Efforts to identify either the origin of this character or its purpose before the 1960s have not been successful. The earliest known reference found to date is a 1937 maintenance manual from the Teletype Corporation, with a photograph showing the keyboard of its Kleinschmidt keyboard perforator WPE-3 using the Wheatstone system, where the symbol was called the "diagonal key". In June 1960, IBM published an "Extended character set standard" that includes the backslash at position 0x19. In September 1961, Bob Bemer (IBM) proposed to the X3.2 standards committee that the backslash, among other characters, be made part of the proposed standard, describing it as a "reverse division operator" and citing its prior use by Teletype in telecommunications. In particular, he said, the backslash was needed so that the ALGOL boolean operators ∧ (logical conjunction) and ∨ (logical disjunction) could be composed as /\ and \/ respectively. The committee adopted these changes into the draft American Standard (subsequently called ASCII) at its November 1961 meeting. These operators were used for min and max in early versions of the C programming language supplied with Unix V6 and V7. Usage Programming languages In many programming languages such as C, Perl, PHP, Python and Unix scripting languages, and in many file formats such as JSON, the backslash is used as an escape character, to indicate that the character following it should be treated specially (if it would otherwise be treated literally), or literally (if it would otherwise be treated specially). For instance, inside a C string literal the sequence \n produces a newline byte instead of an 'n', and the sequence \" produces an actual double quote rather than the special meaning of the double quote ending the string. An actual backslash is produced by a double backslash \\. Regular expression languages use it the same way, changing subsequent literal characters into metacharacters and vice versa. For instance, the pattern \||b searches for either '|' or 'b': the first bar is escaped and searched for literally, while the second is not escaped and acts as an "or". Outside quoted strings, the only common use of backslash is to ignore ("escape") a newline immediately after it. In this context it may be called a "continued line", as the current line continues into the next one. Some software replaces the backslash+newline with a space. To support computers that lacked the backslash character, the C trigraph ??/ was added, which is equivalent to a backslash. Since this can escape the next character, the primary modern use may be for code obfuscation. Support for trigraphs in C++ was removed in C++17.
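The escaping behaviour described above can be shown in a few lines. Python shares the C-style escapes mentioned in the text, so the following sketch uses it for all three cases: escapes inside string literals, line continuation outside a string, and a backslash in a regular expression.

import re

# Backslash escapes inside string literals
print("line one\nline two")     # \n becomes a newline character
print("a \"quoted\" word")      # \" is a literal double quote
print("C:\\temp\\file.txt")     # \\ is a single literal backslash

# Outside a string, a trailing backslash continues a statement onto the next line
total = 1 + 2 + \
        3
print(total)                    # prints 6

# In a regular expression, a backslash turns the metacharacter '|' into a literal one
print(re.findall(r"\||b", "a|b|c"))   # matches a literal '|' or the letter 'b'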
https://en.wikipedia.org/wiki/Null-move%20heuristic
In computer chess programs, the null-move heuristic is a heuristic technique used to enhance the speed of the alpha–beta pruning algorithm. Rationale Alpha–beta pruning speeds up the minimax algorithm by identifying cutoffs, points in the game tree where the current position is so good for the side to move that best play by the other side would have avoided it. Since such positions could not have resulted from best play, they and all branches of the game tree stemming from them can be ignored. The faster the program produces cutoffs, the faster the search runs. The null-move heuristic is designed to guess cutoffs with less effort than would otherwise be required, whilst retaining a reasonable level of accuracy. The null-move heuristic is based on the fact that most reasonable chess moves improve the position for the side that played them. So, if the player whose turn it is to move can forfeit the right to move (make a null move – an illegal action in chess) and still have a position strong enough to produce a cutoff, then the current position would almost certainly produce a cutoff if the current player actually moved. Implementation In employing the null-move heuristic, the program first forfeits the turn of the side whose turn it is to move, and then performs an alpha–beta search of the resulting position to a shallower depth than it would have used for the current position had it not applied the heuristic. If this shallow search produces a cutoff, it assumes the full-depth search without the forfeited turn would also have produced a cutoff. Because a shallow search is faster than a deeper one, the cutoff is found faster, accelerating the computer chess program. If the shallow search fails to produce a cutoff, then the program must make the full-depth search. This approach makes two assumptions. First, it assumes that the disadvantage of forfeiting one's turn is greater than the disadvantage of performing a shallower search. Provided the shallower search is not too much shallower (in practical implementations, the null-move search is usually 2 or 3 plies shallower than the full search would have been), this is usually true. Second, it assumes that the null-move search will produce a cutoff frequently enough to justify the time spent performing null-move searches instead of full searches. In practice, this is also usually true. Problems with the null-move heuristic There is a class of chess positions where employing the null-move heuristic can result in severe tactical blunders. In these zugzwang (German for "forced to move") positions, the player whose turn it is to move has only bad moves as their legal choices, and so would actually be better off if allowed to forfeit the right to move. In these positions, the null-move heuristic may produce a cutoff where a full search would not have found one, causing the program to assume the position is very good for a side when it may in fact be very bad for that side.
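The implementation described above can be outlined as follows. This is a simplified, generic sketch in Python rather than the code of any particular engine; the helper functions evaluate, in_check, make_null_move, generate_moves and make_move are assumed engine primitives, and the depth reduction R = 2 is an illustrative choice. The in_check guard also reflects the common practice of skipping the null move when zugzwang-like problems are most likely.

R = 2  # assumed depth reduction for the null-move search

def search(pos, depth, alpha, beta):
    # Negamax alpha-beta search with a null-move cutoff check (illustrative sketch).
    if depth <= 0:
        return evaluate(pos)                      # assumed static evaluation

    # Null move: hand the opponent a free move and search shallower with a null window.
    if depth > R and not in_check(pos):
        null_pos = make_null_move(pos)            # same position, other side to move
        score = -search(null_pos, depth - 1 - R, -beta, -beta + 1)
        if score >= beta:
            return beta                           # assume the full search would cut off too

    # Otherwise fall back to the normal full-width, full-depth search.
    for move in generate_moves(pos):              # assumed move generator
        score = -search(make_move(pos, move), depth - 1, -beta, -alpha)
        if score >= beta:
            return beta                           # ordinary beta cutoff
        alpha = max(alpha, score)
    return alpha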
https://en.wikipedia.org/wiki/Hartree
The hartree (symbol: Eh or Ha), also known as the Hartree energy, is the unit of energy in the Hartree atomic units system, named after the British physicist Douglas Hartree. Its CODATA recommended value is Eh = 4.3597447222071(85)×10−18 J, equivalent to about 27.211386 eV. The hartree energy is approximately the electric potential energy of the hydrogen atom in its ground state and, by the virial theorem, approximately twice its ionization energy; the relationships are not exact because of the finite mass of the nucleus of the hydrogen atom and relativistic corrections. The hartree is usually used as a unit of energy in atomic physics and computational chemistry; for experimental measurements at the atomic scale, the electronvolt (eV) or the reciprocal centimetre (cm−1) are much more widely used. Other relationships Eh = 2 Ry = 2 R∞hc = α2mec2 = e2/(4πε0a0) = ħ2/(mea02) ≈ 27.211 eV ≈ 2625.5 kJ/mol ≈ 627.5 kcal/mol ≈ 219474.6 cm−1 ≈ 6.5797×1015 Hz ≈ 3.158×105 K where: ħ is the reduced Planck constant, me is the electron rest mass, e is the elementary charge, a0 is the Bohr radius, ε0 is the electric constant, c is the speed of light in vacuum, and α is the fine-structure constant. Note that since the Bohr radius is defined by a0 = 4πε0ħ2/(mee2), the Hartree energy may also be written Eh = ħ2/(mea02); in Gaussian units, where 4πε0 = 1, this becomes Eh = e2/a0. Effective hartree units are used in semiconductor physics, where e2 is replaced by e2/ε (ε being the static dielectric constant) and the electron mass me is replaced by the effective band mass m*. The effective hartree in semiconductors becomes small enough to be measured in millielectronvolts (meV). References Units of energy Physical constants
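A quick way to check the equivalences above is to recompute the hartree directly from the constants listed in the definition. The following short Python sketch does this with CODATA-style constant values typed in by hand; a real calculation might instead pull them from a package such as scipy.constants.

# Recompute the hartree from fundamental constants: Eh = me * e**4 / (4 * eps0**2 * h**2)
me   = 9.1093837015e-31      # electron mass, kg
e    = 1.602176634e-19       # elementary charge, C
eps0 = 8.8541878128e-12      # electric constant, F/m
h    = 6.62607015e-34        # Planck constant, J s

E_h = me * e**4 / (4 * eps0**2 * h**2)   # hartree in joules
print(E_h)        # ~4.36e-18 J
print(E_h / e)    # ~27.211 eV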
https://en.wikipedia.org/wiki/Covert%20channel
In computer security, a covert channel is a type of attack that creates a capability to transfer information objects between processes that are not supposed to be allowed to communicate by the computer security policy. The term, coined in 1973 by Butler Lampson, refers to channels "not intended for information transfer at all, such as the service program's effect on system load," to distinguish them from legitimate channels that are subjected to access controls by COMPUSEC. Characteristics A covert channel is so called because it is hidden from the access control mechanisms of secure operating systems, since it does not use the legitimate data transfer mechanisms of the computer system (typically, read and write), and therefore cannot be detected or controlled by the security mechanisms that underlie secure operating systems. Covert channels are exceedingly hard to install in real systems, and can often be detected by monitoring system performance. In addition, they suffer from a low signal-to-noise ratio and low data rates (typically, on the order of a few bits per second). They can also be removed manually with a high degree of assurance from secure systems by well-established covert channel analysis strategies. Covert channels are distinct from, and often confused with, legitimate channel exploitations that attack low-assurance pseudo-secure systems using schemes such as steganography or even less sophisticated schemes to disguise prohibited objects inside legitimate information objects. The legitimate channel misuse by steganography is specifically not a form of covert channel. Covert channels can tunnel through secure operating systems and require special measures to control. Covert channel analysis is the only proven way to control covert channels. By contrast, secure operating systems can easily prevent misuse of legitimate channels, so distinguishing the two is important. Analysis of legitimate channels for hidden objects is often misrepresented as the only successful countermeasure for legitimate channel misuse. Because this amounts to analysis of large amounts of software, it was shown as early as 1972 to be impractical. Without being informed of this, some are misled to believe an analysis will "manage the risk" of these legitimate channels. TCSEC criteria The Trusted Computer System Evaluation Criteria (TCSEC) was a set of criteria, now deprecated, that had been established by the National Computer Security Center, an agency managed by the United States' National Security Agency. Lampson's definition of a covert channel was paraphrased in the TCSEC specifically to refer to ways of transferring information from a higher classification compartment to a lower classification. In a shared processing environment, it is difficult to completely insulate one process from the effects another process can have on the operating environment. A covert channel is created by a sender process that modulates some condition (such as system load or free disk space) that can be detected by a receiving process.
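The "sender modulates a condition, receiver observes it" pattern can be illustrated with a toy storage channel. In the Python sketch below, the sender encodes bits as the presence or absence of a temporary file and the receiver polls for it; the file name, bit period and overall scheme are arbitrary choices made for the example, and a real covert channel in a multilevel-secure system would be far more constrained and far slower.

import os
import time

FLAG = "/tmp/covert_flag"   # arbitrary shared resource used as the modulated condition
PERIOD = 0.5                # seconds per bit; arbitrary for the example

def send(bits):
    # Encode each bit as presence (1) or absence (0) of the flag file.
    for b in bits:
        if b:
            open(FLAG, "w").close()
        elif os.path.exists(FLAG):
            os.remove(FLAG)
        time.sleep(PERIOD)
    if os.path.exists(FLAG):
        os.remove(FLAG)

def receive(n_bits):
    # Sample the condition once per period and recover the bits.
    out = []
    for _ in range(n_bits):
        out.append(1 if os.path.exists(FLAG) else 0)
        time.sleep(PERIOD)
    return out

# In a demonstration, send() and receive() would run in two separate processes
# started at the same time; the low data rate (a few bits per second) mirrors the
# bandwidth limits noted in the text.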